Visual Studio Code Extensions for better programming

Visual Studio Code has become "The Editor" for many in software development, and it has plenty of extensions you can use to extend its functionality and customize it for your needs. Here’s a short list of the extensions I use for frontend (React, JavaScript, Node.js), backend (GraphQL, Python, Node.js, Java, PHP, Docker) and database (PostgreSQL, MongoDB) development.

General

EditorConfig
Attempts to override user/workspace settings with settings found in .editorconfig files.

Visual Studio IntelliCode
Provides AI-assisted development features for Python, TypeScript/JavaScript and Java developers in Visual Studio Code, with insights based on understanding your code context combined with machine learning.

GitLens
Visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and so much more.
Git Blame
See Git Blame information in the status bar for the currently selected line.

Local History
Plugin for maintaining local history of files.

Language and technology specific

ESLint
Integrates ESLint into VS Code.

Prettier
Opinionated code formatter which enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary.

Python
Linting, Debugging (multi-threaded, remote), Intellisense, Jupyter Notebooks, code formatting, refactoring, unit tests, and more.

PHP Intelephense
PHP code intelligence for Visual Studio Code is a high performance PHP language server packed full of essential features for productive PHP development.

Java Extension Pack
Popular extensions for Java development and more.

Docker
Makes it easy to build, manage, and deploy containerized applications from Visual Studio Code. It also provides one-click debugging of Node.js, Python, and .NET Core inside a container.

Markdown All in One
All you need to write Markdown (keyboard shortcuts, table of contents, auto preview and more).
Markdownlint
Includes a library of rules to encourage standards and consistency for Markdown files.
Markdown Preview Enhanced
Provides you with many useful functionalities such as automatic scroll sync, math typesetting, mermaid, PlantUML, pandoc, PDF export, code chunk, presentation writer, etc.

Prettify JSON
Prettify ugly JSON inside VSCode.

PlantUML
Rich PlantUML support for Visual Studio Code.

HashiCorp Terraform
Syntax highlighting and autocompletion for Terraform.

Database

PostgreSQL
Query tool for PostgreSQL databases. While there is a database explorer it is NOT meant for creating/dropping databases or tables. The explorer is a visual aid for helping to craft your queries.

MongoDB
Makes it easy to work with MongoDB.

GraphQL
Adds syntax highlighting, validation, and language features like go to definition, hover information and autocompletion for GraphQL projects. This extension also works with queries annotated with the gql tag.
GraphQL for VSCode
VSCode extension for GraphQL schema authoring & consumption.
Apollo GraphQL for VS Code
Rich editor support for GraphQL client and server development that seamlessly integrates with the Apollo platform.

JavaScript

Babel
JavaScript syntax highlighting for ES201x, React JSX, Flow and GraphQL.

Jest
Use Facebook's Jest with pleasure.

npm
Supports running npm scripts defined in the package.json file and validating the installed modules against the dependencies defined in the package.json.

User Interface specific

indent-rainbow
Simple extension to make indentation more readable.

Rainbow Brackets
Rainbow colors for the round brackets, the square brackets and the squiggly brackets.

vscode-icons
Icons for filetypes in file browser.

Other tips

VS Code: show full path in the title bar
With Code open, press Command+, to open Settings and set: "window.title": "${activeEditorLong}${separator}${rootName}"

Slow integrated terminal in macOS
codesign --remove-signature "/Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Renderer).app"

Running static analysis tools for PHP

We all write bug-free code, of course, but analyzing your code is still an important part of software development in case some typing mishap slipped through. Here's a short introduction to running static analysis for PHP code.

Static analysis tools for PHP

The curated list of static analysis tools for PHP shows many options for doing analysis. Too many, you say? Yes, but fortunately you can start with a few tools and add more as your specific needs arise.

You can run the different analysis tools by installing them with Composer, or you can use the Toolbox, which helps you discover and install the tools and can be used as a Docker container.

First, fetch the docker image with static analysis tools for PHP:

$ docker pull jakzal/phpqa:<your php version>
e.g.
$ docker pull jakzal/phpqa:php7.4-alpine

PHPMD: PHP Mess Detector

One of the tools provided in the image is PHPMD, which aims to be a PHP equivalent of the well-known Java tool PMD. PHPMD can be seen as a user-friendly and easy-to-configure frontend for the raw metrics measured by PHP Depend.

It looks for several potential problems within the source code, such as:

  • Possible bugs
  • Suboptimal code
  • Overcomplicated expressions
  • Unused parameters, methods, properties

You can install phpmd with Composer: composer require phpmd/phpmd. Then run it with e.g. ./vendor/bin/phpmd src html unusedcode --reportfile phpmd.html

Or run the command below, which runs phpmd in a Docker container and mounts the current working directory as /project.

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine \
    phpmd src html cleancode,codesize,controversial,design,naming,unusedcode --reportfile phpmd.html

You can also write custom rules to reduce false positives, e.g. phpmd.test.xml:

<?xml version="1.0"?>
<ruleset name="VV PHPMD rule set"
         xmlns="http://pmd.sf.net/ruleset/1.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sf.net/ruleset/1.0.0
                     http://pmd.sf.net/ruleset_xml_schema.xsd"
         xsi:noNamespaceSchemaLocation="
                     http://pmd.sf.net/ruleset_xml_schema.xsd">
    <description>
        Custom rule set that checks my code.
    </description>
<rule ref="rulesets/codesize.xml">
    <exclude name="CyclomaticComplexity"/>
    <exclude name="ExcessiveMethodLength"/>
    <exclude name="NPathComplexity"/>
    <exclude name="TooManyMethods"/>
    <exclude name="ExcessiveClassComplexity"/>
    <exclude name="ExcessivePublicCount"/>
    <exclude name="TooManyPublicMethods"/>
    <exclude name="TooManyFields"/>
</rule>
<rule ref="rulesets/codesize.xml/TooManyFields">
    <properties>
        <property name="maxfields" value="21"/>
    </properties>
</rule>
<rule ref="rulesets/cleancode.xml">
    <exclude name="StaticAccess"/>
    <exclude name="ElseExpression"/>
    <exclude name="MissingImport" />
</rule>
<rule ref="rulesets/controversial.xml">
    <exclude name="CamelCaseParameterName" />
    <exclude name="CamelCaseVariableName" />
    <exclude name="Superglobals" />
</rule>
<rule ref="rulesets/design.xml">
    <exclude name="CouplingBetweenObjects" />
    <exclude name="NumberOfChildren" />
</rule>
<rule ref="rulesets/design.xml/NumberOfChildren">
    <properties>
        <property name="minimum" value="20"/>
    </properties>
</rule>
<rule ref="rulesets/naming.xml">
    <exclude name="ShortVariable"/>
    <exclude name="LongVariable"/>
</rule>
<rule ref="rulesets/unusedcode.xml">
    <exclude name="UnusedFormalParameter"/>
</rule>
<rule ref="rulesets/codesize.xml/ExcessiveClassLength">
    <properties>
        <property name="minimum" value="1500"/>
    </properties>
</rule>
</ruleset>

Then run your analysis with:

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine phpmd src html phpmd.test.xml unusedcode --reportfile phpmd.html

You get a list of the found issues formatted as an HTML file.

PHPMD Report

PHPStan - PHP Static Analysis Tool

"PHPstan focuses on finding errors in your code without actually running it. It catches whole classes of bugs even before you write tests for the code. It moves PHP closer to compiled languages in the sense that the correctness of each line of the code can be checked before you run the actual line."

Installing with composer: composer require --dev phpstan/phpstan

Or run on Docker container:

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine phpstan analyse --level 1 src

By default you get a report in the console, formatted as a table with errors grouped by file and colorized, meant for human consumption.

PHPStan report

By default PHPStan performs only the most basic checks. You can pass a higher rule level through the --level option (0 is the loosest and 8 is the strictest) to analyse code more thoroughly. Start with 0 and increase the level as you go, fixing the issues found.
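
If you don't want to repeat the level and paths on the command line, you can put them in a configuration file. A minimal sketch of a phpstan.neon (the values are just an example):

parameters:
    level: 6
    paths:
        - src

Recent PHPStan versions pick up phpstan.neon from the project root automatically; otherwise pass it explicitly with the -c option.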

PHPStan found some issues which PHPMD didn't find, but the output of PHPStan could be better. There's a web UI for browsing the found errors, and you can click to open your editor of choice on the offending line, but you have to pay for it: PHPStan Pro costs 7 EUR per month for individuals, 70 EUR for teams.

VS Code extension for PHP

If you're using Visual Studio Code for PHP programming there are some extensions to help you.

PHP Intelephense
PHP code intelligence for Visual Studio Code provides better IntelliSense than the VS Code built-in one and also does some signature checking, etc. The extension also has a premium version with some additional features.

What software and hardware I use

There was a discussion in the Koodiklinikka Slack about what software people use and how people have made "/uses" pages for that purpose. Inspired by Wes Bos' /uses from the "Syntax" podcast, here's my list.

Check my /uses page to see what software and hardware I use for full-stack development in JavaScript, Node.js, Java, Kotlin, GraphQL, PostgreSQL and more. The list excludes the tools from different customers like using GitLab, Rocket.Chat, etc.

For more choices check uses.tech.

Generating JWT and JWK for information exchange between services

Securely transmitting information between services and authorization can be achieved by using JSON Web Tokens. JWTs are an open, industry standard RFC 7519 method for representing claims securely between two parties. Here's a short explanation of what they are, how they're used, and a guide for generating the needed keys.

"JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA."

jwt.io

You should read the introduction to JWT to understand its role, and there's also a handy JWT Debugger for testing things. For more detailed info you can read the JWT Handbook.

In short, authorization and information exchange are some scenarios where JSON Web Tokens are useful. They essentially encode any sets of identity claims into a payload, provide some header data about how it is to be signed, then calculate a signature using one of several algorithms and append that signature to the header and claims. JWTs can also be encrypted to provide secrecy between parties. When a server receives a JWT, it can guarantee the data it contains can be trusted because it's signed by the source.

Usually two algorithms are supported for signing JSON Web Tokens: RS256 and HS256. RS256 generates an asymmetric signature, which means a private key must be used to sign the JWT and a different public key must be used to verify the signature.
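
As a concrete example, here's a minimal Node.js sketch of RS256 signing and verification using the jsonwebtoken npm package (the key file names match the OpenSSL examples later in this post; the payload is made up):

// npm install jsonwebtoken
const fs = require('fs')
const jwt = require('jsonwebtoken')

const privateKey = fs.readFileSync('private.pem')
const publicKey = fs.readFileSync('public.pem')

// The sender signs the token with its private key
const token = jwt.sign({ sub: 'user-123', scope: 'read' }, privateKey, {
  algorithm: 'RS256',
  expiresIn: '1h'
})

// The receiving service verifies the signature with the corresponding public key
const payload = jwt.verify(token, publicKey, { algorithms: ['RS256'] })
console.log(payload.sub) // 'user-123'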

JSON Web Key

JSON Web Key (JWK) provides a mechanism for distributing the public keys that can be used to verify JWTs. The specification is used to represent the cryptographic keys used for signing RS256 tokens, and it defines two high-level data structures: JSON Web Key (JWK) and JSON Web Key Set (JWKS):

  • JSON Web Key (JWK): A JSON object that represents a cryptographic key. The members of the object represent properties of the key, including its value.
  • JSON Web Key Set (JWKS): A JSON object that represents a set of JWKs. The JSON object MUST have a keys member, which is an array of JWKs. The JWKS is a set of keys containing the public keys that should be used to verify any JWT.

In short, the service signs JWTs with its private key (in this case in PKCS12 format) and the receiving service checks the signature with the public key, which is in JWK format.

Generating keys and certificate for JWT

In this example we are using JWTs for information exchange, as they are a good way of securely transmitting information between parties. Because JWTs can be signed, for example using public/private key pairs, you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Generate the keys and certificate for JWT signing with OpenSSL; in this case self-signed is enough. First, generate the private key:

$ openssl genrsa -out private.pem 4096

Generate the public key from the earlier generated private key in case pem-jwk needs it; it isn't needed otherwise:

$ openssl rsa -in private.pem -out public.pem -pubout

If you try to put the private and public keys into PKCS12 format without a certificate, you get an error:

openssl pkcs12 -export -inkey private.pem -in public.pem -out keys.p12
unable to load certificates

Generate a self-signed certificate with the aforesaid key, valid for 10 years. The certificate itself isn't used for anything, as the counterpart is a JWK with just the public key, no certificate:

$ openssl req -key private.pem -new -x509 -days 3650 -subj "/C=FI/ST=Helsinki/O=Rule of Tech/OU=Information unit/CN=ruleoftech.com" -out cert.pem

Convert the above private key and certificate to PKCS12 format

$ openssl pkcs12 -export -inkey private.pem -in cert.pem -out keys.pfx -name "my alias"

Check the keystore:

$ keytool -list -keystore keys.pfx
OR
$ keytool -v -list -keystore keys.pfx -storetype PKCS12 -storepass
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
1, Jan 18, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA-256): 0D:61:30:12:CB:0E:71:C0:F1:A0:77:EB:62:2F:91:9B:55:08:FC:3B:A5:C8:B4:C7:B4:CD:08:E9:2C:FD:2D:8A

If you didn't set an alias for the key when creating the PKCS12 file, you can change it:

keytool -changealias -alias "original alias" -destalias "my awesome alias" -keystore keys.pfx -storetype PKCS12 -storepass "password"

Now we finally get to the part where we generate the JWK. The final result is a JSON file which contains the public key from the earlier created certificate in JWK format, so that the receiving service can accept the signed tokens.

The JWK is in the following format:

" 
{
"keys": [
….,
{
"kid": "something",
"kty": "RSA",
"use": "sig",
"n": "…base64 public key values …",
"e": "…base64 public key values …"
}
]
}
"

Convert the PEM to JWK format with e.g. pem-jwk or with pem_to_jwks.py. The key is in PKCS12 format. The public key's n and e values are extracted from the private key with the following commands; the jq part extracts the public parts and excludes the private parts.

$ npm install -g pem-jwk
$ ssh-keygen -e -m pkcs8 -f private.pem | pem-jwk | jq '{kid: "something", kty: .kty , use: "sig", n: .n , e: .e }'

...

To check things, you can do the following.

Extract a private key and certificates from a PKCS12 file using OpenSSL:

$ openssl pkcs12 -in keys.p12 -out keys_out.txt

The private key, certificate, and any chain files will be parsed and dumped into the "keys_out.txt" file. The private key will still be encrypted.

To extract just the private key from p12 (key is still encrypted):

$ openssl pkcs12 -in keys.p12 -nocerts -out privatekey.pem

Decrypt the private key:

$ openssl rsa -in privatekey.pem -out privatekey_uenc.pem

Now if you convert the PEM to JWK you should get the same values as before.

More to read: JWTs? JWKs? ‘kid’s? ‘x5t’s? Oh my!

Keep Maven dependencies up to date

Software development projects usually come with lots of dependencies, and keeping them up to date can be burdensome if done manually. Fortunately there are tools to help you. For Node.js projects there are e.g. npm-check and npm-check-updates, and for Maven projects there are the OWASP/Dependency-Check and Versions Maven plugins. Here's a short introduction to setting up your Maven project to automatically check dependencies for vulnerabilities and for outdated dependencies.

OWASP/Dependency-Check

OWASP dependency-check is an open source solution to the OWASP Top 10 2013 entry: "A9 - Using Components with Known Vulnerabilities".

Dependency-check can currently be used to scan Java and .NET applications to identify the use of known vulnerable components. The dependency-check plugin is, by default, tied to the verify or site phase depending on if it is configured as a build or reporting plugin.

The example below can be executed using mvn verify:

<project>
     ...
     <build>
         ...
         <plugins>
             ...
<plugin> 
    <groupId>org.owasp</groupId> 
    <artifactId>dependency-check-maven</artifactId> 
    <version>5.0.0-M3</version> 
    <configuration>
        <failBuildOnCVSS>8</failBuildOnCVSS>
        <skipProvidedScope>true</skipProvidedScope> 
        <skipRuntimeScope>true</skipRuntimeScope> 
    </configuration> 
    <executions> 
        <execution> 
            <goals> 
                <goal>check</goal> 
            </goals> 
        </execution> 
    </executions> 
</plugin>
            ...
         </plugins>
         ...
     </build>
     ...
</project>

The example fails the build for CVSS greater than or equal to 8 and skips scanning the provided and runtime scoped dependencies.

Versions Maven Plugin

The Versions Maven Plugin is the de facto standard way to manage versions of artifacts in a project's POM. From high-level comparisons between remote repositories up to low-level timestamp-locking for SNAPSHOT versions, its massive list of goals allows us to take care of every aspect of our projects involving dependencies.

The example configuration of versions-maven-plugin:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>versions-maven-plugin</artifactId>
    <version>2.7</version>
    <configuration>
        <allowAnyUpdates>false</allowAnyUpdates>
        <allowMajorUpdates>false</allowMajorUpdates>
        <allowMinorUpdates>false</allowMinorUpdates>
        <processDependencyManagement>false</processDependencyManagement>
    </configuration>
</plugin>

You could use goals that modify the pom.xml as described in the usage documentation, but often it's easier to check versions manually, as you might not be able to update all of the suggested dependencies.

The display-dependency-updates goal will check all the dependencies used in your project and display a list of those dependencies with newer versions available.

Check new dependencies with:

mvn versions:display-dependency-updates

Check new plugin versions with:

mvn versions:display-plugin-updates

Summary

Using OWASP/Dependency-Check in your Continuous Integration build flow to automatically check dependencies for vulnerabilities, and periodically running the Versions Maven Plugin to check for outdated dependencies, helps you keep your project up to date and secure. Small but important things to remember while developing and maintaining a software project.

Tracking vulnerabilities and keeping Node.js packages up to date

Software evolves quickly and new versions of libraries are released constantly, but how do you keep track of updated dependencies and vulnerable libraries? Managing dependencies has always been somewhat of a pain point, but it's an important part of software development: it's better to track vulnerabilities and run fresh packages than to get pwned.

There are a couple of tools for JavaScript projects which use npm to manage dependencies: some check for new versions and some track vulnerabilities. Here's a short introduction to npm audit, depcheck, npm-check-updates and npm-check to help you on your way.

If your project is using yarn, adjust your workflow accordingly. There are for example yarn audit and yarn-check to match the npm tools. And it goes without saying: don't use npm if your project uses yarn.

Running security audit with npm audit

From version 6 onwards npm comes with a built-in audit command, which checks for vulnerabilities in your dependencies and runs automatically when you install a package with npm install. You can also run npm audit manually on your locally installed packages to conduct a security audit of the package and produce a report of dependency vulnerabilities and suggested patches.

The npm audit command submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities. It checks direct dependencies, devDependencies, bundledDependencies, and optionalDependencies, but does not check peerDependencies.

If your npm registry doesn't support npm audit, like Artifactory, you can pass in the --registry flag to point to public npm. The downside is that now you can't audit private packages that are on the Artifactory registry.

$ npm audit --registry=https://registry.npmjs.org

"Running npm audit will produce a report of security vulnerabilities with the affected package name, vulnerability severity and description, path, and other information, and, if available, commands to apply patches to resolve vulnerabilities."

Example: partial output of npm audit run

Using npm audit is useful also in Continuous Integration as it will return a non-zero response code if security vulnerabilities are found.
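
For example, a minimal GitLab CI job sketch (the job name and the --audit-level threshold are assumptions; the flag requires a reasonably recent npm):

audit_dependencies:
  stage: test
  script:
    - npm ci
    - npm audit --audit-level=high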

For more information read npm's Auditing dependencies for security vulnerabilities.

Updating packages with npm outdated

It's recommended to regularly update the local packages your project depends on to improve your code as improvements to its dependencies are made. In your project root directory, run the update command and then outdated. There should not be any output.

$ npm update
$ npm outdated 
Example of results from npm outdated

You can also update globally-installed packages. To see which global packages need to be updated run outdated first with --depth=0.

$ npm outdated -g --depth=0
$ npm update -g

For more information read updating packages downloaded from the registry.

Check updates with npm-check-updates

Your package.json defines dependencies with a semantic versioning policy, and to find newer versions of package dependencies than what your package.json allows you need tools like npm-check-updates. It can upgrade your package.json dependencies to the latest versions, ignoring specified versions while maintaining your existing semantic versioning policies.

Install npm-check-updates globally with:

$ npm install -g npm-check-updates 

And run it with:

$ ncu

The result shows any newer versions available for the dependencies of the project in the current directory. See the documentation for i.a. configuration files for filtering and excluding dependencies.

Example of results from ncu

And finally you can run ncu -u to upgrade the package.json.

Check updates with npm-check

Similar tool to npm-check-updates is npm-check which additionally gives more information about the version changes available and also lets you interactively pick which packages to update instead of an all or nothing approach. It checks for outdated, incorrect, and unused dependencies.

Install npm-check globally with:

$ npm i -g npm-check

Now you can run the command inside your project directory:

$ npm-check
Or
$ npm-check --registry=https://registry.npmjs.org

It will display all possible updates with information about the type of update, project URL, commands, and will attempt to check if the package is still in use. You can easily parse through the results and see what packages might be safe to update. When updates are required it will return a non-zero response code that you can use in your CI tools.

The check for unused dependencies uses depcheck, which isn't able to foresee all the ways dependencies can be used, so be wary of carelessly removing packages.

To see an interactive UI for choosing which modules to update run:

$ npm-check -u

Analyze dependencies with depcheck

Your package.json is filled with dependencies, and some of them might be useless or some might even be missing from package.json. Depcheck is a tool for analyzing the dependencies in a project to see how each dependency is used, which dependencies are useless, and which dependencies are missing. It not only recognizes the dependencies in JavaScript files, but also supports i.a. React JSX and TypeScript.

Install depcheck with:

$ npm install -g depcheck
And with additional syntax support for TypeScript:
$ npm install -g depcheck typescript

Run depcheck with:

$ depcheck [directory]
Example of results from depcheck

Summary

tl;dr;

  1. Use npm audit in your CI pipeline
  2. Update dependencies with npm update and check them with npm outdated
  3. Check new versions of dependencies with either npm-check-updates or npm-check
  4. Analyze dependencies with depcheck

Notes of Best Practices for writing Cypress tests

Cypress is a nice tool for end-to-end tests, and it also has good documentation for Best Practices, including the "Cypress Best Practices" talk by Brian Mann at Assert(JS) 2018. Here are my notes from the talk combined with the Cypress documentation. This article assumes you know and have Cypress running.

In short:

  • Set state programmatically, don't use the UI to build up state.
  • Write specs in isolation, avoid coupling.
  • Don't limit yourself trying to act like a user.
  • Tests should always be able to be run independently and still pass.
  • Only test what you control.
  • Use data-* attributes to provide context to your selectors.
  • Clean up state before tests run (not after).

Organizing tests

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

"Writing and Organizing tests" documentation just tells you the basics how you should organize your tests. You should organize tests by pages and by components as you should test components individually if possible. So the folder structure for tests might look like.

├ articles
├── article_details_spec.js
├── article_new_spec.js
├── article_list_spec.js
├ author
├── author_details_spec.js
├ shared
├── header_spec.js
├ user
├── login_spec.js
├── register_spec.js
└── settings_spec.js

Selecting Elements

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors and insulate them from CSS or JS changes.

Add data-* attributes to make it easier to target elements.

For example:

<button id="main" class="btn btn-large" name="submit"
  role="button" data-cy="submit">Submit</button>

Writing Tests

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

The best practice when writing tests in Cypress is to iterate on a single test at a time, for example:

describe('/login', () => {

  beforeEach(() => {
    // Wipe out state from the previous tests
    cy.visit('/#/login')
  })

  it('requires email', () => {
    cy.get('form').contains('Sign in').click()
    cy.get('.error-messages')
    .should('contain', 'email can\'t be blank')
  })

  it('requires password', () => {
    cy.get('[data-test=email]').type('joe@example.com{enter}')
    cy.get('.error-messages')
    .should('contain', 'password can\'t be blank')
  })

  it('navigates to #/ on successful login', () => {
    cy.get('[data-test=email]').type('joe@example.com')
    cy.get('[data-test=password]').type('joe{enter}')
    cy.hash().should('eq', '#/')
  })

})

Note that we don't add assertions about the home page because we're on the login spec; that's not our responsibility. We'll leave that for the home page, which is the article spec.

Controlling State

"abstraction, reusability and decoupling"

- Don't use the UI to build up state
+ Set state directly / programmatically

Now you have the login spec done and it's the cornerstone for every single test you will do. So how do you use it in e.g. the settings spec? To avoid copy-pasting the login steps into each of your tests and duplicating code, you could use a custom command: cy.login(). But a custom command that logs in through the UI fails at testing in isolation, adds 0% more confidence and accounts for 75% of the test duration. You need to log in without using the UI, and how to do that depends on how your app works. For example, you can check how the app stores the JWT token and make a silent (HTTP) request in Cypress.

So your custom login command becomes:

Cypress.Commands.add('login', () => {
  cy.request({
    method: 'POST',
    url: 'http://localhost:3000/api/users/login',
    body: {
      user: {
        email: 'joe@example.com',
        password: 'joe',
      }
    }
  })
  .then((resp) => {
    window.localStorage.setItem('jwt', resp.body.user.token)
  })
})

Setting state programmatically isn't always as easy as making requests to an endpoint. You might need to manually dispatch e.g. Vue actions to set the desired values for the application state in the store. The Cypress documentation has a good example of how you can test Vue web applications with a Vuex data store & REST backend.

Visiting external sites

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

Try to avoid requiring a 3rd party server. When necessary, always use cy.request() to talk to 3rd party servers via their APIs, for example when testing log in while your app uses another provider via OAuth. Or you could try stubbing out the OAuth provider. Cypress has recipes for the different approaches.

Add multiple assertions

- Don't create "tiny" tests with a single assertion and acting like you’re writing unit tests.
+ Add multiple assertions and don’t worry about it

Cypress runs a series of async lifecycle events that reset state between tests. Resetting tests is much slower than adding more assertions.

it('validates and formats first name', function () {
    cy.get('#first')
      .type('johnny')
      .should('have.attr', 'data-validation', 'required')
      .and('have.class', 'active')
      .and('have.value', 'Johnny')
  })

Clean up state before tests run

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

When your tests end, you are left with your working application at the exact point where your test finished. If you remove your application's state after each test, you lose the ability to use your application in this mode, to debug it, or to write partial tests.

Unnecessary Waiting

- Don't wait for arbitrary time periods using cy.wait(Number).
+ Use route aliases or assertions to guard Cypress from proceeding until an explicit condition is met.

For example waiting explicitly for an aliased route:

cy.server()
cy.route('GET', /users/, [{ 'name': 'Maggy' }, { 'name': 'Joan' }]).as('getUsers')
cy.get('#fetch').click()
cy.wait('@getUsers')     // <--- wait explicitly for this route to finish
cy.get('table tr').should('have.length', 2)

No constraints

You have native access to everything, so don't limit yourself to acting like a user. You can, for example, do the following (see the sketch after the list):

  • Control Time: cy.clock(), e.g. control how your app responds to system time, force set timeouts and set intervals to fire when you want them to.
  • Stub Objects: cy.stub(), force callbacks to fire, assert things are called with right arguments.
  • Modify Stores: cy.window(), e.g. dispatch events, like logout.
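
For example, a small sketch of controlling time with cy.clock() (the visited route and the interval are made up):

cy.clock()                // freeze the browser clock before the app loads
cy.visit('/dashboard')
cy.tick(60 * 1000)        // advance time by one minute so timers and polling fire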

Set global baseUrl

+ Set a baseUrl in your configuration file.

Adding a baseUrl in your configuration allows you to omit passing the baseUrl to commands like cy.visit() and cy.request().

Without baseUrl set, Cypress loads the main window in localhost plus a random port. As soon as it encounters a cy.visit(), it switches the url of the main window to the url specified in your visit. This can result in a ‘flash’ or ‘reload’ when your tests first start. By setting the baseUrl you can avoid this reload altogether.
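
For example in cypress.json (the URL is just an example):

{
  "baseUrl": "http://localhost:3000"
}

After that, cy.visit('/login') and cy.request('/api/users') resolve against the baseUrl.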

Assertions should be obvious

"A good practice is to force an assertion to fail and see if the error message and the output is enough to know why. It is easiest to put a .only on the it block you're evaluating. This way the application will stop where a screenshot is normally taken and you're left to debug as if you were debugging a real failure. Thinking about the failure case will help the person who has to work on a failing test." (Best practices for maintainable tests)

it.only('check for tab descendants', () => {
  cy
    .get('body')
    .should('have.descendants', '[data-testid=Tab]') // expected '' to have descendants '[data-testid=Tab]'
    .find('[data-testid=Tab]')
    .should('have.length', 2) // expected '[ <div[data-testid=tab]>, 4 more... ]' to have a length of 2 but got 5
});

Explore the environment

You can pause the test execution by using the debugger keyword. Make sure the DevTools are open.

it('bar', function () {
   debugger
   // explore "this" context
 })

Running in CI

If you're running Cypress in CI and need to start and stop your web server, there are recipes showing you how to do that.

Try the start-server-and-test module. It's good to note that when using the e2e-cypress plugin for vue-cli, it starts the app automatically for Cypress.
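
A typical setup in package.json could look something like this (the script names, server command and port are assumptions for the sketch):

"scripts": {
  "start": "node server.js",
  "cy:run": "cypress run",
  "test:e2e": "start-server-and-test start http://localhost:3000 cy:run"
}

start-server-and-test starts the server, waits until the URL responds, runs the tests and then shuts the server down.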

If the videos taken during cypress run freeze when running on CI, increase the CPU resources, see: #4722

Adjust the compression level in cypress.json to minimal with "videoCompression": 0, or disable compression with "videoCompression": false. Disable recording altogether with "video": false.

Record success and failure videos

Cypress captures videos from test runs, and whenever a test fails you can watch the failure video side by side with the video from the last successful test run. The differences in the subject under test are quickly obvious, as Gleb Bahmutov's tips suggest.

If you're using e.g. GitLab CI you can configure it to keep artifacts from failed test runs for 1 week, while keeping videos from successful test runs only for 3 days.

  artifacts:
    when: on_failure
    expire_in: '1 week'
    untracked: true
    paths:
      - cypress/videos
      - cypress/screenshots

  artifacts:
    when: on_success
    expire_in: '3 days'
    untracked: true
    paths:
      - cypress/screenshots

Helpful practices

Disable ServiceWorker

ServiceWorkers are great but they can really affect your end-to-end tests by introducing caching and coupling tests. If you want to disable the service worker caching you need to remove or delete navigator.serviceWorker when visiting the page with cy.visit.

it('disable serviceWorker', function () {
  cy.visit('index.html', {
    onBeforeLoad (win) {
      delete win.navigator.__proto__.serviceWorker
    }
  })
})

Note: once deleted, the SW stays deleted in the window, even if the application navigates to another URL.

Get command log on failure

In the headless CI mode, you can get a JSON file for each failed test with the log of all commands. All you need is cypress-failed-log project and include it from your cypress/support/index.js file.

Conditional logic

Sometimes you might need to interact with a page element that does not always exist. For example, there might be a modal dialog the first time you use the website, and you want to close it. But the modal is not shown the second time around, and code that unconditionally closes it would fail.

In order to check if an element exists without asserting it, use the proxied jQuery function Cypress.$:

const $el = Cypress.$('.greeting')
if ($el.length) {
  cy.log('Closing greeting')
  cy.get('.greeting')
    .contains('Close')
    .click()
}
cy.get('.greeting')
  .should('not.be.visible')

Summary

- Don't use the UI to build up state
+ Set state directly / programmatically

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

- Don't limit yourself trying to act like a user
+ You have native access to everything

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors

- Don't create tests with a single assertion
+ Add multiple assertions and don’t worry about it

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

+ Set a baseUrl in your configuration file.

More to read

Use cypress-testing-library, which encourages good testing practices through simple and complete custom Cypress commands and utilities.

Set up intelligent code completion for Cypress commands and assertions by adding a triple-slash directive to the head of your JavaScript or TypeScript testing spec file. This turns on IntelliSense on a per-file basis.

/// <reference types="Cypress" />

Read What I’ve Learned Using Cypress.io for the Past Three Weeks if you need a temporary workaround for iframes and testing file uploads as for now Cypress does not natively support those.

And of course Gleb Bahmutov's blog is a useful resource for practical things, like the Tips and tricks post.

Automate validating code changes with Git hooks

What could be more annoying than committing code changes to the repository and noticing afterwards that the formatting isn't right or tests are failing? Your automated tests on Continuous Integration show rain clouds, and you need to get back to the code and fix minor issues, with extra commits polluting the git history. Fortunately, with small enhancements to your development workflow you can automatically check your changes before committing them and prevent all that hassle. The answer is to use Git hooks, for example pre-commit, for running linters and tests.

Git Hooks

Git hooks are scripts that Git executes before or after events such as: commit, push, and receive. They're a built-in feature and run locally. Hook scripts are only limited by a developer's imagination. Some example hook scripts include:

  • pre-commit: Check the commit for linting errors.
  • pre-receive: Enforce project coding standards.
  • post-commit: Email team members of a new commit.
  • post-receive: Push the code to production.

Every Git repository has a .git/hooks folder with a script for each hook you can bind to. You're free to change or update these scripts as necessary, and Git will execute them when those events occur.
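
For example, a minimal (hypothetical) pre-commit hook that aborts the commit when the test suite fails could be set up like this:

$ cat > .git/hooks/pre-commit << 'EOF'
#!/bin/sh
npm test || exit 1
EOF
$ chmod +x .git/hooks/pre-commit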

Git hooks can greatly increase your productivity as a developer, as you can automate tasks and ensure that your code is ready for committing or pushing to the remote repository.

For more reading about Git hooks you can check the missing Git hooks documentation, read the basics and check the tutorial on how to use Git hooks on local Git clients and Git servers.

Pre-commit

One productive way to use Git hooks is the pre-commit framework for managing and maintaining multi-language pre-commit hooks. Read the tips for using a pre-commit hook.

Pre-commit is nice for example for running linters to ensure that your changes conform to coding standards. All you need to do is install pre-commit and then add hooks.

Installing pre-commit and ktlint, and installing ktlint's git pre-commit hook, on macOS with Homebrew:

$ brew install pre-commit
$ brew install ktlint
$ ktlint --install-git-pre-commit-hook

For example, the pre-commit hook that runs ktlint with the auto-correct option looks like the following in the project's .git/hooks/pre-commit. The "export PATH=/usr/local/bin:$PATH" is for SourceTree to find git on macOS.

#!/bin/sh
export PATH=/usr/local/bin:$PATH
# https://github.com/shyiko/ktlint pre-commit hook
git diff --name-only --cached --relative | grep '\.kts\?$' | xargs ktlint -F --relative .
if [ $? -ne 0 ]; then exit 1; else git add .; fi

The main disadvantage of using pre-commit and local git hooks is that the hooks are kept within the .git directory and never end up in the remote repository. Each contributor has to install them manually in their local repository, which may be overlooked.

Maven projects

The Githook Maven plugin deals with the problem of providing hook configuration in the repository and automates hook installation. It binds to the Maven project's build process and configures and installs local git hooks.

It keeps a mapping between the hook name and the script by creating, in the Maven project's initial lifecycle phase, a respective file in .git/hooks for each hook containing the given script. It's good to note that the plugin overwrites existing hooks.

Usage Example:

<build>
    <plugins>
	<plugin>
	    <groupId>org.sandbox</groupId>
	    <artifactId>githook-maven-plugin</artifactId>
	    <version>1.0.0</version>
	    <executions>
	        <execution>
	            <goals>
	                <goal>install</goal>
	            </goals>
	            <configuration>
	                <hooks>
	                    <pre-commit>
	                         echo running validation build
	                         exec mvn clean install
	                    </pre-commit>
	                </hooks>
	            </configuration>
	        </execution>
	    </executions>
	</plugin>
    </plugins>
</build>

Git hooks for Node.js projects

In Node.js projects you can define scripts in package.json and run them with npm, which enables another approach to running Git hooks.

🐶 Husky is Git hooks made easy for Node.js projects. It keeps existing user hooks, supports GUI Git clients and all Git hooks.

Installing Husky is like installing any other npm library:

npm install husky --save-dev

The following configuration in your package.json runs the lint command (e.g. eslint with --fix) when you try to commit, and runs lint and tests (e.g. mocha, jest) when you try to push to the remote repository.

"husky": {
   "hooks": {
     "pre-commit": "npm run lint",
     "pre-push": "npm run lint && npm run test"
   }
}

Another useful tool is lint-staged which utilizes husky and runs linters against staged git files.
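
A sketch of combining husky and lint-staged in package.json could look like this (the glob and commands are examples; the explicit git add is only needed with older lint-staged versions):

"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "*.{js,jsx}": ["eslint --fix", "git add"]
}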

Summary

Make your development workflow easier by automating all the things. Check your changes before committing them with pre-commit, husky or Githook Maven plugin. You get better code and commit quality for free and your team is happier.

This article was originally published at 15.7.2019 on Gofore's blog.

Ignoring files and folders in Subversion with propset

Before committing code to the Subversion repository we always set the svn:ignore property on the directory to prevent some files and directories from being checked in. You usually want to exclude IDE project files and the target/ directory.

It's useful to put all the ignored files and directories into a file: .svnignore. Your .svnignore could look like:

*.iml
target/*

Put the .svnignore file in the project folder and commit it to your repository so the ignored files are shared between committers.

Now reference the file with the -F option (note the trailing dot, which is the target directory):
$ svn propset svn:ignore -F .svnignore .

Of course I hope everyone has by now moved to git and uses .gitignore for this same purpose.

Best Practices of forking git repository and continuing development

Sometimes there's a need to fork a git repository and continue development with your own additions. It's recommended to make a pull request upstream so that everyone benefits from your changes, but in some situations it's not possible or feasible. When continuing development in a forked repo, some questions come to mind when starting. Here are some questions and answers I found useful when we forked a repository on GitHub and continued to develop it with our specific changes.

Repository name: new or fork?

If you're releasing your own package (to e.g. npm or mvn) from the forked repository with your additions then it's logical to also rename the repository to that package name.

If it's an npm package and you're using scoped packages, you could also keep the original repository name.
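
A scoped package name in package.json could then look like this (the organization and package names are made up):

{
  "name": "@your-org/awesome-forked-package",
  "version": "1.0.0"
}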

Keeping master and continuing developing on branch?

Using master is the sane thing to do. You can always sync your fork with an upstream repository. See: syncing a fork

Generally you want to keep your local master branch as a close mirror of the upstream master and execute any work in feature branches (that might become pull requests later).

How you should do versioning?

Suppose that the original repository (origin) is still in active development and makes new releases. How should you do versioning in your forked repository, as you probably want to bring the changes done in the origin to your fork and still maintain semantic versioning?

In short, semver doesn't support prepending or appending strings to the version. So adding your own tag to the version number of the origin that your version is following breaks the versioning: you can't use something like "1.0.0@your-org.0.1" or "1.0.0-your-org.1". This has been discussed i.a. in semver #287. The suggestion was to use the build metadata tag to encode the other version, as shown in semver spec item 10. But the downside is that "Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence."

If you want to keep the relation to the original package version and follow semver, your options are limited. The only option is to use the build metadata tag, e.g. "1.0.0+your-org.1".

It seems that when following semantic versioning your only real option is to diverge from the origin's version and continue as you go.

If you don't need or want to follow semver, you can track the upstream version and mark your changes using markings similar to semver pre-releases, e.g. "1.0.0-your-org.1".

npm package: scoped or unscoped?

Using scoped packages is a good way to signal an organization's official packages. An example of using scoped packages can be seen in Storybook.

It's more a matter of preference and the naming conventions of your packages. If you're using something like your-org-awesome-times-ahead-package and your-org-patch-the-world-package, then using scoped packages seems redundant.

Who should be the author?

At least add yourself to contributors in package.json.

Forking only for patching npm library?

Don't fork; use patch-package, which lets app authors instantly make and keep fixes to npm dependencies. Patches created by patch-package are automatically and gracefully applied when you use npm (>= 5) or yarn. You don't need to wait around for pull requests to be merged and published, and no more forking repos just to fix that one tiny thing preventing your app from working.
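
A minimal sketch of the patch-package workflow (the package name and version are made up):

# 1. Fix the bug directly in node_modules/some-broken-package
# 2. Generate a patch file into the patches/ directory
$ npx patch-package some-broken-package
# 3. Commit patches/some-broken-package+1.2.3.patch and apply it on every install
#    with a package.json script: "postinstall": "patch-package"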

This post was originally published on Gofore Group blog at 11.2.2019.