Reset Hasura migrations and squash files

Using GraphQL for creating REST APIs is nowadays popular and there are different tools you can use. One of them is Hasura, an open-source engine that gives you realtime GraphQL APIs on new or existing Postgres databases. Hasura is quite easy to work with, but if your GraphQL schemas change a lot it creates plenty of migration files. This has some unwanted consequences (for example slowing down hasura migrate apply or even blocking it). Here are some notes on how to reset the state and create new migrations from the state that is on the server.

Note: From Hasura 1.0.0 onwards squashing is easier with the hasura migrate squash command, which is still in preview. Before Hasura 1.0.0 you have to squash migrations manually, and this blog post explains how. The result is the same: squashing multiple migrations into a single one.
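
For reference, on 1.0.0 and newer the squash command looks roughly like this (a sketch; check hasura migrate squash --help on your version for the exact flags):

$ hasura migrate squash --name "squashed" --from <start-migration-version>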

Hasura documentation provides a good guide on how to squash migrations, but in practice there are a couple of other things you may need to address. So let's combine the steps Hasura gives with some extra steps.

Reset Hasura migrations

First make a backup branch:

  1. Checkout master:
    $ git checkout master
  2. Create a backup branch:
    $ git checkout -b backup/migrations-before-resetting-20XX-XX-XX
  3. Push the backup branch to origin:
    $ git push origin backup/migrations-before-resetting-20XX-XX-XX

We assume you have a local Hasura running on Docker with something like the following docker-compose.yml:

version: "3.6"
services:
  postgres:
    image: postgres:11-alpine
    restart: always
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    command: postgres -c max_locks_per_transaction=2000
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.6
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      HASURA_GRAPHQL_ADMIN_SECRET: changeme
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
volumes:
  db_data:

Create local instance of Hasura with up to date migrations:

  1. $ docker-compose down -v
  2. $ docker-compose up
  3. $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Reset migrations to master:

  1. $ git checkout master
  2. $ git checkout -b reset-hasura-migrations
  3. $ rm -rf migrations/*

Reset the migration history on the server. On the Hasura SQL console (http://localhost:8080/console):

TRUNCATE hdb_catalog.schema_migrations;

Set up fresh migrations by taking the schema and metadata from the server. By default init only includes the public schema; include others with the --schema "your schema" parameter. Note down the version of the created migration for later use.

  1. Create migration file:
    $ hasura migrate create "init" --from-server
  2. Mark the migration as applied on this server:
    $ hasura migrate apply --version "<version>" --skip-execution
  3. Verify the status of the migrations; it should show only one migration with Present status:
    $ hasura migrate status
  4. You have brand new migrations now!

Resetting migrations on other environments

  1. Checkout the reset branch on the local machine:
    $ git checkout reset-hasura-migrations
  2. Reset the migration history on the remote server. On the Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply the migration status to the remote server:
    $ hasura migrate apply --version "<version>" --skip-execution

Local environment Hasura status

Other developers should follow these instructions to get their local backend into the same state.

Option 1: Keep old data

  1. Checkout the backup branch on local machine:
    $ git checkout backup/migrations-before-resetting-20XX-XX-XX
  2. Reset the migration history on the local server. On the Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply the migration status to the local server:
    $ hasura migrate apply --version "<version>" --skip-execution

Option 2: Remove all and start from beginning

  1. Clean up the old docker volumes:
    $ docker-compose down -v
  2. Start up services:
    $ docker-compose up
  3. Checkout master:
    $ git checkout master
  4. Apply migrations:
    $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Possible extra steps

Now your Hasura migrations and database tables are in one init migration file, but sometimes things don't work out when applying it to an empty database. We are using Hasura audit-trigger and had to reorder the SQL clauses generated by the migrate init and add some missing parts.

  1. Move schema creations after the audit clauses
  2. Move audit.audit_table(target_table regclass) to be the last audit clause and copy it from audit.sql
  3. Add the pg_trgm extension as done previously (fixes "operator does not exist: text <% text" in public.search_customers_by_name); see the SQL after this list
  4. Drop session constraints / indexes before creating new ones
  5. Create the session table only if it doesn't exist
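
The pg_trgm step mentioned above is plain SQL added to the init migration:

CREATE EXTENSION IF NOT EXISTS pg_trgm;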

Monthly notes 47

Issue 47, 30.1.2020

War Stories

#Y2038 problem. "It's *already here*. Fix your stuff."
In many systems time is represented as the number of seconds passed since 00:00:00 UTC on 1 January 1970 and stored as a signed 32-bit integer. Such implementations can't encode times after 03:14:07 UTC on 19 January 2038 (2^31 - 1 = 2,147,483,647 seconds, roughly 68 years from the epoch). (from @walokra)

Ops Lessons We All Learn The Hard Way
Good Twitter thread of lessons learned on Ops.

Web

Front-End Performance Checklist 2020
Great resource to read for better front-end performance. Remember, if you don't measure it, you can't improve it 🚀 To get you started: use e.g. PageSpeed Insights and Lighthouse to see where you stand.

JavaScript

20 ways to become a better Node.js developer in 2020
tl;dr; Sleep more; Use Jest & Ava; GraphQL!; Check Nest.js; Gradual deployment; Test in production? Learn Docker & Kubernetes; Read vulnerable code; Use monitoring; CI with quality tools;

Kubernetes

How Soon We Forget: Security in the Age of Docker & Kubernetes
Good starting point for hardening your containers and Kubernetes cluster. tl;dr; Running as non-root. Read-only file system. Not terminating TLS too soon. Setting resources limits (Denial of service). Use Kubernetes policies. (from nicolas_frankel)

Software Development

Goodbye, Clean Code
AHA! Avoid Hasty Abstractions. Prefer duplication over the wrong abstraction. Check also Dan’s tweet’s thread.

Remote working tips
tl;dr; the thread: 1) activity signal to start work day 2) frequent small breaks 3) dedicate a space for work (not sofa/bed) 4) take sick days 5) connect with other humans 6) non-project connection point with peers 7) block out distractions 8) go out for lunch.

Tools

tiny-helpers.dev
Collection of useful single-purpose online tools that are useful for web devs. (from @stefanjudis)

Insomnia
"Debug APIs like a human, not a robot". If you don't like Postman then try Insomnia which says to be powerful HTTP and GraphQL tool belt and open source. Seems that it doesn't have similar scripting and testing features as Postman though.

Something different

Finding the best bicycle chain
tl;dr; No major reason not to use the drivetrain manufacturer's recommended chain. The lubricant you use will play the most critical role in drivetrain durability. Run a good lube and keep your drivetrain clean.

Tracking vulnerabilities and keeping Node.js packages up to date

Software evolves quickly and new versions of libraries are released, but how do you keep track of updated dependencies and vulnerable libraries? Managing dependencies has always been somewhat of a pain point, but it's an important part of software development: it's better to track vulnerabilities and run fresh packages than to be pwned.

There are a couple of tools for JavaScript projects which use npm to manage dependencies, both for checking for new versions and for tracking vulnerabilities. Here's a short introduction to npm audit, depcheck, npm-check-updates and npm-check to help you on your way.

If your project uses yarn, adjust your workflow accordingly; there are for example yarn audit and yarn-check to match the npm tools. And it goes without saying: don't use npm if your project uses yarn.

Running security audit with npm audit

From version 6 onwards npm comes bundled with the audit command, which checks for vulnerabilities in your dependencies and runs automatically when you install a package with npm install. You can also run npm audit manually on your locally installed packages to conduct a security audit and produce a report of dependency vulnerabilities and suggested patches.

The npm audit command submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities. It checks direct dependencies, devDependencies, bundledDependencies, and optionalDependencies, but does not check peerDependencies.

If your npm registry doesn't support npm audit, like Artifactory, you can pass in the --registry flag to point to public npm. The downside is that now you can't audit private packages that are on the Artifactory registry.

$ npm audit --registry=https://registry.npmjs.org

"Running npm audit will produce a report of security vulnerabilities with the affected package name, vulnerability severity and description, path, and other information, and, if available, commands to apply patches to resolve vulnerabilities."

Example: partial output of npm audit run

npm audit is also useful in Continuous Integration, as it returns a non-zero response code if security vulnerabilities are found.
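
For example, a minimal GitLab CI job sketch (the job name and Node image are assumptions; newer npm versions also support --audit-level to fail only on findings of the given severity or higher):

audit:
  image: node:12
  script:
    - npm audit --audit-level=high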

For more information read npm's Auditing dependencies for security vulnerabilities.

Updating packages with npm outdated

It's recommended to regularly update the local packages your project depends on to benefit from improvements made to its dependencies. In your project root directory, run the update command and then outdated; ideally the latter should not produce any output.

$ npm update
$ npm outdated 
Example of results from npm outdated

You can also update globally-installed packages. To see which global packages need to be updated, run outdated first with --depth=0 and then update them:

$ npm outdated -g --depth=0
$ npm update -g

For more information read updating packages downloaded from the registry.

Check updates with npm-check-updates

package.json specifies dependencies with a semantic versioning policy, and to find newer versions of package dependencies than what your package.json allows you need a tool like npm-check-updates. It can upgrade your package.json dependencies to the latest versions, ignoring specified versions while maintaining your existing semantic versioning policies.

Install npm-check-updates globally with:

$ npm install -g npm-check-updates 

And run it with:

$ ncu

The result shows any new dependencies for the project in the current directory. See the documentation for e.g. configuration files for filtering and excluding dependencies.

Example of results from ncu

And finally, you can run ncu -u to upgrade the versions in package.json.
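
A typical flow looks like this; note that ncu only rewrites package.json and doesn't install anything, so run npm install afterwards:

$ ncu -u
$ npm install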

Check updates with npm-check

A similar tool to npm-check-updates is npm-check, which additionally gives more information about the available version changes and also lets you interactively pick which packages to update instead of an all-or-nothing approach. It checks for outdated, incorrect, and unused dependencies.

Install npm-check globally with:

$ npm i -g npm-check

Now you can run the command inside your project directory:

$ npm-check

or

$ npm-check --registry=https://registry.npmjs.org

It will display all possible updates with information about the type of update, project URL, commands, and will attempt to check if the package is still in use. You can easily parse through the results and see what packages might be safe to update. When updates are required it will return a non-zero response code that you can use in your CI tools.

The check for unused dependencies uses depcheck and isn't able to foresee all the ways dependencies can be used, so be wary of carelessly removing packages.

To see an interactive UI for choosing which modules to update run:

$ npm-check -u

Analyze dependencies with depcheck

Your package.json is filled with dependencies, and some of them might be useless or even missing from package.json. Depcheck is a tool for analyzing the dependencies in a project: it shows how each dependency is used, which dependencies are useless, and which dependencies are missing. It not only recognizes the dependencies in JavaScript files, but also supports e.g. React JSX and TypeScript.

Install depcheck with:

$ npm install -g depcheck
And with additional syntax support for TypeScript:
$ npm install -g depcheck typescript

Run depcheck with:

$ depcheck [directory]
Example of results from depcheck
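
Because depcheck can't detect every usage pattern (for example tools referenced only in configuration files), false positives happen. You can exclude such packages with the --ignores flag; the package names here are just examples:

$ depcheck --ignores="eslint,babel-*"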

Summary

tl;dr;

  1. Use npm audit in your CI pipeline
  2. Update dependencies with npm update and verify with npm outdated
  3. Check new versions of dependencies with either npm-check-updates or npm-check
  4. Analyze dependencies with depcheck

Notes from security in the age of Docker & Kubernetes

Security is often the more obscure part of software development, and while container runtimes provide good isolation from the host operating system when using Docker and running containers in Kubernetes, you should not assume you're free from exploits. Remember to keep following the best practices you used when you were not using containers.

Here are my notes from the How Soon We Forget: Security in the Age of Docker & Kubernetes article, which looked at some common regressions in security practices associated with the migration to Docker and Kubernetes and suggested ways to avoid them. To continue the topic, there are also notes from the Taking the Scissors away: make your Kubernetes Cluster safe for DevOps talk, which gives good advice and looks at some of the concepts of enforcing security of application workloads from both conceptual and practical points of view. Best Practices for Kubernetes deployment and Securing a cluster are also worth reading.

These notes don't explain things in depth, so it's worth reading the documents and articles mentioned above.

Running as non-root

"One of the most common and easiest security lapses to address is running binaries as root."

Use non-root Docker images. It requires effort and is easier for greenfield projects.

In Kubernetes, you can enforce running containers as non-root using the pod and container security context.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        name: app
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
      securityContext:
        fsGroup: 2866
        runAsNonRoot: true
        runAsUser: 2866

Use read-only file system

"Do you really need to write files within a container?"

In Kubernetes, set the root file system to read-only using the pod security context and create an emptyDir volume to mount at /tmp.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - env:
        - name: TMPDIR
          value: /tmp
        image: my/app:1.0.0
        name: app
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      volumes:
      - emptyDir: {}
        name: tmp

Protect against Denial of service

"Setting resources limits for your containers protects against a host of denial of service attacks."

With resource limits you can restrict a container to e.g. half a CPU and half a GiB of memory. The Kubernetes deployment specification would look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        name: app
        resources:
          limits:
            cpu: 500m
            memory: 512Mi

Health and readiness checks

"It's a good idea to make sure if your application is not healthy that it shuts down properly so it can be replaced. Kubernetes can help you with this if your application can respond to health and readiness checks and you configure them in your pod specification."

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 3
        name: app
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 3

The liveness probe should indicate if the application is running and the readiness probe should indicate if the application can service requests. Read more from the Kubernetes documentation.

Use Kubernetes policies

"Kubernetes provides network and pod security policies that give you control over what pods can communicate with each other and what types of pods can be started, respectively."

Pod Security Policies allow you to control what capabilities pods can have. When pod security policies are enabled, Kubernetes will only start pods that satisfy the constraints of the pod security policies.

Pod Security Policy is said to be one of the most difficult things to configure properly in a Kubernetes cluster. For example, it's easy to completely lock up your cluster so that no pods can be created at all.

Here is an example of a pod security policy that enforces some of the best practices mentioned: non-privileged containers, a read-only root filesystem, a minimal set of allowed volumes, and no use of the host's network, PID or IPC namespaces.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: best-practices
spec:
  # non-privileged containers
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  # restrict file systems
  readOnlyRootFilesystem: true
  volumes:
    - configMap
    - emptyDir
    - projected
    - secret
    - downwardAPI
    - persistentVolumeClaim
  # limit interaction with host
  hostNetwork: false
  hostIPC: false
  hostPID: false

Network Policies

"Network policies allow you to define ingress and egress rules, i.e., firewall rules, for your pods using IP CIDR ranges and Kubernetes label selectors for pods and namespaces, similar to how Kubernetes service resources select pods."

For example you can create a network policy which will deny ingress from pods in other namespaces but allow pods within the namespace to communicate with each other.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
  namespace: mine
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

There is a GitHub repository of common network policies to help you get started using network policies.

Namespaces

Use namespaces and ensure that you've set sensible defaults for the workloads in them.
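
For example, a LimitRange gives containers in a namespace default resource requests and limits when they don't declare their own; the values below are placeholders to adjust to your workloads:

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-defaults
  namespace: mine
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi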

Summary

"defense in depth" is still important even in the world of containers. The container is not safe. The operating system is not safe. The host is not safe. The network is not safe.

How Soon We Forget: Security in the Age of Docker & Kubernetes

Notes of Best Practices for writing Cypress tests

Cypress is a nice tool for end-to-end tests, and it has good documentation, including Best Practices and the "Cypress Best Practices" talk by Brian Mann at Assert(JS) 2018. Here are my notes from the talk combined with the Cypress documentation. This article assumes you know Cypress and have it running.

In short:

  • Set state programmatically, don't use the UI to build up state.
  • Write specs in isolation, avoid coupling.
  • Don't limit yourself trying to act like a user.
  • Tests should always be able to be run independently and still pass.
  • Only test what you control.
  • Use data-* attributes to provide context to your selectors.
  • Clean up state before tests run (not after).

Organizing tests

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

"Writing and Organizing tests" documentation just tells you the basics how you should organize your tests. You should organize tests by pages and by components as you should test components individually if possible. So the folder structure for tests might look like.

├ articles
├── article_details_spec.js
├── article_new_spec.js
├── article_list_spec.js
├ author
├── author_details_spec.js
├ shared
├── header_spec.js
├ user
├── login_spec.js
├── register_spec.js
└── settings_spec.js

Selecting Elements

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors and insulate them from CSS or JS changes.

Add data-* attributes to make it easier to target elements.

For example:

<button id="main" class="btn btn-large" name="submit"
  role="button" data-cy="submit">Submit</button>

Writing Tests

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

The best practice when writing tests in Cypress is to iterate on a single test at a time, e.g.:

describe('/login', () => {

  beforeEach(() => {
    // Wipe out state from the previous tests
    cy.visit('/#/login')
  })

  it('requires email', () => {
    cy.get('form').contains('Sign in').click()
    cy.get('.error-messages')
      .should('contain', 'email can\'t be blank')
  })

  it('requires password', () => {
    cy.get('[data-test=email]').type('joe@example.com{enter}')
    cy.get('.error-messages')
      .should('contain', 'password can\'t be blank')
  })

  it('navigates to #/ on successful login', () => {
    cy.get('[data-test=email]').type('joe@example.com')
    cy.get('[data-test=password]').type('joe{enter}')
    cy.hash().should('eq', '#/')
  })

})

Note that we don't add assertions about the home page, because we're in the login spec and that's not its responsibility. We'll leave that to the home page's own spec (the articles spec).

Controlling State

"abstraction, reusability and decoupling"

- Don't use the UI to build up state
+ Set state directly / programmatically

Now you have the login spec done, and it's the cornerstone for every single test that follows. So how do you use it in e.g. the settings spec? To avoid copying & pasting the login steps into each of your tests and duplicating code, you could use a custom command: cy.login(). But a custom command that logs in through the UI fails at testing in isolation, adds 0% more confidence and accounts for 75% of the test duration. You need to log in without using the UI, and how to do that depends on how your app works. For example, if the app checks for a JWT token, you can make a silent (HTTP) request in Cypress.

So your custom login command becomes:

Cypress.Commands.add('login', () => {
  cy.request({
    method: 'POST',
    url: 'http://localhost:3000/api/users/login',
    body: {
      user: {
        email: 'joe@example.com',
        password: 'joe',
      }
    }
  })
  .then((resp) => {
    window.localStorage.setItem('jwt', resp.body.user.token)
  })
})

Setting state programmatically isn't always as easy as making requests to an endpoint. You might need to manually dispatch e.g. Vue actions to set desired values for the application state in the store. The Cypress documentation has a good example of how you can test Vue web applications with a Vuex data store & REST backend.
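
As a sketch, assuming the application exposes its Vue instance on the window like in the Cypress Vuex example (window.app and the setUser action are hypothetical names):

cy.visit('/')
cy.window()
  .its('app.$store')
  .then((store) => {
    // set application state directly instead of clicking through the UI
    store.dispatch('setUser', { name: 'Joe' })
  })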

Visiting external sites

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

Try to avoid requiring a 3rd party server. When necessary, always use cy.request() to talk to 3rd party servers via their APIs, for example when testing log in while your app uses another provider via OAuth. Or you could try stubbing out the OAuth provider. Cypress has recipes for different approaches.

Add multiple assertions

- Don't create "tiny" tests with a single assertion, acting like you're writing unit tests.
+ Add multiple assertions and don’t worry about it

Cypress runs a series of async lifecycle events that reset state between tests. Resetting tests is much slower than adding more assertions.

it('validates and formats first name', function () {
  cy.get('#first')
    .type('johnny')
    .should('have.attr', 'data-validation', 'required')
    .and('have.class', 'active')
    .and('have.value', 'Johnny')
})

Clean up state before tests run

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

When your tests end, you are left with your working application at the exact point where your test finished. If you instead remove your application's state after each test, you lose the ability to use the application in this mode, to debug it, or to write partial tests.
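
In practice this means doing the cleanup in a beforeEach instead; a minimal sketch, assuming a test-only reset endpoint (the URL is hypothetical):

beforeEach(() => {
  // reset server-side state before the test runs, not after it
  cy.request('POST', '/test/reset')
})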

Unnecessary Waiting

- Don't wait for arbitrary time periods using cy.wait(Number).
+ Use route aliases or assertions to guard Cypress from proceeding until an explicit condition is met.

For example waiting explicitly for an aliased route:

cy.server()
cy.route('GET', /users/, [{ 'name': 'Maggy' }, { 'name': 'Joan' }]).as('getUsers')
cy.get('#fetch').click()
cy.wait('@getUsers')     // <--- wait explicitly for this route to finish
cy.get('table tr').should('have.length', 2)

No constraints

You have native access to everything, so don't limit yourself to acting like a user. You can e.g.:

  • Control Time: cy.clock(), e.g. control how your app responds to system time, force setTimeout and setInterval callbacks to fire when you want them to.
  • Stub Objects: cy.stub(), force callbacks to fire, assert things are called with right arguments.
  • Modify Stores: cy.window(), e.g. dispatch events, like logout.

Set global baseUrl

+ Set a baseUrl in your configuration file.

Adding a baseUrl in your configuration allows you to omit passing the baseUrl to commands like cy.visit() and cy.request().

Without baseUrl set, Cypress loads the main window on localhost plus a random port. As soon as it encounters a cy.visit(), it switches the url of the main window to the one specified in your visit. This can result in a 'flash' or 'reload' when your tests first start. By setting the baseUrl you can avoid this reload altogether.
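
For example in cypress.json (the port is whatever your app runs on):

{
  "baseUrl": "http://localhost:8080"
}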

Assertions should be obvious

"A good practice is to force an assertion to fail and see if the error message and the output is enough to know why. It is easiest to put a .only on the it block you're evaluating. This way the application will stop where a screenshot is normally taken and you're left to debug as if you were debugging a real failure. Thinking about the failure case will help the person who has to work on a failing test." (Best practices for maintainable tests)

it.only('check for tab descendants', () => {
  cy
    .get('body')
    .should('have.descendants', '[data-testid=Tab]') // expected '' to have descendants '[data-testid=Tab]'
    .find('[data-testid=Tab]')
    .should('have.length', 2) // expected '[ <div[data-testid=tab]>, 4 more... ]' to have a length of 2 but got 5
});

Explore the environment

You can pause the test execution by using the debugger keyword. Make sure the DevTools are open.

it('bar', function () {
  debugger
  // explore "this" context
})

Running in CI

If you're running Cypress in CI and need to start and stop your web server, there are recipes showing you how.

Try the start-server-and-test module. It's good to note that when using the e2e-cypress plugin for vue-cli, it starts the app automatically for Cypress.

If the videos taken during cypress run freeze when running on CI, increase the CPU resources, see: #4722

Adjust the compression level in cypress.json: set it to minimal with "videoCompression": 0, or disable compression with "videoCompression": false. Disable recording altogether with "video": false.

Record success and failure videos

Cypress captures videos from test runs, and whenever a test fails you can watch the failure video side by side with the video from the last successful test run. The differences in the subject under test quickly become obvious, as Bahmutov's tips suggest.

If you're using e.g. GitLab CI, you can configure it to keep artifacts from failed test runs for 1 week, while keeping videos from successful test runs only for 3 days. The two artifacts blocks below go into the respective jobs for failed and successful runs:

artifacts:
  when: on_failure
  expire_in: '1 week'
  untracked: true
  paths:
    - cypress/videos
    - cypress/screenshots

artifacts:
  when: on_success
  expire_in: '3 days'
  untracked: true
  paths:
    - cypress/screenshots

Helpful practices

Disable ServiceWorker

ServiceWorkers are great, but they can really affect your end-to-end tests by introducing caching and coupling tests. If you want to disable the service worker caching, you need to remove or delete navigator.serviceWorker when visiting the page with cy.visit().

it('disable serviceWorker', function () {
  cy.visit('index.html', {
    onBeforeLoad (win) {
      delete win.navigator.__proto__.serviceWorker
    }
  })
})

Note: once deleted, the SW stays deleted in the window, even if the application navigates to another URL.

Get command log on failure

In the headless CI mode, you can get a JSON file for each failed test with the log of all commands. All you need is the cypress-failed-log project, included from your cypress/support/index.js file.

Conditional logic

Sometimes you might need to interact with a page element that does not always exist. For example, there might be a modal dialog the first time you use the website, and you want to close it. But the modal is not shown the second time around, and blindly trying to close it would fail.

In order to check if an element exists without asserting it, use the proxied jQuery function Cypress.$:

const $el = Cypress.$('.greeting')
if ($el.length) {
  cy.log('Closing greeting')
  cy.get('.greeting')
    .contains('Close')
    .click()
}
cy.get('.greeting')
  .should('not.be.visible')

Summary

- Don't use the UI to build up state
+ Set state directly / programmatically

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

- Don't limit yourself trying to act like a user
+ You have native access to everything

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors

- Don't create tests with a single assertion
+ Add multiple assertions and don’t worry about it

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

+ Set a baseUrl in your configuration file.

More to read

Use cypress-testing-library, which encourages good testing practices through simple and complete custom Cypress commands and utilities.

Set up intelligent code completion for Cypress commands and assertions by adding a triple-slash directive to the head of your JavaScript or TypeScript testing spec file. This turns on IntelliSense on a per-file basis.

/// <reference types="Cypress" />

Read What I've Learned Using Cypress.io for the Past Three Weeks if you need a temporary workaround for iframes and testing file uploads, as for now Cypress does not natively support those.

And of course Gleb Bahmutov's blog is a useful resource for practical things, like the Tips and tricks post.

Monthly notes 46

December is full of Christmas carols and hassle before the holidays. So, take a short break and learn to master Kubernetes, become a better human and developer, and make remote (working) a success. Also think about privacy. Good reading and happy holidays!

Issue 46, 17.12.2019

Cloud

Mastering the KUBECONFIG file
Good tips like Auto-$KUBECONFIG based on directory with direnv; Know which context you’re pointing at with kube-ps1; Save GKE contexts to separate files. (from @walokra)

Tutorial: Debug Your Kubernetes Apps (youtube)
Debug your Kubernetes apps tutorial from KubeCon. Slides: https://aws-samples.github.io/debug-k8s-apps/#/, code: https://github.com/aws-samples/debug-k8s-apps. Covers cluster design, networking, kubectl, pods, lb & ingress, monitoring, resource reservation and stateful sets. (from @ArunGupta)

JavaScript

20 ways to become a better Node.js developer in 2020
"20 skills, technologies and considerations on choosing between them. Picking the right tools became one of our greatest challenges — the Node.js ecosystem has matured and present attractive options in almost every field. Vanilla or TypeScript? Ava, Mocha or Jest? Express, Fastify or Koa? or maybe Nest?"

Learning

Things You Should Read To Become A Better Human & Developer
"As developers, we are creators of systems and worlds. However, to be effective at our jobs, we need to understand these systems and worlds we’re creating. When we read, we expand the borders that define our domain of knowledge."

Don’t Learn to Code — Learn to Automate
"avoid thinking of writing code as the goal and learn to solve problems."

A Guide to Distributed Teams
How thoughtful systems (and lots of emoji) make for happy, efficient teams—whether your desks are distributed across floors, cities, or continents. Hacker News comments

How to Make Remote a Success
"It's all about sharing and communicating". E.g. Write down everything: knowledge base to blog posts, make weekly notes; Make everyone feel connected: smarter meetings, daily check-ins/check-outs. Hacker News comments

Privacy

You’re Tracked Everywhere You Go Online. Use This Guide to Fight Back
Advertisers are tracking and monitoring your behavior almost everywhere you go online. Here's how to (mostly) stop it. (from @TimHerrera)

Privolta Consent Study: Google
Great example how to quantify the degree to which 'dark patterns' dominate privacy consent interactions online. (from @ashk4n)

Tools

Falco
Falco is an automatic, easy-to-use Web Performance auditing tool. Open Source WebPageTest runner which helps you monitor, analyze, and optimize your websites. (from @PHacks)

Fx
Command-line tool and terminal JSON viewer. "If you’ve got some files full of JSON that you want to process, Fx will slice and dice it however you want, including using JavaScript one-liners to add a bit of logic to the process." (from DB Weekly #284)

Monthly notes 45

Snow is covering the ground and the hibernation period starts? Or more time inside reading and learning new things? Here are the monthly notes for October.

Issue 45, 30.10.2019

Software Development

What qualities make up a 1x engineer?
I can relate to this.

My favourite Git commit
A good example of how git commit messages should be done, especially if the change is ambiguous. Explanatory commits need more effort than just "Fixed it", but it pays off later. (from @walokra)

DevOps

A Practical Framework for DevSecOps
Nice overview to key #DevSecOps domains and activities. “With a limited budget start with Monitoring and Responding. Then focus on how to prevent vulnerabilities from being introduced in the first place.” (from @walokra)

Docker for Pentesters
"Docker has completely changed my workflow, and I wrote up 10 examples and scripts for how pentesters can leverage Docker to speed up testing. Lmk how you use Docker - this could be a series!" (from @walokra)

iOS

Announcing my Shortcuts Library, featuring 150 Siri Shortcuts to use with iOS 13
With iOS 13, Shortcuts is installed by default on every device – hundreds of millions of people will inevitably use this app now. And you can control them with Siri.

Technology

The secret life of GPS trackers
"We decided to take a look at several child (GPS) trackers available on Amazon, eBay, and Alibaba to see how they stood up to our scrutiny."

Something different

Dumbass Home 2.0
Excellent overview of the "Smart" home and available solutions. "The S in IoT stands for Security", so use a separate WiFi, Zigbee, a hub with Raspberry Pi, RaspBee & Home Assistant (or Hue/SmartThings), and gadgets from Trådfri, Xiaomi (~), Philips & Osram with discount. (from @walokra)

Automate validating code changes with Git hooks

What could be more annoying than committing code changes to the repository and noticing afterwards that formatting isn't right or tests are failing? Your automated tests on Continuous Integration show rain clouds, and you need to get back to the code and fix minor issues with extra commits polluting the git history. Fortunately, with small enhancements to your development workflow you can automatically prevent all the hassle and check your changes before committing them. The answer is to use Git hooks, for example on pre-commit for running linters and tests.

Git Hooks

Git hooks are scripts that Git executes before or after events such as commit, push, and receive. They're a built-in feature and run locally. Hook scripts are only limited by a developer's imagination. Some example hook scripts include:

  • pre-commit: Check the commit for linting errors.
  • pre-receive: Enforce project coding standards.
  • post-commit: Email team members of a new commit.
  • post-receive: Push the code to production.

Every Git repository has a .git/hooks folder with a script for each hook you can bind to. You're free to change or update these scripts as necessary, and Git will execute them when those events occur.
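
For example, a hook becomes active simply by being an executable file with the hook's name; my-checks.sh below is a placeholder for your own script:

$ cp my-checks.sh .git/hooks/pre-commit
$ chmod +x .git/hooks/pre-commit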

Git hooks can greatly increase your productivity as a developer as you can automate tasks and ensure that your code is ready for commit or pushing to remote repository.

For more reading about Git hooks, you can check the missing Git hooks documentation, read the basics and check the tutorial on how to use Git hooks on local Git clients and Git servers.

Pre-commit

One productive way to use Git hooks is the pre-commit framework for managing and maintaining multi-language pre-commit hooks. Read the tips for using a pre-commit hook.

Pre-commit is nice for example for running linters to ensure that your changes conform to coding standards. All you need is to install pre-commit and then add hooks.
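
With the framework, hooks are declared in a .pre-commit-config.yaml at the repository root and activated with pre-commit install. A minimal sketch using the framework's own hook collection (pin whatever rev you actually use):

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer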

Installing pre-commit, ktlint and the pre-commit hook on macOS with Homebrew:

$ brew install pre-commit
$ brew install ktlint
$ ktlint --install-git-pre-commit-hook

For example, the pre-commit hook to run ktlint with the auto-correct option looks like the following in the project's .git/hooks/pre-commit. The "export PATH=/usr/local/bin:$PATH" line is there for SourceTree to find git on macOS.

#!/bin/sh
export PATH=/usr/local/bin:$PATH
# https://github.com/shyiko/ktlint pre-commit hook
git diff --name-only --cached --relative | grep '\.kts\?$' | xargs ktlint -F --relative .
if [ $? -ne 0 ]; then exit 1; else git add .; fi

The main disadvantage of using pre-commit and local git hooks is that the hooks are kept within the .git directory and never make it to the remote repository. Each contributor has to install them manually in their local repository, which may be overlooked.

Maven projects

The Githook Maven plugin deals with the problem of providing hook configuration to the repository and automates hook installation. It binds to the Maven project's build process and configures and installs local git hooks.

It keeps a mapping between the hook name and the script by creating a respective file in .git/hooks for each configured hook during the Maven project's initialize lifecycle phase. It's good to notice that the plugin overwrites existing hooks.

Usage Example:

<build>
    <plugins>
	<plugin>
	    <groupId>org.sandbox</groupId>
	    <artifactId>githook-maven-plugin</artifactId>
	    <version>1.0.0</version>
	    <executions>
	        <execution>
	            <goals>
	                <goal>install</goal>
	            </goals>
	            <configuration>
	                <hooks>
	                    <pre-commit>
	                         echo running validation build
	                         exec mvn clean install
	                    </pre-commit>
	                </hooks>
	            </configuration>
	        </execution>
	    </executions>
	</plugin>
    </plugins>
</build>

Git hooks for Node.js projects

In Node.js projects you can define scripts in package.json and run them with npm, which enables another approach to running Git hooks.

🐶 Husky makes Git hooks easy for Node.js projects. It keeps existing user hooks, and supports GUI Git clients and all Git hooks.

Husky is installed like any other npm library:

npm install husky --save-dev

The following configuration in your package.json runs lint (e.g. eslint with --fix) when you try to commit, and runs lint and tests (e.g. mocha, jest) when you try to push to the remote repository.

"husky": {
   "hooks": {
     "pre-commit": "npm run lint",
     "pre-push": "npm run lint && npm run test"
   }
}

Another useful tool is lint-staged, which utilizes husky and runs linters against staged git files.
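
A sketch of how the two combine in package.json; the glob and the lint command are examples to adjust to your project:

"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "*.js": "eslint --fix"
}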

Summary

Make your development workflow easier by automating all the things. Check your changes before committing them with pre-commit, husky or the Githook Maven plugin. You get better code and commit quality for free, and your team is happier.

This article was originally published at 15.7.2019 on Gofore's blog.

Monthly notes 44

Summer holidays are over and it's time to get back to work and monthly notes. I spent almost the whole of August enjoying nature: mountain biking, hiking and coaching young mountain bikers. Less computers, more relaxing. This month's notes are about writing great Docker images, validating code with git hooks, log management, a story about the npm registry, working remotely and effective Kotlin. Happy reading.

Issue 44, 6.9.2019

Microservices

How to write great Docker container images
It's easy with these great tips and examples. I would add: use a small base image like Alpine Linux if possible. (from @walokra)

Kubernetes: A Detailed Example of Deployment of a Stateful Application
The article goes through an overview of Kubernetes by covering "What are the design principles and architecture of Kubernetes?" and "How to use Kubernetes, and a simple example." (from @java)

Software Development

Automate validating code changes with Git hooks
What could be more annoying than committing code changes and noticing afterwards that the formatting isn't right or tests are failing? Read these tips on how to automate validating code changes with git hooks and make your flow smooth.

Fast log management for your apps
Nicolas Frankel talked at Berlin Buzzwords about logging. Good overview of the issue. tl;dr; no computation in logs, filesystem matters, asynchronous vs. reliability, no expensive meta-data, schema on write, send JSON.

Use morning hours for open source and improving

You're better at your work when you're improving your technical craft.

JavaScript

Story of money and ownership and control
"the economics of open source [in JavaScript, Node.js and npm]". Important point of views to problems with (privately controlled) [npm] package registry. (from @walokra)

Team work

11 Best Practices for Working Remotely
Good tips for working remotely. The biggest hurdles are communication, social opportunities and loneliness and isolation. "With consistent effort, you can overcome the challenges of remote work and create a healthy, happy, productive environment for yourself and for your team." (from @dunjardl)

If you ever have to lead a remote dev team…
The Remote Workflow: simple, transparent, predictable, frictionless. (from @ThePracticalDev)

Books

Effective Kotlin beta release
Adding this to my reading list! "First official version of Effective Kotlin is finally in distribution (as an ebook)". Having read Effective Java, this book is totally worth it.

Something different

Watch 14 minutes of new Cyberpunk 2077 gameplay footage
A new look at different gameplay styles for the upcoming open-world RPG.

Monthly notes 43

Issue 43, 25.7.2019

Microservices

How to write great container images
The article shows the principles of what the writers consider "Dockerfile best practices", and simultaneously walks through them with a real example. I would add: use a small base image like Alpine Linux if possible.

Micro Frontends
The article describes breaking up frontend monoliths into many smaller, more manageable pieces, and how this architecture can increase the effectiveness and efficiency of teams working on frontend code. As well as talking about the various benefits and costs, it covers some of the implementation options that are available, and dives deep into a full example application that demonstrates the technique.

Performance

Performance Analysis Methodology
Informative presentation of Performance Analysis Methodology by Brendan Gregg at LISA '12. Focuses on the USE method, which all staff can use for identifying common bottlenecks and errors. Check for: Utilization, Saturation, Errors. (from walokra)

Fast log management for your apps
You've migrated your application to Reactive Microservices to get the last ounce of performance from your servers. But what about logs? Logs can be one of the few roadblocks on the road to ultimate performance. In his talk at Berlin Buzzwords 2019, Nicolas Frankel shares some insider tips and tricks, taken from experience, to put you on track toward fast(er) log management.

JavaScript

single-spa
A JavaScript framework for front-end microservices.

Node.js Memory Management in Container Environments
Best practices for managing memory in container-based Node apps. (from JavaScript Daily)

CTU JavaScript Guide
Opinionated guide to ground rules for an application’s JavaScript code, such that it’s highly readable and consistent across different developers on a team. The focus is put on quality and coherence across the different pieces of your application.

Security

Nginx Admin's Handbook
nginx is a powerful web server but with great power comes great responsibility (to configure it for security and performance). "Nginx Admin's Handbook" is a good collection of rules, helpers, notes and papers, best practices and recommendations to achieve it. (from walokra)

GOTCHA: Taking phishing to a whole new level
Without X-FRAME-OPTIONS you can build a UI redressing attack that allows attackers to extract valuable information from API endpoints. tl;dr; extract chars with CSS, add a captcha form, scramble chars, get the user to fill in the password-captcha.

Staying Safe on GitHub: The Ultimate GitHub Security Tools Roundup
Nice overview of #security tools for #GitHub repositories. GitHub Security Alerts is provided by default; additionally use one of these: Snyk, WhiteSource Bolt, Sonatype DepShield. (from walokra)

Something different

It's Summer and there are plenty of National Parks in Finland. Go and create your Summer adventure in the wilderness. From the Southern Archipelago to the Northern Fells: Pallas-Yllästunturi, UKK, Pyhä-Luosto, Koli, Nuuksio.