Reset Hasura migrations and squash files

Using GraphQL for creating APIs is popular nowadays and there are different tools you can use. One of them is Hasura, an open-source engine that gives you realtime GraphQL APIs on new or existing Postgres databases. Hasura is quite easy to work with, but if your GraphQL schemas change a lot it creates plenty of migration files. This has some unwanted consequences (for example slowing down hasura migrate apply or even blocking it). Here are some notes on how to reset the state and create new migrations from the state that is on the server.

Note: From Hasura 1.0.0 onwards squashing is easier with the hasura migrate squash command. It's still in preview, but before Hasura 1.0.0 you have to squash migrations manually, which is what this blog post explains. The result is the same: multiple migrations squashed into a single one.
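With the new command, squashing looks roughly like this (a sketch; the version number is illustrative and the new squashed migration still has to be marked as applied):

$ hasura migrate squash --name "init" --from 1550925483858
$ hasura migrate apply --version "<squashed version>" --skip-execution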

The Hasura documentation provides a good guide on how to squash migrations, but in practice there are a couple of other things you may need to address. So let's combine the steps Hasura gives with some extra steps.

Reset Hasura migrations

First make a backup branch:

  1. $ git checkout master
  2. Create a backup branch:
    $ git checkout -b backup/migrations-before-resetting-20XX-XX-XX
  3. Push the backup branch to origin:
    $ git push origin backup/migrations-before-resetting-20XX-XX-XX

We are assuming you have a local Hasura running on Docker with something like the following docker-compose.yml:

version: "3.6"
services:
  postgres:
    image: postgres:11-alpine
    restart: always
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    command: postgres -c max_locks_per_transaction=2000
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.6
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      HASURA_GRAPHQL_ADMIN_SECRET: changeme
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
volumes:
  db_data:

Create a local instance of Hasura with up-to-date migrations:

  1. $ docker-compose down -v
  2. $ docker-compose up
  3. $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Reset migrations to master:

  1. git checkout master
  2. git checkout -b reset-hasura-migrations
  3. rm -rf migrations/*

Reset the migration history on the server. On the Hasura SQL console, http://localhost:8080/console:

TRUNCATE hdb_catalog.schema_migrations;

Set up fresh migrations by taking the schema and metadata from the server. By default init only takes the public schema; other schemas must be listed with the --schema "your schema" parameter. Note down the version for later use (see the example below the list).

  1. Create migration file:
    $ hasura migrate create "init" --from-server
  2. Mark the migration as applied on this server:
    $ hasura migrate apply --version "<version>" --skip-execution
  3. Verify the status of migrations; it should show only one migration with Present status:
    $ hasura migrate status
  4. You have brand new migrations now!
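The version to note down is the timestamp prefix of the migration directory that init created. For example (the timestamp will differ):

$ ls migrations/
1565074277674_init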

Resetting migrations on other environments

  1. Checkout the reset branch on local machine:
    $ git checkout -b reset-hasura-migrations
  2. Reset the migration history on remote server. On Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply migration status to remote server:
    $ hasura migrate apply --version "<version>" --skip-execution

Local environment Hasura status

Other developers should follow these instructions to get their backend into the same state.

Option 1: Keep old data

  1. Checkout the backup branch on local machine:
    $ git checkout backup/migrations-before-resetting-20XX-XX-XX
  2. Reset the migration history on local server. On Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply migration status to local server:
    $ hasura migrate apply --version "<version>" --skip-execution

Option 2: Remove all and start from beginning

  1. Clean up the old docker volumes:
    $ docker-compose down -v
  2. Start up services:
    $ docker-compose up
  3. Checkout master:
    $ git checkout master
  4. Apply migrations:
    $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Possible extra steps

Now your Hasura migrations and database tables are in a single init migration file, but sometimes things don't work out when applying it to an empty database. We are using Hasura audit-trigger and had to reorder the SQL clauses generated by migrate init and add some missing parts.

  1. Move schema creations after audit clauses
  2. Move audit.audit_table(target_table regclass) to be the last audit clause and copy it from audit.sql
  3. Add the pg_trgm extension as done previously (fixes "operator does not exist: text <% text" in public.search_customers_by_name)
  4. Drop session constraints / indexes before creating new ones
  5. Create the session table only if it doesn't exist

Tracking vulnerabilities and keeping Node.js packages up to date

Software evolves quickly and new versions of libraries are released all the time, but how do you keep track of updated dependencies and vulnerable libraries? Managing dependencies has always been somewhat of a pain point, but it's an important part of software development: it's better to track vulnerabilities and run fresh packages than to be pwned.

There are a couple of tools for JavaScript projects which use npm to manage dependencies: tools to check for new versions and tools to track vulnerabilities. Here's a short introduction to npm audit, depcheck, npm-check-updates and npm-check to help you on your way.

If your project uses yarn, adjust your workflow accordingly; there are for example yarn audit and yarn-check to match the npm tools. And it goes without saying: don't use npm if your project uses yarn.

Running security audit with npm audit

From version 6 onwards npm comes bundled with the audit command, which checks for vulnerabilities in your dependencies and runs automatically when you install a package with npm install. You can also run npm audit manually on your locally installed packages to conduct a security audit and produce a report of dependency vulnerabilities and suggested patches.

The npm audit command submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities. It checks direct dependencies, devDependencies, bundledDependencies, and optionalDependencies, but does not check peerDependencies.

If your npm registry doesn't support npm audit, like Artifactory, you can pass the --registry flag to point to the public npm registry. The downside is that you then can't audit private packages that live in the Artifactory registry.

$ npm audit --registry=https://registry.npmjs.org

"Running npm audit will produce a report of security vulnerabilities with the affected package name, vulnerability severity and description, path, and other information, and, if available, commands to apply patches to resolve vulnerabilities."

Example: partial output of npm audit run

Using npm audit is useful also in Continuous Integration as it will return a non-zero response code if security vulnerabilities are found.
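For example, a CI step could fail the build only on high-severity findings (a sketch; the --audit-level flag is available from npm 6.1 onwards):

$ npm audit --audit-level=high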

For more information read npm's Auditing dependencies for security vulnerabilities.

Updating packages with npm outdated

It's recommended to regularly update the local packages your project depends on, to benefit from improvements made to its dependencies. In your project root directory, run the update command and then outdated; there should not be any output.

$ npm update
$ npm outdated 
Example of results from npm outdated

You can also update globally-installed packages. To see which global packages need to be updated run outdated first with --depth=0.

$ npm outdated -g --depth=0
$ npm outdated -g

For more information read updating packages downloaded from the registry.

Check updates with npm-check-updates

package.json declares dependencies with a semantic versioning policy, and to find newer versions of package dependencies than what your package.json allows, you need a tool like npm-check-updates. It can upgrade your package.json dependencies to the latest versions, ignoring specified versions while maintaining your existing semantic versioning policies.

Install npm-check-updates globally with:

$ npm install -g npm-check-updates 

And run it with:

$ ncu

The result shows any newer versions of dependencies for the project in the current directory. See the documentation for e.g. configuration files and for filtering and excluding dependencies.

Example of results from ncu

And finally you can run ncu -u to upgrade the package.json.
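ncu also accepts a filter as an argument and can exclude packages, which helps when you only want to upgrade part of the dependencies. A sketch (package names are illustrative):

$ ncu "/^react/"
$ ncu -u -x webpack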

Check updates with npm-check

A similar tool to npm-check-updates is npm-check, which additionally gives more information about the available version changes and lets you interactively pick which packages to update instead of an all-or-nothing approach. It checks for outdated, incorrect, and unused dependencies.

Install npm-check globally with:

$ npm i -g npm-check

Now you can run the command inside your project directory:

$ npm-check
Or
$ npm-check --registry=https://registry.npmjs.org

It will display all possible updates with information about the type of update, the project URL and commands, and will attempt to check if the package is still in use. You can easily parse through the results and see which packages might be safe to update. When updates are required it returns a non-zero response code that you can use in your CI tools.

The check for unused dependencies uses depcheck and isn't able to foresee all the ways dependencies can be used, so be wary of carelessly removing packages.

To see an interactive UI for choosing which modules to update run:

$ npm-check -u

Analyze dependencies with depcheck

Your package.json is filled with dependencies and some of them might be useless or even missing from package.json. Depcheck is a tool for analyzing the dependencies in a project: it shows how each dependency is used, which dependencies are useless, and which dependencies are missing. It not only recognizes dependencies in JavaScript files, but also supports e.g. React JSX and TypeScript.

Install depcheck with:

$ npm install -g depcheck
And with additional syntax support for TypeScript:
$ npm install -g depcheck typescript

Run depcheck with:

$ depcheck [directory]
Example of results from depcheck

Summary

tl;dr;

  1. Use npm audit in your CI pipeline
  2. Keep dependencies up to date with npm update and npm outdated
  3. Check new versions of dependencies with either npm-check-updates or npm-check
  4. Analyze dependencies with depcheck

Notes from security in the age of Docker & Kubernetes

Security is always the more obscure part of software development, and while container runtimes provide good isolation from the host operating system when using Docker and running containers in Kubernetes, you should not assume you are free from exploits. Remember to follow the same security best practices you followed before you were using containers.

Here are my notes from the How Soon We Forget: Security in the Age of Docker & Kubernetes article, which looked at some common regressions in security practices associated with the migration to Docker and Kubernetes and suggested ways to avoid them. I continue the topic with notes from the Taking the Scissors away: make your Kubernetes Cluster safe for DevOps talk, which gives good advice and looks at enforcing the security of application workloads from both conceptual and practical points of view. Best Practices for Kubernetes deployment and Securing a cluster are also worth reading.

These notes don't explain things in depth, so it's worth reading the documents and articles mentioned above.

Running as non-root

"One of the most common and easiest security lapses to address is running binaries as root."

Use non-root Docker images. It requires effort and is easier for greenfield projects.

In Kubernetes, you can enforce running containers as non-root using the pod and container security context.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        name: app
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
      securityContext:
        fsGroup: 2866
        runAsNonRoot: true
        runAsUser: 2866

Use read-only file system

"Do you really need to write files within a container?"

In Kubernetes, set the root file system to read-only using the container security context and create an emptyDir volume to mount at /tmp.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - env:
        - name: TMPDIR
          value: /tmp
        image: my/app:1.0.0
        name: app
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      volumes:
      - emptyDir: {}
        name: tmp

Protect against Denial of service

"Setting resources limits for your containers protects against a host of denial of service attacks."

With resource limits you can restrict a container to e.g. half a CPU and half a GiB of memory. The Kubernetes deployment specification would then look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        name: app
        resources:
          limits:
            cpu: 500m
            memory: 512Mi

Health and readiness checks

"It's a good idea to make sure if your application is not healthy that it shuts down properly so it can be replaced. Kubernetes can help you with this if your application can respond to health and readiness checks and you configure them in your pod specification."

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 3
        name: app
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 3

The liveness probe should indicate whether the application is running, and the readiness probe whether it can service requests. Read more from the Kubernetes documentation.

Use Kubernetes policies

"Kubernetes provides network and pod security policies that give you control over what pods can communicate with each other and what types of pods can be started, respectively."

Pod Security Policies allow you to control what capabilities pods can have. When pod security policies are enabled, Kubernetes will only start pods that satisfy the constraints of the pod security policies.

Pod Security Policy is said to be one of the most difficult things to configure properly in a Kubernetes cluster; for example, it's easy to lock your cluster up completely so that no pods can be created.

Here is an example of a pod security policy that enforces some of the best practices mentioned: non-privileged containers, a read-only root filesystem, a minimal set of allowed volumes, and no use of the host's network, PID or IPC namespaces.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: best-practices
spec:
  # non-privileged containers
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  # restrict file systems
  readOnlyRootFilesystem: true
  volumes:
    - configMap
    - emptyDir
    - projected
    - secret
    - downwardAPI
    - persistentVolumeClaim
  # limit interaction with host
  hostNetwork: false
  hostIPC: false
  hostPID: false

Network Policies

"Network policies allow you to define ingress and egress rules, i.e., firewall rules, for your pods using IP CIDR ranges and Kubernetes label selectors for pods and namespaces, similar to how Kubernetes service resources select pods."

For example, you can create a network policy which denies ingress from pods in other namespaces but allows pods within the same namespace to communicate with each other.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
  namespace: mine
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

There is a GitHub repository of common network policies to help you get started using network policies.

Namespaces

Use namespaces and ensure that you've set sane defaults for each of them, for example default resource limits and network policies, as sketched below.
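A minimal sketch of such defaults with kubectl, assuming a namespace called team-a (names and values are illustrative):

$ kubectl create namespace team-a
$ kubectl apply -n team-a -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
EOF

The deny-from-other-namespaces network policy shown earlier can be applied per namespace in the same way.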

Summary

"defense in depth" is still important even in the world of containers. The container is not safe. The operating system is not safe. The host is not safe. The network is not safe.

How Soon We Forget: Security in the Age of Docker & Kubernetes

Notes of Best Practices for writing Cypress tests

Cypress is a nice tool for end-to-end tests, and it has good documentation, including Best Practices and the "Cypress Best Practices" talk by Brian Mann at Assert(JS) 2018. Here are my notes from the talk combined with the Cypress documentation. This article assumes you know and have Cypress running.

In short:

  • Set state programmatically, don't use the UI to build up state.
  • Write specs in isolation, avoid coupling.
  • Don't limit yourself trying to act like a user.
  • Tests should always be able to be run independently and still pass.
  • Only test what you control.
  • Use data-* attributes to provide context to your selectors.
  • Clean up state before tests run (not after).

Organizing tests

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

"Writing and Organizing tests" documentation just tells you the basics how you should organize your tests. You should organize tests by pages and by components as you should test components individually if possible. So the folder structure for tests might look like.

├ articles
├── article_details_spec.js
├── article_new_spec.js
├── article_list_spec.js
├ author
├── author_details_spec.js
├ shared
├── header_spec.js
├ user
├── login_spec.js
├── register_spec.js
└── settings_spec.js

Selecting Elements

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors and insulate them from CSS or JS changes.

Add data-* attributes to make it easier to target elements.

For example:

<button id="main" class="btn btn-large" name="submit"
  role="button" data-cy="submit">Submit</button>

Writing Tests

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

The best practice when writing tests in Cypress is to iterate on a single test at a time, e.g.:

describe('/login', () => {

  beforeEach(() => {
    // Wipe out state from the previous tests
    cy.visit('/#/login')
  })

  it('requires email', () => {
    cy.get('form').contains('Sign in').click()
    cy.get('.error-messages')
    .should('contain', 'email can\'t be blank')
  })

  it('requires password', () => {
    cy.get('[data-test=email]').type('joe@example.com{enter}')
    cy.get('.error-messages')
    .should('contain', 'password can\'t be blank')
  })

  it('navigates to #/ on successful login', () => {
    cy.get('[data-test=email]').type('joe@example.com')
    cy.get('[data-test=password]').type('joe{enter}')
    cy.hash().should('eq', '#/')
  })

})

Note that we don't add assertions about the home page because we're in the login spec; that's not our responsibility. We'll leave that to the home page, i.e. the article spec.

Controlling State

"abstraction, reusability and decoupling"

- Don't use the UI to build up state
+ Set state directly / programmatically

Now you have the login spec done, and it's the cornerstone for every single test you will write. So how do you use it in e.g. the settings spec? To avoid copy-pasting the login steps into each of your tests and duplicating code, you could use a custom command: cy.login(). But a custom command that logs in through the UI fails at testing in isolation, adds 0% more confidence and accounts for 75% of the test duration. You need to log in without using the UI, and how to do that depends on how your app works. For example, if your app checks for a JWT token, you can make a silent (HTTP) request from Cypress.

So your custom login command becomes:

Cypress.Commands.add('login', () => {
  cy.request({
    method: 'POST',
    url: 'http://localhost:3000/api/users/login',
    body: {
      user: {
        email: 'joe@example.com',
        password: 'joe',
      }
    }
  })
  .then((resp) => {
    window.localStorage.setItem('jwt', resp.body.user.token)
  })
})

Setting state programmatically isn't always as easy as making requests to an endpoint. You might need to manually dispatch e.g. Vue actions to set desired values for the application state in the store. The Cypress documentation has a good example of how to test Vue web applications with a Vuex data store & REST backend.

Visiting external sites

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

Try to avoid requiring a 3rd party server. When necessary, always use cy.request() to talk to 3rd party servers via their APIs, for example when testing log in while your app uses another provider via OAuth. Or you could try to stub out the OAuth provider. Cypress has recipes for different approaches.

Add multiple assertions

- Don't create "tiny" tests with a single assertion, acting like you're writing unit tests.
+ Add multiple assertions and don’t worry about it

Cypress runs a series of async lifecycle events that reset state between tests. Resetting tests is much slower than adding more assertions.

it('validates and formats first name', function () {
  cy.get('#first')
    .type('johnny')
    .should('have.attr', 'data-validation', 'required')
    .and('have.class', 'active')
    .and('have.value', 'Johnny')
})

Clean up state before tests run

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

When your tests end, you are left with your working application at the exact point where your test finished. If you remove the application's state after each test, you lose the ability to use the application in this mode, to debug it, or to write partial tests.

Unnecessary Waiting

- Don't wait for arbitrary time periods using cy.wait(Number).
+ Use route aliases or assertions to guard Cypress from proceeding until an explicit condition is met.

For example waiting explicitly for an aliased route:

cy.server()
cy.route('GET', /users/, [{ 'name': 'Maggy' }, { 'name': 'Joan' }]).as('getUsers')
cy.get('#fetch').click()
cy.wait('@getUsers')     // <--- wait explicitly for this route to finish
cy.get('table tr').should('have.length', 2)

No constraints

You have native access to everything, so don't limit yourself to acting like a user. You can e.g.:

  • Control Time: cy.clock(), e.g. control how your app responds to system time, force set timeouts and set intervals to fire when you want them to.
  • Stub Objects: cy.stub(), force callbacks to fire, assert things are called with right arguments.
  • Modify Stores: cy.window(), e.g. dispatch events, like logout.

Set global baseUrl

+ Set a baseUrl in your configuration file.

Adding a baseUrl in your configuration allows you to omit passing the baseUrl to commands like cy.visit() and cy.request().

Without a baseUrl set, Cypress loads the main window on localhost plus a random port. As soon as it encounters a cy.visit(), it switches the url of the main window to the one specified in your visit. This can result in a ‘flash’ or ‘reload’ when your tests first start. By setting the baseUrl you can avoid this reload altogether.
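For example, assuming the app is served locally on port 3000 (the port is an assumption), a minimal cypress.json would contain the following (shown as a shell heredoc; merge this into your existing config rather than overwriting it):

$ cat > cypress.json <<'EOF'
{
  "baseUrl": "http://localhost:3000"
}
EOF

You can also override it per run with npx cypress run --config baseUrl=http://localhost:3000.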

Assertions should be obvious

"A good practice is to force an assertion to fail and see if the error message and the output is enough to know why. It is easiest to put a .only on the it block you're evaluating. This way the application will stop where a screenshot is normally taken and you're left to debug as if you were debugging a real failure. Thinking about the failure case will help the person who has to work on a failing test." (Best practices for maintainable tests)

it.only('check for tab descendants', () => {
  cy
    .get('body')
    .should('have.descendants', '[data-testid=Tab]') // expected '' to have descendants '[data-testid=Tab]'
    .find('[data-testid=Tab]')
    .should('have.length', 2) // expected '[ <div[data-testid=tab]>, 4 more... ]' to have a length of 2 but got 5
});

Explore the environment

You can pause test execution by using the debugger keyword. Make sure the DevTools are open.

it('bar', function () {
  debugger
  // explore "this" context
})

Running in CI

If you're running Cypress in CI and need to start and stop your web server, there are recipes showing you how.

Try the start-server-and-test module. It's good to note that when using e2e-cypress plugin for vue-cli it starts the app automatically for Cypress.

If the videos taken during cypress run freeze when running on CI, increase the CPU resources, see: #4722.

You can also adjust the compression level in cypress.json to minimal with "videoCompression": 0, disable compression with "videoCompression": false, or disable recording altogether with "video": false.

Record success and failure videos

Cypress captures videos from test runs, and whenever a test fails you can watch the failure video side by side with the video from the last successful test run. The differences in the subject under test are quickly obvious, as Bahmutov's tips suggest.

If you're using e.g. GitLab CI, you can configure it to keep artifacts from failed test runs for 1 week, while keeping videos from successful test runs for only 3 days.

# in the job for failing test runs
artifacts:
  when: on_failure
  expire_in: '1 week'
  untracked: true
  paths:
    - cypress/videos
    - cypress/screenshots

# in the job for successful test runs
artifacts:
  when: on_success
  expire_in: '3 days'
  untracked: true
  paths:
    - cypress/screenshots

Helpful practices

Disable ServiceWorker

ServiceWorkers are great, but they can really affect your end-to-end tests by introducing caching and coupling between tests. If you want to disable service worker caching, you need to remove or delete navigator.serviceWorker when visiting the page with cy.visit().

it('disable serviceWorker', function () {
  cy.visit('index.html', {
    onBeforeLoad (win) {
      delete win.navigator.__proto__.serviceWorker
    }
  })
})

Note: once deleted, the SW stays deleted in the window, even if the application navigates to another URL.

Get command log on failure

In headless CI mode, you can get a JSON file for each failed test with the log of all commands. All you need is the cypress-failed-log project, included from your cypress/support/index.js file.

Conditional logic

Sometimes you might need to interact with a page element that does not always exist. For example, there might be a modal dialog the first time you use the website, and you want to close it. But the modal is not shown the second time around, and code that blindly closes the modal will fail.

In order to check if an element exists without asserting it, use the proxied jQuery function Cypress.$:

const $el = Cypress.$('.greeting')
if ($el.length) {
  cy.log('Closing greeting')
  cy.get('.greeting')
    .contains('Close')
    .click()
}
cy.get('.greeting')
  .should('not.be.visible')

Summary

- Don't use the UI to build up state
+ Set state directly / programmatically

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

- Don't limit yourself trying to act like a user
+ You have native access to everything

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors

- Don't create tests with a single assertion
+ Add multiple assertions and don’t worry about it

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

+ Set a baseUrl in your configuration file.

More to read

Use cypress-testing-library, which encourages good testing practices through simple and complete custom Cypress commands and utilities.

Set up intelligent code completion for Cypress commands and assertions by adding a triple-slash directive to the head of your JavaScript or TypeScript testing spec file. This turns on IntelliSense on a per-file basis.

/// <reference types="Cypress" />

Read What I’ve Learned Using Cypress.io for the Past Three Weeks if you need a temporary workaround for iframes and testing file uploads as for now Cypress does not natively support those.

And of course Gleb Bahmutov's blog is useful resource for practical things like Tips and tricks post.

Automate validating code changes with Git hooks

What could be more annoying than committing code changes to the repository and noticing afterwards that the formatting isn't right or tests are failing? Your automated tests on Continuous Integration show rain clouds and you need to get back to the code and fix minor issues, with extra commits polluting the git history. Fortunately, with small enhancements to your development workflow you can automatically check your changes before committing them and prevent all that hassle. The answer is to use Git hooks, for example pre-commit for running linters and tests.

Git Hooks

Git hooks are scripts that Git executes before or after events such as commit, push, and receive. They're a built-in feature and run locally. Hook scripts are only limited by a developer's imagination. Some example hook scripts include:

  • pre-commit: Check the commit for linting errors.
  • pre-receive: Enforce project coding standards.
  • post-commit: Email team members of a new commit.
  • post-receive: Push the code to production.

Every Git repository has a .git/hooks folder with a script for each hook you can bind to. You're free to change or update these scripts as necessary, and Git will execute them when those events occur.
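For example, Git ships every new repository with disabled sample hooks; renaming one and making sure it is executable enables it:

$ ls .git/hooks
commit-msg.sample  pre-commit.sample  pre-push.sample  ...
$ mv .git/hooks/pre-commit.sample .git/hooks/pre-commit
$ chmod +x .git/hooks/pre-commit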

Git hooks can greatly increase your productivity as a developer, as you can automate tasks and ensure that your code is ready for committing or pushing to a remote repository.

For more reading about Git hooks, check the missing Git hooks documentation, read the basics, and see the tutorial on how to use Git hooks on local Git clients and Git servers.

Pre-commit

One productive way to use Git hooks is the pre-commit framework for managing and maintaining multi-language pre-commit hooks. Read the tips for using a pre-commit hook.

Pre-commit is nice for example for running linters to ensure that your changes conform to coding standards. All you need is to install pre-commit and then add the hooks.

Installing pre-commit, ktlint and ktlint's pre-commit hook on macOS with Homebrew:

$ brew install pre-commit
$ brew install ktlint
$ ktlint --install-git-pre-commit-hook

For example, the pre-commit hook that runs ktlint with the auto-correct option looks like the following in the project's .git/hooks/pre-commit. The "export PATH=/usr/local/bin:$PATH" is there so that SourceTree finds git on macOS.

#!/bin/sh
export PATH=/usr/local/bin:$PATH
# https://github.com/shyiko/ktlint pre-commit hook
git diff --name-only --cached --relative | grep '\.kt[s]\?$' | xargs ktlint -F --relative .
if [ $? -ne 0 ]; then exit 1; else git add .; fi

The main disadvantage of using pre-commit and local git hooks is that the hooks are kept within the .git directory and never make it to the remote repository. Each contributor has to install them manually in their local repository, which is easy to overlook.

Maven projects

The Githook Maven plugin deals with the problem of providing hook configuration to the repository and automates their installation. It binds to the Maven project's build process and configures and installs local git hooks.

It keeps a mapping between the hook name and the script by creating a respective file in .git/hooks for each hook, containing the given script, during the Maven project's initial lifecycle phase. It's good to notice that the plugin overwrites existing hooks.

Usage Example:

<build>
    <plugins>
	<plugin>
	    <groupId>org.sandbox</groupId>
	    <artifactId>githook-maven-plugin</artifactId>
	    <version>1.0.0</version>
	    <executions>
	        <execution>
	            <goals>
	                <goal>install</goal>
	            </goals>
	            <configuration>
	                <hooks>
	                    <pre-commit>
	                         echo running validation build
	                         exec mvn clean install
	                    </pre-commit>
	                </hooks>
	            </configuration>
	        </execution>
	    </executions>
	</plugin>
    </plugins>
</build>

Git hooks for Node.js projects

In Node.js projects you can define scripts in package.json and run them with npm, which enables another approach to running Git hooks.

🐶 Husky makes Git hooks easy for Node.js projects. It keeps existing user hooks, and supports GUI Git clients and all Git hooks.

Husky is installed like any other npm library:

npm install husky --save-dev

The following configuration in your package.json runs lint (e.g. eslint with --fix) when you try to commit, and runs lint and tests (e.g. mocha, jest) when you try to push to the remote repository.

"husky": {
   "hooks": {
     "pre-commit": "npm run lint",
     "pre-push": "npm run lint && npm run test"
   }
}

Another useful tool is lint-staged which utilizes husky and runs linters against staged git files.

Summary

Make your development workflow easier by automating all the things. Check your changes before committing them with pre-commit, husky or the Githook Maven plugin. You get better code and commit quality for free, and your team will be happier.

This article was originally published at 15.7.2019 on Gofore's blog.

Ignoring files and folders in Subversion with propset

Before committing code to the Subversion repository we always set the svn:ignore property on the directory to prevent certain files and directories from being checked in. You usually want to exclude the IDE project files and the target/ directory.

It's useful to put all the ignored files and directories into a file: .svnignore. Your .svnignore could look like:

*.iml
target/*

Put the .svnignore file in the project folder and commit it to your repository so the ignored files are shared between committers.

Now reference the file with the -F option; note that the trailing dot is the target directory:

$ svn propset svn:ignore -F .svnignore .

Of course I hope everyone has by now moved to git and uses .gitignore for this same purpose.

Best Practices for Version Control in 8 steps

Using version control is an essential part of modern software development and using it efficiently should be part of every developer's tool kit. Knowing the basic rules makes it even more useful. Here are some best practices that help you on your way.

tl;dr;

  1. Commit logical changesets (atomic commits)
  2. Commit Early, Commit Often
  3. Write Reasonable Commit Messages
  4. Don't Commit Generated Sources
  5. Don't Commit Half-Done Work
  6. Test Before You Commit
  7. Use Branches
  8. Agree on a Workflow
Simplified Git Flow (source: buildazure)

Commit logical changesets (atomic commits)

A commit should be a wrapper for related changes. Make sure your change reflects a single purpose: the fixing of a specific bug, the addition of a new feature, or some particular task. Small commits make it easier for other developers to understand the changes and roll them back if something went wrong.

Your commit will create a new revision number which can forever be used as a "name" for the change. You can mention this revision number in bug databases, or use it as an argument to merge should you want to undo the change or port it to another branch. Git makes it easy to create very granular commits.

So if you make changes to multiple logical components at the same time, commit them in separate parts. That way it's easier to follow the changes and their history. Working on features A, B and C while fixing bugs 1, 2 and 3 should thus produce at least six commits.
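With git this is easy even when the edits are already mixed together in your working tree, as you can stage changes hunk by hunk (paths and messages are illustrative):

$ git add -p
$ git commit -m "Add feature A"
$ git add src/fix-bug-1.js
$ git commit -m "Fix bug 1"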

Commit Early, Commit Often

It is recommended to commit code to version control often which keeps your commits small and, again, helps you commit only related changes. It also allows you to share your code more frequently with others.

It's easier for everyone to integrate changes regularly and avoid having merge conflicts. Having few large commits and sharing them rarely, in contrast, makes it hard to solve conflicts.

"If the code isn't checked into source control, it doesn't exist."

Coding Horror

Write Reasonable Commit Messages

Always write some reasonable comment on your commit. It should be short and descriptive and tell what was changed and why.

Begin your message with a short summary of your changes (up to 50 characters as a guideline). Separate it from the following body by including a blank line.

It is also useful to add some prefix to your message like Fix or Add, depending on what kind of changes you did. Use the imperative, present tense ("change", not "changed" or "changes") to be consistent with generated messages from commands like git merge.

If you're fixing a bug or adding a feature that has a JIRA ticket, add the ticket identifier as a prefix.

For example: "Fix a few bugs in the interface. Added an ID field. Removed a couple unnecessary functions. Refactored the context check." or "Fix bad allocations in image processing routines".

Not like this: "Fixed some bugs."

The body of your message should provide detailed answers to the following questions: What was the motivation for the change? How does it differ from the previous implementation?

"If the changes you made are not important enough to comment on, they probably are not worth committing either."

loop label

Don’t Commit Generated Sources

Don't commit files which are generated dynamically or which are user dependent, like the target folder, IDEA's .iml files, or Eclipse's .settings and .project files. They change depending on each user's preferences and don't relate to the project's code.

The project's binary files and Javadocs don't belong in version control either.

Don't Commit Half-Done Work

You should only commit code when it's completed. Split the feature's implementation into logical chunks and remember to commit early and often. Use branches, or consider using Git's stash feature if you need a clean working copy (to check out a branch, pull in changes, etc.).
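A quick sketch of using the stash for that:

$ git stash
$ git pull
$ git stash pop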

On the other hand, you should never leave the office without committing your changes.

"It's better to have a broken build in your working repository than a working build on your broken hard drive."

loop label

Test Before You Commit

You should only commit code which is tested and passes tests. And this includes code formatting with linters. Write tests and run tests to make sure the feature or bug fix really is completed and has no side effects (as far as one can tell).

Having your code tested is even more important when it comes to pushing / sharing your code with others.

Use Branches

Branching is one of Git's most powerful features – and this is not by accident: quick and easy branching was a central requirement from day one. Branches are the perfect tool to help you avoid mixing up different lines of development.

You should use branches extensively in your development workflows: for new features, bug fixes and ideas.

Agree on a Workflow

Git lets you pick from a lot of different workflows: long-running branches, topic branches, merge or rebase, git-flow.

Which one you choose depends on a couple of factors: your project, your overall development and deployment workflows and (maybe most importantly) on your and your teammates' personal preferences. However you choose to work, just make sure to agree on a common workflow that everyone follows.

Atlassian has a good article comparing workflows to suit your needs; it covers the centralized, feature branch, gitflow and forking workflows.

Summary

Using version control is usually, and fortunately, an acknowledged best practice and part of software development. Using even a couple of the above practices makes working with the code much more pleasant. Adopting at least "Commit logical changesets" and "Write Reasonable Commit Messages" helps a lot.

Code quality metrics for Kotlin project on SonarQube

Code quality in software development projects is important and a good metric to follow. Code coverage, technical debt, vulnerabilities in dependencies and conformance to code style rules are a couple of things you should follow. There are some de facto tools you can use to visualize these, and one of them is SonarQube. Here's a short technical note on how to set it up for a Kotlin project and visualize metrics from different tools.

In addition to the analysis SonarQube's default plugins provide, we are also using Detekt for static source code analysis and OWASP Dependency-Check to detect publicly disclosed vulnerabilities contained within project dependencies.

Visualizing Kotlin project metrics on SonarQube

SonarQube is a nice graphical tool to visualize different metrics of your project. Lately it has also started to support Kotlin, with the SonarKotlin plugin and the community sonar-kotlin plugin. Compared to a typical Java project you need some extra settings to get things working. It's also good to notice that the Kotlin support isn't quite there yet, and sonar-kotlin provides better information e.g. when it comes to code coverage.

Steps to integrate reporting to Sonar with maven build:

  • Add configuration in the project pom.xml: Surefire, Failsafe, JaCoCo, Detekt, Dependency-Check
  • Run Sonar in Docker
  • Maven build with sonar:sonar option
  • Check Sonar dashboard
SonarQube project overview

Configure Kotlin project

Configure your Kotlin project built with Maven to have test reporting and static analysis. We are using Surefire to run unit tests, Failsafe for integration tests, and JaCoCo to generate reports for e.g. SonarQube. See the full pom.xml from the example project (coming soon).

Test results reporting

pom.xml

<properties> 
<sonar.coverage.jacoco.xmlReportPaths>${project.build.directory}/site/jacoco/jacoco.xml</sonar.coverage.jacoco.xmlReportPaths> 
</properties> 

<build> 
    <plugins>
        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>default-prepare-agent</id>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
                <execution>
                    <id>pre-integration-test</id>
                    <goals>
                        <goal>prepare-agent-integration</goal>
                    </goals>
                </execution>
                <execution>
                    <id>jacoco-site</id>
                    <phase>verify</phase>
                    <goals>
                        <goal>report</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <skipTests>${unit-tests.skip}</skipTests>
                <excludes>
                    <exclude>**/*IT.java</exclude>
                    <exclude>**/*IT.kt</exclude>
                    <exclude>**/*IT.class</exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <skipTests>${integration-tests.skip}</skipTests>
                <includes>
                    <include>**/*IT.class</include>
                </includes>
                <runOrder>alphabetical</runOrder>
            </configuration>
        </plugin>
    </plugins> 

    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.1</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.22.1</version>
            </plugin>
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.3</version>
            </plugin>
        </plugins>
    </pluginManagement>

... 
</build> 

Static code analysis with Detekt

Detekt static code analysis is configured as an AntRun task, as shown below; there's also an unofficial Maven plugin for Detekt. It's good to notice that Detekt produces some "false positive" findings, and you can either customize the detekt rules or suppress findings if they are intentional, such as with @Suppress("MagicNumber").

Detekt code smells

pom.xml

<properties> 
    <sonar.kotlin.detekt.reportPaths>${project.build.directory}/detekt.xml</sonar.kotlin.detekt.reportPaths> 
</properties> 

<build> 
... 
<plugins> 
<plugin> 
    <groupId>org.apache.maven.plugins</groupId> 
    <artifactId>maven-antrun-plugin</artifactId> 
    <version>1.8</version> 
    <executions> 
        <execution> 
            <!-- This can be run separately with mvn antrun:run@detekt --> 
            <id>detekt</id> 
            <phase>verify</phase> 
            <configuration> 
                <target name="detekt"> 
                    <java taskname="detekt" dir="${basedir}" 
                          fork="true" 
                          failonerror="false" 
                          classname="io.gitlab.arturbosch.detekt.cli.Main" 
                          classpathref="maven.plugin.classpath"> 
                        <arg value="--input"/> 
                        <arg value="${basedir}/src"/> 
                        <arg value="--filters"/> 
                        <arg value=".*/target/.*,.*/resources/.*"/> 
                        <arg value="--report"/> 
                        <arg value="xml:${project.build.directory}/detekt.xml"/> 
                    </java> 
                </target> 
            </configuration> 
            <goals> 
                <goal>run</goal> 
            </goals> 
        </execution> 
    </executions> 
    <dependencies> 
        <dependency> 
            <groupId>io.gitlab.arturbosch.detekt</groupId> 
            <artifactId>detekt-cli</artifactId> 
            <version>1.0.0-RC14</version> 
        </dependency> 
    </dependencies> 
</plugin> 
</plugins> 
... 
</build> 

Dependency checks

Dependency checking is done with the OWASP Dependency-Check Maven plugin:

OWASP Dependency-Check

pom.xml

<properties> 
    <dependency.check.report.dir>${project.build.directory}/dependency-check</dependency.check.report.dir> 
    <sonar.host.url>http://localhost:9000/</sonar.host.url> 
    <sonar.dependencyCheck.reportPath>${dependency.check.report.dir}/dependency-check-report.xml</sonar.dependencyCheck.reportPath>
    <sonar.dependencyCheck.htmlReportPath>${dependency.check.report.dir}/dependency-check-report.html</sonar.dependencyCheck.htmlReportPath>
</properties> 

<build> 
... 
<plugins> 
<plugin> 
    <groupId>org.owasp</groupId> 
    <artifactId>dependency-check-maven</artifactId> 
    <version>4.0.2</version> 
    <configuration> 
        <format>ALL</format> 
        <skipProvidedScope>true</skipProvidedScope> 
        <skipRuntimeScope>true</skipRuntimeScope> 
        <outputDirectory>${dependency.check.report.dir}</outputDirectory> 
    </configuration> 
    <executions> 
        <execution> 
            <goals> 
                <goal>check</goal> 
            </goals> 
        </execution> 
    </executions> 
</plugin> 
</plugins> 
... 
</build>

Sonar scanner to run with Maven

pom.xml

<build> 
... 
    <pluginManagement> 
        <plugins> 
            <plugin> 
                <groupId>org.sonarsource.scanner.maven</groupId> 
                <artifactId>sonar-maven-plugin</artifactId> 
                <version>3.6.0.1398</version> 
            </plugin> 
        </plugins> 
    </pluginManagement> 
... 
</build> 

Running Sonar with Kotlin plugin

Create a SonarQube server with Docker

$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube 

There's also an OWASP Docker image for SonarQube which adds several community plugins to enable SAST, but for our purposes the "plain" SonarQube works nicely.

Use the Kotlin plugin which comes with SonarQube (SonarKotlin), or install the sonar-kotlin plugin which shows the information differently. If you want to use sonar-kotlin and are using the official Docker image for SonarQube, you first have to remove the bundled SonarKotlin plugin.

Using sonar-kotlin

$ git clone https://github.com/arturbosch/sonar-kotlin 
$ cd sonar-kotlin 
$ mvn package  
$ docker exec -it sonarqube sh -c "ls /opt/sonarqube/extensions/plugins" 
$ docker exec -it sonarqube sh -c "rm /opt/sonarqube/extensions/plugins/sonar-kotlin-plugin-1.5.0.315.jar" 
$ docker cp target/sonar-kotlin-0.5.2.jar sonarqube:/opt/sonarqube/extensions/plugins 
$ docker stop sonarqube 
$ docker start sonarqube 

Adding dependency-check-sonar-plugin to SonarQube

$ curl -JLO https://github.com/SonarSecurityCommunity/dependency-check-sonar-plugin/releases/download/1.2.1/sonar-dependency-check-plugin-1.2.1.jar 
$ docker cp sonar-dependency-check-plugin-1.2.1.jar sonarqube:/opt/sonarqube/extensions/plugins 
$ docker stop sonarqube 
$ docker start sonarqube 

Run test on project and scan with Sonar

The verify phase runs your tests and should generate e.g. jacoco.xml under target/site/jacoco, as well as detekt.xml.

$ mvn clean verify sonar:sonar

Access Sonar via http://localhost:9000/

Code quality metrics? So what?

You now have metrics on Sonar to show to stakeholders but what should you do with those numbers?

One use case is to set quality gates on SonarQube to check that a set of conditions is met before the project can be released into production. Ensuring the code quality of "new" code while fixing existing issues is a good way to maintain a good codebase over time. The quality gate lets you set up rules for validating every new piece of code added to the codebase on subsequent analyses. By default the gate fails when: coverage on new code is less than 80%; the percentage of duplicated lines on new code is greater than 3; or the maintainability, reliability or security rating is worse than A.

Best Practices of forking git repository and continuing development

Sometimes there's a need to fork a git repository and continue development with your own additions. It's recommended to make a pull request to upstream so that everyone can benefit from your changes, but in some situations it's not possible or feasible. When continuing development in a forked repo, some questions come to mind. Here are some questions and answers I found useful when we forked a repository on GitHub and continued developing it with our own specific changes.

Repository name: new or fork?

If you're releasing your own package (to e.g. npm or mvn) from the forked repository with your additions then it's logical to also rename the repository to that package name.

If it's a npm package and you're using scoped packages then you could also keep the original repository name.

Keeping master and continuing developing on branch?

Using master is the sane thing to do. You can always sync your fork with the upstream repository; see: syncing a fork.

Generally you want to keep your local master branch as a close mirror of the upstream master and execute any work in feature branches (that might become pull requests later).
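In practice, syncing the fork looks roughly like this (the upstream URL is illustrative):

$ git remote add upstream https://github.com/original-org/original-repo.git
$ git fetch upstream
$ git checkout master
$ git merge upstream/master
$ git push origin master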

How should you do versioning?

Suppose that the original repository (origin) is still in active development and does new releases. How should you do versioning in your forked repository as you probably want to bring the changes done in the origin to your fork? And still maintain semantic versioning.

In short, semver doesn't support prepending or appending strings to the version. So adding your own tag to the version number of the origin you are following breaks the versioning: you can't use something like "1.0.0@your-org.0.1" or "1.0.0-your-org.1". This has been discussed e.g. in semver #287. The suggestion there was to use the build metadata tag to encode the other version, as shown in semver spec item 10. The downside is that "Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence."

If you want to keep a relation to the original package version and follow semver, your options are few: the only one is to use the build metadata tag, e.g. "1.0.0+your-org.1".

Otherwise, when following semantic versioning, your option is to diverge from the origin's versioning and continue on your own.

If you don't need or want to follow semver, you can track the upstream version and mark your changes using markings similar to semver pre-releases: e.g. "1.0.0-your-org.1".

npm package: scoped or unscoped?

Using scoped packages is a good way to signal official packages for organizations. Example of using scoped packages can be seen from Storybook.

Otherwise it's more a matter of preference and the naming conventions of your packages. If you're using something like your-org-awesome-times-ahead-package and your-org-patch-the-world-package, then using scoped packages seems redundant.

Who should be the author?

At least add yourself to contributors in package.json.

Forking only for patching an npm library?

Don't fork; use patch-package, which lets app authors instantly make and keep fixes to npm dependencies. Patches created by patch-package are automatically and gracefully applied when you use npm (>= 5) or yarn. Now you don't need to wait around for pull requests to be merged and published, and there's no more forking repos just to fix that one tiny thing preventing your app from working.
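A sketch of the workflow (the package name is hypothetical; the postinstall script makes patches apply on every install):

$ vim node_modules/some-package/index.js
$ npx patch-package some-package
# commit the generated patches/ directory, and in package.json add:
#   "scripts": { "postinstall": "patch-package" }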

This post was originally published on Gofore Group blog at 11.2.2019.

Learning and Staying Current in Software Development

Software development is one of the professions where you have to keep your knowledge up to date and follow what happens in the field. Staying current and expanding your horizons can be achieved in different ways, and one good way I have used is to follow different news sources and newsletters, listen to podcasts and attend meetups. Here is my opinionated selection of resources to learn and share ideas: newsletters, meetups and other things for software developers. Meetups and some of the resources are Finland-related.

News

There are some good sites to follow what happens in technology. They provide community powered links and discussions.

Podcasts

Podcasts are a nice resource for gathering experiences and new information about how things can be done and what's happening and coming up in software development. I commute daily about an hour, and time flies when you find good episodes to listen to. Here's my selection of podcasts relating to software development.

General

  • Software Engineering Daily: "The world through the lens of software" (iTunes)
  • Software Engineering Radio: "Targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast". (feed)
  • ShopTalk: "An internet radio show about the internet starring Dave Rupert and Chris Coyier." (iTunes)
  • Full Stack Radio: "Every episode, Adam Wathan is joined by a guest to talk about everything from product design and user experience to unit testing and system administration." (feed)

Front-end

  • Syntax: "A Tasty Treats Podcast for Web Developers." (iTunes)
  • The Changelog: "Conversations with the hackers, leaders, and innovators of software development."
  • React Podcast: "Conversations about React with your favorite developers."
  • Brainfork: "A podcast about mental health & tech"

In Finnish

  • ATK-hetki: "Vesa Vänskä and Antti Akonniemi discuss technology, business, and self-improvement."
  • Webbidevaus: "Talk radio about web development, in Finnish! Hosted by Antti Mattila and Riku Rouvila."

Newsletters

Information overload is easily achieved, so it's beneficial to use, for example, curated newsletters on the subjects that intersect the stack you're using and the topics you're interested in.

The power of a newsletter lies in the fact that it can deliver condensed and digestible content, which is harder to achieve with other good news sources like feed subscriptions and Twitter. A well-curated newsletter for a targeted audience is a pleasure to read, and even if you forget to check your newsletter folder, you can always get back to it later.

General

Mobile development

  • iOS Dev Weekly: Hand-picked round-up of the best iOS development links, published every Friday.
  • This Week In Swift: List of the best Swift resources of the week.
  • iOS Dev nuggets: A short iOS app development nugget every Friday/Saturday; usually something you can read in a few minutes to improve your iOS app development skills.

Java

Database

  • DB Weekly: A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.

HTML and CSS

  • HTML5Weekly: Weekly HTML5 and Web Platform technology roundup. Curated by Peter Cooper.
  • CSS Weekly: Roundup of CSS articles, tutorials, experiments and tools. Curated by Zoran Jambor.

Web development

  • Status code: "Keeping developers informed." Weekly email newsletters on a range of programming niches (links to JavaScript Weekly, DevOps Weekly, etc.).
  • Web Development Reading List: Weekly roundup of web development–related sources, selected by Anselm Hannemann.
  • Versioning: "Daily knowledge devs and designers need to get ahead of the game." SitePoint’s daily newsletter, which features the latest web development news.
  • Hacking UI: A weekly email with our favorite articles about design, front-end development, technology, startups, productivity and the occasional inspirational life lesson.
  • Scott Hanselman: Newsletter of Wonderful Things. Includes interesting and useful stuff Scott has found over the last few weeks and other wonderful things.
  • MergeLinks: Weekly email of curated links to articles, resources, freebies and inspiration for web designers and developers.
  • “How to keep up to date on: Front-End Technologies” page lists newsletters, blogs and people to follow.

JavaScript

  • JavaScript Weekly: Weekly e-mail round-up of JavaScript news and articles. Curated by Peter Cooper.
  • Node Weekly: Once–weekly e-mail round-up of Node.js news and articles.
  • A Drip of JavaScript: "One quick JavaScript tip", delivered every other Tuesday and written by Joshua Clanton.
  • SuperHero.js: Collection of the best articles, videos, and presentations on creating, testing, and maintaining a JavaScript code base.
  • State of JS: Results of yearly JavaScript surveys

User experience and design

  • UX Design Weekly: Hand-picked list of the best user experience design links every week. Curated by Kenny Chen and published every Monday.
  • Sidebar.io: Satisfies your appetite for web aesthetics with a list of the 5 best design links of the day. The content is manually curated by a couple of great editors.
  • Userfocus: Monthly newsletter which shares an in-depth article on user experience.

Ops

  • DevOps Weekly: Weekly slice of devops news.
  • Web Operations Weekly: Weekly newsletter on Web operations, infrastructure, performance, and tooling, from the browser down to the metal.
  • Microservice Weekly: A hand-curated weekly newsletter with the best articles on microservices.

Twitter

Following fellow developers and other people and accounts on Twitter is a good way to know what's happening right now. Here's a selection of accounts I (@walokra) follow, in no particular order.

Development

  • @ThePracticalDev: Great posts from the amazing dev.to community, with some opinion and humor mixed in.
  • @CommitStrip: The blog relating the daily life of developers. Official English account.
  • @baeldung: Author of restwithspring.com and learnspringsecurity.com, passionate about REST, Security, TDD and everything in between.
  • @martinfowler: Author and international public speaker on software development, specializing in object-oriented analysis and design, UML, patterns, and agile software development methodologies.

Infosec

  • @troyhunt: Pluralsight author. Microsoft Regional Director and MVP for Developer Security. Online security, technology and “The Cloud”. Creator of @haveibeenpwned.
  • @briankrebs: Independent investigative journalist. Writes about cybercrime. Author of 'Spam Nation', a NYT bestseller. Wrote for The Washington Post '95-'09
  • @mikko: CRO at F-Secure ● TED Speaker ● Revɘrse Engineer ● Supervillain
  • @TinkerSec: Infosec hacker things.
  • @Anakondantti: Mostly software security related, but occasionally other things too. I'm a white hat hacker at team ROT.
  • @SunTzuCyber: If Sun Tzu had written "The Art of Cyber War", these would be his quotes.
  • @lennyzeltser: Advances information security. Grows tech businesses. Fights malware. // VP of Products @MinervaLabs. Author and Instructor @SANSInstitute.

React scene

  • @jevakallio: @FormidableLabs, React/Native engineer, comedian, speaker, writer, improviser, Twitter Developer Expert™. Artisanal small batch free range shitposting.
  • @bebraw: Award winning founder of @survivejs and @jsterlibs. I also organize @ReactFinland.
  • @ReactJSNews: The latest React news and articles.

Design / UX

  • @steveschoger: Designer for @TightenCo and @taylorotwell ❯ Maker of heropatterns, heroicons and zondicons ❯ Design Tips
  • @UX_Grant: Senior Designer @ booking.com. Creating, Learning, Sharing. Maker: MakersMusic.co
  • @jonikorpi: Making multiplayer games using the web platform, as @vuorodesign. Previously web design at @kiskolabs.
  • @lukew: Humanizing technology. Founded: Polar (Google acquired) Bagcheck (Twitter acquired) Wrote: Mobile First, Web Form Design, Site Seeing. Worked: Yahoo, eBay, NCSA.
  • @autiomaa: Helping people, with design & technology. Front-end development, visual design, photography. Learning something new every day.
  • @skrug: Best known as the guy who wrote Don't Make Me Think (now in its 3rd edition!) and Rocket Surgery Made Easy.
  • @jnd1er: Don Norman. Design thinker, company advisor, professor, columnist, author, ... Latest book: Design of Everyday Things, Revised and Expanded.
  • @mpietila: User experience etc. Occasional smart-assery & besserwisserism. I have a history of seeing what they did there. Head of design at @qvik.

Database

Miscellaneous

Java

  • @mreinhold: Chief Architect, Java Platform Group, Oracle.
  • @jodastephen: Java Champion. Developer at OpenGamma. Occasional blogger and speaker. Best known for Joda projects and JSR-310

Technology News

Meetups

You can learn a lot from others, and to broaden your horizons it's beneficial to attend different meetups, listen to how others have done things, and hear their war stories. Also: free food and drinks.

Mostly Helsinki based

Tampere based

Community chats