Generating JWT and JWK for information exchange between services

Securely transmitting information between services and handling authorization can be achieved by using JSON Web Tokens. JWTs are an open, industry-standard RFC 7519 method for representing claims securely between two parties. Here's a short explanation of what they are and what they're used for, and a guide for generating the needed keys and files.

"JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA."

jwt.io

You should read the introduction to JWT to understand its role, and there's also a handy JWT Debugger for testing things. For more detailed info you can read the JWT Handbook.

In short, authorization and information exchange are some scenarios where JSON Web Tokens are useful. They essentially encode a set of identity claims into a payload, provide some header data about how the token is to be signed, then calculate a signature using one of several algorithms and append that signature to the header and claims. JWTs can also be encrypted to provide secrecy between parties. When a server receives a JWT, it can guarantee the data it contains can be trusted because it's signed by the source.

Usually two algorithms are supported for signing JSON Web Tokens: RS256 and HS256. RS256 generates an asymmetric signature, which means a private key must be used to sign the JWT and the corresponding public key must be used to verify the signature.
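As an illustration, here's a minimal sketch of RS256 signing and verification, assuming Python with the PyJWT library (pip install pyjwt[crypto]) and the key pair generated later in this post:

# Sign with the RSA private key and verify with the corresponding public key.
import jwt

with open("private.pem") as f:
    private_key = f.read()
with open("public.pem") as f:
    public_key = f.read()

# Sign the claims with the private key...
token = jwt.encode({"sub": "service-a"}, private_key, algorithm="RS256")

# ...and verify with the public key; raises an exception if the token was tampered with
claims = jwt.decode(token, public_key, algorithms=["RS256"])
print(claims)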

JSON Web Key

JSON Web Key (JWK) provides a mechanism for distributing the public keys that can be used to verify JWTs. The specification represents the cryptographic keys used for signing RS256 tokens and defines two high-level data structures: JSON Web Key (JWK) and JSON Web Key Set (JWKS):

  • JSON Web Key (JWK): A JSON object that represents a cryptographic key. The members of the object represent properties of the key, including its value.
  • JSON Web Key Set (JWKS): A JSON object that represents a set of JWKs. The JSON object MUST have a keys member, which is an array of JWKs. The JWKS is a set of keys containing the public keys that should be used to verify any JWT.

In short, the signing service signs JWT tokens with its private key (in this case in PKCS12 format) and the receiving service checks the signature with the public key, which is in JWK format.
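To illustrate the receiving side, here's a minimal sketch assuming Python with the PyJWT library and a hypothetical JWKS endpoint URL; any JWT library with JWKS support works the same way:

# Verify a received token against the public keys published as a JWKS.
import jwt
from jwt import PyJWKClient

# Hypothetical URL where the receiving service fetches the JWKS from
jwks_client = PyJWKClient("https://auth.example.com/.well-known/jwks.json")

def verify(token: str) -> dict:
    # Select the right key from the key set by the token's "kid" header
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Raises an exception if the signature doesn't match
    return jwt.decode(token, signing_key.key, algorithms=["RS256"])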

Generating keys and certificate for JWT

In this example we are using JWTs for information exchange as they are a good way of securely transmitting information between parties. Because JWTs can be signed, for example using public/private key pairs, you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Generate the keys and certificate for JWT with OpenSSL; in this case self-signed is enough. First, generate a private key:

$ openssl genrsa -out private.pem 4096

Generate the public key from the private key created earlier in case pem-jwk needs it; otherwise it isn't needed:

$ openssl rsa -in private.pem -out public.pem -pubout

If you try to insert the private and public keys into PKCS12 format without a certificate, you get an error:

openssl pkcs12 -export -inkey private.pem -in public.pem -out keys.p12
unable to load certificates

Generate a self-signed certificate with the aforesaid key, valid for 10 years. This certificate isn't used for anything as such, since the counterpart is a JWK with just the public key, no certificate:

$ openssl req -key private.pem -new -x509 -days 3650 -subj "/C=FI/ST=Helsinki/O=Rule of Tech/OU=Information unit/CN=ruleoftech.com" -out cert.pem

Convert the above private key and certificate to PKCS12 format:

$ openssl pkcs12 -export -inkey private.pem -in cert.pem -out keys.pfx -name "my alias"

Check the keystore:

$ keytool -list -keystore keys.pfx
OR
$ keytool -v -list -keystore keys.pfx -storetype PKCS12
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
1, Jan 18, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA-256): 0D:61:30:12:CB:0E:71:C0:F1:A0:77:EB:62:2F:91:9B:55:08:FC:3B:A5:C8:B4:C7:B4:CD:08:E9:2C:FD:2D:8A

If you didn't set an alias for the key when creating the PKCS12 file, you can change it:

keytool -changealias -alias "original alias" -destalias "my awesome alias" -keystore keys.pfx -storetype PKCS12 -storepass "password"

Now we finally get to the part where we generate the JWK. The final result is a JSON file which contains the public key from the earlier created certificate in JWK format, so that the receiving service can accept the signed tokens.

The JWK is in the format of:

{
  "keys": [
    ...,
    {
      "kid": "something",
      "kty": "RSA",
      "use": "sig",
      "n": "…base64 public key values…",
      "e": "…base64 public key values…"
    }
  ]
}

Convert the PEM to JWK format with e.g. pem-jwk or with pem_to_jwks.py. The public key's n and e values are extracted from the private key with the following commands; the jq part picks the public parts and excludes the private parts.

$ npm install -g pem-jwk
$ ssh-keygen -e -m pkcs8 -f private.pem | pem-jwk | jq '{kid: "something", kty: .kty , use: "sig", n: .n , e: .e }'

...
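If pem-jwk isn't an option, the same n and e values can be computed directly; here's a rough sketch assuming Python with the cryptography package:

# Read the RSA private key and base64url-encode the public modulus (n) and exponent (e).
import base64
from cryptography.hazmat.primitives import serialization

def b64url_uint(value):
    data = value.to_bytes((value.bit_length() + 7) // 8, "big")
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

with open("private.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

numbers = key.public_key().public_numbers()
print({"kid": "something", "kty": "RSA", "use": "sig",
       "n": b64url_uint(numbers.n), "e": b64url_uint(numbers.e)})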

To check things, you can do the following.

Extract a private key and certificates from a PKCS12 file using OpenSSL:

$ openssl pkcs12 -in keys.pfx -out keys_out.txt

The private key, certificate, and any chain files will be parsed and dumped into the "keys_out.txt" file. The private key will still be encrypted.

To extract just the private key from the p12 (the key is still encrypted):

$ openssl pkcs12 -in keys.pfx -nocerts -out privatekey.pem

Decrypt the private key:

$ openssl rsa -in privatekey.pem -out privatekey_uenc.pem

Now if you convert the PEM to JWK you should get the same values as before.

More to read: JWTs? JWKs? ‘kid’s? ‘x5t’s? Oh my!

Automate versioning and changelog with release-it on GitLab CI/CD

It’s said that you should automate all the things, and one of those things could be versioning your software. Incrementing the version number in e.g. your package.json is easy, but it’s easier when you bundle it into your continuous integration and continuous deployment process. There are different tools you can use to achieve your needs, and in this article we are using release-it. Other options are for example standard-version and semantic-release.

🚀 Automate versioning and package publishing

Using release-it with CI/CD pipeline

Release It is a generic CLI tool to automate versioning and package publishing related tasks. Its installation requires npm but a package.json is not needed. With it you can i.a. bump the version (in e.g. package.json), create a git commit, tag and push, create a release at GitHub or GitLab, generate a changelog and make a release from any CI/CD environment.

Here is an example setup of how to use release-it on a Node.js project with GitLab CI/CD.

Install and configure release-it

Install release-it with npm init release-it, which asks you questions, or manually with npm install --save-dev release-it.

For example the package.json can look like the following, where the commit message has been customized to have "v" before the version number and npm publish is disabled (although private: true should be enough for that). You could add [skip ci] to "commitMessage" for i.a. GitLab CI/CD to skip running the pipeline on the release commit, or use the Git push option ci.skip.

package.json
{
  "name": “example-frontend",
  "version": "0.1.2",
  "private": true,
  "scripts": {
    ...
    "release": "release-it"
  },
  "dependencies": {
    …
  },
  "devDependencies": {
    ...
    "release-it": "^12.4.3”
},
"release-it": {
    "git": {
      "tagName": "v${version}",
      "requireCleanWorkingDir": false,
      "requireUpstream": false,
      "commitMessage": "Release v%s"
    },
    "npm": {
      "publish": false
    }
  }
}

Now you can run npm run release from the command line:

npm run release
npm run release -- patch --ci

In the latter command, things are run without prompts (--ci) and patch increases the 0.0.x number.

Using release-it with GitLab CI/CD

Now it’s time to combine release-it with GitLab CI/CD. Adding a release-it stage is quite straightforward but you need to do a couple of things. First, in order to push the release commit and tag back to the remote, we need the CI/CD environment to be authenticated with the original host, and we use SSH and a public key for that. You could also use a private token with HTTPS.

  1. Create SSH keys as we are using the Docker executor: ssh-keygen -t ed25519.
  2. Create a new SSH_PRIVATE_KEY variable in "project > repository > CI / CD Settings" and paste the content of the private key you created into the Value field.
  3. In your "project > repository > Repository" add a new deploy key, where Title is something descriptive and Key is the content of the public key you created.
  4. Tap "Write access allowed".

Now you’re ready for git activity in your repository in the CI/CD pipeline. Your .gitlab-ci.yml release stage could look like the following.

image: docker:19.03.1

stages:
  - release

Release:
  stage: release
  image: node:12-alpine
  only:
    - master
  before_script:
    - apk add --update openssh-client git
    # Using Deploy keys and ssh for pushing to git
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # Create the SSH directory and give it the right permissions
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    # Don't verify Host key
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - git config user.email "gitlab-runner@your-domain.com"
    - git config user.name "Gitlab Runner"
  script:
    # See https://gist.github.com/serdroid/7bd7e171681aa17109e3f350abe97817
    # Set remote push URL
    # We need to extract the ssh/git URL as the runner uses a tokenized URL
    # Replace start of the string up to '@'  with git@' and append a ':' before first '/'
    - export CI_PUSH_REPO=$(echo "$CI_REPOSITORY_URL" | sed -e "s|.*@\(.*\)|git@\1|" -e "s|/|:/|" )
    - git remote set-url --push origin "ssh://${CI_PUSH_REPO}"
    # runner runs on a detached HEAD, checkout current branch for editing
    - git reset --hard
    - git clean -fd
    - git checkout $CI_COMMIT_REF_NAME
    - git pull origin $CI_COMMIT_REF_NAME
    # Run release-it to bump version and tag
    - npm ci
    - npm run release -- patch --ci --verbose

We are running release-it here with the patch increment. If you want to skip the CI pipeline on the release-it commit, you can use the ci.skip Git push option in package.json's git.pushArgs, which tells GitLab CI/CD to not create a CI pipeline for the latest push. This way we don't need to add [skip ci] to the commit message.
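As a sketch (check the release-it documentation for your version), the push option goes into release-it's git configuration; release-it passes pushArgs to git push, and GitLab understands the ci.skip push option:

"release-it": {
  "git": {
    "tagName": "v${version}",
    "commitMessage": "Release v%s",
    "pushArgs": ["-o", "ci.skip"]
  }
}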

And now you're ready to run the pipeline with the release stage and enjoy automated patch updates to your application's version number. You also get GitLab Releases if you want.

Setting up the script step was not so clear, but fortunately people on the Internet had done it earlier and Google found a working gist and a comment on a GitLab issue. Interacting with git in GitLab CI/CD could be easier, and there are some feature requests for that, like allowing runners to push via their CI token.

Customizing when pipelines are run

There are some more options for GitLab CI/CD pipelines if you want to run pipelines after you've tagged your version. Here's a snippet for running the "release" stage on commits to the master branch and skipping it if the commit message is for a release.

Release:
  stage: release
  image: node:12-alpine
  only:
    refs:
      - master
    variables:
      # Run only on master and commit message doesn't start with "Release v"
      - $CI_COMMIT_MESSAGE !~ /^Release v.*/
  before_script:
    ...
  script:
    ...

Now we can build a new container for the deployment of our application after it has been tagged and version bumped. We also read the package.json version for tagging the image.

variables:
  PACKAGE_VERSION: $(cat package.json | grep version | head -1 | awk -F= "{ print $2 }" | sed 's/[version:,\",]//g' | tr -d '[[:space:]]')

Build dev:
  before_script:
    - export VERSION=`eval $PACKAGE_VERSION`
  stage: build
  script:
    - >
      docker build
      --pull
      --tag your-docker-image:latest
      --tag your-docker-image:$VERSION.dev
      .
    - docker push your-docker-image:latest
    - docker push your-docker-image:$VERSION.dev
  only:
    refs:
      - master
    variables:
      # Run only on master and commit message starts with "Release v"
      - $CI_COMMIT_MESSAGE =~ /^Release v.*/
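As a side note, if Node.js is available in the build image, reading the version from package.json is simpler; a one-liner sketch:

$ node -p "require('./package.json').version"
0.1.2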

Using release-it on detached HEAD

In the previous example we made a checkout of the current branch for editing, as the runner runs on a detached HEAD. You can use the detached HEAD as shown below, but the downside is that you can't create GitLab Releases from the pipeline, as it fails with "ERROR Response code 422 (Unprocessable Entity)". This is because (I suppose) it doesn't do the git push as it's done manually with git.

Then the .gitlab-ci.yml is the following:

...
script:
    - export CI_PUSH_REPO=$(echo "$CI_REPOSITORY_URL" | sed -e "s|.*@\(.*\)|git@\1|" -e "s|/|:/|" )
    - git remote set-url --push origin "ssh://${CI_PUSH_REPO}"
    # gitlab-runner runs on a detached HEAD, create a temporary local branch for editing
    - git checkout -b ci
    # Run release-it to bump version and tag
    - npm ci
    - DEBUG=release-it:* npm run release -- patch --ci --verbose --no-git.push
    # Push changes to originating branch
    # Always return true so that the build does not fail if there are no changes
    - git push --follow-tags origin ci:${CI_COMMIT_REF_NAME} || true

Reset Hasura migrations and squash files

Using GraphQL for creating APIs is nowadays popular and there are different tools you can use. One of them is Hasura, which is an open-source engine that gives you realtime GraphQL APIs on new or existing Postgres databases. Hasura is quite easy to work with, but if your GraphQL schemas change a lot it creates plenty of migration files. This has some unwanted consequences (for example slowing down hasura migrate apply or even blocking it). Here are some notes on how to reset the state and create new migrations from the state that is on the server.

Note: From Hasura 1.0.0 onwards squashing is easier with the hasura migrate squash command, although it's still in preview. Before Hasura 1.0.0 you have to squash migrations manually, and this blog post explains how. The results are the same: multiple migrations squashed into a single one.
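With the newer CLI the preview command looks roughly like the following sketch, where the --from value is the version of the first migration to squash (an illustrative number; check hasura migrate squash --help for the exact flags):

$ hasura migrate squash --name "squashed_init" --from 1550925483858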

Hasura documentation provides a good guide on how to squash migrations, but in practice there are a couple of other things you may need to address. So let’s combine the steps Hasura gives with some extra steps.

Reset Hasura migrations

First make a backup branch:

  1. $ git checkout master
  2. Create a backup branch:
    $ git checkout -b backup/migrations-before-resetting-20XX-XX-XX
  3. Update the backup branch to origin:
    $ git push origin backup/migrations-before-resetting-20XX-XX-XX

We are assuming you have a local Hasura running on Docker with something like the following docker-compose.yml:

version: "3.6"
services:
  postgres:
    image: postgres:11-alpine
    restart: always
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    command: postgres -c max_locks_per_transaction=2000
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.6
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      HASURA_GRAPHQL_ADMIN_SECRET: changeme
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
volumes:
  db_data:

Create local instance of Hasura with up to date migrations:

  1. $ docker-compose down -v
  2. $ docker-compose up
  3. $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Reset migrations to master:

  1. git checkout master
  2. git checkout -b reset-hasura-migrations
  3. rm -rf migrations/*

Reset the migration history on the server. On the Hasura SQL console, http://localhost:8080/console:

TRUNCATE hdb_catalog.schema_migrations;

Set up fresh migrations by taking the schema and metadata from the server. By default init takes only the public schema if others are not mentioned with the --schema "your schema" parameter. Note down the version for later use.

  1. Create migration file:
    $ hasura migrate create "init" --from-server
  2. Mark the migration as applied on this server:
    $ hasura migrate apply --version "<version>" --skip-execution
  3. Verify status of migrations, should show only one migration with Present status:
    $ hasura migrate status
  4. You have brand new migrations now!

Resetting migrations on other environments

  1. Checkout the reset branch on local machine:
    $ git checkout -b reset-hasura-migrations
  2. Reset the migration history on remote server. On Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply migration status to remote server:
    $ hasura migrate apply --version "<version>" --skip-execution

Local environment Hasura status

For other developers, please refer to these instructions in order to get the backend into the same state.

Option 1: Keep old data

  1. Checkout the backup branch on local machine:
    $ git checkout backup/migrations-before-resetting-20XX-XX-XX
  2. Reset the migration history on local server. On Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply migration status to local server:
    $ hasura migrate apply --version "<version>" --skip-execution

Option 2: Remove all and start from beginning

  1. Clean up the old docker volumes:
    $ docker-compose down -v
  2. Start up services:
    $ docker-compose up
  3. Checkout master:
    $ git checkout master
  4. Apply migrations:
    $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Possible extra steps

Now your Hasura migrations and database tables are in one migration init file, but sometimes things don’t work out when applying it to an empty database. We are using the Hasura audit-trigger and had to reorder the SQL clauses generated by migrate init and add some missing parts.

  1. Move schema creations after audit clauses
  2. Move audit.audit_table(target_table regclass) to last audit clause and copy it from audit.sql
  3. Add the pg_trgm extension as done previously (fixes "operator does not exist: text <% text" in public.search_customers_by_name); see the SQL sketch after this list
  4. Drop session constraints / index before creating new
  5. Create session table only if not exists
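For items 3 and 5 the SQL looks roughly like the following sketch; the session table columns are illustrative, not our actual schema:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE IF NOT EXISTS public.session (
  id uuid PRIMARY KEY,
  created_at timestamptz NOT NULL DEFAULT now()
);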

Ignoring files and folders in Subversion with propset

Before committing code to the Subversion repository we always set the svn:ignore property on the directory to prevent some files and directories from being checked in. You would usually want to exclude the IDE project files and the target/ directory.

It's useful to put all the ignored files and directories into a file: .svnignore. Your .svnignore could look like:

*.iml
target/*

Put the .svnignore file in the project folder and commit it to your repository so the ignored files are shared between committers.

Now reference the file with the -F option:
$ svn propset svn:ignore -F .svnignore .

Of course I hope everyone has by now moved to git and uses .gitignore for this same purpose.

Best Practices of forking git repository and continuing development

Sometimes there's a need to fork a git repository and continue development with your own additions. It's recommended to make a pull request to upstream so that everyone can benefit from your changes, but in some situations it's not possible or feasible. When continuing development in a forked repo there are some questions which come to mind. Here are some questions and answers I found useful when we forked a repository on GitHub and continued to develop it with our specific changes.

Repository name: new or fork?

If you're releasing your own package (to e.g. npm or mvn) from the forked repository with your additions then it's logical to also rename the repository to that package name.

If it's an npm package and you're using scoped packages, then you could also keep the original repository name.

Keeping master and continuing developing on branch?

Using master is the sane thing to do. You can always sync your fork with an upstream repository. See: syncing a fork

Generally you want to keep your local master branch as a close mirror of the upstream master and execute any work in feature branches (that might become pull requests later).

How should you do versioning?

Suppose that the original repository (origin) is still in active development and makes new releases. How should you do versioning in your forked repository, as you probably want to bring the changes done in the origin to your fork, and still maintain semantic versioning?

In short, semver doesn't support prepending or appending strings to the version. Adding your own tag to the version number from the origin you are following breaks the versioning. So you can't use something like "1.0.0@your-org.0.1" or "1.0.0-your-org.1". This has been discussed i.a. in semver #287. The suggestion was to use a build meta tag to encode the other version, as shown in semver spec item 10. But the downside is that "Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence."

If you want to keep a relation to the original package version and follow semver, then your options are limited: the only one is to use the build meta tag, e.g. "1.0.0+your-org.1".

It seems that when following semantic versioning, your only option is to diverge from the origin version and continue as you go.

If you don't need to or want to follow semver you can track upstream version and mark your changes using similar markings as semver pre-releases: e.g. "1.0.0-your-org.1".

npm package: scoped or unscoped?

Using scoped packages is a good way to signal official packages for organizations. An example of using scoped packages can be seen in Storybook.

It's more of a preference and naming conventions of your packages. If you're using something like your-org-awesome-times-ahead-package and your-org-patch-the-world-package then using scoped packages seems redundant.

Who should be the author?

At least add yourself to contributors in package.json.

Forking only for patching npm library?

Don't fork; use patch-package, which lets app authors instantly make and keep fixes to npm dependencies. Patches created by patch-package are automatically and gracefully applied when you use npm (>= 5) or yarn. Now you don't need to wait around for pull requests to be merged and published. No more forking repos just to fix that one tiny thing preventing your app from working.
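The workflow is roughly the following sketch, following patch-package's README; some-package is a placeholder name:

# fix the bug directly in the installed dependency
$ vim node_modules/some-package/index.js
# create patches/some-package+1.2.3.patch from your edits
$ npx patch-package some-package
# apply the patches automatically on every install, in package.json:
"scripts": { "postinstall": "patch-package" }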

This post was originally published on the Gofore Group blog on 11.2.2019.

Learning and Staying Current in Software Development

Software development is one of the professions where you have to keep your knowledge up to date and follow what happens in the field. Staying current and expanding your horizons can be achieved in different ways, and one good way I have used is to follow different news sources and newsletters, listen to podcasts and attend meetups. Here is my opinionated selection of resources to learn and share ideas: newsletters, meetups and other things for software developers. Meetups and some other items are Finland-related.

News

There are some good sites to follow what happens in technology. They provide community powered links and discussions.

Podcasts

Podcasts provide a nice resource for gathering experiences and new information on how things can be done and what's happening and coming up in software development. I commute daily for about an hour and time flies when you find good episodes to listen to. Here's my selection of podcasts relating to software development.

General

  • Software Engineering Daily: "The world through the lens of software" (iTunes)
  • Software Engineering Radio: "Targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast". (feed)
  • ShopTalk: "An internet radio show about the internet starring Dave Rupert and Chris Coyier." (iTunes)
  • Full Stack Radio: "Every episode, Adam Wathan is joined by a guest to talk about everything from product design and user experience to unit testing and system administration." (feed)

Front-end

  • Syntax: "A Tasty Treats Podcast for Web Developers." (iTunes)
  • The Changelog: "Conversations with the hackers, leaders, and innovators of software development."
  • React Podcast: "Conversations about React with your favorite developers."
  • Brainfork: "A podcast about mental health & tech"

In Finnish

  • Webbidevaus: "Talk radio about web development, in Finnish! Hosted by Antti Mattila and Riku Rouvila."
  • Turvakäräjät: "Current cyber topics, explained for everyone. At the sessions: Antti Kurittu, Laura Kankaala and Juho Jauhiainen."
  • Herrasmieshakkerit: "Covers topics and phenomena related to information security and the inevitable cyber doom. Guided by Mikko Hyppönen and Tomi Tuominen."
  • Koodia pinnan alla: "A Finnish-language podcast about the software technology magic happening under the surface. Hosted by Markus Hjort and Yrjö Kari-Koskinen."
  • ATK-hetki: "Vesa Vänskä and Antti Akonniemi discuss technology, business and self-improvement."

Newsletters

Normal information overload is easily achieved, so it’s beneficial to use for example curated newsletters for the subjects which intersect the stack you’re using and the topics you’re interested in.

The power of a newsletter lies in the fact that it can deliver condensed and digestible content, which is harder to achieve with other good news sources like feed subscriptions and Twitter. A well curated newsletter for a targeted audience is a pleasure to read, and even if you forget to check your newsletter folder, you can always get back to it later.

General

Cloud

Mobile development

  • iOS Dev Weekly: Hand picked round up of the best iOS development links published every Friday
  • This Week In Swift: List of the best Swift resources of the week.
  • iOS Dev nuggets: Short iOS app development nugget every Friday/Saturday. Short and usually something you can read in a few minutes and improve your skills at iOS app development.

Java

Database

  • DB Weekly: A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.

HTML and CSS

  • HTML5Weekly: Weekly HTML5 and Web Platform technology roundup. Curated by Peter Cooper.
  • CSS Weekly: Roundup of css articles, tutorials, experiments and tools. Curated by Zoran Jambor.

Web development

  • Status code: "Keeping developers informed." Weekly email newsletters on a range of programming niches (links to JavaScript Weekly, DevOps Weekly etc.)
  • Web Development Reading List: Weekly roundup of web development–related sources, selected by Anselm Hannemann.
  • Versioning: "Daily knowledge devs and designers need to get ahead of the game." SitePoint’s daily newsletter, which features the latest web development news.
  • Hacking UI: A weekly email with our favorite articles about design, front-end development, technology, startups, productivity and the occasional inspirational life lesson.
  • Scott Hanselman: Newsletter of Wonderful Things. Includes interesting and useful stuff Scott has found over the last few weeks and other wonderful things.
  • MergeLinks: Weekly email of curated links to articles, resources, freebies and inspiration for web designers and developers.
  • The “How to keep up to date on Front-End Technologies” page lists newsletters, blogs and people to follow.

JavaScript

  • JavaScript Weekly: Weekly e-mail round-up of JavaScript news and articles. Curated by Peter Cooper.
  • Node Weekly: Once–weekly e-mail round-up of Node.js news and articles.
  • A Drip of JavaScript: “One quick JavaScript tip”, delivered every other Tuesday and written by Joshua Clanton.
  • SuperHero.js: Collection of the best articles, videos, and presentations on creating, testing, and maintaining a JavaScript code base.
  • State of JS: Results of yearly JavaScript surveys

User experience and design

  • UX Design Weekly: Hand picked list of the best user experience design links every week. Curated by Kenny Chen & published every Monday.
  • Sidebar.io: Satisfy your web aesthetics with a list of the 5 best design links of the day. The content is manually curated by a couple of great editors.
  • Userfocus: Monthly newsletter which shares an in-depth article on user experience.

Ops

  • DevOps Weekly: Weekly slice of devops news.
  • Web Operations Weekly: Weekly newsletter on Web operations, infrastructure, performance, and tooling, from the browser down to the metal.
  • Microservice Weekly: A hand-curated weekly newsletter with the best articles on microservices.

Twitter

Following fellow developers and other people and accounts on Twitter is a good way to know what's happening right now. Here's a selection of accounts I (@walokra) follow, in no particular order.

Development

  • @ThePracticalDev: Great posts from the amazing dev.to community, with some opinion and humor mixed in.
  • @CommitStrip: The blog relating the daily life of developers. Official English account.
  • @baeldung: Author of restwithspring.com and learnspringsecurity.com, passionate about REST, Security, TDD and everything in between.
  • @martinfowler: Author and international public speaker on software development, specializing in object-oriented analysis and design, UML, patterns, and agile software development methodologies.

Infosec

  • @troyhunt: Pluralsight author. Microsoft Regional Director and MVP for Developer Security. Online security, technology and “The Cloud”. Creator of @haveibeenpwned.
  • @briankrebs: Independent investigative journalist. Writes about cybercrime. Author of 'Spam Nation', a NYT bestseller. Wrote for The Washington Post '95-'09.
  • @mikko: CRO at F-Secure ● TED Speaker ● Revɘrse Engineer ● Supervillain
  • @TinkerSec: Infosec hacker things.
  • @Anakondantti: Mostly software security related, but occasionally other things too. I'm a white hat hacker at team ROT.
  • @SunTzuCyber: If Sun Tzu had written "The Art of Cyber War", these would be his quotes.
  • @lennyzeltser: Advances information security. Grows tech businesses. Fights malware. // VP of Products @MinervaLabs. Author and Instructor @SANSInstitute.

React scene

  • @jevakallio: @FormidableLabs, React/Native engineer, comedian, speaker, writer, improviser, Twitter Developer Expert™. Artisanal small batch free range shitposting.
  • @bebraw: Award winning founder of @survivejs and @jsterlibs. I also organize @ReactFinland.
  • @ReactJSNews: The latest React news and articles.

Design / UX

  • @steveschoger: Designer for @TightenCo and @taylorotwell ❯ Maker of heropatterns, heroicons and zondicons ❯ Design Tips
  • @UX_Grant: Senior Designer @ booking.com. Creating, Learning, Sharing. Maker: MakersMusic.co
  • @jonikorpi: Making multiplayer games using the web platform, as @vuorodesign. Previously web design at @kiskolabs.
  • @lukew: Humanizing technology. Founded: Polar (Google acquired) Bagcheck (Twitter acquired) Wrote: Mobile First, Web Form Design, Site Seeing. Worked: Yahoo, eBay, NCSA.
  • @autiomaa: Helping people, with design & technology. Front-end development, visual design, photography. Learning something new every day.
  • @skrug: Best known as the guy who wrote Don't Make Me Think (now in its 3rd edition!) and Rocket Surgery Made Easy.
  • @jnd1er: Don Norman. Design thinker, company advisor, professor, columnist, author, ... Latest book: Design of Everyday Things, Revised and Expanded.
  • @mpietila: User experience etc. Occasional smart-assery & besserwisserism. I have a history of seeing what they did there. Head of design at @qvik.

Database

Miscellaneous

Java

  • @mreinhold: Chief Architect, Java Platform Group, Oracle.
  • @jodastephen: Java Champion. Developer at OpenGamma. Occasional blogger and speaker. Best known for Joda projects and JSR-310

Technology News

Meetups

You can learn much from others, and to broaden your horizons it's beneficial to attend different meetups, listen to how others have done things and hear war stories. Also free food and drinks.

Mostly Helsinki based

Tampere based

Community chats

Two days of React Finland 2018: Day two with React and React Native

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what's hot in the React world. The conference started with workshops, and after that there were two days of talks on React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. This is the second part of my recap of the talks and my notes, which I posted to Twitter. Check out also the first part of my notes from the first day's talks.

React Finland 2018, Day 2

How React changed everything — Ken Wheeler

The second day started with a keynote by Ken Wheeler. He examined how React changed the front-end landscape as we know it, starting with a nice time travel to the 90s with i.a. Flash, JavaScript and AngularJS. Most importantly, the talk took a look at the core idea of React and why it transcends language or rendering target, and posited what that means going forward. And lastly we heard about async React: suspense and time slicing.

"Best part of React is the community"

How React changed everything

Get started with Reason — Nik Graf

The keynote also touched on Reason ML, and Nik Graf went into the details, kicking off with the basics and going on to how to leverage features like variant types and pattern matching to make impossible states impossible.

Get started with Reason ML

Making Unreasonable States Impossible — Patrick Stapfer

Based on "Get started with Reason" Patrick Stapfer's talk went deeper into the world of variant types and pattern matching and put them into a practical context. The talk was nice learning by doing TicTacToe live coding. It showed how Reason ML helps you design solid APIs, which are impossible to misuse by consumers. We also got more insights into practical ReasonReact code. Presentation is available on the Internet.

Conclusion about ReasonReact:

  • More rigid design
  • More KISS (keep it simple, stupid) than DRY (don’t repeat yourself)
  • Forces edge-cases to be handled

Learning Reason by doing TicTacToe

Reactive State Machines and Statecharts — David Khourshid

David Khourshid's talk about state machines and statecharts was interesting. A functional and reactive approach to state machines can make it much easier to understand, visualize, implement and automatically create tests for complex user interfaces and flows. Model the code and automatically generate exhaustive tests for every possible permutation of it. Things mentioned: React Automata, xstate. Slides are available on the Internet.

"Model once, implement anywhere" - David Khourshid

The talk was surprisingly interesting, especially for our use cases, as anything that makes testing better is good. This might be something to look into.

ReactVR — Shay Keinan

After the theory-heavy presentations we got into more visual stuff: React VR. Shay Keinan presented the core concepts behind VR, showed different demonstrations, and explained how to get started with React VR and how to add new features from the Three.js library. React VR: Three.js + React Native = 360 and VR content. On the VR device side it was mentioned that Oculus Go and HTC Vive Focus are a big step for Virtual Reality.

"Virtual Reality's possibilities are endless. Compares to lucid dreaming." - Shay Keinan

WebVR enables web developers to create frictionless, immersive experiences, and we got to see a Solar demo and a Three VR demo which were lit.

React VR

World Class experience with React Native — Michał Chudziak

I've briefly experimented with React Native, so it was nice to listen to Michał Chudziak's talk on how to set up a friendly React Native development environment with the best DX, spot bugs at an early stage and deliver continuous builds to QA. Again, Redux was dropped in favour of apollo-link-state.

Work close to your team - Napoleon Hill

What makes a good Developer eXperience?

  • stability
  • function
  • clarity
  • easiness

GraphQL was mentioned to be the holy grail of frontend development and perfect with React Native. Tools for a better developer experience: Haul, CircleCI, Fastlane, ESLint, Flow, Jest, Danger, Detox. Other tips were i.a. to use the native IDEs (Xcode, Android Studio) as it helps debugging. Xcode Instruments helps debug performance (check iTunes for the video) and there's also the Android Profiler.

World Class experience with React Native

React Finland App - Lessons learned — Toni Ristola

Every conference has to have an app, and React Finland of course made a React Native app. Toni Ristola gave a lightning talk about lessons learned. Technologies used with React Native were Ignite, GraphQL and Apollo Client. The app's source code is available on GitHub.

Lessons learned:

  • Have a designer in the team
  • Reserve enough time — doing and testing a good app takes time
  • Test with enough devices — publish alpha early

React Finland App - Lessons learned

React Native Ignite — Gant Laborde

80% of mobile app development is the same old song, which can be cut short with the Ignite CLI. Using Ignite, you can jump into React Native development with a popular combination of technologies, or brew your own. Gant Laborde talked about the new Bowser version which makes things even better with Storybook, TypeScript, Solidarity, mobx-state-tree and lint-staged. Slides can be found on the Internet.

Ignite

How to use React, webpack and other buzzwords if there is no need — Varya Stepanova

Varya Stepanova's lightning talk suggested starting a side project other than a ToDo app to study new development approaches, and showed what that can be in React. The example was how to generate a multilingual static website using Metalsmith, React and other modern technologies and tools which she uses to build her personal blog. Slides can be found on the Internet.

Doing meaningful side projects is a great way to study new things, and I've used it for i.a. learning Swift with the Highkara newsreader, doing a couple of apps for Sailfish OS and playing with GraphQL and microservices while developing an app with a largish vehicle dataset.

After party

Summary

Two days full of talks on React, React Native, React VR and all the things that go with developing web applications with them was a great experience. The days were packed with great talks and new information, and everything went smoothly. The conference was nicely organized, the food was good and participants got soft hoodies to go with the Allas Sea Pool ticket. The talks were all great, but especially "World Class experience with React Native" and "React Native Ignite" gave new inspiration to write some app. Also "ReactVR" seemed interesting, although I think Augmented Reality will be a bigger thing than Virtual Reality. It was nice to hear from "The New Best Practices" talk that there really are no new best practices, as the old ones still work. Just use them!

Something to try and even take into production will be Immer, styled-components and Next.js. One thing which is easy to implement is to start using lint-staged, although we are linting all the things already.

One of the conference organizers and speakers, Juho Vepsäläinen, wrote Lessons Learned from the conference, and many of the points he mentions are spot on. The food was nice but "there wasn't anything substantial for the afternoon break". There wasn't anything to eat after lunch, but luckily I had my own snacks. Vepsäläinen also mentions that "there was sometimes too much time between the presentations", but I think the longer breaks between some presentations were nice for having a quick stroll outside and some fresh air. The venue was quite warm and the air wasn't so good in the afternoon.

The afterparty at Sea Life Helsinki was an interesting choice, and it worked nicely although there weren't so many people there. The aquarium was a fishy experience and provided some other content than refreshments. Too bad I didn't have time to go and check out the Allas Sea Pool, for which we got a free ticket. Maybe next time.

Thanks to the conference crew for such a good event, and of course to my fellow Goforeans who attended it and had a great time!

Two days of React Finland 2018: Day one topics of React

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what's hot in the React world. The conference started with workshops, and after that there were two days of talks on React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. Here's the first part of my notes from the talks, which I posted to Twitter. Read also the second part with more React Native.

React Finland 2018: Day 1

React Finland combined the Finnish React community with international flavor, from Jani Eväkallio to Ken Wheeler and other leading talents of the community. The event was the first of its kind in Finland and consisted of a workshop day and two days of talks around the topic. It was nice that the event was single track, so you didn't need to choose between interesting talks.

At work I've been developing with React for a couple of years and tried my hand at React Native, so the topics were familiar. The conference provided crafty new knowledge to learn from and maybe even put into production. Overall the conference was a great experience and everything went smoothly. Nice work from the React Finland conference team! And of course thanks to Gofore, which sponsored the conference and got me a ticket.

I tweeted my notes from almost every presentation and here's a recap of the talks. I heard that the videos from the conference will be available shortly.

The New Best Practices — Jani Eväkallio

The first day's keynote was by Jani Eväkallio, who talked about "The New Best Practices". As the talk description put it: "When React was first introduced, it was ridiculed for going against established web development best practices as we knew them. Five years later, React is the gold standard for how we create user interfaces. Along the way, we’ve discovered a new set of tools, design patterns and programming techniques."

The new best practices were:

  • Build big things from small things
  • Write code for humans first: Flow, TypeScript, Storybook
  • Stay close to the language:
    • helps i.a. linters
  • Always prefer simplicity
  • Don’t break things:
    • Facebook makes React API changes easy to upgrade: deprecation well in advance, migration guides and documentation. It's a flow, not versions. Use codemods.
  • Keep an open mind

You ask "what new best practices"? Yep, that's the thing. We don't need new best practices as the same concepts like Model-View-Controller and separation of concerns are still valid. We should use best practices which have been proved good before as they also work nicely with React philosophy. Eväkallio also talked why React will be around for a long time. It's because components and interoperable components are an innovation primitive.

The New Best Practices

Declarative state and side effects — Christian Alfoni

After the keynote it was time to get more practical, and Christian Alfoni talked about how we can get help writing our business logic in a declarative manner and what benefits that gives us. He talked about lessons learned refactoring Codesandbox.io from Redux to Cerebral, and about Cerebral, which provides declarative state and side effects management for popular JavaScript frameworks. The talk's slides are available on the Internet.

Alfoni also pointed to "Turning the database inside out with Apache Samza", and noted that Cerebral had time travel before Dan Abramov presented Live React in his talk "Hot Reloading with Time Travel" at react-europe 2015.

Immer: Immutability made easy — Michel Weststrate

Immutable data structures are a good thing, and Michel Weststrate showed Immer, a tiny package that allows you to work with immutable data structures with unprecedented ease. Managing the state of a React app is a huge deal with Redux, and any help is welcome. "Immer doesn't require learning new data structures or update APIs, but instead creates a temporary shadow tree which can be modified using the standard JavaScript APIs. The shadow tree will be used to generate your next immutable state tree."

The talk showed how to write your reducers in a much more readable way, with half the code and without requiring additional large libraries. The talk slides are available on the Internet.

Get Rich Quick With React Context — Patrick Hund

"Get Rich Quick With React Context" lightning talk by Patrick Hund didn't tell how good job opportunities you have when doing React But how with React 16.3 the context API has been completely revamped and demonstrated a good use case: Putting ad placements on your web page to get rich quick! Other use cases are localizations. Check out the slides which will tell you how easy it is to use context now and how to migrate your old context code to the new API.

There's always a better way to handle localization — Eemeli Aro

"There's always a better way to handle localization" lightning talk by Eemeli Aro told about how localization is a ridiculously difficult problem in the general case, but in the specific you can get away with really simple solutions, especially if you understand the compromises you're making.

I must have been dozing, as all I got was that there are also other options for storing localizations than JSON, like YAML and the JavaScript property format, especially when dealing with non-developers like translators. The talk was quite general and on an abstract level; the mentioned solutions for localization were react-intl, react-i18next and react-message-context.

Styled Components, SSR, and Theming — Kasia Jastrzębska

Web applications need to be styled, and Kasia Jastrzębska talked about CSS-in-JS with styled-components, going through the new API, performance improvements and server-side rendering with Next.js. She also showed the theming manager available in v2 of styled-components. Talk slides are available on the Internet.

The takeaway from this talk was that CSS in a React app can be written as you always have, or by using CSS-in-JS solutions. There are several benefits to using styled-components, but I'm still thinking about how styles get scattered all over components.

Universal React Apps Using Next.js — Sia Karamalegos

53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
- DoubleClick by Google, 2016

Every user’s hardware is different, and processing speed can hinder the user experience of client-side rendered React applications, so Sia Karamalegos talked about how server-side rendering and code-splitting can drastically improve user experience by minimizing the work the client has to do. Performance and how you ship your code matter. The talk showed how to easily build universal React apps using the Next.js framework and walked through the concepts and code examples. Talk slides are available on the Internet.

There are lots of old (mobile) devices which especially benefit from server-side rendering. Next.js is a minimalistic framework for universal, server-rendered (or statically pre-rendered) React applications which enables faster page loads. Pages are server-rendered by default for the initial load, you can enable prefetching of future routes, and there's automatic code splitting. It's also customizable, so you can use your own Babel and webpack configurations and customize the server API with e.g. Express. And if you don't want to use a server, Next.js can also build static web apps that you can host on GitHub Pages or AWS S3.

Universal React apps using Next.js

State Management in React Apps with Apollo Client — Sara Vieira

Apollo Client was one of the most mentioned frameworks in the conference along with Reason ML, and Sara Vieira gave an energetic talk on how to use it for state management in React apps. If you haven't come across Apollo Client, it's a caching GraphQL client which helps you manage data coming from the server. Vieira showed how to manage local state with apollo-link-state.

The talk was fast paced and I somewhat missed the why part, but at least it's easy to set up: yarn add apollo-boost graphql react-apollo. I'll have to see the slides and demo later. Maybe the talk can be wrapped up as "GQL all the things" and "I don't like Redux" :D

State Management with Apollo

Detox: A year in. Building it, Testing with it — Rotem Mizrachi-Meidan

The talk about the Detox testing framework for React Native by Rotem Mizrachi-Meidan was the other talk I dozed through. Mizrachi-Meidan talked about what developing and using Detox in production has taught them, how Detox works and what makes it deterministic. The talk showed how mobile apps could be tested. There's a video of an earlier talk on the Internet.

Detox

Make linting great again! — Andrey Okonetchnikov

One thing in software development which always gets developers arguing over stupid things is code formatting and linting. Andrey Okonetchnikov talked about how "with a wrong workflow linting can be really a pain and will slow you and your team down but with a proper setup it can save you hours of manual work reformatting the code and reducing the code-review overhead."

The talk was a quick introduction to how lint-staged, a Node.js library, can improve developer experience. A small tool coupled with tools that analyze and improve the code, like ESLint, Stylelint, Prettier and Jest, can make a big difference.

Missed talks

There were also two talks I missed: "Understanding the differences is accepting" by Sven Sauleau and "Why I YAML" by Eemeli Aro. Sauleau showed "interesting" twists of the JavaScript language.

Read also the second part with more of React Native.

Extracting JSON value from command line with jq and Python

When developing modern web applications you often find yourself checking REST API responses and parsing JSON values. You can do it with a combination of Unix tools like sed, cut and awk, but if you're allowed to install extra tools or use Python then things get easier. This post shows you a couple of options for extracting JSON values on the command line.

There are a number of tools specifically designed for manipulating JSON from the command line, and they are a lot easier and more reliable than doing it with awk. One of those tools is jq, as shown on Stack Overflow. You can install it on macOS from Homebrew: brew install jq.

$ curl -s 'https://api.github.com/users/walokra' | jq -r '.name'

If you're limited to tools that are likely installed on your system, such as Python, using the json module gives you the benefit of a proper JSON parser and avoids extra dependencies.

Python:

$ curl -s 'https://api.github.com/users/walokra' | \
    python -c "import sys, json; print(json.load(sys.stdin)['name'])"

Stack Overflow answers to the question "Parsing JSON with Unix tools" show you other options with standard tools like sed, cut and awk, and more exotic options with Perl, Node.js and PHP.
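The same tools also handle nested structures; an illustrative example with jq:

$ echo '{"user": {"name": "walokra", "repos": [{"name": "dotfiles"}]}}' | \
    jq -r '.user.repos[0].name'
dotfiles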

Git pre-commit and pre-receive hooks: validating YAML

Software development has many steps which you can automate, and one useful thing to automate is adding Git hooks to validate your commits to version control: firing off custom client-side and server-side scripts when certain important actions occur. Validating committed files' contents is important for syntax validity, and even more so when providing Spring Cloud Config configurations in YAML for microservices, as otherwise things fail.

Validating YAML can be done by using yamllint and hooking it to pre-commit or pre-receive. It does not only check for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces and indentation. Here's a short overview to get started with yamllint on Git hooks.

Quickstart for yamllint

Installing yamllint

On Fedora / CentOS:
$ sudo dnf install yamllint
 
using pip, the Python package manager:
$ sudo pip install yamllint
 
or on macOS:
$ sudo -H python -m pip install yamllint

You can also install yamllint from sources when e.g. network connectivity is limited. The linter depends on pathspec >=0.5.3 and pyyaml >= 3.12.

Custom config

Yamllint is quite strict with validation and you might want to make it a bit more relaxed with custom configuration. For example, I need to allow long lines. You can also disable checks for a specific line with a comment, as shown after the following config example.

$ cat yamllint-config.yml
 
extends: default
 
rules:
  line-length: disable
  comments:
    require-starting-space: false
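Disabling a check for a specific line, as mentioned above, is done with an inline comment (following yamllint's documentation):

long_url: http://example.com/a/very/long/url/path  # yamllint disable-line rule:line-length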

Usage

$ yamllint file.yml other-file.yaml

Usage with custom config:

$ yamllint -c yamllint-config.yml .

Or with custom config without config file:

$ yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" file.yaml

Or a more specific case, like running yamllint in a Jenkins job's workspace and validating files with a specific suffix:

$ find . -type f -iname '*.j2' -exec yamllint -s -c yamllint-config.yaml {} \;

Pre-commit hook and yamllint

A better way to use yamllint is to integrate it with e.g. git and a pre-commit or pre-receive hook. Adding yamllint to a pre-commit hook is easy with pre-commit, which is a framework for managing and maintaining multi-language pre-commit hooks.

Installing pre-commit:

Using pip:
$ pip install pre-commit
 
Or on macOS:
$ brew install pre-commit

To enable the yamllint pre-commit plugin you just add a file called .pre-commit-config.yaml to the root of your project and add the following snippet to it:

$ cat .pre-commit-config.yaml
---
- repo: https://github.com/adrienverge/yamllint.git
  sha: v1.10.0
  hooks:
    - id: yamllint

With custom config and strict mode:

$ cat .pre-commit-config.yaml
---
repos:
 - repo: https://github.com/adrienverge/yamllint.git
   sha: v1.10.0
   hooks:
     - id: yamllint
       args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']

You can also use repository-local hooks when e.g. it makes sense to distribute the hook scripts with the repository. Install yamllint locally and configure it in your project's root directory's .pre-commit-config.yaml as a repository-local hook. As you can see, I'm using a custom config for yamllint.

$ cat .pre-commit-config.yaml
---
- repo: local
  hooks: 
  - id: yamllint
    name: yamllint
    entry: yamllint -c yamllint-config.yml .
    language: python
    types: [file, yaml]

Note: If you're linting files with a suffix other than yaml/yml, like Ansible template files with the .j2 suffix, then use types: [file]

Pre-receive hook and yamllint

Using pre-commit hooks to process commits is easy, but often doing checks on the server side with pre-receive hooks is better. Pre-receive hooks are useful for satisfying business rules, enforcing regulatory compliance and preventing certain common mistakes. Common use cases are to require commit messages to follow a specific pattern or format, lock a branch or repository by rejecting all pushes, prevent sensitive data from being added to the repository by blocking keywords, patterns or filetypes, and prevent a PR author from merging their own changes.

One example of a pre-receive hook is to run a linter like yamllint to ensure that a business critical file is valid. In practice the hook works similarly to a pre-commit hook, but the files you check in to the repository are not kept there "just like that". Some of them are stored as deltas to others, or their contents are compressed. There is no place where these files are guaranteed to exist in their "ready-to-consume" state. So you must jump through some extra hoops to get your files available for opening and running checks.

There are different approaches to making files available for the pre-receive hook's script, as described on Stack Overflow. One way is to check out the files in a temporary location, or if you're on Linux you can just point to /dev/stdin as the input file and put the files through a pipe. Both ways have the same principle: check the modified files between the new and the old revision and, if files are present in the new revision, run the validation script with the custom config.

Using /dev/stdin trick in Linux:

#!/usr/bin/env bash
 
set -e
 
ENV_PYTHON='/usr/bin/python'

# Check that the required binaries are available
if ! command -v "${ENV_PYTHON}" >/dev/null 2>&1 || ! command -v yamllint >/dev/null 2>&1; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# Pre-receive hooks receive "<oldrev> <newrev> <refname>" lines on stdin
while read oldrev newrev refname; do
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    git diff --name-only $oldrev $newrev | while read file; do
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | egrep '\.yml$' | awk '{ print $3 }'`
        # If it's not present, then continue to the next iteration
        if [ -z ${object} ]; 
        then 
            continue; 
        fi
 
        # Get file in commit and point /dev/stdin as input file 
        # and put the files through pipe for syntax validation
        echo $file
        git show $newrev:$file | /usr/bin/yamllint -d "{extends: relaxed, rules: {line-length: disable, comments: disable, trailing-spaces: disable, empty-lines: disable}}" /dev/stdin || exit 1
    done
done

Alternative way: copy changed files to temporary location

#!/usr/bin/env bash
 
set -e
 
EXIT_CODE=0
ENV_PYTHON='/usr/bin/python'
COMMAND='/usr/bin/yamllint'
TEMPDIR=`mktemp -d`
 
# Check that the required binaries are available
if ! command -v "${ENV_PYTHON}" >/dev/null 2>&1 || ! command -v "${COMMAND}" >/dev/null 2>&1; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# Pre-receive hooks receive "<oldrev> <newrev> <refname>" lines on stdin
while read oldrev newrev refname; do
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    files=`git diff --name-only ${oldrev} ${newrev}`
 
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Iterate over each of these files
    for file in ${files}; do
 
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | awk '{ print $3 }'`
 
        # If it's not present, then continue to the next iteration
        if [ -z ${object} ]; 
        then 
            continue; 
        fi
 
        # Otherwise, create all the necessary sub directories in the new temp directory
        mkdir -p "${TEMPDIR}/`dirname ${file}`" &>/dev/null
        # and output the object content into its original file name
        git cat-file blob ${object} > ${TEMPDIR}/${file}
 
    done;
done
 
# Now loop over each file in the temp dir to parse them for valid syntax
files_found=`find ${TEMPDIR} -name '*.yml'`
for fname in ${files_found}; do
    # Use "if !" so the failing lint doesn't abort the script via "set -e"
    if ! ${COMMAND} ${fname}; then
      echo "ERROR: parser failed on ${fname}"
      BAD_FILE=1
    fi
done;
 
rm -rf ${TEMPDIR} &> /dev/null
 
if [[ $BAD_FILE -eq 1 ]]
then
  exit 1
fi
 
exit 0

Testing a pre-receive hook locally is a bit more difficult than a pre-commit hook, as you need an environment where you have a remote repository. Fortunately you can use the process described for GitHub Enterprise pre-receive hooks: create a local Docker environment to act as a remote repository that can execute the pre-receive hook.
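A lighter alternative sketch is to push to a local bare repository with the hook installed:

$ git init --bare /tmp/test-remote.git
$ cp pre-receive /tmp/test-remote.git/hooks/pre-receive
$ chmod +x /tmp/test-remote.git/hooks/pre-receive
$ git remote add test /tmp/test-remote.git
$ git push test master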