Jailbreak detection with jail-monkey in a React Native app

Mobile operating systems impose restrictions on what the user can do on the device, such as which apps can be installed and what access apps and users have to information and data. These limitations can be bypassed by jailbreaking or rooting the device, which might introduce security risks to your app running on the device, so you might want to detect whether your app is running on a jailbroken device.

Jailbreak detection

Detecting and possibly acting on the device's status can be beneficial depending on your app's features. The reason for detection is to disable some or most of the app's functionality due to the security concerns that come with a jailbroken system. You might, for example, want to prevent jailbroken devices from authenticating to protected applications. There are different methods available, and one of the easiest is to use a library like jail-monkey.

Overall, jailbreak detection is based on several different aspects of the device: file existence checks, URI scheme registration checks, sandbox behavior checks, abnormal services and dynamic linker inspection.

jail-monkey is a React Native library for detecting whether a phone has been jailbroken or rooted, for both iOS and Android. (Note: the library is not actively maintained.)

Are users claiming they are crossing the globe in seconds and collecting all the Pokeballs? Some apps need to protect themselves in order to protect data integrity. JailMonkey allows you to:

  • Identify if a phone has been jail-broken or rooted for iOS/Android.
  • Detect mocked locations for phones set in "developer mode".

Using jail-monkey is straightforward, and after adding the library to your app the only open question is what you want to do with the information. Do you block the app from running, limit some features, or just log the information to e.g. Sentry or some analytics platform?

import JailMonkey from 'jail-monkey'

if (JailMonkey.isJailBroken()) {
  // Alternative behaviour for jail-broken/rooted devices.
}

jail-monkey answers the following questions:

  • can the location be mocked?
  • is the device in debug mode?
  • is the device jailbroken?
  • on iOS, the reason the device is considered jailbroken
  • on Android: i.a. whether adb or development settings mode is enabled
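What you do with these answers is app-specific. As a simplified sketch (the helper and its flag names are made up; in a real app the values would come from jail-monkey calls such as JailMonkey.isJailBroken()):

```javascript
// Hypothetical helper: turn jail-monkey's answers into a decision.
function assessDevice({ jailBroken, canMockLocation, debugMode }) {
  if (jailBroken) {
    // E.g. block authentication to protected features.
    return { allow: false, log: 'jailbroken/rooted device detected' };
  }
  if (canMockLocation || debugMode) {
    // Allow the app but flag the session for e.g. Sentry or analytics.
    return { allow: true, log: 'suspicious device settings' };
  }
  return { allow: true, log: null };
}
```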

You can show the jailbreak status, for example, as a banner in the app:

Jailbreak detected!

Is it worth it?

Implementing jailbreak detection is easy, but you might want to consider whether it brings any value. Jailbreaking an iOS device does not, on its own, make it less secure, but it makes it harder to reason about the security properties of the device. A more concerning issue is that users of jailbroken devices frequently hold off on updating them, as jailbreak development usually lags behind official software releases.

Jailbreak / root detection is never perfect and more or less hinders only the less technology-savvy users, so these kinds of checks don't contribute much to security. They can also result in false positives, as jail-monkey's issue tracker shows. Attackers are usually one step ahead of detection mechanisms, so the reported jailbreak status should be taken with a grain of salt.

Bypassing different jailbreak detection mechanisms is just a matter of effort, as the guide for bypassing jail-monkey and various jailbreak detection bypass articles show. There are also ready-made tools for iOS like iHide. So if you seriously plan to detect jailbreaking and secure your app against it, you should implement your own tools or use a custom-made library.

If you want to read more about jailbreak detection, Duo has written a good analysis of jailbreak detection methods and the tools used to evade them.

Notes from React Native EU 2022

React Native EU 2022 was held a couple of weeks ago. It's a conference that focuses exclusively on React Native but also covers general software development topics applied in an RN context. This year the online event provided great talks; in particular there were many presentations about app performance improvements, achieving better code and identifying bugs. Here are my notes from the talks I found interesting. All of the talks are available in the conference stream on YouTube.

Better performance and quality code from RN EU 2022

This year the React Native EU talks had, among other presentations, two common topics: performance and code quality. These are important aspects of software development that are often dismissed until problems arise, so it was refreshing to see them talked about so much.

Here are my notes from the talks I found most interesting. Also, the "Can't touch this" talk about accessibility was a good reminder that not everyone uses their mobile device by hand.

How we made our app 80% faster, a data structure story

Marin Godechot gave an interesting talk about React Native app performance improvements and one of the crucial tools for achieving them.

Validate your assumption before investing in solutions

The learnings of the talk, which went through trying different approaches to fix the performance problem, were:

  • Validate your assumption before investing in solutions
  • Performance improvements without tooling is a guessing game
  • Slow Redux selectors are bad
  • Data structures and complexity matters

The breakthrough in finding the problem came after actually measuring the performance with Datadog Real User Monitoring (RUM) and better understanding the bottlenecks. What you can't measure, you can't improve.

The issue wasn't with useless re-renders, lists or the navigation stack but with the data models, and not the one you would guess. Although the persisted JSON-stringified state was transferred with Redux between the JS side and the native side (JS <-> JSON <-> Bridge <-> JSON <-> Native), it wasn't the issue.

The big reveal came when they instrumented Redux's selectors, reducers and sagas with the monitoring tool: because user permissions were handled with the Attribute-Based Access Control model, the permission data in JSON format had grown over the years from 15 permissions per user to 10,000 permissions per user, which caused problems with Redux selectors. The fix was relatively simple: change the array to an object, e.g. { agency: ['agency:caregivers:manage'] }.
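As a rough sketch of why the data structure change helps (the permission strings and shapes here are made up, not the ones from the talk):

```javascript
// Before: one flat array; every permission check scans the whole list (O(n)),
// painful once there are ~10,000 entries per user.
const permissionsArray = [
  'agency1:caregivers:manage',
  'agency1:patients:read',
  // ...imagine thousands more entries
];
const canManage = permissionsArray.includes('agency1:caregivers:manage');

// After: permissions grouped by agency; a check first jumps straight to the
// right bucket (O(1) key lookup) and only scans that small list.
const permissionsByAgency = {
  agency1: ['agency1:caregivers:manage', 'agency1:patients:read'],
};
const canManageGrouped = (permissionsByAgency['agency1'] || [])
  .includes('agency1:caregivers:manage');
```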

What you can't measure, you can't improve

You can go EVERYWHERE but I can go FAST - holistic case study on performance

Jakub Binda made a clever comparison in his talk: apps are like cars, and the same modifications apply to fast cars and fast applications.

The talk starts slow but gets up to speed nicely with a good overview of how to create faster apps. Some of the points were:

  • Reduce weight:
    • Remove dead / legacy code
    • Avoid bundling heavy JSON files (e.g. translations)
    • Keep node_modules clean
  • Keep component structure "simple"
    • A more complex structure leads to longer render times, more resource consumption and more re-renders
    • Shimmers are complex and heavy (they improve UX by giving the impression that content appears faster than it actually does)
  • Using hooks:
    • Might lead to unexpected re-renders when their value changes or the dependency array is not set correctly
    • Lead to increased resource consumption if their logic is complex or the consumer of the results is hidden behind a flag
  • Tooling:
    • Flipper debug tool: flame charts & performance monitoring tooling
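On the hooks point: React decides whether to re-run a hook by comparing each entry of the dependency array with Object.is. A simplified sketch of that comparison (not React's actual source) shows why an inline object or array literal in the deps makes the hook run on every render:

```javascript
// Simplified version of React's dependency comparison: any entry that is
// not Object.is-equal to its previous value counts as a change.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null || prevDeps.length !== nextDeps.length) return true;
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

depsChanged([1, 'a'], [1, 'a']); // false: stable primitives
depsChanged([{ id: 1 }], [{ id: 1 }]); // true: a fresh object every render
```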

Reducing bugs in a React codebase

Darshita Chaturvedi's hands-on and thoroughly explained talk on identifying bugs in a React codebase was insightful.

Reducing bugs in a React codebase

Getting Better All the Time: How to Escape Bad Code

Josh Justice gave a great presentation with a hands-on example of an application needing fixes and features, showing how to apply them with refactoring, and what TDD is, by recreating the application.

The slides are available, with the rest of the TDD sequence and pointers to more resources on TDD in RN.


The key point of the talk was what you do about bad code. Do you work around it or rewrite it? The answer is refactoring: small changes that improve the arrangement of the code without changing its functionality. Comprehensive tests will save you while refactoring. The value is that you're making improvements that pay off right away, and the code is shippable after each refactoring. But how do you get there? By using Test-Driven Development (TDD).

"TDD is too much work"
Living with bad code forever is also a lot of work

Getting Better All the Time: How to Escape Bad Code

"Make code better all the time (with refactoring and better test coverage)"

Visual Regression Testing in React Native

Rob Walker talked about visual regression testing and about using React Native OWL. Slides are available.

React Native OWL:

  • Visual regression testing for React Native
  • CLI to build and run
  • Jest matcher
  • Interaction API
  • Report generator
  • Inspired by Detox
Baseline, latest, diff
Easy to approach (in theory)

Different test types:

  • Manual testing pros: great for exploratory testing, very flexible, can be outsourced
  • Manual testing cons: easy to miss things, time consuming, hard to catch small changes
  • Jest snapshot tests pros: fast to implement, fast to run, runs on CI
  • Jest snapshot tests cons: only tests in isolation, does not test flow, only comparing JSX
  • Visual regression tests pros: tests entire UI, checks multi-step flows, runs on CI
  • Visual regression tests cons: can be difficult to set up, slower to run, potentially flaky
Use case for visual regression tests

Can't touch this - different ways users interact with their mobile devices

Eevis talked about accessibility and what it means for a developer. The slides give a good overview of the topic.

  • Methods talked about
    • Screen reader: trouble seeing or understanding screen, speech or braille feedback, voiceover, talkback
    • Physical keyboard: i.a. tremors
    • Switch devices: movement-limiting disabilities
    • Voice recognition
    • Zooming / screen magnifying: low vision
  • 4 easy to start with tips
    • Use visible focus styles
    • Annotate headings (accessibilityRole)
    • Add name, role and state for custom elements
    • Respect reduced motion
  • Key takeaways
    • Users use mobile devices differently
    • Test with different input methods
    • Educate yourself
4 easy to start with tips

Performance issues - the usual suspects

Alexandre Moureaux gave a hands-on presentation on how to use the Flipper React DevTools profiler, Flamegraph and the Android performance profiler to debug Android application performance issues, e.g. why the app runs below 60 fps. It concentrates on debugging Android, but similar approaches also work for iOS.

First, make your measures deterministic: average your measures over several iterations, keep the same conditions for every measure and automate the behavior you want to test.
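Averaging over iterations could look something like this (a sketch; measureFn is a placeholder for whatever measurement you automate):

```javascript
// Run the same measurement several times and average the samples to
// reduce noise; `measureFn` stands in for e.g. timing a screen render.
async function averageMeasure(measureFn, iterations = 10) {
  const samples = [];
  for (let i = 0; i < iterations; i += 1) {
    samples.push(await measureFn());
  }
  return samples.reduce((sum, sample) => sum + sample, 0) / samples.length;
}
```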

The presentation showed how to use the Flipper React DevTools profiler Flamegraph to find which component is slow and causes e.g. re-rendering, then refactoring the view by moving the updating component which caused the re-rendering, and using @perf-profiler/web-reporter to visualize the measures from JSON files.

You can record a trace with the Hermes profiler and open it in Google Chrome:

  • bundleInDebug: true
  • Start the profiler in the simulator; the trace is saved on the device
  • npx react-native profile-hermes pulls the trace and converts it to a usable format
  • Findings: filtering the tweets list by parsing the weekday from the tweet was slow (high complexity); replace it with Date.getDay on tweet.createdAt
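The weekday fix can be illustrated like this (the tweet shape is assumed, not taken from the talk's code):

```javascript
// Slow, illustrative version: format the date to a string and parse the
// weekday out of it for every tweet in the list.
const slowIsMonday = (tweet) =>
  new Date(tweet.createdAt).toString().startsWith('Mon');

// Fast version: Date.prototype.getDay() returns the weekday directly
// (0 = Sunday, 1 = Monday, ...), no string formatting or parsing needed.
const fastIsMonday = (tweet) => new Date(tweet.createdAt).getDay() === 1;

const tweets = [
  { createdAt: '2022-09-05T10:00:00' }, // a Monday
  { createdAt: '2022-09-06T10:00:00' }, // a Tuesday
];
tweets.filter(fastIsMonday).length; // 1
```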

Connect the app to Android Studio for profiling: finding a long-running animation (background skeleton content, "grey boxes")

How to actually improve the performance of a RN App?

Michal Chudziak presented how to use Define, Measure, Analyze, Improve, Control (DMAIC) pattern to improve performance of a React Native application.

  • Define where we want to go, listen to customers
  • Make it measurable
  • Analyze common user paths
  • Choose priorities

Define your goals:


Measure: where are we?

  • React Profiler API
  • react-native-performance
  • Native IDEs
  • Screen capture
  • Flipper & perf monitor


  • Measurement needs to be accurate and precise

Analyze phase: how do we get there?

  • List potential root causes
  • Cause and effect diagram (fish bone diagram)
  • Narrow the list

Improve phase

  • Identify potential solutions
  • Select the best solution
  • Implement and test the solution

Control phase: are we on track?

Performance will degrade if it's not under control so create a control plan.


Monitor regressions

  • Reassure: performance regression testing (local + CI)
  • Firebase: performance monitoring on production
  • Sentry:  performance monitoring on production

Override nested NPM dependency versions

Sometimes your JavaScript project's dependency contains a library with a vulnerability, and you're left with the question of how to solve the issue. If the nested dependency (with the vulnerability) is already fixed but the main dependency isn't, you can use the overrides field of package.json, as explained in a StackOverflow answer.

You'll need a reasonably new version of the npm CLI, v8.3.0 (2021-12-09), which comes with Node.js v16.14.0. Corresponding Node.js and npm versions can be seen in the releases table.

In my case the problem was that express-openapi-validator contained a version of multer which used dicer with a vulnerable version of busboy. The vulnerability was fixed in multer 1.4.4-lts.1, and now the task was to override the nested dependency.

The overrides section in package.json file provides a way to replace a package in your dependency tree with another version, or another package entirely. These changes can be scoped as specific or as vague as desired.

For example in my case I used:

  "overrides": {
    "express-openapi-validator": {
      "multer": "1.4.4-lts.1"
    }
  }

The overrides section is useful if you need to make specific changes to dependencies of your dependencies, for example replacing the version of a dependency with a known security issue, replacing an existing dependency with a fork, or only overriding a package when it's a dependency of a particular package hierarchy.

And to ensure that you have the correct Node.js and npm versions in use, add the following to package.json:

"engines": {
    "node": ">=16.14",
    "npm": ">=8.3.0"
}

What did we learn here? How to "patch" dependencies, and how beneficial it is to read what new features come with your tools. But in the end this was all more or less just a drill, as the nested dependency of multer in express-openapi-validator was updated to the fixed version the same day it was released (it took around 10 days for multer to have a fix).

Using CASL and roles with persisted permissions

How do you implement user groups, roles and permissions in a multitenant environment where multiple organizations use the same application and each has its own users, groups and roles? There are different approaches to the issue, and one is to implement attribute-based access control (ABAC) in addition to roles (RBAC). But what does that actually mean?

In short: you have groups, groups have roles and roles have permission attributes. Read on and I'll explain how to tie things together using CASL: an isomorphic authorization JavaScript library which restricts what resources a given client is allowed to access.

Authorization mechanisms

First, let's start with a short introduction to authorization: "Authorization is the process of determining which actions a user can perform in the system." An authorization mechanism grants proper privileges to a user as defined by some role or other factor. Two popular types of authorization are role-based access control (RBAC) and attribute-based access control (ABAC).

Role-based access control (RBAC) is a simple and efficient solution for managing user permissions in an organization. We have to create roles for each user group and assign permissions for certain operations to each role. This is a commonly used practice with roles such as "System admin", "Administrator", "Editor" and "User". The drawback of RBAC, especially in a complex system, is that a user can have various roles, which requires more time and effort to organize. Also, the more roles you create, the more security holes will be born.

Attribute-based access control (ABAC) is an authorization mechanism granting access dynamically based on user characteristics, environment context, action types and more. Here we can have some action which is permitted to a certain user, e.g. "{ action: 'edit', subject: 'Registration' }" or "{ action: 'manage', subject: 'User', fields: ['givenName', 'familyName', 'email', 'phone', 'role'] }". For example, a manager can view only the users in their department, or a user cannot access another project's information. ABAC is sometimes referred to as claims-based access control (CBAC) or policy-based access control (PBAC). You often see the ABAC authorization mechanism in cloud computing services such as AWS and Azure.

The difference between ABAC and RBAC is that ABAC supports Boolean logic: if the user meets a condition, they are allowed to do that action. ABAC enables more flexible control of access permissions, as you can grant, adjust or revoke permissions instantly.

RBAC is a simple but sufficient solution for managing access in an uncomplicated system. However, it is not flexible and scalable, and that is where ABAC becomes the more efficient approach. In contrast, ABAC may be costly to implement. There is no all-in-one solution; it depends on your business requirements. Remember to use the right tool for the right job.
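The contrast can be boiled down to a few lines (the role names, permission strings and condition below are made up for illustration):

```javascript
// RBAC: a static mapping from role to permissions.
const rolePermissions = {
  editor: ['article:edit', 'article:read'],
  user: ['article:read'],
};
const rbacCan = (role, permission) =>
  (rolePermissions[role] || []).includes(permission);

// ABAC: a condition evaluated against attributes of the user and resource,
// e.g. "you may edit only your own articles".
const abacCan = (user, action, resource) =>
  action === 'edit' ? resource.ownerId === user.id : true;

rbacCan('user', 'article:edit'); // false
abacCan({ id: 7 }, 'edit', { ownerId: 7 }); // true
```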

RBAC table structure can be visualized with:

Role-based access control in a multitenant system

And you can add attribute based permissions to RBAC model:

Attribute based permissions for roles (CASL)

Using CASL with roles and permissions

CASL is a robust authorization library for JavaScript; besides RBAC it also supports an ABAC implementation by adding a condition expression to the permission. That's exactly what we need in this case, where we have a requirement to define different permissions for custom groups: a need for roles and groups.

CASL has good resources and shows in a great cookbook how to implement roles with persisted permissions. To supplement that, you can check the Nest.js and CASL example, which also shows the table structures at different stages.

The code for the following part can be seen in Gist

Table structure

  • Global Users table
  • Users are grouped by tenant in TenantUsers
  • TenantUsers have a role or roles
  • Users can be added to a group
  • Groups and roles are tenant specific
  • Permissions can be added to a role

Table structure in DBDiagram syntax

Next we create the table structure in our code with the help of the Sequelize ORM, which describes the models, relations and associations.

We have the following models for our table structure:

  • models/rolepermissions.js
  • models/role.js
  • models/permission.js
  • models/group.js
  • models/groupuser.js

The Sequelize modeled class looks like this:

const { Model } = require('sequelize');

module.exports = (sequelize, DataTypes) => {
  class Permission extends Model {
    static associate() {}

    toJSON() {
      return {
        id: this.id,
        action: this.action,
        subject: this.subject,
        fields: this.fields,
        conditions: this.conditions,
        inverted: this.inverted,
        system: this.system,
      };
    }
  }

  Permission.init(
    {
      id: {
        type: DataTypes.INTEGER,
        primaryKey: true,
        allowNull: false,
      },
      action: DataTypes.STRING,
      subject: DataTypes.STRING,
      fields: DataTypes.ARRAY(DataTypes.TEXT),
      conditions: DataTypes.JSON,
      inverted: DataTypes.BOOLEAN,
      system: DataTypes.BOOLEAN,
    },
    {
      sequelize,
      modelName: 'Permission',
    }
  );

  return Permission;
};

You can see all the models on the Gist.

Defining CASL permissions to user

In this example we add the permissions directly to the JWT we generate and sign, so we don't have to fetch the user's permissions from the database (or cache) every time the user makes a request. We get claims.role from the authentication token, and it can be e.g. "admin", "user" or whatever you need.

services/token.js is the class where we fetch the permissions for the role and generate the JWT. It fetches the permissions for the user's roles, gets the user's groups and those groups' roles and permissions. Then it merges the permissions so they are unique and adds them to the JWT.
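The merging step could be sketched roughly like this (a hypothetical helper, not the actual code from the Gist):

```javascript
// Collect permissions from the user's roles and group roles and dedupe them
// before embedding in the JWT. Here action + subject is used as the identity;
// real code might also need to compare fields and conditions.
function mergePermissions(...permissionLists) {
  const seen = new Map();
  for (const list of permissionLists) {
    for (const permission of list) {
      seen.set(`${permission.action}:${permission.subject}`, permission);
    }
  }
  return [...seen.values()];
}
```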

The JWT looks something like this:

{
  "sub": "1234567890",
  "role": "user",
  "groups": [1, 3],
  "permissions": [
    {
      "action": "read",
      "subject": "Users",
      "fields": null,
      "conditions": {
        "id": "${user.tenant}"
      }
    },
    {
      "action": "edit",
      "subject": "User",
      "fields": null,
      "conditions": {
        "id": "${user.id}",
        "TenantId": "${user.tenant}"
      }
    }
  ]
}

Now we have all the permissions for the user in the JWT, which we can use in our services and routes. The solution isn't optimal, as the JWT can grow large if the user has lots of permissions.

In the JWT you can see that the conditions attribute has a placeholder. The middleware/abilities.js class parses the user's JWT, to which we added the permissions in JSON format, converts each condition to CASL's condition and creates a CASL Ability so that we can add permission checks to our actions later on.

const { AbilityBuilder, Ability } = require('@casl/ability');
const get = require('lodash/get');

const parseCondition = (template, vars) => {
  if (!template) {
    return null;
  }
  return JSON.parse(JSON.stringify(template), (_, rawValue) => {
    if (rawValue[0] !== '$') {
      return rawValue;
    }
    const name = rawValue.slice(2, -1);
    const value = get(vars, name);
    if (typeof value === 'undefined') {
      throw new ReferenceError(`Variable ${name} is not defined`);
    }
    return value;
  });
};

const defineAbilitiesFor = (jwtToken) => {
  const { can: allow, build } = new AbilityBuilder(Ability);
  jwtToken.permissions.forEach((item) => {
    const parsedConditions = parseCondition(item.conditions, { jwtToken });
    allow(item.action, item.subject, item.fields, parsedConditions);
  });
  return build();
};

module.exports = { defineAbilitiesFor };

Now you can check the permissions in e.g. a route.

const ForbiddenOperationError = {
  from: (abilities) => ({
    throwUnlessCan: (...args) => {
      if (!abilities.can(...args)) {
        throw Forbidden();
      }
    },
  }),
};

try {
  const abilities = defineAbilitiesFor(jwtToken);
  ForbiddenOperationError.from(abilities).throwUnlessCan('read', 'Users');
  const users = await UserService.getAll(user.tenant);
  return res.send({ entities: users });
} catch (e) {
  return next(e);
}


Implementing authorization is an important part of a software system, and you have several approaches to solve it. RBAC is a simple but sufficient solution for managing access in an uncomplicated system. ABAC, on the other hand, is a flexible and scalable approach but may be costly to implement. There is no all-in-one solution; it depends on your business requirements.

Using CASL with roles, groups and permissions makes handling the authorization side of things easier, but as you can see from the database diagram, the overall complexity of the system is still quite high, and we haven't even touched the user interface for managing the roles, groups and permissions.

Create secure code with Secure Code Bootcamp

Software development contains many aspects which the developer has to take care of and think about. One of them is information security and secure code, which affect the product and its users. There are different ways to learn information security and how to create secure, quality code, and this time I'll shortly go through what Secure Code Warrior's Secure Code Bootcamp has to offer.

For the record, other good resources I've come across are Kontra's application security training for the OWASP Top 10 and OWASP API Top 10, and hands-on approaches like the Cyber Security Base MOOC, Wargames, Hack The Box and Cybrary.

Secure Code Bootcamp

Kick-start your journey to creating more secure, quality code with Secure Code Bootcamp - our free mobile app for early-career coders.

Secure Code Bootcamp

Secure Code Warrior provides a learning platform for developers to increase their software security skills and guide each coder along their own preferred learning pathway. They have products, solutions and resources to help an organization's development teams ship quality code, and they also provide a free mobile app for early-career coders: Secure Code Bootcamp.

The application presents common vulnerabilities from the OWASP Top 10, and you get badges as you progress through each new challenge, unlocking new missions as you go. It teaches you to identify vulnerable code, starting with short introductions and explanations for each vulnerability of how they happen and where. Each topic is presented as a mission with a briefing and code inspection tasks.

The Secure Code Bootcamp covers eight of the OWASP Top 10 vulnerabilities; the last two are, I think, more or less difficult to present in this gamified context.

The mission briefing contains a couple of minutes of theory about the given vulnerability and teaches you what it is, where it occurs and how to prevent it.

After the briefing you're challenged with code examples in the language you've chosen (Node.js, Python: Django, Java: Spring, C# .NET: MVC). You practically swipe your way through code reviews by accepting or rejecting them. Reading code on a mobile device screen isn't optimal, but it suffices for the given task. It works better for Node.js than for Java Spring.

Code inspection isn't always as easy as you would think, even if you know what to look for. After successfully inspecting a couple of code samples you're awarded a badge. The briefing tells you what to look for in the code, but sometimes it's a guess what is being asked. The code inspection sometimes requires knowledge of the framework used, and the inspection is done without context of the usage. In almost every inspection I got one answer wrong, which gave me 75% accuracy.


This approach to teaching security topics works OK if you're code-oriented. You'll learn the OWASP Top 10 in practice through short theory lessons with pointers on how to prevent the vulnerabilities, and test your code inspection skills by noticing vulnerable aspects of code fragments. Having swiped through the bootcamp, though, the code inspection parts were not always so useful.

The marketing text says "progress along multiple missions and build secure coding skills" and "Graduate with fundamental secure coding skills for your next step as a coder", and that is in my opinion a bit much to claim. The bootcamp teaches the basic concepts of vulnerabilities and how they look in code but doesn't teach you to code securely.

Overall, the Secure Code Bootcamp for the OWASP Top 10 vulnerabilities is a good start for learning what, where, how and why vulnerabilities exist and for learning to identify them. You can do the bootcamp in the different languages available, so the replayability value is good.

Linting GraphQL Schema and queries

Analyzing code for compliance with guidelines is one part of code quality assurance, and automating it with tools like ESLint and binding the checks to the build process is a common and routine operation. But what about validating GraphQL schemas and queries? Here are some pointers to tools you can use to start linting your GraphQL schema.


  • eslint-plugin-graphql: Checks tagged query strings inside JavaScript, or queries inside .graphql files, against a GraphQL schema.
  • graphql-eslint: Lints both GraphQL schema and GraphQL operations.
  • graphql-schema-linter: Command line tool to validate GraphQL schema definitions against a set of rules.
  • apollo-tooling: Brings together your GraphQL clients and servers with tools for validating your schema, linting your operations and generating static types.

Using ESLint for GraphQL checks

Depending on your project structure and how you've created your schema and queries, you can use different tools for linting. eslint-plugin-graphql requires you to have the schema ready, whereas graphql-eslint can do some checks and validation without the schema.

Using graphql-eslint with configuration in package.json, where the project uses code files to store the GraphQL schema / operations:

"eslintConfig": {
  "overrides": [
    {
      "files": ["*.js"],
      "processor": "@graphql-eslint/graphql"
    },
    {
      "files": ["*.graphql"],
      "parser": "@graphql-eslint/eslint-plugin",
      "plugins": ["@graphql-eslint"],
      "rules": {
        "prettier/prettier": "off",
        "@graphql-eslint/unique-fragment-name": "warn",
        "@graphql-eslint/no-anonymous-operations": "warn",
        "@graphql-eslint/no-operation-name-suffix": "error",
        "@graphql-eslint/no-case-insensitive-enum-values-duplicates": ["error"]
      }
    }
  ]
}

The linting process can be enriched and extended with GraphQL type information if you are able to provide your GraphQL schema.

Introspection Query and schema linting

GraphQL APIs are required to be self-documenting, and every standard-conforming GraphQL API needs to respond to queries about its own structure. This system is called introspection. The key to GraphQL validation is doing the introspection query and getting the schema in the GraphQL Schema Definition Language (SDL) format. The query gets you the JSON representation of the shape of the API, which you can convert to SDL. See more about GraphQL schemas.

You can use apollo-tooling for the introspection. First start your GraphQL service and point apollo at it.

npx apollo client:download-schema --endpoint=http://localhost:4444/graphql schema.graphql

In this case the tooling outputs the schema in GraphQL SDL format (.graphql); you can also get it as JSON by using the schema.json filename.

Now that you have the GraphQL schema you can lint it using graphql-schema-linter. Create the config graphql-schema-linter.config.js in your project root folder.

module.exports = {
  rules: ['enum-values-sorted-alphabetically'],
  schemaPaths: ['path/to/my/schema/files/**.graphql'],
  customRulePaths: ['path/to/my/custom/rules/*.js'],
  rulesOptions: {
    'enum-values-sorted-alphabetically': { sortOrder: 'lexicographical' }
  }
};

And run graphql-schema-linter.

You get results e.g.

1320:1 The type `Upload` is defined in the schema but not used anywhere.                      defined-types-are-used
74:3   The field `Company.phone_number` is not camel cased.                                   fields-are-camel-cased

Running graphql-schema-linter in CI/CD

For automating schema checks you can use graphql-schema-linter in your CI/CD pipeline. Here's an example for GitLab CI, where we run our GraphQL service inside Docker.


version: '3.1'

services:
  graphql-service-introspection:
    image: node:14-alpine
    container_name: graphql-service-introspection
    restart: unless-stopped
    environment:
      NODE_ENV: development
      ENVIRONMENT: introspection
      PORT: 4000
    ports:
      - "4000:4000"
    user: node
    volumes:
      - ./:/home/node
    working_dir: /home/node
    command: "npm run start:docker"

Where npm run start:docker is "./node_modules/nodemon/bin/nodemon.js --legacy-watch --exec ./node_modules/@babel/node/bin/babel-node.js src/app.js"


graphql-schema-lint:
  image: docker:19.03.1
  stage: test
  services:
    - docker:19.03.1-dind
  script:
    - apk add --no-cache docker-compose npm
    - "# Start GQL"
    - docker-compose -f docker-compose-introspection.yml up -d
    - "# Get introspection schema"
    - cd /tmp
    - npx apollo client:download-schema --endpoint=http://localhost:4000/graphql schema.graphql
    - cd $CI_PROJECT_DIR
    - docker-compose -f docker-compose-introspection.yml stop
    - docker-compose -f docker-compose-introspection.yml rm -f
    - "# Run tests"
    - npx graphql-schema-linter /tmp/schema.graphql

The results in CI/CD pipeline could look then like this:

graphql-schema-linter in CI/CD

Visual Studio Code Extensions for better programming

Visual Studio Code has become "The Editor" for many in software development, and it has many extensions which you can use to extend the functionality for your needs and customize it. Here's a short list of the extensions I use for frontend (React, JavaScript, Node.js), backend (GraphQL, Python, Node.js, Java, PHP, Docker) and database (PostgreSQL, MongoDB) development.


EditorConfig for VS Code
Attempts to override user/workspace settings with settings found in .editorconfig files.

Visual Studio IntelliCode
Provides AI-assisted development features for Python, TypeScript/JavaScript and Java developers in Visual Studio Code, with insights based on understanding your code context combined with machine learning.

GitLens
Visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and so much more.
Git Blame
See Git Blame information in the status bar for the currently selected line.

Local History
Plugin for maintaining local history of files.

Language and technology specific

ESLint
Integrates ESLint into VS Code.

Prettier - Code formatter
Opinionated code formatter which enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary.

Python
Linting, Debugging (multi-threaded, remote), Intellisense, Jupyter Notebooks, code formatting, refactoring, unit tests, and more.

PHP Intelephense
PHP code intelligence for Visual Studio Code is a high performance PHP language server packed full of essential features for productive PHP development.

Java Extension Pack
Popular extensions for Java development and more.

Docker
Makes it easy to build, manage, and deploy containerized applications from Visual Studio Code. It also provides one-click debugging of Node.js, Python, and .NET Core inside a container.

Markdown All in One
All you need to write Markdown (keyboard shortcuts, table of contents, auto preview and more).
markdownlint
Includes a library of rules to encourage standards and consistency for Markdown files.
Markdown Preview Enhanced
Provides you with many useful functionalities such as automatic scroll sync, math typesetting, mermaid, PlantUML, pandoc, PDF export, code chunk, presentation writer, etc.

Prettify JSON
Prettify ugly JSON inside VSCode.

PlantUML
Rich PlantUML support for Visual Studio Code.

HashiCorp Terraform
Syntax highlighting and autocompletion for Terraform


PostgreSQL
Query tool for PostgreSQL databases. While there is a database explorer it is NOT meant for creating/dropping databases or tables. The explorer is a visual aid for helping to craft your queries.

MongoDB for VS Code
Makes it easy to work with MongoDB.

GraphQL
Adds syntax highlighting, validation, and language features like go to definition, hover information and autocompletion for GraphQL projects. This extension also works with queries annotated with the gql tag.
GraphQL for VSCode
VSCode extension for GraphQL schema authoring & consumption.
Apollo GraphQL for VS Code
Rich editor support for GraphQL client and server development that seamlessly integrates with the Apollo platform.


Babel JavaScript
JavaScript syntax highlighting for ES201x, React JSX, Flow and GraphQL.

Jest
Use Facebook's Jest with pleasure.

npm
Supports running npm scripts defined in the package.json file and validating the installed modules against the dependencies defined in the package.json.

User Interface specific

Indent Rainbow
Simple extension to make indentation more readable.

Rainbow Brackets
Rainbow colors for the round brackets, the square brackets and the squiggly brackets.

vscode-icons
Icons for filetypes in the file browser.

Other tips

VS Code: Show Full Path in Title Bar
With Code open, hit Command+, to open Settings and set: "window.title": "${activeEditorLong}${separator}${rootName}"

Slow integrated terminal in macOS
If the integrated terminal is slow on macOS, one workaround is to remove the code signature from the renderer helper:

codesign --remove-signature "/Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Renderer).app"

Running static analysis tools for PHP

We all write bug-free code, but analyzing your code is still an important part of software development, in case there has for some reason been a mishap with typing. Here's a short introduction to running static analysis on PHP code.

Static analysis tools for PHP

The curated list of static analysis tools for PHP shows you many options for doing analysis. Too many, you say? Yes, but fortunately you can start with a couple of tools and continue with the ones your specific needs call for.

You can run the different analysis tools by installing them with Composer, or you can use the Toolbox, which helps to discover and install tools. You can use it as a Docker container.

First, fetch the docker image with static analysis tools for PHP:

$ docker pull jakzal/phpqa:<your php version>

For example:

$ docker pull jakzal/phpqa:php7.4-alpine

PHPMD: PHP Mess Detector

One of the tools provided in the image is PHPMD, which aims to be a PHP equivalent of the well-known Java tool PMD. PHPMD can be seen as a user-friendly and easy-to-configure frontend for the raw metrics measured by PHP Depend.

It looks for several potential problems within the source code, such as:

  • Possible bugs
  • Suboptimal code
  • Overcomplicated expressions
  • Unused parameters, methods, properties

You can install phpmd with Composer: composer require phpmd/phpmd. Then run it with e.g. ./vendor/bin/phpmd src html unusedcode --reportfile phpmd.html

Or run the command below, which runs phpmd in a Docker container and mounts the current working directory as /project.

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine \
    phpmd src html cleancode,codesize,controversial,design,naming,unusedcode --reportfile phpmd.html

You can also write your own custom rules to reduce false positives, e.g. in phpmd.test.xml:

<?xml version="1.0"?>
<ruleset name="VV PHPMD rule set"
         xmlns="http://pmd.sf.net/ruleset/1.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sf.net/ruleset/1.0.0 http://pmd.sf.net/ruleset_xml_schema.xsd"
         xsi:noNamespaceSchemaLocation="http://pmd.sf.net/ruleset_xml_schema.xsd">
    <description>Custom rule set that checks my code.</description>
    <rule ref="rulesets/codesize.xml">
        <exclude name="CyclomaticComplexity"/>
        <exclude name="ExcessiveMethodLength"/>
        <exclude name="NPathComplexity"/>
        <exclude name="TooManyMethods"/>
        <exclude name="ExcessiveClassComplexity"/>
        <exclude name="ExcessivePublicCount"/>
        <exclude name="TooManyPublicMethods"/>
        <exclude name="TooManyFields"/>
    </rule>
    <rule ref="rulesets/codesize.xml/TooManyFields">
        <properties>
            <property name="maxfields" value="21"/>
        </properties>
    </rule>
    <rule ref="rulesets/cleancode.xml">
        <exclude name="StaticAccess"/>
        <exclude name="ElseExpression"/>
        <exclude name="MissingImport"/>
    </rule>
    <rule ref="rulesets/controversial.xml">
        <exclude name="CamelCaseParameterName"/>
        <exclude name="CamelCaseVariableName"/>
        <exclude name="Superglobals"/>
    </rule>
    <rule ref="rulesets/design.xml">
        <exclude name="CouplingBetweenObjects"/>
        <exclude name="NumberOfChildren"/>
    </rule>
    <rule ref="rulesets/design.xml/NumberOfChildren">
        <properties>
            <property name="minimum" value="20"/>
        </properties>
    </rule>
    <rule ref="rulesets/naming.xml">
        <exclude name="ShortVariable"/>
        <exclude name="LongVariable"/>
    </rule>
    <rule ref="rulesets/unusedcode.xml">
        <exclude name="UnusedFormalParameter"/>
    </rule>
    <rule ref="rulesets/codesize.xml/ExcessiveClassLength">
        <properties>
            <property name="minimum" value="1500"/>
        </properties>
    </rule>
</ruleset>

Then run your analysis with:

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine phpmd src html phpmd.test.xml,unusedcode --reportfile phpmd.html

You get a list of found issues formatted into an HTML file:

PHPMD Report

PHPStan - PHP Static Analysis Tool

"PHPstan focuses on finding errors in your code without actually running it. It catches whole classes of bugs even before you write tests for the code. It moves PHP closer to compiled languages in the sense that the correctness of each line of the code can be checked before you run the actual line."

Installing with composer: composer require --dev phpstan/phpstan

Or run on Docker container:

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine phpstan analyse --level 1 src

By default you get a report in the console, formatted as a table with errors grouped by file and colorized, meant for human consumption.

PHPStan report

By default PHPStan performs only the most basic checks; you can pass a higher rule level through the --level option (0 is the loosest and 8 is the strictest) to analyse code more thoroughly. Start with 0 and increase the level as you go, fixing possible issues.

PHPStan found some more issues which PHPMD didn't find, but the output of PHPStan could be better. There's a web UI for browsing found errors, and you can click to open your editor of choice on the offending line, but you have to pay for it: PHPStan Pro costs 7 EUR per month for individuals and 70 EUR per month for teams.

VS Code extension for PHP

If you're using Visual Studio Code for PHP programming there are some extensions to help you.

PHP Intelephense
PHP code intelligence for Visual Studio Code provides better IntelliSense than the VS Code built-in and also does some signature checking etc. The extension also has a premium version with some additional features.

What software and hardware I use

There was a discussion in Koodiklinikka Slack about what software people use, and how people have made "/uses" pages for that purpose. Inspired by Wes Bos' /uses from the "Syntax" podcast, here's my list.

Check my /uses page to see what software and hardware I use for full-stack development in JavaScript, Node.js, Java, Kotlin, GraphQL, PostgreSQL and more. The list excludes the tools from different customers like using GitLab, Rocket.Chat, etc.

For more choices check uses.tech.

Generating JWT and JWK for information exchange between services

Transmitting information securely between services and handling authorization can be achieved with JSON Web Tokens. JWTs are an open, industry standard RFC 7519 method for representing claims securely between two parties. Here's a short explanation of what they are and how they're used, and a guide to generating the needed keys.

"JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA."


You should read the introduction to JWT to understand its role, and there's also a handy JWT Debugger for testing things. For more detailed info you can read the JWT Handbook.

In short, authorization and information exchange are some scenarios where JSON Web Tokens are useful. They essentially encode any sets of identity claims into a payload, provide some header data about how it is to be signed, then calculate a signature using one of several algorithms and append that signature to the header and claims. JWTs can also be encrypted to provide secrecy between parties. When a server receives a JWT, it can guarantee the data it contains can be trusted because it's signed by the source.

Usually two algorithms are supported for signing JSON Web Tokens: RS256 and HS256. HS256 generates a symmetric signature from a shared secret, while RS256 generates an asymmetric signature, which means a private key must be used to sign the JWT and the corresponding public key must be used to verify the signature.

JSON Web Key

JSON Web Key (JWK) provides a mechanism for distributing the public keys that can be used to verify JWTs. The specification is used to represent the cryptographic keys used for signing RS256 tokens. This specification defines two high level data structures: JSON Web Key (JWK) and JSON Web Key Set (JWKS):

  • JSON Web Key (JWK): A JSON object that represents a cryptographic key. The members of the object represent properties of the key, including its value.
  • JSON Web Key Set (JWKS): A JSON object that represents a set of JWKs. The JSON object MUST have a keys member, which is an array of JWKs. The JWKS is a set of keys containing the public keys that should be used to verify any JWT.

In short, the service signs JWTs with its private key (in this case in PKCS12 format) and the receiving service checks the signature with the public key, which is in JWK format.

Generating keys and certificate for JWT

In this example we are using JWTs for information exchange as they are a good way of securely transmitting information between parties. Because JWTs can be signed—for example, using public/private key pairs — you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Generate the keys and certificate for JWT with OpenSSL; in this case self-signed is enough. First, generate the private key:

$ openssl genrsa -out private.pem 4096

Generate the public key from the previously generated private key, in case pem-jwk needs it (it isn't needed otherwise):

$ openssl rsa -in private.pem -out public.pem -pubout

If you try to insert private and public keys to PKCS12 format without a certificate you get an error:

$ openssl pkcs12 -export -inkey private.pem -in public.pem -out keys.p12
unable to load certificates

Generate a self-signed certificate with the aforementioned key, valid for 10 years. This certificate isn't used for anything, as the counterpart is a JWK with just the public key, no certificate:

$ openssl req -key private.pem -new -x509 -days 3650 -subj "/C=FI/ST=Helsinki/O=Rule of Tech/OU=Information unit/CN=ruleoftech.com" -out cert.pem

Convert the above private key and certificate to PKCS12 format:

$ openssl pkcs12 -export -inkey private.pem -in cert.pem -out keys.pfx -name "my alias"

Check the keystore:

$ keytool -list -keystore keys.pfx
$ keytool -v -list -keystore keys.pfx -storetype PKCS12 -storepass
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
1, Jan 18, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA-256): 0D:61:30:12:CB:0E:71:C0:F1:A0:77:EB:62:2F:91:9B:55:08:FC:3B:A5:C8:B4:C7:B4:CD:08:E9:2C:FD:2D:8A

If you didn't set an alias for the key when creating the PKCS12 file, you can change it:

keytool -changealias -alias "original alias" -destalias "my awesome alias" -keystore keys.pfx -storetype PKCS12 -storepass "password"

Now we finally get to the part where we generate the JWK. The final result is a JSON file which contains the public key from the previously created certificate in JWK format, so that the service can accept the signed tokens.

The JWK is in format of:

"keys": [
"kid": "something",
"kty": "RSA",
"use": "sig",
"n": "…base64 public key values …",
"e": "…base64 public key values …"

Convert the PEM to JWK format with e.g. pem-jwk or with pem_to_jwks.py. The service's own key is in PKCS12 format, but the public key's values n and e are extracted from the private key PEM with the following commands; the jq part picks the public parts and excludes the private parts.

$ npm install -g pem-jwk
$ ssh-keygen -e -m pkcs8 -f private.pem | pem-jwk | jq '{kid: "something", kty: .kty , use: "sig", n: .n , e: .e }'


To check things, you can do the following.

Extract a private key and certificates from a PKCS12 file using OpenSSL:

$ openssl pkcs12 -in keys.p12 -out keys_out.txt

The private key, certificate, and any chain files will be parsed and dumped into the "keys_out.txt" file. The private key will still be encrypted.

To extract just the private key from the p12 file (the key is still encrypted):

$ openssl pkcs12 -in keys.p12 -nocerts -out privatekey.pem

Decrypt the private key:

$ openssl rsa -in privatekey.pem -out privatekey_uenc.pem

Now if you convert the PEM to JWK you should get the same values as before.

More to read: JWTs? JWKs? ‘kid’s? ‘x5t’s? Oh my!