Using CASL and roles with persisted permissions

How do you implement user groups, roles and permissions in a multitenant environment where multiple organizations use the same application and each has its own users, groups and roles? There are different approaches to the issue, and one is to implement attribute-based access control (ABAC) in addition to role-based access control (RBAC). But what does that actually mean?

In short: you have groups, groups have roles and roles have permission attributes. Read on and I'll explain how to tie things together using CASL: an isomorphic authorization JavaScript library which restricts what resources a given client is allowed to access.

Authorization mechanisms

First, let's start with a short introduction to authorization: "Authorization is the process of determining which actions a user can perform in the system." An authorization mechanism grants proper privileges to a user defined by some role or other factor. Two popular types of authorization are role-based access control (RBAC) and attribute-based access control (ABAC).

Role-based access control (RBAC) is a simple and efficient solution for managing user permissions in an organization. We create roles for each user group and assign permissions for certain operations to each role. This is a commonly used practice with roles such as "System admin", "Administrator", "Editor" and "User". The drawback of RBAC, especially in a complex system, is that a user can have various roles, which requires more time and effort to organize. Also, the more roles you create, the more security holes you open.
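
To make that concrete, here is a minimal sketch (plain JavaScript; the role and action names are invented for illustration) of what RBAC boils down to: a static mapping from roles to allowed actions.

// Hypothetical role-to-permission mapping; names are made up for illustration.
const rolePermissions = {
  admin: ['user:create', 'user:edit', 'user:delete', 'post:edit'],
  editor: ['post:edit', 'post:publish'],
  user: ['post:read'],
};

// A user may act if any of their roles grants the permission.
const isAllowed = (user, permission) =>
  user.roles.some((role) => (rolePermissions[role] || []).includes(permission));

console.log(isAllowed({ roles: ['editor'] }, 'post:edit')); // true
console.log(isAllowed({ roles: ['user'] }, 'post:edit')); // false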

Attribute-based access control (ABAC) is an authorization mechanism granting access dynamically based on user characteristics, environment context, action types and more. Here we can have an action which is permitted to a certain user, e.g. "{ action: 'edit', subject: 'Registration' }" or "{ action: 'manage', subject: 'User', fields: ['givenName', 'familyName', 'email', 'phone', 'role'] }". For example, a manager can view only the users in their department, and a user can access only their own project's information. ABAC is sometimes referred to as claims-based access control (CBAC) or policy-based access control (PBAC). You often see the ABAC authorization mechanism in cloud computing services such as AWS and Azure.

The difference between ABAC and RBAC is that ABAC supports Boolean logic: if the user meets this condition, they are allowed to do that action. ABAC enables more flexible control of access permissions, as you can grant, adjust or revoke permissions instantly.
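
A minimal sketch of that condition logic with CASL (the Article subject and the authorId field are just example names, not from this article's data model):

const { defineAbility, subject } = require('@casl/ability');

const user = { id: 42 };

// The third argument is a condition object: the rule matches only
// subjects whose attributes satisfy it.
const ability = defineAbility((can) => {
  can('read', 'Article');
  can('update', 'Article', { authorId: user.id });
});

console.log(ability.can('update', subject('Article', { authorId: 42 }))); // true
console.log(ability.can('update', subject('Article', { authorId: 7 }))); // false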

RBAC is a simple but sufficient solution to manage access in an uncomplicated system. However, it is not flexible and scalable, and that is where ABAC becomes the more efficient approach. In contrast, ABAC may be costly to implement. There is no all-in-one solution; it depends on your business requirements. Remember to use the right tool for the job.

RBAC table structure can be visualized with:

Role based access control in multitenant system

And you can add attribute based permissions to RBAC model:

Attribute based permissions for roles (CASL)

Using CASL with roles and permissions

CASL is a robust authorization library for JavaScript which, in addition to RBAC, also supports ABAC by adding condition expressions to permissions. That's exactly what we need here, where the requirement is to define different permissions for custom groups: we need both roles and groups.

CASL has good resources and provides a great cookbook on how to implement roles with persisted permissions. To supplement that, you can check the Nest.js and CASL example, which also shows the table structures at different stages.

The code for the following part can be seen in the Gist.

Table structure

  • Global Users table
  • Users are grouped by tenant in TenantUsers
  • TenantUsers have a role or roles
  • Users can be added to a group
  • Groups and roles are tenant specific
  • Permissions can be added to a role

Table structure in DBDiagram syntax

Next we create the table structure in our code with the help of the Sequelize ORM, which describes the models, relations and associations.

We have the following models for our table structure:

  • models/rolepermissions.js
  • models/role.js
  • models/permission.js
  • models/group.js
  • models/groupuser.js
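
The relations between these models can be wired up with Sequelize associations, roughly like this (a sketch; the exact model, join table and foreign key names are assumptions based on the structure above):

// e.g. in models/index.js after the models have been loaded into db
// Roles have permissions through the RolePermissions join table.
db.Role.belongsToMany(db.Permission, { through: db.RolePermissions });
db.Permission.belongsToMany(db.Role, { through: db.RolePermissions });

// Users can be added to groups through the GroupUser join table.
db.Group.belongsToMany(db.User, { through: db.GroupUser });
db.User.belongsToMany(db.Group, { through: db.GroupUser });

// Groups and roles are tenant specific.
db.Group.belongsTo(db.Tenant);
db.Role.belongsTo(db.Tenant);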

The Sequelize model class for permissions looks like this:

const { Model } = require('sequelize');

module.exports = (sequelize, DataTypes) => {
  class Permission extends Model {
    static associate() {}

    toJSON() {
      return {
        id: this.id,
        action: this.action,
        subject: this.subject,
        fields: this.fields,
        conditions: this.conditions,
        inverted: this.inverted,
        system: this.system,
      };
    }
  }

  Permission.init(
    {
      id: {
        type: DataTypes.INTEGER,
        primaryKey: true,
        allowNull: false,
      },
      action: DataTypes.STRING,
      subject: DataTypes.STRING,
      fields: DataTypes.ARRAY(DataTypes.TEXT),
      conditions: DataTypes.JSON,
      inverted: DataTypes.BOOLEAN,
      system: DataTypes.BOOLEAN,
    },
    {
      sequelize,
      modelName: 'Permission',
    },
  );

  return Permission;
};

You can see all the models in the Gist.

Defining CASL permissions for a user

In this example we add the permissions directly to the JWT we generate and sign, so we don't have to fetch the user's permissions from the database (or cache) every time the user makes a request. We get the claims.role from the authentication token, and it can be e.g. "admin", "user" or whatever you need.

services/token.js is where we fetch the permissions for the role and generate the JWT. It fetches the permissions for the user's roles, gets the user's groups and those groups' roles and permissions. Then it merges the permissions to be unique and adds them to the JWT.
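
A simplified sketch of what such a token service could do; the jsonwebtoken package is real, but the permission-fetching helpers here are hypothetical stand-ins for the actual database queries:

const jwt = require('jsonwebtoken');

const generateToken = async (user) => {
  // Hypothetical helpers standing in for the actual database queries.
  const rolePermissions = await getPermissionsForRole(user.role);
  const groups = await getGroupsForUser(user.id);
  const groupPermissions = await getPermissionsForGroups(groups);

  // Merge role and group permissions, keeping only unique ones.
  const permissions = [...rolePermissions, ...groupPermissions].filter(
    (p, i, all) => all.findIndex((o) => o.id === p.id) === i,
  );

  return jwt.sign(
    {
      sub: user.id,
      role: user.role,
      groups: groups.map((g) => g.id),
      permissions,
    },
    process.env.JWT_SECRET,
    { expiresIn: '1h' },
  );
};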

The JWT looks something like this:

{
  "sub": "1234567890",
  "role": "user",
  "groups": [1, 3],
  "permissions": [
    {
      "action": "read",
      "subject": "Users",
      "fields": null,
      "conditions": {
        "id": "${user.tenant}"
      }
    },
    {
      "action": "edit",
      "subject": "User",
      "fields": null,
      "conditions": {
        "id": "${user.id}",
        "TenantId": "${user.tenant}"
      }
    }
  ]
}

Now we have all the permissions for the user in the JWT, which we can use in our services and routes. The solution isn't optimal, as the JWT can grow large if the user has lots of permissions.

In the JWT you can see that the conditions attribute has a placeholder. The middleware/abilities.js module parses the user's JWT, to which we added the permissions in JSON format, converts the conditions to CASL's conditions and creates a CASL Ability so that we can add permission checks to our actions later on.

const { AbilityBuilder, Ability } = require('@casl/ability');
const get = require('lodash/get');

// Replaces "${...}" placeholders in the conditions with values from vars;
// the placeholder path must match the shape of the vars object passed in.
const parseCondition = (template, vars) => {
  if (!template) {
    return null;
  }
  return JSON.parse(JSON.stringify(template), (_, rawValue) => {
    if (rawValue[0] !== '$') {
      return rawValue;
    }
    const name = rawValue.slice(2, -1);
    const value = get(vars, name);
    if (typeof value === 'undefined') {
      throw new ReferenceError(`Variable ${name} is not defined`);
    }
    return value;
  });
};

const defineAbilitiesFor = (jwtToken) => {
  const { can: allow, build } = new AbilityBuilder(Ability);
  jwtToken.permissions.forEach((item) => {
    const parsedConditions = parseCondition(item.conditions, { jwtToken });
    allow(item.action, item.subject, item.fields, parsedConditions);
  });
  return build();
};

module.exports = { defineAbilitiesFor };

Now you can check the permissions, e.g. in a route.

// Forbidden comes e.g. from the http-errors package.
const ForbiddenOperationError = {
  from: (abilities) => ({
    throwUnlessCan: (...args) => {
      if (!abilities.can(...args)) {
        throw Forbidden();
      }
    },
  }),
};

...
try {
    const abilities = defineAbilitiesFor(jwtToken);
    ForbiddenOperationError.from(abilities).throwUnlessCan('read', 'Users');
    const users = await UserService.getAll(user.tenant);
    return res.send({ entities: users });
} catch (e) {
    return next(e);
}
...

Summary

Implementing authorization is an important part of a software system, and you have several approaches to solve it. RBAC is a simple but sufficient solution to manage access in an uncomplicated system. ABAC, on the other hand, is a flexible and scalable approach but may be costly to implement. There is no all-in-one solution; it depends on your business requirements.

Using CASL with roles, groups and permissions makes handling the authorization side of things easier, but as you can see from the database diagram, the overall complexity of the system is still quite high, and we haven't even touched the user interface for managing the roles, groups and permissions.

Notes from Microsoft Ignite Azure Developer Challenge

Microsoft's Azure cloud computing service has grown steadily to challenge Amazon Web Services and Google Cloud Platform, but until now I hadn't had a chance to try it and see how it compares to the other platforms I've used. So when I came across the Microsoft Ignite: Cloud Skills Challenge November 2021 I was sold and took the opportunity to go through one of the available challenges: the Azure Developer Challenge. Here are my short notes about learning a minor part of Azure.

The Azure Developer Challenge was for developers interested in designing, building, testing, and maintaining cloud applications and services on Microsoft Azure. Each challenge was based on a collection of Microsoft Learn modules. If you completed your challenge before it ended, you got one free Microsoft Certification exam, such as "AZ-204: Developing Solutions for Microsoft Azure".

Microsoft Ignite: Azure Developer Challenge

"This challenge is for developers interested in designing, building, testing, and maintaining cloud applications and services on Microsoft Azure."

Microsoft Ignite

The Azure Developer Challenge covered the following Azure products:

  • Azure App Service
  • Azure Functions
  • Azure Cosmos DB
  • Azure Blob storage
  • Virtual machines in Azure
  • Azure Resource Manager templates
  • Azure Container Registry
  • Azure Service Bus
  • Azure Queue storage
  • Azure Event Hubs
  • Event Grid

Learning to use these different products was done through exercises which showed you how to do things and checked that you had done them correctly. The exercises used the Azure portal, where the Learn module gave you a free learning environment to use. Towards the end I had used up my free development environment credits for the day and had to skip some of the practicalities.

Exercise with Sandbox

After going through the introduction to different parts of Azure, the Learn module practically taught you to use Azure Functions. And not much more. With Azure Functions the exercises taught you to create serverless logic, execute functions with triggers, chain functions and create durable functions. You also learned to develop functions on your local machine. Azure Functions were used with Cosmos DB and webhooks, and for creating a (serverless) API, among other things. The last module was about building serverless apps with Go.
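
To give a taste of what the exercises revolve around, a minimal HTTP-triggered Azure Function in JavaScript (the classic Node.js programming model; not from the course material) looks roughly like this:

// index.js of an HTTP-triggered function
module.exports = async function (context, req) {
  const name = req.query.name || (req.body && req.body.name);

  // The function's HTTP response is set on the context object.
  context.res = {
    status: 200,
    body: name ? `Hello, ${name}!` : 'Pass a name in the query string or request body.',
  };
};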

Overall, the learning experience was nice and the practical exercises forced you to click through the Azure Portal and get the hang of how things work. I was in a bit of a hurry to go through all of the 33 modules, which were estimated to take around 21 hours. I think it took me about 10-12 hours.

Now the last step is to actually take the certification exam. Also, as the learning modules for different topics are still available, I will maybe go through some more. At least the "Azure Admin Challenge" looked interesting for my purposes.

Azure Portal with Console and VS Code

Create secure code with Secure Code Bootcamp

Software development has many aspects which the developer has to take care of and think about. One of them is information security and secure code, which affects the product and its users. There are different ways to learn information security and how to create secure, quality code, and this time I'll briefly go through what Secure Code Warrior's Secure Code Bootcamp has to offer.

For the record, other good resources I've come across are Kontra's application security training for the OWASP Top 10 and OWASP Top 10 API, and hands-on approaches like Cyber Security Base MOOC, Wargames, Hack The Box and Cybrary.

Secure Code Bootcamp

Kick-start your journey to creating more secure, quality code with Secure Code Bootcamp - our free mobile app for early-career coders.

Secure Code Bootcamp

Secure Code Warrior provides a learning platform for developers to increase their software security skills and guide each coder along their own preferred learning pathway. They have products, solutions and resources to help organizations' development teams ship quality code, and they also provide a free mobile app for early-career coders: Secure Code Bootcamp.

The application presents common vulnerabilities from the OWASP Top 10, and you get badges as you progress through each new challenge, unlocking new missions as you progress. It teaches you to identify vulnerable code, starting with short introductions and explanations for each vulnerability: how they happen and where. Each topic is presented as a mission with a briefing and code inspection tasks.

The OWASP Top 10 (2017) are:

  • Injection
  • Broken Authentication
  • Sensitive Data Exposure
  • XML External Entities (XXE)
  • Broken Access Control
  • Security Misconfiguration
  • Cross-Site Scripting (XSS)
  • Insecure Deserialization
  • Using Components with Known Vulnerabilities
  • Insufficient Logging & Monitoring

The Secure Code Bootcamp covers eight of the Top 10, as the last two are, I think, more or less difficult to present in this gamified context.

The mission briefing contains a couple of minutes of theory about the given vulnerability and teaches you what it is, where it occurs and how to prevent it.

After the briefing you're challenged with code examples in the language you've chosen (Node.js, Python: Django, Java: Spring, C# .NET: MVC). You practically swipe your way through code reviews by accepting or rejecting them. Reading code on a mobile device screen isn't optimal but suffices for the given task. It works better for Node.js than for Java Spring.
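
The challenges are swipeable snippets in this spirit (a made-up Node.js example, not from the app): you would reject the first and accept the second.

// Reject: user input concatenated straight into the SQL statement (SQL injection).
const getUserBad = (db, email) =>
  db.query(`SELECT * FROM users WHERE email = '${email}'`);

// Accept: a parameterized query, where the driver escapes the value.
const getUserGood = (db, email) =>
  db.query('SELECT * FROM users WHERE email = $1', [email]);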

Code inspection isn't always as easy as you would think, even if you know what to look for. After successfully inspecting a couple of code examples you're awarded a badge. The briefing tells you what to look for in the code, but sometimes it's a guess what is asked for. The code inspection sometimes requires knowledge of the used framework, and the inspection is done without context for the usage. In almost every inspection I got one wrong, which gave me 75% accuracy.

Summary

This approach to teaching security topics works OK if you're code oriented. You'll learn the OWASP Top 10 in practice through short theory lessons with pointers on how to prevent the vulnerabilities, and you'll test your code inspection skills by noticing vulnerable aspects of code fragments. Having swiped through the bootcamp, though, I found the code inspection parts not always so useful.

The marketing text says "progress along multiple missions and build secure coding skills." and "Graduate with fundamental secure coding skills for your next step as a coder.", and that is in my opinion a bit much to say. The bootcamp teaches the basic concepts of vulnerabilities and how they look in code, but it doesn't teach you to code securely.

Overall, the Secure Code Bootcamp for the OWASP Top 10 vulnerabilities is a good start for learning what, where, how and why vulnerabilities exist and for learning to identify them. You can do the bootcamp in the different languages available, so the replay value is good.

Starting with React Native and Expo

For some time I've wanted to experiment with React Native and mobile development outside native iOS, but there has always been something in the way of really getting started with it. Recently I had time to watch the React Europe 2020 conference talks, and "On Expo and React Native Web" by Evan Bacon got me inspired.

All the talks from React Europe 2020 can be found in their playlist on YouTube.

Universal React with Expo

Expo is an open-source platform for making universal native apps for Android, iOS, and the web with JavaScript and React.

Expo

"Expo: Universal React" talk showed what Expo can do and after some hassling around with Expo init templates I got a React Native app running on iOS for reading news articles from REST API with theme-support and some navigation written with TypeScript. And it also worked on Android, Web and as a PWA.

Expo is a toolchain built around React Native to help you quickly start an app. It provides a set of tools that simplify the development and testing of a React Native app and arms you with user interface components and services that are usually only available as third-party native React Native components. With Expo you can find all of them in the Expo SDK.

Understanding Expo for React Native

You can use the Expo Snack online editor to run your code on iOS, Android and web platforms. And if you need vector icons, there's an app for searching them. The Build icon app is also crafty.

Expo has good documentation to get you started, and by following the documentation (and choosing the managed workflow when initing an app) you get things running on your device or on the iOS and Android simulators. If you don't have a Mac or an iPhone, you can use the Snack playground to see how your app looks on iOS.
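
After initing a managed-workflow app, the generated entry point is a plain React component; the default App.js looks roughly like this:

import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

export default function App() {
  return (
    <View style={styles.container}>
      <Text>Open up App.js to start working on your app!</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});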

Expo comes with a client which you can use to send the app to your device or to others for review, which is very useful when testing, as you can see all code changes in the Expo client without creating apk or ipa files.

One great feature of Expo is that you can quickly test and show examples of solutions with the Snack editor and run the code either on the integrated simulator or on your device.

I've previously briefly used Ignite, which has a different approach to getting you running; compared to that, Expo is more of a platform than just tools, which has its good and not so good points. One of the main points is that Expo practically binds you to Expo and their platform, whereas if you use only tools you're "more free".

Drawbacks

The "Understanding Expo for React Native" post lists the following drawbacks in managed workflow. Some of those can be work aroud with bare workflow or with ejecting but then you lose the advantages of Expo workflow:

  1. Can't add native modules written in Objective-C, Swift, Java, Kotlin
  2. Can't use packages with native languages that require linking
  3. App has a big size as it is built with all Expo SDK solutions
  4. Often everything works well in Expo client but problems may occur in a standalone app.

Feelings

So far I've just started with Expo, and getting more adjusted to React Native and writing the Highlakka news client has been an insightful and good experience, especially when comparing the same app, "Highkara", written in Swift for iOS. My plan is to implement some of the Highkara features in Highlakka and see how it works as a universal app. Especially the PWA is an interesting option, as are Over the Air updates with Expo, as shown in the React Europe 2020 talk.

The Highlakka app is now "usable" and the iOS app builds nicely with Expo Turtle. The PWA runs on Netlify, which is great.

Tools

What about tools to help you with React Native development? Basically you can just use VS Code and go on with it.

The Flipper DevTool Platform for React Native talk at React Europe 2020 by Michel Weststrate and the "Flipper — A React Native revolution" post show one option. It's baked into React Native v0.62 but isn't yet available with Expo SDK 41 (there's a feature request for it and suggestions to use what Expo offers, like React Native debugger and Redux DevTools integration).

Build tools

Expo provides tools for building your application, and the Expo Dashboard shows your builds and their details. You can download the IPA-packaged app when the build is ready and then upload it to App Store Connect.

You can build your application for different platforms with the Expo CLI, for example a PWA with expo build:web and iOS with expo build:ios. And you can also do it from CI. Of course you still need an Apple account for submitting the app to the App Store.

While your build is running you can check the queue from Turtle.

Linting GraphQL Schema and queries

Analyzing code for compliance with guidelines is one part of code quality assurance, and automating it with tools like ESLint and binding the checks to the build process is a common and routine operation. But what about validating GraphQL schemas and queries? Here are some pointers to tools you can use to start linting your GraphQL schema.

Resources:

  • eslint-plugin-graphql: Checks tagged query strings inside JavaScript, or queries inside .graphql files, against a GraphQL schema.
  • graphql-eslint: Lints both GraphQL schema and GraphQL operations.
  • graphql-schema-linter: Command line tool to validate GraphQL schema definitions against a set of rules.
  • apollo-tooling: Brings together your GraphQL clients and servers with tools for validating your schema, linting your operations and generating static types.

Using ESLint for GraphQL checks

Depending on your project structure and how you've created your schema and queries, you can use different tools for linting. eslint-plugin-graphql requires you to have the schema ready, whereas graphql-eslint can do some checks and validation without the schema.

Using graphql-eslint with configuration in package.json, where the project uses code files to store the GraphQL schema / operations:

"eslintConfig": {
...
"overrides": [
      {
        "files": [
          "*.js"
        ],
        "processor": "@graphql-eslint/graphql"
      },
      {
        "files": [
          "*.graphql"
        ],
        "parser": "@graphql-eslint/eslint-plugin",
        "plugins": [
          "@graphql-eslint"
        ],
        "rules": {
          "prettier/prettier": "off",
          "@graphql-eslint/unique-fragment-name": "warn",
          "@graphql-eslint/no-anonymous-operations": "warn",
          "@graphql-eslint/no-operation-name-suffix": "error",
          "@graphql-eslint/no-case-insensitive-enum-values-duplicates": [
            "error"
          ]
        }
      }
    ]
}

The linting process can be enriched and extended with GraphQL type information if you are able to provide your GraphQL schema.
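
With graphql-eslint that means, for example, pointing the parser at your schema so that schema-aware rules can run. A sketch of the package.json override (the schema path and the chosen rule are placeholders):

"overrides": [
  {
    "files": ["*.graphql"],
    "parser": "@graphql-eslint/eslint-plugin",
    "parserOptions": {
      "schema": "./schema.graphql"
    },
    "plugins": ["@graphql-eslint"],
    "rules": {
      "@graphql-eslint/no-unreachable-types": "error"
    }
  }
]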

Introspection Query and schema linting

GraphQL APIs are required to be self-documenting: every standard-conforming GraphQL API needs to respond to queries about its own structure. This system is called introspection. The key to GraphQL validation is running the introspection query and getting the schema in the GraphQL Schema Definition Language (SDL) format. The query gets you the JSON representation of the shape of the API, which you can convert to SDL. See more about GraphQL schemas.
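
Under the hood this is just a POST request with the introspection query. A minimal sketch using the graphql package (assuming Node 18+ for the global fetch) that downloads the schema and prints it as SDL:

const { getIntrospectionQuery, buildClientSchema, printSchema } = require('graphql');

const downloadSchema = async (endpoint) => {
  // Send the standard introspection query to the running GraphQL service.
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: getIntrospectionQuery() }),
  });
  const { data } = await response.json();

  // Convert the JSON introspection result into an SDL string.
  return printSchema(buildClientSchema(data));
};

downloadSchema('http://localhost:4444/graphql').then(console.log);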

You can use apollo-tooling for the introspection. First start your GraphQL service and point apollo at it.

npx apollo client:download-schema --endpoint=http://localhost:4444/graphql schema.graphql

In this case the tooling outputs the schema in GraphQL SDL format (.graphql); you can also get it as JSON by using the schema.json filename.

Now that you have the GraphQL schema, you can lint it using graphql-schema-linter. Create the config graphql-schema-linter.config.js in your project root folder.

module.exports = {
  rules: ['enum-values-sorted-alphabetically'],
  schemaPaths: ['path/to/my/schema/files/**.graphql'],
  customRulePaths: ['path/to/my/custom/rules/*.js'],
  rulesOptions: {
    'enum-values-sorted-alphabetically': { sortOrder: 'lexicographical' }
  }
};

Then run graphql-schema-linter and you get results like:

1320:1 The type `Upload` is defined in the schema but not used anywhere.                      defined-types-are-used
74:3   The field `Company.phone_number` is not camel cased.                                   fields-are-camel-cased

Running graphql-schema-linter in CI/CD

For automating schema checks you can use graphql-schema-linter in your CI/CD pipeline. Here's an example for GitLab CI, where we run our GraphQL service inside Docker.

docker-compose-introspection.yml:

version: '3.1'

services:
  service-graphql-introspection:
    image: node:14-alpine
    container_name: graphql-service-introspection
    restart: unless-stopped
    environment:
      NODE_ENV: development
      ENVIRONMENT: introspection
      PORT: 4000
    ports:
      - "4000:4000"
    user: node
    volumes:
      - ./:/home/node
    working_dir: /home/node
    command: "npm run start:docker"

Where npm run start:docker is "./node_modules/nodemon/bin/nodemon.js --legacy-watch --exec ./node_modules/@babel/node/bin/babel-node.js src/app.js"

gitlab-ci.yml:

run_test:
  image: docker:19.03.1
  stage: test
  services:
    - docker:19.03.1-dind
  before_script:
    - apk add --no-cache docker-compose npm
  script:
    - "# Start GQL"
    - docker-compose -f docker-compose-introspection.yml up -d
    - "# Get introspection schema"
    - cd /tmp
    - npx apollo client:download-schema --endpoint=http://localhost:4000/graphql schema.graphql
    - cd $CI_PROJECT_DIR
    - docker-compose -f docker-compose-introspection.yml stop
    - docker-compose -f docker-compose-introspection.yml rm -f
    - "# Run tests"
    - npx graphql-schema-linter /tmp/schema.graphql

The results in the CI/CD pipeline could then look like this:

graphql-schema-linter in CI/CD

Automate your dependency management using an update tool

Software often consists not just of your own code but also depends on third-party libraries and other software, which have their own update cycles; new versions are released every now and then with fixes to vulnerabilities and with new features. Now the question is: what is your dependency management strategy and how do you automate it?

Fortunately, automated dependency updates for multiple languages are a solved problem, as there are several update tools to help you: Renovate, Dependabot (GitHub), Greenkeeper ($), Depfu ($) and Dependencies.io ($), to name some alternatives. In this blog post I will concentrate on using Renovate and integrating it with GitLab CI.

Renovate your dependencies

Renovate is an open source tool which works with most git hosting platforms (public or self-hosted), and it's possible to host the Renovate Bot yourself. It's installable via npm/yarn or from Docker Hub.

In short, the idea and workflow of dependency update tools is the following:

  1. Check for updates: the tool pulls down your dependency files and looks for any outdated or insecure requirements.
  2. Open pull requests: if any of your dependencies are out-of-date, the tool opens individual pull requests to update each one.
  3. Review and merge: you check that your tests pass, scan the included changelog and release notes, then hit merge with confidence.

Now you just run the dependency update tool on a regular basis in your continuous integration, watch the pull requests fly, and you get to keep your dependencies secure and up-to-date.

The manual chore of checking for updates, looking for changelogs, making changes, running tests, writing pull requests and more is now reduced to reviewing pull requests with better confidence about what has changed.

Self-hosted in GitLab CI

Renovate Bot is a Node.js application, so you have a couple of alternative ways to run it in your CI/CD environment. You can use a Node Docker image which installs and runs renovate, or you can use Renovate Bot's own Docker image, as I chose to do. We are using the docker-in-docker approach of running the Renovate container, which means you can start Docker containers from within another Docker container.

First create an account for the bot on the GitLab instance (the best choice) or use your own account. Then generate a personal access token with the api scope for renovate to access the repositories and create the branches and merge requests containing the dependency updates.

Then create a repository for the configuration; renovate will use that repo's CI pipelines. Paste your GitLab token under CI / CD > Variables as a new variable and give it the name RENOVATE_TOKEN. Set it to protected and masked to hide the token from the CI logs and to only use it for pipelines starting on protected branches (your master branch is protected by default).

You'll also need a GitHub access token with the repo scope for renovate to read sources and changelogs of dependencies hosted on GitHub. It's not important which GitHub account is used; it's just needed because GitHub's rate limiting would block your bot from making unauthenticated requests. Paste it as another variable with the name GITHUB_COM_TOKEN.

To configure Renovate we need to add three files to our repository:

config.js for configuring renovate:

module.exports = {
  platform: 'gitlab',
  endpoint: 'https://gitlab.com/api/v4/',
  assignees: ['your-username'],
  baseBranches: ['master'],
  labels: ['renovate', 'dependencies', 'automated'],
  onboarding: true,
  onboardingConfig: {
    extends: ['config:base'],
  },
};

repositories.txt for repositories we want to check:

openpatch/ui-core
openpatch/template

.gitlab-ci.yml to run renovate:

default:
  image: docker:19
  services:
    - docker:19-dind

# Because our GitLab runner doesn't have TLS certs mounted and runs on K8s
variables:
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ''

renovate:
  stage: build
  script:
    - docker run -e RENOVATE_TOKEN="$RENOVATE_TOKEN" -e GITHUB_COM_TOKEN="$GITHUB_COM_TOKEN" -v $PWD/config.js:/usr/src/app/config.js renovate/renovate:13 $(cat repositories.txt | xargs)
  only:
    - master

Now everything is finished, and when you run the pipeline renovate will check the repositories in repositories.txt and create merge requests if a dependency needs to be updated.

The first merge request to a repository is "Configure Renovate", which helps you understand and configure settings before regular merge requests begin.

As a last step, create a pipeline schedule to run the pipeline every x hours or every x days or whatever you like. You can do this in the bot's config project / repository under CI / CD > Schedules by creating a new schedule and choosing the frequency to run your bot.

You can also reduce noise with package grouping and automerging. Here's an example of grouping eslint-themed packages and automerging them if the tests pass.

The project's renovate.json:

{
  "packageRules": [
    {
      "packagePatterns": ["eslint"],
      "groupName": "eslint",
      "automerge": true,
      "automergeType": "branch"
    }
  ]
}

Summary

Congratulations! You've now automated dependency updating with GitLab CI. Just keep waiting for the merge requests and see if your test suites succeed. If you really trust your test suite, you can even let renovate auto-merge the requests when the pipeline succeeds.

Visual Studio Code Extensions for better programming

Visual Studio Code has become "The Editor" for many in software development, and it has many extensions which you can use to extend its functionality and customize it for your needs. Here's a short list of the extensions I use for frontend (React, JavaScript, Node.js), backend (GraphQL, Python, Node.js, Java, PHP, Docker) and database (PostgreSQL, MongoDB) development.

General

editorconfig
Attempts to override user/workspace settings with settings found in .editorconfig files.

Visual Studio IntelliCode
Provides AI-assisted development features for Python, TypeScript/JavaScript and Java developers in Visual Studio Code, with insights based on understanding your code context combined with machine learning.

GitLens
Visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and so much more.
Git Blame
See Git Blame information in the status bar for the currently selected line.

Local History
Plugin for maintaining local history of files.

Language and technology specific

ESLint
Integrates ESLint into VS Code.

Prettier
Opinionated code formatter which enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary.

Python
Linting, Debugging (multi-threaded, remote), Intellisense, Jupyter Notebooks, code formatting, refactoring, unit tests, and more.

PHP Intelephense
PHP code intelligence for Visual Studio Code is a high performance PHP language server packed full of essential features for productive PHP development.

Java Extension Pack
Popular extensions for Java development and more.

Docker
Makes it easy to build, manage, and deploy containerized applications from Visual Studio Code. It also provides one-click debugging of Node.js, Python, and .NET Core inside a container.

Markdown All in One
All you need to write Markdown (keyboard shortcuts, table of contents, auto preview and more)
Markdownlint
Includes a library of rules to encourage standards and consistency for Markdown files.
Markdown Preview Enhanced
Provides you with many useful functionalities such as automatic scroll sync, math typesetting, mermaid, PlantUML, pandoc, PDF export, code chunk, presentation writer, etc.

Prettify JSON
Prettify ugly JSON inside VSCode.

PlantUML
Rich PlantUML support for Visual Studio Code.

HashiCorp Terraform
Syntax highlighting and autocompletion for Terraform

Database

PostgreSQL
Query tool for PostgreSQL databases. While there is a database explorer it is NOT meant for creating/dropping databases or tables. The explorer is a visual aid for helping to craft your queries.

MongoDB
Makes it easy to work with MongoDB.

GraphQL
Adds syntax highlighting, validation, and language features like go to definition, hover information and autocompletion for graphql projects. This extension also works with queries annotated with gql tag.
GraphQL for VSCode
VSCode extension for GraphQL schema authoring & consumption.
Apollo GraphQL for VS Code
Rich editor support for GraphQL client and server development that seamlessly integrates with the Apollo platform.

Javascript

Babel
JavaScript syntax highlighting for ES201x, React JSX, Flow and GraphQL.

Jest
Use Facebook's Jest with pleasure.

npm
Supports running npm scripts defined in the package.json file and validating the installed modules against the dependencies defined in the package.json.

User Interface specific

indent-rainbow
Simple extension to make indentation more readable

Rainbow Brackets
Rainbow colors for the round brackets, the square brackets and the squiggly brackets.

vscode-icons
Icons for filetypes in file browser.

Other tips

VS Code: Show full path in title bar
With Code open, hit Command+, and set: "window.title": "${activeEditorLong}${separator}${rootName}"

Slow integrated terminal in macOS
codesign --remove-signature "/Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Renderer).app"

Running static analysis tools for PHP

We all write bug-free code, but analyzing your code is still an important part of software development, in case for some reason there has been some mishap with typing. Here's a short introduction to running static analysis for PHP code.

Static analysis tools for PHP

The curated list of static analysis tools for PHP shows you many options for doing analysis. Too many, you say? Yes, but fortunately you can start with a couple of tools and continue with the specific needs you have.

You can run the different analysis tools by installing them with composer, or you can use the Toolbox, which helps to discover and install tools and can be used as a Docker container.

First, fetch the Docker image with static analysis tools for PHP:

$ docker pull jakzal/phpqa:<your php version>
e.g.
$ docker pull jakzal/phpqa:php7.4-alpine

PHPMD: PHP Mess Detector

One of the tools provided in the image is PHPMD, which aims to be a PHP equivalent of the well-known Java tool PMD. PHPMD can be seen as a user-friendly and easy-to-configure frontend for the raw metrics measured by PHP Depend.

It looks for several potential problems within the source, such as:

  • Possible bugs
  • Suboptimal code
  • Overcomplicated expressions
  • Unused parameters, methods, properties

You can install phpmd with composer: composer require phpmd/phpmd. Then run it with e.g. ./vendor/bin/phpmd src html unusedcode --reportfile phpmd.html

Or run the command below, which runs phpmd in a Docker container and mounts the current working directory as /project.

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine \
    phpmd src html cleancode,codesize,controversial,design,naming,unusedcode --reportfile phpmd.html

You can also write your own custom rule set to reduce false positives: phpmd.test.xml

<?xml version="1.0"?>
<ruleset name="VV PHPMD rule set"
         xmlns="http://pmd.sf.net/ruleset/1.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sf.net/ruleset/1.0.0
                     http://pmd.sf.net/ruleset_xml_schema.xsd"
         xsi:noNamespaceSchemaLocation="
                     http://pmd.sf.net/ruleset_xml_schema.xsd">
    <description>
        Custom rule set that checks my code.
    </description>
<rule ref="rulesets/codesize.xml">
    <exclude name="CyclomaticComplexity"/>
    <exclude name="ExcessiveMethodLength"/>
    <exclude name="NPathComplexity"/>
    <exclude name="TooManyMethods"/>
    <exclude name="ExcessiveClassComplexity"/>
    <exclude name="ExcessivePublicCount"/>
    <exclude name="TooManyPublicMethods"/>
    <exclude name="TooManyFields"/>
</rule>
<rule ref="rulesets/codesize.xml/TooManyFields">
    <properties>
        <property name="maxfields" value="21"/>
    </properties>
</rule>
<rule ref="rulesets/cleancode.xml">
    <exclude name="StaticAccess"/>
    <exclude name="ElseExpression"/>
    <exclude name="MissingImport" />
</rule>
<rule ref="rulesets/controversial.xml">
    <exclude name="CamelCaseParameterName" />
    <exclude name="CamelCaseVariableName" />
    <exclude name="Superglobals" />
</rule>
<rule ref="rulesets/design.xml">
    <exclude name="CouplingBetweenObjects" />
    <exclude name="NumberOfChildren" />
</rule>
<rule ref="rulesets/design.xml/NumberOfChildren">
    <properties>
        <property name="minimum" value="20"/>
    </properties>
</rule>
<rule ref="rulesets/naming.xml">
    <exclude name="ShortVariable"/>
    <exclude name="LongVariable"/>
</rule>
<rule ref="rulesets/unusedcode.xml">
    <exclude name="UnusedFormalParameter"/>
</rule>
<rule ref="rulesets/codesize.xml/ExcessiveClassLength">
    <properties>
        <property name="minimum" value="1500"/>
    </properties>
</rule>
</ruleset>

Then run your analysis with:

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine phpmd src html phpmd.test.xml unusedcode --reportfile phpmd.html

You get a list of found issues formatted into an HTML file:

PHPMD Report

PHPStan - PHP Static Analysis Tool

"PHPstan focuses on finding errors in your code without actually running it. It catches whole classes of bugs even before you write tests for the code. It moves PHP closer to compiled languages in the sense that the correctness of each line of the code can be checked before you run the actual line."

Installing with composer: composer require --dev phpstan/phpstan

Or run it in a Docker container:

docker run -it --rm -v $(pwd):/project -w /project jakzal/phpqa:php7.4-alpine phpstan analyse --level 1 src

By default you get a report in the console, formatted as a table with errors grouped by file, colorized: for human consumption.

PHPStan report

By default PHPStan performs only the most basic checks; you can pass a higher rule level through the --level option (0 is the loosest and 8 is the strictest) to analyse code more thoroughly. Start with 0 and increase the level as you go, fixing possible issues.
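
You can also persist the level and paths in a phpstan.neon configuration file in the project root; a minimal sketch (the paths are placeholders):

parameters:
    level: 5
    paths:
        - src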

PHPStan found some issues which PHPMD didn't find, but the output of PHPStan could be better. There's a web UI for browsing found errors, where you can click to open your editor of choice on the offending line, but you have to pay for it. PHPStan Pro costs 7 EUR per month for individuals and 70 EUR for teams.

VS Code extensions for PHP

If you're using Visual Studio Code for PHP programming, there are some extensions to help you.

PHP Intelephense
PHP code intelligence for Visual Studio Code provides better IntelliSense than the VS Code builtin and also does some signature checking etc. The extension also has a premium version with some additional features.

Short notes on tech 49/2020

Week 49, 2020

Development and Operations

Using SSL certificates from Let’s Encrypt in your Kubernetes Ingress via cert-manager
Walkthrough of the process of automating the issuance and renewal of certificates provided by Let's Encrypt for Kubernetes Ingress using the cert-manager add-on. (from cloudseclist.com)

Use Amazon EC2 Mac Instances to Build & Test macOS, iOS, ipadOS, tvOS, and watchOS Apps
"Powered by Mac mini hardware and the AWS Nitro System, you can use Amazon EC2 Mac instances to build, test, package, and sign Xcode applications for the Apple platform including macOS, iOS, iPadOS, tvOS, watchOS, and Safari." The downside of this is that "The instances are launched as EC2 Dedicated Hosts with a minimum tenancy of 24 hours" which is due Apple EULA and thus one CI build costs about $26. And what I read from HN the real viable option is still to use MacStadium.

Tools of the trade

cloudquery
"cloudquery transforms your cloud infrastructure into queryable SQL tables for easy monitoring, governance and security." (from cloudseclist.com)

k8s-security-policies
"Repository providing a security policies library that is used for securing Kubernetes clusters configurations. The security policies are created based on CIS Kubernetes benchmark and rules defined in Kubesec.io." (from cloudseclist.com)

alyssaxuu/screenity
"Screenity is a feature-packed screen and camera recorder for Chrome. Annotate your screen to give feedback, emphasize your clicks, edit your recording, and much more." (from Weekend Reading)

Miscellaneous

Why Apple's replacement for Intel processors works really, really well
"They added Intel's memory-ordering to their CPU. When running translated x86 code, they switch the mode of the CPU to conform to Intel's memory ordering."

What software and hardware I use

There was a discussion on Koodiklinikka Slack about what software people use, and how people have made "/uses" pages for that purpose. Inspired by Wes Bos's /uses from the "Syntax" podcast, here's my list.

Check my /uses page to see what software and hardware I use for full-stack development in JavaScript, Node.js, Java, Kotlin, GraphQL, PostgreSQL and more. The list excludes tools from different customers, like GitLab, Rocket.Chat, etc.

For more choices check uses.tech.