2017 Retrospective

It’s January 2018, and while I’m gathering my notes for the year’s first post it’s a good time to look back at what I wrote in 2017 and make plans for the new year. In 2017 I managed to write at my usual leisurely pace and put together 17 articles, of which 6 were something other than monthly notes. On average I wrote 1.4 posts per month. I visited some meetups, did software development and tested technology. Business as usual, and I presume it’s going to continue this way this year as well.

Monthly notes

Writing the Monthly notes series about interesting articles I’ve come across has proved to be a good way to make sure I keep reading about what happens in software development and keep thinking about it. Collecting articles into a monthly post has worked better than publishing weekly. In July I was mostly mountain biking and away from the computer, so there were no Monthly notes.

Meetups

The meetup scene in Helsinki has grown, and there are several interesting events you can attend almost monthly. That said, it’s also starting to get crowded, and events with good topics tend to fill up quickly. I usually find myself going to events to hear war stories about Amazon Web Services, Docker, DevOps, frontend and mobile. It’s useful to hear how others do things and to get new ideas. Meetups and conferences are also a nice way both to freshen your thinking and to get to know people working in the same field.

In Nebula Tech Thursday – Beer & DevOps we heard stories about “Cloud Analytics – Providing Insight on Application Health and Performance” and “Building a Full Devops Pipeline with Open Source Tools”. OWASP Helsinki chapter meeting #31 presented topics like “DevSec – Developers are the key to security”, “Docker Security” and “Leaking credentials – a security malpractice more common than expected”. Both events were nice, and as usual Nebula Tech Thursday came with great food and drinks. If you follow me on Twitter you might have noticed that I went to more meetups than I wrote about, like Solita Core and Slush.D.

Software development as usual

Microservices and Docker have changed the way we do things, and to Dockerize all the things you can even run Ansible inside a Docker container. You might ask why, and the answer is easy: to isolate all of the required dependencies from the host machine and to get exactly the Ansible version we want.

To make software development more reliable I introduced Git pre-commit and pre-receive hooks for validating YAML into our continuous integration process. Validating YAML can be done with yamllint, and hooking it into pre-commit or pre-receive automates checking for syntax validity, for weirdnesses like key repetition, and for cosmetic problems such as line length, trailing spaces and indentation.

Other things

As an engineer I’m interested in technology and gadgets, and sometimes I get things to test. In July I wrote about keeping data secured with the iStorage datAshur Personal2 USB flash drive. It’s a USB flash drive with a combination of hardware encryption, a physical keypad and tamper-proofing. Small external devices are easy to lose and can leave your data vulnerable if not encrypted, so the hardware-encrypted USB flash drive seemed quite crafty.

Awesome times ahead

New year, old me. Or something like that. The plan is to continue as before: write about technology, collect interesting articles, learn new things about software development and, of course, ride my mountain bike. Training for the Enduro racing season has already started.

So, stay tuned by subscribing to the RSS feed or following me on Twitter. Also check out my other blog, which is in Finnish.

Monthly notes 25

December has gone by fast, and this time the monthly notes are mostly pointers to tools and resources, especially for accessibility, which is an important aspect of web development. If you don’t follow front-end development actively, check out the recap of its development in 2017. And to learn more about security, it’s good to read the updated OWASP Top 10 list. Happy reading and happy holidays!

Issue 25, 20.12.2017

Web development

A recap of front-end development in 2017
tl;dr; PWA, yarn, serverless, vue.js, css-in-js, GraphQL, React Router 4, types in JavaScript.

Pointers for better accessibility

Inclusive Components
A blog about designing inclusive web interfaces, piece by piece. It aims to be a pattern library.

Web Accessibility In Mind
Resources for reading about Web accessibility.

Web Accessibility Checklist
A beginner’s guide to web accessibility.

aXe
Nice open-source tool for accessibility testing. Runs right in your web browser.

NonVisual Desktop Access
Developing for better accessibility is easier when you can test how end users “see” things. NVDA (NonVisual Desktop Access) is a free “screen reader” for Windows which enables blind and vision impaired people to use computers. It reads the text on the screen in a computerised voice.

Security

OWASP Top 10 – 2017
The Ten Most Critical Web Application Security Risks. Read the PDF.

Internet Chemotherapy
Internet Chemotherapy was a 13-month project between Nov 2016 and Dec 2017. It has been known under names such as ‘BrickerBot’ and ‘bad firmware upgrade’.

Testing tools

Cypress
“Cypress is the new standard in front-end testing that every developer and QA engineer needs. No more Selenium. Lots more power.”

TestCafe
“A Node.js tool to automate end-to-end web testing. Write tests in JS or TypeScript, run them and view results.”

mountebank
Provides cross-platform, multi-protocol test doubles over the wire. Simply point your application under test to mountebank instead of the real dependency, and test like you would with traditional stubs and mocks.

Something different

The 10 Best Mountain Biking Videos of the Year

Git pre-commit and pre-receive hooks: validating YAML

Software development has many steps you can automate, and one useful thing to automate is adding Git hooks to validate your commits to version control. Git hooks fire off custom client-side and server-side scripts when certain important actions occur. Validating committed files’ contents matters for syntax validity, and even more so when providing Spring Cloud Config configurations in YAML for microservices, as otherwise things fail.

Validating YAML can be done with yamllint, hooked into pre-commit or pre-receive. It not only checks for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces and indentation. Here’s a short overview to get you started with yamllint in Git commit hooks.

Quickstart for yamllint

Installing yamllint

On Fedora / CentOS:
$ sudo dnf install yamllint
 
using pip, the Python package manager:
$ sudo pip install yamllint
 
or on macOS:
$ sudo -H python -m pip install yamllint

You can also install yamllint from source, e.g. when network connectivity is limited. The linter depends on pathspec >= 0.5.3 and pyyaml >= 3.12.

Custom config

Yamllint is quite strict by default, and you might want to relax it with a custom configuration. For example, I needed to allow long lines. You can also disable checks for a specific line with a comment.

$ cat yamllint-config.yml
 
extends: default
 
rules:
  line-length: disable
  comments:
    require-starting-space: false
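As noted above, individual checks can also be silenced for a single line with a yamllint directive comment. A minimal sketch (the key and value here are made up):

```yaml
# sketch: exempting one line from a single rule
some_key: a deliberately long value that would otherwise trigger the line-length rule  # yamllint disable-line rule:line-length
```

This is handy when a file is otherwise clean but one generated line can’t comply.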

Usage

$ yamllint file.yml other-file.yaml

Usage with custom config:

$ yamllint -c yamllint-config.yml .

Or with custom config without config file:

$ yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" file.yaml

Or, for a more specific case, running yamllint in a Jenkins job’s workspace and validating only files with a specific suffix:

$ find . -type f -iname '*.j2' -exec yamllint -s -c yamllint-config.yaml {} \;

Pre-commit hook and yamllint

A better way to use yamllint is to integrate it with Git via a pre-commit or pre-receive hook. Adding yamllint to a pre-commit hook is easy with pre-commit, a framework for managing and maintaining multi-language pre-commit hooks.

Installing pre-commit:

Using pip:
$ pip install pre-commit
 
Or on macOS:
$ brew install pre-commit

To enable the yamllint pre-commit plugin, just add a file called .pre-commit-config.yaml to the root of your project with the following snippet:

$ cat .pre-commit-config.yaml
---
- repo: https://github.com/adrienverge/yamllint.git
  sha: v1.10.0
  hooks:
    - id: yamllint

With custom config and strict mode:

$ cat .pre-commit-config.yaml
---
repos:
 - repo: https://github.com/adrienverge/yamllint.git
   sha: v1.10.0
   hooks:
     - id: yamllint
       args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']

You can also use repository-local hooks, e.g. when it makes sense to distribute the hook scripts with the repository. Install yamllint locally and configure it in your project root’s .pre-commit-config.yaml as a repository-local hook. As you can see, I’m using a custom config for yamllint.

$ cat .pre-commit-config.yaml
---
- repo: local
  hooks: 
  - id: yamllint
    name: yamllint
    entry: yamllint -c yamllint-config.yml .
    language: python
    types: [file, yaml]

Note: if you’re linting files with a suffix other than yaml/yml, such as Ansible template files ending in .j2, use types: [file]
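For instance, a hypothetical repository-local hook limited to Ansible templates could pair types: [file] with a files pattern (the hook id and config file name below are made up):

```yaml
- repo: local
  hooks:
  - id: yamllint-j2
    name: yamllint (Ansible templates)
    entry: yamllint -c yamllint-config.yml
    language: python
    files: \.j2$
    types: [file]
```

The files key is a regular expression, so only paths ending in .j2 are passed to yamllint.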

Pre-receive hook and yamllint

Using pre-commit hooks to process commits is easy, but it’s often better to do the checks on the server side with pre-receive hooks. Pre-receive hooks are useful for satisfying business rules, enforcing regulatory compliance and preventing certain common mistakes. Common use cases are requiring commit messages to follow a specific pattern or format, locking a branch or repository by rejecting all pushes, preventing sensitive data from being added to the repository by blocking keywords, patterns or filetypes, and preventing a PR author from merging their own changes.

One example of a pre-receive hook is running a linter like yamllint to ensure that a business-critical file is valid. In practice the hook works similarly to a pre-commit hook, but the files you check in to the repository are not kept there “just like that”: some are stored as deltas to others, or their contents are compressed. There is no place where these files are guaranteed to exist in their “ready-to-consume” state, so you must jump through some extra hoops to make them available for opening and running checks on.

There are different approaches to making files available to the pre-receive hook’s script, as described on Stack Overflow. One way is to check the files out to a temporary location, or on Linux you can point /dev/stdin as the input file and feed the files through a pipe. Both ways follow the same principle: check the files modified between the new and the old revision and, if a file is present in the new revision, run the validation script on it with the custom config.

Using /dev/stdin trick in Linux:

#!/usr/bin/env bash

set -e

# Make sure the required tools are available before doing any work
if ! command -v python > /dev/null || ! command -v yamllint > /dev/null; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi

# A pre-receive hook reads "<oldrev> <newrev> <refname>" lines from stdin
while read -r oldrev newrev refname; do
    # Get a list of all objects in the new revision
    objects=$(git ls-tree --full-name -r "${newrev}")

    # Get the names of the files that have been modified
    # between the new revision and the old revision
    git diff --name-only "${oldrev}" "${newrev}" | while read -r file; do
        # Search for the file name in the list of all objects
        object=$(echo -e "${objects}" | egrep "(\s)${file}\$" | egrep '\.yml$' | awk '{ print $3 }')
        # If it's not present, continue to the next iteration
        if [ -z "${object}" ]; then
            continue
        fi

        # Get the file content in the commit, point /dev/stdin as the
        # input file and pipe the content through for syntax validation
        echo "${file}"
        git show "${newrev}:${file}" | yamllint -d "{extends: relaxed, rules: {line-length: disable, comments: disable, trailing-spaces: disable, empty-lines: disable}}" /dev/stdin || exit 1
    done
done

Alternative way: copy changed files to temporary location

#!/usr/bin/env bash

set -e

COMMAND='yamllint'
TEMPDIR=$(mktemp -d)
BAD_FILE=0

# Make sure the required tools are available before doing any work
if ! command -v python > /dev/null || ! command -v yamllint > /dev/null; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi

# A pre-receive hook reads "<oldrev> <newrev> <refname>" lines from stdin
while read -r oldrev newrev refname; do

    # Get the names of the files that have been modified
    # between the new revision and the old revision
    files=$(git diff --name-only "${oldrev}" "${newrev}")

    # Get a list of all objects in the new revision
    objects=$(git ls-tree --full-name -r "${newrev}")

    # Iterate over each of these files
    for file in ${files}; do

        # Search for the file name in the list of all objects
        object=$(echo -e "${objects}" | egrep "(\s)${file}\$" | awk '{ print $3 }')

        # If it's not present, continue to the next iteration
        if [ -z "${object}" ]; then
            continue
        fi

        # Otherwise, create all the necessary subdirectories in the temp directory
        mkdir -p "${TEMPDIR}/$(dirname "${file}")" &> /dev/null
        # and output the object content into its original file name
        git cat-file blob "${object}" > "${TEMPDIR}/${file}"

    done
done

# Now loop over each file in the temp dir and check it for valid syntax
files_found=$(find "${TEMPDIR}" -name '*.yml')
for fname in ${files_found}; do
    if ! ${COMMAND} "${fname}"; then
        echo "ERROR: parser failed on ${fname}"
        BAD_FILE=1
    fi
done

rm -rf "${TEMPDIR}" &> /dev/null

if [ "${BAD_FILE}" -eq 1 ]; then
    exit 1
fi

exit 0

Testing a pre-receive hook locally is a bit more difficult than testing a pre-commit hook, as you need an environment where you have a remote repository. Fortunately you can use the process described for GitHub Enterprise pre-receive hooks: create a local Docker environment to act as a remote repository that can execute the pre-receive hook.
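If a full Docker setup feels heavy for a quick smoke test, a rough local approximation (assuming git is on your PATH; the paths and the placeholder hook below are made up for illustration) is to push into a local bare repository that has the hook installed:

```shell
#!/usr/bin/env bash
# A local bare repository acts as the "remote", so pushes from a
# clone of it will trigger its pre-receive hook.

git init --bare /tmp/hooktest.git

# Install a placeholder pre-receive hook; replace the body with a
# real validation script such as the yamllint one above.
printf '%s\n' '#!/bin/sh' 'echo "pre-receive hook ran"' \
    > /tmp/hooktest.git/hooks/pre-receive
chmod +x /tmp/hooktest.git/hooks/pre-receive

# Commits pushed from this working copy now exercise the hook.
git clone /tmp/hooktest.git /tmp/hookclient
```

Commit a YAML file in /tmp/hookclient and git push; the hook’s output shows up in the push result, just as it would from a real remote.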

Monthly notes 24

Rain, cold winds and darkness have arrived in Finland, and there are many good reasons to stay at home with a warm mug of coffee and read. This month’s notes cover how you should optimize images, how your eyes tell lies and how to work around that in design. You also get pointers to security tools for Docker and to running Java apps with Docker and Kubernetes. And if you haven’t migrated to HTTPS yet, check out Troy Hunt’s happy path. Happy reading.

Issue 24, 28.11.2017

User Interface

Essential Image Optimization (ebook)
Image optimization should be automated. It’s easy to forget, best practices change, and content that doesn’t go through a build pipeline can easily slip. Addy Osmani’s eBook has the essential information you need to get started.

Optical Effects in User Interfaces (for True Nerds)
Making optically balanced icons, correctly aligned shapes, and perfect corner rounding when your eyes are telling lies. An interesting article about optical effects in user interfaces.

Microservices

Essential (and free) security tools for Docker
Docker makes it easy for developers to package up and push out application changes, and spin up run-time environments on their own. But this also means that they can make simple but dangerous mistakes that will leave the system unsafe without anyone noticing until it is too late. Fortunately, there are some good tools that can catch many of these problems early, as part of your build pipelines and run-time configuration checks. Jim Bird has put together a short list of the essential open source tools that are available today to help you secure your Docker environment.

Deploying Java Applications with Docker and Kubernetes
A good intro to using Docker and Kubernetes for a typical Spring web application. (from Java Weekly 199)

Technical

The 6-Step “Happy Path” to HTTPS
HTTPS is now somewhat of a necessity, and while the path to it can be difficult, it can also be fundamentally simple. Troy Hunt details the 6-step “Happy Path”: the fastest, easiest way to get HTTPS up and running right.

Fast By Default: Modern Loading Best Practices (Chrome Dev Summit 2017)
Optimizing sites to load instantly on mobile is far from trivial. Costly JavaScript can take seconds to process, we often aren’t sensitive to users’ data plans, and browsers don’t know which UX-critical resources should load first. One interesting talk is “Queryable Real User Monitoring for the web?” (https://www.youtube.com/watch?v=_srJ7eHS3IM&feature=youtu.be&t=11m3s), which introduces the Chrome User Experience Report (https://blog.chromium.org/2017/10/introducing-chrome-user-experience-report.html): a dataset of real-world performance as experienced by Chrome users, which you can query with SQL.

Introducing Code Smells into Code
Code smells are hints that show you potential problems in your code. Martin Fowler describes 21 code smells, and Adrian Bolboaca came up with the Brutal Refactoring Coding Game. In the game participants are asked to write the cleanest code possible. If the facilitator spots any code smell, participants must stop and immediately remove it. The post is not about the game but about code smells introduced into code. The game allows observing how and when code smells are introduced (because the whole point is to spot and remove them). (from Java Weekly 199)

Miscellaneous

Becoming an accidental architect
“How does one transition from developer to accidental architect? It doesn’t happen overnight.” The article describes the journey from developer to architect and how software architects can balance technical proficiencies with an appropriate mastery of communication.

Something different

Pole Bicycles Announces New CNC-Machined ‘Machine’
Finnish bike company Pole has always stamped its own path and redefined how mountain bikes can be long and fast. Now they’ve redefined how a frame is made and announced a robotically CNC-machined frame which is also 100% made in Finland. “The Machine is a cutting edge 29″ superbike which can be used as the one bike for everything. The travel on the bike is 180mm front and 160mm rear. The frame geometry follows Pole’s notoriously long and slack geometry with steep seat tube for better climbing. On our tests, the Machine was even easier to ride than the EVOLINK’s.”

Dockerizing all the things: Running Ansible inside Docker container

Automating things in software development is more than useful, and using Ansible is one way to automate software provisioning, configuration management and application deployment. Normally you would install Ansible on your control node just like any other application, but an alternative strategy is to ship Ansible inside a standalone Docker image. Why would you do that? Among other things, this approach has benefits for operational processes.

Although Ansible does not require installing any agents on managed nodes, the environment where Ansible itself runs is not so simple to set up. On the control node it requires specific Python libraries and their system dependencies. So instead of using a package manager to install Ansible and its dependencies, we just pull a Docker image.

By creating an Ansible Docker image you get exactly the Ansible version you want and isolate all of the required dependencies from the host machine, where they might otherwise break things in other areas. And to keep things small and clean, the image is based on Alpine Linux.

The Dockerfile is:

FROM alpine:3.7
 
ENV ANSIBLE_VERSION 2.5.0
 
ENV BUILD_PACKAGES \
  bash \
  curl \
  tar \
  openssh-client \
  sshpass \
  git \
  python \
  py-boto \
  py-dateutil \
  py-httplib2 \
  py-jinja2 \
  py-paramiko \
  py-pip \
  py-yaml \
  ca-certificates
 
# If installing ansible@testing
# RUN echo "@testing http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
 
RUN set -x && \
    \
    echo "==> Adding build-dependencies..."  && \
    apk --update add --virtual build-dependencies \
      gcc \
      musl-dev \
      libffi-dev \
      openssl-dev \
      python-dev && \
    \
    echo "==> Upgrading apk and system..."  && \
    apk update && apk upgrade && \
    \
    echo "==> Adding Python runtime..."  && \
    apk add --no-cache ${BUILD_PACKAGES} && \
    pip install --upgrade pip && \
    pip install python-keyczar docker-py && \
    \
    echo "==> Installing Ansible..."  && \
    pip install ansible==${ANSIBLE_VERSION} && \
    \
    echo "==> Cleaning up..."  && \
    apk del build-dependencies && \
    rm -rf /var/cache/apk/* && \
    \
    echo "==> Adding hosts for convenience..."  && \
    mkdir -p /etc/ansible /ansible && \
    echo "[local]" >> /etc/ansible/hosts && \
    echo "localhost" >> /etc/ansible/hosts
 
ENV ANSIBLE_GATHERING smart
ENV ANSIBLE_HOST_KEY_CHECKING false
ENV ANSIBLE_RETRY_FILES_ENABLED false
ENV ANSIBLE_ROLES_PATH /ansible/playbooks/roles
ENV ANSIBLE_SSH_PIPELINING True
ENV PYTHONPATH /ansible/lib
ENV PATH /ansible/bin:$PATH
ENV ANSIBLE_LIBRARY /ansible/library
 
WORKDIR /ansible/playbooks
 
ENTRYPOINT ["ansible-playbook"]

The Dockerfile declares an entrypoint enabling the running container to function as a self-contained executable, working as a proxy to the ansible-playbook command.

Build the image as:

docker build -t walokra/ansible-playbook .

You can test the ansible-playbook running inside the container, e.g.:

docker run --rm -it -v $(pwd):/ansible/playbooks \
    walokra/ansible-playbook --version

The command for running e.g. site.yml playbook with ansible-playbook from inside the container:

docker run --rm -it -v $(pwd):/ansible/playbooks \
    walokra/ansible-playbook site.yml

If Ansible is interacting with external machines, you’ll need to mount an SSH key pair for the duration of the play:

docker run --rm -it \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/ansible/playbooks \
    walokra/ansible-playbook site.yml

To make things easier you can use a shell script named ansible_helper that wraps the Docker image:

#!/usr/bin/env bash
docker run --rm -it \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
  -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
  -v $(pwd):/ansible/playbooks \
  -v /var/log/ansible/ansible.log \
  walokra/ansible-playbook "$@"

Point the above script at any inventory file and you can execute any Ansible command on any host, e.g.

./ansible_helper playbooks/deploy.yml -i inventory/dev -e 'some_var=some_value'

Now we have Dockerized Ansible, isolated its dependencies, and we’re not restricted to some old version from a Linux distribution’s package manager. Crafty, isn’t it? Check the docker-ansible-playbook repository for more information and examples with Ansible Vault.

This blog post and Dockerfile borrow from Misiowiec’s post Running Ansible Inside Docker and his earlier work. If you want to test playbooks it’s worth checking out his ansible_playbook repository. Since then Alpine Linux has evolved, and things could be cleaned up a bit more, like getting Ansible directly from the testing repository.

Monthly notes 23

Autumn approaches with heavy rains and cold weather, and it’s a good time to sit inside with a warm mug of tea and read what has happened in the field of software development. This month’s notes are about cyber security, accessibility, microservices and tools to help your development.

Issue 23, 18.10.2017

Learning new things

Cyber Security Base with F-Secure
A course series by the University of Helsinki in collaboration with the F‑Secure Cyber Security Academy that focuses on building core knowledge and abilities related to the work of a cyber security professional. Starts on the 31st of October 2017. Learn about tools used to analyse flaws in software systems, the knowledge necessary to build secure software systems, the skills needed to perform risk and threat analysis on existing systems, and the relevant legislation within the EU.

Developing with accessibility in mind

Writing CSS with Accessibility in Mind
“An introduction to web accessibility. Tips on how to improve the accessibility of your web sites and apps with CSS.”

How to test NVDA screen reader behaviour on a Mac
Developing with accessibility in mind involves some extra hoops, especially on macOS. Here’s a good how-to for setting up the NVDA screen reader in a Windows virtual machine.

Frontend

Deploying ES2015+ Code in Production Today
You can deploy ES2015+ code in production today. Every browser that supports <script type="module"> also supports most ES2015+ features. For older browsers, use <script nomodule>.
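The pattern is just two script tags side by side (the bundle names here are hypothetical): modern browsers fetch the module build and skip the nomodule one, while legacy browsers do the opposite:

```html
<!-- evergreen browsers load the ES2015+ bundle -->
<script type="module" src="app.esm.js"></script>
<!-- older browsers ignore type="module" and load this fallback -->
<script nomodule src="app.legacy.js"></script>
```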

Backend

Testing Microservices — Java & Spring Boot
A comprehensive guide to Microservices testing with Spring Boot. (from Java Weekly 196)

Top 10 Docker logging gotchas every Docker user should know
Docker changed the way applications are deployed, as well as the workflow for log management. In this article, Stefan Thies reveals the top 10 Docker logging gotchas every Docker user should know. tl;dr; Use the default json-file driver which is reliable and things just work.

The Top 10 Jigsaw and Java 9 Misconceptions Debunked
There are a number of myths surrounding Java 9 – so this piece is doing some myth-busting. (from Java Weekly, Issue 195)

Event Messaging for Microservices with Spring Boot and RabbitMQ
In a microservice environment you may come upon the requirement to exchange events between services. This article shows how to implement a messaging solution with Spring Boot and RabbitMQ. (from Java Weekly, Issue 195)

Tools of the trade

Mermaid.js
Ever wanted to simplify documentation and avoid heavy tools like Visio when explaining your code? Mermaid is a simple markdown-like script language for generating charts from text via JavaScript. It has an online editor and plugins for e.g. Atom. A good alternative to Draw.io.

A more connected universe
GitHub can now analyze and show you the project’s dependency graph. And … “Soon, your dependency graph will be able to track when dependencies are associated with public security vulnerabilities” (from Weekend Reading)

Rico’s cheatsheets
This is such a fantastic resource. 340 cheatsheets for a variety of tools, languages, libraries, and frameworks. (from Weekend Reading)

Something different

So all y’all know that UserAgent strings are total bullshit, right?

Monthly notes 22

The weather has turned autumnal and evenings are getting darker, which leaves us more “good” reasons to spend time with our computers and learn. This month’s notes provide a guide to impactful performance improvements, tips for optimizing React apps, how-to gifs for Chrome DevTools, tips for healthier ways of working, and how a sticker makes things faster.

Issue 22, 12.9.2017

Frontend

The State of the Web: guide to impactful performance improvements
How can we make the Web better by designing and developing with performance in mind? A look at various ways of making impactful performance improvements. (from WebOps weekly 132)

React is Slow, React is Fast: Optimizing React Apps in Practice
Practical tips for optimizing #ReactJS performance with react_perf and Chrome Devtools’ Timeline.

Increase your web development skill-set: 150 animated tips on Chrome DevTools
Chrome DevTools is a powerful web development tool, and these 150 gifs help you grasp its features in 30 seconds each. There are also text descriptions if you want to read more.

Using React with TypeScript (video)
A short practical demo about using React and Typescript. No slides, just code. (from @reactdaily)

Development

What is Clean Code and why should you care?
Clean code is subjective, and every developer has a personal take on it. There are some ideas that are considered best practice and that constitute clean code within the industry and community, but there is no definitive distinction. And I don’t think there ever will be. “Clean code is code that is easy to understand and easy to change.”

3 Effective Ways to Maintain High Energy Levels at Work for Software Engineers
Good tips for energetic days at the office: “1. Find Solutions for Energy Waste; 2. Take a Real Break; 3. Shift Between Different Tasks.”

Lessons learned about running Microservices
There are many reasons and benefits for opting for an architecture based on microservices, but there is no free lunch: it also brings some difficult and hard aspects to deal with. B2W has used microservices since 2014, and this four-part series starts with some best practices for microservices communication.
(from Microservices Weekly 97)

Backend

Basic API Rate-Limiting
If we want to apply client-specific rate limiting, a standard load balancer might not be enough – especially when there’s no uniform way of identifying clients. The article explains the basics of rate limiting and covers some rate-limiter implementations you can use. For example, Guava’s RateLimiter isn’t meant for (web) API rate limiting; you should use libraries like RateLimitJ or the bucket4j project. (from Java Weekly Issue 186)

Guide to Spring Boot REST API error handling
Spring Boot gives very useful error messages to build REST APIs, but those same messages are noisy and useless for the API consumer, not to mention that they reveal implementation details. Luckily, Bruno Leite explains some simple ways of handling this.

Something different

Mysterious Axxios Stickers Allegedly Reduce Vibrations On Bikes
Hmm. “Axxios’ technology can alter the modulus of elasticity for a material by interacting with it through electron fields on an atomic level. Not only that, but the company also claim this is done passively, with no external power source.” It’s a known truth that stickers make things go faster ;)

Monthly notes 21

The holiday season is soon over here in Finland and it’s good to be back at work. Summer was quite busy, and sadly I skipped June’s monthly notes. So, time to move forward and keep track of what has happened in software development. This time it’s about monitoring Docker, using webpack, writing functional apps with Spring and Kotlin, and comparing a Spring app written in Java, Kotlin and Scala.

Issue 21, 2.8.2017

Microservices

Docker Monitoring: 5 Methods for Monitoring Java Applications in Docker
What are some of the most useful methods to monitor Java applications in Docker containers? Making Logs Useful, Performance Monitoring, Error Tracking, Container Metrics, Orchestration.

Moving from a Java Monolith to Microservices at Squarespace (video)
Julian Applebaum describes the challenges of moving to a service-oriented architecture, drawing boundaries between different layers of business logic and discovering fundamental tensions in restructuring application logic. Squarespace’s journey to a series of RESTful API endpoints was a matter of building services and integrating them slowly as they became reliable.

JavaScript

webpack: The Core Concepts
Start with nothing. Leave with boilerplate independence.

Unambiguous Webpack config with Typescript
You can write your Webpack config in Typescript, and it’ll save you a huge amount of pain. (from JavaScript Weekly 340)

A Set of Best Practices for JavaScript Projects
British design studio Hive has collected guidelines for working on JavaScript projects. Although it’s aimed at JavaScript projects, the practices are applicable to other projects as well, covering things like Git workflow, documentation and APIs. (from JavaScript weekly 342)

Static AST checker for a11y rules on JSX elements
Pairing this plugin with an editor lint plugin, you can bake accessibility standards into your application in real-time.

Java and Kotlin

Functional web applications with Spring and Kotlin
How to build reactive and functional web applications with Spring Framework 5, WebFlux and Kotlin. Video available. Sources of the reference project are available at GitHub and there’s also related blog post.

Basic Spring web application in Java, Kotlin and Scala – comparison
If you’ve been wondering how hard it would be to implement a basic Spring Boot app in alternative JVM languages, such as Scala and Kotlin, read this post. (from Java Weekly 185)

Project package organization
Package structure in Java projects is often neglected or applied mindlessly – here we can see a comparison of the two most popular approaches: package-by-layer vs. package-by-feature. (from Java Weekly 185)
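As a quick illustration of the two approaches the article compares (package and class names here are hypothetical):

```
// package-by-layer: classes grouped by technical role
com.example.controller.OrderController
com.example.service.OrderService
com.example.repository.OrderRepository

// package-by-feature: classes grouped by business feature
com.example.order.OrderController
com.example.order.OrderService
com.example.order.OrderRepository
```

Package-by-feature keeps everything a feature needs in one place and lets package-private visibility hide its internals, while package-by-layer spreads each feature across the whole tree.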

Simple Spring Boot Admin Setup
Spring Boot Admin dashboard setup can be slightly unintuitive – here’s a short overview of how to set it up. The post could use a picture.

Implementing a custom Spring Boot starter for CXF and Swagger
(from Java Weekly, Issue 184)

Something different

Keeping data secured with iStorage datAshur Personal2 USB flash drive

Knowledge is power and keeping it secured from unauthorised eyes is important, be it on a computer, an external hard drive or a USB flash drive. Small external devices especially are easy to lose and can leave your data vulnerable if it isn’t encrypted. Fortunately there are solutions like the iStorage datAshur Personal2, a USB flash drive combining hardware encryption, a physical keypad and tamper-proofing. I got the 8 GB version of the Personal2 for testing (for free) and here’s a quick review of how the device works.

datAshur Personal2 secures data with hardware encryption and an on-board keypad

The iStorage datAshur Personal2 is a USB 3.0 flash drive designed to keep your data protected from unauthorised access even if it’s lost or stolen. It’s operating system and platform independent and available in capacities up to 64 GB. The key feature of the flash drive is that the user needs to enter a 7–15 digit PIN code on the rechargeable-battery-powered on-board keypad before connecting the drive to the USB port and accessing the data. All data transferred to the datAshur Personal2 is encrypted in real time with built-in XTS-AES 256-bit hardware encryption. The device automatically locks when unplugged from the computer or when power to the USB port is turned off, and it can be set to lock after a certain amount of time. A nice property of hardware encryption is that it (in theory) shouldn’t slow the drive down when writing or reading files. The device has protection against brute-force attacks and its aluminium housing is dust- and water-resistant.

The Personal2 differs from most flash drives in length, being a little longer to accommodate the keypad. The buttons are quite small, so large fingers may have some difficulty finding the right key. Overall build quality looks good, although the removable USB plug cover is cumbersome and easily lost. The keypad is powered by a rechargeable battery, and even if the battery goes dead you can simply recharge it from the USB port. The keypad is critical for security: it means the device works independently of a computer and prevents a keylogger from recording a code entered via a keyboard. It also makes the drive operating system and platform independent, requiring no specific software or drivers.

The datAshur Personal2 can be configured with two different PINs, a user PIN and an admin PIN, making it well suited for corporate and government deployment. If the user forgets their PIN, the drive can be unlocked using the admin PIN, which clears the old user PIN and allows the user to set a new one. This also ensures that corporate data can be retrieved from the device when an employee leaves the company.

The device also has a reset feature which clears both the user and admin PINs, deletes all data, creates a new randomly generated encryption key and allows the drive to be reused. To prevent brute-force attacks, if both admin and user PINs have been created and an incorrect user PIN is entered ten consecutive times, the brute-force mechanism triggers and the user PIN is deleted. If the admin PIN is entered incorrectly ten consecutive times, then both PINs, the encryption key and all data are deleted and lost forever. The device then reverts to factory default settings and needs to be formatted before it can be reused.
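The lockout policy described above can be modelled as a small state machine. Here’s a minimal sketch in Kotlin – an illustrative model of the documented behaviour, not the actual device firmware:

```kotlin
// Illustrative model of the datAshur Personal2 brute-force policy:
// ten consecutive wrong user-PIN attempts delete the user PIN only,
// ten consecutive wrong admin-PIN attempts wipe keys and data.
class PinLockout(private val maxAttempts: Int = 10) {
    var userPinDeleted = false
        private set
    var driveWiped = false
        private set
    private var userFailures = 0
    private var adminFailures = 0

    fun tryUserPin(correct: Boolean) {
        if (driveWiped || userPinDeleted) return
        if (correct) { userFailures = 0; return }  // a success resets the counter
        userFailures++
        if (userFailures >= maxAttempts) userPinDeleted = true
    }

    fun tryAdminPin(correct: Boolean) {
        if (driveWiped) return
        if (correct) { adminFailures = 0; return }
        adminFailures++
        if (adminFailures >= maxAttempts) driveWiped = true  // encryption key and data gone
    }
}
```

Note how the two counters are independent and only *consecutive* failures count, which matches the device behaviour where a correct entry resets the attempt count.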

The device comes with a quick start guide which tells you how to unlock the drive and how to change the user PIN. I tested the Personal2 with macOS Sierra and getting started with it was easy. The drive worked just like any other USB flash drive and after unlocking it was recognised as usual. I didn’t measure the read or write speeds but they seemed fine for a drive of this size. The specifications quote up to 116 MB/s read and 43 MB/s write, which is typical for small USB 3.0 flash drives. Of course decent performance is required, but transfer speeds are not the reason you buy an encrypted USB flash drive.

The datAshur Personal2 isn’t the first or last encrypted USB flash drive with a hardware keypad, but it seems to work nicely. It costs somewhat more than a normal USB flash drive (8 GB is £39, 64 GB is £79), but that’s what you pay for keeping sensitive data secured. And when it comes to performance, it’s always a compromise between security and speed.

Monthly Notes 20

Midsummer started the holiday season in Finland and things are slowing down. Time to take a break from routines, enjoy the summer and stroll in the forest. And on rainy days, read books and check out what happens in technology. Here are the monthly notes for June, covering microservices, designing user experience and React.

Issue 20, 26.6.2017

Microservices

The Dark Side of Microservices
There’s much debate for and against using Docker and microservices and although I don’t fully agree with the writer, the post gives something to think about.

The Hardest Part About Microservices: Your Data
Microservices aren’t as easy as you think. This blog series does a good job of explaining why. First, understand your data.

Microservices implementation — Netflix stack
There are a lot of tools and technologies for implementing microservices. This article focuses on doing it with the Netflix stack and Spring Boot. (from The Microservices Weekly)

7 reasons to switch to microservices — and 5 reasons you might not succeed
Using microservices can improve resilience and expedite your time to market, but breaking apps into fine-grained services brings complications. The article doesn’t offer many surprises but gives you something to think about. (from The Microservices Weekly)

User Experience needs thought

Building systems that don’t match your worldview
Developing systems with accessibility in mind makes them possible and pleasant to use for all groups of users. WCAG is one part of the solution.

Cultural Blind Spots in UX
Designing for international markets is about understanding the nuances, starting with how cultures look at web pages in different ways.

How Human Memory Works: Tips for UX Designers
If design is all about understanding humans, then understanding how our memory works is going to play a vital part. (from iOS dev weekly #306)

React

React Express: Learn React with Interactive Examples
An opinionated, all-in-one guide walking through create-react-app, webpack, Babel, ES2015+, JSX, Redux, CSS-in-JS, and more. (from JavaScript weekly 340)

Techniques for decomposing React components
React components have a lot of power and flexibility, but it’s incredibly easy for components to grow over time, become bloated and do too much. Adhering to the single responsibility principle not only makes your components easier to maintain, but also allows for greater reuse. However, identifying how to separate the responsibilities of a large React component is not always easy. Here are three techniques to get you started, from the simplest to the most advanced. (from JavaScript Weekly 340)

Something different