Two days of React Finland 2018: Day two with React and React Native

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what’s hot in the React world. The conference started with workshops, and after that there were two days of talks on React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. This is the second part of my recap of the talks and my notes which I posted to Twitter. Check out also the first part of my notes from the first day’s talks.

React Finland 2018, Day 2

How React changed everything — Ken Wheeler

The second day started with a keynote by Ken Wheeler. He examined how React changed the front-end landscape as we know it, and started with a nice time travel to the 90s with i.a. Flash, JavaScript and AngularJS. Most importantly, the talk took a look at the core idea of React, why it transcends language or rendering target, and posited what that means going forward. And lastly we heard about what’s coming with async React: suspense and time slicing.

“Best part of React is the community”

How React changed everything

Get started with Reason — Nik Graf

The keynote also touched on Reason ML, and Nik Graf went into details, kicking off with the basics and going into how to leverage features like variant types and pattern matching to make impossible states impossible.

Get started with Reason ML

Making Unreasonable States Impossible — Patrick Stapfer

Building on “Get started with Reason”, Patrick Stapfer’s talk went deeper into the world of variant types and pattern matching and put them into a practical context. The talk was a nice learning-by-doing live coding of TicTacToe. It showed how Reason ML helps you design solid APIs which are impossible for consumers to misuse. We also got more insights into practical ReasonReact code. The presentation is available on the Internet.

Conclusion about ReasonReact:

  • More rigid design
  • More KISS (keep it simple, stupid) than DRY (don’t repeat yourself)
  • Forces edge-cases to be handled
Learning Reason by doing TicTacToe

Reactive State Machines and Statecharts — David Khourshid

David Khourshid’s talk about state machines and statecharts was interesting. A functional and reactive approach to state machines can make it much easier to understand, visualize, implement and automatically create tests for complex user interfaces and flows: model the code and automatically generate exhaustive tests for every possible permutation of it. Things mentioned: react-automata, xstate. Slides are available on the Internet.
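
As a rough sketch of the idea, here’s what a statechart looks like with xstate (a minimal example of my own, not from the talk):

import { Machine } from 'xstate';

// A tiny statechart for a data-fetching flow
const fetchMachine = Machine({
  initial: 'idle',
  states: {
    idle: { on: { FETCH: 'loading' } },
    loading: { on: { RESOLVE: 'success', REJECT: 'failure' } },
    success: {},
    failure: { on: { RETRY: 'loading' } }
  }
});

// Transitions are pure: given a state and an event you get the next state,
// which is what makes visualization and exhaustive test generation possible
console.log(fetchMachine.transition('idle', 'FETCH').value); // 'loading'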

“Model once, implement anywhere” – David Khourshid

The talk was surprisingly interesting, especially for testing use cases, as anything that makes testing better is good. This might be something to look into.

ReactVR — Shay Keinan

After the theory-heavy presentations we got into more visual stuff: React VR. Shay Keinan presented the core concepts behind VR, showed different demonstrations, and explained how to get started with React VR and how to add new features from the Three.js library. React VR: Three.js + React Native = 360 and VR content. On the VR device side it was mentioned that Oculus Go and HTC Vive Focus are a big step for Virtual Reality.

“Virtual Reality’s possibilities are endless. Compares to lucid dreaming.” – Shay Keinan

WebVR enables web developers to create frictionless, immersive experiences, and we got to see the Solar demo and the Three VR demo, which were lit 🔥.

React VR

World Class experience with React Native — Michał Chudziak

I’ve briefly experimented with React Native, so it was nice to listen to Michał Chudziak’s talk on how to set up a friendly React Native development environment with the best DX, spot bugs at an early stage and deliver continuous builds to QA. Again Redux was dropped in favour of apollo-link-state.

Work close to your team – Napoleon Hill

What makes a good Developer eXperience?

  • stability
  • function
  • clarity
  • easiness

GraphQL was mentioned to be the holy grail of frontend development and a perfect match for React Native. Tools for a better developer experience: Haul, CircleCI, Fastlane, ESLint, Flow, Jest, Danger, Detox. Other tips were i.a. to use the native IDEs (Xcode, Android Studio) as they help with debugging. Xcode Instruments helps debug performance (check iTunes for the video) and there’s also the Android Profiler.

World Class experience with React Native

React Finland App – Lessons learned — Toni Ristola

Every conference has to have an app and React Finland of course did a React Native app. Toni Ristola gave a lightning talk about lessons learned. Technologies used with React Native were Ignite, GraphQL and Apollo Client 👌. The app’s source code is available on GitHub.

Lessons learned:

  • Have a designer in the team
  • Reserve enough time — doing and testing a good app takes time
  • Test with enough devices — publish alpha early
React Finland App – Lessons learned

React Native Ignite — Gant Laborde

80% of mobile app development is the same old song, which can be cut short with the Ignite CLI. Using Ignite, you can jump into React Native development with a popular combination of technologies, or brew your own. Gant Laborde talked about the new Bowser version which makes things even better with Storybook, TypeScript, Solidarity, mobx-state-tree and lint-staged. Slides can be found on the Internet.

Ignite

How to use React, webpack and other buzzwords if there is no need — Varya Stepanova

Varya Stepanova’s lightning talk suggested starting a side project other than a ToDo app to study new development approaches, and showed what that can look like in React. The example was how to generate a multilingual static website using Metalsmith, React and other modern technologies and tools which she uses to build her personal blog. Slides can be found on the Internet.
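
As a rough sketch of the approach (my own minimal example; the plugin choice is hypothetical, not necessarily what Stepanova uses), a Metalsmith build is just a chain of plugins over source files:

const Metalsmith = require('metalsmith');
const layouts = require('metalsmith-layouts');

// Read files from ./src, run them through plugins and write to ./build
Metalsmith(__dirname)
  .metadata({ sitename: 'My static site' })
  .source('./src')
  .destination('./build')
  .use(layouts({ engine: 'handlebars' }))
  .build(err => {
    if (err) throw err;
    console.log('Build finished!');
  });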

Doing meaningful side projects is a great way to study new things, and I’ve used it for i.a. learning Swift with the Highkara newsreader, doing a couple of apps for Sailfish OS, and playing with GraphQL and microservices while developing an app with a largish vehicle dataset.

After party

Summary

Two days full of talks on React, React Native, React VR and all the things that go with developing web applications with them was a great experience. The days were packed with great talks and new information, and everything went smoothly. The conference was nicely organized, the food was good and participants got soft hoodies to go with the Allas Sea Pool ticket. The talks were all great, but especially “World Class experience with React Native” and “React Native Ignite” gave new inspiration to write some app. Also “ReactVR” seemed interesting, although I think Augmented Reality will be a bigger thing than Virtual Reality. It was nice to hear from “The New Best Practices” talk that there really are no new best practices, as the old ones still work. Just use them!

Something to try and maybe even take into production: Immer, styled-components and Next.js. One thing which is easy to implement is to start using lint-staged, although we are linting all the things already.

One of the conference organizers and speakers, Juho Vepsäläinen, wrote Lessons Learned from the conference, and many of his points are spot on. The food was nice but “there wasn’t anything substantial for the afternoon break”. There wasn’t anything to eat after lunch, but luckily I had my own snacks. Vepsäläinen also mentions that “there was sometimes too much time between the presentations”, but I think the longer breaks between some presentations were nice for having a quick stroll outside and getting some fresh air. The venue was quite warm and the air wasn’t so good in the afternoon.

The afterparty at Sea Life Helsinki was an interesting choice, and it worked nicely although there weren’t so many people there. The aquarium was a fishy experience and also provided some other content than refreshments. Too bad I didn’t have time to go and check out the Allas Sea Pool for which we got a free ticket. Maybe next time.

Thanks to the conference crew for such a good event and of course to my fellow Goforeans who attended it and had a great time!

Two days of React Finland 2018: Day one topics of React

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what’s hot in the React world. The conference started with workshops, and after that there were two days of talks on React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. Here’s the first part of my notes from the talks which I posted to Twitter. Read also the second part with more of React Native.

React Finland 2018: Day 1

React Finland combined the Finnish React community with international flavor from Jani Eväkallio to Ken Wheeler and other leading talents of the community. The event was the first of its kind in Finland and consisted of a workshop day and two days of talks around the topic. It was nice that the event was single track, so you didn’t need to choose between interesting talks.

At work I’ve been developing with React for a couple of years and tried my hand at React Native, so the topics were familiar. The conference provided crafty new knowledge to learn from and maybe even put into production. Overall the conference was a great experience and everything went smoothly. Nice work from the React Finland conference team! And of course thanks to Gofore, which sponsored the conference and got me a ticket.

I tweeted my notes from almost every presentation, and here’s a recap of the talks. I heard that the videos from the conference will be available shortly.

The New Best Practices — Jani Eväkallio

The first day’s keynote was by Jani Eväkallio, who talked about “The New Best Practices”. As the talk description put it: “When React was first introduced, it was ridiculed for going against established web development best practices as we knew them. Five years later, React is the gold standard for how we create user interfaces. Along the way, we’ve discovered a new set of tools, design patterns and programming techniques.”

The new best practices were:

  • Build big things from small things
  • Write code for humans first: Flow, TypeScript, Storybook
  • Stay close to the language:
    • helps i.a. linters
  • Always prefer simplicity
  • Don’t break things:
    • Facebook makes React API changes easy to upgrade: deprecation well in advance, migration, documentation. It’s a flow, not versions. Use codemod.
  • Keep an open mind

You ask “what new best practices?” Yep, that’s the thing. We don’t need new best practices, as the same concepts like Model-View-Controller and separation of concerns are still valid. We should use best practices which have been proven good before, as they also work nicely with the React philosophy. Eväkallio also talked about why React will be around for a long time. It’s because components and interoperable components are an innovation primitive.

The New Best Practices

Declarative state and side effects — Christian Alfoni

After the keynote it was time to get more practical, and Christian Alfoni talked about how we can get help writing our business logic in a declarative manner and what benefits it gives us. He talked about lessons learned refactoring Codesandbox.io from Redux to Cerebral, and about Cerebral, which provides declarative state and side effects management for popular JavaScript frameworks. The talk’s slides are available on the Internet.

Alfoni also pointed to Turning the database inside out with Apache Samza, and noted that Cerebral had time travel before Dan Abramov presented Live React in his talk Hot Reloading with Time Travel at react-europe 2015.

Immer: Immutability made easy — Michel Weststrate

Immutable data structures are a good thing, and Michel Weststrate showed Immer, a tiny package that allows you to work with immutable data structures with unprecedented ease. Managing the state of a React app is a huge deal with Redux and any help is welcome. “Immer doesn’t require learning new data structures or update APIs, but instead creates a temporary shadow tree which can be modified using the standard JavaScript APIs. The shadow tree will be used to generate your next immutable state tree.”

The talk showed how to write your reducers in a much more readable way, with half the code and without requiring additional large libraries. The talk slides are available on the Internet.
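
The gist of it, as a minimal sketch (the todo-list state is a made-up example):

import produce from 'immer';

const baseState = [{ title: 'Learn Immer', done: false }];

// "Mutate" the draft with standard JavaScript; Immer produces the next
// immutable state and structurally shares the unchanged parts
const nextState = produce(baseState, draft => {
  draft[0].done = true;
  draft.push({ title: 'Use it in a reducer', done: false });
});

console.log(baseState === nextState); // false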

Get Rich Quick With React Context — Patrick Hund

The “Get Rich Quick With React Context” lightning talk by Patrick Hund didn’t tell how good the job opportunities are when doing React, but how the context API has been completely revamped in React 16.3. Hund demonstrated a good use case: putting ad placements on your web page to get rich quick! Another use case is localization. Check out the slides, which tell you how easy it is to use context now and how to migrate your old context code to the new API.
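
For reference, the revamped API is pleasantly small; a minimal sketch (the ad-slot naming is a hypothetical example of my own):

import React from 'react';

// createContext gives you a Provider/Consumer pair
const AdContext = React.createContext({ slot: 'none' });

const AdBanner = () => (
  <AdContext.Consumer>
    {ad => <div>Ad slot: {ad.slot}</div>}
  </AdContext.Consumer>
);

const App = () => (
  <AdContext.Provider value={{ slot: 'header-banner' }}>
    <AdBanner />
  </AdContext.Provider>
);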

There’s always a better way to handle localization — Eemeli Aro

The “There’s always a better way to handle localization” lightning talk by Eemeli Aro told how localization is a ridiculously difficult problem in the general case, but in a specific case you can get away with really simple solutions, especially if you understand the compromises you’re making.

I must have been dozing, as all I got was that there are other options for storing localizations than JSON, like YAML and the JavaScript property format, especially when dealing with non-developers like translators. The talk was quite general and on an abstract level, and the mentioned localization solutions were react-intl, react-i18next and react-message-context.
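
For a taste of one of the mentioned libraries, basic react-intl usage looks roughly like this (the message id and translation are made up):

import React from 'react';
import { IntlProvider, FormattedMessage } from 'react-intl';

const messages = { greeting: 'Hei {name}!' };

const App = () => (
  <IntlProvider locale="fi" messages={messages}>
    <FormattedMessage id="greeting" values={{ name: 'Maija' }} />
  </IntlProvider>
);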

Styled Components, SSR, and Theming — Kasia Jastrzębska

Web applications need to be styled, and Kasia Jastrzębska talked about CSS-in-JS with styled-components, going through the new API, performance improvements and server-side rendering with Next.js. She also showed the theming manager available in v2 of styled-components. Talk slides are available on the Internet.

The takeaway from this talk was that CSS in a React app can be written as you always have, or by using CSS-in-JS solutions. There are several benefits to using styled-components, but I’m still pondering how styles get scattered all over components.
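
The basic styled-components API, as a quick sketch (the component is a made-up example):

import styled from 'styled-components';

// A tagged template literal turns CSS into a React component,
// and props can drive the styles
const Button = styled.button`
  padding: 0.5em 1em;
  background: ${props => (props.primary ? 'palevioletred' : 'white')};
  color: ${props => (props.primary ? 'white' : 'palevioletred')};
`;

// Usage: <Button primary>Save</Button>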

Universal React Apps Using Next.js — Sia Karamalegos

53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
DoubleClick by Google, 2016

Every user’s hardware is different, and processing speed can hinder user experience on client-side rendered React applications, so Sia Karamalegos talked about how server-side rendering and code-splitting can drastically improve user experience by minimizing the work that the client has to do. Performance and how you ship your code matter. The talk showed how to easily build universal React apps using the Next.js framework and walked through the concepts and code examples. Talk slides are available on the Internet.

There are lots of old (mobile) devices which especially benefit from server-side rendering. Next.js is a minimalistic framework for universal, server-rendered (or statically pre-rendered) React applications which enables faster page loads. Pages are server-rendered by default for the initial load, you can enable prefetching of future routes, and there’s automatic code splitting. It’s also customizable, so you can use your own Babel and webpack configurations and customize the server API with e.g. Express. And if you don’t want to use a server, Next.js can also build static web apps that you can host on GitHub Pages or AWS S3.
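
A Next.js page is just a React component exported from the pages directory; a minimal sketch along the lines of the Next.js docs of the time (the GitHub API call is only an illustration):

// pages/index.js
import React from 'react';
import fetch from 'isomorphic-unfetch';

const Home = ({ stars }) => <div>Next.js has {stars} stars</div>;

// Runs on the server for the initial load and on the client for route changes
Home.getInitialProps = async () => {
  const res = await fetch('https://api.github.com/repos/zeit/next.js');
  const json = await res.json();
  return { stars: json.stargazers_count };
};

export default Home;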

Universal React apps using Next.js

State Management in React Apps with Apollo Client — Sara Vieira

Apollo Client was one of the most mentioned frameworks of the conference along with Reason ML, and Sara Vieira gave an energetic talk about how to use it for state management in React apps. If you haven’t come across Apollo Client, it’s a caching GraphQL client which helps you manage data coming from the server. Vieira showed how to manage local state with apollo-link-state.

The talk was fast-paced and I somewhat missed the why part, but at least it’s easy to set up: yarn add apollo-boost graphql react-apollo. Have to see the slides and demo later. Maybe the talk can be wrapped up as: “GQL all the things” and “I don’t like Redux” :D
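
From what I gathered, local state with apollo-boost looks roughly like this (a sketch under my own assumptions; the isMenuOpen field and mutation are made up):

import ApolloClient from 'apollo-boost';

// apollo-boost wires up apollo-link-state via the clientState option
const client = new ApolloClient({
  uri: 'https://example.com/graphql',
  clientState: {
    defaults: { isMenuOpen: false },
    resolvers: {
      Mutation: {
        toggleMenu: (_, { open }, { cache }) => {
          cache.writeData({ data: { isMenuOpen: open } });
          return null;
        }
      }
    }
  }
});

// Queries then select the local fields with the @client directive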

State Management with Apollo

Detox: A year in. Building it, Testing with it — Rotem Mizrachi-Meidan

The Detox testing framework for React Native talk by Rotem Mizrachi-Meidan was the other talk I dozed through. Mizrachi-Meidan talked about what developing and using Detox in production has taught them, how Detox works and what makes it deterministic. The talk showed how mobile apps could be tested. There’s a video of an earlier talk on the Internet.
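
To give an idea of what Detox tests look like, here’s a minimal sketch (the screens and testIDs are hypothetical):

describe('Login flow', () => {
  beforeEach(async () => {
    await device.reloadReactNative();
  });

  it('shows the welcome screen after login', async () => {
    // element, by and expect are globals provided by the Detox test runner
    await element(by.id('email')).typeText('user@example.com');
    await element(by.id('loginButton')).tap();
    await expect(element(by.id('welcome'))).toBeVisible();
  });
});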

Detox

Make linting great again! — Andrey Okonetchnikov

One thing in software development which always gets developers to argue over stupid things is code formatting and linting. Andrey Okonetchnikov talked about how “with a wrong workflow linting can be really a pain and will slow you and your team down, but with a proper setup it can save you hours of manual work reformatting the code and reducing the code-review overhead.”

The talk was a quick introduction to how 🚫💩 lint-staged, a Node.js library, can improve developer experience. A small tool coupled with tools that analyze and improve the code, like ESLint, Stylelint, Prettier and Jest, can make a big difference.
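
A typical setup runs lint-staged from a Git pre-commit hook, e.g. with husky; a minimal package.json sketch (assuming husky and the usual ESLint/Prettier combo):

{
  "scripts": {
    "precommit": "lint-staged"
  },
  "lint-staged": {
    "*.js": ["eslint --fix", "prettier --write", "git add"]
  }
}

This way only the staged files are linted, so commits stay fast even in a large repository.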

Missed talks

There were also two talks I missed: “Understanding the differences is accepting” by Sven Sauleau and “Why I YAML” by Eemeli Aro. Sauleau showed “interesting” twists of the JavaScript language.

Read also the second part with more of React Native.

Extracting JSON value from command line with jq and Python

When developing modern web applications you often find yourself checking REST API responses and parsing JSON values. You can do it with a combination of Unix tools like sed, cut and awk, but if you’re allowed to install extra tools or use Python, things get easier. This post shows you a couple of options for extracting JSON values on the command line.

There are a number of tools specifically designed for manipulating JSON from the command line, and they will be a lot easier and more reliable than doing it with awk. One of those tools is jq, as shown on Stack Overflow. You can install it on macOS with Homebrew: brew install jq.

$ curl -s 'https://api.github.com/users/walokra' | jq -r '.name'

If you’re limited to tools that are likely installed on your system, such as Python, using the json module gives you the benefit of a proper JSON parser and avoids any extra dependencies.

Python:

$ curl -s 'https://api.github.com/users/walokra' | \
    python -c "import sys, json; print(json.load(sys.stdin)['name'])"

The Stack Overflow answers to the question “Parsing JSON with Unix tools” show you other options with standard tools like sed, cut and awk, and more exotic options with Perl, Node.js and PHP.

Git pre-commit and pre-receive hooks: validating YAML

Software development has many steps which you can automate, and one useful thing to automate is adding Git commit hooks to validate your commits to version control. Git hooks fire off custom client-side and server-side scripts when certain important actions occur. Validating committed files’ contents is important for syntax validity, and even more so when providing Spring Cloud Config configurations in YAML for microservices, as otherwise things fail.

Validating YAML can be done by using yamllint and hooking it to pre-commit or pre-receive. It checks not only for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces and indentation. Here’s a short overview to get started with yamllint on Git commit hooks.

Quickstart for yamllint

Installing yamllint

On Fedora / CentOS:
$ sudo dnf install yamllint
 
Using pip, the Python package manager:
$ sudo pip install yamllint
 
Or on macOS:
$ sudo -H python -m pip install yamllint

You can also install yamllint from sources when e.g. network connectivity is limited. The linter depends on pathspec >=0.5.3 and pyyaml >= 3.12.

Custom config

Yamllint is quite strict with validation and you might want to make it a bit more relaxed with a custom configuration. For example, I needed to allow long lines. You can also disable checks for a specific line with a comment.

$ cat yamllint-config.yml
 
extends: default
 
rules:
  line-length: disable
  comments:
    require-starting-space: false

Usage

$ yamllint file.yml other-file.yaml

Usage with custom config:

$ yamllint -c yamllint-config.yml .

Or with a custom config without a config file:

$ yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" file.yaml

Or a more specific case, like running yamllint in a Jenkins job’s workspace and validating files with a specific suffix:

$ find . -type f -iname '*.j2' -exec yamllint -s -c yamllint-config.yaml {} \;

Pre-commit hook and yamllint

A better way to use yamllint is to integrate it with Git via a pre-commit or pre-receive hook. Adding yamllint to a pre-commit hook is easy with pre-commit, which is a framework for managing and maintaining multi-language pre-commit hooks.

Installing pre-commit:

Using pip:
$ pip install pre-commit
 
Or on macOS:
$ brew install pre-commit

To enable the yamllint pre-commit plugin you just add a file called .pre-commit-config.yaml to the root of your project with the following snippet:

$ cat .pre-commit-config.yaml
---
- repo: https://github.com/adrienverge/yamllint.git
  sha: v1.10.0
  hooks:
    - id: yamllint

With a custom config and strict mode:

$ cat .pre-commit-config.yaml
---
repos:
 - repo: https://github.com/adrienverge/yamllint.git
   sha: v1.10.0
   hooks:
     - id: yamllint
       args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']

You can also use repository-local hooks when e.g. it makes sense to distribute the hook scripts with the repository. Install yamllint locally and configure it as a repository-local hook in your project root’s .pre-commit-config.yaml. As you can see, I’m using a custom config for yamllint.

$ cat .pre-commit-config.yaml
---
- repo: local
  hooks: 
  - id: yamllint
    name: yamllint
    entry: yamllint -c yamllint-config.yml .
    language: python
    types: [file, yaml]

Note: If you’re linting files with a suffix other than yaml/yml, like Ansible template files with a .j2 suffix, then use types: [file]

Pre-receive hook and yamllint

Using pre-commit hooks to process commits is easy, but often doing checks on the server side with pre-receive hooks is better. Pre-receive hooks are useful for satisfying business rules, enforcing regulatory compliance and preventing certain common mistakes. Common use cases are to require commit messages to follow a specific pattern or format, lock a branch or repository by rejecting all pushes, prevent sensitive data from being added to the repository by blocking keywords, patterns or filetypes, and prevent a PR author from merging their own changes.

One example of a pre-receive hook is running a linter like yamllint to ensure that business-critical files are valid. In practice the hook works similarly to a pre-commit hook, but the files you check in to the repository are not kept there “just like that”. Some of them are stored as deltas to others, or their contents are compressed. There is no place where these files are guaranteed to exist in their “ready-to-consume” state, so you must jump through some extra hoops to get the files available for opening and running checks on.

There are different approaches to making the files available to the pre-receive hook’s script, as described on Stack Overflow. One way is to check out the files in a temporary location, or if you’re on Linux you can just point /dev/stdin as the input file and put the files through a pipe. Both ways have the same principle: check the modified files between the new and the old revision and, if matching files are present in the new revision, run the validation script with a custom config.

Using the /dev/stdin trick on Linux:

#!/usr/bin/env bash
 
set -e
 
# Make sure the tools we need are available.
# A pre-receive hook receives "<oldrev> <newrev> <refname>" lines on stdin.
if ! command -v python >/dev/null || ! command -v yamllint >/dev/null; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
while read oldrev newrev refname; do
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    git diff --name-only $oldrev $newrev | while read file; do
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | egrep '\.yml$' | awk '{ print $3 }'`
        # If it's not present, then continue to the next iteration
        if [ -z "${object}" ]; 
        then 
            continue; 
        fi
 
        # Get file in commit and point /dev/stdin as input file 
        # and put the files through pipe for syntax validation
        echo $file
        git show $newrev:$file | /usr/bin/yamllint -d "{extends: relaxed, rules: {line-length: disable, comments: disable, trailing-spaces: disable, empty-lines: disable}}" /dev/stdin || exit 1
    done
done

An alternative way: copy the changed files to a temporary location

#!/usr/bin/env bash
 
set -e
 
COMMAND='/usr/bin/yamllint'
TEMPDIR=`mktemp -d`
 
# Make sure the tools we need are available.
# A pre-receive hook receives "<oldrev> <newrev> <refname>" lines on stdin.
if ! command -v python >/dev/null || [ ! -x "${COMMAND}" ]; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
while read oldrev newrev refname; do
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    files=`git diff --name-only ${oldrev} ${newrev}`
 
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Iterate over each of these files
    for file in ${files}; do
 
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | awk '{ print $3 }'`
 
        # If it's not present, then continue to the next iteration
        if [ -z "${object}" ]; 
        then 
            continue; 
        fi
 
        # Otherwise, create all the necessary sub directories in the new temp directory
        mkdir -p "${TEMPDIR}/`dirname ${file}`" &>/dev/null
        # and output the object content into its original file name
        git cat-file blob ${object} > ${TEMPDIR}/${file}
 
    done;
done
 
# Now loop over each file in the temp dir to parse them for valid syntax
files_found=`find ${TEMPDIR} -name '*.yml'`
for fname in ${files_found}; do
    ${COMMAND} ${fname}
    if [[ $? -ne 0 ]];
    then
      echo "ERROR: parser failed on ${fname}"
      BAD_FILE=1
    fi
done;
 
rm -rf ${TEMPDIR} &> /dev/null
 
if [[ $BAD_FILE -eq 1 ]]
then
  exit 1
fi
 
exit 0

Testing a pre-receive hook locally is a bit more difficult than a pre-commit hook, as you need an environment with a remote repository. Fortunately you can use the process which is described for GitHub Enterprise pre-receive hooks: create a local Docker environment to act as a remote repository that can execute the pre-receive hook.

Docker containers and using Alpine Linux for minimal base images

After using Docker for a while, you quickly realize that you spend a lot of time downloading or distributing images. This is not necessarily a bad thing for some, but those who scale their infrastructure are required to store a copy of every image that’s running on each Docker host. One solution to make your images lean is to use Alpine Linux, which is a security-oriented, lightweight Linux distribution.

Lately I’ve been working with our Docker images for Java and Node.js microservices, and when our stack consists of over twenty services, one thing to consider is how we build our Docker images and what distributions to use. Building images upon Debian-based distributions like Ubuntu works nicely, but it includes packages and services which we don’t need. And that’s why developers are aiming to create the thinnest, most usable image possible, either by stripping conventional distributions or by using minimal distributions like Alpine Linux.

Choosing your Linux distribution

What’s a good choice of Linux distribution to be used with Docker containers? There was a good discussion on Hacker News about small Docker images, with good points in the comment section to consider when choosing a container operating system.

For some, size is a tiny concern, and far more important concerns are, for example:

  • All the packages in the base system are well maintained and updated with security fixes.
  • It’s still maintained a few years from now.
  • It handles all the special corner cases with Docker.

In the end the choice depends on your needs and how you want to run your services. Some like to use the quite large Phusion Ubuntu base image which is modified for Docker-friendliness, whereas others like to keep things simple and minimal with Alpine Linux.

Divide and conquer?

One question to ask yourself is: do you need a full operating system? If you dump an OS in a container you are treating it like a lightweight virtual machine, and that might be fine in some cases. If you however restrict it to exactly what you need and its runtime dependencies, plus absolutely nothing more, then suddenly it’s something else entirely: it’s process isolation, or better yet, portable process isolation.

Another thing to think about is whether you should combine multiple processes in a single container. For example, if you care about logging you shouldn’t run a logger daemon or logrotate in the container, but rather store the logs externally, in a volume or a mounted host directory. An SSH server in a container could be useful for diagnosing problems in production, but if you have to log in to a container running in production, you’re doing something wrong (and there’s docker exec anyway). And for cron, run it in a separate container and give it access to exactly the things your cronjob needs.

There are a couple of different schools of thought about how to use Docker containers: as a way to distribute and run a single process, or as a lighter form of a virtual machine. It depends on what you’re doing with Docker and how you manage your containers/applications. It makes sense to combine some services, but on the other hand you could still separate everything. It’s preferable to isolate every single process and explicitly tell it how to communicate with other processes. It’s sane from many perspectives: security, maintainability, flexibility and speed. But again, where you draw the line is almost always a personal, aesthetic choice. In my opinion it could make sense to combine nginx and php-fpm in a single container.

Minimal approach

Lately there has been some movement towards minimal distributions like Alpine Linux, and it has got a lot of positive attention from the Docker community. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox, using a grsecurity/PaX-patched Linux kernel and OpenRC as its init system. In its x86_64 ISO flavor it weighs in at 82 MB, and a container requires no more than 8 MB. Alpine provides a wealth of possible packages via its apk package manager. As it uses musl, you may run into some issues with environments expecting glibc-like behaviour (for example Kubernetes or compiling some npm modules), but for most use cases it should work just fine. And with minimal base images it’s more convenient to divide your processes into many small containers.

Some advantages of using Alpine Linux are:

  • Speed in which the image is downloaded, installed and running on your Docker host
  • Security is improved as the image has a smaller footprint thus making the attack surface also smaller
  • Faster migration between hosts which is especially helpful in high availability and disaster recovery configurations.
  • Your system admin won’t complain as much as you will use less disk space

For my purposes, I need to run Spring Boot and Node.js applications in Docker containers, and they were easily switched from Debian-based images to Alpine Linux without any changes. There are official Docker images for OpenJDK/OpenJRE on Alpine and Dockerfiles for running Oracle Java on Alpine. Although there isn’t an official Node.js image built on Alpine, you can easily make your own Dockerfile or use community-provided files. Where the official Java Docker image is 642 MB, Alpine Linux with OpenJDK 8 is 150 MB and with Oracle JDK 382 MB (which can be stripped down to 172 MB). The official Node.js image is 651 MB (or 211 MB for the slim variant), while with Alpine Linux it’s 36 MB. That’s quite a reduction in size.

Examples of using a minimal container based on Alpine Linux:

For Node.js:

FROM alpine:edge
 
ENV NODE_ALPINE_VERSION=6.2.0-r0
 
RUN apk update && apk upgrade \
    && apk add nodejs="$NODE_ALPINE_VERSION"

For Java applications with OpenJDK:

FROM alpine:edge
ENV LANG C.UTF-8
 
RUN { \
      echo '#!/bin/sh'; \
      echo 'set -e'; \
      echo; \
      echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
   } > /usr/local/bin/docker-java-home \
   && chmod +x /usr/local/bin/docker-java-home
 
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:$JAVA_HOME/bin
ENV JAVA_VERSION 8u92
ENV JAVA_ALPINE_VERSION 8.92.14-r0
 
RUN set -x \
    && apk update && apk upgrade \
    && apk add --no-cache bash \
    && apk add --no-cache \
      openjdk8="$JAVA_ALPINE_VERSION" \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

If you want to read more about running services on Alpine Linux, check Atlassian’s Nicola Paolucci’s nice article about experiences of running Java apps on Alpine.

Go small or go home?

So, should you use Alpine Linux for running your application on Docker? As the official Docker images are also moving to Alpine Linux, it seems to make perfect sense from both performance and security perspectives to switch to Alpine. And if you don’t want to take the leap from Debian or Ubuntu, or you want support from the downstream vendor, you should consider stripping unneeded files from the image to make it smaller.

Notes from Tampere goes Agile 2015

What could be a better way to spend a beautiful autumn Saturday than visiting Tampere goes Agile and being inspired beyond agile? Well, I can think of a couple of activities which beat waking up at 5:30 to catch a train to Tampere, but attending a conference and listening to thought-provoking presentations is always refreshing. So, what did they tell about being “Inspired beyond agile” at Tampere goes Agile 2015?

Tampere goes Agile 2015

Tampere goes Agile is a free-to-attend event about agile, and this year the theme of the conference was “inspired beyond agile”. The event was held at Sokos Hotel Ilves and there were roughly 140 attendees. Agile as a topic isn’t interesting in itself, as its practices are widely in use, so the event went past agile and concentrated on “being agile”: how the organizational level and our mindsets have to change to make agile work. The waterfall mindset eats your agile culture for breakfast, and that’s the problem many presentations addressed.

Juho Vepsäläinen also wrote a great blog post with afterthoughts on Tampere Goes Agile 2015. It was nice to read a recap of the sessions which were in the other room than I was in.

Keynote: After Agile

The event started with a keynote by Bob Marshall, who asked what’s after agile. He has introduced the concept of right shifting, where the core idea is that a large amount of organizations are underperforming. We’re always more or less prisoners of our mindset and existing ways.

“It’s not enough to do your best; you must know what to do, and then do your best.” – W. Edwards Deming

Marshall showed his right-shifting organizational effectiveness chart, where the mean is around 1 (on a 0 to 5 scale) and organizations using agile sit around 1.25 to 2. So, what’s beyond that? Agile thinking isn’t getting us there. What differentiates the organizations in the chart is their mindset: ad hoc, analytic, synergistic and finally chaordic.

The Marshall model:

The Marshall model

In order to improve the effectiveness and efficiency of our organizations, we’ll need to be able to imagine better ones. The question is, what does an ideal organization look like? What kind of society would we build if it was wiped out, starting from a clean slate? We should look at the organization as a whole, and what Marshall suggests is using therapy to understand organizational health and changing the mindset of the organization to one that’s more conducive to high performance.

The ideal model for an IT company is built around people, relationships between people, collective mindset, cognitive function and motivation. And it’s good to remember the difference between effectiveness and efficiency: doing the right thing versus doing the thing right.

Doctor, please fix my Agile!

Ville Törmälä talked about how, while we have seen changes on the method level, organizations are still mostly functioning the same way as before. Many have tried to become more agile but without much success, as there’s a waterfall way of doing everything.

Törmälä presented his definition of agile: 1) make the work better, 2) make the work work better, 3) make lives better. But the waterfall mindset eats our agile culture for breakfast, so it’s about time to broaden our thinking about what really constitutes long-term success in organizations doing any kind of knowledge work. Agile gives you tools and ideas, but organizations can’t change or improve by “doing agile” better. If you fail with one agile “method”, you probably fail with the rest of them. It’s a systemic problem: it’s all built deep into the thinking and structures of the organizations. That is the challenge.

“Every system is perfectly designed to achieve the results it gets.” We should change from “project thinking” to “stable teams thinking”, and change the power and influence structure from managing people to empowering people and further to liberating people.

One way of doing this is to use KBIs, Key Behaviour Indicators: write down examples of behaviour you want to see, think about in what kind of environment it’s possible or could happen, then create that environment and write down concrete actions.

“The supreme art of agile is to subdue the waterfall thinking without fighting” – Sun Tzu, The Art of War

In summary, we need to look beyond methods and practices. Organizations change by changing how they think, and become better by understanding better how work works, how to create value and how to learn better. We have to work with the system, aiming to understand and affect its thinking.

Pairing is sharing

Pair programming is a core agile technical practice, but many people are still reluctant to pair, and Maaret Pyhäjärvi talked about deliberate practice in building up the skill of pairing to take one’s skills in other activities to a new level. Pyhäjärvi shared her different stages of pairing and lessons picked up as a testing specialist.

Again, pairing is also about mindset, and effective pairing is far from trivial, but it is a skill that can be practiced. Pyhäjärvi talked about growth patterns: from pairing with peers to pairing and mobbing with developers, from traditional-style and side-by-side work to strong-style pairing, and to pairing on both testing and programming activities.

Mindset: fixed <> growth

Listeners also got to test a specific style of pairing, strong-style pairing, where for an idea to go from your head to the computer it must go through someone else’s hands. You really need to think the steps through for the other person to manage the given task.

One presented point about pairing was that you must unlearn ownership of ideas and contributions. Co-creation vs. collaboration.

Pyhäjärvi also said that selling pairing to a team (of introvert programmers) is hard, but Mob Programming has been their gateway to pair programming, as it feels safer. You can read more about it in the Mob Programming guide book.

Beyond Continuous Deployment: Documentation Pipeline

Before lunch there was also a nice lightning talk about a documentation pipeline by Antti Virtanen. He talked about lessons learned from creating a documentation pipeline for continuous deployment with Jenkins and other open source tools. His slides are available on SlideShare.

1. ???
2. Continuous Delivery DevOps magic
3. ???
4. Profit

The DevOps magic with Jenkins was more or less standard practice, and it was configured to generate documentation from the database schema, JavaDocs, test coverage reports, performance test results and the API specification. It reminded me of all the work I should introduce to our continuous integration.

Three standard tricks were presented:

  • Jenkins is the Swiss knife.
  • Database documentation in database metadata and generating ER-diagrams with SchemaSpy.
  • API documentation with Swagger.

When quality is just a cost: Useful approaches to testing

Testing is also an important part of successful projects, so Jani Grönman talked about useful approaches to testing and software quality.

“Software quality is measured by your customer success, not development project metrics and quality processes.”

Grönman approached the topic through the often surprisingly common attitudes towards testing and quality:

  • “Quality is just a cost and like other costs, it should be avoided or minimized.”
  • “Testing is just another buffer in the project’s budget.”
  • “Testers are not skilled labor, it’s enough if they can read and write.”
  • “What automation? They can quickly click through the app, can’t they?”

And as you know, this is all wrong. It’s true that testing is expensive, but so is development. Can you afford not to test? You should think of it as an investment. The presentation went through the reasons and motivations behind the various attitudes and explored differences in views and how to best tackle them using the right technology and approach. He also talked about the schools of testing: analytical, standard, quality, context-driven and agile.

But overall you should know that testing is a skilled activity and part of the development. Testing provides information to the project, and you should use a mix of techniques like exploratory testing and automation. Think about what kind of testing would be most effective now. You need to choose the right set of QA tools for the job. One size fits no one.

DevOps: Boosting the agile way of working

DevOps has been quite the buzzword for some time, so it was interesting to hear what Timo Stordell had to say about how DevOps boosts the agile way of working. In short, DevOps isn’t anything revolutionary and should be seen as an incremental way to improve our development practices. And talking about revolution, Stordell’s slides had a nice Soviet theme.

The presentation was more or less what you would expect from a topic covering DevOps and had a nice touch to it. In short: small bangs over a big bang, requirements management meets acceptance testing, standardize development environments, monitor to understand what to develop.

Stordell had a nice demo of how they perform acceptance testing using physical devices and automation: they have built a rig with a CNC mill run by a Raspberry Pi to test a payment system.

For those interested in the DevOps movement and everything around it, there’s the DevOps Finland meetup group. You can also download Eficode’s DevOps Quick Guide to read more about it.

Keynote: Beyond projects

The event’s final keynote was by Allan Kelly, who spoke about #noprojects: why projects are wrong and what to do instead. The main point of the keynote was that the project model doesn’t match software development, and it outlined an alternative to the project model and what companies need to do to achieve it. The presentation slides are available on SlideShare. Kelly has also written a book about team-centric agile software development, Xanpan, which combines Kanban and XP.

Going beyond projects is an interesting idea, as everything we do is somewhat tied to doing things in projects. So, what’s wrong with projects? Projects are temporary whereas software is forever. Projects have end dates, which goes against the defining feature of successful software: it doesn’t end. Software which is useful is used and demands change; stop changing it and you kill it. At worst the project metaphor leads to dead software, higher costs and missed business opportunities.

We should think of projects more like a continuous flow, where success isn’t determined by staying on schedule, on budget and with quality. We should concentrate on the value delivered and put value in flexibility, as requirements change. This goes against the fixed nature of projects. Also, after a project you often break up a functioning team and start all over again. We should put emphasis on teams, treat the team as a unit and push work through it.

The other thing is that software is not milk: it’s cheapest in small packages, not in big cartons. Software development has no economies of scale. Big projects are a risk. Think small and deliver regularly, which increases ROI. Fail fast, fail cheap. Quite basic agile thinking.

So, beyond projects: waterfall 2.0, continuous flow

Continuous flow of waterfall

Now we have #noestimates, #nomanagement and #noprojects. Profit?

Summary

It was my first time visiting Tampere goes Agile and it was a nice conference. The topics provided something to think about and not just the same old agile thinking. You could clearly see the theme “Inspired beyond agile” working through the different presentations, and the emphasis was on changing our mindsets.

Going beyond agile isn’t easy, as it’s more about thinking than tools. Old habits die hard, and changing the waterfall way of thinking isn’t trivial. We should start with understanding our organization’s health and changing the mindset of the organization to one that’s more conducive to high performance: switch from “project thinking” to “stable teams thinking” and change the power and influence structure from managing people to empowering people and further to liberating people.

The afterparty was at Ruby & Fellas, but after an early morning and a couple of nice beers it was time to take the train back home. Before that I had to visit the Moro Sky Bar with its nice view over Tampere.

Tampere from Moro Sky Bar

Newsletters for software developers

Software development is one of the professions where you just have to keep your knowledge up to date and follow what happens in the field. But information overload is easily achieved, so it’s beneficial to use, for example, curated newsletters for the subjects which intersect the stack you’re using and the topics you’re interested in. Here is my selection of newsletters for software developers, covering topics like web and mobile development, user experience and design, and general topics. For more newsletters for developers you can check what for example DZone wrote.

The power of a newsletter lies in the fact that it can deliver condensed and digestible content, which is harder to achieve with other good news sources like feed subscriptions and Twitter. A well-curated newsletter for a targeted audience is a pleasure to read, and even if you forget to check your newsletter folder, you can always get back to it later :)

General

Hacker Newsletter
Weekly newsletter of the best articles in Hacker News.

Status code
A language agnostic roundup of the latest ideas, releases, trends, events and must-read articles from the programming world (think C, UNIX, algorithms, editors, protocols)

Mobile development

iOS Dev Weekly
Hand picked round up of the best iOS development links published every Friday.

This Week In Swift
List of the best Swift resources of the week.

iOS Dev nuggets
A short iOS app development nugget every Friday/Saturday. Usually something you can read in a few minutes to improve your skills at iOS app development.

In depth Mac and iOS articles archives

Java

Java Web Weekly by Baeldung
Once-weekly email roundup of Java Web curated news by Eugen Baeldung.

The Java Specialists’ Newsletter
A monthly newsletter exploring the intricacies and depths of Java, curated by Dr. Heinz Kabutz.

Java Performance Tuning News
A monthly newsletter focusing on Java performance issues, including the latest tips, articles, and news about Java Performance. Curated by Jack Shirazi and Kirk Pepperdine.

Database

DB Weekly
A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.

HTML and CSS

HTML5Weekly
Weekly HTML5 and Web Platform technology roundup. Curated by Peter Cooper.

CSS Weekly
Roundup of CSS articles, tutorials, experiments and tools. Curated by Zoran Jambor.

Web development

Web Development Reading List
Weekly roundup of web development–related sources, selected by Anselm Hannemann.

Versioning
SitePoint’s daily newsletter, which features the latest web development news.

Hacking UI
Newsletter for designers, front-end developers and product managers.

Scott Hanselman
Includes interesting and useful stuff Scott has found over the last few weeks and other wonderful things.

The Modern Web Observer
Biweekly email newsletter about current issues and trends in front-end web development. It is much like a commentary on the important current news and articles related to front-end development.

Web Design Weekly
Links to the best news and articles to hit the interweb during the week.

MergeLinks
Weekly email of curated links to articles, resources, freebies and inspiration for web designers and developers.

For front-end developers there’s the “How to keep up to date on Front-End Technologies” page, which lists newsletters, blogs and people to follow.

JavaScript

JavaScript Weekly
Weekly e-mail round-up of JavaScript news and articles. Curated by Peter Cooper.

A Drip of JavaScript
“One quick JavaScript tip”, delivered every other Tuesday and written by Joshua Clanton.

SuperHero.js
Collection of the best articles, videos, and presentations on creating, testing, and maintaining a JavaScript code base.

Node Weekly
Once–weekly e-mail round-up of Node.js news and articles.

User experience and design

UX weekly
Five links each week with the best UX writing, process, analysis, and critique from around the web. Its content lies at the intersection of user experience design, game design, and tech industry critique.

GoodUI
Monthly newsletter where the author will share ideas on how to improve customer conversion and ease of use.

Sidebar.io
Satisfies your web aesthetics with a list of the 5 best design links of the day. The content is manually curated by a couple of great editors.

Userfocus
Updates you monthly about the happenings in the UX/usability arena.

UX Design Weekly
Best user experience design links every week, published every Friday.

Ops

DevOps Weekly
Weekly slice of devops news.

Web Operations Weekly
Weekly newsletter on Web operations, infrastructure, performance, and tooling, from the browser down to the metal.

Microservice Weekly
Weekly newsletter of articles regarding microservices.

Build secure Web applications by reading Iron-Clad Java

Building secure Web applications isn’t easy and involves many aspects that the development team has to consider and take into account. The “Iron-Clad Java: Building Secure Web Applications” book is a good starting point for learning concepts, tactics, patterns and anti-patterns to develop, deploy and maintain secure Java applications. At 304 pages the book is more about getting an overview and pointers for further reading and research, but it works quite nicely in that regard.

“Iron-Clad Java: Building secure Web applications”

As the name suggests, “Iron-Clad Java: Building Secure Web Applications” by Jim Manico and August Detlefsen is targeted at Java developers, and it is suitable reading also for less technical people in the team, like project managers and managers, as it doesn’t go too deeply into technical aspects or code. After reading the book even the managers should get an appreciation and the vocabulary to discuss security with their staff. The reader should get a solid understanding of what is wrong with many web apps in general and what corrective measures to take to mitigate the issues. The book was published in September 2014 and has 304 pages (ISBN-13: 978-0071835886).

The book covers topics like secure authentication and session management processes, access control design, defending against cross-site scripting (XSS) and cross-site request forgery (CSRF), protecting sensitive data while stored or in transit, preventing SQL injection, ensuring safe file I/O and upload, using effective logging, error handling and intrusion detection methods, and also a guide for the secure software development lifecycle (secure-SDLC). The topics are covered with both theory and practice, so that they are approachable for developers new to security and for those that might be a little inexperienced, while still providing useful nuggets for experienced developers.

For good and bad, the book gives somewhat opinionated answers about which techniques and tools you can use to address security issues, but overall the advice is solid, unbiased and more or less framework-agnostic, so the lessons learned should apply to any project. For me, writing examples with e.g. JSP and Struts makes me question whether the other tools the book suggests (which I wasn’t familiar with) are suitable for modern Java EE application development. Something to look into after reading the book. Also, OWASP seemed to have an answer to almost every security aspect.

Anyway, the book’s advice isn’t about using certain technologies but about giving you something to think about and educating you on security aspects in your Java Web application. What matters is that the book explains why you need to implement a specific control for a given issue, how you could do it and what the impacts are. This is what many security professionals miss when speaking to developers. The book tells you what the security problem is and then why and how you should fix it, so it makes sense to developers.

Taking care of Web application security isn’t just for architects and developers; it’s a topic whose importance the whole team should know and understand. “Iron-Clad Java: Building Secure Web Applications” gives a good overview of security and is suitable for the whole development team to read.

Book: Real World Java EE Night Hacks

Reading software development related books can be said to be unnecessary, as all the information can also be found on the Internet, but sometimes it’s easier to read all the related topics from one place. Adam Bien’s “Real World Java EE Night Hacks: Dissecting the Business Tier” is a book which walks through best practices and patterns used to create a Java EE 6 application, and it covers several important topics from architecture to performance and from monitoring to testing. The book has 167 pages with source code, so the topics are more about getting the idea than explaining it thoroughly. So if you’re new to Java EE 6 and patterns, this book is for you: it gets you started and gives you topics to research more.

Real World Java EE Night Hacks

“Real World Java EE Night Hacks” walks through best practices and patterns used to create a real-world Java EE application called “X-ray”, a high-performance blog statistics add-on for Apache Roller built with “vanilla” Java EE 6. It tells you about the core principles of Java EE like EJB 3.1, CDI, JPA, JTA, JAX-RS, dependency injection, convention over configuration, interceptors and transactions, and binds them together in the “X-ray” application with source code to follow. The book is also more than just Java EE, as it covers concepts like unit and integration testing, performance measuring and monitoring, continuous integration, real-time monitoring, and timers and batch processing.

The book is easy to read, although it isn’t for beginners, as it requires you to know the Java jargon and the main topics of Java EE. The book covers all the important topics regarding what you would need to know when building a Java EE application, but it doesn’t explain or cover them thoroughly. That’s understandable, as you would need more than one book to go through them all in sufficient detail. It’s more about telling you that there are these kinds of things to consider and how to apply them in a Java EE application. It’s a starting point for your own research. It would also have been nice to have more pictures and diagrams in it.

Overall, “Real World Java EE Night Hacks” is a decent book about implementing Java EE concepts and application architecture with best practices and patterns, but it still feels a bit meager, especially as the example isn’t an application you would first think of as a Java EE application.

Setting up Bower and Gulp in Windows

Doing things manually is fine once, but if you can automate things, it’s better. With little tools you can speed up development and reduce recurring mundane tasks such as starting a project or setting up boilerplate code. I recently came across Bower, which is a package manager for the web. With Bower you can fetch and install packages from all over, and it takes care of finding, downloading and saving the stuff you’re looking for. The other interesting tool to help you get going is Gulp, which enables you to automate and enhance your workflow. Let’s see how to put things together on Windows; nothing special, just steps to get you started.

Gulp tasks

Install Git

Bower needs Git to work, so first install Git if you don’t have it. I chose Git for Windows, which gives you BASH emulation used to run Git from the command line, a graphical user interface for using Git, and shell integration.

Just click through the installation.

Install Node.js

Bower depends on Node.js and npm, so you need to get Node.js. Just download the installation package from the Node.js site and click through it.

Install Bower

After you have Node.js installed, we can install Bower with npm. You might need to restart Windows to get all the path variables set up so npm can find the tools.

Open up Git Bash or the Command Prompt and install Bower globally by running the following command:

$ npm install -g bower

Once you have Bower installed, you can then install packages and dependencies using these commands:

# Install a local or remote package
$ bower install <package>
 
# Install a specific version of a package
$ bower install <package>#<version>
 
# Search packages
$ bower search <package>

By default packages will be put in the bower_components directory, which can be changed if you prefer. If you want your packages downloaded into js/libs, you can achieve this by creating a .bowerrc file.

.bowerrc

{
    "directory": "js/libs"
}

You can also create a bower.json file which allows you to define the packages needed along with dependencies, and then simply run bower install to download the packages. In our example we set up a simple Backbone.js application which uses Bootstrap.

bower.json

{
    "name": "Foobar",
    "version": "0.1.0",
    "dependencies": {
          "jquery": "~2.0.3",
          "underscore": "~1.5.0",
          "bootstrap": "~3.3.2",
          "backbone": "~1.1.2"
    }
}

Our bower.json describes that we want some JavaScript libraries, and as we have defined the versions with ~ they can receive bigger patch versions, e.g. the jquery version can be anything from 2.0.3 up to but not including 2.1.0. Read more about the semantic versioner for npm.

Now, after creating that file inside the app directory, you can run the following command:

$ bower install

After that you should see all your JavaScript packages under bower_components folder.

Install Gulp

To automate and enhance your workflow you can use Gulp, for example to copy the files where you want them. There are nice recipes which show how to benefit from Gulp.

Install Gulp globally with npm:

$ npm install --global gulp

Install Gulp also in your project devDependencies:

$ npm install --save-dev gulp

Now we can set up our Gulp dependencies, which pull from npm. Create a new package.json file in your project root, add an empty object, {}, and save it.

Next we install the gulp-bower plugin, which we can use to install Bower packages.

$ npm install --save-dev gulp-bower

This will install all the needed dependencies in a node_modules folder and also automatically update our package.json file with these dependencies.

Finally we need to set up the gulpfile.js which defines the tasks we want to perform. First we require what we installed in the npm step above and create a config object to hold various settings; bowerDir is just the path to bower_components. Then we add a task for running Bower, and a default task. Our bower task basically runs bower install, but by including it in the gulpfile other contributors only have to run gulp bower to have everything set up and ready.

gulpfile.js

var gulp = require('gulp'),
    bower = require('gulp-bower');
 
var config = {
    bowerDir: './bower_components'
};
 
// Run bower install and put the packages in the configured directory
gulp.task('bower', function() {
    return bower()
        .pipe(gulp.dest(config.bowerDir));
});
 
// Running plain `gulp` runs the bower task
gulp.task('default', ['bower']);

The default task runs the bower task, so all the user has to do to set up the needed packages is to run:

$ gulp

In our case running gulp just runs our bower task which downloads the JavaScript packages we need. Pretty simple.

Gulp is a powerful tool with many use cases, but it also needs some work to get everything running like you want, and even then you might need to make compromises. One crafty task for Gulp and Bower is customizing your Bootstrap theme. Mark Goodyear has also written a good article about getting started with gulp which shows some typical use cases.