Generating documentation as code with mermaid and PlantUML

Writing documentation is a task that isn't much liked, and especially with diagrams and flowcharts there's the problem of which tools to use. One crafty tool is Draw.io with its web and desktop editors, but what should you use if you want to write documentation as code, see the changes clearly in text format and maintain source-controlled diagrams? Two tools for drawing diagrams with human-readable text are mermaid and PlantUML.

mermaid

“Generation of diagrams and flowcharts from text in a similar manner as markdown.”

mermaid is a simple markdown-like script language for generating charts from text via JavaScript. You can try it in the live editor.

mermaid sequence diagram
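
The definition behind a diagram like the one above is just a few lines of text. A minimal sequence diagram sketch:

sequenceDiagram
    Alice->>John: Hello John, how are you?
    John-->>Alice: Great!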

You can write mermaid diagrams in any text editor but it's better to use an editor with plugins to preview your work. Markdown Preview Enhanced for Atom and VS Code can render both mermaid and PlantUML. There are also dedicated preview plugins for VS Code and Atom.

To preview a mermaid definition in VS Code with Markdown Preview Enhanced, press Cmd-Shift-P to open the Command Palette and select Markdown Preview Enhanced: Open Preview.

mermaid in VS Code with Markdown Preview Enhanced

To preview a mermaid definition in VS Code with Mermaid Preview, press Cmd-Shift-P to open the Command Palette and select Preview Mermaid Diagram.

mermaid in VS Code with Mermaid preview

Generating PNG images from mermaid definitions

To make use of mermaid diagrams elsewhere it's useful to export them to PNGs. You can use the mermaid.cli tool, which takes a mermaid definition file as input and generates an SVG/PNG/PDF file as output.

Install mermaid.cli locally:

npm install mermaid.cli

Generate PNG:

./node_modules/.bin/mmdc -i input.mmd -o output.png

If you have plenty of definition files you can use the following script to generate PNGs:

#!/usr/bin/env bash
 
# Render every mermaid definition under ./docs to a PNG
for mmd in ./docs/*.mmd
do
    filename="${mmd##*/}"
    echo "Generating $mmd"
    ./node_modules/.bin/mmdc -i "$mmd" -o "${filename%%.*}.png"
done

Alternatively you can use node_modules/mermaid/bin/mermaid.js $mmd, where $mmd is the mermaid file.

PlantUML diagrams

PlantUML is used to draw UML diagrams, using a simple and human-readable text description. Diagrams are defined using a simple and intuitive language (pdf) and images can be generated in PNG, SVG or LaTeX format.

You can use PlantUML to write e.g. sequence diagrams, use case diagrams, class diagrams, component diagrams, state diagrams and deployment diagrams.

PlantUML example diagram:

PlantUML diagram
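
The source for a sequence diagram is equally terse; a minimal sketch:

@startuml
Alice -> Bob: Authentication Request
Bob --> Alice: Authentication Response
@enduml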

A simple way to create and view PlantUML diagrams is to use Visual Studio Code with the Markdown Preview Enhanced plugin, which renders both PlantUML and mermaid diagrams. An alternative option is to use the PlantUML plugin.

To preview a PlantUML diagram in VS Code with Markdown Preview Enhanced, press Cmd-Shift-P to open the Command Palette and select Markdown Preview Enhanced: Open Preview.

PlantUML in VS Code with Markdown Preview Enhanced

There's an online demo server which you can use to view PlantUML diagrams. The whole diagram is compressed into the URL itself and the diagram data is stored in PNG metadata, so you can fetch it even from a downloaded image. For example, this link opens the PlantUML Server with a simple Authentication activity diagram.

Running PlantUML server locally

Although you can render PlantUML diagrams online, for usability and security reasons it's better to install a local server. This is especially important if you plan to generate diagrams with sensitive information. The easiest path is to run the PlantUML Server Docker container and configure localhost as the server.

docker run -d -p 8080:8080 plantuml/plantuml-server:jetty

In VS Code, open the user settings and configure the PlantUML plugin like this:

"plantuml.server": "http://localhost:8080",
"plantuml.render": "PlantUMLServer",

Now to preview a diagram in VS Code press Alt-D to start the PlantUML preview.

PlantUML preview in VS Code and local server

You can also generate diagrams from the command line. First download the PlantUML compiled Jar and run the following command, which will look for @startXXX in file1, file2 and file3. For each diagram, a .png file will be created.

java -jar plantuml.jar file1 file2 file3

The plantuml.jar needs Graphviz for dot (the graph description language) and on macOS you can install it from Homebrew: brew install graphviz.

For processing a whole directory, you can use the following command which will search for @startXXX and @endXXX in .c, .h, .cpp, .txt, .pu, .tex, .html, .htm or .java files of the given directories:

java -jar plantuml.jar "directory1" "directory2"

Maintain source-controlled diagrams as code

Documentation and drawing diagrams can be simple, and maintaining source-controlled diagrams with tools like PlantUML and mermaid is achievable. These tools are not like the behemoth of Sparx Enterprise Architect but provide a light and easy way to draw different diagrams for software development. You don't have to draw lines and position labels manually as they are magically added where they fit, and you get boxes and squares just as crude as from tools costing thousands of dollars more. Now the question is which tool to choose: PlantUML or mermaid?

Two days of React Finland 2018: Day two with React and React Native

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what's hot in the React world. The conference started with workshops and after that there were two days of talks about React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. This is the second part of my recap of the talks and my notes which I posted to Twitter. Check out also the first part of my notes from the first day's talks.

React Finland 2018, Day 2

How React changed everything — Ken Wheeler

The second day started with a keynote by Ken Wheeler. He examined how React changed the front end landscape as we know it and started with a nice time travel to the 90s with i.a. Flash, JavaScript and AngularJS. Most importantly the talk took a look at the core idea of React, why it transcends language or rendering target, and posited what that means going forward. And lastly we heard about async React: suspense and time slicing.

“Best part of React is the community”

How React changed everything

Get started with Reason — Nik Graf

The keynote also touched on Reason ML, and Nik Graf went into details, kicking off with the basics and going into how to leverage features like variant types and pattern matching to make impossible states impossible.

Get started with Reason ML

Making Unreasonable States Impossible — Patrick Stapfer

Building on “Get started with Reason”, Patrick Stapfer's talk went deeper into the world of variant types and pattern matching and put them into a practical context. The talk was nice learning-by-doing TicTacToe live coding. It showed how Reason ML helps you design solid APIs which are impossible for consumers to misuse. We also got more insights into practical ReasonReact code. The presentation is available on the Internet.

Conclusion about ReasonReact:

  • More rigid design
  • More KISS (keep it simple, stupid) than DRY (don’t repeat yourself)
  • Forces edge-cases to be handled

Learning Reason by doing TicTacToe

Reactive State Machines and Statecharts — David Khourshid

David Khourshid's talk about state machines and statecharts was interesting. A functional + reactive approach to state machines can make it much easier to understand, visualize, implement and automatically create tests for complex user interfaces and flows: model the code and automatically generate exhaustive tests for every possible permutation of it. Things mentioned: React Automata, xstate. Slides are available on the Internet.

“Model once, implement anywhere” – David Khourshid

The talk was surprisingly interesting, especially for testing use cases, as anything that makes testing better is good. This might be something to look into.

ReactVR — Shay Keinan

After the theory-heavy presentations we got into more visual stuff: React VR. Shay Keinan presented the core concepts behind VR, showed different demonstrations, how to get started with React VR and how to add new features from the Three.js library. React VR: Three.js + React Native = 360 and VR content. On the VR device side it was mentioned that Oculus Go and HTC Vive Focus are the big step to Virtual Reality.

“Virtual Reality’s possibilities are endless. Compares to lucid dreaming.” – Shay Keinan

WebVR enables web developers to create frictionless, immersive experiences and we got to see Solar demo and Three VR demo which were lit 🔥.

React VR

World Class experience with React Native — Michał Chudziak

I've briefly experimented with React Native so it was nice to listen to Michał Chudziak's talk about how to set up a friendly React Native development environment with the best DX, spot bugs at an early stage and deliver continuous builds to QA. Again Redux was dropped in favour of apollo-link-state.

Work close to your team – Napoleon Hill

What makes a good Developer eXperience?

  • stability
  • function
  • clarity
  • easiness

GraphQL was mentioned to be the holy grail of frontend development and a perfect match for React Native. Tools for better developer experience: Haul, CircleCI, Fastlane, ESLint, Flow, Jest, Danger, Detox. Other tips were i.a. to use the native IDEs (Xcode, Android Studio) as it helps debugging. Xcode Instruments helps debug performance (check iTunes for the video) and there's also Android Profiler.

World Class experience with React Native

React Finland App – Lessons learned — Toni Ristola

Every conference has to have an app and React Finland of course did a React Native app. Toni Ristola gave a lightning talk about lessons learned. Technologies used with React Native were Ignite, GraphQL and Apollo Client 👌 The app's source code is available on GitHub.

Lessons learned:

  • Have a designer in the team
  • Reserve enough time — doing and testing a good app takes time
  • Test with enough devices — publish alpha early

React Finland App – Lessons learned

React Native Ignite — Gant Laborde

80% of mobile app development is the same old song which can be cut short with Ignite CLI. Using Ignite, you can jump into React Native development with a popular combination of technologies, or brew your own. Gant Laborde talked about the new Bowser version which makes things even better with Storybook, TypeScript, Solidarity, mobx-state-tree and lint-staged. Slides can be found on the Internet.

Ignite

How to use React, webpack and other buzzwords if there is no need — Varya Stepanova

Varya Stepanova's lightning talk suggested starting a side project other than a ToDo app to study new development approaches, and showed what that can be in React. The example was how to generate a multilingual static website using Metalsmith, React and other modern technologies and tools which she uses to build her personal blog. Slides can be found on the Internet.

Doing meaningful side projects is a great way to study new things, and I've used it for i.a. learning Swift with the Highkara newsreader, doing a couple of apps for Sailfish OS and playing with GraphQL and microservices while developing an app with a largish vehicle dataset.

After party

Summary

Two days full of talks about React, React Native, React VR and all the things that go with developing web applications with them was a great experience. The days were packed with great talks and new information, and everything went smoothly. The conference was nicely organized, the food was good and participants got soft hoodies to go with the Allas Sea Pool ticket. The talks were all great but especially “World Class experience with React Native” and “React Native Ignite” gave new inspiration to write an app. Also “ReactVR” seemed interesting although I think Augmented Reality will be a bigger thing than Virtual Reality. It was nice to hear from “The New Best Practices” talk that there really are no new best practices as the old ones still work. Just use them!

Something to try and even take into production will be Immer, styled-components and Next.js. One thing which is easy to implement is to start using lint-staged, although we are linting all the things already.

One of the conference organizers and a speaker, Juho Vepsäläinen, wrote Lessons Learned from the conference and many of the points he mentions are spot on. The food was nice but “there wasn't anything substantial for the afternoon break”. There wasn't anything to eat after lunch but luckily I had my own snacks. Vepsäläinen also mentions that “there was sometimes too much time between the presentations” but I think the longer breaks between some presentations were nice for having a quick stroll outside and some fresh air. The venue was quite warm and the air wasn't so good in the afternoon.

The afterparty at Sea Life Helsinki was an interesting choice and it worked nicely although there weren't so many people there. The aquarium was a fishy experience and provided some other content than refreshments. Too bad I didn't have time to go and check out the Allas Sea Pool for which we got a free ticket. Maybe next time.

Thanks to the conference crew for such a good event and of course to my fellow Goforeans who attended it and had a great time!

Two days of React Finland 2018: Day one topics of React

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what's hot in the React world. The conference started with workshops and after that there were two days of talks about React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. Here's the first part of my notes from the talks which I posted to Twitter. Read also the second part with more of React Native.

React Finland 2018: Day 1

React Finland combined the Finnish React community with international flavor from Jani Eväkallio to Ken Wheeler and other leading talents of the community. The event was the first of its kind in Finland and consisted of a workshop day and two days of talks around the topic. It was nice that the event was single track so you didn’t need to choose between interesting talks.

At work I've been developing with React for a couple of years and tried my hand at React Native, so the topics were familiar. The conference provided crafty new knowledge to learn from and maybe even put into production. Overall the conference was a great experience and everything went smoothly. Nice work from the React Finland conference team! And of course thanks to Gofore which sponsored the conference and got me a ticket.

I tweeted my notes from almost every presentation and here’s a recap of the talks. I heard that the videos from the conference will be available shortly.

The New Best Practices — Jani Eväkallio

The first day's keynote was by Jani Eväkallio who talked about “The New Best Practices”. As the talk description put it: “When React was first introduced, it was ridiculed for going against established web development best practices as we knew them. Five years later, React is the gold standard for how we create user interfaces. Along the way, we've discovered a new set of tools, design patterns and programming techniques.”

The new best practices were:

  • Build big things from small things
  • Write code for humans first: flow, Typescript, storybook
  • Stay close to the language:
    • helps i.a. linters
  • Always prefer simplicity
  • Don’t break things:
    • Facebook makes React API changes easy to upgrade: deprecation well in advance, migration paths, documentation. It's a flow, not versions. Use codemod.
  • Keep an open mind

You ask “what new best practices”? Yep, that's the thing. We don't need new best practices as the same concepts like Model-View-Controller and separation of concerns are still valid. We should use best practices which have been proven good before, as they also work nicely with the React philosophy. Eväkallio also talked about why React will be around for a long time: components and interoperable components are an innovation primitive.

The New Best Practices

Declarative state and side effects — Christian Alfoni

After the keynote it was time to get more practical, and Christian Alfoni talked about how we can get help writing our business logic in a declarative manner and what benefits that gives us. He talked about lessons learned refactoring Codesandbox.io from Redux to Cerebral, and about Cerebral which provides declarative state and side effects management for popular JavaScript frameworks. The talk slides are available on the Internet.

Alfoni also pointed to Turning the database inside out with Apache Samza, and noted that Cerebral had time travel before Dan Abramov presented Live React in his talk Hot Reloading with Time Travel at react-europe 2015.

Immer: Immutability made easy — Michel Weststrate

Immutable data structures are a good thing, and Michel Weststrate showed Immer, a tiny package that allows you to work with immutable data structures with unprecedented ease. Managing the state of a React app is a huge deal with Redux and any help is welcome. “Immer doesn't require learning new data structures or update APIs, but instead creates a temporary shadow tree which can be modified using the standard JavaScript APIs. The shadow tree will be used to generate your next immutable state tree.”

The talk showed how to write your reducers in a much more readable way, with half the code and without requiring additional large libraries. The talk slides are available on the Internet.
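
To illustrate the idea, a minimal sketch of Immer's produce (the state shape here is made up):

import produce from "immer"
 
const baseState = { todos: ["buy milk"], done: false }
 
// You "mutate" the draft freely; Immer records the changes and
// returns a new immutable state, leaving baseState untouched.
const nextState = produce(baseState, draft => {
  draft.done = true
  draft.todos.push("write reducer")
})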

Get Rich Quick With React Context — Patrick Hund

The “Get Rich Quick With React Context” lightning talk by Patrick Hund didn't tell how good the job opportunities are when doing React, but how the context API has been completely revamped in React 16.3. He demonstrated a good use case: putting ad placements on your web page to get rich quick! Another use case is localization. Check out the slides which will tell you how easy it is to use context now and how to migrate your old context code to the new API.

There’s always a better way to handle localization — Eemeli Aro

The “There's always a better way to handle localization” lightning talk by Eemeli Aro told how localization is a ridiculously difficult problem in the general case, but in the specific case you can get away with really simple solutions, especially if you understand the compromises you're making.

I must have been dozing, as all I got was that there are also other options for storing localizations than JSON, like YAML and the JavaScript property format, especially when dealing with non-developers like translators. The talk was quite general and on an abstract level, and the mentioned solutions for localization were react-intl, react-i18next and react-message-context.

Styled Components, SSR, and Theming — Kasia Jastrzębska

Web applications need to be styled, and Kasia Jastrzębska talked about CSS-in-JS with styled-components, going through the new API, performance improvements and server side rendering with Next.js. She also showed the theming manager available with v2 of styled-components. Talk slides are available on the Internet.

The takeaways from this talk were that CSS in a React app can be written as you always have, or by using CSS-in-JS solutions. There are several benefits to using styled-components but I'm still thinking about how styles get scattered all over components.

Universal React Apps Using Next.js — Sia Karamalegos

53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
DoubleClick by Google, 2016

Every user's hardware is different and processing speed can hinder user experience on client-side rendered React applications, so Sia Karamalegos talked about how server-side rendering and code-splitting can drastically improve user experience by minimizing the work the client has to do. Performance and how you ship your code matter. The talk showed how to easily build universal React apps using the Next.js framework and walked through the concepts and code examples. Talk slides are available on the Internet.

There are lots of old (mobile) devices which especially benefit from server-side rendering. Next.js is a minimalistic framework for universal, server-rendered (or statically pre-rendered) React applications which enables faster page loads. Pages are server-rendered by default for the initial load, you can enable prefetching of future routes and there's automatic code splitting. It's also customizable so you can use your own Babel and Webpack configurations and customize the server API with e.g. Express. And if you don't want to use a server, Next.js can also build static web apps that you can then host on GitHub Pages or AWS S3.

Universal React apps using Next.js
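
To give an idea of how little is needed, a Next.js page is just a React component exported from a file under pages/ (a minimal sketch):

// pages/index.js: server-rendered on first load, code-split automatically
export default () => <h1>Hello, universal React!</h1>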

State Management in React Apps with Apollo Client — Sara Vieira

Apollo Client was one of the most mentioned frameworks in the conference along with Reason ML, and Sara Vieira gave an energetic talk about how to use it for state management in React apps. If you haven't come across Apollo Client, it's a caching GraphQL client which helps you manage data coming from the server. Vieira showed how to manage local state with apollo-link-state.

The talk was fast paced and I somewhat missed the why part, but at least it's easy to set up: yarn add apollo-boost graphql react-apollo. Have to see the slides and demo later. Maybe the talk can be wrapped up as: “GQL all the things” and “I don't like Redux” :D

State Management with Apollo

Detox: A year in. Building it, Testing with it — Rotem Mizrachi-Meidan

The Detox testing framework for React Native talk by Rotem Mizrachi-Meidan was the other talk I dozed through. Mizrachi-Meidan talked about what developing and using Detox in production has taught them, how Detox works and what makes it deterministic. The talk showed how mobile apps can be tested. There's a video of an earlier talk on the Internet.

Detox

Make linting great again! — Andrey Okonetchnikov

One thing in software development which always gets developers arguing over stupid things is code formatting and linting. Andrey Okonetchnikov talked about how “with a wrong workflow linting can be really a pain and will slow you and your team down but with a proper setup it can save you hours of manual work reformatting the code and reducing the code-review overhead.”

The talk was a quick introduction to how 🚫💩 lint-staged, a Node.js library, can improve developer experience. A small tool coupled with tools that analyze and improve the code, like ESLint, Stylelint, Prettier and Jest, can make a big difference.

Missed talks

There were also two talks I missed: “Understanding the differences is accepting” by Sven Sauleau and “Why I YAML” by Eemeli Aro. Sauleau showed “interesting” twists of the JavaScript language.

Read also the second part with more of React Native.

Build Monitor with Raspberry Pi and Touch Screen

Information is a great tool in software development and it's useful to have easy access to it. The more obvious you make your problems, the harder they are to ignore. The more attention they get, the quicker they get solved. One thing developers like to monitor in software development is continuous integration status along with metrics from running services. And what better way to achieve visibility and visualize those metrics than building an information radiator.

I didn't want to reinvent the wheel, so I got a Raspberry Pi 3 Model B with accessories and a 7″ touch screen to base my project on. Using a Raspberry Pi as an information radiator isn't a new idea and the Internet is full of examples of different adaptations with screens, lights, bells and whistles. For a start we just visualized our Jenkins builds and a Grafana dashboard, but later on we will probably do a custom dashboard.

Setting up the base

The information radiator is easy to get running as you only need a computer which preferably runs Linux. You can use an old laptop and attach it to an external screen, or if you're like me and want to tinker you can get e.g. a Raspberry Pi 3 and couple it with a small external screen for portability. A nice and low cost solution which gets you some hacker value. I got the Rpi from our local hardware store and unfortunately the Model B+ was released on that very same day. The extra 15% power, 5 GHz WiFi and less heat and throttling would've been nice.

Raspberry Pi 3 Model B and accessories

I got the Raspberry Pi starter package with the official case, power supply, HDMI cable and a MicroSD card preloaded with NOOBS. So I just needed to connect the cables, put the SD card in and click to install Raspbian. Another interesting operating system would've been FedBerry, which provides Fedora ‘Minimal, XFCE and LXQt’ remixes.

For the screen I used a 7″ IPS 5-point touch screen for Raspberry Pi with 1024×600 resolution and HDMI from joy-it, codename RB-LCD-7-2. Initially I thought I could install the whole system with this display, but as it turned out the Rpi doesn't understand it out of the box. It just showed some white noise and interference. Luckily someone had already solved this and I got the right config after I had installed Raspbian with a real monitor.

Joy-it touch screen with default settings

Edit your /boot/config.txt:

# uncomment to force a specific HDMI mode (this will force VGA)
hdmi_group=2
hdmi_mode=87
 
# Add line:
hdmi_cvt=1024 600 60 3 0 0 0

And reboot your Raspberry Pi after those changes.

You should also run $ sudo raspi-config to set up, for example, the WiFi country (to allow channels 12 and 13) and your current timezone.

I also updated Raspbian, which bumps it to the rpi-4.14.y Linux tree:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo rpi-update

To connect to the Rpi with SSH, enable it with raspi-config > Interfacing Options or just:

$ sudo systemctl enable ssh
$ sudo systemctl start ssh

Note: by default the user pi has the password raspberry. You should change it, but if you want to remove the nagging about the default password, do the following:

$ sudo apt-get -y purge pprompt
$ sudo rm /etc/profile.d/sshpwd.sh
$ sudo rm /etc/xdg/lxsession/LXDE-pi/sshpwd.sh

Problems with WiFi connection

I set up the Raspberry Pi at our local office and at home and there were no problems with the WiFi connection. But when I brought it to the customer premises the WiFi connection was weak and practically couldn't move a bit. My MacBook worked fine but it was connected to a 5 GHz network which isn't an option with the Rpi 3 Model B. The WiFi on the Rpi 3 was using channel 11 on 802.11i with WPA2, as shown by iwlist wlan0 scan.

There is a thread on the Raspberry Pi forum about Very poor wifi performance which suggests setting up the WiFi internationalisation correctly to allow channels 12 and 13. At one point the issue was that only channels 1-11 were available on the Rpi 3, but checking out the ‘next’ branch of the firmware/kernel (sudo BRANCH=next rpi-update) apparently fixed channels 12/13. I was on kernel 4.9.80 so it wasn't a problem for me. The other suggested problem is with Atheros chipset based routers which don't like the Broadcom WiFi on the Rpi 3.

For some, disabling power management solves the connection issues. For the RPi's built-in Broadcom (Cypress) WiFi there's no control for power management and it's disabled by the kernel; iw / iwlist / iwconfig showing “Power Management:on” is a bug.

But nevertheless, switching it off made my WiFi connection better, though its strength of course didn't change.

$ sudo iwconfig wlan0 power off

To make it permanent you can add an if-up.d script like this:

$ sudo touch /etc/network/if-up.d/wlan0
$ sudo chmod +x /etc/network/if-up.d/wlan0
$ echo -e '#!/bin/bash\niwconfig wlan0 power off' | sudo tee /etc/network/if-up.d/wlan0

Accessing Raspberry Pi remotely

The information radiator is usually connected to a TV with no keyboard or mouse attached, so accessing it remotely is useful. You can use x11vnc, which allows you to VNC into a headless Pi with a VNC client like Apple Remote Desktop, RealVNC's vncviewer or Homebrew's tiger-vnc.

$ sudo apt-get install ttf-mscorefonts-installer
$ sudo apt-get install x11vnc

To start x11vnc automatically create new or edit existing ~/.xsessionrc file:

$ cat ~/.xsessionrc
/usr/bin/x11vnc -noxrecord -noxdamage -forever -bg -rfbport 5900
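
Then connect from another machine with any VNC client; on macOS the built-in Screen Sharing works too (the hostname is whatever your Pi is called on the network):

$ open vnc://raspberrypi.local:5900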

Getting interesting things on the screen

To test our setup and quickly show some data, I just added a Build Monitor view in Jenkins and another view with the Dashboard view. I also configured the Rpi to automatically start the Chromium browser in kiosk mode after reboots and directed it to the Jenkins website so there would be no need for interaction to get things on the screen. To show several sources of data and get things running quickly without a customized information radiator, we used the Revolver – Tabs Chromium extension to rotate between multiple browser tabs: one showed the Jenkins Build Monitor, another a Grafana dashboard and a third a Twitter feed.

To automatically start chromium-browser after the Raspbian desktop starts, edit the following lxsession file:

$ cp /etc/xdg/lxsession/LXDE-pi/autostart /home/pi/.config/lxsession/LXDE-pi/autostart
$ vim /home/pi/.config/lxsession/LXDE-pi/autostart
 
#@xscreensaver -no-splash  # comment this line out to disable screensaver
# Disable Xsession from blanking
@xset s off
@xset -dpms
@xset s noblank
 
@sh ./autostart.sh
# load chromium after boot and point to the localhost webserver in full screen mode
@chromium-browser --kiosk --no-default-browser-check --no-first-run --disable-infobars "http://localhost/"

Chromium has a feature to show a “Restore pages” nagging popup when it wasn't gracefully shut down, and you can try the following Stack Overflow suggestion. Also suggested was doing “chmod 001 ~/.config/chromium/Default/Preferences” but that results in another nagging window.

$ cat ./autostart.sh
#!/bin/sh
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' ~/.config/chromium/'Local State'
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/; s/"exit_type":"[^"]\+"/"exit_type":"Normal"/' ~/.config/chromium/Default/Preferences

You could use the “--restore-last-session” or “--incognito” parameter, which also works, but it has several disadvantages, such as disabling the cache and login information. Or maybe I should just use Firefox.

Raspberry Pi and Jenkins Build Monitor

It might also be useful to set Chromium to restart every night. When running Chromium for longer periods it may fill the Rpi's memory with garbage, after which it must be hard rebooted.
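
A blunt but effective way to do this is a nightly reboot from root's crontab (a sketch; pick a time when nobody's watching the screen):

$ sudo crontab -e
 
# Reboot every night at 04:00 to keep Chromium fresh
0 4 * * * /sbin/shutdown -r now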

Turning the monitor on and off automatically

When running the Rpi as a wall monitor it's useful to save energy and extend the life of your monitor by turning it on and off on a daily schedule. You can do this by running a cron script. Get this script, put it in /home/pi/rpi-hdmi.sh and make it executable: chmod +x /home/pi/rpi-hdmi.sh. Call the script at the desired time with a cron entry:

$ crontab -e
 
# Turn HDMI Off (17:00)
0 17 * * * /home/pi/rpi-hdmi.sh off
 
# Turn HDMI On (7:30)
30 7 * * * /home/pi/rpi-hdmi.sh on

If you have problems with the above script, try this one, which is the original, but change “curr_vt=`fgconsole`” to “curr_vt=`sudo fgconsole`” as fgconsole needs sudo privileges; otherwise you get the error “Couldn't get a file descriptor referring to the console”.

From a simple dashboard to a real information radiator

Showing just a Jenkins Build Monitor or Grafana dashboards is simple, but to get more information about things running you could show the success rate of the builds, build health, the latest open pull request and project Twitter messages. One nice example of an information radiator is Panic's Status Board.

There are different ways to create a customizable dashboard, and one way is to use Dashing, which is a dashboard framework. To get a head start you can look at Project Dashboard, https://github.com/martin-naumann/project-cockpit, which shows “build health” for the latest build, the latest open pull request in GitHub, the success rate of your builds, some free-form text and your project's or company's logo. It uses the Jenkins API to get the ratio of successful / non-successful builds as well as the latest build state. It also calls the GitHub API to get the latest pull request and display the name and picture of the author and the title of the pull request on the dashboard.

For more leisurely use you can set up the Raspberry Pi as a wall display to show information like a calendar, weather, photos and RSS feeds. One option is to use Dakboard, which is a web interface used to display information and is quite configurable with different services. At first Dakboard seems nice but it's quite limited in what data it can show, and some useful features are premium. Another open source option is MagicMirror², which seems to be more modular and extensible (as you can create your own modules) but needs more tinkering.

Extracting JSON value from command line with jq and Python

When developing modern web applications you often find yourself checking REST API responses and parsing JSON values. You can do it with a combination of Unix tools like sed, cut and awk, but if you're allowed to install extra tools or use Python then things get easier. This post shows a couple of options for extracting JSON values with Unix tools.

There are a number of tools specifically designed for manipulating JSON from the command line, and they will be a lot easier and more reliable than doing it with awk. One of those tools is jq, as shown on Stack Overflow. You can install it on macOS from Homebrew: brew install jq.

$ curl -s 'https://api.github.com/users/walokra' | jq -r '.name'
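
jq expressions can also pick several fields from the same response; for example, using fields present in the GitHub users API:

$ curl -s 'https://api.github.com/users/walokra' | jq -r '.name, .company'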

If you're limited to tools that are likely installed on your system, such as Python, using the json module gives you the benefit of a proper JSON parser while avoiding extra dependencies.

Python:

$ curl -s 'https://api.github.com/users/walokra' | \
    python -c "import sys, json; print(json.load(sys.stdin)['name'])"

The Stack Overflow answers to the question “Parsing JSON with Unix tools” show you other options with standard tools like sed, cut and awk, and more exotic options with Perl, Node.js and PHP.

Git pre-commit and pre-receive hooks: validating YAML

Software development has many steps which you can automate, and one useful thing to automate is adding Git commit hooks to validate your commits to version control. Hooks fire off custom client-side and server-side scripts when certain important actions occur. Validating committed files' contents is important for syntax validity, and even more so when providing Spring Cloud Config configurations in YAML for microservices, as otherwise things fail.

Validating YAML can be done by using yamllint and hooking it to pre-commit or pre-receive. It not only checks for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces and indentation. Here's a short overview to get started with yamllint on Git commit hooks.

Quickstart for yamllint

Installing yamllint

On Fedora / CentOS:
$ sudo dnf install yamllint
 
using pip, the Python package manager:
$ sudo pip install yamllint
 
or as in macOS
$ sudo -H python -m pip install yamllint

You can also install yamllint from sources when e.g. network connectivity is limited. The linter depends on pathspec >=0.5.3 and pyyaml >= 3.12.

Custom config

yamllint is quite strict with validation and you might want to make it a bit more relaxed with custom configuration. For example, I need to allow long lines. You can also disable checks for a specific line with a comment, as shown below.
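
For instance, a single over-long line can be whitelisted with yamllint's disable-line directive:

# yamllint disable-line rule:line-length
long_key: this single line is allowed to be longer than the configured limit

A custom config that relaxes the defaults looks like this: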

$ cat yamllint-config.yml
 
extends: default
 
rules:
  line-length: disable
  comments:
    require-starting-space: false

Usage

$ yamllint file.yml other-file.yaml

Usage with custom config:

$ yamllint -c yamllint-config.yml .

Or with custom config without config file:

$ yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" file.yaml

Or more specific case like running yamllint in Jenkins job’s workspace and validating files with specific suffix:

$ find . -type f -iname '*.j2' -exec yamllint -s -c yamllint-config.yaml {} \;

Pre-commit hook and yamllint

A better way to use yamllint is to integrate it with e.g. git and a pre-commit or pre-receive hook. Adding yamllint to a pre-commit hook is easy with pre-commit, which is a framework for managing and maintaining multi-language pre-commit hooks.

Installing pre-commit:

Using pip:
$ pip install pre-commit
 
Or on macOS:
$ brew install pre-commit

To enable the yamllint pre-commit plugin you just add a file called .pre-commit-config.yaml to the root of your project and add the following snippet to it:

$ cat .pre-commit-config.yaml
---
- repo: https://github.com/adrienverge/yamllint.git
  sha: v1.10.0
  hooks:
    - id: yamllint

With custom config and strict mode:

$ cat .pre-commit-config.yaml
---
repos:
 - repo: https://github.com/adrienverge/yamllint.git
   sha: v1.10.0
   hooks:
     - id: yamllint
       args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']

You can also use repository-local hooks when e.g. it makes sense to distribute the hook scripts with the repository. Install yamllint locally and configure it in your project root's .pre-commit-config.yaml as a repository-local hook. As you can see, I'm using a custom config for yamllint.

$ cat .pre-commit-config.yaml
---
- repo: local
  hooks: 
  - id: yamllint
    name: yamllint
    entry: yamllint -c yamllint-config.yml .
    language: python
    types: [file, yaml]

Note: if you're linting files with a suffix other than yaml/yml, like Ansible template files with a .j2 suffix, then use types: [file].
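
With the configuration in place, remember that the hook still has to be installed into the repository, and you can also run it against all files manually:

$ pre-commit install
$ pre-commit run --all-files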

Pre-receive hook and yamllint

Using pre-commit hooks to process commits is easy, but often doing the checks on the server side with pre-receive hooks is better. Pre-receive hooks are useful for satisfying business rules, enforcing regulatory compliance and preventing certain common mistakes. Common use cases are to require commit messages to follow a specific pattern or format, lock a branch or repository by rejecting all pushes, prevent sensitive data from being added to the repository by blocking keywords, patterns or filetypes, and prevent a PR author from merging their own changes.

One example of a pre-receive hook is to run a linter like yamllint to ensure that a business critical file is valid. In practice the hook works similarly to a pre-commit hook, but the files you check in to the repository are not kept there “just like that”. Some of them are stored as deltas to others, or their contents are compressed. There is no place where these files are guaranteed to exist in their “ready-to-consume” state, so you must jump through some extra hoops to get your files available for opening and running checks on.

There are different approaches to making files available to the pre-receive hook's script, as Stack Overflow describes. One way is to check out the files in a temporary location, or if you're on Linux you can just point to /dev/stdin as the input file and put the files through a pipe. Both ways have the same principle: check the modified files between the new and the old revision and, if the files are present in the new revision, run the validation script with a custom config.

Using /dev/stdin trick in Linux:

#!/usr/bin/env bash
 
set -e
 
ENV_PYTHON='/usr/bin/python'
YAMLLINT='/usr/bin/yamllint'
 
# Check that both python and yamllint are actually installed
if [[ ! -x "$ENV_PYTHON" || ! -x "$YAMLLINT" ]]; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# Pre-receive hooks get the revisions on stdin: <oldrev> <newrev> <refname>
while read oldrev newrev refname; do
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    git diff --name-only $oldrev $newrev | while read file; do
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | egrep '\.yml$' | awk '{ print $3 }'`
        # If it's not present, then continue to the next iteration
        if [ -z "${object}" ];
        then
            continue;
        fi
 
        # Get file in commit and point /dev/stdin as input file 
        # and put the files through pipe for syntax validation
        echo $file
        git show $newrev:$file | /usr/bin/yamllint -d "{extends: relaxed, rules: {line-length: disable, comments: disable, trailing-spaces: disable, empty-lines: disable}}" /dev/stdin || exit 1
    done
done

Alternative way: copy changed files to temporary location

#!/usr/bin/env bash
 
set -e
 
ENV_PYTHON='/usr/bin/python'
COMMAND='/usr/bin/yamllint'
TEMPDIR=`mktemp -d`
 
# Check that both python and yamllint are actually installed
if [[ ! -x "$ENV_PYTHON" || ! -x "$COMMAND" ]]; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# Pre-receive hooks get the revisions on stdin: <oldrev> <newrev> <refname>
while read oldrev newrev refname; do
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    files=`git diff --name-only ${oldrev} ${newrev}`
 
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Iterate over each of these files
    for file in ${files}; do
 
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | awk '{ print $3 }'`
 
        # If it's not present, then continue to the the next itteration
        if [ -z ${object} ]; 
        then 
            continue; 
        fi
 
        # Otherwise, create all the necessary sub directories in the new temp directory
        mkdir -p "${TEMPDIR}/`dirname ${file}`" &>/dev/null
        # and output the object content into its original file name
        git cat-file blob ${object} > ${TEMPDIR}/${file}
 
    done;
done
 
# Now loop over each file in the temp dir to parse them for valid syntax
files_found=`find ${TEMPDIR} -name '*.yml'`
for fname in ${files_found}; do
    ${COMMAND} ${fname}
    if [[ $? -ne 0 ]];
    then
      echo "ERROR: parser failed on ${fname}"
      BAD_FILE=1
    fi
done;
 
rm -rf ${TEMPDIR} &> /dev/null
 
if [[ $BAD_FILE -eq 1 ]]
then
  exit 1
fi
 
exit 0

Testing a pre-receive hook locally is a bit more difficult than a pre-commit hook, as you need an environment where you have the remote repository. Fortunately you can use the process described for GitHub Enterprise pre-receive hooks: you create a local Docker environment to act as a remote repository that can execute the pre-receive hook.

Expanding your horizons on JVM with Kotlin

The power of the Java ecosystem lies in the Java Virtual Machine (JVM), which runs a variety of programming languages that are better suited to some tasks than Java. One relatively new JVM language is Kotlin, a statically typed programming language that targets the JVM and JavaScript. You can use it with Java, Android and the browser, and it's 100% interoperable with Java. Kotlin is open source (Apache 2 License) and developed by a team at JetBrains. The name comes from Kotlin Island, near St. Petersburg. The first officially stable release, Kotlin v1.0, was released on February 15, 2016.

Why Kotlin?

“Kotlin is designed to be an industrial-strength object-oriented language, and to be a better language than Java but still be fully interoperable with Java code, allowing companies to make a gradual migration from Java to Kotlin.” – Kotlin, Wikipedia

Kotlin's page summarizes the question “Why Kotlin?” as:

  • Concise: Reduce the amount of boilerplate code you need to write.
  • Safe: Avoid entire classes of errors such as null pointer exceptions.
  • Versatile: Build server-side applications, Android apps or frontend code running in the browser. You can write code in Kotlin and target JavaScript to run on Node.js or in browser.
  • Interoperable: Leverage existing frameworks and libraries of the JVM with 100% Java Interoperability.

“You can write code that’s more expressive and more concise than even a scripting language, but with way fewer bugs and with way better performance.” – Why Kotlin is my next programming language

One of the obvious applications of Kotlin is Android development, as the platform uses Java 6, although it can use most of Java 7 and some backported Java 8 features. Only the recent Android N, which changed to use OpenJDK, introduces support for Java 8 language features.

For Java developers one significant feature in Kotlin is higher-order functions, functions that take functions as parameters, which make functional programming more convenient than in Java; see the sketch below. But in general, I'm not so sure using Kotlin is as beneficial compared to Java 8. It smooths off a lot of Java's rough edges, makes code leaner and costs nothing to adopt (other than using IntelliJ IDEA), so it's at least worth trying. And if you're stuck with legacy code and can't upgrade from Java 6, I would jump right in.
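
A minimal sketch of a higher-order function (the names are made up for illustration):

// A higher-order function: takes another function as a parameter
fun twice(x: Int, op: (Int) -> Int): Int = op(op(x))
 
fun main(args: Array<String>) {
    println(twice(3) { it + 1 })  // prints 5
}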

Learning Kotlin

Coming from a Java background, Kotlin at first glance looks a lot leaner, more elegant and simpler, and the syntax is familiar if you've written Swift. To get to know the language it's useful to do some Kotlin Examples and Koans, which walk you through how it works. There's also a “Convert from Java” tool which is useful for seeing how Java classes translate to Kotlin. For more detailed information you can read the complete reference to the Kotlin language and the standard library.

If you compare Kotlin to Java you see that null references are controlled by the type system, there are no raw types, arrays are invariant (you can't assign an Array<String> to an Array<Any>) and there are no checked exceptions. Also semicolons are not required, and there are no static members, non-private fields or wildcard types.
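
As a quick sketch of what compiler-checked null safety looks like in practice (findName is a made-up function):

fun findName(): String? = null  // the return type is explicitly nullable
 
fun main(args: Array<String>) {
    val name: String? = findName()
    val length = name?.length ?: 0  // safe call plus Elvis default instead of an NPE
    println(length)                 // prints 0
}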

And what does Kotlin have that Java doesn't? For starters there's null safety, smart casts and extension functions, plus lots of things Java only got in recent versions, like streams and lambdas (which are “expensive” in Java). On the other hand Kotlin targets Java 6 bytecode and doesn't use some of the improvements in Java 8 like invokedynamic for lambda support. Some JDK7/8 features are going to be included in the standard library in 1.1, and in the meantime you can use the small kotlinx-support library. It provides extension and top-level functions to use JDK7/JDK8 features such as calling default methods of collection interfaces and using an extension for AutoCloseable.

You can also call Java code from Kotlin, which makes it easier to write Kotlin alongside Java if you want to utilize it in an existing project and write some part of your codebase in Kotlin.

The Kotlin Discuss forum is also a nice place to read about experiences of using Kotlin.

Tooling: in practice IntelliJ IDEA

You can use a simple text editor and compile your code from the command line, or use build tools such as Ant, Gradle and Maven, but good IDEs make development more convenient. In practice, using Kotlin is easiest with JetBrains IntelliJ IDEA, and you can use their open source Community edition for free. There's also an Eclipse plugin for Kotlin, but naturally it's much less sophisticated than the IntelliJ support.

Example project

The simplest way to start with a Kotlin application is to use Spring Boot's project generator, add your dependencies, choose Gradle or Maven and click “Generate Project”.

There are some gotchas with using Spring and Kotlin together which can be seen in the Spring + Kotlin FAQ. For example, by default classes are final and you have to mark them as “open” if you want the standard Java behaviour. This is useful to know with @Configuration classes and @Bean methods. There's also Kotlin Primavera, which is a set of libraries to support Spring portfolio projects.

For an example Spring Boot + Kotlin application you should look at the Spring.io writeup where they build a geospatial messenger with Kotlin, Spring Boot and PostgreSQL.

What does Kotlin look like compared to Java?
Here's a simple example of using Java 6, Java 8 and Kotlin to filter a Map and return a String. Notice that Kotlin and Java 8 are quite similar.

# Java 6
String result = "";
for (Map.Entry<Integer, String> entry : someMap.entrySet()) {
	if ("something".equals(entry.getValue())) {
		result += entry.getValue();
	}
}
 
# Java 8
String result = someMap.entrySet().stream()
		.filter(map -> "something".equals(map.getValue()))
		.map(map->map.getValue())
		.collect(Collectors.joining());
 
# Kotlin
val result = someMap
  .values
  .filter { it == "something" }
  .joinToString("")
 
# Kotlin, shorter 
val str = "something"
val result = str.repeat(someMap.count { it.value == str })
 
# Kotlin, more efficient with large maps where only some entries match.
val result = someMap
  .asSequence()
  .map { it.value }
  .filter { it == "something" }
  .joinToString("")

The last Kotlin example makes the evaluation lazy by changing the map to a sequence. In Kotlin, collection map/filter methods aren't lazy by default but always create a new collection. So calling filter after the values method isn't as efficient with large maps where only some elements match the predicate.

Using Java and Kotlin in same project

To start with Kotlin it's easiest to mix it into an existing Java project and write some classes in Kotlin. Using Kotlin in a Maven project is explained in the Reference, and to compile mixed-code applications the Kotlin compiler should be invoked before the Java compiler. In Maven terms that means kotlin-maven-plugin should run before maven-compiler-plugin.

Just add the kotlin-stdlib dependency and kotlin-maven-plugin to your pom.xml as follows:

<dependencies>
    <dependency>
        <groupId>org.jetbrains.kotlin</groupId>
        <artifactId>kotlin-stdlib</artifactId>
        <version>1.0.3</version>
    </dependency>
</dependencies>
 
<plugin>
    <artifactId>kotlin-maven-plugin</artifactId>
    <groupId>org.jetbrains.kotlin</groupId>
    <version>1.0.3</version>
    <executions>
        <execution>
            <id>compile</id>
            <phase>process-sources</phase>
            <goals> <goal>compile</goal> </goals>
        </execution>
        <execution>
            <id>test-compile</id>
            <phase>process-test-sources</phase>
            <goals> <goal>test-compile</goal> </goals>
        </execution>
    </executions>
</plugin>

Notes on testing

Almost everything is final in Kotlin by default (classes, methods, etc.), which is good as it forces immutability and means fewer bugs. In most cases you use interfaces, which you can easily mock, and in integration and functional tests you're likely to use real classes, so even then final is not an obstacle. For Mockito there's the Mockito-Kotlin library https://github.com/nhaarman/mockito-kotlin which provides helper functions.
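
A small sketch of what the helpers look like (UserRepository is a made-up interface; the package name is from the library's early releases):

import com.nhaarman.mockito_kotlin.mock
import com.nhaarman.mockito_kotlin.whenever
 
interface UserRepository {
    fun findName(id: Int): String?
}
 
fun example() {
    // mock() infers the type, no Mockito.mock(UserRepository::class.java) needed
    val repo: UserRepository = mock()
    whenever(repo.findName(1)).thenReturn("Alice")
}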

You can also do better than plain tests by using Spek, a specification framework for Kotlin. It allows you to easily define specifications in a clear, understandable, human readable way.

There are no static analyzers for Kotlin yet. Java has FindBugs, PMD, Checkstyle, SonarQube, Error Prone and FB Infer; Kotlin has kotlinc, and IntelliJ itself comes with a static analysis engine called the Inspector. FindBugs works with Kotlin, but it detects some issues that are already covered by the language itself and are impossible in Kotlin.

To use Kotlin or not?

After writing some classes in Kotlin and converting existing Java classes to Kotlin, I've found it makes the code leaner and easier to read, especially with data classes like DTOs. Less (boilerplate) code is better. You can call Java code from Kotlin, and Kotlin code can be used from Java rather smoothly as well, although there are some things to remember.
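
For example, a typical DTO collapses to a single line (Customer is a made-up example):

// Java would need fields, getters, equals, hashCode and toString;
// Kotlin's data class generates them all
data class Customer(val id: Int, val name: String, val email: String)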

So, to use Kotlin or not? It looks like a good statically typed alternative to Java if you want to expand your horizons. It's a pragmatic evolution of Java that respects the need for good Java integration, doesn't introduce anything terribly hard to understand and includes a whole bunch of features you might like. The downsides I've come across are that tooling support is kind of limited, meaning in practice only IntelliJ IDEA, and that the documentation isn't always up to date or updated as the language evolves, which is also an issue when searching for examples and solutions. But hey, everything is fun with Kotlin :)

Docker containers and using Alpine Linux for minimal base images

After using Docker for a while, you quickly realize that you spend a lot of time downloading and distributing images. This is not necessarily a bad thing for some, but those who scale their infrastructure are required to store a copy of every image that's running on each Docker host. One solution to make your images lean is to use Alpine Linux, a security-oriented, lightweight Linux distribution.

Lately I've been working on our Docker images for Java and Node.js microservices, and when our stack consists of over twenty services, one thing to consider is how we build our Docker images and what distributions to use. Building images upon Debian based distributions like Ubuntu works nicely but includes packages and services we don't need. That's why developers are aiming to create the thinnest most usable image possible, either by stripping conventional distributions or by using minimal distributions like Alpine Linux.

Choosing your Linux distribution

What's a good choice of Linux distribution to be used with Docker containers? There was a good discussion on Hacker News about small Docker images, which had good points in the comment section to consider when choosing a container operating system.

For some, size is a tiny concern, and far more important concerns are, for example:

  • All the packages in the base system are well maintained and updated with security fixes.
  • It’s still maintained a few years from now.
  • It handles all the special corner cases with Docker.

In the end the choice depends on your needs and how you want to run your services. Some like to use the quite large Phusion Ubuntu base image which is modified for Docker-friendliness, whereas others like to keep things simple and minimal with Alpine Linux.

Divide and conquer?

One question to ask yourself is: do you need a full operating system? If you dump an OS in a container you are treating it like a lightweight virtual machine, and that might be fine in some cases. If you however restrict it to exactly what you need and its runtime dependencies plus absolutely nothing more, then suddenly it's something else entirely: it's process isolation, or better yet, portable process isolation.

Another thing to think about is whether you should combine multiple processes in a single container. For example, if you care about logging you shouldn't use a logger daemon or logrotate in a container; you probably want to store logs externally, in a volume or mounted host directory. An SSH server in a container could be useful for diagnosing problems in production, but if you have to log in to a container running in production, you're doing something wrong (and there's docker exec anyway). And for cron, run it in a separate container and give it access to exactly the things your cronjob needs.

There are a couple of different schools of thought about how to use Docker containers: as a way to distribute and run a single process, or as a lighter form of virtual machine. It depends on what you're doing with Docker and how you manage your containers/applications. It makes sense to combine some services, but on the other hand you should still separate everything. It's preferable to isolate every single process and explicitly tell it how to communicate with other processes. That's sane from many perspectives: security, maintainability, flexibility and speed. But again, where you draw the line is almost always a personal, aesthetic choice. In my opinion it could make sense to combine nginx and php-fpm in a single container.

Minimal approach

Lately, there has been some movement towards minimal distributions like Alpine Linux, and it has got a lot of positive attention from the Docker community. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox, using a grsecurity/PaX patched Linux kernel and OpenRC as its init system. In its x86_64 ISO flavor it weighs in at 82 MB, and a container requires no more than 8 MB. Alpine provides a wealth of packages via its apk package manager. As it uses musl, you may run into some issues with environments expecting glibc-like behaviour (for example Kubernetes or compiling some npm modules), but for most use cases it should work just fine. And with minimal base images it's more convenient to divide your processes into many small containers.

Some advantages for using Alpine Linux are:

  • The speed at which the image is downloaded, installed and running on your Docker host.
  • Improved security, as the image has a smaller footprint and thus a smaller attack surface.
  • Faster migration between hosts, which is especially helpful in high availability and disaster recovery configurations.
  • Your system admin won’t complain as much, as you will use less disk space.

For my purposes, I need to run Spring Boot and Node.js applications in Docker containers, and they were easily switched from Debian-based images to Alpine Linux without any changes. There are official Docker images for OpenJDK/OpenJRE on Alpine and Dockerfiles for running Oracle Java on Alpine. Although there isn’t an official Node.js image built on Alpine, you can easily write your own Dockerfile or use community-provided files. While the official Java Docker image is 642 MB, Alpine Linux with OpenJDK 8 is 150 MB and with Oracle JDK 382 MB (which can be stripped down to 172 MB). The official Node.js image is 651 MB (or 211 MB for the slim variant), whereas with Alpine Linux it’s 36 MB. That’s quite a reduction in size.
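
If you want to verify the sizes yourself, pull a couple of variants and compare them locally (the tags below are examples and the exact sizes vary between versions):

# Pull Debian and Alpine based variants and compare their sizes
docker pull java:8
docker pull openjdk:8-jdk-alpine
docker pull node:6
docker pull node:6-slim
docker images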

Examples of using a minimal container based on Alpine Linux:

For Node.js:

FROM alpine:edge

# Pin the Node.js package version available in Alpine's repositories
ENV NODE_ALPINE_VERSION=6.2.0-r0

# Update the package index, upgrade the base packages and install Node.js
RUN apk update && apk upgrade \
    && apk add nodejs="$NODE_ALPINE_VERSION"

For Java applications with OpenJDK:

FROM alpine:edge
ENV LANG C.UTF-8

# Write a small helper script which prints the JDK/JRE home directory
RUN { \
      echo '#!/bin/sh'; \
      echo 'set -e'; \
      echo; \
      echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
   } > /usr/local/bin/docker-java-home \
   && chmod +x /usr/local/bin/docker-java-home

ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:$JAVA_HOME/bin
ENV JAVA_VERSION 8u92
ENV JAVA_ALPINE_VERSION 8.92.14-r0

# Install bash and a pinned OpenJDK 8 package, then verify that
# JAVA_HOME matches where the package actually installed Java
RUN set -x \
    && apk update && apk upgrade \
    && apk add --no-cache bash \
    && apk add --no-cache \
      openjdk8="$JAVA_ALPINE_VERSION" \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

If you want to read more about running services on Alpine Linux, check out Atlassian developer Nicola Paolucci’s nice article about his experiences of running Java apps on Alpine.

Go small or go home?

So, should you use Alpine Linux for running your application on Docker? As the official Docker images are also moving to Alpine Linux, switching seems to make perfect sense from both performance and security perspectives. And if you don’t want to take the leap from Debian or Ubuntu, or you want support from a downstream vendor, you should consider stripping unneeded files from your image to make it smaller.
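
As a rough sketch of that stripping approach (the package choice is just an example), cleaning apt caches, docs and man pages in the same layer keeps a Debian-based image smaller:

FROM debian:jessie

# Install only what is needed and remove apt caches, documentation and
# man pages in the same layer so the deleted files don't bloat the image
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-7-jre-headless \
    && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man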

Notes from Tampere goes Agile 2015

What could be a better way to spend a beautiful autumn Saturday than visiting Tampere goes Agile and being inspired beyond agile? Well, I can think of a couple of activities which beat waking up at 5:30 to catch a train to Tampere, but attending a conference and listening to thought-provoking presentations is always refreshing. So, what did they tell about being “Inspired beyond agile” at Tampere goes Agile 2015?

Tampere goes Agile 2015

Tampere goes Agile is a free-to-attend event about agile, and this year the theme of the conference was “Inspired beyond agile”. The event was held at Sokos Hotel Ilves and there were roughly 140 attendees. Agile as a topic isn’t that interesting anymore as its practices are widely in use, so the event went past agile and concentrated on “being agile”: how the organizational level and our mindsets have to change to make agile work. Waterfall mindset eats your agile culture for breakfast, and that’s the problem many presentations addressed.

Juho Vepsäläinen also wrote a great blog post with afterthoughts of Tampere Goes Agile 2015. It was nice to read a recap of the sessions held in the other room than the one I was in.

Keynote: After Agile

The event started with a keynote by Bob Marshall, who asked what comes after agile. He introduced the concept of right shifting, where the core idea is that a large number of organizations are underperforming. We’re always more or less prisoners of our mindset and existing ways.

“It’s not enough to do your best; you must know what to do, and then do your best.” – W. Edwards Deming

Marshall showed his right shifting organizational effectiveness chart, where the mean is around 1 (on a 0 to 5 scale) and organizations using agile sit around 1.25 to 2. So, what’s beyond that? Agile thinking isn’t getting us there. What differentiates the organizations in the chart is their mindset: ad hoc, analytic, synergistic and finally chaordic.

The Marshall model:

The Marshall model

In order to improve the effectiveness and efficiency of our organizations, we need to be able to imagine better ones. The question is, what does an ideal organization look like? What kind of society would we build if the current one was wiped out and we started from a clean slate? We should look at the organization as a whole, and what Marshall suggests is to use therapy to understand organizational health and to change the mindset of the organization to one that’s more conducive to high performance.

The ideal model for an IT company is built around people, relationships between people, collective mindset, cognitive function and motivation. And it’s good to remember the difference between effectiveness and efficiency: doing the right thing versus doing the thing right.

Doctor, please fix my Agile!

Ville Törmälä talked about how, while we have seen changes at the method level, organizations still mostly function the same way as before. Many have tried to become more agile, but without much success, as there’s a waterfall way of doing everything.

Törmälä presented his definition of agile: 1) make the work better, 2) make the work work better, 3) make lives better. But the waterfall mindset eats our agile culture for breakfast, so it’s about time to broaden our thinking about what really constitutes long-term success in organizations doing any kind of knowledge work. Agile gives you tools and ideas, but organizations can’t change or improve by “doing agile” better. If you fail with one agile “method”, you’ll probably fail with the rest of them. It’s a systemic problem. It’s all built deep into the thinking and structures of the organizations. That is the challenge.

“Every system is perfectly designed to achieve the results it gets.” We should change from “project thinking” to “stable teams thinking”, and change the power and influence structure from managing people to empowering people and further to liberating people.

One way of doing this is to use KBIs, Key Behaviour Indicators: write down examples of behaviour you want to see, think about in what kind of environment that behaviour is possible or could happen, then create that environment and write down concrete actions.

“The supreme art of agile is to subdue the waterfall thinking without fighting” – Sun Tzu, The Art of War

In summary, we need to look beyond methods and practices. Organizations change by changing how they think, and become better by understanding better how work works, how to create value and how to learn. We have to work with the system, aiming to understand and affect its thinking.

Pairing is sharing

Pair programming is a core agile technical practice, but many people are still reluctant to pair. Maaret Pyhäjärvi talked about deliberately practicing the skill of pairing so that it can take one’s skills in other activities to a new level. Pyhäjärvi shared her different stages of pairing and the lessons she has picked up as a testing specialist.

Again, pairing is also about mindset, and effective pairing is far from trivial – but it is a skill that can be practiced. Pyhäjärvi talked about growth patterns: from pairing with peers to pairing and mobbing with developers, from traditional style and side-by-side work to strong-style pairing, and to pairing on both testing and programming activities.

Mindset: fixed <> growth

Listeners also got to test a specific style of pairing, strong-style pairing, where for an idea to go from your head to the computer it must go through someone else’s hands. You really need to think the steps through for the other person to manage the given task.

One point presented about pairing was that you must unlearn ownership of ideas and contributions: co-creation vs. collaboration.

Pyhäjärvi also said that selling pairing to a team (of introverted programmers) is hard, but Mob Programming has been their gateway to pair programming, as it feels safer. You can read more about it in the Mob Programming guide book.

Beyond Continuous Deployment: Documentation Pipeline

Before lunch there was also a nice lightning talk about a documentation pipeline by Antti Virtanen. He shared lessons learned from creating a documentation pipeline for continuous deployment with Jenkins and other open source tools. His slides are available on SlideShare.

  1. ???
  2. Continuous Delivery DevOps magic
  3. ???
  4. Profit

The DevOps magic with Jenkins was more or less standard practice: it was configured to generate documentation from the database schema, JavaDocs, test coverage reports, performance test results and the API specification. It reminded me of all the work I should introduce to our continuous integration.

Three standard tricks were presented:

  • Jenkins is the Swiss army knife.
  • Database documentation in database metadata and generating ER diagrams with SchemaSpy (see the sketch after this list).
  • API documentation with Swagger.
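
As a sketch of the SchemaSpy trick (the connection details and paths here are hypothetical), a Jenkins job could generate the schema documentation and ER diagrams with something like:

#!/usr/bin/env bash

# Generate HTML schema documentation and ER diagrams with SchemaSpy;
# adjust the database type, credentials, JDBC driver and output directory
java -jar schemaspy.jar \
    -t pgsql \
    -db mydb \
    -host localhost -port 5432 \
    -u dbuser -p dbpass \
    -dp ./postgresql-jdbc.jar \
    -o ./docs/schema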

When quality is just a cost: Useful approaches to testing

Testing is also an important part of successful projects, so Jani Grönman talked about useful approaches to testing and software quality.

“Software quality is measured by your customer success, not development project metrics and quality processes.”

Grönman approached the topic through surprisingly common attitudes towards testing and quality:

  • “Quality is just a cost and like other costs, it should be avoided or minimized.”
  • “Testing is just another buffer in the project’s budget.”
  • “Testers are not skilled labor; it’s enough if they can read and write.”
  • “What automation? They can quickly click through the app, can’t they?”

And as you know, this is all wrong. It’s true that testing is expensive, but so is development. Can you afford not to test? You should think of it as an investment. The presentation went through the reasons and motivations behind the various attitudes and explored differences in views and how to best tackle them using the right technology and approach. He also talked about the schools of testing: analytical, standard, quality, context-driven and agile.

Overall you should know that testing is a skilled activity and part of development. Testing provides information to the project, and you should use a mix of techniques, like exploratory testing and automation, and think about what kind of testing would be most effective right now. You need to choose the right set of QA tools for the job. One size fits no one.

DevOps: Boosting the agile way of working

DevOps has been quite the buzzword for some time, so it was interesting to hear what Timo Stordell had to say about how DevOps boosts the agile way of working. In short, DevOps isn’t anything revolutionary; it should be seen as an incremental way to improve our development practices. And speaking of revolution, Stordell’s slides had a nice Soviet theme.

The presentation was more or less what you would expect from a topic covering DevOps, but it had a nice touch to it. In short: small bangs over a big bang, requirements management meets acceptance testing, standardize development environments, and monitor to understand what to develop.

Stordell had a nice demo of how they perform acceptance testing using physical devices and automation: they have built a rig with a CNC mill run by a Raspberry Pi to test a payment system.

For those interested in the DevOps movement and everything around it, there’s the DevOps Finland meetup group. You can also download Eficode’s DevOps Quick Guide to read more about it.

Keynote: Beyond projects

The event’s final keynote was by Allan Kelly, who spoke about #noprojects: why projects are wrong and what to do instead. The main point of the keynote was that the project model doesn’t match software development, and he outlined an alternative to the project model and what companies need to do to achieve it. The presentation slides are available on SlideShare. Kelly has also written a book about team-centric agile software development, Xanpan, which combines Kanban and XP.

Going beyond projects is an interesting idea, as everything we do is somehow tied to doing things in projects. So, what’s wrong with projects? Projects are temporary, whereas software is forever. Projects have end dates, which goes against the defining feature of successful software: it doesn’t end. Software which is useful is used and demands change; stop changing it and you kill it. At worst the project metaphor leads to dead software, higher costs and missed business opportunities.

We should think of projects more like a continuous flow, where success isn’t determined by staying on schedule and on budget with the planned quality. We should concentrate on the value delivered and put value in flexibility as requirements change, which goes against the fixed nature of projects. Also, after a project you often break up a functioning team and start all over again. We should put the emphasis on teams: treat the team as a unit and push work through it.

The other thing is that software is not milk: it’s cheapest in small packages, not in big cartons. Software development has no economies of scale. Big projects are a risk, so think small and make regular deliveries, which increases ROI. Fail fast, fail cheap. Quite basic agile thinking.

So, beyond projects: waterfall 2.0, continuous flow

Continuous flow of waterfall

Now we have #noestimates, #nomanagement and #noprojects. Profit?

Summary

It was my first time visiting Tampere goes Agile and it was a nice conference. The topics provided something to think about beyond the usual agile thinking. You could clearly see the theme “Inspired beyond agile” running through the different presentations, and the emphasis was on changing our mindsets.

Going beyond agile isn’t easy as it’s more about thinking than tools. Old habits die hard and changing the waterfall way of thinking isn’t trivial. We should start with understanding our organization’s health and changing the mindset of the organization to one that’s more conducive for high performance. Switch from “project thinking” to “stable teams thinking” and change the power and influence structure from managing people to empowering people and further to liberating people.

The after party was at Ruby & Fellas, but after the early morning and a couple of nice beers it was time to take the train back home. Before that, though, I had to visit the Moro Sky Bar with its nice view over Tampere.

Tampere from Moro Sky Bar

Newsletters for software developers

Software development is one of the professions where you just have to keep your knowledge up to date and follow what happens in the field. But information overload is easily achieved, so it’s beneficial to use, for example, curated newsletters for the subjects which intersect the stack you’re using and the topics you’re interested in. Here is my selection of newsletters for software developers, covering topics like web and mobile development, user experience, design and general topics. For more newsletters for developers you can check what, for example, DZone wrote.

The power of a newsletter lies in the fact that it can deliver condensed and digestible content, which is harder to achieve with other good news sources like feed subscriptions and Twitter. A well-curated newsletter for a targeted audience is a pleasure to read, and even if you forget to check your newsletter folder, you can always get back to the issues later :)

General

Hacker Newsletter
Weekly newsletter of the best articles in Hacker News.

Status code
A language-agnostic roundup of the latest ideas, releases, trends, events and must-read articles from the programming world (think C, UNIX, algorithms, editors, protocols).

Mobile development

iOS Dev Weekly
Hand-picked round-up of the best iOS development links, published every Friday.

This Week In Swift
List of the best Swift resources of the week.

iOS Dev nuggets
A short iOS app development nugget every Friday/Saturday – usually something you can read in a few minutes to improve your iOS app development skills.

In depth Mac and iOS articles archives

Java

Java Web Weekly by Baeldung
Once-weekly email roundup of Java Web curated news by Eugen Baeldung.

The Java Specialists’ Newsletter
A monthly newsletter exploring the intricacies and depths of Java, curated by Dr. Heinz Kabutz.

Java Performance Tuning News
A monthly newsletter focusing on Java performance issues, including the latest tips, articles, and news about Java Performance. Curated by Jack Shirazi and Kirk Pepperdine.

Database

DB Weekly
A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.

HTML and CSS

HTML5Weekly
Weekly HTML5 and Web Platform technology roundup. Curated by Peter Cooper.

CSS Weekly
Roundup of CSS articles, tutorials, experiments and tools. Curated by Zoran Jambor.

Web development

Web Development Reading List
Weekly roundup of web development–related sources, selected by Anselm Hannemann.

Versioning
SitePoint’s daily newsletter, which features the latest web development news.

Hacking UI
Newsletter for designers, front-end developers and product managers.

Scott Hanselman
Includes interesting and useful stuff Scott has found over the last few weeks and other wonderful things.

The Modern Web Observer
Biweekly email newsletter about current issues and trends in front-end web development. It is much like a commentary on the important current news and articles related to front-end development.

Web Design Weekly
Links to the best news and articles to hit the interweb during the week.

MergeLinks
Weekly email of curated links to articles, resources, freebies and inspiration for web designers and developers.

For front-end developers there’s the “How to keep up to date on Front-End Technologies” page, which lists newsletters, blogs and people to follow.

JavaScript

JavaScript Weekly
Weekly e-mail round-up of JavaScript news and articles. Curated by Peter Cooper.

A Drip of JavaScript
“One quick JavaScript tip”, delivered every other Tuesday and written by Joshua Clanton.

SuperHero.js
Collection of the best articles, videos, and presentations on creating, testing, and maintaining a JavaScript code base.

Node Weekly
Once–weekly e-mail round-up of Node.js news and articles.

User experience and design

UX weekly
Five links each week with the best UX writing, process, analysis, and critique from around the web. Its content lies at the intersection of user experience design, game design, and tech industry critique.

GoodUI
Monthly newsletter where the author will share ideas on how to improve customer conversion and ease of use.

Sidebar.io
Satisfies your web aesthetics with a list of the five best design links of the day. The content is manually curated by a couple of great editors.

Userfocus
Updates you monthly about the happenings in the UX/usability arena.

UX Design Weekly
Best user experience design links every week, published every Friday.

Ops

DevOps Weekly
Weekly slice of devops news.

Web Operations Weekly
Weekly newsletter on Web operations, infrastructure, performance, and tooling, from the browser down to the metal.

Microservice Weekly
Weekly newsletter of articles regarding microservices.