Build Monitor with Raspberry Pi and Touch Screen

Information is a great tool in software development and it’s useful to have easy access to it. The more obvious you make your problems, the harder they are to ignore. The more attention they get, the quicker they get solved. One thing developers like to monitor in software development is continuous integration status and metrics from running services. And what better way to achieve visibility and visualize those metrics than building an information radiator.

I didn’t want to reinvent the wheel so I got a Raspberry Pi 3 Model B with accessories and a 7″ touch screen to base my project on. Using a Raspberry Pi as an information radiator isn’t a new idea and the Internet is full of examples of different adaptations with screens, lights, bells and whistles. For a start we just visualized our Jenkins builds and a Grafana dashboard, but later on we will probably build a custom dashboard.

Setting up the base

The information radiator is easy to get running as you only need a computer, preferably one running Linux. You can use an old laptop and attach it to an external screen, or if you’re like me and want to tinker, you can get e.g. a Raspberry Pi 3 and couple it with a small external screen for portability. A nice, low-cost solution which gets you some hacker value. I got the Rpi from our local hardware store and unfortunately the Model B+ was released on that very same day. The extra 15% of power, 5 GHz WiFi and less heat and throttling would’ve been nice.

Raspberry Pi 3 Model B and accessories

I got the Raspberry Pi starter package with the official case, power supply, HDMI cable and a MicroSD card preloaded with NOOBS. So I just needed to connect the cables, put the SD card in and click to install Raspbian. Another interesting operating system would’ve been FedBerry, which provides Fedora ‘Minimal, XFCE and LXQt’ remixes.

For the screen I used a 7″ IPS 5-point touch screen for Raspberry Pi with 1024×600 resolution and HDMI, from joy-it, codenamed RB-LCD-7-2. Initially I thought I could install the whole system with this display, but as it turned out the Rpi doesn’t understand it out of the box. It just showed white noise and interference. Luckily someone had already solved this and I got the right config after I had installed Raspbian with a real monitor.

Joy-it touch screen with default settings

Edit your /boot/config.txt:

# uncomment to force a specific HDMI mode (this will force VGA)
hdmi_group=2
hdmi_mode=87
 
# Add line:
hdmi_cvt=1024 600 60 3 0 0 0

And reboot your Raspberry Pi after those changes.

You should also run $ sudo raspi-config to set up, for example, the WiFi country (to allow channels 12 and 13) and your current timezone.
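
Recent raspi-config versions also have a non-interactive mode, so you can script these settings instead of clicking through the menus. A minimal sketch, assuming your raspi-config build ships the nonint functions and using Finland and Helsinki as example values:

$ sudo raspi-config nonint do_wifi_country FI
$ sudo raspi-config nonint do_change_timezone Europe/Helsinki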

I also updated Raspbian, which bumps it to the rpi-4.14.y Linux tree:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo rpi-update

To connect to the Rpi with SSH, enable it with raspi-config > Interfacing Options or just:

$ sudo systemctl enable ssh
$ sudo systemctl start ssh

Note that by default the user pi has the password raspberry. You should change it, but if you just want to remove the nagging about the default password, do the following:

$ sudo apt-get -y purge pprompt
$ sudo rm /etc/profile.d/sshpwd.sh
$ sudo rm /etc/xdg/lxsession/LXDE-pi/sshpwd.sh

Problems with WiFi connection

I set up the Raspberry Pi at our local office and at home and there were no problems with the WiFi connection. But when I brought it to the customer premises the WiFi connection was weak and data practically didn’t move. My MacBook worked fine, but it was connected to a 5 GHz network which isn’t an option with my Rpi 3 Model B. The WiFi on the Rpi 3 was using channel 11 on 802.11i with WPA2, as shown by iwlist wlan0 scan.
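
To check yourself what the Rpi’s radio sees, you can scan and filter for the interesting fields. Something along these lines (the exact field names depend on your wireless-tools version):

$ sudo iwlist wlan0 scan | egrep 'ESSID|Channel|Quality|Encryption'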

There is a thread on the Raspberry Pi forum about “Very poor wifi performance” which suggests setting up the WiFi country correctly to allow channels 12 and 13. At one point the issue was that only channels 1-11 were available on the Rpi 3, but checking out the ‘next’ branch of the firmware/kernel (sudo BRANCH=next rpi-update) apparently fixed channels 12/13. I was on kernel 4.9.80 so it wasn’t a problem for me. The other suggested culprit is an Atheros chipset based router which doesn’t like the Broadcom WiFi on the Rpi 3.

For some, disabling power management solves the connection issues. For the Rpi’s built-in Broadcom (Cypress) WiFi there’s no control for power management and it’s disabled by the kernel; iw / iwlist / iwconfig just misleadingly report “Power Management:on”.

Nevertheless, switching it off made my WiFi connection better, although the signal strength of course didn’t change.

$ sudo iwconfig wlan0 power off

To make it permanent you can add a small if-up script (using tee, since the output redirection of a plain sudo echo would happen in your non-root shell):

$ sudo touch /etc/network/if-up.d/wlan0
$ sudo chmod +x /etc/network/if-up.d/wlan0
$ echo -e '#!/bin/bash\niwconfig wlan0 power off' | sudo tee /etc/network/if-up.d/wlan0
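
After a reboot you can verify that the setting stuck, keeping in mind the reporting quirk mentioned above:

$ iwconfig wlan0 | grep -i 'power management'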

Accessing Raspberry Pi remotely

The information radiator is usually connected to a TV with no keyboard or mouse attached, so accessing it remotely is useful. You can use x11vnc, which allows you to VNC into a headless Pi with a VNC client like Apple Remote Desktop, RealVNC’s vncviewer or Homebrew’s tiger-vnc.

$ sudo apt-get install ttf-mscorefonts-installer
$ sudo apt-get install x11vnc

To start x11vnc automatically create new or edit existing ~/.xsessionrc file:

$ cat ~/.xsessionrc
/usr/bin/x11vnc -noxrecord -noxdamage -forever -bg -rfbport 5900
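
From another machine you can then connect to the Pi’s display. A hedged example, assuming the Pi answers to the default hostname raspberrypi.local and you’d rather tunnel VNC over SSH than expose port 5900 to the network:

$ ssh -L 5900:localhost:5900 pi@raspberrypi.local
# then point your VNC client at localhost:5900, e.g. on macOS:
$ open vnc://localhost:5900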

Getting interesting things on the screen

To test our setup and quickly show some data I just added a Build Monitor view in Jenkins and another view with the Dashboard view. I also configured the Rpi to automatically start the Chromium browser in kiosk mode after reboots and directed it to the Jenkins website, so there would be no need for interaction to get things on the screen. To show several sources of data and get things running quickly without a customized information radiator we used the Revolver – Tabs Chromium extension to rotate between multiple browser tabs: one showed the Jenkins Build Monitor, another a Grafana dashboard and a third a Twitter feed.

To automatically start chromium-browser after the Raspbian desktop starts, edit the following lxsession file:

$ mkdir -p /home/pi/.config/lxsession/LXDE-pi  # the user-level config directory may not exist yet
$ cp /etc/xdg/lxsession/LXDE-pi/autostart /home/pi/.config/lxsession/LXDE-pi/autostart
$ vim /home/pi/.config/lxsession/LXDE-pi/autostart
 
#@xscreensaver -no-splash  # comment this line out to disable screensaver
# Disable Xsession from blanking
@xset s off
@xset -dpms
@xset s noblank
 
@sh ./autostart.sh
# load chromium after boot and point to the localhost webserver in full screen mode
@chromium-browser --kiosk --no-default-browser-check --no-first-run --disable-infobars "http://localhost/"

Chromium has a feature to show a “Restore pages” nagging popup when it wasn’t gracefully shut down, and you can try the following Stack Overflow suggestion. Another suggestion was doing “chmod 001 ~/.config/chromium/Default/Preferences”, but that results in yet another nagging window.

$ cat ./autostart.sh
#!/bin/sh
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' ~/.config/chromium/'Local State'
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/; s/"exit_type":"[^"]\+"/"exit_type":"Normal"/' ~/.config/chromium/Default/Preferences

You could use the “--restore-last-session” or “--incognito” parameter, which also works, but incognito has several disadvantages, such as disabling the cache and login information. Or maybe I should just use Firefox.

Raspberry Pi and Jenkins Build Monitor

It might also be useful to set Chromium to restart every night. When running Chromium for longer periods it may fill the Rpi’s memory with garbage, after which the device must be hard rebooted.
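
One blunt way to do this, assuming a nightly reboot is acceptable for a radiator nobody uses at night, is a root cron entry:

$ sudo crontab -e
 
# Reboot nightly at 04:00 to clear Chromium's memory usage
0 4 * * * /sbin/reboot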

Turning the monitor on and off automatically

When running the Rpi as a wall monitor it’s useful to save energy and extend the life of your monitor by turning it on and off on a daily schedule. You can do this by running a cron script. Get this script, put it in /home/pi/rpi-hdmi.sh and make it executable: chmod +x /home/pi/rpi-hdmi.sh. Call the script at the desired time with a cron entry:

$ crontab -e
 
# Turn HDMI off (17:00)
0 17 * * * /home/pi/rpi-hdmi.sh off
 
# Turn HDMI on (7:30)
30 7 * * * /home/pi/rpi-hdmi.sh on

If you have problems with the above script, try the original version, but change “curr_vt=`fgconsole`” to “curr_vt=`sudo fgconsole`” as fgconsole needs sudo privileges; otherwise you get the error “Couldn’t get a file descriptor referring to the console”.

From simple dashboard to real information radiator

Showing just a Jenkins Build Monitor or Grafana dashboards is simple, but to get more out of the radiator you could show things like the success rate of the builds, build health, the latest open pull request and the project’s Twitter messages. One nice example of an information radiator is Panic’s Status Board.

There are different ways to create a customizable dashboard and one way is to use Dashing, which is a dashboard framework. To get a head start you can look at Project Dashboard (https://github.com/martin-naumann/project-cockpit) which shows “build health” for the latest build, the latest open pull request in GitHub, the success rate of your builds, some free-form text and your project’s or company’s logo. It uses the Jenkins API to get the ratio of successful / non-successful builds as well as the latest build state. It also calls the GitHub API to get the latest pull request to display the name and picture of the author and the title of the pull request on the dashboard.
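
If you want to pull the same build data for your own dashboard, the Jenkins JSON API gets you quite far. A small sketch, using jq and assuming a Jenkins instance at jenkins.example.com, that lists job names with their status colors:

$ curl -s 'http://jenkins.example.com/api/json?tree=jobs[name,color]' | \
    jq -r '.jobs[] | "\(.name): \(.color)"'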

For more leisurely use you can set up the Raspberry Pi as a wall display to show information like a calendar, weather, photos and RSS feeds. One option is Dakboard, a web interface used to display information, which is quite configurable with different services. At first Dakboard seems nice, but it is quite limited in what data it can show and some useful features are premium. Another open source option is MagicMirror², which seems to be more modular and extensible (as you can create your own modules) but needs more tinkering.

Monthly notes 28

Winter refuses to make way for spring and March in Southern Finland has been quite cold, despite a warm and rainy week which melted away some of the already scarce snow. So, skiing mainly on artificial snow and mountain biking on icy paths, which is nice. But this also leaves time to read what has happened in the software development field. This month it’s about tools and working methods.

Issue 28: 17.3.2018

Tools of the trade

Must have extensions for VS Code (according to me)
tl;dr; Auto Import, Beautify, Clipboard History, Git History, Code Runner, Docker, Material Icon Theme, Path Intellisense. (from @ThePracticalDev)

Reclaim your abandonware
Super cool post about how to get the abandoned mac Twitter client to support 280 character tweets by modifying its assembly. (from @b0rk)

Keep calm and code on: Productivity tools for developers
Suggestions of tools for developers for different tasks. I didn’t agree with all of them, so my suggestions are in brackets. tl;dr; 1. actiTIME (or toggl) 2. Cold Turkey (or other pomodoro) 3. Strict Workflow 4. Habitica 5. Oh My Zsh (or Bash-it) 6. The Silver Searcher 7. UltraEdit (or Atom, VS Code etc.) 8. Homebrew 9. GitHub Changelog Generator. (from @ThePracticalDev)

Git aliases
If you use the Git command line a lot, you will probably grow your own list of Git aliases sooner or later. After simple standard aliases (ci -> commit, co -> checkout) you might want to see some advanced tricks you may find useful.

CTFR
Get subdomains of an HTTPS website abusing Certificate Transparency logs. (from @KitPloit). Apparently also curl "https://crt.sh/?q=%.starbucks.com&output=json" -sS | jq .name_value | uniq | tee output works.

Front-end

Front-End Performance Checklist 2018
Unbiased and objective front-end performance checklist for 2018 — an overview of the issues you might need to consider to ensure that your response times are fast, user interaction is smooth and your sites don’t drain user’s bandwidth. (from @igrigorik)

Working methods

Tim Ottinger: what once was thought impossibility is now commonplace in software development
TDD, pairing, mobbing, evolutionary design, self organizing, lean startup, commenting code, interpreted languages, beta, noestimates. “You have to ask, what impossible thing is going to be done next? We change how we think, and new vistas open up.”

Branching Is Easy. So? Git-flow Is Not Agile.
I wrote this blog post ages ago and I still stand firmly behind it: trunk-based development 4ever. (from @skamille)

Getting Things Done – A Programmer Productivity Guide
“Everybody has some sort of system—even not having a system and trying to remember everything is technically a system. I wanted to share mine because it seems to work pretty well.” (from @ThePracticalDev)

Herding cats is easy compared to managing developers (video)
A short and sharp 10 minute guide to managing developers by Dom Millar at NDC Conference Sydney 2017.

Something different

10 x weekend brunches in Helsinki
Il Birricifio, Ipi Kulmakuppila, Sandro, Gastro Café Kallio, Yes Yes Yes!, Loop, Moko Market, Sue Ellen, Paulig Kulma and Krog Roba. My addition to the list is Rupla. (from @VisitHelsinki)

Monthly notes 27

For cold winter evenings here’s something to read. Monthly notes for February are about relearning and thinking.

Issue 27: 23.2.2018

Relearning

Computer Science and why it’s necessary even for web developers
“I know that in some countries a degree in CS is expensive or unattainable, and that some companies do unnecessary algorithm interviews. This thread is not about degrees or interviews, it’s about CS itself.”

Free Intro to Web Development slides (with demos)
Slides of the Web Dev Intro labs for the “6.813 User Interface Design and Implementation” at MIT
(from Twitter)

The Four Rules of Simple Design (in order of importance)

  • Passes the tests
  • Reveals intention
  • No duplication
  • Fewest elements

And, yes, “fewest elements” is last, which means you only minimize classes and methods if everything else is satisfied.

Tools

sshuttle
When OpenSSH port forwarding doesn’t cut it, use sshuttle: “Transparent proxy meets VPN meets ssh.”

Microservices

The Death of Microservice Madness in 2018
There are many cases where great efforts have been made to adopt microservice patterns without necessarily understanding how the costs and benefits will apply to the specifics of the problem at hand. The post describes in detail what microservices are, why the pattern is so appealing, and also some of the key challenges that they present.

Should that be a Microservice? Keep These Six Factors in Mind
These days, you can’t swing a dry erase marker without hitting someone talking about microservices but few have spent any appreciable time asking if a given application should be a microservice. tl;dr; “1. Multiple Rates of Change; 2. Independent Life Cycles; 3. Independent Scalability; 4. Isolated Failure; 5. Simplify Interactions with External Dependencies; 6. The Freedom to Choose the Right Tech for the Job”.

JavaScript

A Guide to Web Performance Optimization with Webpack
This guide walks through how to effectively optimize site resources using webpack. This can help users load and interact with your sites more quickly. (from JavaScript Weekly 373)

Security

face-verify.js: Monitoring who is physically looking at a website for additional security
Demo project showing how Machine Box tech can be integrated into JavaScript applications. Facebox takes an image and tells you how many faces it sees, as well as who those faces belong to provided you have shown it a single example previously. You can use this capability to build additional security into web apps so you can see how many people are watching the screen and who they are. Using the webcam with some JavaScript and Facebox, you can periodically check to ensure only authorised people can see the information that users consider sensitive.

Mac Privacy: Sandboxed Mac apps can record your screen at any time without you knowing
TL;DR Any Mac app can take screenshots of your Mac silently, and use basic OCR software to read all text on the screen. (from Weekend Reading)

To think about

Nick Stenning on Twitter
“Flat organisational structures do not exist. There are only organisations with visible structure and organisations with invisible structure”. (from Weekend Reading)

Developers On Call
Quite self-explanatory ideas for how to manage on-call rotations without burn out but maybe it’s not always that way. The linked Twitter thread is worth reading. (from Weekend Reading)

Something different

2017: The Year in Charts
These are the charts and themes that tell the story of 2017. I. The Year Volatility Died; II. Records Are Made to Be Broken; III. The World is Flattening; IV. Still Easy After All These Years; V. A Good Old-Fashioned Mania; VI. King Dollar Dethroned; VII. Wrapping Up: 1991-99 Redux?

Extracting JSON value from command line with jq and Python

When developing modern web applications you often end up checking REST API responses and parsing JSON values. You can do it with a combination of Unix tools like sed, cut and awk, but if you’re allowed to install extra tools or use Python then things get easier. This post shows you a couple of options for extracting JSON values with Unix tools.

There are a number of tools specifically designed for the purpose of manipulating JSON from the command line, and they will be a lot easier and more reliable than doing it with awk. One of those tools is jq, as shown on Stack Overflow. You can install it on macOS from Homebrew: brew install jq.

$ curl -s 'https://api.github.com/users/walokra' | jq -r '.name'

If you’re limited to tools that are likely installed on your system such as Python, using the json module gives you the benefit of a proper JSON parser and avoiding any extra dependencies.

Python:

$ curl -s 'https://api.github.com/users/walokra' | \
    python -c "import sys, json; print(json.load(sys.stdin)['name'])"

The Stack Overflow answers to the question “Parsing JSON with Unix tools” show you other options with standard tools like sed, cut and awk, and more exotic options with Perl, Node.js and PHP.
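
As an illustration of the standard-tools route, a regex one-liner can do a quick-and-dirty extraction, though unlike jq it breaks on escaped quotes or reordered fields. A sketch using GNU grep’s Perl mode (the -P flag isn’t available in the BSD grep that ships with macOS):

$ curl -s 'https://api.github.com/users/walokra' | grep -oP '"name":\s*"\K[^"]*'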

Monthly notes 26

January finally brought snow to Southern Finland too, and darkness is retreating slowly as the days become longer. This time the monthly notes tell you about different JavaScript frameworks and making webpack perform better, and look into bootstrapping microservices and running Docker securely. On the programming side there are articles about best practices with Kotlin and about the Kotlin stdlib. If you haven’t stumbled upon Kotlin, it’s good to check it out as it’s a nice language for building services targeting the Java Virtual Machine.

Issue 26: 23.1.2018

Web development

An Extensive Guide to JS Frameworks
The world is full of JavaScript frameworks and this roundup post goes through 52 of them, touching on their pros, cons, and distinctive features. (from JavaScript Weekly 369)

2017’s JavaScript Rising Stars
A look at what JS projects were hot or not in 2017 based on their GitHub star counts. (from JavaScript Weekly 369)

Keep webpack Fast: A Guide for Better Build Perf
webpack is a great tool for bundling frontend assets but it’s worth knowing what to do when it starts to get bogged down. (from JavaScript Weekly 369)

Short

webpack: Plugin to Remove Unused Moment.js Locales

Microservices

Bootstrapping a microservice architecture (screencast)
Screencasts to present an open source bootstrap project to help you with your next microservice architecture using Java. The repository addresses some common challenges that everyone faces when starting with microservices.

Top tips to keep Docker running securely in production (video)
Gianluca Arbezzano gave important tips on setting up a production environment, immutability, and security concepts for Docker in his session at DevOpsCon 2017.

Kotlin

Idiomatic Kotlin. Best Practices.
“In order to take full advantage of Kotlin, we have to revisit some best practices we got used to in Java. Many of them can be replaced with better alternatives that are provided by Kotlin.”

Make your life easier with Kotlin stdlib
“Kotlin is not about big killer features but about a bunch of small improvements that have deep impact. Most of them are not built-in into the language, but are functions offered as part of the Kotlin standard library.” The post goes through a limited set of them, and describes how they can be used to improve the code.

Something different

The best science fiction, fantasy, and horror novels of 2017
The Verge lists great books of 2017 in the science fiction, fantasy, and horror categories which shined a light in the darkness. You never know if a book is interesting just by reading its description, but these took my eye: Meg Howrey’s The Wanderers, Kameron Hurley’s The Stars are Legion, N.K. Jemisin’s Broken Earth trilogy, Zachary Mason’s Void Star, Joe M. McDermott’s The Fortress at the End of Time, Ian McDonald’s Luna: New Moon and Linda Nagata’s The Last Good Man.

2017 Retrospective

It’s January 2018 and while I’m gathering my notes for the year’s first post it’s good to look back at what I wrote in 2017 and make plans for the new year. In 2017 I managed to write as leisurely as usual and put together 17 articles, of which 6 were something other than monthly notes. On average I wrote 1.4 posts per month. I visited some meetups, did software development and tested technology stuff. Business as usual, and I presume it’s going to continue this way this year as well.

Monthly notes

Writing the Monthly notes series about interesting articles I’ve come across has proved to be a good way to ensure that I keep reading about what happens in software development and also think about it. Collecting articles into a monthly post has worked better than publishing weekly. In July I was mostly mountain biking and away from the computer, so there were no Monthly notes.

Meetups

The meetup scene in Helsinki has grown and there are several interesting events you can attend almost monthly. That said, it’s also starting to get crowded and events with good topics tend to fill up quickly. I usually find myself going to events to hear war stories about Amazon Web Services, Docker, DevOps, frontend and mobile. It’s useful to hear how others do things and get new ideas. Meetups and conferences are also a nice way to both freshen your thinking and get to know people working in the same field.

In Nebula Tech Thursday – Beer & DevOps we heard stories about “Cloud Analytics – Providing Insight on Application Health and Performance” and “Building a Full Devops Pipeline with Open Source Tools”. OWASP Helsinki chapter meeting #31 presented topics like “DevSec – Developers are the key to security”, “Docker Security” and “Leaking credentials – a security malpractice more common than expected”. Both events were nice, and as usual Nebula Tech Thursday came with great food and drinks. If you follow me on Twitter you might have noticed that I went to more meetups than I wrote about, like Solita Core and Slush.D.

Software development as usual

Microservices and Docker have changed the way we do things, and to Dockerize all the things you can run Ansible inside a Docker container. You might ask why, and that’s easy to answer: to isolate all of the required dependencies from the host machine and to get the Ansible version we want.

To make software development more reliable I introduced Git pre-commit and pre-receive hooks for validating YAML into our continuous integration process. Validating YAML can be done using yamllint, and hooking it into pre-commit or pre-receive helps you automate the checks for syntax validity, for weirdnesses like key repetition and for cosmetic problems such as line length, trailing spaces and indentation.

Other things

As an engineer I’m interested in technology and gadgets and sometimes I get things to test. In July I wrote about keeping data secured with the iStorage datAshur Personal2 USB flash drive. It is a USB flash drive with a combination of hardware encryption, a physical keypad and tamper-proofing. Small external devices are easy to lose and can leave your data vulnerable if not encrypted. The hardware encrypted USB flash drive seemed to be quite crafty.

Awesome times ahead

New year, old me. Or something like that. The plan is to continue as before: write about technology, collect interesting articles, learn new things about software development and of course ride my mountain bike. Training for the Enduro racing season has already started.

So, stay tuned by subscribing to the RSS feed or follow me on Twitter. Check also my other blog in Finnish.

Monthly notes 25

December has gone fast and this time the monthly notes are more about pointers to tools and resources. Especially for accessibility, which is an important aspect of web development. If you don’t follow front-end development actively, check out the recap of its development in 2017. And to learn more about security it’s good to read the updated OWASP Top 10 list. Happy reading and happy holidays!

Issue 25, 20.12.2017

Web development

A recap of front-end development in 2017
tl;dr; PWA, yarn, serverless, vue.js, css-in-js, GraphQL, React Router 4, types in JavaScript.

Pointers for better accessibility

Inclusive Components
A blog about designing inclusive web interfaces, piece by piece. It aims to be a pattern library.

Web Accessibility In Mind
Resources for reading about Web accessibility.

Web Accessibility Checklist
A beginner’s guide to web accessibility.

aXe
Nice open-source tool for accessibility testing. Runs right in your web browser.

NonVisual Desktop Access
Developing for better accessibility is easier when you can test how end users “see” things. NVDA (NonVisual Desktop Access) is a free “screen reader” for Windows which enables blind and vision impaired people to use computers. It reads the text on the screen in a computerised voice.

Security

OWASP Top 10 – 2017
The Ten Most Critical Web Application Security Risks. Read the PDF.

Internet Chemotherapy
Internet Chemotherapy was a 13 month project between Nov 2016 – Dec 2017. It has been known under names such as ‘BrickerBot’, ‘bad firmware upgrade’.

Testing tools

Cypress
“Cypress is the new standard in front-end testing that every developer and QA engineer needs. No more Selenium. Lots more power.”

TestCafe
“A Node.js tool to automate end-to-end web testing. Write tests in JS or TypeScript, run them and view results.”

mountebank
Provides cross-platform, multi-protocol test doubles over the wire. Simply point your application under test to mountebank instead of the real dependency, and test like you would with traditional stubs and mocks.

Something different

The 10 Best Mountain Biking Videos of the Year

Git pre-commit and pre-receive hooks: validating YAML

Software development has many steps which you can automate, and one useful thing to automate is adding Git commit hooks to validate your commits to version control. Git hooks fire off custom client-side and server-side scripts when certain important actions occur. Validating committed files’ contents is important for syntax validity, and even more so when providing Spring Cloud Config configurations in YAML for microservices, as otherwise things fail.

Validating YAML can be done using yamllint and hooking it into pre-commit or pre-receive. It not only checks for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces and indentation. Here’s a short overview of getting started with yamllint on Git commit hooks.

Quickstart for yamllint

Installing yamllint

On Fedora / CentOS:
$ sudo dnf install yamllint
 
using pip, the Python package manager:
$ sudo pip install yamllint
 
or on macOS:
$ sudo -H python -m pip install yamllint

You can also install yamllint from sources when e.g. network connectivity is limited. The linter depends on pathspec >= 0.5.3 and pyyaml >= 3.12.
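
A sketch of such an offline install, assuming you have fetched the source tarballs on a connected machine beforehand (the exact file names depend on the versions you download):

$ pip install pathspec-0.5.3.tar.gz
$ pip install PyYAML-3.12.tar.gz
$ pip install yamllint-1.10.0.tar.gz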

Custom config

Yamllint is quite strict with validation and you might want to make it a bit more relaxed with custom configuration. For example, I need to allow long lines. You can also disable checks for a specific line with a comment, as shown after the example config below.

$ cat yamllint-config.yml
 
extends: default
 
rules:
  line-length: disable
  comments:
    require-starting-space: false
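
The line-level escape hatch mentioned above works with directive comments in the YAML itself. A small sketch (the key and value here are made up; the rule name after rule: is whichever check you want to silence):

$ cat app-config.yml
---
# silence a single rule just for this one line
endpoint_url: "http://very.long.example/url/that/would/trip/the/rule"  # yamllint disable-line rule:line-length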

Usage

$ yamllint file.yml other-file.yaml

Usage with custom config:

$ yamllint -c yamllint-config.yml .

Or with custom config without config file:

$ yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" file.yaml

Or a more specific case, like running yamllint in a Jenkins job’s workspace and validating files with a specific suffix:

$ find . -type f -iname '*.j2' -exec yamllint -s -c yamllint-config.yaml {} \;

Pre-commit hook and yamllint

A better way to use yamllint is to integrate it with e.g. Git’s pre-commit or pre-receive hooks. Adding yamllint to a pre-commit hook is easy with pre-commit, which is a framework for managing and maintaining multi-language pre-commit hooks.

Installing pre-commit:

Using pip:
$ pip install pre-commit
 
Or on macOS:
$ brew install pre-commit

To enable the yamllint pre-commit plugin you just add a file called .pre-commit-config.yaml to the root of your project and add the following snippet to it:

$ cat .pre-commit-config.yaml
---
- repo: https://github.com/adrienverge/yamllint.git
  sha: v1.10.0
  hooks:
    - id: yamllint

With custom config and strict mode:

$ cat .pre-commit-config.yaml
---
repos:
 - repo: https://github.com/adrienverge/yamllint.git
   sha: v1.10.0
   hooks:
     - id: yamllint
       args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']

You can also use repository-local hooks when e.g. it makes sense to distribute the hook scripts with the repository. Install yamllint locally and configure it in your project root directory’s .pre-commit-config.yaml as a repository-local hook. As you can see, I’m using a custom config for yamllint.

$ cat .pre-commit-config.yaml
---
- repo: local
  hooks: 
  - id: yamllint
    name: yamllint
    entry: yamllint -c yamllint-config.yml .
    language: python
    types: [file, yaml]

Note: If you’re linting files with a suffix other than yaml/yml, like Ansible template files with a .j2 suffix, then use types: [file].
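
Once the config is in place, wire the hook into your clone and give it a dry run over the whole repository:

$ pre-commit install
$ pre-commit run --all-files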

Pre-receive hook and yamllint

Using pre-commit hooks to process commits is easy, but often doing the checks on the server side with pre-receive hooks is better. Pre-receive hooks are useful for satisfying business rules, enforcing regulatory compliance and preventing certain common mistakes. Common use cases are to require commit messages to follow a specific pattern or format, lock a branch or repository by rejecting all pushes, prevent sensitive data from being added to the repository by blocking keywords, patterns or filetypes, and prevent a PR author from merging their own changes.

One example of a pre-receive hook is to run a linter like yamllint to ensure that a business-critical file is valid. In practice the hook works similarly to a pre-commit hook, but the files you check in to the repository are not kept there “just like that”. Some of them are stored as deltas to others, or their contents are compressed. There is no place where these files are guaranteed to exist in their “ready-to-consume” state. So you must jump through some extra hoops to get your files available for opening and running checks on.

There are different approaches to making the files available to the pre-receive hook’s script, as described on Stack Overflow. One way is to check out the files in a temporary location, or if you’re on Linux you can just point to /dev/stdin as the input file and put the files through a pipe. Both ways have the same principle: check the files modified between the new and the old revision and, if the files are present in the new revision, run the validation script with a custom config.

Using /dev/stdin trick in Linux:

#!/usr/bin/env bash
 
set -e
 
ENV_PYTHON='/usr/bin/python'
 
# Make sure the python and yamllint binaries exist before processing the push
if ! command -v "${ENV_PYTHON}" > /dev/null || ! command -v /usr/bin/yamllint > /dev/null; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# A pre-receive hook gets the pushed revisions on stdin,
# one line per ref: <old-value> <new-value> <ref-name>
while read oldrev newrev refname; do
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    git diff --name-only $oldrev $newrev | while read file; do
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | egrep '\.yml$' | awk '{ print $3 }'`
        # If it's not present, then continue to the next iteration
        if [ -z ${object} ]; 
        then 
            continue; 
        fi
 
        # Get file in commit and point /dev/stdin as input file 
        # and put the files through pipe for syntax validation
        echo $file
        git show $newrev:$file | /usr/bin/yamllint -d "{extends: relaxed, rules: {line-length: disable, comments: disable, trailing-spaces: disable, empty-lines: disable}}" /dev/stdin || exit 1
    done
done

Alternative way: copy the changed files to a temporary location

#!/usr/bin/env bash
 
set -e
 
EXIT_CODE=0
ENV_PYTHON='/usr/bin/python'
COMMAND='/usr/bin/yamllint'
TEMPDIR=`mktemp -d`
 
# Make sure the python and yamllint binaries exist before processing the push
if ! command -v "${ENV_PYTHON}" > /dev/null || ! command -v "${COMMAND}" > /dev/null; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# A pre-receive hook gets the pushed revisions on stdin,
# one line per ref: <old-value> <new-value> <ref-name>
while read oldrev newrev refname; do
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    files=`git diff --name-only ${oldrev} ${newrev}`
 
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Iterate over each of these files
    for file in ${files}; do
 
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | awk '{ print $3 }'`
 
        # If it's not present, then continue to the next iteration
        if [ -z ${object} ]; 
        then 
            continue; 
        fi
 
        # Otherwise, create all the necessary sub directories in the new temp directory
        mkdir -p "${TEMPDIR}/`dirname ${file}`" &>/dev/null
        # and output the object content into its original file name
        git cat-file blob ${object} > ${TEMPDIR}/${file}
 
    done;
done
 
# Now loop over each file in the temp dir to parse them for valid syntax
files_found=`find ${TEMPDIR} -name '*.yml'`
for fname in ${files_found}; do
    # run the check inside an if so that set -e doesn't abort the loop on failure
    if ! ${COMMAND} ${fname}; then
      echo "ERROR: parser failed on ${fname}"
      BAD_FILE=1
    fi
done;
 
rm -rf ${TEMPDIR} &> /dev/null
 
if [[ $BAD_FILE -eq 1 ]]
then
  exit 1
fi
 
exit 0

Testing a pre-receive hook locally is a bit more difficult than testing a pre-commit hook, as you need an environment where you have a remote repository to push to. Fortunately you can use the process described for GitHub Enterprise pre-receive hooks: you create a local Docker environment to act as a remote repository that can execute the pre-receive hook.
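
If you don’t want to set up Docker just for this, a plain bare repository on your own machine also works as a quick smoke test, since Git runs the hook on whatever you push to it. A sketch, assuming your hook script is saved as pre-receive in the current directory (note that the example scripts above don’t handle the very first push, where the old revision is all zeros):

$ git init --bare /tmp/hook-test.git
$ cp pre-receive /tmp/hook-test.git/hooks/pre-receive
$ chmod +x /tmp/hook-test.git/hooks/pre-receive
$ git clone /tmp/hook-test.git /tmp/hook-test-clone
$ cd /tmp/hook-test-clone
$ echo 'foo: [bar' > broken.yml && git add broken.yml
$ git commit -m 'test invalid yaml' && git push origin master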

Monthly notes 24

Rain, cold winds and darkness have arrived in Finland and there are many good reasons to stay at home with a warm mug of coffee and read. This month’s notes cover how you should optimize images, and how your eyes are telling lies and how to work around that in design. You also get pointers to security tools for Docker and to running Java apps with Docker and Kubernetes. And if you haven’t migrated to HTTPS, check out Troy Hunt’s happy path. Happy reading.

Issue 24, 28.11.2017

User Interface

Essential Image Optimization (ebook)
Image optimization should be automated. It’s easy to forget, best practices change, and content that doesn’t go through a build pipeline can easily slip. Addy Osmani’s eBook has the essential information you need to get started.

Optical Effects in User Interfaces (for True Nerds)
Making optically balanced icons, correct shape alignment, and perfect corner rounding when your eyes are telling lies. An interesting article about optical effects in user interfaces.

Microservices

Essential (and free) security tools for Docker
Docker makes it easy for developers to package up and push out application changes, and spin up run-time environments on their own. But this also means that they can make simple but dangerous mistakes that will leave the system unsafe without anyone noticing until it is too late. Fortunately, there are some good tools that can catch many of these problems early, as part of your build pipelines and run-time configuration checks. Jim Bird has put together a short list of the essential open source tools that are available today to help you secure your Docker environment.

Deploying Java Applications with Docker and Kubernetes
A good intro to using Docker and Kubernetes for a typical Spring web application. (from Java Weekly 199)

Technical

The 6-Step “Happy Path” to HTTPS
HTTPS is now somewhat of a necessity and the path to it can be difficult but it can also be fundamentally simple. Troy Hunt details the 6-step “Happy Path”, that is the fastest, easiest way you can get HTTPS up and running right.

Fast By Default: Modern Loading Best Practices (Chrome Dev Summit 2017)
Optimizing sites to load instantly on mobile is far from trivial. Costly JavaScript can take seconds to process, we often aren’t sensitive to users’ data plans, and browsers don’t know which UX-critical resources should load first. One interesting talk is “Queryable Real User Monitoring for the web?” (https://www.youtube.com/watch?v=_srJ7eHS3IM&feature=youtu.be&t=11m3s) which tells us about the Chrome User Experience Report (https://blog.chromium.org/2017/10/introducing-chrome-user-experience-report.html): a dataset of real-world performance as experienced by Chrome users, against which you can run SQL queries.

Introducing Code Smells into Code
Code smells are hints that show you potential problems in your code. Martin Fowler describes 21 code smells and Adrian Bolboaca came up with the Brutal Refactoring Coding Game. In the game participants are asked to write the cleanest code possible. If the facilitator spots any code smell, participants must stop and immediately remove it. The post is not about the game but about code smells introduced into code. The game allows observation how and when code smells are introduced (because the whole point is to spot and remove them). (from Java Weekly 199)

Miscellaneous

Becoming an accidental architect
“How does one transition from developer to accidental architect? It doesn’t happen overnight.” The article describes the journey from developer to architect and how software architects can balance technical proficiencies with an appropriate mastery of communication.

Something different

Pole Bicycles Announces New CNC-Machined ‘Machine’
Finnish bike company Pole has always stamped its own path and redefined how mountain bikes can be long and fast. Now they have redefined how a frame is made and announced a robotically CNC-machined frame which is also 100% made in Finland. “The Machine is a cutting edge 29″ superbike which can be used as the one bike for everything. The travel on the bike is 180mm front and 160mm rear. The frame geometry follows Pole’s notoriously long and slack geometry with steep seat tube for better climbing. On our tests, the Machine was even easier to ride than the EVOLINK’s.”

Dockerizing all the things: Running Ansible inside Docker container

Automating things in software development is more than useful, and using Ansible is one way to automate software provisioning, configuration management and application deployment. Normally you would install Ansible on your control node just like any other application, but an alternative strategy is to deploy Ansible inside a standalone Docker image. But why would you do that? This approach has benefits for, among other things, operational processes.

Although Ansible does not require the installation of any agents on managed nodes, the environment where Ansible itself is installed is not so simple to set up. On the control node it requires specific Python libraries and their system dependencies. So instead of using a package manager to install Ansible and its dependencies, we just pull a Docker image.

By creating an Ansible Docker image you get the Ansible version you want and isolate all of the required dependencies from the host machine, where they might potentially break things in other areas. And to keep things small and clean, the image is based on Alpine Linux.

The Dockerfile is:

FROM alpine:3.7
 
ENV ANSIBLE_VERSION 2.5.0
 
ENV BUILD_PACKAGES \
  bash \
  curl \
  tar \
  openssh-client \
  sshpass \
  git \
  python \
  py-boto \
  py-dateutil \
  py-httplib2 \
  py-jinja2 \
  py-paramiko \
  py-pip \
  py-yaml \
  ca-certificates
 
# If installing ansible@testing
# RUN echo "@testing http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
 
RUN set -x && \
    \
    echo "==> Adding build-dependencies..."  && \
    apk --update add --virtual build-dependencies \
      gcc \
      musl-dev \
      libffi-dev \
      openssl-dev \
      python-dev && \
    \
    echo "==> Upgrading apk and system..."  && \
    apk update && apk upgrade && \
    \
    echo "==> Adding Python runtime..."  && \
    apk add --no-cache ${BUILD_PACKAGES} && \
    pip install --upgrade pip && \
    pip install python-keyczar docker-py && \
    \
    echo "==> Installing Ansible..."  && \
    pip install ansible==${ANSIBLE_VERSION} && \
    \
    echo "==> Cleaning up..."  && \
    apk del build-dependencies && \
    rm -rf /var/cache/apk/* && \
    \
    echo "==> Adding hosts for convenience..."  && \
    mkdir -p /etc/ansible /ansible && \
    echo "[local]" >> /etc/ansible/hosts && \
    echo "localhost" >> /etc/ansible/hosts
 
ENV ANSIBLE_GATHERING smart
ENV ANSIBLE_HOST_KEY_CHECKING false
ENV ANSIBLE_RETRY_FILES_ENABLED false
ENV ANSIBLE_ROLES_PATH /ansible/playbooks/roles
ENV ANSIBLE_SSH_PIPELINING True
ENV PYTHONPATH /ansible/lib
ENV PATH /ansible/bin:$PATH
ENV ANSIBLE_LIBRARY /ansible/library
 
WORKDIR /ansible/playbooks
 
ENTRYPOINT ["ansible-playbook"]

The Dockerfile declares an entrypoint enabling the running container to function as a self-contained executable, working as a proxy to the ansible-playbook command.

Build the image as:

docker build -t walokra/ansible-playbook .

You can test the ansible-playbook running inside the container, e.g.:

docker run --rm -it -v $(pwd):/ansible/playbooks \
    walokra/ansible-playbook --version

The command for running e.g. site.yml playbook with ansible-playbook from inside the container:

docker run --rm -it -v $(pwd):/ansible/playbooks \
    walokra/ansible-playbook site.yml

If Ansible is interacting with external machines, you’ll need to mount an SSH key pair for the duration of the play:

docker run --rm -it \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/ansible/playbooks \
    walokra/ansible-playbook site.yml

To make things easier you can use a shell script named ansible_helper that wraps the Docker image containing Ansible:

#!/usr/bin/env bash
docker run --rm -it \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
  -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
  -v $(pwd):/ansible/playbooks \
  -v /var/log/ansible/ansible.log \
  walokra/ansible-playbook "$@"

Point the above script to any inventory file so that you can execute any Ansible command on any host, e.g.:

./ansible_helper playbooks/deploy.yml -i inventory/dev -e 'some_var=some_value'

Now we have dockerized Ansible, isolated its dependencies and are not restricted to some old version which we’d get from a Linux distribution’s package manager. Crafty, isn’t it? Check the docker-ansible-playbook repository for more information and examples with Ansible Vault.

This blog post and Dockerfile borrow from Misiowiec’s post Running Ansible Inside Docker and his earlier work. If you want to test playbooks it’s worth checking out his ansible_playbook repository. Since then Alpine Linux has evolved and things could be cleaned up a bit more, like getting Ansible directly from the testing repository.