Best practices for forking a git repository and continuing development

Sometimes there’s a need to fork a git repository and continue development with your own additions. It’s recommended to make a pull request to upstream so that everyone can benefit from your changes, but in some situations that’s not possible or feasible. When continuing development in a forked repo, some questions come to mind when starting. Here are some questions and answers I found useful when we forked a repository on GitHub and continued to develop it with our specific changes.

Repository name: new or fork?

If you’re releasing your own package (e.g. to npm or mvn) from the forked repository with your additions, then it’s logical to also rename the repository to match that package name.

If it’s an npm package and you’re using scoped packages, then you could also keep the original repository name.

Keeping master and continuing development on a branch?

Using master is the sane thing to do. You can always sync your fork with the upstream repository. See: syncing a fork.

Generally you want to keep your local master branch as a close mirror of the upstream master and execute any work in feature branches (that might become pull requests later).
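A minimal sketch of that sync, assuming the original repository has been added as a remote named upstream (the URL is a placeholder):

$ git remote add upstream https://github.com/original-owner/original-repo.git
$ git fetch upstream
$ git checkout master
$ git merge upstream/master
$ git push origin master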

How should you do versioning?

Suppose that the original repository (origin) is still in active development and makes new releases. How should you do versioning in your forked repository, as you probably want to bring the changes done in the origin to your fork, while still maintaining semantic versioning?

In short, semver doesn’t support prepending or appending strings to a version, so adding your own tag to the version number you’re following from the origin breaks the versioning. You can’t use something like “1.0.0@your-org.0.1” (not valid semver at all), and “1.0.0-your-org.1” changes the meaning, since a pre-release version has lower precedence than the release it points to. This has been discussed e.g. in semver issue #287. The suggestion there was to use the build metadata tag to encode the other version, as shown in semver spec item 10. The downside is that “Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence.”

If you want to keep a relation to the original package version and follow semver, your options are limited. The only option is to use the build metadata tag, e.g. “1.0.0+your-org.1”.

It seems that when following semantic versioning, your only real option is to diverge from the origin’s version and continue as you go.

If you don’t need or want to follow semver, you can track the upstream version and mark your changes using markings similar to semver pre-releases, e.g. “1.0.0-your-org.1”.
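How the two schemes behave can be checked with the node-semver package. A small sketch, assuming semver has been installed with npm install semver (the version strings are the examples from above):

$ node -e 'const semver = require("semver");
console.log(semver.eq("1.0.0+your-org.1", "1.0.0")); // true: build metadata is ignored in precedence
console.log(semver.lt("1.0.0-your-org.1", "1.0.0")); // true: a pre-release tag sorts below the release'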

npm package: scoped or unscoped?

Using scoped packages is a good way to signal official packages for an organization. An example of using scoped packages can be seen in Storybook.

It’s more a matter of preference and the naming conventions of your packages. If you’re using names like your-org-awesome-times-ahead-package and your-org-patch-the-world-package, then using scoped packages seems redundant.

Who should be the author?

At least add yourself to the contributors field in package.json.
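A hypothetical package.json fragment tying these answers together; the scoped name, version and contributor entry are all placeholder values, and npm’s handling of build metadata in versions is worth verifying for your setup:

{
  "name": "@your-org/forked-package",
  "version": "1.0.0+your-org.1",
  "contributors": [
    "Your Name <you@example.com> (https://example.com)"
  ]
}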

Forking only for patching an npm library?

Don’t fork; use patch-package, which lets app authors instantly make and keep fixes to npm dependencies. Patches created by patch-package are automatically and gracefully applied when you use npm (>= 5) or yarn. You no longer need to wait around for pull requests to be merged and published, and there’s no more forking repos just to fix that one tiny thing preventing your app from working.
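The workflow is roughly the following sketch; some-package is a placeholder for the dependency you need to fix, and adding "postinstall": "patch-package" to your package.json scripts applies the patches on every install:

# 1. Fix the bug directly in node_modules
$ vim node_modules/some-package/index.js
# 2. Create a patch file under patches/
$ npx patch-package some-package
# 3. Commit the generated patch
$ git add patches/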

This post was originally published on the Gofore Group blog on 11.2.2019.

Learning and Staying Current in Software Development

Software development is one of the professions where you have to keep your knowledge up to date and follow what happens in the field. Staying current and expanding your horizons can be achieved in different ways, and one good way I have used is to follow different news sources and newsletters, listen to podcasts and attend meetups. Here is my opinionated selection of resources for software developers to learn and share ideas: news sites, podcasts, newsletters, Twitter accounts and meetups. The meetups and some of the resources are Finland-specific.

News

There are some good sites for following what happens in technology. They provide community-powered links and discussions.

Podcasts

Podcasts are a nice resource for gathering experiences and new information about how things can be done and what’s happening and coming up in software development. I commute about an hour daily, and time flies when you find good episodes to listen to. Here’s my selection of podcasts relating to software development.

General

  • Software Engineering Daily: “The world through the lens of software” (iTunes)
  • Software Engineering Radio: “Targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast”. (feed)
  • ShopTalk: “An internet radio show about the internet starring Dave Rupert and Chris Coyier.” (iTunes)
  • Full Stack Radio: “Every episode, Adam Wathan is joined by a guest to talk about everything from product design and user experience to unit testing and system administration.” (feed)

Front-end

  • Syntax: “A Tasty Treats Podcast for Web Developers.” (iTunes)
  • The Changelog: “Conversations with the hackers, leaders, and innovators of software development.”
  • React Podcast: “Conversations about React with your favorite developers.”
  • Brainfork: “A podcast about mental health & tech”

In Finnish

  • ATK-hetki: “Vesa Vänskä and Antti Akonniemi discuss technology, business and self-improvement.”
  • Webbidevaus: “Talk radio about web development, in Finnish! Hosted by Antti Mattila and Riku Rouvila.”

Newsletters

Information overload is easily achieved, so it’s beneficial to use for example curated newsletters for the subjects which intersect the stack you’re using and the topics you’re interested in.

The power of a newsletter lies in the fact that it can deliver condensed and digestible content, which is harder to achieve with other good news sources like feed subscriptions and Twitter. A well-curated newsletter for a targeted audience is a pleasure to read, and even if you forget to check your newsletter folder, you can always get back to it later.

General

Mobile development

  • iOS Dev Weekly: Hand picked round up of the best iOS development links published every Friday
  • This Week In Swift: List of the best Swift resources of the week.
  • iOS Dev nuggets: Short iOS app development nugget every Friday/Saturday. Short and usually something you can read in a few minutes and improve your skills at iOS app development.

Java

Database

  • DB Weekly: A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.

HTML and CSS

  • HTML5Weekly: Weekly HTML5 and Web Platform technology roundup. Curated by Peter Cooper.
  • CSS Weekly: Roundup of CSS articles, tutorials, experiments and tools. Curated by Zoran Jambor.

Web development

  • Status code: “Keeping developers informed.” Weekly email newsletters on a range of programming niches (links to JavaScript Weekly, DevOps Weekly etc.)
  • Web Development Reading List: Weekly roundup of web development–related sources, selected by Anselm Hannemann.
  • Versioning: “Daily knowledge devs and designers need to get ahead of the game.” SitePoint’s daily newsletter, which features the latest web development news.
  • Hacking UI: A weekly email with our favorite articles about design, front-end development, technology, startups, productivity and the occasional inspirational life lesson.
  • Scott Hanselman: Newsletter of Wonderful Things. Includes interesting and useful stuff Scott has found over the last few weeks and other wonderful things.
  • MergeLinks: Weekly email of curated links to articles, resources, freebies and inspiration for web designers and developers.
  • The “How to keep up to date on: Front-End Technologies” page lists newsletters, blogs and people to follow.

JavaScript

  • JavaScript Weekly: Weekly e-mail round-up of JavaScript news and articles. Curated by Peter Cooper.
  • Node Weekly: Once–weekly e-mail round-up of Node.js news and articles.
  • A Drip of JavaScript: “One quick JavaScript tip”, delivered every other Tuesday and written by Joshua Clanton.
  • SuperHero.js: Collection of the best articles, videos, and presentations on creating, testing, and maintaining a JavaScript code base.
  • State of JS: Results of yearly JavaScript surveys

User experience and design

  • UX Design Weekly: Hand picked list of the best user experience design links every week. Curated by Kenny Chen & published every Monday.
  • Sidebar.io: Satisfies your web aesthetics with a list of the 5 best design links of the day. The content is manually curated by a couple of great editors.
  • Userfocus: Monthly newsletter which shares an in-depth article on user experience.

Ops

  • DevOps Weekly: Weekly slice of devops news.
  • Web Operations Weekly: Weekly newsletter on Web operations, infrastructure, performance, and tooling, from the browser down to the metal.
  • Microservice Weekly: A hand-curated weekly newsletter with the best articles on microservices.

Twitter

Following fellow developers and other people and accounts on Twitter is a good way to know what’s happening right now. Here’s a selection of accounts I (@walokra) follow, in no particular order.

Development

  • @ThePracticalDev: Great posts from the amazing dev.to community, with some opinion and humor mixed in.
  • @CommitStrip: The blog relating the daily life of developers. Official english account.
  • @baeldung: Author of restwithspring.com and learnspringsecurity.com, passionate about REST, Security, TDD and everything in between.
  • @martinfowler: Author and international public speaker on software development, specializing in object-oriented analysis and design, UML, patterns, and agile software development methodologies.

Infosec

  • @troyhunt: Pluralsight author. Microsoft Regional Director and MVP for Developer Security. Online security, technology and “The Cloud”. Creator of @haveibeenpwned.
  • @briankrebs: Independent investigative journalist. Writes about cybercrime. Author of ‘Spam Nation’, a NYT bestseller. Wrote for The Washington Post ’95-’09
  • @mikko: CRO at F-Secure ● TED Speaker ● Revɘrse Engineer ● Supervillain
  • @TinkerSec: Infosec hacker things.
  • @Anakondantti: Mostly software security related, but occasionally other things too. I’m a white hat hacker at team ROT.
  • @SunTzuCyber: If Sun Tzu had written “The Art of Cyber War”, these would be his quotes.
  • @lennyzeltser: Advances information security. Grows tech businesses. Fights malware. // VP of Products @MinervaLabs. Author and Instructor @SANSInstitute.

React scene

  • @jevakallio: @FormidableLabs, React/Native engineer, comedian, speaker, writer, improviser, Twitter Developer Expert™. Artisanal small batch free range shitposting.
  • @bebraw: Award winning founder of @survivejs and @jsterlibs. I also organize @ReactFinland.
  • @ReactJSNews: The latest React news and articles.

Design / UX

  • @steveschoger: Designer for @TightenCo and @taylorotwell ❯ Maker of heropatterns, heroicons and zondicons ❯ Design Tips
  • @UX_Grant: Senior Designer @ booking.com. Creating, Learning, Sharing. Maker: MakersMusic.co
  • @jonikorpi: Making multiplayer games using the web platform, as @vuorodesign. Previously web design at @kiskolabs.
  • @lukew: Humanizing technology. Founded: Polar (Google acquired) Bagcheck (Twitter acquired) Wrote: Mobile First, Web Form Design, Site Seeing. Worked: Yahoo, eBay, NCSA.
  • @autiomaa: Helping people, with design & technology. Front-end development, visual design, photography. Learning something new every day.
  • @skrug: Best known as the guy who wrote Don’t Make Me Think (now in its 3rd edition!) and Rocket Surgery Made Easy.
  • @jnd1er: Don Norman. Design thinker, company advisor, professor, columnist, author, … Latest book: Design of Everyday Things, Revised and Expanded.
  • @mpietila: User experience etc. Occasional smart-assery & besserwisserism. I have a history of seeing what they did there. Head of design at @qvik.

Database

Miscellaneous

Java

  • @mreinhold: Chief Architect, Java Platform Group, Oracle.
  • @jodastephen: Java Champion. Developer at OpenGamma. Occasional blogger and speaker. Best known for Joda projects and JSR-310

Technology News

Meetups

You can learn much from others, and to broaden your horizons it’s beneficial to attend different meetups, listen to how others have done things and hear war stories. Also: free food and drinks.

Mostly Helsinki based

Tampere based

Community chats

Generating documentation as code with mermaid and PlantUML

Writing documentation is a task which isn’t much liked, and especially with diagrams and flowcharts there’s the problem of which tools to use. One crafty tool is Draw.io with its web and desktop editors, but what should you use if you want to write documentation as code, see the changes clearly in text format and maintain source-controlled diagrams? Two tools for drawing diagrams with human-readable text are mermaid and PlantUML.

mermaid

“Generation of diagrams and flowcharts from text in a similar manner as markdown.”

mermaid is a simple markdown-like script language for generating charts from text via JavaScript. You can try it in the live editor.

mermaid sequence diagram
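The definition behind a simple sequence diagram could look like the following sketch (my own example with made-up participants, not the diagram above):

sequenceDiagram
    participant Browser
    participant Server
    Browser->>Server: GET /api/builds
    Server-->>Browser: 200 OK (JSON)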

You can write mermaid diagrams in a plain text editor, but it’s better to use an editor with plugins to preview your work. Markdown Preview Enhanced for Atom and VS Code can render both mermaid and PlantUML. There are also dedicated preview plugins for VS Code and Atom.

To preview a mermaid definition in VS Code with Markdown Preview Enhanced, press Cmd-Shift-P to open the Command Palette and select Markdown Preview Enhanced: Open Preview.

mermaid in VS Code with Markdown Preview Enhanced

To preview a mermaid definition in VS Code with Mermaid Preview, press Cmd-Shift-P to open the Command Palette and select Preview Mermaid Diagram.

mermaid in VS Code with Mermaid preview

Generating PNG images from mermaid definitions

To use mermaid diagrams it’s useful to export them to PNGs. You can use the mermaid.cli tool, which takes a mermaid definition file as input and generates an SVG/PNG/PDF file as output.

Install mermaid.cli locally:

npm install mermaid.cli

Generate PNG:

./node_modules/.bin/mmdc -i input.mmd -o output.png

If you have plenty of definition files you can use the following script to generate PNGs:

#!/usr/bin/env bash
 
for mmd in ./docs/*.mmd
do
    # strip the directory part to name the output file
    filename="${mmd##*/}"
    echo "Generating $mmd"
    ./node_modules/.bin/mmdc -i "$mmd" -o "${filename%%.*}.png"
done

Alternatively you can use node_modules/mermaid/bin/mermaid.js $mmd, where $mmd is the mermaid file.

PlantUML diagrams

PlantUML is used to draw UML diagrams using a simple and human-readable text description. Diagrams are defined using a simple and intuitive language (pdf) and images can be generated in PNG, SVG or LaTeX format.

You can use PlantUML to write e.g. sequence diagrams, use case diagrams, class diagrams, component diagrams, state diagrams and deployment diagrams.

PlantUML example diagram:

PlantUML diagram
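The source for a simple sequence diagram could look like this sketch (not the exact source of the image above; the participants are made up):

@startuml
actor User
User -> Frontend: login request
Frontend -> AuthService: validate credentials
AuthService --> Frontend: token
Frontend --> User: login succeeded
@enduml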

A simple way to create and view PlantUML diagrams is to use Visual Studio Code and the Markdown Preview Enhanced plugin, which renders both PlantUML and mermaid diagrams. An alternative option is to use the plantuml plugin.

To preview a PlantUML diagram in VS Code with Markdown Preview Enhanced, press Cmd-Shift-P to open the Command Palette and select Markdown Preview Enhanced: Open Preview.

PlantUML in VS Code with Markdown Preview Enhanced

There’s an online demo server which you can use to view PlantUML diagrams. The whole diagram is compressed into the URL itself and diagram data is stored in PNG metadata, so you can fetch it even from a downloaded image. For example this link opens the PlantUML Server with a simple Authentication activity diagram.

Running PlantUML server locally

Although you can render PlantUML diagrams online, for usability and security reasons it’s better to install a local server. This approach is important if you plan to generate diagrams with sensitive information. The easiest path is to run the PlantUML Server Docker container and configure localhost as the server.

docker run -d -p 8080:8080 plantuml/plantuml-server:jetty

In VS Code, open your user settings and configure:

"plantuml.server": "http://localhost:8080",
"plantuml.render": "PlantUMLServer",

Now to preview a diagram in VS Code, press Alt-D to start the PlantUML preview.

PlantUML preview in VS Code and local server

You can also generate diagrams from the command line. First download the PlantUML compiled jar and run the following command, which will look for @startXXX in file1, file2 and file3. For each diagram a .png file will be created.

java -jar plantuml.jar file1 file2 file3

The plantuml.jar needs Graphviz for dot (the graph description language), and on macOS you can install it from Homebrew: brew install graphviz.

For processing a whole directory, you can use the following command which will search for @startXXX and @endXXX in .c, .h, .cpp, .txt, .pu, .tex, .html, .htm or .java files of the given directories:

java -jar plantuml.jar "directory1" "directory2"

Maintain source-controlled diagrams as a code

Documentation and drawing diagrams can be simple, and maintaining source-controlled diagrams with tools like PlantUML and mermaid is achievable. These tools are not like the behemoth of Sparx Enterprise Architect but provide a light and easy way to draw different diagrams for software development. You don’t have to draw lines and position labels manually, as they are magically added where they fit, and you get the same crude boxes and arrows as with tools costing thousands of dollars. Now the question is just which tool to choose: PlantUML or mermaid?

Two days of React Finland 2018: Day two with React and React Native

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what’s hot in the React world. The conference started with workshops, and after that there were two days of talks on React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. This is the second part of my recap of the talks and the notes I posted to Twitter. Check out also the first part, with my notes from the first day’s talks.

React Finland 2018, Day 2

How React changed everything — Ken Wheeler

The second day started with a keynote by Ken Wheeler. He examined how React changed the front-end landscape as we know it, starting with a nice time travel to the 90s with i.a. Flash, JavaScript and AngularJS. Most importantly, the talk took a look at the core idea of React, why it transcends language or rendering target, and posited what that means going forward. And lastly we heard about async React: suspense and time slicing.

“Best part of React is the community”

How React changed everything

Get started with Reason — Nik Graf

The conference also touched on Reason ML, and Nik Graf went into the details, kicking off with the basics and going into how to leverage features like variant types and pattern matching to make impossible states impossible.

Get started with Reason ML

Making Unreasonable States Impossible — Patrick Stapfer

Building on “Get started with Reason”, Patrick Stapfer’s talk went deeper into the world of variant types and pattern matching and put them into a practical context. The talk was nice learning-by-doing live coding of TicTacToe. It showed how Reason ML helps you design solid APIs which are impossible for consumers to misuse. We also got more insights into practical ReasonReact code. The presentation is available on the Internet.

Conclusion about ReasonReact:

  • More rigid design
  • More KISS (keep it simple, stupid) than DRY (don’t repeat yourself)
  • Forces edge-cases to be handled

Learning Reason by doing TicTacToe

Reactive State Machines and Statecharts — David Khourshid

David Khourshid’s talk about state machines and statecharts was interesting. A functional + reactive approach to state machines can make it much easier to understand, visualize, implement and automatically create tests for complex user interfaces and flows: model the code once and automatically generate exhaustive tests for every possible permutation of it. Things mentioned: react-automata, xstate. Slides are available on the Internet.

“Model once, implement anywhere” – David Khourshid

The talk was surprisingly interesting, especially for our use cases, as anything that makes testing better is good. This might be something to look into.

ReactVR — Shay Keinan

After the theory-heavy presentations we got into more visual stuff: React VR. Shay Keinan presented the core concepts behind VR, showed different demonstrations, and explained how to get started with React VR and how to add new features from the Three.js library. React VR: Three.js + React Native = 360° and VR content. On the VR device side it was mentioned that Oculus Go and HTC Vive Focus are a big step for Virtual Reality.

“Virtual Reality’s possibilities are endless. Compares to lucid dreaming.” – Shay Keinan

WebVR enables web developers to create frictionless, immersive experiences, and we got to see the Solar demo and the Three VR demo, which were lit.

React VR

World Class experience with React Native — Michał Chudziak

I’ve briefly experimented with React Native, so it was nice to listen to Michał Chudziak’s talk on how to set up a friendly React Native development environment with the best DX, spot bugs at an early stage and deliver continuous builds to QA. Again, Redux was dropped in favour of apollo-link-state.

Work close to your team – Napoleon Hill

What makes a good Developer eXperience?

  • stability
  • function
  • clarity
  • easiness

GraphQL was mentioned as the holy grail of frontend development and a perfect match for React Native. Tools for a better developer experience: Haul, CircleCI, Fastlane, ESLint, Flow, Jest, Danger, Detox. Other tips were i.a. to use the native IDEs (Xcode, Android Studio) as they help debugging. Xcode Instruments helps debug performance (check iTunes for the video) and there’s also Android Profiler.

World Class experience with React Native

React Finland App – Lessons learned — Toni Ristola

Every conference has to have an app, and React Finland of course did a React Native app. Toni Ristola gave a lightning talk about the lessons learned. Technologies used with React Native were Ignite, GraphQL and Apollo Client. The app’s source code is available on GitHub.

Lessons learned:

  • Have a designer in the team
  • Reserve enough time — doing and testing a good app takes time
  • Test with enough devices — publish alpha early

React Finland App – Lessons learned

React Native Ignite — Gant Laborde

80% of mobile app development is the same old song, which can be cut short with the Ignite CLI. Using Ignite, you can jump into React Native development with a popular combination of technologies, or brew your own. Gant Laborde talked about the new Bowser version, which makes things even better with Storybook, TypeScript, Solidarity, mobx-state-tree and lint-staged. Slides can be found on the Internet.

Ignite

How to use React, webpack and other buzzwords if there is no need — Varya Stepanova

Varya Stepanova’s lightning talk suggested starting a side project other than a ToDo app to study new development approaches, and showed what that can look like in React. The example was how to generate a multilingual static website using Metalsmith, React and other modern technologies and tools which she uses to build her personal blog. Slides can be found on the Internet.

Doing meaningful side projects is a great way to study new things, and I’ve used it i.a. for learning Swift with the Highkara newsreader, doing a couple of apps for Sailfish OS and playing with GraphQL and microservices while developing an app with a largish vehicle dataset.

After party

Summary

Two days full of talks on React, React Native, React VR and all the things that go with developing web applications with them were a great experience. The days were packed with great talks and new information, and everything went smoothly. The conference was nicely organized, the food was good and participants got soft hoodies to go with the Allas Sea Pool ticket. The talks were all great, but especially “World Class experience with React Native” and “React Native Ignite” gave new inspiration to write some app. Also “ReactVR” seemed interesting, although I think Augmented Reality will be a bigger thing than Virtual Reality. It was nice to hear from “The New Best Practices” talk that there really are no new best practices, as the old ones still work. Just use them!

Something to try, and maybe even take into production: Immer, styled-components and Next.js. One thing which is easy to implement is to start using lint-staged, although we are linting all the things already.

One of the conference organizers and speakers, Juho Vepsäläinen, wrote Lessons Learned from the conference, and many of the points he mentions are spot on. The food was nice but “there wasn’t anything substantial for the afternoon break”; there wasn’t anything to eat after lunch, but luckily I had my own snacks. Vepsäläinen also mentions that “there was sometimes too much time between the presentations”, but I think the longer breaks between some presentations were nice for having a quick stroll outside and getting some fresh air. The venue was quite warm and the air wasn’t so good in the afternoon.

The afterparty at Sea Life Helsinki was an interesting choice and it worked nicely, although there weren’t so many people there. The aquarium was a fishy experience and provided some other content than refreshments. Too bad I didn’t have time to go and check out the Allas Sea Pool, for which we got a free ticket. Maybe next time.

Thanks to the conference crew for such a good event, and of course to my fellow Goforeans who attended it and had a great time!

Two days of React Finland 2018: Day one topics of React

React Finland 2018 conference was held last week and I had the opportunity to attend it and listen to what’s hot in the React world. The conference started with workshops, and after that there were two days of talks on React, React Native, React VR and all things that go with developing web applications with them. The two conference days were packed with great talks and new information. Here’s the first part of my notes from the talks, which I posted to Twitter. Read also the second part, with more about React Native.

React Finland 2018: Day 1

React Finland combined the Finnish React community with international flavor from Jani Eväkallio to Ken Wheeler and other leading talents of the community. The event was the first of its kind in Finland and consisted of a workshop day and two days of talks around the topic. It was nice that the event was single track so you didn’t need to choose between interesting talks.

At work I’ve been developing with React for a couple of years and have tried my hand at React Native, so the topics were familiar. The conference provided crafty new knowledge to learn from and maybe even put into production. Overall the conference was a great experience and everything went smoothly. Nice work from the React Finland conference team! And of course thanks to Gofore, which sponsored the conference and got me a ticket.

I tweeted my notes from almost every presentation and here’s a recap of the talks. I heard that the videos from the conference will be available shortly.

The New Best Practices — Jani Eväkallio

The first day’s keynote was by Jani Eväkallio, who talked about “The New Best Practices”. As the talk description put it: “When React was first introduced, it was ridiculed for going against established web development best practices as we knew them. Five years later, React is the gold standard for how we create user interfaces. Along the way, we’ve discovered a new set of tools, design patterns and programming techniques.”

The new best practices were:

  • Build big things from small things
  • Write code for humans first: Flow, TypeScript, Storybook
  • Stay close to the language:
    • helps i.a. linters
  • Always prefer simplicity
  • Don’t break things:
    • Facebook makes React API changes easy to upgrade: deprecation well in advance, migration guides and documentation. It’s a flow, not versions. Use codemod.
  • Keep an open mind

You ask “what new best practices”? Yep, that’s the thing. We don’t need new best practices, as the same concepts like Model-View-Controller and separation of concerns are still valid. We should use best practices which have been proven good before, as they also work nicely with the React philosophy. Eväkallio also talked about why React will be around for a long time: components and interoperable components are an innovation primitive.

The New Best Practices

Declarative state and side effects — Christian Alfoni

After the keynote it was time to get more practical, and Christian Alfoni talked about how we can get help writing our business logic in a declarative manner and what benefits that gives us. He talked about lessons learned while refactoring Codesandbox.io from Redux to Cerebral, and about Cerebral, which provides declarative state and side effects management for popular JavaScript frameworks. The talk’s slides are available on the Internet.

Alfoni also pointed to Turning the database inside out with Apache Samza, and noted that Cerebral had time travel before Dan Abramov presented Live React in his talk Hot Reloading with Time Travel at react-europe 2015.

Immer: Immutability made easy — Michel Weststrate

Immutable data structures are a good thing, and Michel Weststrate showed Immer, a tiny package that allows you to work with immutable data structures with unprecedented ease. Managing the state of a React app is a huge deal with Redux, and any help is welcome. “Immer doesn’t require learning new data structures or update APIs, but instead creates a temporary shadow tree which can be modified using the standard JavaScript APIs. The shadow tree will be used to generate your next immutable state tree.”

The talk showed how to write your reducers in a much more readable way, with half the code and without requiring additional large libraries. The talk slides are available on the Internet.

Get Rich Quick With React Context — Patrick Hund

The “Get Rich Quick With React Context” lightning talk by Patrick Hund didn’t tell how good the job opportunities are when doing React, but how the context API has been completely revamped in React 16.3, and demonstrated a good use case: putting ad placements on your web page to get rich quick! Another use case is localization. Check out the slides, which tell you how easy it is to use context now and how to migrate your old context code to the new API.

There’s always a better way to handle localization — Eemeli Aro

The “There’s always a better way to handle localization” lightning talk by Eemeli Aro told how localization is a ridiculously difficult problem in the general case, but in the specific case you can get away with really simple solutions, especially if you understand the compromises you’re making.

I must have been dozing, as all I got was that there are other options for storing localizations than JSON, like YAML and the JavaScript property format, especially when dealing with non-developers like translators. The talk was quite general and on an abstract level; the solutions mentioned for localization were react-intl, react-i18next and react-message-context.

Styled Components, SSR, and Theming — Kasia Jastrzębska

Web applications need to be styled, and Kasia Jastrzębska talked about CSS-in-JS with styled-components, going through the new API, performance improvements and server-side rendering with Next.js. She also showed the theming manager available in v2 of styled-components. Talk slides are available on the Internet.

The takeaway from this talk was that CSS in a React app can be written as you always have, or by using CSS-in-JS solutions. There are several benefits to using styled-components, but I’m still thinking about how styles get scattered all over the components.

Universal React Apps Using Next.js — Sia Karamalegos

53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
DoubleClick by Google, 2016

Every user’s hardware is different and processing speed can hinder the user experience of client-side rendered React applications, so Sia Karamalegos talked about how server-side rendering and code-splitting can drastically improve user experience by minimizing the work that the client has to do. Performance and how you ship your code matter. The talk showed how to easily build universal React apps using the Next.js framework and walked through the concepts and code examples. Talk slides are available on the Internet.

There are lots of old (mobile) devices which especially benefit from server-side rendering. Next.js is a minimalistic framework for universal, server-rendered (or statically pre-rendered) React applications which enables faster page loads. Pages are server-rendered by default for the initial load, you can enable prefetching of future routes, and there’s automatic code splitting. It’s also customizable, so you can use your own Babel and Webpack configurations and customize the server API with e.g. Express. And if you don’t want to use a server, Next.js can also build static web apps that you can host on GitHub Pages or AWS S3.

Universal React apps using Next.js

State Management in React Apps with Apollo Client — Sara Vieira

Apollo Client was one of the most mentioned frameworks of the conference along with Reason ML, and Sara Vieira gave an energetic talk on how to use it for state management in React apps. If you haven’t come across Apollo Client, it’s a caching GraphQL client which helps you manage data coming from the server. Vieira showed how to manage local state with apollo-link-state.

The talk was fast paced and I somewhat missed the why part, but at least it’s easy to set up: yarn add apollo-boost graphql react-apollo. I’ll have to see the slides and demo later. Maybe the talk can be wrapped up as: “GQL all the things” and “I don’t like Redux” :D

State Management with Apollo

Detox: A year in. Building it, Testing with it — Rotem Mizrachi-Meidan

The Detox testing framework for React Native talk by Rotem Mizrachi-Meidan was the other talk I dozed through. Mizrachi-Meidan talked about what developing and using Detox in production has taught them, how Detox works and what makes it deterministic. The talk showed how mobile apps can be tested. There’s a video of an earlier talk on the Internet.

Detox

Make linting great again! — Andrey Okonetchnikov

One thing in software development which always gets developers arguing over stupid things is code formatting and linting. Andrey Okonetchnikov talked about how “with a wrong workflow linting can be really a pain and will slow you and your team down, but with a proper setup it can save you hours of manual work reformatting the code and reducing the code-review overhead”.

The talk was a quick introduction to how lint-staged, a Node.js library, can improve developer experience. A small tool coupled with tools that analyze and improve the code, like ESLint, Stylelint, Prettier and Jest, can make a big difference.

Missed talks

There were also two talks I missed: “Understanding the differences is accepting” by Sven Sauleau and “Why I YAML” by Eemeli Aro. Sauleau showed “interesting” twists of the JavaScript language.

Read also the second part, with more about React Native.

Build Monitor with Raspberry Pi and Touch Screen

Information is a great tool in software development and it’s useful to have easy access to it. The more obvious you make your problems, the harder they are to ignore, and the more attention they get, the quicker they get solved. One thing developers like to monitor in software development is continuous integration status, along with metrics from running services. And what better way to achieve visibility and visualize those metrics than to build an information radiator.

I didn’t want to reinvent the wheel, so I got a Raspberry Pi 3 Model B with accessories and a 7″ touch screen to base my project on. Using a Raspberry Pi as an information radiator isn’t a new idea, and the Internet is full of examples of different adaptations with screens, lights, bells and whistles. For a start we just visualized our Jenkins builds and a Grafana dashboard, but later on we will probably do a custom dashboard.

Setting up the base

An information radiator is easy to get running, as you only need a computer which preferably runs Linux. You can use an old laptop attached to an external screen, or if you’re like me and want to tinker, you can get e.g. a Raspberry Pi 3 and couple it with a small external screen for portability. A nice and low-cost solution which gets you some hacker value. I got the Rpi from our local hardware store, and unfortunately the Model B+ was released on that very day. The extra 15% power, 5 GHz WiFi and less heat and throttling would’ve been nice.

Raspberry Pi 3 Model B and accessories

I got the Raspberry Pi starter package with the official case, power supply, HDMI cable and a MicroSD card preloaded with NOOBS. So I just needed to connect the cables, put the SD card in and click to install Raspbian. Another interesting operating system would’ve been FedBerry, the Fedora ‘Minimal, XFCE and LXQt’ remixes.

For the screen I used a 7″ IPS 5-point touch screen for the Raspberry Pi with 1024×600 resolution and HDMI, from joy-it, codename RB-LCD-7-2. Initially I thought I could install the whole system with this display, but as it turned out the Rpi doesn’t understand it out of the box: it just showed white noise and interference. Luckily someone had already solved this, and I got the right config after I had installed Raspbian with a real monitor.

Joy-it touch screen with default settings

Edit your /boot/config.txt:

# uncomment to force a specific HDMI mode (this will force VGA)
hdmi_group=2
hdmi_mode=87
 
# Add line:
hdmi_cvt=1024 600 60 3 0 0 0

And reboot your Raspberry Pi after those changes.

You should also run $ sudo raspi-config to set up for example the WiFi country (to allow channels 12 and 13) and your current timezone.

I also updated Raspbian, which bumps it to the rpi-4.14.y Linux tree:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo rpi-update

To connect to the Rpi with SSH, enable it with raspi-config > Interfacing Options, or just:

$ sudo systemctl enable ssh
$ sudo systemctl start ssh

For the record, by default the user pi has the password raspberry. You should change it, but if you want to remove the nagging about the default password, do the following:

$ sudo apt-get -y purge pprompt
$ sudo rm /etc/profile.d/sshpwd.sh
$ sudo rm /etc/xdg/lxsession/LXDE-pi/sshpwd.sh

Problems with WiFi connection

I set up the Raspberry Pi at our local office and at home, and there were no problems with the WiFi connection. But when I brought it to the customer premises, the WiFi connection was weak and data practically didn’t move at all. My MacBook worked fine, but it was connected to a 5 GHz network, which isn’t an option with the Rpi 3 Model B. The WiFi on the Rpi 3 was using channel 11 with 802.11i (WPA2), as shown with iwlist wlan0 scan.

There is a thread on the Raspberry Pi forum about very poor WiFi performance, which suggests setting the WiFi internationalisation correctly to allow channels 12 and 13. At one point the issue was that only channels 1-11 were available on the Rpi 3, but checking out the ‘next’ branch of the firmware/kernel (sudo BRANCH=next rpi-update) apparently fixed channels 12/13. I was on kernel 4.9.80, so it wasn’t a problem for me. The other suggested problem is with Atheros chipset based routers, which don’t like the Broadcom WiFi on the Rpi 3.

For some, disabling power management solves the connection issues. For the RPi’s built-in Broadcom (Cypress) WiFi there’s no control for power management and it’s disabled by the kernel; iw/iwlist/iwconfig just misleadingly report “Power Management:on”.

Nevertheless, switching it off made my WiFi connection better, but its strength of course didn’t change.

$ sudo iwconfig wlan0 power off

To make it permanent you can add something like this as an if-up.d script (the write is done with tee because a plain sudo echo ... > file would run the redirection as your unprivileged user and fail):

$ sudo touch /etc/network/if-up.d/wlan0
$ sudo chmod +x /etc/network/if-up.d/wlan0
$ echo -e '#!/bin/bash\niwconfig wlan0 power off' | sudo tee /etc/network/if-up.d/wlan0

Accessing Raspberry Pi remotely

The information radiator is usually connected to a TV with no keyboard or mouse attached, so accessing it remotely is useful. You can use x11vnc, which allows you to VNC into a headless Pi with a VNC client like Apple Remote Desktop, RealVNC’s vncviewer or Homebrew’s tiger-vnc.

$ sudo apt-get install ttf-mscorefonts-installer
$ sudo apt-get install x11vnc

To start x11vnc automatically, create a new or edit the existing ~/.xsessionrc file:

$ cat ~/.xsessionrc
/usr/bin/x11vnc -noxrecord -noxdamage -forever -bg -rfbport 5900

Getting interesting things on the screen

To test our setup and quickly show some data, I just added a Build Monitor view and a Dashboard view in Jenkins. I also configured the Rpi to automatically start the Chromium browser in kiosk mode after reboots and directed it to the Jenkins website, so there would be no need for interaction to get things on the screen. To show several sources of data and get things running quickly without a customized information radiator, we used the Revolver – Tabs Chromium extension to rotate between multiple browser tabs: one showed the Jenkins Build Monitor, another a Grafana dashboard and a third a Twitter feed.

To automatically start the chromium-browser after Raspbian desktop starts, edit the following lxsession file:

$ cp /etc/xdg/lxsession/LXDE-pi/autostart /home/pi/.config/lxsession/LXDE-pi/autostart
$ vim /home/pi/.config/lxsession/LXDE-pi/autostart
 
#@xscreensaver -no-splash  # comment this line out to disable screensaver
# Disable Xsession from blanking
@xset s off
@xset -dpms
@xset s noblank
 
@sh ./autostart.sh
# load chromium after boot and point to the localhost webserver in full screen mode
@chromium-browser --kiosk --no-default-browser-check --no-first-run --disable-infobars "http://localhost/"

Chromium has a feature to show a “Restore pages” nagging popup when it wasn’t gracefully shut down, and you can try the following Stack Overflow suggestion. Doing “chmod 001 ~/.config/chromium/Default/Preferences” was also suggested, but it results in another nagging window.

$ cat ./autostart.sh
#!/bin/sh
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' ~/.config/chromium/'Local State'
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/; s/"exit_type":"[^"]\+"/"exit_type":"Normal"/' ~/.config/chromium/Default/Preferences

You could use the "--restore-last-session" or "--incognito" parameter, which also works, but those have several disadvantages, such as disabling the cache and login information. Or maybe I should just use Firefox.

Raspberry Pi and Jenkins Build Monitor

It might also be useful to set Chromium to restart every night. When running Chromium for longer periods it may fill the Rpi’s memory with garbage, after which it must be hard rebooted.

Turning the monitor on and off automatically

When running the Rpi as a wall monitor it’s useful to save energy and extend the life of your monitor by turning it on and off on a daily schedule. You can do this by running a cron script. Get this script, put it in /home/pi/rpi-hdmi.sh and make it executable: chmod +x /home/pi/rpi-hdmi.sh. Call the script at the desired times with cron entries:

$ crontab -e
 
# Turn HDMI off at 17:00
0 17 * * * /home/pi/rpi-hdmi.sh off
 
# Turn HDMI on at 7:30
30 7 * * * /home/pi/rpi-hdmi.sh on

If you have problems with the above script, try the original, but change "curr_vt=`fgconsole`" to "curr_vt=`sudo fgconsole`", as fgconsole needs sudo privileges; otherwise you get the error “Couldn’t get a file descriptor referring to the console”.

From simple dashboard to real information Radiator

Showing just the Jenkins Build Monitor or Grafana dashboards is simple, but to get more information about the things running you could show things like the success rate of the builds, build health, the latest open pull request and the project’s Twitter messages. One nice example of an information radiator is Panic’s Status Board.

There are different ways to create a customizable dashboard, and one way is to use Dashing, which is a dashboard framework. To get a head start you can look at Project Cockpit (https://github.com/martin-naumann/project-cockpit), which shows the “build health” of the latest build, the latest open pull request in GitHub, the success rate of your builds, some free-form text and your project’s or company’s logo. It uses the Jenkins API to get the ratio of successful / non-successful builds as well as the latest build state. It also calls the GitHub API to get the latest pull request, to display the name and picture of the author and the title of the pull request on the dashboard.

For more leisurely use you can set up the Raspberry Pi as a wall display to show information like a calendar, weather, photos and RSS feeds. One option is Dakboard, a web interface used to display information, which is quite configurable with different services. At first Dakboard seems nice, but it is quite limited in what data it can show and some useful features are premium. Another open source option is MagicMirror², which seems more modular and extensible (you can create your own modules) but needs more tinkering.

Extracting JSON value from command line with jq and Python

When developing modern web applications you often end up checking REST API responses and parsing JSON values. You can do it with a combination of Unix tools like sed, cut and awk, but if you’re allowed to install extra tools or use Python, things get easier. This post shows a couple of options for extracting JSON values on the command line.

There are a number of tools specifically designed for manipulating JSON from the command line, and they are a lot easier and more reliable than doing it with awk. One of those tools is jq, as shown on Stack Overflow. You can install it on macOS from Homebrew: brew install jq.

$ curl -s 'https://api.github.com/users/walokra' | jq -r '.name'

If you’re limited to tools that are likely already installed on your system, such as Python, using the json module gives you the benefit of a proper JSON parser while avoiding extra dependencies.

Python:

$ curl -s 'https://api.github.com/users/walokra' | \
    python -c "import sys, json; print(json.load(sys.stdin)['name'])"

Stack Overflow answers to the question “Parsing JSON with Unix tools” show other options with standard tools like sed, cut and awk, and more exotic options with Perl, Node.js and PHP.
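The same tools also reach into nested structures. A hedged example against the GitHub API, which returns an array of repository objects for this endpoint:

$ curl -s 'https://api.github.com/users/walokra/repos' | jq -r '.[0].name'

$ curl -s 'https://api.github.com/users/walokra/repos' | \
    python -c "import sys, json; print(json.load(sys.stdin)[0]['name'])"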

Git pre-commit and pre-receive hooks: validating YAML

Software development has many steps which you can automate, and one useful thing to automate is adding Git commit hooks to validate your commits to version control. Git hooks fire off custom client-side and server-side scripts when certain important actions occur. Validating committed files’ contents is important for syntax validity, and even more so when providing Spring Cloud Config configurations in YAML for microservices, as otherwise things fail.

Validating YAML can be done with yamllint, hooked into pre-commit or pre-receive. It not only checks for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces and indentation. Here’s a short overview to get started with yamllint on Git commit hooks.

Quickstart for yamllint

Installing yamllint

On Fedora / CentOS:
$ sudo dnf install yamllint
 
using pip, the Python package manager:
$ sudo pip install yamllint
 
or on macOS:
$ sudo -H python -m pip install yamllint

You can also install yamllint from sources when e.g. network connectivity is limited. The linter depends on pathspec >=0.5.3 and pyyaml >= 3.12.

Custom config

yamllint is quite strict with validation and you might want to make it a bit more relaxed with custom configuration. For example, I need to allow long lines. You can also disable checks for a specific line with a comment, as shown after the config below.

$ cat yamllint-config.yml
 
extends: default
 
rules:
  line-length: disable
  comments:
    require-starting-space: false
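The per-line disabling mentioned above is done with a comment directive, for example:

# yamllint disable-line rule:line-length
long_key: "this single line is allowed to be as long as it needs to be without failing the lint"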

Usage

$ yamllint file.yml other-file.yaml

Usage with custom config:

$ yamllint -c yamllint-config.yml .

Or with custom config without config file:

$ yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" file.yaml

Or a more specific case, like running yamllint in a Jenkins job’s workspace and validating files with a specific suffix:

$ find . -type f -iname '*.j2' -exec yamllint -s -c yamllint-config.yaml {} \;

Pre-commit hook and yamllint

A better way to use yamllint is to integrate it with e.g. git and a pre-commit or pre-receive hook. Adding yamllint to a pre-commit hook is easy with pre-commit, which is a framework for managing and maintaining multi-language pre-commit hooks.

Installing pre-commit:

Using pip:
$ pip install pre-commit
 
Or on macOS:
$ brew install pre-commit

To enable the yamllint pre-commit plugin you just add a file called .pre-commit-config.yaml to the root of your project and add the following snippet to it:

$ cat .pre-commit-config.yaml
---
- repo: https://github.com/adrienverge/yamllint.git
  sha: v1.10.0
  hooks:
    - id: yamllint

With custom config and strict mode:

$ cat .pre-commit-config.yaml
---
repos:
 - repo: https://github.com/adrienverge/yamllint.git
   sha: v1.10.0
   hooks:
     - id: yamllint
       args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']

You can also use repository-local hooks when e.g. it makes sense to distribute the hook scripts with the repository. Install yamllint locally and configure it in your project’s root directory’s .pre-commit-config.yaml as a repository-local hook. As you can see, I’m using a custom config for yamllint.

$ cat .pre-commit-config.yaml
---
- repo: local
  hooks: 
  - id: yamllint
    name: yamllint
    entry: yamllint -c yamllint-config.yml .
    language: python
    types: [file, yaml]

Note: if you’re linting files with a suffix other than yaml/yml, like Ansible template files with a .j2 suffix, then use types: [file]

Pre-receive hook and yamllint

Using pre-commit hooks to process commits is easy, but often doing the checks on the server side with pre-receive hooks is better. Pre-receive hooks are useful for satisfying business rules, enforcing regulatory compliance and preventing certain common mistakes. Common use cases are requiring commit messages to follow a specific pattern or format, locking a branch or repository by rejecting all pushes, preventing sensitive data from being added to the repository by blocking keywords, patterns or filetypes, and preventing a PR author from merging their own changes.

One example of a pre-receive hook is to run a linter like yamllint to ensure that a business critical file is valid. In practice the hook works similarly to a pre-commit hook, but the files you check in to the repository are not kept there “just like that”: some of them are stored as deltas to others, or their contents are compressed. There is no place where these files are guaranteed to exist in their “ready-to-consume” state, so you must jump through some extra hoops to make the files available for opening and running checks on.

There are different approaches to making files available for the pre-receive hook’s script, as described on Stack Overflow. One way is to check out the files in a temporary location; if you’re on Linux you can instead point to /dev/stdin as the input file and put the files through a pipe. Both ways have the same principle: check the modified files between the new and the old revision, and if the files are present in the new revision, run the validation script with a custom config.

Using the /dev/stdin trick in Linux:

#!/usr/bin/env bash
 
set -e
 
ENV_PYTHON='/usr/bin/python'
 
# Check that the tools we need are actually available
if ! command -v "${ENV_PYTHON}" >/dev/null 2>&1 || ! command -v yamllint >/dev/null 2>&1; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# A pre-receive hook gets "<oldrev> <newrev> <refname>" lines on stdin
 
while read oldrev newrev refname; do
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    git diff --name-only $oldrev $newrev | while read file; do
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | egrep '\.yml$' | awk '{ print $3 }'`
        # If it's not present, then continue to the next iteration
        if [ -z "${object}" ];
        then 
            continue; 
        fi
 
        # Get file in commit and point /dev/stdin as input file 
        # and put the files through pipe for syntax validation
        echo $file
        git show $newrev:$file | /usr/bin/yamllint -d "{extends: relaxed, rules: {line-length: disable, comments: disable, trailing-spaces: disable, empty-lines: disable}}" /dev/stdin || exit 1
    done
done

Alternative way: copy changed files to temporary location

#!/usr/bin/env bash
 
set -e
 
BAD_FILE=0
ENV_PYTHON='/usr/bin/python'
COMMAND='/usr/bin/yamllint'
TEMPDIR=`mktemp -d`
 
# Check that the tools we need are actually available
if ! command -v "${ENV_PYTHON}" >/dev/null 2>&1 || ! command -v "${COMMAND}" >/dev/null 2>&1; then
    echo '`python` or `yamllint` not found.'
    exit 1
fi
 
# A pre-receive hook gets "<oldrev> <newrev> <refname>" lines on stdin
 
while read oldrev newrev refname; do
 
    # Get the file names, without directory, of the files that have been modified
    # between the new revision and the old revision
    files=`git diff --name-only ${oldrev} ${newrev}`
 
    # Get a list of all objects in the new revision
    objects=`git ls-tree --full-name -r ${newrev}`
 
    # Iterate over each of these files
    for file in ${files}; do
 
        # Search for the file name in the list of all objects
        object=`echo -e "${objects}" | egrep "(\s)${file}\$" | awk '{ print $3 }'`
 
        # If it's not present, then continue to the next iteration
        if [ -z "${object}" ];
        then 
            continue; 
        fi
 
        # Otherwise, create all the necessary sub directories in the new temp directory
        mkdir -p "${TEMPDIR}/`dirname ${file}`" &>/dev/null
        # and output the object content into it's original file name
        git cat-file blob ${object} > ${TEMPDIR}/${file}
 
    done;
done
 
# Now loop over each file in the temp dir to parse them for valid syntax
files_found=`find ${TEMPDIR} -name '*.yml'`
for fname in ${files_found}; do
    # "if !" keeps set -e from aborting the script on a lint failure
    if ! ${COMMAND} "${fname}"; then
        echo "ERROR: parser failed on ${fname}"
        BAD_FILE=1
    fi
done;
 
rm -rf ${TEMPDIR} &> /dev/null
 
if [[ $BAD_FILE -eq 1 ]]
then
  exit 1
fi
 
exit 0

Testing a pre-receive hook locally is a bit more difficult than a pre-commit hook, as you need an environment with a remote repository. Fortunately you can use the process described for GitHub Enterprise pre-receive hooks: you create a local Docker environment to act as a remote repository that can execute the pre-receive hook.
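If you don’t want to set up the Docker environment, a rough local test is to push into a bare repository that has the hook installed; the paths here are placeholders:

$ git init --bare /tmp/remote.git
$ cp pre-receive /tmp/remote.git/hooks/pre-receive && chmod +x /tmp/remote.git/hooks/pre-receive
$ git clone /tmp/remote.git /tmp/work && cd /tmp/work
$ echo 'foo: bar' > test.yml
$ git add test.yml && git commit -m 'test the yamllint hook'
$ git push origin master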

Expanding your horizons on the JVM with Kotlin

The power of the Java ecosystem lies in the Java Virtual Machine (JVM), which runs a variety of programming languages that are better suited to some tasks than Java. One relatively new JVM language is Kotlin, a statically typed programming language that targets the JVM and JavaScript. You can use it with Java, Android and the browser, and it’s 100% interoperable with Java. Kotlin is open source (Apache 2 License) and developed by a team at JetBrains. The name comes from Kotlin Island, near St. Petersburg. The first officially stable release, Kotlin v1.0, came out on February 15, 2016.

Why Kotlin?

“Kotlin is designed to be an industrial-strength object-oriented language, and to be a better language than Java but still be fully interoperable with Java code, allowing companies to make a gradual migration from Java to Kotlin.” – Kotlin, Wikipedia

Kotlin’s page summarizes the question “Why Kotlin?” as:

  • Concise: Reduce the amount of boilerplate code you need to write.
  • Safe: Avoid entire classes of errors such as null pointer exceptions.
  • Versatile: Build server-side applications, Android apps or frontend code running in the browser. You can write code in Kotlin and target JavaScript to run on Node.js or in the browser.
  • Interoperable: Leverage existing frameworks and libraries of the JVM with 100% Java Interoperability.

“You can write code that’s more expressive and more concise than even a scripting language, but with way fewer bugs and with way better performance.” – Why Kotlin is my next programming language

One of the obvious applications of Kotlin is Android development, as the platform is based on Java 6 although it can use most of Java 7 and some backported Java 8 features. Only the recent Android N, which switches to OpenJDK, introduces support for Java 8 language features.

For Java developers one significant feature in Kotlin is higher-order functions, functions that take functions as parameters, which make functional programming more convenient than in Java. But in general, I’m not so sure that Kotlin is as beneficial compared to Java 8. It smooths off a lot of Java’s rough edges, makes code leaner and costs nothing to adopt (other than using IntelliJ IDEA), so it’s at least worth trying. And if you’re stuck with legacy code and can’t upgrade from Java 6, I would jump right in.
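
As a rough sketch (the function names here are made up for illustration), a higher-order function in Kotlin looks like this:

// A higher-order function: "operation" is itself a function parameter
fun applyTwice(value: Int, operation: (Int) -> Int): Int =
    operation(operation(value))

fun main(args: Array<String>) {
    // The last lambda argument can be passed outside the parentheses
    println(applyTwice(3) { it * 2 }) // prints 12
}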

Learning Kotlin

Coming from a Java background, Kotlin at first glance looks a lot leaner, more elegant and simpler, and the syntax is familiar if you’ve written Swift. To get to know the language it’s useful to do some Kotlin Examples and Koans, which walk you through how it works. There’s also a “Convert from Java” tool which is useful for seeing how Java classes translate to Kotlin. For more detailed information you can read the complete reference to the Kotlin language and the standard library.

If you compare Kotlin to Java you see that null references are controlled by the type system, there are no raw types, arrays are invariant (you can’t assign an Array<String> to an Array<Any>) and there are no checked exceptions. Also, semicolons are not required, and there are no static members, non-private fields or wildcard types.
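
For example, a minimal sketch of how the type system controls null references:

fun main(args: Array<String>) {
    val maybe: String? = null   // nullable types are explicit in the type system
    // maybe.length             // would not compile: maybe can be null
    println(maybe?.length)      // safe call prints "null" instead of throwing an NPE
    println(maybe?.length ?: 0) // the elvis operator provides a default, prints "0"
}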

And what does Kotlin have that Java doesn’t? For starters there’s null safety, smart casts and extension functions, plus things Java only got in recent versions, like streams and lambdas (which are arguably “expensive” on the JVM). On the other hand Kotlin targets Java 6 bytecode and doesn’t use some of the improvements in Java 8 like invokedynamic or native lambda support. Some of the JDK 7/8 features are going to be included in the standard library in 1.1, and in the meantime you can use the small kotlinx-support library. It provides extension and top-level functions for using JDK 7/JDK 8 features, such as calling default methods of collection interfaces and using an extension for AutoCloseable.
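
A quick sketch of extension functions and smart casts (the names are made up for illustration):

// Extension function: adds a new method to an existing type
fun String.shout(): String = this.toUpperCase() + "!"

fun describe(obj: Any): String {
    if (obj is String) {
        // Smart cast: inside this branch obj is already a String,
        // so no explicit cast is needed
        return obj.shout()
    }
    return "not a string"
}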

You can also call Java code from Kotlin, which makes it easy to adopt Kotlin alongside Java in an existing project and write just part of your codebase in Kotlin.

The Kotlin Discuss forum is also a nice place to read about others’ experiences of using Kotlin.

Tooling: in practice IntelliJ IDEA

You can use a simple text editor and compile your code from the command line, or use build tools such as Ant, Gradle and Maven, but good IDEs make development more convenient. In practice, using Kotlin is easiest with JetBrains IntelliJ IDEA, and you can use their open source Community edition for free. There’s also an Eclipse plugin for Kotlin, but naturally it’s much less sophisticated than the IntelliJ support.

Example project

The simplest way to start with Kotlin application is to use Spring Boot’s project generator, add your dependencies, choose Gradle or Maven and click on “Generate Project”.

There are some gotchas with using Spring and Kotlin together, which can be seen in the Spring + Kotlin FAQ. For example, by default classes are final and you have to mark them as “open” if you want the standard Java behaviour. This is useful to know with @Configuration classes and @Bean methods. There’s also Kotlin Primavera, which is a set of libraries to support Spring portfolio projects.
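
For example, a minimal sketch of what that looks like in a configuration class (the class and bean names are hypothetical):

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// "open" is needed because Spring subclasses @Configuration classes
// with CGLIB proxies, and Kotlin classes are final by default
@Configuration
open class AppConfig {

    // @Bean methods must also be open so the proxy can override them
    @Bean
    open fun greeting(): String = "hello"
}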

For an example Spring Boot + Kotlin application, take a look at the Spring.io writeup where they build a geospatial messenger with Kotlin, Spring Boot and PostgreSQL.

What does Kotlin look like compared to Java?
Here’s a simple example of using Java 6, Java 8 and Kotlin to filter a Map and return a String. Notice that Kotlin and Java 8 are quite similar.

# Java 6
String result = "";
for (Map.Entry<Integer, String> entry : someMap.entrySet()) {
    if ("something".equals(entry.getValue())) {
        result += entry.getValue();
    }
}
 
# Java 8
String result = someMap.entrySet().stream()
		.filter(map -> "something".equals(map.getValue()))
		.map(map->map.getValue())
		.collect(Collectors.joining());
 
# Kotlin
val result = someMap
  .values
  .filter { it == "something" }
  .joinToString("")
 
# Kotlin, shorter 
val str = "something"
val result = str.repeat(someMap.count { it.value == str })
 
# Kotlin, more efficient with large maps where only some elements match
val result = someMap
  .asSequence()
  .map { it.value }
  .filter { it == "something" }
  .joinToString("")

The last Kotlin example makes the evaluation lazy by converting the map to a sequence. In Kotlin, the collection map/filter methods aren’t lazy by default but always create a new collection. So if we call filter after the values method, it’s not as efficient with large maps where only some elements match the predicate.

Using Java and Kotlin in same project

To start with Kotlin it’s easiest to mix it into an existing Java project and write some classes in Kotlin. Using Kotlin in a Maven project is explained in the reference documentation; to compile mixed-code applications the Kotlin compiler should be invoked before the Java compiler. In Maven terms that means kotlin-maven-plugin should be run before maven-compiler-plugin.

Just add the kotlin-stdlib dependency and the kotlin-maven-plugin to your pom.xml as follows:

<dependencies>
    <dependency>
        <groupId>org.jetbrains.kotlin</groupId>
        <artifactId>kotlin-stdlib</artifactId>
        <version>1.0.3</version>
    </dependency>
</dependencies>
 
<plugin>
    <artifactId>kotlin-maven-plugin</artifactId>
    <groupId>org.jetbrains.kotlin</groupId>
    <version>1.0.3</version>
    <executions>
        <execution>
            <id>compile</id>
            <phase>process-sources</phase>
            <goals> <goal>compile</goal> </goals>
        </execution>
        <execution>
            <id>test-compile</id>
            <phase>process-test-sources</phase>
            <goals> <goal>test-compile</goal> </goals>
        </execution>
    </executions>
</plugin>

Notes on testing

Almost everything is final in Kotlin by default (classes, methods, etc.), which is good as it encourages immutability and fewer bugs. In most cases you use interfaces, which you can easily mock, and in integration and functional tests you’re likely to use real classes, so even then final is not an obstacle. For using Mockito there’s the Mockito-Kotlin library https://github.com/nhaarman/mockito-kotlin which provides helper functions.
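
A rough sketch of what those helpers look like in use (the Greeter interface is hypothetical, and the exact package name depends on the library version):

import com.nhaarman.mockito_kotlin.mock
import com.nhaarman.mockito_kotlin.verify
import com.nhaarman.mockito_kotlin.whenever

interface Greeter {
    fun greet(name: String): String
}

fun greeterExample() {
    // mock() infers the mocked type from the declaration
    val greeter: Greeter = mock()
    whenever(greeter.greet("world")).thenReturn("hello world")

    println(greeter.greet("world")) // prints "hello world"
    verify(greeter).greet("world")
}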

You can also go beyond plain unit tests by using Spek, a specification framework for Kotlin. It allows you to define specifications in a clear, understandable, human-readable way.
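
As a hedged sketch, a Spek specification might look something like this (the Calculator class is made up, and the exact API depends on the Spek version):

import org.jetbrains.spek.api.Spek
import kotlin.test.assertEquals

class Calculator {
    fun sum(a: Int, b: Int): Int = a + b
}

class CalculatorSpek : Spek({
    given("a calculator") {
        val calculator = Calculator()
        on("adding 2 and 4") {
            val sum = calculator.sum(2, 4)
            it("returns 6") {
                assertEquals(6, sum)
            }
        }
    }
})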

There are no static analyzers for Kotlin yet. Java has FindBugs, PMD, Checkstyle, SonarQube, Error Prone and FB Infer; Kotlin has kotlinc, and IntelliJ itself comes with a static analysis engine called the Inspector. FindBugs works with Kotlin bytecode but flags some issues that the language itself already prevents and which are impossible in Kotlin.

To use Kotlin or not?

After writing some classes in Kotlin and converting existing Java classes to Kotlin, I’ve found that it makes the code leaner and easier to read, especially with data classes like DTOs. Less (boilerplate) code is better. You can call Java code from Kotlin, and Kotlin code can be used from Java rather smoothly as well, although there are some things to remember.
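
For example, a quick sketch of a data class for a hypothetical DTO:

// A data class generates equals(), hashCode(), toString() and copy(),
// which would take dozens of boilerplate lines in Java
data class User(val id: Long, val name: String, val email: String?)

fun main(args: Array<String>) {
    val user = User(1, "Alice", null)
    println(user)                    // User(id=1, name=Alice, email=null)
    println(user.copy(name = "Bob")) // a modified copy; the original is untouched
}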

So, to use Kotlin or not? It looks like a good statically typed alternative to Java if you want to expand your horizons. It’s a pragmatic evolution of Java that respects the need for good Java integration, doesn’t introduce anything that’s terribly hard to understand and includes a whole bunch of features you might like. The downsides I’ve come across are that tooling support is rather limited, meaning in practice only IntelliJ IDEA, and that the documentation isn’t always up to date as the language evolves, which is also an issue when searching for examples and solutions. But hey, everything is fun with Kotlin :)

Docker containers and using Alpine Linux for minimal base images

After using Docker for a while, you quickly realize that you spend a lot of time downloading and distributing images. This is not necessarily a bad thing for everyone, but those who scale their infrastructure are required to store a copy of every image that’s running on each Docker host. One solution to make your images lean is to use Alpine Linux, a security-oriented, lightweight Linux distribution.

Lately I’ve been working with our Docker images for Java and Node.js microservices, and when our stack consists of over twenty services, one thing to consider is how we build our Docker images and which distributions to use. Building images upon Debian-based distributions like Ubuntu works nicely, but it brings along packages and services we don’t need. That’s why developers are aiming to create the thinnest, most usable image possible, either by stripping conventional distributions or by using minimal distributions like Alpine Linux.

Choosing your Linux distribution

What’s a good choice of Linux distribution to use with Docker containers? There was a good discussion on Hacker News about small Docker images, with good points in the comment section about what to consider when choosing a container operating system.

For some, size is only a minor concern, and far more important concerns are, for example:

  • All the packages in the base system are well maintained and updated with security fixes.
  • It’s still maintained a few years from now.
  • It handles all the special corner cases with Docker.

In the end the choice depends on your needs and how you want to run your services. Some like to use the quite large Phusion Ubuntu base image which is modified for Docker-friendliness, whereas others like to keep things simple and minimal with Alpine Linux.

Divide and conquer?

One question to ask yourself is: do you need a full operating system? If you dump an OS into a container you are treating it like a lightweight virtual machine, and that might be fine in some cases. If you however restrict it to exactly what you need plus its runtime dependencies and absolutely nothing more, then suddenly it’s something else entirely: it’s process isolation, or better yet, portable process isolation.

Another thing to think about is whether you should combine multiple processes in a single container. For example, if you care about logs you shouldn’t run a logger daemon or logrotate in a container, but should instead store them externally, in a volume or a mounted host directory. An SSH server in a container could be useful for diagnosing problems in production, but if you have to log in to a container running in production you’re doing something wrong (and there’s docker exec anyway). And for cron, run it in a separate container and give it access to exactly the things your cronjob needs.

There are a couple of different schools of thought about how to use Docker containers: as a way to distribute and run a single process, or as a lighter form of a virtual machine. It depends on what you’re doing with Docker and how you manage your containers and applications. It can make sense to combine some services, but in general it’s preferred to isolate every single process and explicitly tell it how to communicate with other processes. That’s sane from many perspectives: security, maintainability, flexibility and speed. But where you draw the line is almost always a personal, aesthetic choice. In my opinion it could make sense to combine, for example, nginx and php-fpm in a single container.

Minimal approach

Lately, there has been some movement towards minimal distributions like Alpine Linux, and it has got a lot of positive attention from the Docker community. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox, using a grsecurity/PaX-patched Linux kernel and OpenRC as its init system. In its x86_64 ISO flavor it weighs in at 82 MB, and a container requires no more than 8 MB. Alpine provides a wealth of packages via its apk package manager. As it uses musl, you may run into issues with environments expecting glibc-like behaviour (for example Kubernetes, or when compiling some npm modules), but for most use cases it should work just fine. And with minimal base images it’s more convenient to divide your processes into many small containers.

Some advantages for using Alpine Linux are:

  • Speed, as the image is faster to download, install and run on your Docker host
  • Improved security, as the smaller image footprint also makes the attack surface smaller
  • Faster migration between hosts, which is especially helpful in high availability and disaster recovery configurations
  • Your system admin won’t complain as much, as you will use less disk space

For my purposes, I need to run Spring Boot and Node.js applications in Docker containers, and they were easily switched from Debian-based images to Alpine Linux without any changes. There are official Docker images for OpenJDK/OpenJRE on Alpine and Dockerfiles for running Oracle Java on Alpine. Although there isn’t an official Node.js image built on Alpine, you can easily make your own Dockerfile or use community-provided files. Where the official Java Docker image is 642 MB, Alpine Linux with OpenJDK 8 is 150 MB and with Oracle JDK 382 MB (which can be stripped down to 172 MB). The official Node.js image is 651 MB (or 211 MB for the slim variant), while the Alpine Linux version is 36 MB. That’s quite a reduction in size.

Examples of using minimal container based on Alpine Linux:

For Node.js:

FROM alpine:edge
 
ENV NODE_ALPINE_VERSION=6.2.0-r0
 
RUN apk update && apk upgrade \
    && apk add nodejs="$NODE_ALPINE_VERSION"

For Java applications with OpenJDK:

FROM alpine:edge
ENV LANG C.UTF-8
 
RUN { \
      echo '#!/bin/sh'; \
      echo 'set -e'; \
      echo; \
      echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
   } > /usr/local/bin/docker-java-home \
   && chmod +x /usr/local/bin/docker-java-home
 
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:$JAVA_HOME/bin
ENV JAVA_VERSION 8u92
ENV JAVA_ALPINE_VERSION 8.92.14-r0
 
RUN set -x \
    && apk update && apk upgrade \
    && apk add --no-cache bash \
    && apk add --no-cache \
      openjdk8="$JAVA_ALPINE_VERSION" \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

If you want to read more about running services on Alpine Linux, check Atlassian developer Nicola Paolucci’s nice article about experiences of running Java apps on Alpine.

Go small or go home?

So, should you use Alpine Linux for running your application on Docker? As the official Docker images are also moving to Alpine Linux, it seems to make perfect sense from both performance and security perspectives to switch to Alpine. And if you don’t want to take the leap from Debian or Ubuntu, or you want support from the downstream vendor, you should at least consider stripping unneeded files from your image to make it smaller.