Web analytics with Piwik: keeping control over your own data

Web analytics is one of the essential tools for a website. Besides measuring web traffic and reporting the number of visitors, it can also be used to assess and improve the effectiveness of a site. The most common way to collect data is on-site web analytics: measuring a visitor’s behavior once they are on your website, using page tagging technology as done by Google Analytics, the most widely used web analytics service. But what do you use if you want to keep control over your own data?

You don’t have to look far, as Piwik is the best-known open source web analytics application, aiming to be the ultimate open alternative to Google Analytics. Here’s a short overview of Piwik Analytics and how to get started with it.

“Web analytics is the measurement, collection, analysis and reporting of web data for purposes of understanding and optimizing web usage.” – Wikipedia

Piwik Open Analytics Platform

Piwik is a web analytics application which tracks online visits to one or more websites and displays reports on these visits for analysis. In short, it aims to be the ultimate open source alternative to Google Analytics. The code is GPL v3 licensed and available on GitHub. On the technical side, Piwik is written in PHP, uses a MySQL database and can be hosted by yourself. And if you don’t want to set up or host Piwik yourself, commercial services are also available.

Piwik provides the usual features you would expect from a web analytics application. You get reports regarding the geographic location of visits, the source of visits, the technical capabilities of visitors, what the visitors did and the time of visits. Piwik also provides features for analysis of the data it accumulates such as saving notes to data, goals for actions, transitions for seeing how visitors navigate, overlaying analytics data on top of a website and displaying how metrics change over time. The easiest way to see what it has to offer is to check the Piwik online demo.

Feature highlights

You might ask how Piwik differs from other web analytics applications such as Google Analytics. One principal advantage of using Piwik is that you are in control: you can host Piwik on your own server and the data is tracked inside your own MySQL database, so you have full control over your data. Software-as-a-service analytics applications, on the other hand, have full access to the data their users collect. Data privacy is essential for the public sector and for enterprises which can’t or don’t want to share their data with, for example, Google. With Piwik you ensure that your visitors’ behavior on your website is not shared with advertising companies.

Another interesting feature is the advanced privacy options Piwik provides: the ability to anonymize IP addresses, purge tracking data regularly (but not report data), opt-out support and Do Not Track support. Your website visitors can decide whether they want to be tracked.

You can also schedule reports to be sent by e-mail, import data from web server logs and use the API for accessing reports and administrative functions, and Piwik also has a mobile app for accessing the analytics data. Piwik is customizable with plugins and you can integrate it with WordPress and other applications.
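As a sketch of what calling the reporting API over HTTP looks like, something along these lines could work (the host and token below are placeholders, not from the original post):

```shell
# Build a Piwik reporting API request; host and token_auth are placeholders
PIWIK_URL="https://piwik.example.com"
TOKEN="anonymous"  # replace with your real token_auth for non-public data
API_QUERY="module=API&method=VisitsSummary.get&idSite=1&period=day&date=today&format=JSON&token_auth=$TOKEN"
# Fetch the report (uncomment when pointing at a real Piwik instance):
# curl -s "$PIWIK_URL/index.php?$API_QUERY"
echo "$PIWIK_URL/index.php?$API_QUERY"
```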

Piwik’s User Interface

Piwik has a clean and simple user interface, as seen in the following screenshots (taken from the online demo).

Piwik main view

Piwik visitors overview

Setting up Piwik

Setting up Piwik is easy and there’s good documentation available for running Piwik web analytics. All you need is a web server like Nginx, PHP 5.5 and MySQL or MariaDB. You can set it up manually, but the easiest way to get started is to use the provided Docker image and docker-compose. The docker-compose file sets up four containers (MySQL, Piwik, Nginx and Cron) which you can start with a single compose command. The Piwik image is available from the official docker-library.
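As a simplified sketch of such a compose file with just the database and the Piwik container (image tags, passwords and paths here are illustrative, not copied from the official setup):

```yaml
# docker-compose.yml sketch; values are placeholders
db:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=secret
    - MYSQL_DATABASE=piwik
  volumes:
    - ./mysql:/var/lib/mysql

app:
  image: piwik
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./config:/var/www/html/config
```

After which a `docker-compose up -d` brings the stack up.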

An alternative is to build your own Docker image for Piwik and related services. In my opinion it makes sense to have just two containers: one for the Piwik-related web stack and the other for MySQL. The Piwik container runs Piwik, Nginx and the cron script, managed with e.g. supervisor. The official image uses Debian (inherited from the PHP base image), but Piwik also runs nicely on Alpine Linux. One thing to tinker with when using Docker is getting MySQL access to Piwik’s assets for LOAD DATA INFILE, which greatly speeds up Piwik’s archiving process.

If you’re setting up Piwik manually you can watch a video of the installation and, after that, a video of configuring the settings. Once you’re done with the 5-minute installation you get a JavaScript tag which you add to the bottom of each page of your website. If you’re using React there’s a Piwik analytics component for React Router. Piwik will then record the activity across your website into your database.
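The tag looks roughly like the standard Piwik snippet below (the host and site id are placeholders; copy the exact tag from your own Piwik’s admin pages):

```html
<script type="text/javascript">
  var _paq = _paq || [];
  _paq.push(['trackPageView']);
  _paq.push(['enableLinkTracking']);
  (function() {
    // Host and site id are placeholders for your own Piwik instance
    var u = '//piwik.example.com/';
    _paq.push(['setTrackerUrl', u + 'piwik.php']);
    _paq.push(['setSiteId', 1]);
    var d = document, g = d.createElement('script'),
        s = d.getElementsByTagName('script')[0];
    g.type = 'text/javascript'; g.async = true; g.defer = true;
    g.src = u + 'piwik.js'; s.parentNode.insertBefore(g, s);
  })();
</script>
```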

And that’s about all there is to getting started with Piwik: a simple setup with Docker or done manually, adding the JavaScript tag, configuring some options if needed and then just waiting for the data from your visitors.


Piwik is a good and feature-rich web analytics application. Setting it up isn’t as straightforward as using a hosted service like Google Analytics, but that’s how self-hosted services always are. If you need web analytics, want to keep control of your own data and don’t mind hosting it yourself and paying for the server, then Piwik is a good choice.

Weekly notes 9

Summer is here and the mountain biking trails are calling, but keeping up with what happens in the field never stops. This week Apple held its worldwide developers conference, which filled up social media although it didn’t present anything remarkable. In other news there was a good collection of slides for Java developers, an ebook on DevOps, and HyperDev looks interesting for quickly banging out JavaScript.

Weekly notes, issue 9, 17.6.2016

Java: stay updated, reactive and in the cloud

13 Decks Java developers must see to stay updated
Selection of nice slideshows for Java developers. Best practices, microservices, debugging, Elasticsearch, SQL.

Java SE 8 best practices
Java 8 best practices by Stephen Colebourne is a good read. The slides cover all the basic uses, such as lambdas, exceptions, streams and interfaces. (from the “13 Decks Java developers” post)

Microservices + Oracle: A Bright Future
Good slides on what microservices are. Considerations, prerequisites, patterns, technologies and Oracle’s plans. (from the “13 Decks Java developers” post)

Notes on Reactive Programming, Part I: The Reactive Landscape and Part II: Writing Some Code
A solid intro to reactive programming. And no, it’s no coincidence that this is first. A reactive system is an entirely different beast, and such a good fit for a small set of scenarios. (from Java Web Weekly, Issue 128)

Netflix OSS, Spring Cloud, or Kubernetes? How About All of Them!
The Netflix ecosystem of tools is based on practical usage at scale, so it’s always super useful to go deep into understanding their tools. (from Java Web Weekly, Issue 128)

Takeouts from WWDC 2016

Digging into the dev documentation for APFS, Apple’s new file system

Interesting low-level stuff in macOS Sierra. APFS takes over from HFS+, has native encryption, snapshots (Time Machine done right) and is case-sensitive. The Hacker News comments are worth reading.

The 13 biggest announcements from Apple WWDC 2016
WWDC 2016 was about software and incremental changes. Siri is opening up to app developers, iOS is growing up, iOS gets Apple TV remote app and Apple introduces single sign-on system.

Continuous learning

DevOpsSec: Securing Software through Continuous Delivery
The free DevOpsSec ebook is worth reading if you’re interested in securing software through continuous delivery. It uses case studies from Etsy, Netflix, and the London Multi-Asset Exchange to illustrate the steps leading organizations have taken to secure their DevOps processes.

Microservice Pitfalls & AntiPatterns, Part 1
An anti-pattern is just like a pattern, except that instead of a solution it gives something that looks superficially like a solution but isn’t one. A pitfall is something that was never a good idea, even from the start. (from The Microservice Weekly #31)

Tools of the trade

Introducing HyperDev
HyperDev looks to be an interesting new product from Fog Creek Software (known for e.g. Trello). It’s a developer playground for building full-stack web apps fast. “The fastest way to bang out JavaScript code on Node.js and get it running on the internet.” as Joel Spolsky describes it.

V8, modern JavaScript, and beyond – Google I/O 2016
Debugging Node.js apps with Chrome Developer Tools will soon be enabled by the upcoming v8_inspector support.

Something different

Why do we have allergies?
Allergies such as peanut allergy and hay fever make millions of us miserable, but scientists aren’t even sure why they exist.

Docker containers and using Alpine Linux for minimal base images

After using Docker for a while, you quickly realize that you spend a lot of time downloading and distributing images. This is not necessarily a bad thing for everyone, but those who scale their infrastructure are required to store a copy of every image that’s running on each Docker host. One solution to make your images lean is to use Alpine Linux, a security-oriented, lightweight Linux distribution.

Lately I’ve been working with our Docker images for Java and Node.js microservices, and when your stack consists of over twenty services, one thing to consider is how the images are built and which distributions to use. Building images upon Debian-based distributions like Ubuntu works nicely, but it brings in packages and services which you don’t need. That’s why developers are aiming to create the thinnest, most usable image possible, either by stripping down conventional distributions or by using minimal distributions like Alpine Linux.

Choosing your Linux distribution

What’s a good choice of Linux distribution to be used with Docker containers? There was a good discussion on Hacker News about small Docker images, whose comment section had good points to consider when choosing a container operating system.

For some, size is a tiny concern, and far more important concerns are, for example:

  • All the packages in the base system are well maintained and updated with security fixes.
  • It’s still maintained a few years from now.
  • It handles all the special corner cases with Docker.

In the end the choice depends on your needs and how you want to run your services. Some like to use the quite large Phusion Ubuntu base image, which is modified for Docker-friendliness, whereas others like to keep things simple and minimal with Alpine Linux.

Divide and conquer?

One question to ask yourself is: do you need a full operating system? If you dump an OS into a container you are treating it like a lightweight virtual machine, and that might be fine in some cases. If, however, you restrict it to exactly what you need and its runtime dependencies, plus absolutely nothing more, then suddenly it’s something else entirely: it’s process isolation, or better yet, portable process isolation.

Another thing to think about is whether you should combine multiple processes in a single container. For example, for logging you shouldn’t run a logger daemon or logrotate inside the container; you probably want to store the logs externally, in a volume or a mounted host directory. An SSH server in a container could be useful for diagnosing problems in production, but if you have to log in to a container running in production you’re doing something wrong (and there’s docker exec anyway). And cron should run in a separate container that has access to exactly the things your cronjob needs.

There are a couple of different schools of thought about how to use Docker containers: as a way to distribute and run a single process, or as a lighter form of virtual machine. It depends on what you’re doing with Docker and how you manage your containers and applications. It can make sense to combine some services, but the purist approach is to isolate every single process and explicitly tell it how to communicate with other processes. That is sane from many perspectives: security, maintainability, flexibility and speed. But again, where you draw the line is almost always a personal, aesthetic choice. In my opinion it could make sense to combine nginx and php-fpm in a single container.
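If you do go the combined nginx + php-fpm route, a minimal supervisord configuration for such a container might look like this (program commands are illustrative and depend on your images):

```ini
; supervisord.conf sketch for a combined nginx + php-fpm container
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm -F

[program:nginx]
command=nginx -g 'daemon off;'
```

Both processes are kept in the foreground so supervisord can manage and restart them.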

Minimal approach

Lately there has been some movement towards minimal distributions like Alpine Linux, which has got a lot of positive attention from the Docker community. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox, using a grsecurity/PaX-patched Linux kernel and OpenRC as its init system. In its x86_64 ISO flavor it weighs in at 82 MB, and a container requires no more than 8 MB. Alpine provides a wealth of packages via its apk package manager. As it uses musl, you may run into some issues with environments expecting glibc-like behaviour (for example Kubernetes, or when compiling some npm modules), but for most use cases it should work just fine. And with minimal base images it’s more convenient to divide your processes into many small containers.

Some advantages for using Alpine Linux are:

  • The speed at which the image is downloaded, installed and running on your Docker host
  • Improved security, as the smaller footprint also makes the attack surface smaller
  • Faster migration between hosts, which is especially helpful in high availability and disaster recovery configurations
  • Your system admin won’t complain as much, as you use less disk space

For my purposes I need to run Spring Boot and Node.js applications in Docker containers, and they were easily switched from Debian-based images to Alpine Linux without any changes. There are official Docker images for OpenJDK/OpenJRE on Alpine and Dockerfiles for running Oracle Java on Alpine. Although there isn’t an official Node.js image built on Alpine, you can easily make your own Dockerfile or use community-provided files. While the official Java Docker image is 642 MB, Alpine Linux with OpenJDK 8 is 150 MB and with Oracle JDK 382 MB (which can be stripped down to 172 MB). The official Node.js image is 651 MB (or 211 MB for the slim variant), whereas with Alpine Linux it’s 36 MB. That’s quite a reduction in size.

Examples of using minimal container based on Alpine Linux:

For Node.js:

FROM alpine:edge
# NODE_ALPINE_VERSION pins the nodejs package version; pass it with --build-arg
ARG NODE_ALPINE_VERSION
RUN apk update && apk upgrade \
    && apk add nodejs="$NODE_ALPINE_VERSION"

For Java applications with OpenJDK:

FROM alpine:edge
RUN { \
      echo '#!/bin/sh'; \
      echo 'set -e'; \
      echo; \
      echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
   } > /usr/local/bin/docker-java-home \
   && chmod +x /usr/local/bin/docker-java-home
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
# JAVA_ALPINE_VERSION pins the openjdk8 package version; pass it with --build-arg
ARG JAVA_ALPINE_VERSION
RUN set -x \
    && apk update && apk upgrade \
    && apk add --no-cache bash \
    && apk add --no-cache \
      openjdk8="$JAVA_ALPINE_VERSION" \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

If you want to read more about running services on Alpine Linux, check Atlassian’s Nicola Paolucci’s nice article about experiences of running Java apps on Alpine.

Go small or go home?

So, should you use Alpine Linux for running your applications on Docker? As the official Docker images are also moving to Alpine Linux, it seems to make perfect sense from both a performance and a security perspective to switch to Alpine. And if you don’t want to take the leap from Debian or Ubuntu, or want support from the downstream vendor, you should consider stripping unneeded files from the image to make it smaller.

Weekly notes 8

Spring has been quite busy at work, but Summer is just around the corner and that means either holidays or having some time to learn new things and see how things could be made better. My weekly notes have turned into monthly notes, but that’s how things sometimes work out. Back to the issue, which covers continuous learning, best practices in development, a look into the building blocks of Netflix’s stack and how to get started with the ELK stack. And for a Summer project there’s Stanford’s Swift and iOS 9 course. Having written my iOS app in Swift, it seems to be a nice language.

Weekly notes, issue 8, 19.5.2016

Learning new things

Developing iOS 9 Apps With Swift from Stanford
The Stanford iOS course has been updated for Swift and iOS 9 and is a good resource for learning iOS and Swift, or just to refresh yourself on best practices when developing for the platform. (Indie iOS focus weekly, issue 66)

Keep on learning and keep it simple

The single biggest mistake programmers make every day
Nice writeup of basic principles in programming. In short: Keep It Stupid Simple. Make it work, make it right, make it fast. Do One Thing.

Being A Developer After 40
Software development is always changing, which this article describes nicely, giving good advice to the young at heart on how to reach the glorious age of 40 as a happy software developer. tl;dr: Forget the hype, Choose your galaxy wisely, Learn about software history, Keep on learning, Teach, Workplaces suck, Know your worth, Send the elevator down, LLVM, Follow your gut, APIs are king, Fight complexity.

5 Tips To Improve Your JS with ES6
A well recorded hour long remote talk covering not only some handy ES6 tips, but how to work with ES6 generally and some of the tools available. (from JavaScript Weekly, issue 274)

Microservices, best practices and Java

Microservices are about applying a group of Best Practices
Moving an existing codebase to a microservice architecture is no small feat. And that’s not even taking into account the non-technical challenges. We definitely need more nuanced strategies based on actual production experience with microservices to help drive these architectural decisions. (from Java Web Weekly 123)

jDays 2016: Java EE Microservices Platforms
A lot of people preach that you can’t build microservices with Java EE, but Steve Millidge’s talk about Java EE Microservices Platforms tells us that Payara Micro and WildFly Swarm are fast, have a small memory footprint and don’t require any code changes to port an application from one to the other. (from Java Web Weekly 18/16)

The Netflix Stack: Part 1, Part 2 and Part 3
Microservices architecture is what you should nowadays do in software development, but the question is how? The Netflix Stack article series covers some open source libraries you can use to build your architecture. Part 1 covers Eureka for service discovery, Part 2 is about Hystrix, a latency and fault tolerance library, and Part 3 is about creating REST clients for all of your services. The blog posts are an overview of what you can find in the accompanying repository.

Java app monitoring with ELK: part 1: Logstash and Logback and part 2: ElasticSearch
These blog posts tell you about the ELK stack (Elasticsearch, Logstash, Kibana), which is a useful tool for log visualization and analysis. (from Java Web Weekly 116)


10 SQL tricks that you didn’t think were possible
Lukas Eder shows you 10 SQL tricks that many of you might not have thought were possible. The article is a summary of his extremely fast-paced, ridiculously childish-humoured talk. “SQL is the original microservice”.

Tools of the trade

“A simple starting point for a better Bash user experience out of the box.” These settings do make Bash easier and more useful. (from Weekend Reading)

Stranger Danger: Addressing the Security Risk in NPM Dependencies
Presentation from the O’Reilly Fluent Conference by Snyk co-founders which covers recently found exploit, and shows you how to use Snyk in your development workflow.

Something different

An interesting JavaScript simulation of how the web looks to people with dyslexia. In the comments a person with dyslexia says that it’s actually easier to read when the text shifts. So, would a dyslexia mode be good for website UX? :) (from Weekend Reading)

Avoiding JVM delays caused by random number generation

The library used for random number generation in Oracle’s JVM relies on /dev/random by default on UNIX platforms. This can potentially block the WebLogic Server process, because on some operating systems /dev/random waits for a certain amount of “noise” to be generated on the host machine before returning a result.

Although /dev/random is more secure, it’s recommended to use /dev/urandom if the default JVM configuration delays WebLogic Server startup. To determine if your operating system exhibits this behaviour, try displaying a portion of the file from a shell prompt: head -n 1 /dev/random

If the command returns immediately, you can use /dev/random as the default generator for JVM. If the command does not return immediately, use these steps to configure the JVM to use /dev/urandom:

  1. Open the $JAVA_HOME/jre/lib/security/java.security file in a text editor.
  2. Change the line “securerandom.source=file:/dev/random” to read: securerandom.source=file:/dev/./urandom
  3. Save your change and exit the text editor.

And because of a bug in the JDK, the value file:/dev/urandom is special-cased and ignored, so you have to write it as file:/dev/./urandom for the setting to take effect.
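The edit above can also be scripted, for example when baking the change into an image. A minimal sketch, assuming a JDK 8 directory layout (the helper name is made up):

```shell
# Switch securerandom.source to /dev/./urandom in a java.security file.
# The "/./" works around the JDK special-casing of /dev/urandom.
use_urandom() {
  sed -i 's|^securerandom.source=file:/dev/random$|securerandom.source=file:/dev/./urandom|' "$1"
}

# Typical JDK 8 location; adjust JAVA_HOME for your installation:
# use_urandom "$JAVA_HOME/jre/lib/security/java.security"
```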

You can also set the system property “java.security.egd”, which overrides the securerandom.source setting.
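For WebLogic this is typically done via the Java options used at startup, along these lines (the variable name follows WebLogic’s startup scripts; treat this as a sketch):

```shell
# Append the entropy override to the JVM options used at startup
JAVA_OPTIONS="$JAVA_OPTIONS -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTIONS
```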

Weekly notes 7

Easter and a couple of days of free time are good for taking a break from the routines or finally having some time to develop your personal pet projects. At least my Highkara news reader for iOS needs some UI tests for screenshots, and maybe I’ll get to finish my imgur app for tvOS. But before that, here are the weekly notes.

This week we get an overview of OWASP projects, see how Stack Overflow is built, learn to design for the Apple TV and get to run WebLogic in Docker containers. Finally we discover how Spotify Discover Weekly playlists work.

Issue 7, 2016-03-24


Quick developer’s guide to OWASP projects
Interesting poster-type developer’s guide to OWASP projects. Learn how to secure your web apps against common web vulnerabilities.

How it’s built

Stack Overflow: The Architecture – 2016 Edition
If you’re wondering how Stack Overflow is built and what load it handles, check this article. Interesting: it runs on Windows using IIS, ASP.NET, .NET and SQL Server, supported by CentOS running Redis and Elasticsearch.

Why I Left Gulp and Grunt for npm Scripts
Cory House explains how Gulp and Grunt are unnecessary abstractions, whereas npm scripts are plenty powerful and often easier to live with. They’re easier to debug as there’s no extra layer of abstraction, there’s no dependence on plugin authors to update, and the original tools are better and more clearly documented. (from Web Design Weekly #219)

iOS and tvOS development

An in-app debugging and exploration tool for iOS
An excellent tool for iOS developers which helps you, for example, to simulate 3D Touch in the Simulator. Also, in Xcode 7.3 you can now simulate 3D Touch without external tools if your trackpad has Force Touch.

Designing for the Apple TV
Michael Flarup writes some tips for getting design right when working with the Apple TV. He covers all of the basics but also some interesting points like making sure you meet the expectations of a TV based platform in terms of displaying and taking advantage of video based content. (from iOS dev Weekly #239)

This is pttrns for tvOS. There’s not a huge amount of data in it yet, but what’s there is worth a look. (from iOS dev Weekly #240)

Enterprise Java

WebLogic on Docker Containers Series: Part 1, Part 2 and Part 3
If you are using WebLogic as your application server, you should have a look at Bruno Borges’ series about running WebLogic on Docker. The first post gets you started and shows how to create a basic Docker image with WebLogic and one with a configured WebLogic domain. The second post takes a more detailed look at the creation of the images, and the third one focuses on the domain configuration. (from Java Weekly 8/16)

Something different

I Documented Two Years of Travel By Painting In My Moleskine Notebook
Lovely hand-crafted art collection created by a traveler during her visits to different places around the world. An alternative to taking thousands of photos that no one will look at afterwards anyway and a beautiful, more emotional representation of lovely places. (from WDRL 126)

How Spotify Discover Weekly playlists work? and Recommending music on Spotify with deep learning
If you’re wondering how Spotify finds the tracks to your Discover Weekly list, read these two articles.

Container orchestration with CoreOS at Devops Finland meetup

Development and Operations, DevOps, is one of the important things when going beyond agile: it boosts the agile way of working and can be seen as an incremental way to improve development practices. And what better place to improve than meetups, learning how others are doing things. This time the DevOps Finland meetup was about container orchestration with CoreOS, and it was held at Oppex’s lounge in central Helsinki. The talks gave a nice dive into CoreOS, covering both beginner and seasoned expert points of view. Here are my short notes on the presentations.

CoreOS intro for beginners, by beginners

The first talk was practically an interactive CoreOS tutorial by Antti Vähäkotamäki and Frans Ojala. Their 99 slides showed step by step how to get started with CoreOS on Vagrant and what difficulties they experienced. Nothing special.

CoreOS in production, lessons learned

The more interesting talk, “CoreOS in production, lessons learned” by Vlad Bondarenko from Oppex, was about their software stack and how they’re running it. In short, they’re running CoreOS on bare metal with Nginx as a reverse proxy, Node.js for the UI and API, and RethinkDB and SolrCloud clusters. Deployment is done with Ansible and makefiles, and Ship.it is used for Node.js. Service discovery is DNS-based with a docker-etcd-registrator component, and they’ve also written their own DNS server. For Node.js configuration management with etcd they’ve made the etcd-simple-config component. With Docker they use standard images with volumes and inject their own data into the container.

CoreOS seemed to work quite well for them, with easy cluster management, running multiple versions of 3rd-party and their own software, and zero-downtime updates and rollbacks. But there were also some cons, like maturity (bugs) and scripting systemd.

Kontena, CoreOS war stories

The last talk was about CoreOS war stories at Kontena by Jari Kolehmainen. The slides tell the story of how they use CoreOS in Kontena and what the pain points are. In short, it comes down to configuration management and issues related to etcd.

For bootstrapping they use CloudInit, which is the de facto way to initialize cloud instances and is integrated into CoreOS. The hard parts with etcd are discovery, security (TLS certificates), choosing between central services and workers, and maintenance (you don’t do it). Now they run etcd inside a container, bind it only to localhost and the overlay network (Weave Net), and the master coordinates etcd discovery. For automatic updates they use a best-effort strategy: if etcd is running, locksmith coordinates the reboots; otherwise they just reboot when an update is available.

The presentation’s summary was that the “OS” part is currently the best option for containers and etcd is a must, but a little hard to handle. For the orchestrator they suggest picking one which hides all the complexity. And automate all the things.

Problems with installing Oracle DB 12c EE, ORA-12547: TNS: lost contact

For development purposes I wanted to install Oracle Database 12c Enterprise Edition into a Vagrant box so that I could play with it. It should’ve gone quite straightforwardly, but in my case things got complicated, although I had Oracle Linux and the prerequisites fulfilled. Everything went fine until it was time to run the DBCA and create the database.

The DBCA gave an “ORA-12547: TNS: lost contact” error, which is quite common. Google gave me a couple of resources to debug the issue. The Oracle DBA Blog explained common issues which cause ORA-12547 and solutions to fix them.

One of the suggested solutions was to check that the following two files are not 0 bytes:

ls -lt $ORACLE_HOME/bin/oracle
ls -lt $ORACLE_HOME/rdbms/lib/config.o
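A small sketch for checking both files at once (the paths follow the post; the helper name is made up):

```shell
# Flag zero-byte or missing files that indicate a broken relink
check_nonempty() {
  if [ -s "$1" ]; then
    echo "OK: $1"
  else
    echo "BROKEN: $1"
  fi
}

check_nonempty "$ORACLE_HOME/bin/oracle"
check_nonempty "$ORACLE_HOME/rdbms/lib/config.o"
```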

And true, my oracle binary was 0 bytes:

-rwsr-s--x 1 oracle oinstall 0 Jul  7  2014 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

To fix the binary you need to relink it, and to do that, first rename the following file:

$ cd $ORACLE_HOME/rdbms/lib
$ mv config.o config.o.bad

Then shut down the database and listener, and run “relink all”:

$ relink all

If only things were that easy. Unfortunately relinking ended in an error:

[oracle@oradb12c lib]$ relink all
/u01/app/oracle/product/12.1.0/dbhome_1/bin/relink: line 168: 13794 Segmentation fault      $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/install/modmakedeps.pl $ORACLE_HOME $ORACLE_HOME/inventory/make/makeorder.xml > $CURR_MAKEORDER
writing relink log to: /u01/app/oracle/product/12.1.0/dbhome_1/install/relink.log

After googling some more I found a similar problem and solution: relink the executables by running make install.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk install
cd $ORACLE_HOME/network/lib
make -f ins_net_server.mk install

If needed you can also relink other executables:

make -kf ins_sqlplus.mk install (in $ORACLE_HOME/sqlplus/lib)
make -kf ins_reports60w.mk install (on CCMgr server)
make -kf ins_forms60w.install (on Forms/Web server)

But of course it didn’t work out of the box and failed with an error:

/bin/ld: cannot find -ljavavm12
collect2: error: ld returned 1 exit status
make: *** [/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/lib/oracle] Error 1

The solution is to copy libjavavm12.a under $ORACLE_HOME/lib as explained:

cp $ORACLE_HOME/javavm/jdk/jdk6/lib/libjavavm12.a $ORACLE_HOME/lib/

Run the make install commands from above again and you should have a working oracle binary:

-rwsr-s--x 1 oracle oinstall 323649826 Feb 17 16:27 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

After this I ran the relink again, which worked, and the installation of the database also went fine.

relink all

Start the listener:

lsnrctl start LISTENER

Create the database:

dbca -silent -responseFile $ORACLE_BASE/installation/dbca.rsp

The problems I encountered while installing Oracle Database 12c Enterprise Edition on Oracle Linux 7, albeit in Vagrant and with Ansible, were surprising, as you would think that on a certified platform it should just work. If I had been using CentOS or Ubuntu it would’ve been a totally different issue.

You can see the Ansible tasks I did to get Oracle DB 12c EE installed on Oracle Linux 7 in my vagrant-experiments GitHub repo.

Oracle DB 12c EE Ansible Tasks

Using Let’s Encrypt SSL certificates on Centos 6

Let's Encrypt all the things

Let’s Encrypt is now in public beta, meaning you can get valid, trusted SSL certificates for your domains for free. Free SSL certificates for everyone! As Let’s Encrypt is relatively easy to set up, there’s now no reason not to use HTTPS for your sites. The needed steps are described in the documentation, and here’s a short guide on how to set up Let’s Encrypt on CentOS 6.x and automate the SSL certificate renewal.

Let’s Encrypt installation

The Let’s Encrypt client is a fully-featured, extensible client for the Let’s Encrypt CA that can automate the tasks of obtaining certificates and configuring web servers to use them. The installation is simple, but in my case on CentOS 6.x I first needed to update to Python 2.7, as Let’s Encrypt supports only Python 2.7+.

Installing Python 2.7 in Centos 6.x

# Install Epel Repository
yum install epel-release
# Install IUS Repository
rpm -ivh https://rhel6.iuscommunity.org/ius-release.rpm
# Install Python 2.7 and Git
yum --enablerepo=ius install python27 python27-devel python27-pip python27-setuptools python27-virtualenv -y

Setting up Let’s Encrypt

Install Git if you don’t have it yet.

yum install git

If letsencrypt is packaged for your operating system, you can install it from there. Otherwise you can use the letsencrypt-auto wrapper script, which obtains some dependencies from your operating system and puts others in a Python virtual environment:

# Get letsencrypt
git clone https://github.com/letsencrypt/letsencrypt
# See help
./letsencrypt/letsencrypt-auto --help

Running the client

You can either just run letsencrypt-auto or letsencrypt and let the client guide you through the process of obtaining and installing certs interactively, or you can tell it exactly what you want it to do from the command line.

For example, to obtain a cert for your domain using the Apache plugin to both obtain and install the certs, you could do this:

./letsencrypt-auto --apache -d thing.com -d www.thing.com -d otherthing.net

(The first time you run the command, it will make an account, and ask for an email and agreement to the Let’s Encrypt Subscriber Agreement; you can automate those with --email and --agree-tos)

Although you can use the Apache plugin to obtain and install the certs, it didn’t work for me. I got an error: “The apache plugin is not working; there may be problems with your existing configuration.” This seems to be an issue with Apache 2.2, and until it’s fixed you can use the webroot authentication method as explained in the documentation.

./letsencrypt-auto certonly --webroot -w /var/www/example/ -d example.com

The webroot plugin works by creating a temporary file for each of your requested domains in ${webroot-path}/.well-known/acme-challenge. Then the Let’s Encrypt validation server makes HTTP requests to validate that the DNS for each requested domain resolves to the server running letsencrypt. Note that to use the webroot plugin, your server must be configured to serve files from hidden directories.
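On Apache 2.2 one way to make sure the hidden challenge directory is served is an explicit Alias and Directory block. A minimal sketch, with illustrative paths (this is my own configuration idea, not from the Let’s Encrypt docs):

```apache
# Serve the ACME challenge directory explicitly (Apache 2.2 access syntax)
Alias /.well-known/acme-challenge /var/www/example/.well-known/acme-challenge
<Directory "/var/www/example/.well-known/acme-challenge">
    Order allow,deny
    Allow from all
</Directory>
```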

Now your certificate and chain have been saved in the Let’s Encrypt configuration directory at “/etc/letsencrypt”, and “/etc/letsencrypt/live/” contains symlinks to the latest certificates. Making regular backups of this folder is ideal.

All we have to do now is set it up in Apache.

Configure Apache to use Let’s Encrypt certs

In the Let’s Encrypt configuration directory at “/etc/letsencrypt/live/” the .pem files are as follows (from the Let’s Encrypt documentation):

  • privkey.pem: Private key for the certificate.
    • This must be kept secret at all times! Never share it with anyone, including Let’s Encrypt developers. You cannot put it into a safe, however – your server still needs to access this file in order for SSL/TLS to work.
    • This is what Apache needs for SSLCertificateKeyFile
  • cert.pem: Server certificate only.
    • This is what Apache needs for SSLCertificateFile.
  • chain.pem: All certificates that need to be served by the browser excluding server certificate, i.e. root and intermediate certificates only.
    • This is what Apache needs for SSLCertificateChainFile.
  • fullchain.pem: All certificates, including server certificate. This is concatenation of chain.pem and cert.pem.

Now that we know which file is which we can configure our VirtualHost to use SSL with our new certs. Change the following lines in your Apache’s virtualhost’s SSL configuration:

SSLCertificateFile /etc/letsencrypt/live/<your-domain>/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/<your-domain>/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/<your-domain>/chain.pem
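For context, a stripped-down VirtualHost using those files could look like the following sketch. The domain and paths are illustrative, and the rest of the site configuration (DocumentRoot and so on) is omitted:

```apache
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
</VirtualHost>
```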

Finally, restart Apache:

service httpd restart

You can test that your SSL is working with SSL Labs.

Automate updating Let’s Encrypt certs

As you surely noticed, the Let’s Encrypt CA issues short-lived certificates (90 days), so you have to renew them at least once every 3 months. A nice way to force sysadmins to automate the process.

To obtain a new version of the certificate you can simply run Let’s Encrypt again, but doing that manually is not feasible. Let’s Encrypt is working hard on automating the renewal process, but until then we have to do it ourselves.

Fortunately we don’t need to invent our own scripts, as there’s an excellent article about automating Let’s Encrypt with a script suitable for crontab.

Get the autole.sh script from GitHub, which automates tasks like:

  • Check the expiry date of the certificate and renew it when the remaining days fall below a given value
  • Check that the directory for the challenge is well mapped
  • Alert the admin if it’s not possible to renew the certificate
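The expiry check at the heart of such a script can be done with openssl. Here’s a minimal sketch of the idea; the function name, certificate path and 30-day threshold are my own illustrations, not taken from autole.sh:

```shell
# Print the number of whole days until the certificate in $1 expires
days_until_expiry() {
  local end
  end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# Example: renew when fewer than 30 days remain (path illustrative)
# [ "$(days_until_expiry /etc/letsencrypt/live/example.com/cert.pem)" -lt 30 ] && ./autole.sh example.com
```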

Now you can renew certain domain’s certificates with

./autole.sh www.mydomain.com

And to renew all your certificates use

./autole.sh --renew-all

Now you can add this to the crontab to run weekly, and your certificates will be renewed automatically. This cron job will execute the command every Monday at 08:30.

30 8 * * 1 /usr/local/sbin/autole.sh <your-domain> >> /var/log/autole.log

Now, before I switch my WordPress over to HTTPS, I have to do some find & replace in the database and fix the URLs of the images to be protocol-relative.
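That find & replace can be sketched with sed on a database dump. The function name and domain below are illustrative, and the mysqldump/restore steps are omitted:

```shell
# Rewrite absolute http:// image URLs under wp-content to protocol-relative
# form, in place, in a database dump file
make_protocol_relative() {
  local dump="$1" domain="$2"
  sed -i "s|http://${domain}/wp-content|//${domain}/wp-content|g" "$dump"
}
```

Note that naive string replacement can break serialized PHP data in wp_options, where string lengths are stored explicitly, so a tool that understands serialization (such as WP-CLI’s search-replace) is the safer choice.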

Weekly notes 6

This year has started slowly and the weekly notes have turned into monthly notes. This time they tell us, among other things, how to put Spring Boot in Docker, which Java EE 7 features are useful, what it takes to launch your mobile app, how to get better with Node.js and why smaller is better. And finally we have a yoga routine to keep our body in shape.

Issue 6, 2016-01-27

Java is strong with this one

Java EE 7 At A Glance and Top 10 Java EE 7 Backend Features
A rundown of some of the most useful Java EE features – most of which look quite handy.

New year’s Spring Boot tricks in a container
Read how you can combine Spring Boot’s hot restarting and running application in a Docker container. Of course you could just run Spring Boot from the IDE and expose the MongoDB container port for the application.

Nashorn: Run JavaScript on the JVM
Nashorn is a high-performance JavaScript runtime written in Java for the JVM. It allows developers to embed JavaScript code inside their Java applications and even use Java classes and methods from their JavaScript code. But why would you want to do that?

Mobile app development is fun?

Everything you need to launch your app
Good checklist to go through during app development and when you’re going to launch your app. Launching an app isn’t as straightforward as you would think. (from Indie iOS Focus Weekly 48)

Everything you need to know about app screenshots
And with everything it really means that. Making screenshots of your app isn’t as easy as you would think. (from Indie iOS Focus Weekly 46)

Creating perfect App Store Screenshots of your iOS App
More about app screenshots, this time doing it “the right way” for all device types and languages. It isn’t easy this time either, but it’s automated. You just need to use snapshot, frameit and UI Tests.

Why you shouldn’t bother creating a mobile app
Post-mortem of Birdly, a receipt management app in the business to business market. Gives insight and lessons to learn about the App Store. Even though the app had good use case the users didn’t really need it. (from Indie iOS Focus Weekly 48)


Find & fix known vulnerabilities in Node.js dependencies
Snyk looks to be quite a crafty tool to find & fix known vulnerabilities in Node.js dependencies. Integrate Snyk into your CI and monitor your applications for newly disclosed vulnerabilities.

TL;DR; Simplified man pages
Simplified man pages for when you just need to get shit done. Finally! You can use different clients for it and install one from e.g. npm install -g tldr.

pre-commit hooks
Some out-of-the-box hooks for pre-commit. See also: pre-commit.

Getting better is good?

Reduce Your bundle.js File Size By Doing This One Thing
Simple! Use relative file paths. The article looks at two examples to show the difference.

The Website Obesity Crisis
Keynote from Web Directions 2015: The Website Obesity Crisis. Beautiful websites come in all sizes and page weights, but mostly-text sites are growing bigger with every passing year when there’s no reason for that. There’s also a video.

How to Become a Better Node.js Developer in 2016
Tips and best practices not just for development but how to operate Node.js infrastructures, how you should do your day-to-day development and other useful pieces of advice. (from Twitter)
TL;DR: Use ES2015, follow callback conventions and async patterns, take care with error handling, use JavaScript standard style, follow the Twelve-Factor application rules, monitor your applications, use a build system, update dependencies weekly and keep up.

Something different

15-minute yoga routine to enhance balance and agility
See how yoga can help you enhance your balance and agility, including a 15-minute video that demonstrates these principles. This is targeted more at mountain bike riders than developers, but better agility and balance doesn’t hurt anyone :)

The 100 best photographs ever taken without photoshop
Nature and humankind are both great artists, and when they join forces, amazing masterpieces can be produced.