Nebula Tech Thursday – Beer & DevOps 2.3.2017

Agile software development in the cloud is nowadays more the rule than the exception, and that’s also what this year’s first Nebula Tech Thursday’s topics were about. The event was held on 2.3.2017 at Woolshed Bar & Kitchen, alongside good food and beer.

The event consisted of talks about “Building a Full DevOps Pipeline with Open Source Tools” by Oleg Mironov from Eficode and “Cloud Analytics – Providing Insight on Application Health and Performance” by Markus Vuorinen & Jarkko Stråhle from Nebula. The presentations were a bit high level and directed more at business people than at developers, but there was some new information about how different tools are used in practice.

Overall it was a nice event to hear how things can be done and to talk with people. Here are my short notes from the event.

Nebula Tech Day

Cloud Analytics – Providing Insight on Application Health and Performance

Markus Vuorinen & Jarkko Stråhle from Nebula talked about how to gather data into Elasticsearch, make it accessible, visualize it with Kibana and take action based on it. The ELK stack (Elasticsearch – Logstash – Kibana) is commonly used and the presentation showed nicely how to utilize it in the cloud.
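
The stack is also easy to poke at by hand. As a quick, hedged example (the index pattern and field are my assumptions; the endpoints are standard Elasticsearch REST APIs), you can check what Logstash has shipped into Elasticsearch with plain curl:

$ curl -s 'http://localhost:9200/_cat/indices?v'
$ curl -s 'http://localhost:9200/logstash-*/_search?q=status:500&size=5'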

Technical setup
Data flow to Elastic
To visualization and alerts
Kibana main view
Kibana and response times

Building a Full DevOps Pipeline with Open Source Tools

Oleg Mironov from Eficode showed the building blocks of how to build a DevOps pipeline with open source tools and demoed it. Nothing really special if you don’t count Rancher and Cattle. Just put your code in Git, use Ansible, run Jenkins jobs, build Docker images, use Robot Framework for testing, push artifacts to Artifactory and deploy with Rancher.
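
As a rough sketch of what those stages boil down to (repository and registry URLs are made up, and in reality each step would be a Jenkins job rather than one script):

# Hypothetical outline of the pipeline stages
git clone https://git.example.com/myapp.git && cd myapp
docker build -t registry.example.com/myapp:1.0.0 .   # build the Docker image
robot --outputdir results tests/                     # run Robot Framework acceptance tests
docker push registry.example.com/myapp:1.0.0         # push to e.g. Artifactory's Docker registry
rancher-compose up -d --upgrade                      # deploy the new version with Rancher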

Rancher overview
DevOps Pipeline

DevOps Finland Meetup goes Mobile at Zalando

Development and operations, DevOps, is in my opinion essential for getting things done in a timely manner, and it’s always good to hear how others are doing it by attending meetups. This time DevOps Finland went mobile and we heard nice presentations about continuous delivery for mobile applications, mobile testing with Appium and the Robot Framework, and an efficient mobile development cycle. Compared to developing web applications, mobile brings some extra hurdles to jump, but nothing that isn’t solvable. Here are my short notes about the meetup.

The meetup was hosted by Zalando Technology at their new office here in Helsinki. Zalando is known to many as that online store that sells shoes, clothing and other fashion items, but things don’t sell themselves, and behind the scenes they have lots of technology to keep things running. For the record, I think they said the meetup had 65 attendees out of the 100 who signed up.

Also, if mobile is your thing, there’s a new meetup group for mobile developers in Finland, which was announced at the meetup. They’re also on Twitter and Facebook.

Continuous Delivery for Mobile Applications

Rami Rantala from Zalando talked about “Continuous Delivery for Mobile Applications” and how they’re managing releases of their Fleek app, which is available for Android and iOS in the German market.

They didn’t arrive at the final setup directly; it was an iterative approach with how Git is used, code is merged and releases are done. Using Fastlane for all the tedious tasks, like generating screenshots, dealing with code signing and releasing the application, made automating things easier. An interesting note was that their build server slaves are Ansible-managed Mac Minis on Rami’s desk. They had solved the problems nicely, but testing is still difficult.
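
Fastlane packages those tedious tasks into separate tools, so much of it reduces to commands roughly like the following (a sketch of the stock fastlane tools; the actual lanes in a Fastfile are project-specific):

$ fastlane snapshot        # generate localized screenshots on simulators
$ fastlane match appstore  # sync code signing certificates and profiles via a Git repo
$ fastlane deliver         # upload metadata, screenshots and the build to the App Store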

DevOps and rollbacks don’t work together, you roll forward.

Mobile testing with Appium and the Robot Framework

Mobile testing can be done with different tools and one option is to use Robot Framework, just like for web applications. Elmeri Poikolainen from Eficode demoed how to use Appium and run Robot Framework tests on a real device. It has some limitations, and I think with native applications it could be better to use native test tools like what Xcode has to offer.
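
A setup like the one demoed is roughly reproducible as follows (a sketch based on the public Appium and AppiumLibrary documentation, not his exact demo; the suite name is hypothetical):

$ npm install -g appium                         # the Appium server (Node.js)
$ pip install robotframework-appiumlibrary     # Appium keywords for Robot Framework
$ appium &                                      # server listens on port 4723 by default
$ robot --outputdir results mobile_suite.robot  # run the tests against a connected device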

Efficient Mobile Development Cycle

The last and most fast-paced talk was by Jerry Jalava from Qvik about “Efficient Mobile Development Cycle”. He talked about different practices and tools in the development cycle, and it was a nice overview of the complexity of the process from design to done. You can, for example, run remote preview with 27 devices.

Docker containers and using Alpine Linux for minimal base images

After using Docker for a while, you quickly realize that you spend a lot of time downloading or distributing images. This is not necessarily a bad thing for some, but those who scale their infrastructure are required to store a copy of every image that’s running on each Docker host. One solution to make your images lean is to use Alpine Linux, which is a security-oriented, lightweight Linux distribution.

Lately I’ve been working with our Docker images for Java and Node.js microservices, and when our stack consists of over twenty services, one thing to consider is how we build our Docker images and what distributions to use. Building images upon Debian-based distributions like Ubuntu works nicely, but it gives you packages and services which you don’t need. That’s why developers are aiming to create the thinnest, most usable image possible, either by stripping conventional distributions or by using minimal distributions like Alpine Linux.

Choosing your Linux distribution

What’s a good choice of Linux distribution to be used with Docker containers? There was a good discussion on Hacker News about small Docker images, with good points in the comment section to consider when choosing a container operating system.

For some, size is only a minor concern, and far more important concerns are, for example:

  • All the packages in the base system are well maintained and updated with security fixes.
  • It will still be maintained a few years from now.
  • It handles all the special corner cases with Docker.

In the end the choice depends on your needs and how you want to run your services. Some like to use the quite large Phusion Ubuntu base image which is modified for Docker-friendliness, whereas others like to keep things simple and minimal with Alpine Linux.

Divide and conquer?

One question to ask yourself is: do you need a full operating system? If you dump an OS in a container, you are treating it like a lightweight virtual machine, and that might be fine in some cases. If you however restrict it to exactly what you need and its runtime dependencies, plus absolutely nothing more, then suddenly it’s something else entirely – it’s process isolation, or better yet, portable process isolation.

Another thing to think about is whether you should combine multiple processes in a single container. For example, if you care about logging, you shouldn’t run a logger daemon or logrotate in the container; you probably want to store the logs externally – in a volume or a mounted host directory. An SSH server in a container could be useful for diagnosing problems in production, but if you have to log in to a container running in production, you’re doing something wrong (and there’s docker exec anyway). And for cron, run it in a separate container and give it access to exactly the things your cronjob needs.
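
In practice those points are mostly docker run flags. A minimal sketch (image and path names are made up):

# logs go to a mounted host directory instead of a logger daemon in the container
$ docker run -d --name myapp -v /var/log/myapp:/app/logs myapp-image
# one-off diagnostics without an SSH server in the container
$ docker exec -it myapp sh
# cron lives in its own container and sees only the data the job needs, read-only
$ docker run -d --name myapp-cron -v myapp-data:/data:ro myapp-cron-image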

There are a couple of different schools of thought about how to use Docker containers: as a way to distribute and run a single process, or as a lighter form of virtual machine. It depends on what you’re doing with Docker and how you manage your containers/applications. It can make sense to combine some services, but the stricter school says you should still separate everything: isolate every single process and explicitly tell it how to communicate with the other processes. That’s sane from many perspectives: security, maintainability, flexibility and speed. But again, where you draw the line is almost always a personal, aesthetic choice. In my opinion it could make sense to combine nginx and php-fpm in a single container.

Minimal approach

Lately, there has been some movement towards minimal distributions like Alpine Linux, and it has gotten a lot of positive attention from the Docker community. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox, using a grsecurity/PaX-patched Linux kernel and OpenRC as its init system. In its x86_64 ISO flavor it weighs in at 82 MB, and a container requires no more than 8 MB. Alpine provides a wealth of packages via its apk package manager. As it uses musl, you may run into some issues with environments expecting glibc-like behaviour (for example Kubernetes or compiling some npm modules), but for most use cases it should work just fine. And with minimal base images it’s more convenient to divide your processes into many small containers.

Some advantages for using Alpine Linux are:

  • Speed with which the image is downloaded, installed and running on your Docker host
  • Improved security, as the image has a smaller footprint, which also makes the attack surface smaller
  • Faster migration between hosts, which is especially helpful in high availability and disaster recovery configurations
  • Your system admin won’t complain as much, as you use less disk space

For my purposes, I need to run Spring Boot and Node.js applications in Docker containers, and they were easily switched from Debian-based images to Alpine Linux without any changes. There are official Docker images for OpenJDK/OpenJRE on Alpine and Dockerfiles for running Oracle Java on Alpine. Although there isn’t an official Node.js image built on Alpine, you can easily make your own Dockerfile or use community-provided files. Where the official Java Docker image is 642 MB, Alpine Linux with OpenJDK 8 is 150 MB and with Oracle JDK 382 MB (which can be stripped down to 172 MB). The official Node.js image is 651 MB (or 211 MB for the slim variant), whereas with Alpine Linux it’s 36 MB. That’s quite a reduction in size.

Examples of using minimal container based on Alpine Linux:

For Node.js:

FROM alpine:edge

# Pin the nodejs package to a known version from Alpine's package repository
ENV NODE_ALPINE_VERSION=6.2.0-r0

RUN apk update && apk upgrade \
    && apk add nodejs="$NODE_ALPINE_VERSION"
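
Building and checking the result is the usual Docker workflow (the tag is arbitrary):

$ docker build -t my-node-alpine .
$ docker images my-node-alpine   # compare the size against a Debian-based build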

For Java applications with OpenJDK:

FROM alpine:edge
ENV LANG C.UTF-8
 
# Generate a small helper script that prints this image's JAVA_HOME
RUN { \
      echo '#!/bin/sh'; \
      echo 'set -e'; \
      echo; \
      echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
   } > /usr/local/bin/docker-java-home \
   && chmod +x /usr/local/bin/docker-java-home
 
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:$JAVA_HOME/bin
ENV JAVA_VERSION 8u92
ENV JAVA_ALPINE_VERSION 8.92.14-r0
 
RUN set -x \
    && apk update && apk upgrade \
    && apk add --no-cache bash \
    && apk add --no-cache \
      openjdk8="$JAVA_ALPINE_VERSION" \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

If you want to read more about running services on Alpine Linux, check Nicola Paolucci’s nice article on the Atlassian developer blog about experiences of running Java apps on Alpine.

Go small or go home?

So, should you use Alpine Linux for running your application in Docker? As official Docker images are also moving to Alpine Linux, it seems to make perfect sense from both performance and security perspectives to switch to Alpine. And if you don’t want to take the leap from Debian or Ubuntu, or want support from a downstream vendor, you should consider stripping unneeded files from your images to make them smaller.

Container orchestration with CoreOS at Devops Finland meetup

Development and operations, DevOps, is one of the important things when going beyond agile. It boosts the agile way of working and can be seen as an incremental way to improve our development practices. And what better place to improve than meetups, learning how others are doing things. This time the DevOps Finland meetup was about container orchestration with CoreOS, and it was held at Oppex’s lounge in central Helsinki. The talks gave a nice dive into CoreOS, covering both beginner and seasoned expert points of view. Here are my short notes about the presentations.

CoreOS intro for beginners, by beginners

The first talk was practically an interactive CoreOS tutorial by Antti Vähäkotamäki and Frans Ojala. Their 99 slides showed step by step how to get started with CoreOS on Vagrant and what difficulties they experienced. Nothing special.

CoreOS in production, lessons learned

The more interesting talk about CoreOS was “CoreOS in production, lessons learned” by Vlad Bondarenko from Oppex, where he told about their software stack and how they’re running it. In short, they’re running CoreOS on bare metal with Nginx as a reverse proxy, Node.js for the UI and API, and RethinkDB and SolrCloud clusters. Deployment is done with Ansible and makefiles, and Ship.it is used for Node.js. Service discovery is DNS-based with the docker-etcd-registrator component, and they’ve also written their own DNS server. For Node.js config management with etcd they’ve made the etcd-simple-config component. With Docker they use standard images with volumes and inject their own data into the container.

CoreOS seemed to work quite well for them, with easy cluster management, running multiple versions of third-party and their own software, and zero-downtime updates and rollbacks. But there were also some cons, like maturity (bugs) and scripting systemd.

Kontena, CoreOS war stories

The last talk was about CoreOS war stories at Kontena by Jari Kolehmainen. The slides tell the story of how they use CoreOS at Kontena and what the pain points are. In short, it comes down to configuration management and issues related to etcd.

For bootstrapping they use cloud-init, which is the de facto way to initialize cloud instances and is integrated into CoreOS. The hard parts with etcd are discovery, security (TLS certificates), using central services vs. workers, and maintenance (you don’t do it). Now they run etcd inside a container, bind it only to localhost and the overlay network (Weave Net), and the master coordinates etcd discovery. With automatic updates they use a best-effort strategy: if etcd is running, locksmith coordinates the reboots; otherwise they just reboot when an update is available.
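
The localhost-only binding comes from etcd’s client URL flags. A sketch of the relevant options (the addresses are illustrative, in their setup the Weave Net overlay address would be listed too, and this is not Kontena’s exact configuration):

$ etcd --name node01 \
    --listen-client-urls http://127.0.0.1:2379 \
    --advertise-client-urls http://127.0.0.1:2379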

The presentation’s summary was that the “OS” part is currently the best option for containers and etcd is a must, but a little hard to handle. For the orchestrator they suggest picking one which hides all the complexity. And automate all the things.

Problems with installing Oracle DB 12c EE, ORA-12547: TNS: lost contact

For development purposes I wanted to install Oracle Database 12c Enterprise Edition in a Vagrant box so that I could play with it. It should’ve gone quite straightforwardly, but in my case things got complicated, although I had Oracle Linux and the prerequisites fulfilled. Everything went fine until it was time to run the DBCA and create the database.

The DBCA gave an “ORA-12547: TNS: lost contact” error, which is quite common. Google gave me a couple of resources to debug the issue. An Oracle DBA blog explained common issues which cause ORA-12547 and solutions to fix it.

One of the suggested solutions was to check that the following two files are not 0 bytes:

ls -lt $ORACLE_HOME/bin/oracle
ls -lt $ORACLE_HOME/rdbms/lib/config.o

And true, my oracle binary was 0 bytes:

-rwsr-s--x 1 oracle oinstall 0 Jul  7  2014 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

To fix the binary you need to relink it, and to do that, first rename the following file:

$ cd $ORACLE_HOME/rdbms/lib
$ mv config.o config.o.bad

Then shut down the database and listener, and run “relink all”:

$ relink all

If only things were that easy. Unfortunately relinking ended in an error:

[oracle@oradb12c lib]$ relink all
/u01/app/oracle/product/12.1.0/dbhome_1/bin/relink: line 168: 13794 Segmentation fault      $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/install/modmakedeps.pl $ORACLE_HOME $ORACLE_HOME/inventory/make/makeorder.xml > $CURR_MAKEORDER
writing relink log to: /u01/app/oracle/product/12.1.0/dbhome_1/install/relink.log

After googling some more I found a similar problem and solution: relink the executables by running make install.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk install
 
cd $ORACLE_HOME/network/lib
make -f ins_net_server.mk install
 
If needed you can also relink other executables:
make -kf ins_sqlplus.mk install (in $ORACLE_HOME/sqlplus/lib)
make -kf ins_reports60w.mk install (on CCMgr server)
make -kf ins_forms60w.install (on Forms/Web server)

But of course it didn’t work out of the box and failed with an error:

/bin/ld: cannot find -ljavavm12
collect2: error: ld returned 1 exit status
make: *** [/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/lib/oracle] Error 1

The solution is to copy libjavavm12.a under $ORACLE_HOME/lib as explained:

cp $ORACLE_HOME/javavm/jdk/jdk6/lib/libjavavm12.a $ORACLE_HOME/lib/

Run the make install commands from above again and you should have a working oracle binary:

-rwsr-s--x 1 oracle oinstall 323649826 Feb 17 16:27 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

After this I ran relink again, which worked, and the installation of the database also went fine.

cd $ORACLE_HOME/bin
relink all

Start the listener:

lsnrctl start LISTENER

Create the database:

dbca -silent -responseFile $ORACLE_BASE/installation/dbca.rsp
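
To check that the instance actually came up, a quick sanity check with standard Oracle tooling (not part of the original debugging session) is:

$ sqlplus / as sysdba
SQL> SELECT status FROM v$instance;   -- should report OPEN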

The problems I encountered while installing Oracle Database 12c Enterprise Edition on Oracle Linux 7, albeit in Vagrant and with Ansible, were surprising, as you would think that on a certified platform it should just work. If I had been using CentOS or Ubuntu, it would’ve been a totally different issue.

You can see the Ansible tasks I did to get Oracle DB 12c EE installed on Oracle Linux 7 in my vagrant-experiments GitHub repo.

Oracle DB 12c EE Ansible Tasks

APIOps and Test Automation at DevOps Finland Meetup

A couple of weeks ago at Tampere goes Agile the question was what’s beyond agile, and a partial answer was DevOps. I’ve read about DevOps before and tried to introduce it in my daily job, but new things move slowly. So, it was a good time to hear more about DevOps and how others are using it at the DevOps Finland meetup about APIOps and test automation. The meetup was held at the GE Healthcare building in Vallila and organized by Eficode. Delicious coffee and sandwiches were from Warrior Coffee. Here are my short notes about the topics discussed.

APIOps

Jarkko Moilanen talked about APIOps – focusing on iterative and collaborative design-first driven API development, and how to automate and streamline the API strategy and development process. But what is APIOps? In short, the APItalist creates the strategy and APIOps troops implement it.

APIOps

The talk was more about the mindset related to developing APIs than tools, but Swagger was mentioned for describing your API and SoapUI for testing it. For API management, Moilanen talked about APInf, which is an API management platform.

Test automation with Robot Framework

The Eficode guys talked about test automation with Robot Framework, which is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It was originally developed at Nokia Networks in 2005 and open sourced in 2006. Robot Framework uses a keyword-driven testing approach and its capabilities can be extended by test libraries implemented either with Python or Java. Robot Framework is quite big in Finland, but to push the work forward and make it more known worldwide there’s now the Robot Framework Association, put together by Eficode, Omenia, Reaktor, Eliga, Knowit, Qentinel and HiQ.

Automating payment terminal testing

After some technical difficulties with the projector we heard an intro to Robot Framework with Selenium2Library and saw a video about using it. Selenium is a suite of tools to automate web browsers, and with Selenium2Library you can use it with Robot Framework to easily implement and maintain automatic browser testing of your web application. Another use case which I find interesting is testing REST APIs.
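
Getting started with the combination is a pip install away (a minimal sketch; the suite file name is hypothetical):

$ pip install robotframework robotframework-selenium2library
$ robot --outputdir results login_tests.robot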

You can use Robot Framework in many ways, as we saw with the demo of a machine for automating payment terminal testing which Eficode had built (slides, blog post in Finnish). It was a Shapeoko 2 CNC milling machine where an Arduino parsed g-code sent over the terminal bus, the payment terminal screen was captured and read with Tesseract OCR, and everything was controlled by Robot Framework running on a Raspberry Pi. They had extended Robot Framework with new libraries for communicating over the serial bus and reading images from the Raspberry Pi camera.

What you can do with Robot Framework is up to you as the framework doesn’t limit you.

Future of DevOps Finland

The last talk of the meetup was about the future of DevOps Finland. DevOps Finland was started in 2013 by Erno Aapa, and now the load is distributed over a new planning team to keep things active. Sharing is caring, and so we were encouraged to share our experiences and war stories about DevOps by talking at some future meetup.

Some possible future themes for the meetup were also discussed.

  • Infrastructure orchestration
    • e.g. CoreOS, Mesos, Kubernetes, AWS tools, Rancher.
  • DevOps on Windows
    • PowerShell and Azure.
  • DevOps without computers
    • AWS Lambda, Heroku, Dokku, AWS Elastic Beanstalk, Google App Engine, IBM Bluemix (DevOps as a service).
  • How to move ops work to developers?
    • Hubot, configuration management, continuous delivery.
  • Security.
  • Ops do dev. Dev do ops.
  • How to handle corporate IT?
  • Configuration management systems
    • Chef, Puppet, Ansible, Salt.
  • Continuous integration
    • Jenkins, CircleCI, Travis. Alternatives to Jenkins.
  • Working with legacy systems
    • Handling existing data, migrating legacy operations to modern operations, using old hardware to create a cloud.
  • DevOps in the cloud
    • What cloud services to use and why? Developing in the cloud, build promotion practices.
  • Measuring, monitoring, logging
    • ELK stack, Kafka, Sentry, New Relic, Loggly, Graylog, practices & different needs.
  • Containers
    • Docker, LXC, Xen, VMware, QEMU.

Notes from Tampere goes Agile 2015

What could be a better way to spend a beautiful autumn Saturday than visiting Tampere goes Agile and being inspired beyond agile? Well, I can think of a couple of activities which beat waking up at 5:30 to catch a train to Tampere, but attending a conference and listening to thought-provoking presentations is always refreshing. So, what did they tell about being “inspired beyond agile” at Tampere goes Agile 2015?

Tampere goes Agile 2015

Tampere goes Agile is a free-to-attend event about agile, and this year the theme of the conference was “inspired beyond agile”. The event was held at Sokos Hotel Ilves and there were roughly 140 attendees. Agile as a topic isn’t that interesting anymore as its practices are widely in use, so the event went past agile and concentrated on “being agile”: how the organizational level and our mindsets have to change to make agile work. Waterfall mindset eats your agile culture for breakfast, and that’s the problem many presentations addressed.

Juho Vepsäläinen also wrote a great blog post with afterthoughts of Tampere Goes Agile 2015. It was nice to read a recap of the sessions which were in the other room than the one I was in.

Keynote: After Agile

The event started with a keynote by Bob Marshall, who asked what comes after agile. He has introduced the concept of right-shifting, where the core idea is that a large number of organizations are underperforming. We’re always more or less prisoners of our mindset and existing ways.

“It’s not enough to do your best; you must know what to do, and then do your best.” – W. Edwards Deming

Marshall showed his right-shifting organizational effectiveness chart, where the mean is around 1 (on a 0 to 5 scale) and organizations using agile sit around 1.25 to 2. So, what’s beyond that? Agile thinking isn’t getting us there. What differentiates the organizations on the chart is their mindset: ad hoc, analytic, synergistic and finally chaordic.

The Marshall model:

The Marshall model

In order to improve the effectiveness and efficiency of our organizations, we need to be able to imagine better ones. The question is, what does an ideal organization look like? What kind of organization would we build if the existing one was wiped out and we started from a clean slate? We should look at the organization as a whole, and what Marshall suggests is to use therapy to understand the organization’s health and to change the mindset of the organization to one that’s more conducive to high performance.

The ideal model for an IT company is built around people, relationships between people, collective mindset, cognitive function and motivation. And it’s good to remember the difference between effectiveness and efficiency: doing the right thing versus doing the thing right.

Doctor, please fix my Agile!

Ville Törmälä talked about how, although we have seen changes at the method level, organizations are still mostly functioning the same way as before. Many have tried to become more agile, but without much success, as there’s a waterfall way of doing everything.

Törmälä presented his definition of agile: 1) make the work better, 2) make the work work better, 3) make lives better. But the waterfall mindset eats our agile culture for breakfast, so it’s about time to broaden our thinking about what really constitutes long-term success in organizations doing any kind of knowledge work. Agile gives you tools and ideas, but organizations can’t change or improve by “doing agile” better. If you fail with one agile “method”, you’ll probably fail with the rest of them. It’s a systemic problem. It’s all built deep into the thinking and structures of the organizations. That is the challenge.

“Every system is perfectly designed to achieve the results it gets.” We should change from “project thinking” to “stable teams thinking”, and change the power and influence structure from managing people to empowering people, and further to liberating people.

One way of doing this is to use KBIs, Key Behaviour Indicators: write down examples of the behaviour you want to see, think about in what kind of environment it would be possible or could happen, then create that environment and write down concrete actions.

“The supreme art of agile is to subdue the waterfall thinking without fighting” – Sun Tzu, The Art of War

In summary, we need to look beyond methods and practices. Organizations change by changing how they think, and become better by understanding better how work works, how to create value and how to learn better. We have to work with the system, aiming to understand and affect its thinking.

Pairing is sharing

Pair programming is a core agile technical practice, but many people are still reluctant to pair. Maaret Pyhäjärvi talked about deliberate practice in building up the skill of pairing, to let pairing take one’s skills in other activities to a new level. Pyhäjärvi shared her different stages of pairing and the lessons she picked up as a testing specialist.

Again, pairing is also about mindset, and effective pairing is far from trivial – but it is a skill that can be practiced. Pyhäjärvi talked about growth patterns, from pairing with peers to pairing and mobbing with developers, from traditional style and side-by-side work to strong-style pairing, and to pairing on both testing and programming activities.

Mindset: fixed <> growth

Listeners also got to test a specific style of pairing, strong-style pairing, where for an idea to go from your head to the computer it must go through someone else’s hands. You really need to think the steps through for the other person to manage the given task.

One point presented about pairing was that you must unlearn ownership of ideas and contributions. Co-creation vs. collaboration.

Pyhäjärvi also told us that selling pairing to a team (of introvert programmers) is hard, but mob programming has been their gateway to pair programming. It feels safer. You can read more about it in the Mob Programming guide book.

Beyond Continuous Deployment: Documentation Pipeline

Before lunch there was also a nice lightning talk about a documentation pipeline by Antti Virtanen. He told about lessons learnt from creating a documentation pipeline for continuous deployment with Jenkins and other open source tools. His slides are available on SlideShare.

1. ???
2. Continuous Delivery DevOps magic
3. ???
4. Profit

The DevOps magic with Jenkins was more or less standard practice, and it was configured to generate documentation from the database schema, JavaDocs, test coverage reports, performance test results and the API specification. It reminded me of all the work I should introduce to our continuous integration.

Three standard tricks were presented:

  • Jenkins is the Swiss army knife.
  • Database documentation kept in database metadata and ER diagrams generated with SchemaSpy (see the sketch after this list).
  • API documentation with Swagger.
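
A hedged example of the SchemaSpy trick (the flags are from SchemaSpy’s documentation; the host, credentials and output directory are made up). It connects to the database over JDBC and writes an HTML report with ER diagrams:

$ java -jar schemaspy.jar -t pgsql -db mydb -host localhost \
    -u dbuser -p secret -o docs/schema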

When quality is just a cost: Useful approaches to testing

Testing is also an important part of successful projects, so Jani Grönman talked about useful approaches to testing and software quality.

“Software quality is measured by your customer success, not development project metrics and quality processes.”

Grönman approached the topic with the surprisingly common attitudes towards testing and quality:

  • “Quality is just a cost and like other costs, it should be avoided or minimized.”
  • “Testing is just another buffer in the project’s budget.”
  • “Testers are not skilled labor, it’s enough if they can read and write.”
  • “What automation? They can quickly click through the app, can’t they?”

And as you know, this is all wrong. It’s true that testing is expensive, but so is development. Can you afford not to test? You should think of it as an investment. The presentation went through the reasons and motivations behind the various attitudes and explored the differences in views and how to best tackle them using the right technology and approach. He also talked about the schools of testing: analytical, standard, quality, context-driven and agile.

But overall you should know that testing is a skilled activity and part of development. Testing provides information to the project, and you should use a mix of techniques like exploratory testing and automation. And think about what testing would be most effective now. You need to choose the right set of QA tools for the job. One size fits no one.

DevOps: Boosting the agile way of working

DevOps has been quite the buzzword for some time, so it was interesting to hear what Timo Stordell had to say about how DevOps is boosting the agile way of working. In short, DevOps isn’t anything revolutionary and should be seen as an incremental way to improve our development practices. And talking about revolutions, Stordell’s slides had a nice Soviet theme.

The presentation was more or less what you would expect from a topic covering DevOps, and it had a nice touch to it. In short: small bangs over a big bang, requirements management meets acceptance testing, standardize development environments, and monitor to understand what to develop.

Stordell had a nice demo of how they perform acceptance testing using physical devices and automation. They have built a rig with a CNC mill run by a Raspberry Pi to test a payment system.

For those interested in the DevOps movement and everything around it, there’s the DevOps Finland meetup group. You can also download Eficode’s DevOps Quick Guide to read more about it.

Keynote: Beyond projects

The event’s final keynote was by Allan Kelly, who spoke about #noprojects: why projects are wrong and what to do instead. The main point of the keynote was that the project model doesn’t match software development, and he outlined an alternative to the project model and what companies need to do to achieve it. The presentation slides are available on SlideShare. Kelly has also written a book about team-centric agile software development: Xanpan, which combines Kanban and XP.

Going beyond projects is an interesting idea, as everything we do is somewhat tied to doing things in projects. So, what’s wrong with projects? Projects are temporary, whereas software is forever. Projects have end dates, which goes against the defining feature of successful software: it doesn’t end. Software which is useful is used and demands change; stop changing it and you kill it. At worst the project metaphor leads to dead software, higher costs and missed business opportunities.

We should think of projects more like a continuous flow where success isn’t determined by staying on schedule and on budget with quality. We should concentrate on the value delivered and put value in flexibility, as requirements change. This goes against the fixed nature of projects. Also, after a project you often break up a functioning team and start all over again. We should put the emphasis on teams: treat the team as a unit and push work through it.

The other thing is that software is not milk: it’s cheapest in small packages, not in big cartons. Software development has no economies of scale. Big projects are a risk. Think small and make regular deliveries, which increases ROI. Fail fast, fail cheap. Quite basic agile thinking.

So, beyond projects: waterfall 2.0, continuous flow

Continuous flow of waterfall

Now we have #noestimates, #nomanagement and #noprojects. Profit?

Summary

It was my first time visiting Tampere goes Agile and it was a nice conference. The topics provided something to think about and not just the same old agile thinking. You could clearly see the theme “inspired beyond agile” running through the different presentations, and the emphasis was on changing our mindsets.

Going beyond agile isn’t easy, as it’s more about thinking than tools. Old habits die hard and changing the waterfall way of thinking isn’t trivial. We should start by understanding our organization’s health and changing the mindset of the organization to one that’s more conducive to high performance. Switch from “project thinking” to “stable teams thinking” and change the power and influence structure from managing people to empowering people, and further to liberating people.

The after-party was at Ruby & Fellas, but after an early morning and a couple of nice beers it was time to take the train back home. But before that I had to visit the Moro Sky Bar with its nice view over Tampere.

Tampere from Moro Sky Bar

Creating Vagrant Base Box with Veewee

Vagrant is a great tool for creating and configuring lightweight, reproducible, portable virtual machine environments, but the first step of using Vagrant, downloading an existing “base box”, raises some questions. For example, how are these unverified boxes built? So you might end up building your own base box, which is often time-consuming and cumbersome. Fortunately there’s a tool called Veewee which aims to automate all the steps for building base boxes and to collect best practices in a transparent way.

Vagrant Base Box with Veewee

Veewee is a tool for easily (and repeatedly) building custom Vagrant base boxes, KVM machines, and virtual machine images. You can use it to build a Vagrant box on Linux, Mac OS X and Windows, but I found out that fulfilling the requirements on Windows is quite difficult (read: Ruby and RVM), so just forget it.

To get started there are some requirements you need to fulfill. First, you’ll need to install at least one of the supported virtual machine providers, like VirtualBox, and second, you need some development libraries.

On Ubuntu 15.04 Linux and using VirtualBox you need these packages:

$ sudo apt-get install virtualbox git curl ruby ruby-dev libxslt1-dev libxml2-dev zlib1g-dev

Install RVM on Linux

For the Ruby environment it’s recommended to use either rvm or rbenv. I chose RVM and followed the RVM install documentation.

Install mpapis public key:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

If the keyserver fails, you can use: $ curl -sSL https://rvm.io/mpapis.asc | gpg --import -

Install RVM stable with ruby:

$ \curl -sSL https://get.rvm.io | bash -s stable --ruby

Installing Veewee with RVM

With RVM already installed, ensure a ruby version that’s supported by Veewee is available on your machine:

$ source /home/marko/.rvm/scripts/rvm
$ rvm install ruby

Clone the veewee project from source:

$ cd <path_to_workspace>
$ git clone https://github.com/jedi4ever/veewee.git
$ cd veewee

Set the local gemset and ruby version within the current directory:

$ rvm use ruby@veewee --create

Run bundle install to install Gemfile dependencies for your local gemset:

$ gem install bundler
$ bundle install

Bundle install will take some time.

Building Vagrant Box with Veewee

Veewee uses definitions to build new virtual machines. A ‘definition’ is derived from a ‘template’, and preconfigured templates are found in the templates/ folder. Veewee Basics explains how you can create your own customized definition.
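
If no existing definition fits, you can create one from a template. As a sketch (the template name here is illustrative, and the first command lists the real ones):

$ bundle exec veewee vbox templates    # list available templates
$ bundle exec veewee vbox define 'my-centos-box' 'CentOS-6.6-x86_64-minimal'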

For my customized Vagrant box I decided to use Tommy Muehle’s definition as a template, as it contained what I wanted: a simple CentOS 6.6 box with Puppet. I just changed the localization to Finland and made the box bigger with a WebLogic use case in mind. My definition for the Vagrant box can be found on GitHub.

To use my definition, just clone the repository for the CentOS 6.6 box, copy the “centos-6.6-x86_64_puppet” folder to the definitions/ folder under Veewee and make your own changes if needed. After you’re done, run:

$ bundle exec veewee vbox build centos-6.6-x86_64_puppet

The build command runs Veewee scripts and automates the manual steps needed while installing a new Linux distribution.

Installing CentOS to Vagrant Box with Veewee

To export the Box for further use with Vagrant, run:

$ bundle exec veewee vbox export centos-6.6-x86_64_puppet

The above command actually calls “vagrant package --base 'centos-6.6-x86_64_puppet' --output 'boxes/centos-6.6-x86_64_puppet'”. The machine gets shut down, exported and packed into a centos-6.6-x86_64_puppet.box file inside the current directory.

And you’re all done. Now you can use your newly created base box for Vagrant boxes. Import it into Vagrant’s box repository and use it to initialize a fresh project:

$ vagrant box add 'centos-6.6-x86_64_puppet' 'centos-6.6-x86_64_puppet.box'
$ vagrant init 'centos-6.6-x86_64_puppet'
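
From there on it’s normal Vagrant usage:

$ vagrant up    # boot a VM from the new base box
$ vagrant ssh   # log in and verify everything works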

Using Veewee to build a Vagrant box is simple and, what’s more important, automated and reproducible. Using Ruby and RVM on Windows 7 turned out to be practically impossible, but an old ThinkPad W510 with Ubuntu 15.04 worked nicely. Of course you could create a base box the Vagrant way, which means installing and configuring your Linux manually. But why would you want to do that when you can just automate it?