Notes from React Native EU 2022

React Native EU 2022 was held a couple of weeks ago. The conference focuses exclusively on React Native, but it also covers general software development topics applied to the RN context. This year the online event provided great talks, and there were especially many presentations about app performance improvements, writing better code and identifying bugs. Here are my notes from the talks I found interesting. All of the talks are available in the conference stream on YouTube.

Better performance and quality code from RN EU 2022

This year the React Native EU talks had, among other presentations, two common topics: performance and code quality. Both are important aspects of software development that are often dismissed until problems arise, so it was refreshing to see them discussed so much.

Here are my notes from the talks I found most interesting. The "Can't touch this" talk about accessibility was also a good reminder that not everyone uses their mobile devices by hand.

How we made our app 80% faster, a data structure story

Marin Godechot gave an interesting talk about React Native app performance improvements and one of the crucial tools for achieving them.

Validate your assumption before investing in solutions

The learnings from the talk, which went through trying different approaches to fix the performance problem, were:

  • Validate your assumption before investing in solutions
  • Performance improvements without tooling is a guessing game
  • Slow Redux selectors are bad
  • Data structures and complexity matters

The breakthrough in finding the problem came after actually measuring the performance with Datadog Real User Monitoring (RUM) and getting a better understanding of the bottlenecks. What you can't measure, you can't improve.

The issue wasn't with useless re-renders, lists or the navigation stack, but with the data models, and not the one you would guess. Although the persisted, JSON-stringified Redux state was transferred between the JS side and the native side (JS <-> json <-> Bridge <-> json <-> Native), that wasn't the issue.

The big reveal came when they instrumented Redux's selectors, reducers and sagas with their monitoring tool: because user permissions were handled using the Attribute Based Access Control model, the permission data in JSON format had grown over the years from 15 permissions per user to 10,000 permissions per user, which caused problems with the Redux selectors. The fix was relatively simple: change the array into an object keyed by agency -> {agency: [agency:caregivers:manage]}.
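As a rough sketch of that kind of change (the permission strings and helper names are made up for illustration), grouping the flat permission array into an object keyed by agency turns a scan over thousands of entries into a lookup in a much smaller set:

```typescript
// Illustrative only: thousands of flat ABAC permission strings per user.
type Permission = string; // e.g. "agency:caregivers:manage"

// Before: every permission check scans the whole array, O(n) per selector call.
const hasPermissionSlow = (permissions: Permission[], wanted: Permission): boolean =>
  permissions.includes(wanted);

// After: group once by agency, then each check only looks at that agency's permissions.
const groupByAgency = (permissions: Permission[]): Record<string, Permission[]> =>
  permissions.reduce<Record<string, Permission[]>>((acc, permission) => {
    const agency = permission.split(":")[0];
    (acc[agency] ??= []).push(permission);
    return acc;
  }, {});

const hasPermission = (
  byAgency: Record<string, Permission[]>,
  agency: string,
  wanted: Permission
): boolean => byAgency[agency]?.includes(wanted) ?? false;
```

Doing the grouping once, for example in a memoized selector, keeps the per-render cost low even with tens of thousands of permissions.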

What you can't measure, you can't improve

You can go EVERYWHERE but I can go FAST - holistic case study on performance

Jakub Binda made a clever comparison in his talk: apps are like cars, and the same kinds of modifications apply to fast cars and fast applications.

The talk starts slow but gets up to speed nicely with a good overview of how to create faster apps. Some of the points were:

  • Reduce weight:
    • Remove dead / legacy code
    • Avoid bundling heavy JSON files (e.g. translations)
    • Keep node_modules clean
  • Keep the component structure "simple":
    • A more complex structure leads to longer render times, more resource consumption and more re-renders
    • Shimmers are complex and heavy (they improve UX by giving the impression that content appears faster than it actually does)
  • Using hooks (see the sketch after this list):
    • Might lead to unexpected re-renders when their value changes or the dependency array is not set correctly
    • Leads to increased resource consumption if the logic is complex or the consumer of the result is hidden behind a flag
  • Tooling:
    • Flipper debug tool: flame charts & performance monitoring tooling
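To illustrate the hooks point above (the component here is made up), a fresh object in a dependency array re-runs the effect on every render, while memoizing it or depending on primitives keeps it stable:

```tsx
import React, { useEffect, useMemo, useState } from "react";
import { Text } from "react-native";

const Price = ({ amount, currency }: { amount: number; currency: string }) => {
  const [label, setLabel] = useState("");

  // Risky: a new object literal in the deps array makes the effect run on every render:
  //   useEffect(() => setLabel(format(amount, { currency })), [amount, { currency }]);

  // Safer: memoize non-primitive inputs, or list the primitive values directly.
  const options = useMemo(() => ({ currency }), [currency]);
  useEffect(() => {
    setLabel(`${amount.toFixed(2)} ${options.currency}`);
  }, [amount, options]);

  return <Text>{label}</Text>;
};

export default Price;
```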

Reducing bugs in a React codebase

Darshita Chaturvedi's hands-on and thoroughly explained talk on identifying bugs in a React codebase was insightful.


Getting Better All the Time: How to Escape Bad Code

Josh Justice gave a great presentation with a hands-on example of an application that needed fixes and new features, showing how to apply them with refactoring, and what TDD is, by recreating the application.

The slides are available with the rest of the TDD sequence and pointers to more resources on TDD in RN.

Refactoring

The key point of the talk was: what do you do about bad code? Do you work around it or rewrite it? The answer is refactoring: small changes that improve the arrangement of the code without changing its functionality. Comprehensive tests will save you while refactoring. The value is that you're making improvements that pay off right away, and the code is shippable after each refactoring. But how do you get there? By using Test-Driven Development (TDD).

"TDD is too much work"
Living with bad code forever is also a lot of work


"Make code better all the time (with refactoring and better test coverage)"

Visual Regression Testing in React Native

Rob Walker talked about visual regression testing and about using React Native OWL. Slides available.

React Native OWL:

  • Visual regression testing for React Native
  • CLI to build and run
  • Jest matcher
  • Interaction API
  • Report generator
  • Inspired by Detox
  • Baseline, latest and diff screenshots
  • Easy to approach (in theory)
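From memory of the react-native-owl docs, a test looks roughly like a normal Jest test that captures a screenshot and compares it against the stored baseline; treat the function and matcher names here as a sketch and double-check them against the project README:

```typescript
import { takeScreen } from "react-native-owl";

describe("Home screen", () => {
  it("matches the baseline screenshot", async () => {
    // Captures the current screen of the running app; the matcher (added by the
    // library's Jest setup) compares it against the baseline and generates a diff.
    const screen = await takeScreen();
    expect(screen).toMatchBaseline();
  });
});
```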

Different test types:

  • Manual testing pros: great for exploratory testing, very flexible, can be outsourced
  • Manual testing cons: easy to miss things, time consuming, hard to catch small changes
  • Jest snapshot tests pros: fast to implement, fast to run, runs on CI
  • Jest snapshot tests cons: only tests in isolation, does not test flow, only comparing JSX
  • Visual regression tests pros: tests entire UI, checks multi-step flows, runs on CI
  • Visual regression tests cons: can be difficult to set up, slower to run, potentially flaky
Use case for visual regression tests

Can't touch this - different ways users interact with their mobile devices

Eevis talked about accessibility and what it means for a developer. The slides give a good overview of the topic.

  • Interaction methods covered:
    • Screen reader: for trouble seeing or understanding the screen; speech or braille feedback; VoiceOver, TalkBack
    • Physical keyboard: e.g. tremors
    • Switch devices: movement-limiting disabilities
    • Voice recognition
    • Zooming / screen magnifying: low vision
  • 4 easy tips to start with (see the sketch after this list):
    • Use visible focus styles
    • Annotate headings (accessibilityRole)
    • Add name, role and state for custom elements
    • Respect reduced motion
  • Key takeaways:
    • Users use mobile devices differently
    • Test with different input methods
    • Educate yourself
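To make the tips concrete, here is a minimal React Native sketch covering three of them (the component is made up; accessibilityRole, accessibilityState and AccessibilityInfo are standard React Native APIs):

```tsx
import React from "react";
import { AccessibilityInfo, Pressable, Text } from "react-native";

const FilterToggle = ({ active, onToggle }: { active: boolean; onToggle: () => void }) => (
  <>
    {/* Annotate headings so screen reader users can jump between sections. */}
    <Text accessibilityRole="header">Filters</Text>

    {/* Give custom elements a name, a role and a state. */}
    <Pressable
      accessibilityRole="switch"
      accessibilityLabel="Only show favourites"
      accessibilityState={{ checked: active }}
      onPress={onToggle}
    >
      <Text>{active ? "On" : "Off"}</Text>
    </Pressable>
  </>
);

// Respect reduced motion before starting decorative animations.
const startAnimationsIfAllowed = async () => {
  const reduceMotion = await AccessibilityInfo.isReduceMotionEnabled();
  if (!reduceMotion) {
    // start the animation here
  }
};

export { FilterToggle, startAnimationsIfAllowed };
```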

Performance issues - the usual suspects

Alexandre Moureaux gave a hands-on presentation on how to use the Flipper React DevTools profiler with its flame graph, and the Android performance profiler, to debug Android application performance issues, e.g. why the app runs below 60 fps. The talk concentrates on debugging Android, but similar approaches also work for iOS.

First, make your measurements deterministic, average them over several iterations, keep the same conditions for every measurement and automate the behavior you want to test.

The presentation showed how to use the Flipper React DevTools profiler flame graph to find which component is slow and causes e.g. re-rendering, then how to refactor the view by moving the updating component that caused the re-rendering, and how to use @perf-profiler/web-reporter to visualize the measurements from JSON files.

You can record a trace with the Hermes profiler and open it in Google Chrome:

  • Set bundleInDebug: true
  • Start the profiler in the simulator; the trace is saved on the device
  • npx react-native profile-hermes pulls the trace and converts it into a usable format
  • Findings: filtering the tweet list by parsing the weekday from each tweet was slow (high complexity); it was replaced with Date.getDay on tweet.createdAt (sketched below)
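The tweet example below is my reconstruction of that finding, not the talk's exact code: deriving the weekday with Date.prototype.getDay avoids formatting and parsing a string for every item in the list:

```typescript
type Tweet = { createdAt: string; text: string };

// Before (slow): format each date into a string and parse the weekday back out of it.
const isMondayTweetSlow = (tweet: Tweet): boolean =>
  new Date(tweet.createdAt)
    .toLocaleDateString("en-US", { weekday: "long" })
    .startsWith("Monday");

// After (fast): getDay() returns the weekday as a number directly (1 = Monday).
const isMondayTweet = (tweet: Tweet): boolean => new Date(tweet.createdAt).getDay() === 1;

const mondayTweets = (tweets: Tweet[]): Tweet[] => tweets.filter(isMondayTweet);
```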

Connecting the app to Android Studio for profiling helped find a long-running animation (the background skeleton content, "grey boxes").

How to actually improve the performance of a RN App?

Michal Chudziak presented how to use Define, Measure, Analyze, Improve, Control (DMAIC) pattern to improve performance of a React Native application.

  • Define where we want to go, listen to customers
  • Make it measurable
  • Analyze common user paths
  • Choose priorities

Define your goals.

Measure: where are we?

  • React Profiler API
  • react-native-performance
  • Native IDEs
  • Screen capture
  • Flipper & perf monitor

  • Measurement needs to be accurate and precise
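Of the tools listed above, the React Profiler API is the lightest one to start with; a minimal sketch (the screen name and logging are mine) wraps a subtree and logs how long it actually took to render:

```tsx
import React, { Profiler, ProfilerOnRenderCallback } from "react";
import { FeedScreen } from "./FeedScreen"; // hypothetical screen to measure

const onRender: ProfilerOnRenderCallback = (id, phase, actualDuration, baseDuration) => {
  // Send these numbers to your monitoring tool instead of the console to build a baseline.
  console.log(`${id} ${phase}: ${actualDuration.toFixed(1)} ms (base ${baseDuration.toFixed(1)} ms)`);
};

export const ProfiledFeed = () => (
  <Profiler id="FeedScreen" onRender={onRender}>
    <FeedScreen />
  </Profiler>
);
```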

Analyze phase: how do we get there?

  • List potential root causes
  • Cause and effect diagram (fish bone diagram)
  • Narrow the list

Improve phase

  • Identify potential solutions
  • Select the best solution
  • Implement and test the solution

Control phase: are we on track?

Performance will degrade if it's not under control, so create a control plan.


Monitor regressions

  • Reassure: performance regression testing (local + CI); see the sketch below
  • Firebase: performance monitoring in production
  • Sentry: performance monitoring in production
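As a sketch of the Reassure approach mentioned above (API names recalled from the docs, so double-check them), a regression test is a normal Jest test that measures renders and lets the CLI compare the stats against a baseline branch:

```tsx
import React from "react";
import { measurePerformance } from "reassure";
import { FeedScreen } from "../FeedScreen"; // hypothetical component under test

test("FeedScreen renders without a performance regression", async () => {
  // Renders the component multiple times and records render counts and durations;
  // the reassure CLI then compares the results against the stored baseline.
  await measurePerformance(<FeedScreen />);
});
```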

Notes from DEVOPS 2020 Online conference

DevOps 2020 Online was held on 21.4. and 22.4.2020; the first day focused on Cloud & Transformation and the second was a 5G DevOps seminar. Here are some quick notes from the talks I found the most interesting. The talk recordings are available from the conference site.

DevOps 2020

How to improve your DevOps capability in 2020

Marko Klemetti from Eficode presented three actions you can take to improve your DevOps capabilities. The talk looked at current DevOps trends against organizations on different maturity levels and gave ideas on how you can improve tooling, culture and processes.

  1. Build the production pipeline around your business targets.
    • Automation builds bridges until you have self-organized teams.
    • Adopt a DevOps platform. Aim for self-service.
  2. Invest in a Design System and testing in natural language:
    • Brings people in the organization together.
    • Testing is the common language between stakeholders.
    • You can have discussions over the test cases: automated quality assurance from stakeholders.
  3. Validate business hypotheses in production:
    • Enable canary releasing to lower the deployment barrier.
    • You cannot improve what you don't see. Make your pipeline data-driven.

The best practices from elite performers are available for all maturity levels: DevOps for executives.

Practical DevSecOps Using Security Instrumentation

Jeff Williams from Contrast Security talked about how we need a new approach to security that doesn't slow development or hamper innovation. He showed how you can ensure software security from the "inside out" by leveraging the power of software instrumentation, which establishes a safe and powerful way for development, security and operations teams to collaborate.

DevSecOps is about changing security, not DevOps
What is security instrumentation?
  1. Security testing with instrumentation:
    • Add matchers to catch potentially vulnerable code and report rule violations when they happen, like using unparameterized SQL. Similar to what static code analysis does.
  2. Making security observable with instrumentation:
    • Check e.g. access control for methods
  3. Preventing exploits with instrumentation:
    • Check that a command isn't run outside of its scope

The examples were written in Java, but the security checks should be implementable on other platforms as well.
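To make the instrumentation idea concrete outside Java, here is a hedged TypeScript sketch (the database interface and the heuristic are made up): wrap the client once at startup so every query passes through a matcher that flags SQL built without bound parameters.

```typescript
// Hypothetical minimal database client, for illustration only.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown>;
}

// Instrument the client once at startup; every query then runs through the matcher.
export const instrument = (db: Db): Db => ({
  async query(sql, params) {
    // Crude matcher: a literal value right after a comparison suggests string-built SQL
    // instead of bound parameters (a real tool would be far more thorough).
    const looksUnparameterized = /(=|like)\s*('[^']*'|\d+)/i.test(sql);
    if (looksUnparameterized) {
      console.warn(`[security-instrumentation] possible unparameterized SQL: ${sql}`);
    }
    return db.query(sql, params);
  },
});
```

Unlike static analysis, a check like this runs against the SQL the application actually executes.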

Modern security (inside - out)

Their AppSec platform's Community Edition is free to try out, but only for Java and .NET.

Open Culture: The key to unlocking DevOps success

Chris Baynham-Hughes from Red Hat talked about how the blockers for DevOps in most organisations are people and process based rather than a lack of tooling. Addressing issues relating to culture and practice is key to breaking down organisational silos, shortening feedback loops and reducing the time to market.

Start with why
DevOps culture & Practice Enablement: openpracticelibrary.com

Three layers required for effective transformation:

  1. Technology
  2. Process
  3. People and culture
Open source culture powers innovation.

Scaling DevSecOps to integrate security tooling for 100+ deployments per day

Rasmus Selsmark from Unity talked about how Unity integrates security tooling better into the deployment process. Best practices for securing your deployments involve running security scanning tools as early as possible in your CI/CD pipeline, not as an isolated step after the service has been deployed to production. The session covered best security practices for securing the build and deployment pipeline, with examples and tooling.

  • Standardized CI/CD pipeline, used to deploy 200+ microservices to Kubernetes.
Shared CI/CD pipeline enables DevSecOps
Kubernetes security best practices
DevSecOps workflow: Early feedback to devs <-----> Collect metrics for security team
  • Dev:
    • Keep dependencies updated: Renovate.
    • No secrets in code: unity-secretfinder.
  • Static analysis
    • SonarQube: Identify quality issues in code.
    • SourceClear: Information about vulnerable libraries and license issues.
    • trivy: Vulnerability Scanner for Containers.
    • Make CI feedback actionable for teams, like generating notifications directly in PRs.
  • When to trigger deployment
    • PR with at least one approver.
    • No direct pushes to master branch.
    • Only CI/CD pipeline has staging and production deployment access.
  • Deployment
    • Secrets management using Vault. Secrets are kept separate from the codebase, write-only for devs; only vault-fetcher can read them. Values are replaced during container startup, and no environment variables are passed into the container from outside.
  • Production
    • Container runtime security with Falco: identify security issues in containers running in production.
Standardized CI/CD pipeline allows introducing security features across teams and microservices

Data-driven DevOps: The Key to Improving Speed & Scale

Kohsuke Kawaguchi, creator of Jenkins, from Launchable talked about how some organizations are more successful with DevOps than others and where those differences seem to be made. One is around data (insight) and another is around how they leverage the "economy of scale".

Cost/time trade-off:

  • CFO: why do we spend so much on AWS?
    • Visibility into cost at project level
    • Make developers aware of the trade-off they are making: Build time vs. Annual cost
      • Small: 15 mins / $1000; medium: 10 mins / $2000; large: 8 mins / $3000
  • Whose problem is it?
    • A build failed: Who should be notified first?
      • Regular expression pattern matching
      • Bayesian filter

Improving the software delivery process doesn't get prioritized:

  • Data (& story) helps your boss see the problem you see
  • Data helps you apply effort to the right place
  • Data helps you show the impact of your work

Cut the cost & time of the software delivery process

  1. Dependency analysis
  2. Predictive test selection (see the sketch below):
    • You wait 1 hour for CI to clear your pull request?
    • Your integration tests only run nightly?
    • Reordering tests: reducing time to first failure (TTFF)
    • Creating an adaptive run: run a subset of your tests
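Launchable does this with machine learning; a much simpler sketch of the same reordering idea (the data shape is made up) is to sort tests by their recent failure rate so that likely failures run first, and an adaptive run can then take only the riskiest slice:

```typescript
interface TestStats {
  name: string;
  recentRuns: number;
  recentFailures: number;
}

const failureRate = ({ recentFailures, recentRuns }: TestStats): number =>
  recentRuns === 0 ? 0 : recentFailures / recentRuns;

// Run the historically most failure-prone tests first to shorten time to first failure (TTFF).
export const reorderByFailureRate = (tests: TestStats[]): TestStats[] =>
  [...tests].sort((a, b) => failureRate(b) - failureRate(a));

// An "adaptive run" then executes only a subset, e.g. the riskiest 20% of tests.
export const adaptiveSubset = (tests: TestStats[], fraction = 0.2): TestStats[] =>
  reorderByFailureRate(tests).slice(0, Math.ceil(tests.length * fraction));
```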

Deployment risk prediction: Can we flag risky deployments beforehand?

  • Learn from previous deployments to train the model

Conclusions

  • Automation is table stakes
  • Using data from automation to drive progress isn't
    • Lots of low-hanging fruit there
  • Unicorns are using "big data" effectively
    • How can the rest of us get there?

Moving 100,000 engineers to DevOps on the public cloud

Sam Guckenheimer from Microsoft talked about how Microsoft transformed to using Azure DevOps and GitHub, with a globally distributed 24x7x365 service on the public cloud. The session covered organizational and engineering practices in five areas.

Customer Obsession

  • Connect with customers directly and measure:
    • Direct feedback in product, visible on a public site, and captured in the backlog
  • Develop a personal connection and cadence
    • For top customers, have a "Champ" who maintains regular personal contact, a long-term relationship and an understanding of customer desires
  • Definition of done: live in production, collecting telemetry that examines the hypothesis which motivated the deployment
Ship to learn

You Build It, You Love It

  • Live site incidents
    • Communicate externally and internally
    • Gather data for repair items & mitigate for customers
    • Record every action
    • Use repair items to prevent recurrence
  • Be transparent

Align outcomes, not outputs

  • You get what you measure (don't measure what you don't want)
    • Customer usage: acquisition, retention, engagement, etc.
    • Pipeline throughput: time to build, test, deploy, improve, failed and flaky automation, etc.
    • Service reliability: time to detect, communicate, mitigate; which customers affected, SLA per customer, etc.
    • "Don't" measure: original estimate, completed hours, lines of code, burndown, velocity, code coverage, bugs found, etc.
  • Good metrics are leading indicators
    • Trailing indicators: revenue, work accomplished, bugs found
    • Leading indicators: change in monthly growth rate of adoption, change in performance, change in time to learn, change in frequency of incidents
  • Measure outcomes not outputs

Get clean, stay clean

  • Progress follows a J-curve
    • Getting clean is highly manual
    • Staying clean requires dependable automation
  • Stay clean
    • Make technical debt visible on every team's dashboard

Your aim won't be perfect: Control the impact radius

  • Progressive exposure
    • Deploy one ring at a time: canary, data centers with small user counts, highest latency, the rest.
    • Feature flags control access to new work: the setting is per user within an organization

Shift quality left and right

  • Pull requests control code merge to master
  • Pre-production tests check every CI build

Weekly notes 1

For some time I've been reading several newsletters to keep track of what happens in the field of software development, and the intention was also to share the interesting parts here. Now it's time to move from intent to action.

In the new "Weekly notes" series I share the interesting articles I have read, with short comments. The overall topic is technology, but other than that they can cover all things related to software development, from web applications to mobile development and from DevOps to user experience. I'll publish my reading list every week or every two weeks.

I also tweet about interesting topics, so follow me on Twitter.

Issue #1 // Week 48, 2015

Technical

Ludicrously Fast Page Loads - A Guide for Full-Stack Devs
Nate Berkopec explains in detail how you can diagnose performance issues on the frontend of your site with the Chrome Timeline. (from CSS Weekly, #185)

The Docker Monitoring problem
Detailed look at why monitoring containers is both different and difficult for traditional tools. Also a good introduction to Linux containers. (from DevOps weekly, Issue 204)

Tools of the trade

Continuous Integration Platform using Docker Container: Jenkins, SonarQube, Nexus, GitLab
Setting up CI is easy, but moving beyond that and getting value out of the CI is a different matter. This article covers some of the practices to employ. (from Java Web Weekly 42)

Amazon Web Services in Plain English
Amazon's services are everywhere, but with 50-plus opaquely named services, some plain English descriptions were needed. (from Hacker News)

Modern Java - A Guide to Java 8
Java 8 brings quite a lot of new things like default interface methods, lambda expressions, method references and repeatable annotations. This tutorial guides you step by step through all new language features. (from Hacker News)

To think about

Corporations and OSS Do Not Mix
Maintaining open source projects isn't easy and that's not about technology.

Not once has a company said to me:
"This bug is costing us $X per day. Can we pay you $Y to focus on it and get a fix out as soon as possible?"

(from Weekend reading)

Sustainable Open Source
Continuing the previous article's topic, good read for anyone involved in or planning a community-driven open source project. (from Weekend reading)

Edward Snowden explains how to reclaim your privacy
"Operations security (Opsec) is important even if you’re not worried about the NSA." Snowden gives 4 quick tips: Encrypt, use password manager, use 2-factor authentication, use Tor.

Event Sourcing - Capturing all changes to an application state as a sequence of events
Application architecture is the base for everything and Martin Fowler's reference intro to this powerful style of architecture is worth reading.

Something different

How snowmaking works
If Mother Nature isn't doing her job and making snow, we can do it ourselves. An important topic, as a couple of winters even here in Finland have been mild and it's not looking good this year either. "A resort that can guarantee 5+ inches of powder every day is a license to print money." (from Hacker News)

Notes from Tampere goes Agile 2015

What could be a better way to spend a beautiful autumn Saturday than visiting Tampere goes Agile and being inspired beyond agile? Well, I can think of a couple of activities which beat waking up at 5:30 to catch a train to Tampere, but attending a conference and listening to thought-provoking presentations is always refreshing. So, what did they tell us about being "Inspired beyond agile" at Tampere goes Agile 2015?

Tampere goes Agile 2015

Tampere goes Agile is a free-to-attend event about agile, and this year the theme of the conference was "inspired beyond agile". The event was held at Sokos Hotel Ilves and there were roughly 140 attendees. Agile as a topic isn't that interesting anymore, as its practices are widely in use, so the event went past agile and concentrated on "being agile": how the organizational level and our mindsets have to change to make agile work. The waterfall mindset eats your agile culture for breakfast, and that's the problem many presentations addressed.

Juho Vepsäläinen also wrote a great blog post with afterthoughts on Tampere Goes Agile 2015. It was nice to read a recap of the sessions held in the other room than the one I was in.

Keynote: After Agile

The event started with a keynote by Bob Marshall, who asked what comes after agile. He has introduced the concept of right shifting, where the core idea is that a large number of organizations are underperforming. We're always more or less prisoners of our mindset and existing ways.

"It's not enough to do your best; you must know what to do, and then do your best." - W. Edwards Deming

Marshall showed his right-shifting organizational effectiveness chart, where the mean is around 1 (on a 0 to 5 scale) and organizations using agile sit around 1.25 to 2. So, what's beyond that? Agile thinking isn't getting us there. What differentiates the organizations in the chart is their mindset: ad hoc, analytic, synergistic and finally chaordic.

The Marshall model

In order to improve the effectiveness and efficiency of our organizations, we need to be able to imagine better ones. The question is: what does an ideal organization look like? What kind of society would we build if the current one was wiped out and we started from a clean slate? We should look at the organization as a whole, and what Marshall suggests is to use therapy to understand the organization's health and to change the mindset of the organization to one that's more conducive to high performance.

The ideal model for an IT company is built around people, the relationships between people, a collective mindset, cognitive function and motivation. And it's good to remember the difference between effectiveness and efficiency: doing the right thing versus doing the thing right.

Doctor, please fix my Agile!

Ville Törmälä talked about how, while we have seen changes on the method level, organizations are still mostly functioning the same way as before. Many have tried to become more agile but without much success, as there's a waterfall way of doing everything.

Törmälä presented his definition of agile: 1) make the work better, 2) make the work work better, 3) make lives better. But the waterfall mindset eats our agile culture for breakfast, so it's about time to broaden our thinking about what really constitutes long-term success in organizations doing any kind of knowledge work. Agile gives you tools and ideas, but organizations can't change or improve by "doing agile" better. If you fail with one agile "method", you probably fail with the rest of them. It's a systemic problem, built deep into the thinking and structures of the organizations. That is the challenge.

"Every system is perfectly designed to achieve the results it gets". We should change from "project thinking" to "stable teams thinking". To change the power and influence structure from managing people to empowering people and further to liberating people.

One way of doing this is to use KBIs, Key Behaviour Indicators: write down examples of the behaviour you want to see, think about in what kind of environment it would be possible or could happen, and then create that environment and write down concrete actions.

"The supreme art of agile is to subdue the waterfall thinking without fighting" - Sun Tzu, The Art of War

In summary, we need to look beyond methods and practices. Organizations change by changing how they think, and they become better by understanding better how work works, how to create value and how to learn. We have to work with the system, aiming to understand and affect its thinking.

Pairing is sharing

Pair programming is a core agile technical practice, but many people are still reluctant to pair. Maaret Pyhäjärvi talked about deliberate practice in building up the skill of pairing, to allow pairing to take one's skills in other activities to a new level. Pyhäjärvi shared her different stages of pairing and the lessons she picked up as a testing specialist.

Again, pairing is also about mindset, and effective pairing is far from trivial, but it is a skill that can be practiced. Pyhäjärvi talked about growth patterns: from pairing with peers to pairing and mobbing with developers, from traditional style and side-by-side work to strong-style pairing, and to pairing on both testing and programming activities.

Mindset: fixed <> growth

Listeners also got to test a specific style of pairing, strong-style pairing, where for an idea to go from your head to the computer it must go through someone else's hands. You really need to think the steps through for the other person to manage the given task.

One point presented about pairing was that you must unlearn the ownership of ideas and contributions. Co-creation vs. collaboration.

Pyhäjärvi also said that selling pairing to a team (of introvert programmers) is hard, but Mob Programming has been their gateway to pair programming, as it feels safer. You can read more about it in the Mob Programming guidebook.

Beyond Continuous Deployment: Documentation Pipeline

Before lunch there was also a nice lightning talk about a documentation pipeline by Antti Virtanen. He shared lessons learnt from creating a documentation pipeline for continuous deployment with Jenkins and other open source tools. His slides are available on SlideShare.

1 ???
2 Continuous Delivery DevOps magic
3 ???
4 Profit

The DevOps magic with Jenkins was more or less standard practice: it was configured to generate documentation from the database schema, JavaDocs, test coverage reports, performance test results and the API specification. It reminded me of all the work I should introduce to our continuous integration.

Three standard tricks were presented:

  • Jenkins is the Swiss army knife.
  • Database documentation in database metadata and generating ER diagrams with SchemaSpy.
  • API documentation with Swagger.

When quality is just a cost: Useful approaches to testing

Testing is also an important part of successful projects, so Jani Grönman talked about useful approaches to testing and software quality.

"Software quality is measured by your customer success, not development project metrics and quality processes."

Grönman approached the topic through the surprisingly common attitudes towards testing and quality:

  • "Quality is just a cost and like other costs, it should be avoided or minimized."
  • "Testing is it just another buffer in project's budget"
  • "testers are not skilled labor, it’s enough if they can read and write."
  • "What automation? They can quickly click trough the app can't they?".

And as you know, this is all wrong. It's true that testing is expensive, but so is development. Can you afford not to test? You should think of it as an investment. The presentation went through the reasons and motivations behind the various attitudes, explored the differences in views and discussed how to best tackle them using the right technology and approach. He also talked about the schools of testing: analytical, standard, quality, context-driven and agile.

But overall you should know that testing is a skilled activity and part of development. Testing provides information to the project, and you should use a mix of techniques like exploratory testing and automation. Think about what kind of testing would be most effective right now. You need to choose the right set of QA tools for the job. One size fits no one.

DevOps: Boosting the agile way of working

DevOps has been quite the buzzword for some time, so it was interesting to hear what Timo Stordell had to say about how DevOps is boosting the agile way of working. In short, DevOps isn't anything revolutionary and should be seen as an incremental way to improve our development practices. And talking about revolutions, Stordell's slides had a nice Soviet theme.

The presentation was more or less what you would expect from a talk covering DevOps, and it had a nice touch to it. In short: small bangs over a big bang, requirements management meets acceptance testing, standardize development environments, and monitor to understand what to develop.

Stordell had a nice demo of how they perform acceptance testing using physical devices and automation: they have built a rig with a CNC mill run by a Raspberry Pi to test a payment system.

For those interested in the DevOps movement and everything around it, there's the DevOps Finland meetup group. You can also download Eficode's DevOps Quick Guide to read more about it.

Keynote: Beyond projects

The event's final keynote was by Allan Kelly, who spoke about #noprojects: why projects are wrong and what to do instead. The main point of the keynote was that the project model doesn't match software development, and it outlined an alternative to the project model and what companies need to do to achieve it. The presentation slides are available on SlideShare. Kelly has also written a book about team-centric agile software development, Xanpan, which combines Kanban and XP.

Going beyond projects is an interesting idea, as everything we do is somehow tied to doing things in projects. So, what's wrong with projects? Projects are temporary, whereas software is forever. Projects have end dates, which goes against the defining feature of successful software: it doesn't end. Software which is useful is used and demands change; stop changing it and you kill it. At worst, the project metaphor leads to dead software, higher costs and missed business opportunities.

We should think of projects more like a continuous flow, where success isn't determined by staying on schedule and on budget with quality. We should concentrate on the value delivered and put value in flexibility as requirements change. This goes against the fixed nature of projects. Also, after a project you often break up a functioning team and start all over again. We should put the emphasis on teams, treat the team as a unit and push work through it.

The other point is that software is not milk: it's cheapest in small packages, not in big cartons. Software development has no economies of scale. Big projects are a risk. Think small and make regular deliveries, which increases ROI. Fail fast, fail cheap. Quite basic agile thinking.

So, beyond projects: waterfall 2.0, continuous flow

Continuous flow of waterfall

Now we have #noestimates, #nomanagement and #noprojects. Profit?

Summary

It was my first time visiting Tampere goes Agile and it was a nice conference. The topics provided something to think about, and not just the same old agile thinking. You could clearly see the theme "Inspired beyond agile" running through the different presentations, and the emphasis was on changing our mindsets.

Going beyond agile isn't easy, as it's more about thinking than tools. Old habits die hard, and changing the waterfall way of thinking isn't trivial. We should start by understanding our organization's health and changing the mindset of the organization to one that's more conducive to high performance: switch from "project thinking" to "stable teams thinking" and change the power and influence structure from managing people to empowering people, and further to liberating people.

The after party was at Ruby & Fellas, but after the early morning and a couple of nice beers it was time to take the train back home. Before that, though, I had to visit the Moro Sky Bar with its nice scenery over Tampere.

Tampere from Moro Sky Bar