Prettifying AWS S3 Bucket public index list

Sometimes it's useful to have an index listing on an AWS S3 bucket. Here are some solutions for configuring it with a nice template. Whether having a public index listing on an S3 bucket is a good idea or not, I'm not saying yay or nay.

First set the correct Bucket Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::tmfg-tiesaahistoria/*"
            }
        ]
    }

Next, set the Permissions:

Everyone: List objects = Yes

Create an index.html

For the index.html you have a couple of choices:

  1. Use the index.html with modifications by Nolan Lawson (see also: Lawson's blog post and code)
  2. Use a more up-to-date forked index.html
  3. Use original file by Francesco Pasqualini
  4. Use AWS S3 Bucket Browser

To use it, just upload the index.html file into the root of your public S3 bucket.

That’s it!

What software and hardware I use

There was a discussion in the Koodiklinikka Slack about what software people use, and people have made "/uses" pages for that purpose. Inspired by Wes Bos's /uses from the "Syntax" podcast, here's my list.

Check my /uses page to see what software and hardware I use for full-stack development in JavaScript, Node.js, Java, Kotlin, GraphQL, PostgreSQL and more. The list excludes tools used at different customers, such as GitLab, Rocket.Chat, etc.

For more choices check

Monthly Notes 52

Issue 52, 9.9.2020

Software development

Field Ops Guide
"The Field Ops Guide (by Futurice) is a booklet that makes it possible to survive a software development project. It's a distillation of years of wisdom gathered working in client projects."


Threat matrix for Kubernetes
"While Kubernetes has many advantages, it also brings new security challenges that should be considered. Therefore, it is crucial to understand the various security risks that exist in containerized environments, and specifically in Kubernetes."


Faster Builds and Smaller Images Using BuildKit and Multistage Builds
"Multistage builds feature in Dockerfiles enables you to create smaller container images with better caching and smaller security footprint. In this blog post, I’ll show some more advanced patterns that go beyond copying files between a build and a runtime stage, allowing to get most out of the feature."


"Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder."

GraphQL Voyager
"Represent any GraphQL API as an interactive graph."

SQL diagrams

Something different

Cheating in eSports: How to cheat at virtual cycling

Notes from HelSec Virtual Meetup 1

This year has been challenging for meetups and gatherings, but one upside of the restrictions is that remote work has become more acceptable, and meetups and conferences have invested in streaming and virtual participation, which is great for people living in areas with no meetups.

In early May HelSec held their first Virtual Meetup with great topics. Here are my short notes (finally, four months later). The meetup was streamed via the HelSec Twitch channel and the discussions took place in the HelSec Events Discord. The meetup recording is available on Twitch.

HelSec Virtual Meetup 2020-05-07

HelSec Virtual Meetup #1 (7.5.2020)

Fighting alert fatigue and visibility issues in SOC

Juuso Myllylä from OptimeSys talked about fighting alert fatigue in a security operations center (stream from 41:41 onwards). The goal of the talk was to improve automated detection, introduce the "detection logic killchain" framework he has worked on in his master's thesis, and shift our minds from signature-based detection towards intelligence-based detection.

Threat detection pyramid

Threat detection framework based on design science research method:

  1. Identify: What is a threat? What kind of things make up a threat?
    1. Mitre's ATT&CK framework
    2. Mitre's ATT&CK: Design and philosophy
    3. Example: hijacked Azure AD account detection
    4. Tactic = initial access
  2. Detect: How we can detect a threat?
    1. Logs, logs, logs
    2. Technique detection is also valid
    3. Example technique: valid accounts or phishing
  3. Use Case: Search queries, log sources, etc.
    1. Convert your idea into a security information and event management (SIEM) search query
    2. Procedures: many APT (Advanced Persistent Threat) groups have used valid accounts as an entrypoint
  4. Demonstrate:
    1. Deploy the use case
  5. Evaluate: evaluate detection logic
    1. Analyze the SIEM logs once your SIEM use case has been deployed
    2. e.g. check Azure AD audit logs, eliminate non-related data
    3. Applies also to threat hunting
  6. Communicate: Document your detection logic in Sigma form
    1. Can be shared with others, try to be SIEM agnostic

iPhone BFU Acquisition and Analysis

The meetup continued with iPhone forensics from @1:19 by Timo Miettinen from Nixu. The presentation first explained how the iPhone iOS filesystem's two main partitions are protected: the non-encrypted System partition and the encrypted Data partition. The Data partition is encrypted with a UID key burned into the hardware. The files additionally have 4 classes of Data Protection.

From a forensics point of view, access to the data is protected with many layers: USB connectivity is restricted; logical extraction is divided into an iTunes backup plus some media files (password-protected backups contain more data, and the backup password can be reset but with deviations); full file system extraction needs jailbreaking the device; and there's iCloud extraction (synced backup).

The case discussed in the talk was a lost iPhone which was later returned by law enforcement. The question was: what was done with it while it was missing? Was it stolen or just inspected by friendly authorities? The phone was powered off and the passcode had been changed.

So they had a BFU (Before First Unlock) device in their hands for data extraction: a device that has been powered off or rebooted and has never been subsequently unlocked. The amount of data they could theoretically get is really limited.

In the BFU state the file encryption keys are wiped from the device RAM and only unencrypted class D protected files are available. Biometric authentication is not possible, USB restricted mode is enabled (biometric authentication or a passcode is needed to activate data connections), lockdown records become useless (making logical data acquisition impossible) and a passcode recovery attack falls to BFU speeds.

Acquisition methods:

  • Utilizing exploits and jailbreaks:
    • checkm8: an unpatchable bootrom exploit released by axi0mx in September 2019 which enables jailbreaks, activation lock bypass etc.
    • checkra1n: a jailbreak released in November 2019 which utilizes the checkm8 exploit to run unsigned code on an iOS device. It doesn't always bypass USB restricted mode, depending on the combination of hardware and software versions.
  • Open source and free tools:
    • libimobiledevice has a collection of useful tools:
      • SSH over USB using iproxy
      • ideviceinfo gives iOS and HW versions
      • idevicecrashreport gets crash logs from the device
      • many more
    • ios_bfu_triage: extract available data
    • iTunes if you don't have the BFU restriction
  • Commercial tools: Belkasoft Evidence Center, BlackBag Mobilyze, Cellebrite UFED / Physical Analyzer, Elcomsoft Phone Viewer, Magnet AXIOM, MSAB XRY, Oxygen Forensics Extractor

In their use case the checkra1n jailbreak didn't work and USB restricted mode was activated. Some of the commercial tools were able to extract some data, but they weren't able to read the archive format the software created. They decided to do the analysis manually, which is a good idea even when the tools are working.

Some open source or free tools for analysis:

  • APOLLO (Apple Pattern of Life Lazy Output'er): parses pattern of life data from databases and properties into human readable format.
  • iOS sysdiagnose forensic scripts: parses iOS sysdiagnose logs.
  • iPhone Backup Analyzer: allows the user to browse the content of an iOS backup.
  • iLEAPP (iOS Logs, Events, And Preferences Parser)
  • iBackup Viewer: browse the content of an iOS backup and extract files.
  • ftree: crawl any directory and identify all files etc.
  • deserializer: converts from NSKeyedArchive to normal plist
  • For reading plists you can use: plutil -p <filename>
  • DB Browser for SQLite
  • Google's protobuf utilities (protoc)

When doing analysis you should look for plists, binary plists, plists inside plists (blobs may contain binary plist files) and SQLite databases (shared memory file .shm, write-ahead log .wal). Some applications store data in protocol buffers (protobufs) inside SQLite database blobs, plist files or plain data files. The tools find most of the interesting data, but you can also make your own script to dump all text files, convert plist files to a readable format, dump data from every database, and extract all embedded binary plists from plist files and databases and convert them to a readable format.
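
For example, SQLite databases can hide behind arbitrary file names, but they always begin with the magic string `SQLite format 3`. Here's a minimal sketch of spotting them by magic bytes rather than extension; the extraction directory and the Chat.storedata demo file are made up for illustration:

```shell
# Create a demo "extraction" directory with one disguised SQLite file
mkdir -p extraction
printf 'SQLite format 3\000' > extraction/Chat.storedata

# Identify SQLite databases by magic bytes, regardless of file extension
find extraction -type f | while read -r f; do
  if head -c 15 "$f" | grep -q 'SQLite format 3'; then
    echo "sqlite database: $f"
  fi
done
```

The same idea extends to binary plists, which start with the magic bytes `bplist00`.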

In their case they found out that the phone was reinstalled 12 hours after it was lost. Mobile banking, social media and instant messaging applications were installed, the device was used to communicate with several contacts, and it was used around the city. The phone was stolen and reinstalled with the intention to use it.

Still Fuzzing Faster (U Fool)

Joona Hoikkala talked about web fuzzing and using the ffuf tool for fuzzing directories, login forms, basic auth, virtual domains, content IDs and more. Follow the talk from the stream at 2:19:00; the demo starts around @2:33:00. The slides are a good starting point.

Kind of reverse but describes web fuzzing

You can use fuzzing with different input contents to target i.a. GET parameters (names, values or both), headers (Host, authentication, cookies, proxy headers) and POST data (form data, JSON, files). What to look for (matching)? Response codes, content (regexes) and response sizes (bytes, number of words).

Resources: SecLists

Price of a digital identity

Laura Kankaala, of Robocorp and Team Whack fame, talked about the price of a digital identity starting at 3:20:40. Data is central to both security and privacy: how companies view data and how data sellers view data.

Digital identity:

  • What we are
  • What we have
  • What we produce

Laura also presented that ~90-99% of collected data is dark data, which is collected but not really utilized. And we are just getting started. It's good to remember that our data belongs to us; we give permission to collectors and controllers.

Do you know what your data is worth? Data is valuable, and there are for example companies like and datum which try to monetize it so that the user also gets a part of it. But mostly the data is used for targeted ads, providing content just for us, increasing efficiency and creating better services. And of course everyone remembers Cambridge Analytica and the attempts at affecting electoral processes in the US.

The most valuable things being sold online are credit cards, identity numbers, passports, credentials, phone numbers and home addresses. Passports quite logically have value: e.g. a French passport goes for $124, USA $115, Canada $103, UK $60 and so on, depending on the data included with it.

Kankaala talked about how the collection of our data has sneaked into our lives (e.g. social credit systems). Companies collect data, and when our normal life becomes deeply tangled with our life online, it becomes easier to monitor us, to see what we're up to and to moderate our behaviour. We need to be careful when we allow new types of access to our lives, e.g. COVID-19 tracking.

Regulation, awareness and education are at least a patch for some of these issues. We are hackers and we should be the pathfinders, showing people that things don't have to work the way they do today: even though something works, it doesn't mean it works right or ethically.

We are all vulnerable

Magnus Lundgren from Recorded Future told a tale of two databases, a panda and someone who was listening starting at 4:25:00.

There's a race from when a vulnerability is found and assigned a CVE number until it's either patched or exploited. 12,517 CVEs were first published on NVD in 2016-2017 and it takes on average 33 days until an initial assessment of a vulnerability is made available via NIST's NVD. For example, with Dirty COW (CVE-2016-5195) it took 21 days until the initial release on NVD, but only 8 days for an exploit (a proof of concept shared on Pastebin) to be created and sold/shared on the deep and dark web.

A tale of two databases: NVD (NIST) and CNNVD (CNITSEC). In the Chinese CNNVD it takes on average only 13 days for an initial assessment, compared to 33 days on NVD. The difference comes from the detail that CNNVD is doing active collection while NVD is doing passive collection from vendors. But it isn't always that way, like in the case of an Android backdoor where it took 236 days for CNNVD and 60 days for NVD. It takes longer for CNNVD to publish high-threat vulnerabilities than low-threat ones, and during the publication lag Chinese APT groups are exploiting those vulnerabilities.

When Recorded Future published a blog post identifying 343 "outlier" CVEs (regarding the issue of the CNNVD lag), CNNVD backdated 338 of those CVEs. Someone was listening.


  • Deep / Dark web monitoring of activity is crucial for a good patching cadence.
  • Magic can be done with threat intel data that has been organized for analysis.
  • The Chinese intersection is particularly vicious for foreign companies: the Ministry of State Security (China) runs multiple threat actors (e.g. APT3), runs CNNVD and cherry-picks CNNVD vulnerabilities for targeting.

Resources: Inside Security Intelligence podcast

Monthly notes 51

It's August and after summer holidays it's time to get back to monthly notes. If you read only one note, check the "Some important things to keep in mind when you work remotely" which has good tips also in general. Happy reading :)

Issue 51: 2020-08-07


How to gracefully shut down Pods without dropping production traffic in Kubernetes?
If you've ever noticed dropped connections after a rolling upgrade, read Daniele Polencic's Twitter thread which digs into the details with detailed pictures.

Web development

Prevent Info leaks and enable powerful features: COOP and COEP
"Cross-Origin Embedder Policy (COEP) and Cross-Origin Opener Policy (COOP) isolate your origin and enable powerful features." The video by @agektmr helps you understand how it works and why this is important. Unlock access to new perf API's to help you identify JS bottlenecks, memory leaks, and more. (from @igrigorik)

How To Setup Your Local Node.js Development Environment Using Docker
(from @Docker)

Web Stories are coming to WordPress!
Web Stories are tappable, engaging visual stories brought to the web. They’re powered by AMP technology. (from @pbakaus)

Working remotely

Some important things to keep in mind when you work remotely
Check the Twitter thread for 10 great tips for working remotely. They are good tips in general, too. I've found tip 8 to be especially great: writing notes and making (public) blog posts of them helps you process new information better and also helps other developers. Documentation is often undervalued and it takes time to do it correctly.

Software development

It's probably time to stop recommending Clean Code
"There is a growing movement against Rob Martin's books (e.g., Clean Code). After reading the article, I have to agree with a lot of it, but I also hope that this movement doesn't push too far to the other side." (from @maybeFrederick) My take is: don't believe everything you read, be it in a book or nowadays on the Internet. Use your own thinking and reasoning. "Clean Code" has good points and suggestions but also goes a bit overboard with how "clean" things should look.


"Boop is a place to paste text, and transform it using basic operations. The goal is to allow quick experimentation and avoid using random websites to do that stuff. It's super useful when working with logs, JSON data, etc." (from @OKatBest). This is what I've always needed. No more searching for online tool for a specific task (or looking it from which is a great collection).

Fully embedded bug-tracker in git: you only need your git repository to have a bug tracker.

Something different

Remy Metailler Smashes Squamish Mountain Bike Trails

Following a Pro Enduro Racer Down Whistler's Hardest Trails // Wyn Masters

Hands-on learning Cloud Technologies with QwikLabs

I've used Google Cloud Platform for some time and got an opportunity to attend Codemen Cloud Academy's Google Workshop, which concentrated on the "Kubernetes in the Google Cloud" and "Google Cloud Run Serverless Workshop" topics using the Qwiklabs platform. Here are my (very) short notes from the workshop and from using Qwiklabs. Most of the things I had already used by running our service on GKE, but there's always something to learn from others' experiences.

Google Cloud Workshop with Qwiklabs

Qwiklabs is a platform for learning cloud technologies by following exercises and hands-on training. It gives temporary credentials to Google Cloud Platform and Amazon Web Services, so you can learn the cloud using the real thing.

The workshop used the Cloud Study Jams 2020 session contents. After we completed the first lab, we were automatically granted a 30-day pass to continue doing the rest of the labs. The quests in the labs are "priced" in credits which you can buy ($1 per credit) or get with a workshop code.

Kubernetes in Google Cloud

The "Kubernetes in Google Cloud" quest in Qwiklabs is an advanced-level quest which gives you hands-on practice in configuring Docker images and containers and deploying fully-fledged Kubernetes Engine applications. It teaches the practical skills needed for integrating container orchestration into your own workflow.

Kubernetes in Google Cloud quests outline

There's not much to tell about the quests' contents except a bunch of docker, gcloud and kubectl commands, so I'll not go through them here.

The Kubernetes in Google Cloud quest in Qwiklabs was as hands-on as it promised, and the final "Challenge Lab" put all the things together with a quite strict time limit. Although I had made notes from the previous quests, I only just managed to paste the commands and wait for the cloud to provision, especially for the Jenkins service to run the continuous integration jobs.

Google Cloud console


Overall the "Kubernetes in Google Cloud" lab was an excellent overview of Kubernetes and how things work in Google Cloud. It covered the essential topics and showed how to do things in practice. It helped to have previous experience with Google Cloud, but everything was explained and shown so you can learn by doing.

Qwiklabs Google Cloud quests

Qwiklabs also has other Google Cloud related labs, as shown below, but I didn't have time to go through them (I totally forgot :/), although the participants who completed the Kubernetes course got a two months' free pass to the platform.

Infrastructure and Architecture quests
Machine Learning and Data quests
BigQuery quests

Monthly notes 50

Issue 50, 15.6.2020


AWS Lambda — should you have few monolithic functions or many single-purposed functions?
An interesting question of whether the single responsibility principle (SRP) should be followed in the serverless world. What is a "function" if not SRP? TL;DR: many single-purposed functions are better.


Twitter search of "telling early-in-career engineers stories of times you messed something up real bad is a good way to help them combat their own impostor syndrome." from (@ElleArmageddon)


In Kubernetes, what should I use as CPU requests and limits?
A good Twitter thread on the difference between requests and limits.

How should I answer a health check?
Explains how to use liveness and readiness probes (on Kubernetes). I've heard that the liveness probe should always be off unless the app has a bug from which it can't recover. Also, long checks can be cached.

Managed Kubernetes Price Comparison (2020)
"TL;DR: Azure and Digital Ocean don’t charge for the compute resources used for the control plane, making AKS and DO the cheapest for running many, smaller clusters. For running fewer, larger clusters GKE is the most affordable option. Also, running on spot/preemptible/low-priority nodes or long-term committed nodes makes a massive impact across all of the platforms."


Performance profiling for Web Applications with Sam Saccone
"How to use Chrome DevTools to understand a Web application's performance bottlenecks. Goes over a few different workflows that will help us to answer the question "Why is this slow and how can I fix it"."


A GNU/Linux port of the Little Snitch application firewall. (from Hacker Newsletter #490, comments)

kubectl-debug is an out-of-tree solution for troubleshooting running pods, which allows you to run a new container in a running pod for debugging purposes (examples). The new container joins the pid, network, user and ipc namespaces of the target container, so you can use arbitrary troubleshooting tools without pre-installing them in your production container image.

Lighthouse audit add-on for Firefox
"Report, Performance, Accessibility, PWAs, SEO scores for any public site. Without opening DevTools."

Generating JWT and JWK for information exchange between services

Securely transmitting information between services and authorization can be achieved by using JSON Web Tokens. JWTs are an open, industry-standard RFC 7519 method for representing claims securely between two parties. Here's a short explanation and guide of what they are, how they are used, and how to generate the needed keys.

"JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA."

You should read the introduction to JWT to understand its role, and there's also a handy JWT Debugger to test things. For more detailed info you can read the JWT Handbook.

In short, authorization and information exchange are some scenarios where JSON Web Tokens are useful. They essentially encode any set of identity claims into a payload, provide some header data about how it is to be signed, then calculate a signature using one of several algorithms and append that signature to the header and claims. JWTs can also be encrypted to provide secrecy between parties. When a server receives a JWT, it can guarantee the data it contains can be trusted because it's signed by the source.

Usually two algorithms are supported for signing JSON Web Tokens: RS256 and HS256. RS256 generates an asymmetric signature, which means a private key must be used to sign the JWT and the corresponding public key must be used to verify the signature.
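
To make the structure concrete, here's a minimal HS256 token assembled by hand with openssl and base64. This is a sketch for understanding the format only, not for production use; the secret and the claims are made up:

```shell
# base64url encoding: strip padding and newlines, swap +/ for -_
b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"sub":"1234567890","name":"John Doe"}' | b64url)

# HS256 signature: HMAC-SHA256 over "header.payload" with a shared secret
secret="your-256-bit-secret"
signature=$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -hmac "$secret" -binary | b64url)

echo "$header.$payload.$signature"
```

A verifier recomputes the HMAC over header.payload with the same secret and compares it to the signature; with RS256 the signature would instead be created with the private key (openssl dgst -sha256 -sign) and checked with the public key.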

JSON Web Key

JSON Web Key (JWK) provides a mechanism for distributing the public keys that can be used to verify JWTs. The specification is used to represent the cryptographic keys used for signing RS256 tokens. It defines two high-level data structures: JSON Web Key (JWK) and JSON Web Key Set (JWKS):

  • JSON Web Key (JWK): A JSON object that represents a cryptographic key. The members of the object represent properties of the key, including its value.
  • JSON Web Key Set (JWKS): A JSON object that represents a set of JWKs. The JSON object MUST have a keys member, which is an array of JWKs. The JWKS is a set of keys containing the public keys that should be used to verify any JWT.

In short, the service signs JWT tokens with its private key (in this case in PKCS12 format) and the receiving service checks the signature with the public key, which is in JWK format.

Generating keys and certificate for JWT

In this example we are using JWTs for information exchange as they are a good way of securely transmitting information between parties. Because JWTs can be signed, for example using public/private key pairs, you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Generate the private key for the JWT with OpenSSL; in this case a self-signed certificate is enough:

$ openssl genrsa -out private.pem 4096

Generate the public key from the private key generated earlier, in case pem-jwk needs it; it isn't needed otherwise:

$ openssl rsa -in private.pem -out public.pem -pubout

If you try to create a PKCS12 file from the private and public keys without a certificate, you get an error:

$ openssl pkcs12 -export -inkey private.pem -in public.pem -out keys.p12
unable to load certificates

Generate a self-signed certificate with the aforementioned key, valid for 10 years. This certificate isn't used for anything, as the counterpart is a JWK with just the public key, no certificate.

$ openssl req -key private.pem -new -x509 -days 3650 -subj "/C=FI/ST=Helsinki/O=Rule of Tech/OU=Information unit/" -out cert.pem

Convert the above private key and certificate to PKCS12 format

$ openssl pkcs12 -export -inkey private.pem -in cert.pem -out keys.pfx -name "my alias"

Check the keystore:

$ keytool -list -keystore keys.pfx
$ keytool -v -list -keystore keys.pfx -storetype PKCS12 -storepass
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
1, Jan 18, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA-256): 0D:61:30:12:CB:0E:71:C0:F1:A0:77:EB:62:2F:91:9B:55:08:FC:3B:A5:C8:B4:C7:B4:CD:08:E9:2C:FD:2D:8A

If you didn't set an alias for the key when creating the PKCS12 file, you can change it:

$ keytool -changealias -alias "original alias" -destalias "my awesome alias" -keystore keys.pfx -storetype PKCS12 -storepass "password"

Now we finally get to the part where we generate the JWK. The final result is a JSON file which contains the public key from the earlier created certificate in JWK format, so that the service can accept the signed tokens.

The JWK is in the format of:

{
  "keys": [
    {
      "kid": "something",
      "kty": "RSA",
      "use": "sig",
      "n": "…base64 public key values …",
      "e": "…base64 public key values …"
    }
  ]
}

Convert the PEM to JWK format with e.g. pem-jwk. The key is in PKCS12 format. The values for the public key's n and e are extracted from the private key with the following commands. The jq part extracts the public parts and excludes the private parts.

$ npm install -g pem-jwk
$ ssh-keygen -e -m pkcs8 -f private.pem | pem-jwk | jq '{kid: "something", kty: .kty , use: "sig", n: .n , e: .e }'
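
If you'd rather avoid the npm dependency, the n and e values can also be derived with plain openssl and coreutils. This is a sketch assuming the default public exponent 65537 (which is AQAB in base64url) and that xxd is available; the generated key and the "something" kid are only for illustration:

```shell
# Generate a demo RSA key and derive the JWK "n" (modulus) value from it
openssl genrsa -out jwk_demo.pem 2048
mod_hex=$(openssl rsa -in jwk_demo.pem -noout -modulus | cut -d= -f2)
# hex -> binary -> base64url (strip padding and newlines, swap +/ for -_)
n=$(printf '%s' "$mod_hex" | xxd -r -p | base64 | tr -d '=\n' | tr '+/' '-_')
# 65537, the openssl default public exponent, is "AQAB" in base64url
e="AQAB"
printf '{"kid": "something", "kty": "RSA", "use": "sig", "n": "%s", "e": "%s"}\n' "$n" "$e"
```

The resulting JSON should match what pem-jwk and jq produce for the same key.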


To check things, you can do the following.

Extract a private key and certificates from a PKCS12 file using OpenSSL:

$ openssl pkcs12 -in keys.p12 -out keys_out.txt

The private key, certificate, and any chain files will be parsed and dumped into the "keys_out.txt" file. The private key will still be encrypted.

To extract just the private key from p12 (key is still encrypted):

$ openssl pkcs12 -in keys.p12 -nocerts -out privatekey.pem

Decrypt the private key:

$ openssl rsa -in privatekey.pem -out privatekey_uenc.pem

Now if you convert the PEM to JWK you should get the same values as before.
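
To convince yourself that nothing is lost along the way, you can also round-trip a key through PKCS12 and compare the moduli. A self-contained sketch; the demo_* file names and the pass:secret password are made up for the example:

```shell
# Generate a key and self-signed cert, pack them into PKCS12, extract back, compare
openssl genrsa -out demo_private.pem 2048
openssl req -key demo_private.pem -new -x509 -days 1 -subj "/C=FI/O=Demo" -out demo_cert.pem
openssl pkcs12 -export -inkey demo_private.pem -in demo_cert.pem \
  -passout pass:secret -out demo_keys.p12
# -nodes leaves the extracted private key unencrypted
openssl pkcs12 -in demo_keys.p12 -passin pass:secret -nocerts -nodes -out demo_extracted.pem
openssl rsa -in demo_private.pem -noout -modulus > orig.mod
openssl rsa -in demo_extracted.pem -noout -modulus > extracted.mod
cmp -s orig.mod extracted.mod && echo "keys match"
```

If the moduli differ, something went wrong in the export or extraction step.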

More to read: JWTs? JWKs? ‘kid’s? ‘x5t’s? Oh my!

Notes from DEVOPS 2020 Online conference

DevOps 2020 Online was held on 21.4. and 22.4.2020; the first day covered Cloud & Transformation and the second was a 5G DevOps Seminar. Here are some quick notes from the talks I found most interesting. The talk recordings are available on the conference site.

DevOps 2020

How to improve your DevOps capability in 2020

Marko Klemetti from Eficode presented three actions you can take to improve your DevOps capabilities. The talk looked at current DevOps trends against organizations on different maturity levels and gave ideas on how you can improve tooling, culture and processes.

  1. Build the production pipeline around your business targets.
    • Automation builds bridges until you have self-organized teams.
    • Adopt a DevOps platform. Aim for self-service.
  2. Invest in a Design System and testing in natural language:
    • Brings people in the organization together.
    • Testing is the common language between stakeholders.
    • You can have discussion over the test cases: automated quality assurance from stakeholders.
  3. Validate business hypothesis in production:
    • Enable canary releasing to lower the deployment barrier.
    • You cannot improve what you don't see. Make your pipeline data-driven.

The best practices from elite performers are available for all maturity levels: DevOps for executives.

Practical DevSecOps Using Security Instrumentation

Jeff Williams from Contrast Security talked about how we need a new approach to security that doesn't slow development or hamper innovation. He showed how you can ensure software security from the "inside out" by leveraging the power of software instrumentation. It establishes a safe and powerful way for development, security, and operations teams to collaborate.

DevSecOps is about changing security, not DevOps
What is security instrumentation?
  1. Security testing with instrumentation:
    • Add matchers to catch potentially vulnerable code and report rule violations when they happen, like using unparameterized SQL. Similar to what static code analysis does.
  2. Making security observable with instrumentation:
    • Check for e.g. access control for methods
  3. Preventing exploits with instrumentation:
    • Check that command isn't run outside of scope

The examples were written in Java, but the security checks should be implementable on other platforms too.

Modern security (inside - out)

Their AppSec platform's Community Edition is free to try out, but only for Java and .NET.

Open Culture: The key to unlocking DevOps success

Chris Baynham-Hughes from Red Hat talked about how the blockers for DevOps in most organisations are people and process based rather than a lack of tooling. Addressing issues relating to culture and practice is key to breaking down organisational silos, shortening feedback loops and reducing the time to market.

Start with why
DevOps culture & Practice Enablement:

Three layers required for effective transformation:

  1. Technology
  2. Process
  3. People and culture
Open source culture powers innovation.

Scaling DevSecOps to integrate security tooling for 100+ deployments per day

Rasmus Selsmark from Unity talked about how Unity integrates security tooling into the deployment process. Best practice for securing your deployments is to run security scanning tools as early as possible in your CI/CD pipeline, not as an isolated step after the service has been deployed to production. The session covered best security practices for securing the build and deployment pipeline, with examples and tooling.

  • Standardized CI/CD pipeline, used to deploy 200+ microservices to Kubernetes.
Shared CI/CD pipeline enables DevSecOps
Kubernetes security best practices
DevSecOps workflow: Early feedback to devs <-----> Collect metrics for security team
  • Dev:
    • Keep dependencies updated: Renovate.
    • No secrets in code: unity-secretfinder.
  • Static analysis
    • Sonarqube: Identify quality issues in code.
    • SourceClear: Information about vulnerable libraries and license issues.
    • trivy: Vulnerability Scanner for Containers.
    • Make CI feedback actionable for teams, like generating notifications directly in PRs.
  • When to trigger deployment
    • PR with at least one approver.
    • No direct pushes to master branch.
    • Only CI/CD pipeline has staging and production deployment access.
  • Deployment
    • Secrets management using Vault. Secrets separate from codebase, write-only for devs, only vault-fetcher can read. Values replaced during container startup, no environment variables passed outside to container.
  • Production
    • Container runtime security with Falco: identify security issues in containers running in production.
A standardized CI/CD pipeline makes it possible to introduce security features across teams and microservices.
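The "no secrets in code" check above can be approximated with a handful of regexes. unity-secretfinder is Unity's internal tool, so the standalone sketch below (with made-up rule names and patterns) only illustrates the idea:

```python
import re

# Illustrative rules only; real scanners ship far larger rule sets
# plus entropy heuristics to catch random-looking tokens.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, line_number) pairs for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Run against the diff of each pull request and fail the build on any finding, so the feedback lands before merge rather than after deployment.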

Data-driven DevOps: The Key to Improving Speed & Scale

Kohsuke Kawaguchi, creator of Jenkins, from Launchable talked about why some organizations are more successful with DevOps than others and where those differences seem to be made. One difference is around data (insight), another around how they leverage the economy of scale.

Cost/time trade-off:

  • CFO: why do we spend so much on AWS?
    • Visibility into cost at project level
    • Make developers aware of the trade-off they are making: Build time vs. Annual cost
      • Small: 15 mins / $1000; medium: 10 mins / $2000; large: 8 mins / $3000
  • Whose problem is it?
    • A build failed: Who should be notified first?
      • Regular expression pattern matching
      • Bayesian filter
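The "who should be notified first" idea can be sketched as a tiny naive Bayes classifier over words in the failure log. This is an illustration of the technique the talk names, not Launchable's implementation:

```python
from collections import Counter, defaultdict
import math

class NaiveBayesRouter:
    """Toy naive Bayes classifier that routes a build-failure log to the
    team most likely responsible, trained on previously triaged failures."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # team -> Counter of words
        self.team_counts = Counter()             # team -> training examples

    def train(self, log_text, team):
        self.team_counts[team] += 1
        self.word_counts[team].update(log_text.lower().split())

    def route(self, log_text):
        words = log_text.lower().split()
        total = sum(self.team_counts.values())
        scores = {}
        for team, n in self.team_counts.items():
            vocab = self.word_counts[team]
            size = sum(vocab.values())
            # log prior + Laplace-smoothed log likelihoods,
            # so unseen words don't zero out a team's score
            score = math.log(n / total)
            for w in words:
                score += math.log((vocab[w] + 1) / (size + len(vocab)))
            scores[team] = score
        return max(scores, key=scores.get)
```

In practice you would train it from the triage history of past incidents and notify the top-scoring team first, falling back to broader notification if they dismiss it.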

Improving the software delivery process doesn't get prioritized:

  • Data (& story) helps your boss see the problem you see
  • Data helps you apply effort to the right place
  • Data helps you show the impact of your work

Cut the cost & time of the software delivery process

  1. Dependency analysis
  2. Predictive test selection
    • You wait 1 hour for CI to clear your pull request?
    • Your integration tests only run nightly?
    • Reordering tests: reducing time to first failure (TTFF)
    • Creating an adaptive run: run a subset of your tests
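Reordering and subsetting by historical failure data can be sketched like this. Launchable's real models also use code-change features; this shows only the core idea:

```python
def reorder_by_failure_rate(tests, history):
    """Run the historically most failure-prone tests first to cut the
    time to first failure (TTFF). `history` maps test name -> list of
    booleans from recent runs (True = failed)."""
    def failure_rate(name):
        runs = history.get(name, [])
        return sum(runs) / len(runs) if runs else 0.0
    return sorted(tests, key=failure_rate, reverse=True)

def adaptive_run(tests, history, budget):
    """Adaptive run: only execute the `budget` most failure-prone tests,
    leaving the rest to a nightly full run."""
    return reorder_by_failure_rate(tests, history)[:budget]
```

With a one-hour suite, running the riskiest subset on each pull request and the full suite nightly recovers most of the signal at a fraction of the wait.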

Deployment risk prediction: Can we flag risky deployments beforehand?

  • Learn from previous deployments to train the model
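A minimal sketch of such a model: a logistic score over hand-picked deployment features, where the weights would in practice be learned from labeled outcomes of past deployments. All feature names and weights below are made up for illustration:

```python
import math

def deployment_risk(features, weights, bias=-2.0):
    """Logistic risk score in [0, 1] for a deployment.
    `features` and `weights` are dicts keyed by feature name,
    e.g. lines changed, off-hours flag, recent incident count."""
    z = bias + sum(weights.get(k, 0.0) * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Deployments scoring above a threshold could be flagged for extra review or routed to a slower, more heavily monitored rollout.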


  • Automation is table stakes
  • Using data from automation to drive progress isn't
    • Lots of low-hanging fruit there
  • Unicorns are using "big data" effectively
    • How can the rest of us get there?

Moving 100,000 engineers to DevOps on the public cloud

Sam Guckenheimer from Microsoft talked about how Microsoft moved to Azure DevOps and GitHub while running a globally distributed 24x7x365 service on the public cloud. The session covered organizational and engineering practices in five areas.

Customer Obsession

  • Connect with customers directly and measure:
    • Direct feedback in product, visible on the public site, and captured in the backlog
  • Develop a personal connection and cadence
    • Top customers have a "Champ" who maintains regular personal contact, a long-term relationship and an understanding of customer desires
  • Definition of done: live in production, collecting telemetry that examines the hypothesis which motivated the deployment
  • Ship to learn

You Build It, You Love It

  • Live site incidents
    • Communicate externally and internally
    • Gather data for repair items & mitigate for customers
    • Record every action
    • Use repair items to prevent recurrence
  • Be transparent

Align outcomes, not outputs

  • You get what you measure (don't measure what you don't want)
    • Customer usage: acquisition, retention, engagement, etc.
    • Pipeline throughput: time to build, test, deploy, improve, failed and flaky automation, etc.
    • Service reliability: time to detect, communicate, mitigate; which customers affected, SLA per customer, etc.
    • "Don't" measure: original estimate, completed hours, lines of code, burndown, velocity, code coverage, bugs found, etc.
  • Good metrics are leading indicators
    • Trailing indicators: revenue, work accomplished, bugs found
    • Leading indicators: change in monthly growth rate of adoption, change in performance, change in time to learn, change in frequency of incidents
  • Measure outcomes not outputs

Get clean, stay clean

  • Progress follows a J-curve
    • Getting clean is highly manual
    • Staying clean requires dependable automation
  • Stay clean
    • Make technical debt visible on every team's dashboard

Your aim won't be perfect: Control the impact radius

  • Progressive exposure
    • Deploy one ring at a time: canary, data centers with small user counts, highest latency, the rest.
    • Feature flags control the access to new work: setting is per user within organization
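Progressive exposure with per-user feature flags might be sketched like this: users are hashed deterministically into rings, and a feature becomes visible once its rollout has reached the user's ring. The hashing scheme and ring names here are assumptions, not Microsoft's implementation:

```python
import hashlib

# Illustrative ring order, innermost (smallest blast radius) first
RINGS = ["canary", "small-dc", "high-latency", "broad"]

def ring_for(feature, org_id, user_id):
    """Stable hash so a given user always lands in the same ring
    for a given feature within their organization."""
    key = f"{feature}:{org_id}:{user_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(RINGS)
    return RINGS[bucket]

def is_enabled(feature, org_id, user_id, enabled_ring_index):
    """Feature is on once the rollout has reached the user's ring."""
    return RINGS.index(ring_for(feature, org_id, user_id)) <= enabled_ring_index
```

Advancing `enabled_ring_index` one step at a time, and watching telemetry between steps, keeps the impact radius of a bad deployment small.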

Shift quality left and right

  • Pull requests control code merge to master
  • Pre-production tests check every CI build

Monthly notes 49

Working From Home edition.

Issue 49, 27.3.2020


Now that COVID-19 has all of us in quarantine and working from home, technology conferences have also moved online and become free. Here are some.

Working From Home Conf with talks from technology to projects, best practices, lessons learned and about working from home. Agenda and recorded videos: part 1, part 2, part 3.

DEVOPS 2020, April 21 to 22
The Next Decade. The first day (main conference) has interesting talks on use cases and lessons learned, scaling DevSecOps, transformation journeys and more.

MagnoliaJS, April 15-17
JavaScript-heavy talks on component reusability, modern JavaScript for modern browsers, JavaScript’s exciting new features, how to supercharge teams and much more.

Red Hat Summit 2020, April 28-29

Tips and tricks

How to do effective video calls
tl;dr: "Get good audio, use gallery view, mute if not talking, and welcome the cat."


Work together like you’re in the same room. Fast screen sharing with multiplayer control, drawing & video. (from @use_screen)

An open-source screen recorder built with web technology.

The paramount collection of productive Mac apps.