December is full of Christmas carols and pre-holiday hassle. So take a short break: learn to master Kubernetes, become a better human and developer, and make remote working a success. Also think about privacy. Good reading and happy holidays!
Issue 46, 17.12.2019
Mastering the KUBECONFIG file Good tips like Auto-$KUBECONFIG based on directory with direnv; Know which context you’re pointing at with kube-ps1; Save GKE contexts to separate files. (from @walokra)
20 ways to become a better Node.js developer in 2020 "20 skills, technologies and considerations on choosing between them. Picking the right tools became one of our greatest challenges — the Node.js ecosystem has matured and presents attractive options in almost every field. Vanilla or TypeScript? Ava, Mocha or Jest? Express, Fastify or Koa? Or maybe Nest?"
How to Make Remote a Success "It's all about sharing and communicating". E.g. Write down everything: knowledge base to blog posts, make weekly notes; Make everyone feel connected: smarter meetings, daily check-ins/check-outs. Hacker News comments
Falco Falco is an automatic, easy-to-use Web Performance auditing tool. Open Source WebPageTest runner which helps you monitor, analyze, and optimize your websites. (from @PHacks)
My favourite Git commit A good example of how Git commit messages should be written, especially if the change is ambiguous. Writing explanatory commits takes more effort than just "Fixed it", but it pays off later. (from @walokra)
A Practical Framework for DevSecOps Nice overview of the key #DevSecOps domains and activities. "With a limited budget start with Monitoring and Responding. Then focus on how to prevent vulnerabilities from being introduced in the first place." (from @walokra)
Docker for Pentesters Docker has completely changed my workflow, and I wrote up 10 examples and scripts for how pentesters can leverage Docker to speed up testing. Lmk how you use Docker - this could be a series! (from @walokra)
The secret life of GPS trackers "We decided to take a look at several child (GPS) trackers available on Amazon, eBay, and Alibaba to see how they stood up to our scrutiny."
Dumbass Home 2.0 Excellent overview of the "Smart" home and the available solutions. "The S in IoT stands for Security", so use a separate WiFi network, Zigbee, a hub built with a Raspberry Pi, RaspBee & Home Assistant (or Hue/SmartThings), and gadgets from Trådfri, Xiaomi (~), Philips & Osram at a discount. (from @walokra)
What could be more annoying than committing code changes to the repository and noticing afterwards that the formatting isn't right or tests are failing? Your automated tests on Continuous Integration show rain clouds and you need to get back to the code and fix minor issues, with extra commits polluting the Git history. Fortunately, with small enhancements to your development workflow you can automatically prevent all this hassle and check your changes before committing them. The answer is to use Git hooks, for example running linters and tests on pre-commit.
Git hooks are scripts that Git executes before or after events such as: commit, push, and receive. They're a built-in feature and run locally. Hook scripts are only limited by a developer's imagination. Some example hook scripts include:
pre-commit: Check the commit for linting errors.
pre-receive: Enforce project coding standards.
post-commit: Email team members of a new commit.
post-receive: Push the code to production.
Every Git repository has a .git/hooks folder with a script for each hook you can bind to. You're free to change or update these scripts as necessary, and Git will execute them when those events occur.
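The sample hooks are easy to explore; a quick sketch (using a throwaway repository in a temp directory):

```shell
# Every new repository ships with sample hook scripts you can adapt.
repo=$(mktemp -d)
cd "$repo"
git init -q
ls .git/hooks
# Activate a sample by dropping the .sample suffix and making it executable:
cp .git/hooks/pre-commit.sample .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```

Git runs the executable file named after the event, so that's all there is to installing a local hook.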
Git hooks can greatly increase your productivity as a developer as you can automate tasks and ensure that your code is ready for commit or pushing to remote repository.
For example, a pre-commit hook that runs ktlint with the auto-correct option lives in the project's .git/hooks/pre-commit file. The "export PATH=/usr/local/bin:$PATH" line is there so that SourceTree finds git on macOS.
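Such a hook might look like the following sketch; the exact ktlint invocation is an assumption, so adjust it to your project:

```shell
# Install a pre-commit hook that runs ktlint with auto-correct (-F).
# The ktlint call is illustrative; adapt the command for your build setup.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# GUI clients like SourceTree on macOS don't inherit the shell PATH,
# so add /usr/local/bin to let them find git and ktlint.
export PATH=/usr/local/bin:$PATH
ktlint -F
EOF
chmod +x .git/hooks/pre-commit
```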
The main disadvantage of using pre-commit and local Git hooks is that hooks are kept in the .git directory, which never reaches the remote repository. Each contributor has to install them manually in their local repository, which is easily overlooked.
The Githook Maven plugin deals with the problem of keeping hook configuration in the repository and automates hook installation. It binds to the Maven project's build process and configures and installs local Git hooks.
It keeps a mapping between the hook name and the script by creating a corresponding file in .git/hooks, containing the given script, during the Maven project's initial lifecycle phase. Note that the plugin overwrites existing hooks.
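A sketch of the pom.xml configuration — the coordinates, version and hook body below are from memory and illustrative, so verify them against the plugin's README:

```xml
<build>
  <plugins>
    <plugin>
      <!-- Coordinates are illustrative; check the plugin's documentation. -->
      <groupId>io.github.phillipuniverse</groupId>
      <artifactId>githook-maven-plugin</artifactId>
      <version>1.0.5</version>
      <executions>
        <execution>
          <goals>
            <goal>install</goal>
          </goals>
          <configuration>
            <hooks>
              <pre-commit>
                echo "Running checks before commit"
                mvn -q verify
              </pre-commit>
            </hooks>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place the hook is (re)installed on every build, so contributors get it without any manual steps.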
On Node.js projects you can define scripts in package.json and run them with npm, which enables another approach to running Git hooks.
🐶 Husky is Git hooks made easy for Node.js projects. It keeps existing user hooks, supports GUI Git clients and all Git hooks.
Installing Husky is like installing any other npm library:
npm install husky --save-dev
The following configuration in your package.json runs the lint command (e.g. eslint with --fix) when you try to commit, and runs lint and tests (e.g. mocha, jest) when you try to push to the remote repository.
"pre-commit": "npm run lint",
"pre-push": "npm run lint && npm run test"
Another useful tool is lint-staged which utilizes husky and runs linters against staged git files.
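A typical setup of that era wires husky's pre-commit hook to lint-staged in package.json; the glob pattern and commands here are illustrative:

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.js": ["eslint --fix", "git add"]
  }
}
```

Running linters only against staged files keeps the hook fast even in large repositories.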
Make your development workflow easier by automating all the things. Check your changes before committing them with pre-commit, husky or Githook Maven plugin. You get better code and commit quality for free and your team is happier.
This article was originally published at 15.7.2019 on Gofore's blog.
Summer holidays are over and it's time to get back to work and monthly notes. I spent almost the whole of August enjoying nature, mountain biking, hiking and coaching young mountain bikers. Less computers, more relaxing. This month's notes are about writing great Docker images, validating code using Git hooks, log management, a story about the npm registry, working remotely and Effective Kotlin. Happy reading.
Automate validating code changes with Git hooks What could be more annoying than committing code changes and noticing afterwards that the formatting isn't right or tests are failing? Read these tips on how to automate validating code changes with Git hooks and make your flow smooth.
Fast log management for your apps Nicolas Frankel talked at Berlin Buzzwords about logging. Good overview of the issue. TL;DR: no computation in logs, filesystem matters, asynchronous vs. reliability, no expensive metadata, schema on write, send JSON.
11 Best Practices for Working Remotely Good tips for working remotely. The biggest hurdles are communication, social opportunities and loneliness and isolation. "With consistent effort, you can overcome the challenges of remote work and create a healthy, happy, productive environment for yourself and for your team." (from @dunjardl)
Effective Kotlin beta release Adding this to my reading list! "First official version of Effective Kotlin is finally in distribution (as an ebook)". Having read Effective Java, I expect this book to be totally worth it.
How to write great container images The article shows the principles of what the writer considers "Dockerfile best practices", and walks through them with a real example. I would add: use a small base image like Alpine Linux if possible.
Micro Frontends The article describes breaking up frontend monoliths into many smaller, more manageable pieces, and how this architecture can increase the effectiveness and efficiency of teams working on frontend code. As well as talking about the various benefits and costs, it covers some of the implementation options that are available, and dives deep into a full example application that demonstrates the technique.
Performance Analysis Methodology Informative presentation of Performance Analysis Methodology by Brendan Gregg at LISA '12. Focus on the USE method which all staff can use for identifying common bottlenecks and errors. Check for: Utilization, Saturation, Errors. (from walokra)
Fast log management for your apps You've migrated your application to Reactive Microservices to get the last ounce of performance from your servers. But what about logs? Logs can be one of the few roadblocks on the road to ultimate performance. Nicolas Frankel shows in his talk at Berlin Buzzwords 2019 some insider tips and tricks, taken from experience, to put you on track toward fast(er) log management.
Nginx Admin's Handbook nginx is a powerful web server but with great power comes great responsibility (to configure it for security and performance). "Nginx Admin's Handbook" is a good collection of rules, helpers, notes and papers, best practices and recommendations to achieve it. (from walokra)
GOTCHA: Taking phishing to a whole new level Without X-Frame-Options you can build a UI redressing attack that allows attackers to extract valuable information from API endpoints. tl;dr: extract chars with CSS, add a captcha form, scramble chars, get the user to fill in the password-captcha.
Before committing code to the Subversion repository we always set the svn:ignore property on the directory to prevent certain files and directories from being checked in. You would usually want to exclude the IDE project files and the target/ directory.
It's useful to put all the ignored files and directories into a file, .svnignore. Your .svnignore could look like this:
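For a typical Java/Maven project the list might be the following (entries are illustrative; svn:ignore matches names within the directory):

```
target
*.iml
.settings
.project
.classpath
```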
Put the .svnignore file in the project folder and commit it to your repository so the ignored files are shared between committers.
Now set the property from the file with the -F option, giving the directory as the target: $ svn propset svn:ignore -F .svnignore .
Of course I hope everyone has by now moved to git and uses .gitignore for this same purpose.
Midsummer is a couple of days away and it's time to take a short break from work and enjoy the summer nights and nature. And if you have time, here is a short list of articles to read and videos from the React Finland 2019 conference to watch.
Issue 42, 20.6.2019
Consulting or con-$ulting A theory on how Hertz’s inexperience in buying software — combined with Accenture’s incompetence to deliver it — flushed $32M+ down the drain. "The lack of transparency and technical expertise combined with the lack of ownership/responsibility was ultimately the reason why Hertz managed to blow tens of millions USD, instead of just a couple." Lessons learned: "If you are buying software for tens of millions, you must have an in-house technical expert as part of the software development process".
Be careful with CTE in PostgreSQL PostgreSQL doesn't inline common table expressions (the WITH clause); it materializes them and is thus unable to utilize indexes => expensive. Good to know if you're used to Oracle, which doesn't materialize CTEs by default. (from walokra)
Can't Unsee "The devil is in the details". A game where your attention to details earns you a lot of coins. Fun game which teaches you some UX rules and attention to details. With 5780 coins I'm a beginner :/ (or need glasses :))
It feels fine on my phone "You literally can't afford desktop or iphone levels of JS if you're trying to make good web experiences for anyone but the world's richest users, and that likely means re-evaluating your toolchain."
Gitmoji Setting aside the issue it causes on Bamboo (thread), using emojis in Git commit messages is a nice idea. There's even a cool emoji guide for your commit messages. Going to take this into use 😊 (from walokra)
Happy Friday, Don't push to production? Good thread of how you should treat your deploys to production. You should deploy often and have good CI/CD practices but the overall question isn't black or white. "Nothing goes wrong until it does, and then you'd want your people available." "If you're scared of pushing to production on Fridays, I recommend reassigning all your developer cycles off of feature development and onto your CI/CD process and observability tooling for as long as it takes to ✨fix that✨." (from walokra)
Sleep quality and stress level matter, especially after 24 hours awake "Your sleep quality and stress level matter far, far more than the languages you use or the practices you follow. Nothing else comes close". Good notes on why sleep and rest matter (thread) 😴 There's always more work to do; take care of yourself first! (from walokra)
"Work starts from problems and learning starts from questions. Work is creating value and learning is creating knowledge. Both work and learning require the same things: interaction and engagement." (from EskoKilpi)
Using version control is an essential part of modern software development and using it efficiently should be part of every developer's tool kit. Knowing the basic rules makes it even more useful. Here are some best practices that help you on your way.
Commit logical changesets (atomic commits)
Commit Early, Commit Often
Write Reasonable Commit Messages
Don't Commit Generated Sources
Don't Commit Half-Done Work
Test Before You Commit
Agree on a Workflow
Commit logical changesets (atomic commits)
A commit should be a wrapper for related changes. Make sure your change reflects a single purpose: the fixing of a specific bug, the addition of a new feature, or some particular task. Small commits make it easier for other developers to understand the changes and roll them back if something went wrong.
Your commit creates a new revision identifier (in Git, the commit hash) which can forever be used as a "name" for the change. You can mention this identifier in bug databases, or use it as an argument to merge, revert or cherry-pick should you want to undo the change or port it to another branch. Git makes it easy to create very granular commits.
So if you make changes to multiple logical components at the same time, commit them in separate parts. That way it's easier to follow the changes and their history. Working on features A, B and C while fixing bugs 1, 2 and 3 should result in at least six commits.
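The idea can be demonstrated in a scratch repository; the file names and messages are made up:

```shell
set -e
# Scratch repository to demonstrate one-commit-per-logical-change.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
# Feature A and bug fix 1 are staged and committed separately:
echo "feature A" > feature_a.txt
git add feature_a.txt
git commit -qm "Add feature A"
echo "fix for bug 1" > bugfix_1.txt
git add bugfix_1.txt
git commit -qm "Fix bug 1"
# When unrelated edits share a file, stage hunks interactively: git add -p
git log --oneline
```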
Commit Early, Commit Often
It is recommended to commit code to version control often which keeps your commits small and, again, helps you commit only related changes. It also allows you to share your code more frequently with others.
It's easier for everyone to integrate changes regularly and avoid having merge conflicts. Having few large commits and sharing them rarely, in contrast, makes it hard to solve conflicts.
"If the code isn't checked into source control, it doesn't exist."
Write Reasonable Commit Messages
Always write a reasonable comment on your commit. It should be short and descriptive and tell what was changed and why.
Begin your message with a short summary of your changes (up to 50 characters as a guideline). Separate it from the following body by including a blank line.
It is also useful to add a prefix to your message, like Fix or Add, depending on what kind of change you made. Use the imperative, present tense ("change", not "changed" or "changes") to be consistent with generated messages from commands like git merge.
If you're fixing a bug or implementing a feature that has a JIRA ticket, add the ticket identifier as a prefix.
For example: "Fix a few bugs in the interface. Added an ID field. Removed a couple unnecessary functions. Refactored the context check." or "Fix bad allocations in image processing routines".
Not like this: "Fixed some bugs."
The body of your message should provide detailed answers to the following questions: What was the motivation for the change? How does it differ from the previous implementation?
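A message following these guidelines might read as follows; the ticket ID and details are invented for illustration:

```
JIRA-123: Fix connection leak in image upload

The upload handler opened a new database connection per request but
never returned it to the pool, which exhausted the pool under load.
Close the connection in a finally block and add a regression test.
```

The first line stays under 50 characters, a blank line separates it from the body, and the body explains both the motivation and how the fix differs from the previous implementation.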
"If the changes you made are not important enough to comment on, they probably are not worth committing either."
Don't Commit Generated Sources
Don't commit files which are generated dynamically or which are user-dependent, like the target folder, IDEA's .iml files, or Eclipse's .settings and .project files. They change depending on the user's preferences and don't relate to the project's code.
The project's binary build artifacts and generated Javadocs also don't belong in version control.
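In Git this is handled with a .gitignore file committed to the repository; typical entries for a Java project look like this:

```
# Build output
target/
# IntelliJ IDEA
*.iml
.idea/
# Eclipse
.settings/
.project
.classpath
```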
Don't Commit Half-Done Work
You should only commit code when it's completed. Split the feature's implementation into logical chunks and remember to commit early and often. Use branches or consider using Git's Stash feature if you need a clean working copy (to check out a branch, pull in changes, etc.).
On the other hand, you should never leave the office without committing your changes.
"It's better to have a broken build in your working repository than a working build on your broken hard drive."
Test Before You Commit
You should only commit code which is tested and passes the tests, and this includes code formatting with linters. Write and run tests to make sure the feature or bug fix really is complete and has no side effects (as far as one can tell).
Having your code tested is even more important when it comes to pushing / sharing your code with others.
Branching is one of Git's most powerful features – and this is not by accident: quick and easy branching was a central requirement from day one. Branches are the perfect tool to help you avoid mixing up different lines of development.
You should use branches extensively in your development workflows: for new features, bug fixes and ideas.
Agree on a Workflow
Git lets you pick from a lot of different workflows: long-running branches, topic branches, merge or rebase, git-flow.
Which one you choose depends on a couple of factors: your project, your overall development and deployment workflows and (maybe most importantly) on your and your teammates' personal preferences. However you choose to work, just make sure to agree on a common workflow that everyone follows.
Using version control is, fortunately, an acknowledged best practice and part of software development. Adopting even a couple of the above practices makes working with the code much more pleasant. Starting with "Commit logical changesets" and "Write Reasonable Commit Messages" alone helps a lot.
Playing with data in databases is sometimes tricky, but when you get down to it, it's just a couple of lines on the command line. Some time ago we switched from Piwik PRO to Matomo and of course we wanted to migrate the logs. We couldn't just use a full MySQL / MariaDB database dump and go with it, as the table names and the schema were different (Piwik PRO 3.1.1 -> Matomo 3.5.1). In short, we needed to export a couple of tables and rename them to match the new instance, similarly as discussed on Stack Overflow.
There's a VisitExport plugin for Piwik/Matomo which lets you export and import log tables with PHP and JSON files, but it didn't seem a usable approach for our use case with tables being 500 MB or so.
The more practical solution was to simply create a dump of the tables we wished to restore separately.
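A sketch of the export-and-rename step. The database, table and file names here are illustrative; the dump itself needs real credentials, so those commands are shown as comments while the rename step is runnable as-is:

```shell
set -e
# 1) Dump only the needed log tables (requires credentials):
#    mysqldump -u piwik -p piwik_db piwik_log_visit piwik_log_link_visit_action > logs.sql
# Stand-in dump file so the rename step below can run:
printf 'CREATE TABLE piwik_log_visit (id INT);\n' > logs.sql
# 2) Rewrite the table-name prefix to match the new instance's schema:
sed 's/piwik_/matomo_/g' logs.sql > logs_renamed.sql
cat logs_renamed.sql
# 3) Import the renamed tables into the new database:
#    mysql -u matomo -p matomo_db < logs_renamed.sql
```

Rewriting the dump with sed is blunt but effective for a one-off migration; for anything subtler, rename the tables with ALTER TABLE after importing instead.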