Refactoring, computer science concepts on the day job, doing better code reviews, battling CSS and watching cat videos. That's Monthly notes for April. Not much this month, so enjoy it slowly :)
Issue 40, 4.2019
Refactoring.Guru Refactoring.Guru makes it easy for you to discover everything you need to know about refactoring, design patterns, SOLID principles and other smart programming topics.
CompSci and My Day Job Rob Conery talked at NDC Conference London 2019 about computer science concepts he had used in his day job without actually knowing them. All of this changed as he put together the first two volumes of The Imposter's Handbook. He talks about what he has learned and applied to the applications created in his day job, and gives you more tools in your belt to help you do your job better.
Code Review: How can we do it better? Fun Fun Function talks about how to become a better code reviewer and reviews some code sent in by listeners. General rules for pull requests: make everything readable by humans, that is the title, description, commit comments and, most importantly, your code. DRY & KISS.
"However, none of the [Formula One] teams used any of the big modern frameworks. They’re mostly WordPress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web."
When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.
Code quality in software development projects is important, and good metrics help you keep track of it. Code coverage, technical debt, vulnerabilities in dependencies and conformance to code style rules are a couple of things you should follow. There are some de facto tools you can use to visualize these, and one of them is SonarQube. Here's a short technical note on how to set it up for a Kotlin project and visualize metrics from different tools.
In addition to the analysis SonarQube's default plugins provide, we are also using Detekt for static source code analysis and OWASP Dependency-Check to detect publicly disclosed vulnerabilities in project dependencies.
Visualizing Kotlin project metrics on SonarQube
SonarQube is a nice graphical tool for visualizing different metrics of your project. Lately it has also started to support Kotlin, through the SonarKotlin plugin and the sonar-kotlin plugin. Compared to a typical Java project, you need some extra settings to get things working. It's also good to notice that the support for Kotlin isn't quite there yet, and sonar-kotlin provides better information, e.g. when it comes to code coverage.
Steps to integrate reporting to Sonar with maven build:
Add configuration in project pom.xml: Surefire, Failsafe, JaCoCo, Detekt, Dependency-Check
Run Sonar in Docker
Run the Maven build with the sonar:sonar option
Check Sonar dashboard
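The steps above can be sketched roughly like this (the Docker image tag and host URL are just examples, adjust them to your environment):

```shell
# Start SonarQube locally in Docker; the dashboard comes up at http://localhost:9000
docker run -d --name sonarqube -p 9000:9000 sonarqube

# Build, run tests and push the analysis results to the local SonarQube instance
mvn clean verify sonar:sonar -Dsonar.host.url=http://localhost:9000
```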
Configure Kotlin project
Configure your Kotlin project built with Maven to have test reporting and static analysis. We are using Surefire to run unit tests, Failsafe for integration tests, and JaCoCo to generate reports for e.g. SonarQube. See the full pom.xml from the example project (coming soon).
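As a sketch, the relevant build plugins in pom.xml could look like this (plugin versions are just examples from around this time, check for current ones):

```xml
<build>
  <plugins>
    <!-- Surefire runs unit tests -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.22.1</version>
    </plugin>
    <!-- Failsafe runs integration tests -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>2.22.1</version>
    </plugin>
    <!-- JaCoCo produces the coverage report that SonarQube reads -->
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.8.3</version>
      <executions>
        <execution>
          <goals>
            <goal>prepare-agent</goal>
            <goal>report</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <!-- OWASP Dependency-Check scans dependencies for known vulnerabilities -->
    <plugin>
      <groupId>org.owasp</groupId>
      <artifactId>dependency-check-maven</artifactId>
      <version>4.0.2</version>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```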
Detekt static code analysis is configured as an AntRun task; there's also an unofficial Maven plugin for Detekt. It's good to notice that there are some "false positive" findings from Detekt, and you can either customize the detekt rules or suppress findings if they are intentional, e.g. with @Suppress("MagicNumber").
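As an alternative to annotating the code, you can tune the rule in a custom detekt configuration file. The rule and property names here follow Detekt's default config, so verify them against your Detekt version:

```yaml
# detekt.yml excerpt: relax the MagicNumber rule instead of suppressing it in code
style:
  MagicNumber:
    active: true
    # common values that should not be reported as findings
    ignoreNumbers: ['-1', '0', '1', '2']
```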
Use the Kotlin plugin which comes with SonarQube (SonarKotlin) or install the sonar-kotlin plugin, which shows information differently. If you want to use sonar-kotlin and are using the official Docker image for SonarQube, you first have to remove the bundled SonarKotlin plugin.
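One way to do the removal is a small custom image. The paths and jar name pattern below are assumptions, so check where your SonarQube version actually keeps its bundled plugins:

```dockerfile
# Sketch: a SonarQube image without the bundled SonarKotlin analyzer
FROM sonarqube:latest
# Path and filename pattern are assumptions; verify them in your image version
RUN rm -f /opt/sonarqube/lib/bundled-plugins/sonar-kotlin-plugin-*.jar
# Drop the community sonar-kotlin plugin jar into the plugins directory instead
COPY sonar-kotlin-plugin.jar /opt/sonarqube/extensions/plugins/
```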
You now have metrics on Sonar to show to stakeholders but what should you do with those numbers?
One use case is to set quality gates on SonarQube to check that a set of conditions is met before a project can be released into production. Ensuring the quality of "new" code while fixing existing issues is a good way to maintain a healthy codebase over time. The Quality Gate facilitates setting up rules for validating every piece of new code added to the codebase in subsequent analyses. By default the gate fails if: coverage on new code is below 80%; the percentage of duplicated lines on new code is above 3%; or the maintainability, reliability or security rating is worse than A.
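To make the default conditions concrete, here's a small sketch of the gate logic in Python. This is only an illustration of how the conditions combine, not how SonarQube implements it, and the function and parameter names are made up:

```python
# Sketch of SonarQube's default Quality Gate conditions (illustration only).
# The gate fails if any single condition on "new" code is violated.

RATINGS = ["A", "B", "C", "D", "E"]  # A is the best rating

def quality_gate_passes(new_coverage: float,
                        new_duplicated_lines: float,
                        maintainability: str,
                        reliability: str,
                        security: str) -> bool:
    """Return True when an analysis passes the default gate."""
    if new_coverage < 80.0:          # coverage on new code below 80% -> fail
        return False
    if new_duplicated_lines > 3.0:   # duplicated lines on new code above 3% -> fail
        return False
    for rating in (maintainability, reliability, security):
        if RATINGS.index(rating) > RATINGS.index("A"):  # worse than A -> fail
            return False
    return True

print(quality_gate_passes(85.0, 1.5, "A", "A", "A"))  # True
print(quality_gate_passes(85.0, 1.5, "B", "A", "A"))  # False: rating worse than A
```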