Short notes on tech 48/2020

Week 48, 2020

Tools of the Trade

Next.js 10
Built-in Image Component and Automatic Image Optimization, Internationalized Routing, Next.js Analytics, React 17 Support.

Node.js 15
Throw on unhandled rejections, npm 7 includes yarn.lock file support, peer dependencies are now installed by default, V8 8.6.
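
For example, a promise rejection without any handler now terminates the process instead of just printing a warning. A minimal sketch (run it with Node.js 15):

// Under Node.js 14 this only logged an UnhandledPromiseRejectionWarning;
// under Node.js 15 the process exits with a non-zero code (ERR_UNHANDLED_REJECTION).
Promise.reject(new Error('boom'));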

kachkaev/njt
"njt (npm jump to): a quick navigation tool for npm packages". This is super useful: njt react h brings the home page, njt graphql g takes you to GitHub, other jump points include changelog, source code, issues, and more.

Coding Fonts
A microsite that shows off fonts specifically designed for writing code.

Upptime
Open source uptime and status page system, powered entirely by GitHub Actions and Issues.

Gitlint
Git commit message linter (for Linux and Mac, experimental on Windows) that checks your commit messages for style.

Alternatives to JIRA, which is moving to cloud only:
Asana
ClickUp
Linear
Redmine

Nova app from Panic
Native code editor for Mac.

Microsoft Clarity is out of beta
Tool for visualizing user experience. Click and scroll heatmaps, individual session replay, rage clicks metric, and more.

Apple

Does it ARM?
"Apps that are reported to support Apple Silicon"

Accessibility

Atkinson
New free and hyperlegible font published by the Braille Institute.

Web

Apple now lets us integrate Face ID and Touch ID on the web
"Building it on top of the Web Authentication API. Imagine how this can improve the logging in experience for a good part of your user base."

Monthly notes 54

Working from home continues as COVID-19 still surges, and if you haven't yet checked your video call capabilities, read the How to make video calls almost as good as face-to-face article. Remote working isn't going away, as this year has shown that commuting to the office every day isn't really needed.

Issue 54, 6.11.2020

"Nobody gets hacked"

Working from home

Companies plans for remote work going forward
Twitter thread by Chris Herd on what he learned by speaking to 1,000 companies over the last 6 months about their plans for remote work going forward. Office space going down; flexi-work; people working too hard; burnouts; asynchronous communication is difficult; invest in ergonomic working equipment; workers will be happier as a result of remote work; need tools to track output; documentation is the unspoken superpower of remote teams; coaching and facilitators are needed.

How to make video calls almost as good as face-to-face
How much nicer would video calls feel if the problems with low-quality microphones and webcams, lag and such were solved? The post summarizes what can be done by fiddling with gear and software. TL;DR: Get away from other people; throw your wireless headset in the trash; don't mute; get a better microphone; listen to yourself; improve your lighting; use your real background; don't bother with webcams.

Docker and Kubernetes security

Dockerfile Security Best Practices
List of common security issues and how to avoid them. For every issue there's an Open Policy Agent (OPA) rule ready to be used to statically analyze your Dockerfiles with conftest. TL;DR: Do not store secrets in environment variables; only use trusted base images; do not use the 'latest' tag for base images; avoid curl bashing; do not upgrade your system packages; do not use ADD if possible; do not run as root; do not sudo.
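
To illustrate a couple of these rules, here's a hypothetical Dockerfile sketch of my own (not from the article): the base image is pinned to a specific tag and the container drops root privileges.

# Pin the base image to a specific tag instead of 'latest'
FROM node:14-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# Run as the unprivileged 'node' user that ships with the official image
USER node
CMD ["node", "server.js"]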

Docker Threat Model

The Current State of Kubernetes Threat Modelling
"If you are planning on using Kubernetes in production, one of the key things to consider from a security perspective is your threat model."

Arsenal of Cloud Native (Security) Tools
Marco Lancini's curated list of tools he finds useful, alongside a quick "usage" guide for each one of them, i.a. Docker Bench, kube-bench, kube-hunter and AWS Security Benchmark.

Something different

2020 UCI Cycling eSports World Champs heads to Zwift’s Watopia in December
"2020 UCI Cycling eSports World Championships are set to take place on virtual ride platform Zwift in their online Watopia environment. Garmin-Tacx will supply all of the connected trainer for with elite men and women to race each other virtually"

Prettifying AWS S3 Bucket public index list

Sometimes it's useful to have an index listing on an AWS S3 bucket. Here are some solutions for configuring it with a nice template. Whether having a public index listing on an S3 bucket is a good idea or not, I'm not saying yea or nay.

First set the correct Bucket Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::tmfg-tiesaahistoria/*"
        }
    ]
}

Next set Permissions

In the bucket's permissions, grant Everyone the "List objects" permission.

Create an index.html

For the index.html you have a couple of choices:

  1. Use the index.html with modifications by Nolan Lawson (see also: Lawson's blog post and code)
  2. Use a more up-to-date fork of index.html
  3. Use the original file by Francesco Pasqualini
  4. Use AWS S3 Bucket Browser

To use it, just upload the index.html file into the root of your public S3 bucket.
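
For example with the AWS CLI (the bucket name below is hypothetical):

$ aws s3 cp index.html s3://my-public-bucket/ --content-type "text/html"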

That’s it!

What software and hardware I use

There was a discussion in Koodiklinikka Slack about what software people use, and about the "/uses" pages people have made for that purpose. Inspired by Wes Bos's /uses from the "Syntax" podcast, here's my list.

Check my /uses page to see what software and hardware I use for full-stack development in JavaScript, Node.js, Java, Kotlin, GraphQL, PostgreSQL and more. The list excludes tools used at different customers, like GitLab, Rocket.Chat, etc.

For more choices check uses.tech.

Monthly Notes 52

Issue 52, 9.9.2020

Software development

Field Ops Guide
"The Field Ops Guide (by Futurice) is a booklet that makes it possible to survive a software development project. It's a distillation of years of wisdom gathered working in client projects."

Kubernetes

Threat matrix for Kubernetes
"While Kubernetes has many advantages, it also brings new security challenges that should be considered. Therefore, it is crucial to understand the various security risks that exist in containerized environments, and specifically in Kubernetes."

Docker

Faster Builds and Smaller Images Using BuildKit and Multistage Builds
"Multistage builds feature in Dockerfiles enables you to create smaller container images with better caching and smaller security footprint. In this blog post, I’ll show some more advanced patterns that go beyond copying files between a build and a runtime stage, allowing to get most out of the feature."

Tools

img
"Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder."

GraphQL Voyager
"Represent any GraphQL API as an interactive graph."

SQL diagrams

Something different

Cheating in eSports: How to cheat at virtual cycling

Notes from HelSec Virtual Meetup 1

This year has been challenging for meetups and gatherings, but one good side of the restrictions is that remote work has become more acceptable, and meetups and conferences have invested in streaming and virtual participation, which is great for people living in areas with no meetups.

In early May HelSec held their first Virtual Meetup with great topics. Here are my short notes (finally, four months later). The meetup was streamed via the HelSec Twitch channel and the discussions took place in the HelSec Events Discord. The meetup recording is available on Twitch.

HelSec Virtual Meetup 2020-05-07

HelSec Virtual Meetup #1 (7.5.2020)

Fighting alert fatigue and visibility issues in SOC

Juuso Myllylä from OptimeSys talked about fighting alert fatigue in a security operations center (stream from 41:41 onwards). The goals of the talk were to improve automated detection, introduce the "detection logic killchain" framework he has worked on for his master's thesis, and shift our minds from signature-based detection towards intelligence-based detection.

Threat detection pyramid

Threat detection framework based on design science research method:

  1. Identify: What is a threat? What kind of things make up a threat?
    1. Mitre's ATT&CK framework
    2. Mitre's ATT&CK: Design and philosophy
    3. Example: hijacked Azure AD account detection
    4. Tactic = initial access
  2. Detect: How can we detect a threat?
    1. Logs, logs, logs
    2. Technique detection is also valid
    3. Example technique: valid accounts or phishing
  3. Use Case: Search queries, log sources, etc.
    1. Convert your idea into a security information and event management (SIEM) search query
    2. Procedures: many APT (Advanced Persistent Threat) groups have used valid accounts as an entrypoint
  4. Demonstrate:
    1. Deploy the use case
  5. Evaluate: evaluate detection logic
    1. Analyze the SIEM logs once your SIEM use case has been deployed
    2. e.g. check Azure AD audit logs, eliminate non-related data
    3. Applies also to threat hunting
  6. Communicate: Document your detection logic in Sigma form (a minimal example follows this list)
    1. Can be shared with others; try to be SIEM agnostic
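
As an illustration of what a Sigma rule looks like, here's a hypothetical sketch of my own (not from the talk), loosely following the Azure AD example above:

title: Sign-in with a valid account from an unusual location
status: experimental
description: Example detection idea for a potentially hijacked Azure AD account
logsource:
  product: azure
  service: signinlogs
detection:
  selection:
    ResultType: 0
  filter:
    Location: "expected-country"
  condition: selection and not filter
level: medium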

iPhone BFU Acquisition and Analysis

The meetup continued with iPhone forensics from @1:19 by Timo Miettinen from Nixu. The presentation first explained how the iPhone iOS filesystem's two main partitions are protected: the non-encrypted System and the encrypted Data partition. The Data partition is encrypted with a UID key burned into the hardware. The files additionally have four classes of Data Protection.

From a forensics point of view, access to data is protected with many layers: USB connectivity is restricted; logical extraction is divided into an iTunes backup plus some media files, password-protected backups contain more data, and the backup password can be reset but that has deviations; full file system extraction needs jailbreaking the device; iCloud extraction (synced backup).

The case discussed in the talk was a lost iPhone which was later returned by law enforcement. The question was: what was done with it while it was missing? Was it stolen or just inspected by friendly authorities? The phone was powered off and its passcode had been changed.

So they had a BFU (Before First Unlock) device in their hands for data extraction: a device that has been powered off or rebooted and has never been subsequently unlocked. The amount of data they could theoretically get is really limited.

In BFU the file encryption keys are wiped from the device RAM and only unencrypted class D protected files are available. Biometric authentication is not possible, USB restricted mode is enabled (biometric authentication or a passcode is needed to activate data connections), lockdown records become useless (logical data acquisition is impossible) and passcode recovery attacks fall to BFU speeds.

Acquisition methods:

  • Utilizing exploits and jailbreaks:
    • checkm8: unpatchable bootrom exploit released by axi0mx in September 2019 which enables jailbreaks, activation lock bypass etc.
    • checkra1n: jailbreak released in November 2019 which utilizes the checkm8 exploit to run unsigned code on an iOS device. Doesn't always bypass USB restricted mode; depends on the combination of hardware and software versions.
  • Open source and free tools:
    • libimobiledevice is a collection of useful tools (see the sketch after this list):
      • SSH over USB using iproxy
      • ideviceinfo gives iOS and HW versions
      • idevicecrashreport gets crash logs from the device
      • many more
    • ios_bfu_triage: extract available data
    • iTunes if you don't have the BFU restriction
  • Commercial tools: Belkasoft Evidence Center, BlackBag Mobilyze, Cellebrite UFED / Physical Analyzer, Elcomsoft Phone Viewer, Magnet AXIOM, MSAB XRY, Oxygen Forensics Extractor
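
A quick sketch of the libimobiledevice workflow mentioned above (ports and directory names are my own illustration):

# Map local port 2222 to the device's SSH port over USB
$ iproxy 2222 22
$ ssh root@localhost -p 2222

# iOS and hardware versions
$ ideviceinfo

# Pull crash logs from the device into a local directory
$ idevicecrashreport crash_logs/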

In their use case the checkra1n jailbreak didn't work and USB restricted mode was activated. Some of the commercial tools were able to extract some data, but they weren't able to read the archive format the software created. They decided to do the analysis manually, which is a good idea even when the tools are working.

Some open source or free tools for analysis:

  • APOLLO (Apple Pattern of Life Lazy Output'er): parses pattern of life data from databases and properties into human readable format.
  • iOS sysdiagnose forensic scripts: parses iOS sysdiagnose logs.
  • iPhone Backup Analyzer: allows the user to browse the content of an iOS backup.
  • iLEAPP (iOS Logs, Events, And Preferences Parser)
  • iBackup Viewer: browse the content of an iOS backup and extract files.
  • ftree: crawl any directory and identify all files etc.
  • deserializer: converts from NSKeyedArchive to normal plist
  • For reading plists you can use: plutil -p <filename>
  • DB Browser for SQLite
  • Google's protobuf utilities (protoc)

When doing analysis you should look for plists, binary plists, plists inside plists, blobs that may contain binary plist files, and SQLite databases (shared memory file .shm, write-ahead log .wal). Some applications store data as protocol buffers (protobufs) in SQLite database blobs, plist files or plain data files. Tools find most of the interesting data, but you can write your own script to dump all text files, convert plist files to a readable format, dump data from every database, and get all embedded binary plists from plist files and databases and convert them to a readable format.
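
A few of these inspections from the command line (file names are hypothetical):

# Pretty-print a (binary) plist
$ plutil -p com.example.app.plist

# List the tables in an SQLite database, then dump one of them
$ sqlite3 Cache.sqlite '.tables'
$ sqlite3 Cache.sqlite 'SELECT * FROM cache_entries;'

# Decode a raw protobuf blob without knowing its schema
$ protoc --decode_raw < blob.bin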

In their case they found out that the phone was reinstalled 12 hours after it was lost. Mobile banking, social media and instant messaging applications were installed. The device was used to communicate with several contacts and was used around the city. The phone was stolen and reinstalled with the intention of using it.

Still Fuzzing Faster (U Fool)

Joona Hoikkala talked about web fuzzing and using the ffuf tool for fuzzing directories, logins, basic auth, virtual hosts, content IDs and more. Follow the talk from the stream at 2:19:00; the demo starts around @2:33:00. The slides are a good starting point.

Kind of reverse but describes web fuzzing

You can fuzz with different input contents, targeting i.a. GET parameters (names, values or both), headers (Host, authentication, cookies, proxy headers) and POST data (form data, JSON, files). What to look for (matching)? Response codes, content (regexes), response sizes (bytes, number of words).
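
A couple of examples of what this looks like with ffuf (target URLs and wordlists are made up):

# Directory discovery, matching only responses with status 200 or 301
$ ffuf -w wordlist.txt -u https://target.example/FUZZ -mc 200,301

# Virtual host discovery via the Host header, filtering out the default response size
$ ffuf -w subdomains.txt -u https://target.example/ -H "Host: FUZZ.target.example" -fs 4242

# POST login fuzzing with two wordlists bound to keywords
$ ffuf -w users.txt:USER -w passwords.txt:PASS -u https://target.example/login -X POST -d "username=USER&password=PASS" -H "Content-Type: application/x-www-form-urlencoded" -fc 401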

Resources: SecLists

Price of a digital identity

Laura Kankaala, of Robocorp and Team Whack fame, talked about the price of a digital identity, starting at 3:20:40. Data is central to both security and privacy: how companies view data and how data sellers view data.

Digital identity:

  • What we are
  • What we have
  • What we produce

Laura also presented that ~90-99% of collected data is dark data: collected but not really utilized. And we are just getting started. It's good to remember that our data belongs to us; we give permission to collectors and controllers.

Do you know what your data is worth? Data is valuable, and for example there are companies like doc.ai and datum which try to monetize it so that the user also gets a share. But so far the data is used more for targeted ads, providing content just for us, increasing efficiency and creating better services. And of course everyone remembers Cambridge Analytica and the attempts to affect electoral processes in the US.

The most valuable things being sold online are credit cards, identity numbers, passports, credentials, phone numbers and home addresses. Passports quite logically have value: e.g. a French passport goes for $124, USA $115, Canada $103, UK $60 and so on, depending on the data included with it.

Kankaala talked about how the collection of our data has sneaked into our lives (e.g. social credit systems). Companies collect data, and when our normal life becomes deeply entangled with our life online it becomes easier to monitor us, to see what we're up to and to moderate our behaviour. We need to be careful when we allow new types of access to our lives, e.g. COVID-19 tracking.

Regulation, awareness and education are at least a patch for some of these issues. We are hackers and we should be the pathfinders, showing people that just because something works the way it does today doesn't mean it works right or ethically.

We are all vulnerable

Magnus Lundgren from Recorded Future told a tale of two databases, a panda, and someone who was listening, starting at 4:25:00.

There's a race from the moment a vulnerability is found and assigned a CVE number until it's either patched or exploited. 12,517 CVEs were first published on NVD in 2016-2017, and it takes on average 33 days until an initial assessment of a vulnerability is made available via NIST's NVD. For example, for Dirty Cow (CVE-2016-5195) it took 21 days to the initial release on NVD, but it took only 8 days for an exploit (a proof of concept shared on Pastebin) to be created and sold/shared on the deep and dark web.

A tale of two databases: NVD (NIST) and CNNVD (CNITSEC). In the Chinese CNNVD the initial assessment takes on average only 13 days, compared to 33 days on NVD. The difference comes from the fact that CNNVD does active collection while NVD does passive collection from vendors. But it isn't always that way, as in the case of an Android backdoor where it took CNNVD 236 days and NVD 60 days. It takes longer for CNNVD to publish high-threat vulnerabilities than low-threat ones, and during the publication lag Chinese APT groups exploit those vulnerabilities.

When Recorded Future published a blog post identifying 343 "outlier" CVEs (regarding the CNNVD lag issue), CNNVD backdated 338 of those CVEs. Someone was listening.

Conclusions:

  • Deep / Dark web monitoring of activity is crucial for a good patching cadence.
  • Magic can be done with threat intel data that has been organized for analysis.
  • The Chinese intersection is particularly vicious for foreign companies: the Ministry of State Security (China) runs multiple threat actors (e.g. APT3), runs CNNVD, and cherry-picks CNNVD vulnerabilities for targeting.

Resources: Inside Security Intelligence podcast

Monthly notes 51

It's August, and after the summer holidays it's time to get back to monthly notes. If you read only one note, check "Some important things to keep in mind when you work remotely", which has good tips in general, too. Happy reading :)

Issue 51: 2020-08-07

Kubernetes

How to gracefully shut down Pods without dropping production traffic in Kubernetes?
If you've ever noticed dropped connections after a rolling upgrade, read Daniele Polencic's Twitter thread, which digs into the details with detailed pictures.
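
One common mitigation, sketched below from the general pattern (not necessarily the thread's exact advice), is to delay shutdown with a preStop hook so the endpoint is removed from load balancing before the process gets SIGTERM:

# Pod spec excerpt
spec:
  terminationGracePeriodSeconds: 45
  containers:
    - name: app
      image: my-app:1.0.0
      lifecycle:
        preStop:
          exec:
            command: ["sleep", "15"]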

Web development

Prevent Info leaks and enable powerful features: COOP and COEP
"Cross-Origin Embedder Policy (COEP) and Cross-Origin Opener Policy (COOP) isolate your origin and enable powerful features." The video by @agektmr helps you understand how it works and why this is important. Unlock access to new perf API's to help you identify JS bottlenecks, memory leaks, and more. (from @igrigorik)

How To Setup Your Local Node.js Development Environment Using Docker
(from @Docker)

Web Stories are coming to WordPress!
Web Stories are tappable, engaging visual stories brought to the web. They’re powered by AMP technology. (from @pbakaus)

Working remotely

Some important things to keep in mind when you work remotely
Check the Twitter thread for 10 great tips for working remotely. They are also good tips in general. I've found tip 8 to be great: writing notes and making (public) blog posts of them helps you process new information better and also helps other developers. Documentation is often undervalued and it takes time to do it correctly.

Software development

It's probably time to stop recommending Clean Code
"There is a growing movement against Rob Martin's books (e.g., Clean Code). After reading the article, I have to agree with a lot of it, but I also hope that this movement doesn't push too far to the other side." (from @maybeFrederick) My take is that don't believe everything you read be it on a book or nowadays in the Internet. Use your own thinking and reasoning. "Clean Code" has good points and suggestions but also goes a bit overboard with how "clean" things should look.

Tools

Boop
"Boop is a place to paste text, and transform it using basic operations. The goal is to allow quick experimentation and avoid using random websites to do that stuff. It's super useful when working with logs, JSON data, etc." (from @OKatBest). This is what I've always needed. No more searching for online tool for a specific task (or looking it from tiny-helpers.dev which is a great collection).

Git-bug
Fully embedded bug-tracker in git: you only need your git repository to have a bug tracker.

Something different

Remy Metailler Smashes Squamish Mountain Bike Trails

Following a Pro Enduro Racer Down Whistler's Hardest Trails // Wyn Masters

Hands-on learning Cloud Technologies with QwikLabs

I've used Google Cloud Platform for some time and got an opportunity to attend Codemen Cloud Academy's Google Workshop, which concentrated on the "Kubernetes in the Google Cloud" and "Google Cloud Run Serverless Workshop" topics using the Qwiklabs platform. Here are my (very) short notes from the workshop and from using Qwiklabs. Most of the things I had already used by running our service on GKE, but there's always something to learn from others' experiences.

Google Cloud Workshop with Qwiklabs

Qwiklabs is a platform for learning cloud technologies by following exercises and hands-on training. It gives temporary credentials to Google Cloud Platform and Amazon Web Services, so you can learn the cloud using the real thing.

The workshop used the Cloud Study Jams 2020 session contents. After we completed the first lab, we were automatically granted a 30-day pass to continue doing the rest of the labs. The quests in the labs are "priced" in credits, which you can buy ($1 per credit) or get with a workshop code.

Kubernetes in Google Cloud

The "Kubernetes in Google Cloud" quest in Qwiklabs is an advanced-level quest which gets you hands-on practice of configuring Docker images and containers, and deploying fully-fledged Kubernetes Engine applications. It teaches you the practical skills needed for integrating container orchestration into your own workflow.

Kubernetes in Google Cloud quests outline

There's not much to tell about the quests' contents except a bunch of docker, gcloud and kubectl commands, so I won't go through them here.

The Kubernetes in Google Cloud quest in Qwiklabs was as hands-on as it promised, and the final "Challenge Lab" quest put all the things together with a quite strict time limit. Although I had made notes from the previous quests, I only just managed to paste the commands, wait for the cloud to provision and, especially, wait for the Jenkins service to run the continuous integration jobs.

Google Cloud console

Summary

Overall the "Kubernetes in Google Cloud" lab was excellent overview to Kubernetes and how things work in Google Cloud. It covered essential topics and showed how to do things in practice. It helped to have previous experience with Google Cloud but everything was explained and shown so you can learn by doing.

Qwiklabs Google Cloud quests

Qwiklabs also has other Google Cloud related labs, as shown below, but I didn't have time to go through them (I totally forgot :/) although the participants who completed the Kubernetes course got a two-month free pass to the platform.

Infrastructure and Architecture quests
Machine Learning and Data quests
BigQuery quests

Monthly notes 50

Issue 50, 15.6.2020

Serverless

AWS Lambda — should you have few monolithic functions or many single-purposed functions?
Interesting question of whether the single responsibility principle (SRP) should be followed in the serverless world. What is a "function" if not SRP? TL;DR: many single-purposed functions are better.

Stories

Twitter search for "telling early-in-career engineers stories of times you messed something up real bad is a good way to help them combat their own impostor syndrome." (from @ElleArmageddon)

Kubernetes

In Kubernetes, what should I use as CPU requests and limits?
Good Twitter thread on the difference between requests and limits.
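
For reference, this is where those values live in a container spec (the numbers are made up):

resources:
  requests:
    cpu: "250m"      # the scheduler reserves this much CPU for the pod
    memory: "128Mi"
  limits:
    cpu: "1"         # the container is throttled above this
    memory: "256Mi"  # the container is OOM-killed above this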

How should I answer a health check?
Explains how to use liveness and readiness probes (on Kubernetes). I've heard that the liveness probe should always be off unless there's a bug in the app from which it can't recover. And long checks can be cached.
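
The probes are declared per container; a minimal sketch (paths and port are hypothetical):

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10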

Managed Kubernetes Price Comparison (2020)
"TL;DR: Azure and Digital Ocean don’t charge for the compute resources used for the control plane, making AKS and DO the cheapest for running many, smaller clusters. For running fewer, larger clusters GKE is the most affordable option. Also, running on spot/preemptible/low-priority nodes or long-term committed nodes makes a massive impact across all of the platforms."

Learning

Performance profiling for Web Applications with Sam Saccone
"How to use Chrome DevTools to understand a Web application's performance bottlenecks. Goes over a few different workflows that will help us to answer the question "Why is this slow and how can I fix it"."

Tools

OpenSnitch
GNU/Linux port of the Little Snitch application firewall. (from Hacker Newsletter #490, comments)

Kubectl-debug
kubectl-debug is an out-of-tree solution for troubleshooting running pods: it allows you to run a new container in a running pod for debugging purposes (examples). The new container joins the pid, network, user and ipc namespaces of the target container, so you can use arbitrary troubleshooting tools without pre-installing them in your production container image.

Lighthouse audit add-on for Firefox
"Report, Performance, Accessibility, PWAs, SEO scores for any public site. Without opening DevTools."

Generating JWT and JWK for information exchange between services

Securely transmitting information between services and authorization can be achieved using JSON Web Tokens. JWTs are an open, industry standard (RFC 7519) method for representing claims securely between two parties. Here's a short explanation and guide to what they are, how they're used and how to generate the needed artifacts.

"JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA."

jwt.io

You should read the introduction to JWT to understand its role, and there's also a handy JWT Debugger to test things. For more detailed info you can read the JWT Handbook.

In short, authorization and information exchange are some scenarios where JSON Web Tokens are useful. They essentially encode any set of identity claims into a payload, provide some header data about how it is to be signed, then calculate a signature using one of several algorithms and append that signature to the header and claims. JWTs can also be encrypted to provide secrecy between parties. When a server receives a JWT, it can guarantee the data it contains can be trusted because it's signed by the source.

Usually two algorithms are supported for signing JSON Web Tokens: RS256 and HS256. RS256 generates an asymmetric signature, which means a private key must be used to sign the JWT and a different public key must be used to verify the signature.
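
As a concrete example, signing and verifying an RS256 token in Node.js with the jsonwebtoken package (a minimal sketch; private.pem and public.pem are the key files generated in the steps below):

const fs = require('fs');
const jwt = require('jsonwebtoken');

// The issuing service signs with the private key (RS256 is asymmetric)
const privateKey = fs.readFileSync('private.pem');
const token = jwt.sign({ sub: 'user123' }, privateKey, {
  algorithm: 'RS256',
  expiresIn: '1h',
});

// The receiving service verifies with the public key only
const publicKey = fs.readFileSync('public.pem');
const claims = jwt.verify(token, publicKey, { algorithms: ['RS256'] });
console.log(claims.sub); // 'user123'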

JSON Web Key

JSON Web Key (JWK) provides a mechanism for distributing the public keys that can be used to verify JWTs. The specification represents the cryptographic keys used for signing RS256 tokens and defines two high-level data structures: JSON Web Key (JWK) and JSON Web Key Set (JWKS):

  • JSON Web Key (JWK): A JSON object that represents a cryptographic key. The members of the object represent properties of the key, including its value.
  • JSON Web Key Set (JWKS): A JSON object that represents a set of JWKs. The JSON object MUST have a keys member, which is an array of JWKs. The JWKS is a set of keys containing the public keys that should be used to verify any JWT.

In short, the service signs JWTs with its private key (in this case in PKCS12 format) and the receiving service checks the signature with the public key, which is in JWK format.

Generating keys and certificate for JWT

In this example we are using JWTs for information exchange, as they are a good way of securely transmitting information between parties. Because JWTs can be signed (for example, using public/private key pairs), you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Generate the private key for the JWT with OpenSSL; in this case a self-signed certificate (created below) is enough:

$ openssl genrsa -out private.pem 4096

Generate the public key from the previously generated private key, in case pem-jwk needs it; otherwise it isn't needed:

$ openssl rsa -in private.pem -out public.pem -pubout

If you try to export the private and public keys to PKCS12 format without a certificate, you get an error:

$ openssl pkcs12 -export -inkey private.pem -in public.pem -out keys.p12
unable to load certificates

Generate a self-signed certificate with the aforesaid key, valid for 10 years. This certificate isn't used for anything as such, since the counterpart is a JWK with just the public key, no certificate.

$ openssl req -key private.pem -new -x509 -days 3650 -subj "/C=FI/ST=Helsinki/O=Rule of Tech/OU=Information unit/CN=ruleoftech.com" -out cert.pem

Convert the above private key and certificate to PKCS12 format

$ openssl pkcs12 -export -inkey private.pem -in cert.pem -out keys.pfx -name "my alias"

Check the keystore:

$ keytool -list -keystore keys.pfx
OR
$ keytool -v -list -keystore keys.pfx -storetype PKCS12
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
1, Jan 18, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA-256): 0D:61:30:12:CB:0E:71:C0:F1:A0:77:EB:62:2F:91:9B:55:08:FC:3B:A5:C8:B4:C7:B4:CD:08:E9:2C:FD:2D:8A

If you didn't set an alias for the key when creating the PKCS12 file, you can change it:

$ keytool -changealias -alias "original alias" -destalias "my awesome alias" -keystore keys.pfx -storetype PKCS12 -storepass "password"

Now we finally get to the part where we generate the JWK. The final result is a JSON file which contains the public key from the earlier created certificate in JWK format, so that the service can accept the signed tokens.

The JWK is in the format of:

" 
{
"keys": [
….,
{
"kid": "something",
"kty": "RSA",
"use": "sig",
"n": "…base64 public key values …",
"e": "…base64 public key values …"
}
]
}
"

Convert the PEM to JWK format with e.g. pem-jwk or with pem_to_jwks.py. The key is in PKCS12 format. The public key's n and e values are extracted from the private key with the following commands; the jq part picks out the public parts and excludes the private ones.

$ npm install -g pem-jwk
$ ssh-keygen -e -m pkcs8 -f private.pem | pem-jwk | jq '{kid: "something", kty: .kty , use: "sig", n: .n , e: .e }'

...

To check things, you can do the following.

Extract a private key and certificates from a PKCS12 file using OpenSSL:

$ openssl pkcs12 -in keys.p12 -out keys_out.txt

The private key, certificate, and any chain files will be parsed and dumped into the "keys_out.txt" file. The private key will still be encrypted.

To extract just the private key from p12 (key is still encrypted):

$ openssl pkcs12 -in keys.p12 -nocerts -out privatekey.pem

Decrypt the private key:

$ openssl rsa -in privatekey.pem -out privatekey_uenc.pem

Now if you convert the PEM to JWK you should get the same values as before.

More to read: JWTs? JWKs? ‘kid’s? ‘x5t’s? Oh my!