Generating JWT and JWK for information exchange between services

Securely transmitting information between services, for authorization and information exchange, can be achieved using JSON Web Tokens. JWTs are an open, industry-standard method (RFC 7519) for representing claims securely between two parties. Here's a short explanation of what they are and what they're used for, and a guide to generating the needed keys and tokens.

"JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA."

jwt.io

You should read the introduction to JWT to understand its role, and there's also a handy JWT Debugger to test things. For more detailed info you can read the JWT Handbook.

In short, authorization and information exchange are some scenarios where JSON Web Tokens are useful. They essentially encode any sets of identity claims into a payload, provide some header data about how it is to be signed, then calculate a signature using one of several algorithms and append that signature to the header and claims. JWTs can also be encrypted to provide secrecy between parties. When a server receives a JWT, it can guarantee the data it contains can be trusted because it's signed by the source.

Usually two algorithms are supported for signing JSON Web Tokens: RS256 and HS256. HS256 produces a symmetric signature: the same shared secret is used both to sign and to verify the token. RS256 generates an asymmetric signature, which means a private key must be used to sign the JWT and the corresponding public key must be used to verify the signature.

JSON Web Key

JSON Web Key (JWK) provides a mechanism for distributing the public keys that can be used to verify JWTs. The specification is used to represent the cryptographic keys used for signing RS256 tokens. This specification defines two high level data structures: JSON Web Key (JWK) and JSON Web Key Set (JWKS):

  • JSON Web Key (JWK): A JSON object that represents a cryptographic key. The members of the object represent properties of the key, including its value.
  • JSON Web Key Set (JWKS): A JSON object that represents a set of JWKs. The JSON object MUST have a keys member, which is an array of JWKs. The JWKS is a set of keys containing the public keys that should be used to verify any JWT.

In short, the service signs JWT tokens with its private key (in this case stored in PKCS12 format) and the receiving service checks the signature with the public key, which is in JWK format.

Generating keys and certificate for JWT

In this example we are using JWTs for information exchange as they are a good way of securely transmitting information between parties. Because JWTs can be signed, for example using public/private key pairs, you can be sure the senders are who they say they are. Additionally, as the signature is calculated over the header and the payload, you can also verify that the content hasn't been tampered with.

Generate the keys and certificate for JWT with OpenSSL; in this case self-signed is enough. First, generate the private key:

$ openssl genrsa -out private.pem 4096

Generate a public key from the private key generated earlier in case pem-jwk needs it; it isn't needed otherwise:

$ openssl rsa -in private.pem -out public.pem -pubout
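A quick way to eyeball the key material is to print the modulus and exponent of the public key; these are the same values that will end up as the n and e members of the JWK later on (an optional check, not a required step):

$ openssl rsa -pubin -in public.pem -text -noout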

If you try to insert private and public keys to PKCS12 format without a certificate you get an error:

openssl pkcs12 -export -inkey private.pem -in public.pem -out keys.p12
unable to load certificates

Generate a self-signed certificate with the aforementioned key, valid for 10 years. This certificate isn't used for anything, as the counterpart is a JWK with just the public key, no certificate.

$ openssl req -key private.pem -new -x509 -days 3650 -subj "/C=FI/ST=Helsinki/O=Rule of Tech/OU=Information unit/CN=ruleoftech.com" -out cert.pem

Convert the above private key and certificate to PKCS12 format:

$ openssl pkcs12 -export -inkey private.pem -in cert.pem -out keys.pfx -name "my alias"

Check the keystore:

$ keytool -list -keystore keys.pfx
OR
$ keytool -v -list -keystore keys.pfx -storetype PKCS12 -storepass
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
1, Jan 18, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA-256): 0D:61:30:12:CB:0E:71:C0:F1:A0:77:EB:62:2F:91:9B:55:08:FC:3B:A5:C8:B4:C7:B4:CD:08:E9:2C:FD:2D:8A

If you didn't set an alias for the key when creating the PKCS12 file, you can change it:

keytool -changealias -alias "original alias" -destalias "my awesome alias" -keystore keys.pfx -storetype PKCS12 -storepass "password"

Now we finally get to the part where we generate the JWK. The final result is a JSON file which contains the public key from the earlier created certificate in JWK format, so that the service can accept the signed tokens.

The JWK is in the format of:

{
  "keys": [
    ...,
    {
      "kid": "something",
      "kty": "RSA",
      "use": "sig",
      "n": "…base64 public key values…",
      "e": "…base64 public key values…"
    }
  ]
}

Convert the PEM to JWK format with e.g. pem-jwk or with pem_to_jwks.py. (The service's own signing key stays in the PKCS12 keystore.) The values for the public key's n and e members are extracted from the private key with the following commands; the jq part extracts the public parts and excludes the private parts.

$ npm install -g pem-jwk
$ ssh-keygen -e -m pkcs8 -f private.pem | pem-jwk | jq '{kid: "something", kty: .kty , use: "sig", n: .n , e: .e }'

...

To check things, you can do the following.

Extract a private key and certificates from a PKCS12 file using OpenSSL:

$ openssl pkcs12 -in keys.pfx -out keys_out.txt

The private key, certificate, and any chain files will be parsed and dumped into the "keys_out.txt" file. The private key will still be encrypted.

To extract just the private key from p12 (key is still encrypted):

$ openssl pkcs12 -in keys.pfx -nocerts -out privatekey.pem

Decrypt the private key:

$ openssl rsa -in privatekey.pem -out privatekey_uenc.pem

Now if you convert the PEM to JWK you should get the same values as before.
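As a final sanity check you can exercise the whole RS256 round trip from the command line. This is a minimal sketch using only openssl and standard tools; the header and claims here are made-up examples:

$ header=$(printf '{"alg":"RS256","typ":"JWT","kid":"something"}' | openssl base64 -A | tr '+/' '-_' | tr -d '=')
$ payload=$(printf '{"sub":"service-a","iat":1579000000}' | openssl base64 -A | tr '+/' '-_' | tr -d '=')
$ printf '%s.%s' "$header" "$payload" > signing-input.txt
$ openssl dgst -sha256 -sign private.pem -out sig.bin signing-input.txt
$ openssl dgst -sha256 -verify public.pem -signature sig.bin signing-input.txt
Verified OK

The tr commands turn ordinary base64 into the base64url encoding JWT requires, and the signed input is exactly header.payload, which is what the receiving service verifies against the JWK.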

More to read: JWTs? JWKs? ‘kid’s? ‘x5t’s? Oh my!

Using NGINX Ingress Controller on Google Kubernetes Engine

If you've used Kubernetes you might have come across Ingress, which manages external access to services in a cluster, typically HTTP. When running on GKE the "default" is GLBC, which is a "load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API". It's easy to use but doesn't let you customize it much. The alternative is to use, for example, NGINX Ingress Controller, which is more down to earth. Here are my notes on configuring ingress-nginx with cert-manager on Google Kubernetes Engine.

This article takes much of its content from the great tutorial at DigitalOcean.

Deploying ingress-nginx to GKE

Provider specific steps for installing ingress-nginx to GKE are quite simple.

First you need to initialize your user as a cluster-admin with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
   --clusterrole cluster-admin \
   --user $(gcloud config get-value account)

Then, if you are using a Kubernetes version prior to 1.14, you need to change kubernetes.io/os to beta.kubernetes.io/os at line 217 of mandatory.yaml.

Now you're ready to create the mandatory resources. Use kubectl apply with the -f flag to specify the manifest file hosted on GitHub (or a locally edited copy, as below):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

$ kubectl apply -f ingress-nginx_mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

Create the LoadBalancer Service:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created

Verify installation:

$ kubectl get svc --namespace=ingress-nginx
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.10.10.1   1.1.1.1       80:30598/TCP,443:31334/TCP   40s

$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-6cb75cf6dd-f4cx7   1/1     Running   0          2m17s

Configure proxy settings

In some situations the payload for ingress-nginx might be too large and you have to increase the limit. Add the "nginx.ingress.kubernetes.io/proxy-body-size" annotation to your Ingress metadata with the value you need; 0 disables the body size limit entirely.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"

Troubleshooting

Check the Ingress Resource Events:

$ kubectl get ing ingress-nginx

Check the Ingress Controller Logs:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-6cb75cf6dd-f4cx7   1/1     Running   0          149m

$ kubectl logs -n ingress-nginx nginx-ingress-controller-6cb75cf6dd-f4cx7

Check the Nginx Configuration:

kubectl exec -it -n ingress-nginx nginx-ingress-controller-6cb75cf6dd-f4cx7  cat /etc/nginx/nginx.conf

Check if used Services Exist:

kubectl get svc --all-namespaces

Promote ephemeral to static IP

If you want to keep the IP you got for the ingress-nginx then promote it to static. As we bound our ingress-nginx IP to a subdomain we want to retain that IP.

To promote the allocated IP to static, you can update the Service manifest:

kubectl --namespace=ingress-nginx patch svc ingress-nginx -p '{"spec": {"loadBalancerIP": "1.1.1.1"}}'

And promote the IP to static in GKE/GCE:

gcloud compute addresses create ingress-nginx --addresses 1.1.1.1 --region europe-north1
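You can verify the promotion with gcloud; the region must match the one used above:

$ gcloud compute addresses describe ingress-nginx --region europe-north1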

Creating the Ingress Resource

Create your Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service, and apply it to the Kubernetes cluster.
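The ingress.yaml itself isn't shown here, but a minimal version could look like the following sketch; echo1.example.com and the echo1 Service are placeholders for your own subdomain and backend:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80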

$ kubectl apply -f ingress.yaml
ingress.extensions/ingress created

Verify installation:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

Installing and Configuring Cert-Manager

Next we'll install cert-manager into our cluster. It's a Kubernetes add-on that provisions TLS certificates from Let's Encrypt and other certificate authorities and manages their lifecycles.

Create namespace:

kubectl create namespace cert-manager

Install cert-manager and its Custom Resource Definitions (CRDs), like Issuers and ClusterIssuers:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.yaml

Verify installation:

kubectl get pods --namespace cert-manager

Rolling Out Production Issuer

Create a production certificate ClusterIssuer, prod_issuer.yaml:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your-name@yourdomain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Apply production issuer using kubectl:

kubectl create -f prod_issuer.yaml

Update ingress.yaml to use the "letsencrypt-prod" issuer:

metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

Apply the changes:

kubectl apply -f ingress.yaml

Verify that things look good:

kubectl describe ingress
kubectl describe certificate

Done!

Reset Hasura migrations and squash files

Using GraphQL for creating APIs is nowadays popular and there are different tools you can use. One of them is Hasura, an open-source engine that gives you realtime GraphQL APIs on new or existing Postgres databases. Hasura is quite easy to work with, but if your GraphQL schemas change a lot it creates plenty of migration files. This has some unwanted consequences (for example slowing down hasura migrate apply or even blocking it). Here are some notes on how to reset the state and create new migrations from the state that is on the server.

Note: From Hasura 1.0.0 onwards squashing is easier with the hasura migrate squash command, although it's still in preview. Before Hasura 1.0.0 you have to squash migrations manually, and that's what this blog post explains. The results are the same: multiple migrations squashed into a single one.
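With a recent enough CLI, the preview command looks roughly like this; the version number is the migration to start squashing from and is of course specific to your project:

$ hasura migrate squash --name "init" --from <version>

The manual route below achieves the same on pre-1.0.0 versions.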

The Hasura documentation provides a good guide on how to squash migrations, but in practice there are a couple of other things you may need to address. So let's combine the steps Hasura gives with some extra steps.

Reset Hasura migrations

First make a backup branch:

  1. $ git checkout master
  2. Create a backup branch:
    $ git checkout -b backup/migrations-before-resetting-20XX-XX-XX
  3. Update the backup branch to origin:
    $ git push origin backup/migrations-before-resetting-20XX-XX-XX

We are assuming you have a local Hasura running on Docker with something like the following docker-compose.yml:

version: "3.6"
services:
  postgres:
    image: postgres:11-alpine
    restart: always
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    command: postgres -c max_locks_per_transaction=2000
  graphql-engine:
    image: hasura/graphql-engine:v1.0.0-beta.6
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      HASURA_GRAPHQL_ADMIN_SECRET: changeme
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
volumes:
  db_data:

Create a local instance of Hasura with up-to-date migrations:

  1. $ docker-compose down -v
  2. $ docker-compose up
  3. $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Reset migrations to master:

  1. git checkout master
  2. git checkout -b reset-hasura-migrations
  3. rm -rf migrations/*

Reset the migration history on the server. On the Hasura SQL console, http://localhost:8080/console:

TRUNCATE hdb_catalog.schema_migrations;

Set up fresh migrations by taking the schema and metadata from the server. By default init only takes the public schema; other schemas must be listed with the --schema "your schema" parameter. Note down the version for later use.

  1. Create migration file:
    $ hasura migrate create "init" --from-server
  2. Mark the migration as applied on this server:
    $ hasura migrate apply --version "<version>" --skip-execution
  3. Verify status of migrations, should show only one migration with Present status:
    $ hasura migrate status
  4. You have brand new migrations now!

Resetting migrations on other environments

  1. Checkout the reset branch on local machine:
    $ git checkout -b reset-hasura-migrations
  2. Reset the migration history on remote server. On Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply migration status to remote server:
    $ hasura migrate apply --version "<version>" --skip-execution

Local environment Hasura status

For other developers, please refer to these instructions in order to get the backend into the same state.

Option 1: Keep old data

  1. Checkout the backup branch on local machine:
    $ git checkout backup/migrations-before-resetting-20XX-XX-XX
  2. Reset the migration history on local server. On Hasura SQL console:
    TRUNCATE hdb_catalog.schema_migrations;
  3. Apply migration status to local server:
    $ hasura migrate apply --version "<version>" --skip-execution

Option 2: Remove all and start from beginning

  1. Clean up the old docker volumes:
    $ docker-compose down -v
  2. Start up services:
    $ docker-compose up
  3. Checkout master:
    $ git checkout master
  4. Apply migrations:
    $ hasura migrate apply --endpoint=http://localhost:8080 --admin-secret=changeme

Possible extra steps

Now your Hasura migrations and database tables are in one migration init file, but sometimes things don't work out when applying it to an empty database. We are using Hasura audit-trigger and had to reorder the SQL clauses generated by migrate init and add some missing parts.

  1. Move schema creations after the audit clauses
  2. Move audit.audit_table(target_table regclass) to be the last audit clause and copy it from audit.sql
  3. Add the pg_trgm extension as done previously (fixes "operator does not exist: text <% text" in public.search_customers_by_name)
  4. Drop session constraints / indexes before creating new ones
  5. Create the session table only if it doesn't exist

Problems with installing Oracle DB 12c EE, ORA-12547: TNS: lost contact

For development purposes I wanted to install Oracle Database 12c Enterprise Edition to a Vagrant box so that I could play with it. It should've gone quite straightforwardly, but in my case things got complicated even though I had Oracle Linux and the prerequisites fulfilled. Everything went fine until it was time to run the DBCA and create the database.

The DBCA gave an "ORA-12547: TNS: lost contact" error, which is quite common. Google gave me a couple of resources to debug the issue. Oracle DBA Blog explained common issues which cause ORA-12547 and solutions to fix it.

One of the suggested solutions was to check that the following two files are not 0 bytes:

ls -lt $ORACLE_HOME/bin/oracle
ls -lt $ORACLE_HOME/rdbms/lib/config.o

And true, my oracle binary was 0 bytes:

-rwsr-s--x 1 oracle oinstall 0 Jul  7  2014 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

To fix the binary you need to relink it, and to do that first rename the following file:

$ cd $ORACLE_HOME/rdbms/lib
$ mv config.o config.o.bad

Then shut down the database and listener, and "relink all":

$ relink all

If only things were that easy. Unfortunately relinking ended in an error:

[oracle@oradb12c lib]$ relink all
/u01/app/oracle/product/12.1.0/dbhome_1/bin/relink: line 168: 13794 Segmentation fault      $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/install/modmakedeps.pl $ORACLE_HOME $ORACLE_HOME/inventory/make/makeorder.xml > $CURR_MAKEORDER
writing relink log to: /u01/app/oracle/product/12.1.0/dbhome_1/install/relink.log

After googling some more I found a similar problem and solution: relink the executables by running make install.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk install
 
cd $ORACLE_HOME/network/lib
make -f ins_net_server.mk install
 
If needed you can also relink other executables:
make -kf ins_sqlplus.mk install (in $ORACLE_HOME/sqlplus/lib)
make -kf ins_reports60w.mk install (on CCMgr server)
make -kf ins_forms60w.install (on Forms/Web server)

But of course it didn't work out of the box and failed with an error:

/bin/ld: cannot find -ljavavm12
collect2: error: ld returned 1 exit status
make: *** [/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/lib/oracle] Error 1

The solution is to copy libjavavm12.a under $ORACLE_HOME/lib as explained:

cp $ORACLE_HOME/javavm/jdk/jdk6/lib/libjavavm12.a $ORACLE_HOME/lib/

Run the make install commands from above again and you should have a working oracle binary:

-rwsr-s--x 1 oracle oinstall 323649826 Feb 17 16:27 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

After this I ran the relink again, which worked, and the install of the database also worked fine.

cd $ORACLE_HOME/bin
relink all

Start the listener:

lsnrctl start LISTENER

Create the database:

dbca -silent -responseFile $ORACLE_BASE/installation/dbca.rsp

The problems I encountered while installing Oracle Database 12c Enterprise Edition to Oracle Linux 7, albeit in Vagrant and with Ansible, were surprising, as you would think that on a certified platform it should just work. If I had been using CentOS or Ubuntu it would've been a totally different matter.

You can see the Ansible tasks I did to get Oracle DB 12c EE installed on Oracle Linux 7 in my vagrant-experiments GitHub repo.

Oracle DB 12c EE Ansible Tasks

Creating Vagrant Base Box with Veewee

Vagrant is a great tool for creating and configuring lightweight, reproducible, portable virtual machine environments, but the first step for using Vagrant, downloading an existing "base box", raises some questions. E.g. how are these unverified boxes built? So you might end up building your own base box, which is often time-consuming and cumbersome. Fortunately there's a tool called Veewee which aims to automate all the steps for building base boxes and to collect best practices in a transparent way.

Vagrant Base Box with Veewee

Veewee is a tool for easily (and repeatedly) building custom Vagrant base boxes, KVMs, and virtual machine images. You can use it to build a Vagrant box on Linux, Mac OS X and Windows, but I found out that fulfilling the requirements on Windows is quite difficult (read: Ruby and RVM), so just forget it.

To get started there are some requirements you need to fulfill. First you'll need to install at least one of the supported virtual machine providers, like VirtualBox, and second you need some development libraries.

On Ubuntu 15.04 Linux and using VirtualBox you need these packages:

$ apt-get install virtualbox git curl ruby ruby-dev libxslt1-dev libxml2-dev zlib1g-dev

Install RVM on Linux

For the Ruby environment it's recommended to use either rvm or rbenv. I chose RVM and followed the RVM installation documentation.

Install mpapis public key:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

If the keyserver fails, you can use: $ curl -sSL https://rvm.io/mpapis.asc | gpg --import -

Install RVM stable with ruby:

$ \curl -sSL https://get.rvm.io | bash -s stable --ruby

Installing Veewee with RVM

With RVM already installed, ensure a ruby version that's supported by Veewee is available on your machine:

$ source /home/marko/.rvm/scripts/rvm
$ rvm install ruby

Clone the veewee project from source:

$ cd <path_to_workspace>
$ git clone https://github.com/jedi4ever/veewee.git
$ cd veewee

Set the local gemset and ruby version within the current directory:

$ rvm use ruby@veewee --create

Run bundle install to install Gemfile dependencies for your local gemset:

$ gem install bundler
$ bundle install

Bundle install will take some time.

Building Vagrant Box with Veewee

Veewee uses definitions to build new virtual machines; a 'definition' is derived from a 'template', and preconfigured templates are found in the templates/ folder. Veewee Basics explains how you can create your own customized definition.
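To list the bundled templates and start a definition of your own from one of them, something like the following should work (the box name and template name here are just examples):

$ bundle exec veewee vbox templates
$ bundle exec veewee vbox define 'my-centos-box' 'CentOS-6.6-x86_64-minimal'

I went a slightly different way and reused an existing definition, as described next.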

For my customized Vagrant box I decided to use Tommy Muehle's definition as a template as it contained what I wanted: a simple CentOS 6.6 box with Puppet. I just changed the localization to Finland and made the box bigger with the WebLogic use case in mind. My definition for the Vagrant box can be found in GitHub.

To use my definition just clone the repository for the CentOS 6.6 box, copy the "centos-6.6-x86_64_puppet" folder to the definitions/ folder under Veewee and make your own changes if needed. After you're done, run:

$ bundle exec veewee vbox build centos-6.6-x86_64_puppet

The build command runs Veewee scripts and automates the manual steps needed while installing a new Linux distribution.

Installing CentOS to Vagrant Box with Veewee

To export the Box for further use with Vagrant, run:

$ bundle exec veewee vbox export centos-6.6-x86_64_puppet

The above command actually calls "vagrant package --base 'centos-6.6-x86_64_puppet' --output 'boxes/centos-6.6-x86_64_puppet'". The machine gets shut down, exported and packed into a centos-6.6-x86_64_puppet.box file inside the current directory.

And you're all done. Now you can use the base box you just created for Vagrant boxes. Import it into Vagrant's box repository and use it to initialize a fresh project:

$ vagrant box add 'centos-6.6-x86_64_puppet' 'centos-6.6-x86_64_puppet.box'
$ vagrant init 'centos-6.6-x86_64_puppet'

Using Veewee to build a Vagrant box is simple and, what's more important, it's automated and reproducible. Using Ruby and RVM on Windows 7 turned out to be practically impossible, but an old ThinkPad W510 with Ubuntu 15.04 worked nicely. Of course you could create a base box the Vagrant way, which means installing and configuring your Linux manually. But why would you want to do that if you can just automate it?

Disabling Derby in Oracle WebLogic 12c

Oracle WebLogic has some interesting traits that frustrate developers. From WebLogic 10.3.4 onwards the Apache Derby database is included in the installation. That's fine, but from the 12.1.2 release it also starts automatically, which is usually unwanted, useless and a waste of resources. Previous versions of WebLogic didn't automatically start the Derby database.

Fortunately you can disable it as basically there is a simple IF statement in the "$WL_DOMAIN_HOME$\bin\setDomainEnv.cmd" file:

@REM Set DERBY_FLAG, if derby is available.
 
if exist %WL_HOME%\common\derby\lib\derby.jar (
    set DERBY_FLAG=true
)

If you want to prevent Derby from starting you have three options:

  • Rename "derby.jar" to something else
  • Delete the IF statement from start-up script
  • Set the DERBY_FLAG to false in the startWebLogic.cmd script

I couldn't find Oracle's documentation about Derby in WebLogic, but those three options seem to work. I prefer the third option, which is quite easy to configure. (via Oracle Community)

In my "$WL_DOMAIN_HOME$\bin\startWebLogic.cmd" I added:

...
@REM Call setDomainEnv here.
 
@REM Disabling Derby
set DERBY_FLAG=false
...

Connecting Jabra HALO2 Bluetooth Headset with Windows 7

Recently I got a Jabra HALO2 Bluetooth headset for teleconferences but had problems getting it to work with Windows 7 and a Dell Latitude E6530. Windows found the device and wanted to install drivers but couldn't find any. The solution was easy: update your laptop's Bluetooth drivers. I downloaded the Dell Wireless 380 Bluetooth Application, version 6.5.1.4000,A02, from Dell's drivers page and got it working.

Jabra HALO2 is a wireless Bluetooth headset with a dual microphone for noise filtering, and it can be paired with 2 Bluetooth-enabled devices. It can also be used with a USB cable or a 3.5 mm cord and can control the music player and sound volume. The battery lasts for 8 hours of talk or music and 13 days on standby.

The wireless headset works with e.g. Windows 7, but some laptops, like my Dell Latitude E6530, need the manufacturer's specific Bluetooth drivers before Windows starts to play nice with them. At first I got the "Bluetooth peripheral device driver not found" error when trying to connect the Bluetooth device, and as the Jabra HALO2 headset doesn't need drivers of its own, it was time to look for them from Dell's support.

Dell's drivers page doesn't list Bluetooth drivers directly, so I figured to get the "Dell Wireless 380 Bluetooth Application" version 6.5.1.4000,A02 (31/10/2013), which provides an application for the DW380 Bluetooth module. After installing the 245 MB package Windows started to install the missing drivers and the Bluetooth headset's hardware functions were found: AV, Hands-free, Headset and Remote Control. I can't say that updating the drivers will help everyone, but from what I read about this issue it was the solution that got these and other Bluetooth headphones working with different laptops.

The start with my new headset wasn't the easiest, but after I got it working the Jabra HALO2 Bluetooth headset works nicely and is pleasant to use for teleconferences with Lync and with my Lumia 800 mobile phone.

Jabra HALO2 bluetooth headset connected
Headset shown in playback devices
Headset's Bluetooth services

Do a clean install of Windows 8 with an upgrade key

There are times when you have to do a clean install of Windows 8, but if you have just an upgrade key you need to jump through a couple of extra hoops before you can activate the new install. The upgrade key doesn't prevent you from installing to a clean disk, but when you try to activate you get error 0x8007007B, saying your product key can only be used for upgrading. Another fine example of how Microsoft makes things complicated for legitimate users.

Fortunately there's a way to fix that issue as Lifehacker's article tells:

  • Open the Registry Editor (Win + R, type regedit).
  • Navigate to "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\OOBE\" and change the MediaBootInstall value from 1 to 0 (a command-line equivalent is shown after this list).
  • Open the Command Prompt (Win + R, type cmd). Right-click on the Command Prompt icon and run it as an administrator.
  • Type slmgr -rearm and press Enter.
  • Reboot Windows.
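If you prefer doing the same from an elevated Command Prompt, the registry tweak and the rearm can be scripted like this (value name and path as in the steps above):

reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Setup\OOBE" /v MediaBootInstall /t REG_DWORD /d 0 /f
slmgr -rearm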

After that is done and you get back into Windows, you should be able to run the Activation utility and activate Windows as normal, without getting an error.

Apparently you can also call Microsoft Support and they will walk you through the proper way of doing this, because they understand you may have bought a new drive. There is a dialog you can get to in system tools which will ask you for a numerical code. The support personnel give you the number, you click "OK" after typing it in, then run the activation again and it works. This process is likely doing the workaround mentioned above, but through an approved administrative process.

By the way, restarting Windows 8 is most easily done by left-clicking once on an empty spot on the desktop and pressing Alt + F4.

Setting up LAMP stack on OS X

Setting up a LAMP stack for web development on OS X can be done with 3rd party software like MAMP, but as Mac OS X comes with pre-installed Apache and PHP it's easy to use the native setup. You just need to configure Apache and PHP and install MySQL.

Setup Apache2

Set the ServerName to localhost to suppress the warning about the fully qualified domain name, and enable the PHP module.

$ sudo vim /etc/apache2/httpd.conf
ServerName localhost:80
LoadModule php5_module libexec/apache2/libphp5.so

Create "virtual hosts" under your Sites. Change the username to your account's username.

~$ sudo vim /etc/apache2/users/username.conf
<VirtualHost *:80>
  ServerName dev
  DocumentRoot /Users/username/Sites
  VirtualDocumentRoot /Users/username/Sites/%-2/htdocs
  UseCanonicalName Off
 
  <Directory "/Users/username/Sites/*/htdocs">
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

Now Apache serves your projects from your home directory's Sites folder. Apache will serve files from the htdocs folder like "~/Sites/projectname/htdocs".

Now just restart Apache and check that it's running.

$ sudo apachectl restart
$ ps aux | grep httpd
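To try out the vhost setup, drop a phpinfo page into a test project and point a hostname at it. The project name here is just an example; with the %-2 pattern above, the second-to-last part of the hostname selects the project folder:

$ mkdir -p ~/Sites/testproject/htdocs
$ echo '<?php phpinfo();' > ~/Sites/testproject/htdocs/index.php
$ echo '127.0.0.1 testproject.dev' | sudo tee -a /etc/hosts

Opening http://testproject.dev should now show the PHP info page.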

Setup PHP

$ sudo cp /etc/php.ini.default /etc/php.ini

Edit php.ini for easier debugging:

error_reporting  =  E_ALL | E_STRICT
display_errors = On
html_errors = On

Setup MySQL

MySQL can be installed directly from Oracle's MySQL packages or by using Homebrew.

Install Homebrew

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install MySQL using Homebrew

$ brew install mysql

Install the MySQL system tables and have it run as your system user:

$ unset TMPDIR
$ mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp

Start MySQL and check that it's running

$ mysql.server start
$ ps aux | grep mysql

Reset the root password. Change the "5.5.27" to your installed version number.

$ /usr/local/Cellar/mysql/5.5.27/bin/mysqladmin -u root password 'YOUR_NEW_PASSWORD'

As we are using the Homebrew package for MySQL and the default php.ini file, PHP tries to connect to MySQL through the default socket at /var/mysql/mysql.sock, which doesn't exist as MySQL is using /tmp/mysql.sock. Just change all instances of /var/mysql/mysql.sock to /tmp/mysql.sock:

$ sudo sed -i "" "s:/var/mysql/mysql.sock:/tmp/mysql.sock:g" /etc/php.ini
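You can quickly confirm the change took effect; the exact directive names depend on which MySQL extensions your PHP build ships with:

$ php -i | grep mysql.default_socket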

And you're done.

Web application test automation with Robot Framework

Software quality has always been important, but it seems that lately it has become a more generally acknowledged fact that quality assurance and testing aren't things to be left behind. With Java EE web applications you have different ways to achieve test coverage and to test that your application works, with tools like JUnit, Mockito and DBUnit. But what about testing your web application with different browsers? One great way is to use Robot Framework, a generic test automation framework which, when combined with Selenium 2, makes both writing and running your tests quite intuitive.


Introduction

Robot Framework is a generic test automation framework for acceptance testing, and its tabular test data syntax is almost plain English and easy to understand. Its testing capabilities can be extended by test libraries implemented either with Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases. Robot Framework itself is open source and released under Apache License 2.0, and most of the libraries and tools in the ecosystem are also open source. The development of the core framework is supported by Nokia Siemens Networks.

Robot Framework doesn't do any specific testing activity itself; instead it acts as a front end for libraries like Selenium2Library. Selenium2Library is a web testing library for Robot Framework that leverages the Selenium 2 (WebDriver) libraries from the Selenium project. In practice it starts the browser (e.g. IE, Firefox, Chrome) and runs the tests against it natively as a user would. There's no need to manually click through the user interface.

Robot Framework has good documentation, and by going through the "Web testing with Robot Framework and Selenium2Library" demo you see how it's used in web testing, get an introduction to the test data syntax, and see how tests are executed and what the logs and reports look like. For a more detailed view of Robot Framework's features you can read the User Guide.

Installing test tools

The "Web testing with Robot Framework and Selenium2Library" demo is good starting point for getting to know Robot Framework but it more or less skips the details of setting up the system and as the installation instructions are a bit too verbose here is an example how to install and use Robot Framework and Selenium 2 in 64-bit Windows 7.

Python installation

First we need Python as a prerequisite to run Robot Framework, and we install Python version 2.7.x as Robot Framework is currently not compatible with Python 3.x. From the Python download page select the Python 2.7.9 Windows X86-64 Installer.

For using the RIDE editor we also need wxPython. From the download page select wxPython2.8-win64-unicode-py27 for 64-bit Python 2.7.

Next we need to set up the PATH environment variable in Windows if you didn't set it up when you installed Python.

Open Start > Settings > Control Panel > System > Advanced > Environment Variables
Select System variables > PATH > Edit and add e.g. ;C:\Python27;C:\Python27\Scripts at the end of the value.
Exit the dialog with OK to save the changes.

Starting from Python 2.7.9, the standard Windows installer by default installs and activates pip.

Robot Framework and Selenium2Library installation

In practice it is easiest to install Robot Framework and Selenium2Library along with their dependencies using the pip package manager. Once you have pip installed, all you need to do is run these commands in your Command Prompt:

1. pip install robotframework
2. pip install robotframework-selenium2library

It's good to notice that pip has a "feature": unless a specific version is given, it installs the latest possible version even if that is an alpha or beta release. A workaround is giving the version explicitly, like pip install robotframework==2.7.7.

RIDE installation

RIDE is a lightweight and intuitive editor for Robot Framework test case files. It can be installed by using the Windows installer (select robotframework-ride-1.1.win-amd64.exe) or with pip:

pip install robotframework-ride

The Windows installer creates a shortcut on the desktop, and you can start RIDE from the Command Prompt with the command ride.py.

Now you have everything you need to create and execute Robot Framework tests.

Executing Robot Framework tests

As described in WebDemo, running the tests requires the demo application, located under the demoapp directory, to be running. It can be started by executing it from the command line:

python demoapp/server.py

After the demo application is started, it will be available at http://localhost:7272 and it needs to be running while executing the automated tests. It can be shut down by using Ctrl-C.

In Robot Framework each file contains one or more tests and is treated as a test suite. Every directory that contains a test suite file or directory is also a test suite. When Robot Framework is executed on a directory it will go through all files and directories of the correct kind except those that start with an underscore character.
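To give an idea of the tabular syntax, a test case file looks along these lines. The sketch below is adapted from the WebDemo, so treat the locators, credentials and page title as illustrative:

*** Settings ***
Library           Selenium2Library

*** Variables ***
${SERVER}         localhost:7272
${BROWSER}        firefox

*** Test Cases ***
Valid Login
    Open Browser    http://${SERVER}/html/    ${BROWSER}
    Input Text      username_field    demo
    Input Text      password_field    mode
    Click Button    login_button
    Title Should Be    Welcome Page
    [Teardown]    Close Browser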

WebDemo's test cases are located in the login_tests directory, and to execute them all, type in your Command Prompt:

pybot login_tests

Running the tests opens a browser window which Selenium 2 is driving natively as a user would and you can see the interactions.
When the test has finished executing, three files will have been generated: report.html, log.html and output.xml. On failed tests Selenium takes screenshots, which are named like selenium-screenshot-1.png. The browser can also be run on a remote machine using the Selenium Server.

You can also run an individual test case file and use various command line options (see pybot --help) supported by Robot Framework:

pybot login_tests/valid_login.txt
pybot --test InvalidUserName --loglevel DEBUG login_tests

If you selected Firefox as your browser and get an error like "TypeError: environment can only contain strings", that's a bug in Selenium's Firefox profile handling. You can fix it with a "monkey patch" to C:\Python27\Lib\site-packages\selenium\webdriver\firefox\firefox_profile.py.

Using different browsers

The browser that is used is controlled by ${BROWSER} variable defined in resource.txt resource file. Firefox browser is used by default, but that can be easily overridden from the command line.

pybot --variable BROWSER:Chrome login_tests
pybot --variable BROWSER:IE login_tests

Browsers like Chrome and Internet Explorer require separate Internet Explorer Driver and Chrome Driver to be installed before they can be used. InternetExplorerDriver can be downloaded from Selenium project and ChromeDriver from Chromium project. Just place them both somewhere in your PATH.

With the Internet Explorer Driver you can get an error like "Unexpected error launching Internet Explorer. Protected Mode settings are not the same for all zones. Enable Protected Mode must be set to the same value (enabled or disabled) for all zones." As the driver's documentation notes, you must set the Protected Mode settings for each zone to the same value. To set the Protected Mode settings in Internet Explorer, choose "Internet Options..." from the Tools menu, and click on the Security tab. For each zone, there will be a check box at the bottom of the tab labeled "Enable Protected Mode".

Reading the results

After the tests have run there are a couple of result files to read: report.html and log.html.

The report.html shows the results of your tests and its background is green when all tests have passed and red if any have failed. It also shows "Test Statistics" for how many tests have passed and failed. "Test Details" shows how long the test took to run and, if it failed, what the fail message was.

The log.html gives you more detailed information about why some test fails if the fail message doesn't make it obvious. It also gives a detailed view of the execution of each of the tests.

Summary

From the short experience I have had playing with Robot Framework, it seems to be a powerful tool for designing and executing tests and a good way to improve your application's overall quality.

Next it's time to get to know the Robot Framework syntax better, write some tests and run the Selenium Server. Also the Maven plugin and the RobotFramework-EclipseIDE plugin look interesting.

References

Robot Framework documentation
Robot Framework User Guide
Web testing with Robot Framework and Selenium2Library demo
RIDE: light-weight and intuitive editor for Robot Framework test case files