Using NGINX Ingress Controller on Google Kubernetes Engine

If you've used Kubernetes you might have come across Ingress, which manages external access to services in a cluster, typically over HTTP. When running on GKE the default is GLBC, a "load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API". It's easy to use but doesn't let you customize it much. The alternative is to use, for example, the NGINX Ingress Controller, which is more down to earth. Here are my notes on configuring ingress-nginx with cert-manager on Google Kubernetes Engine.

This article takes much of its content from the great tutorial at DigitalOcean.

Deploying ingress-nginx to GKE

Provider-specific steps for installing ingress-nginx on GKE are quite simple.

First you need to initialize your user as a cluster-admin with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
   --clusterrole cluster-admin \
   --user $(gcloud config get-value account)

Then, if you are using a Kubernetes version older than 1.14, you need to change kubernetes.io/os to beta.kubernetes.io/os at line 217 of mandatory.yaml.
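
The affected field is the controller Deployment's nodeSelector; on clusters older than 1.14 it should end up looking roughly like this:

      # nodeSelector of the nginx-ingress-controller Deployment in mandatory.yaml
      nodeSelector:
        beta.kubernetes.io/os: linux   # on 1.14+ this stays kubernetes.io/os: linux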

Now you're ready to create the mandatory resources; use kubectl apply and the -f flag to specify the manifest file hosted on GitHub:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

$ kubectl apply -f ingress-nginx_mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

Create the LoadBalancer Service:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created

Verify installation:

$ kubectl get svc --namespace=ingress-nginx
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.10.10.1   1.1.1.1       80:30598/TCP,443:31334/TCP   40s

$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-6cb75cf6dd-f4cx7   1/1     Running   0          2m17s

Configure proxy settings

In some situations the request body sent through ingress-nginx might be larger than the default limit and you have to increase it. Add the "nginx.ingress.kubernetes.io/proxy-body-size" annotation to your Ingress metadata with the value you need; "0" disables the body size limit.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"

Troubleshooting

Check the Ingress Resource Events:

$ kubectl get ing ingress-nginx

Check the Ingress Controller Logs:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-6cb75cf6dd-f4cx7   1/1     Running   0          149m

$ kubectl logs -n ingress-nginx nginx-ingress-controller-6cb75cf6dd-f4cx7

Check the Nginx Configuration:

kubectl exec -it -n ingress-nginx nginx-ingress-controller-6cb75cf6dd-f4cx7 -- cat /etc/nginx/nginx.conf

Check if used Services Exist:

kubectl get svc --all-namespaces

Promote ephemeral to static IP

If you want to keep the IP address you got for ingress-nginx, promote it to a static IP. As we bound our ingress-nginx IP to a subdomain, we want to retain that IP.

To promote the allocated IP to static, you can update the Service manifest:

kubectl --namespace=ingress-nginx patch svc ingress-nginx -p '{"spec": {"loadBalancerIP": "1.1.1.1"}}'

And promote the IP to static in GKE/GCE:

gcloud compute addresses create ingress-nginx --addresses 1.1.1.1 --region europe-north1
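
To confirm that the address is now reserved, you can describe it, using the same address name and region as above:

gcloud compute addresses describe ingress-nginx --region europe-north1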

Creating the Ingress Resource

Create an Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service, and apply it to the Kubernetes cluster.
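
As a sketch, ingress.yaml could look something like the following; the host and the backend Service name and port are placeholders for your own setup:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.yourdomain.com   # placeholder subdomain pointing at the load balancer IP
    http:
      paths:
      - path: /
        backend:
          serviceName: app     # placeholder backend Service
          servicePort: 80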

$ kubectl apply -f ingress.yaml
ingress.extensions/ingress created

Verify installation:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

Installing and Configuring Cert-Manager

Next we'll install cert-manager into our cluster. It's a Kubernetes add-on that provisions TLS certificates from Let’s Encrypt and other certificate authorities and manages their lifecycles.

Create namespace:

kubectl create namespace cert-manager

Install cert-manager and its Custom Resource Definitions (CRDs), like Issuers and ClusterIssuers:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.yaml

Verify installation:

kubectl get pods --namespace cert-manager

Rolling Out Production Issuer

Create a production certificate ClusterIssuer, prod_issuer.yaml:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your-name@yourdomain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Apply the production issuer using kubectl:

kubectl create -f prod_issuer.yaml

Update ingress.yaml to use the "letsencrypt-prod" ClusterIssuer:

metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
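
For cert-manager to request a certificate for the Ingress, the spec also needs a tls section listing the hosts and a secret name where the certificate will be stored. A minimal sketch, with placeholder host and secret names:

spec:
  tls:
  - hosts:
    - app.yourdomain.com             # placeholder subdomain
    secretName: app-yourdomain-tls   # cert-manager stores the issued certificate here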

Apply the changes:

kubectl apply -f ingress.yaml

Verify that things look good:

kubectl describe ingress
kubectl describe certificate
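
Once the certificate shows a Ready condition, you can also verify TLS end to end with curl, replacing the host with your own; the verbose output should show a Let's Encrypt issuer:

curl -v https://app.yourdomain.com 2>&1 | grep -i issuer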

Done.

Monthly notes 48

This time the monthly notes are about learning Node.js best practices and some interesting approaches to (Node.js) software architecture. Happy reading and become a better developer!

Issue 48, 25.2.2020

Learning

Docker and Node.js Best Practices talk at DockerCon 2019
Slides and Examples.
tl;dr; Use even numbered LTS releases; Don’t use :latest tag; Use Debian:slim/stretch or Alpine; Add node_modules to .dockerignore; Use node user; Proper shutdown (--init, tini, capture SIGINT); Multi-stage builds; healthchecks;

Node.js Best Practices
More than 80 best practices, style guides, and architectural tips with additional info. The repository is a summary and curation of the top-ranked content on Node.js best practices.

Testing in production: ideas, experiences, limits, roadblocks
Talk from Bristech 2019 by Jorge Marin. "Are you afraid of testing in production? Do you test in production? Do you use real data? By definition testing in production is hard. This talk puts together my experience testing in production a large scale system that affects millions of users."

Software Architecture

Using Clean Architecture for Microservice APIs in Node.js with MongoDB and Express
This is an interesting approach to constructing your application. "Talk about Bob Martin's Clean Architecture model and I will show you how we can apply it to a Microservice built in node.js with MongoDB and Express JS."

Notes from security in the age of Docker & Kubernetes

Security is always the more obscure part of software development, and while container runtimes provide good isolation from the host operating system when you use Docker and run containers in Kubernetes, you should not assume you are free from exploits. Remember to follow the same best practices you used when you were not running containers.

Here are my notes from the How Soon We Forget: Security in the Age of Docker & Kubernetes article, which looked at some common regressions in security practices associated with the migration to Docker and Kubernetes and suggested ways to avoid them. To continue the topic, there are also notes from the Taking the Scissors away: make your Kubernetes Cluster safe for DevOps talk, which gives good advice and looks at the concepts of enforcing security of the application workloads from both conceptual and practical points of view. Also Best Practices for Kubernetes deployment and Securing a cluster are worth reading.

These notes don't explain things in detail, so it's worth reading the documentation or the articles mentioned above.

Running as non-root

"One of the most common and easiest security lapses to address is running binaries as root."

Use non-root Docker images. It requires effort and is easier for greenfield projects.

In Kubernetes, you can enforce running containers as non-root using the pod and container security context.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        name: app
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
      securityContext:
        fsGroup: 2866
        runAsNonRoot: true
        runAsUser: 2866

Use read-only file system

"Do you really need to write files within a container?"

In Kubernetes, set the root file system to read-only using the pod security context and create an emptyDir volume to mount at /tmp.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - env:
        - name: TMPDIR
          value: /tmp
        image: my/app:1.0.0
        name: app
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      volumes:
      - emptyDir: {}
        name: tmp

Protect against Denial of service

"Setting resources limits for your containers protects against a host of denial of service attacks."

With resource limits you can restrict a container to, for example, half a CPU and half a GiB of memory. The Kubernetes Deployment specification would look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        name: app
        resources:
          limits:
            cpu: 500m
            memory: 512Mi

Health and readiness checks

"It's a good idea to make sure if your application is not healthy that it shuts down properly so it can be replaced. Kubernetes can help you with this if your application can respond to health and readiness checks and you configure them in your pod specification."

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
      name: app
    spec:
      containers:
      - image: my/app:1.0.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 3
        name: app
        ports:
        - containerPort: 8080   # assumed application port; the probes refer to it by the name "http"
          name: http
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 3

The liveness probe should indicate if the application is running and the readiness probe should indicate if the application can serve requests. Read more from the Kubernetes documentation.

Use Kubernetes policies

"Kubernetes provides network and pod security policies that give you control over what pods can communicate with each other and what types of pods can be started, respectively."

Pod Security Policies allow you to control what capabilities pods can have. When pod security policies are enabled, Kubernetes will only start pods that satisfy the constraints of the pod security policies.

They say that Pod Security Policy is actually one of the most difficult things to configure properly in a Kubernetes cluster. For example, it's easy to completely lock up your cluster so that you can't create any pods.

An example of a pod security policy that enforces some of the best practices mentioned: non-privileged containers, a read-only root filesystem, a minimal set of allowed volumes, and no use of the host's network, PID or IPC namespaces.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: best-practices
spec:
  # non-privileged containers
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  # restrict file systems
  readOnlyRootFilesystem: true
  volumes:
    - configMap
    - emptyDir
    - projected
    - secret
    - downwardAPI
    - persistentVolumeClaim
  # limit interaction with host
  hostNetwork: false
  hostIPC: false
  hostPID: false
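
Note that a PodSecurityPolicy does nothing on its own: the admission controller must be enabled and the pods' service accounts must be authorized to use the policy via RBAC, otherwise no pods can be scheduled, which is the "locking up your cluster" scenario above. A minimal sketch of such a binding, assuming the best-practices policy above and granting it to all service accounts:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-best-practices
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['best-practices']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-best-practices
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-best-practices
subjects:
# all service accounts in the cluster; scope this down for real use
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts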

Network Policies

"Network policies allow you to define ingress and egress rules, i.e., firewall rules, for your pods using IP CIDR ranges and Kubernetes label selectors for pods and namespaces, similar to how Kubernetes service resources select pods."

For example you can create a network policy which will deny ingress from pods in other namespaces but allow pods within the namespace to communicate with each other.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
  namespace: mine
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

There is a GitHub repository of common network policies to help you get started using network policies.

Namespaces

Use namespaces and ensure that you've set sensible defaults for each of them.
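
For example, a LimitRange can give every container in a namespace default resource requests and limits, so the protections above apply even when a Deployment forgets to set them; the values here are arbitrary:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: mine
spec:
  limits:
  - type: Container
    default:            # limits applied when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # requests applied when a container sets none
      cpu: 100m
      memory: 128Mi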

Summary

"defense in depth" is still important even in the world of containers. The container is not safe. The operating system is not safe. The host is not safe. The network is not safe.

How Soon We Forget: Security in the Age of Docker & Kubernetes