Using NGINX Ingress Controller on Google Kubernetes Engine

If you’ve used Kubernetes you might have come across Ingress, which manages external access to services in a cluster, typically HTTP. When running on GKE the “default” is GLBC, a “load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API”. It’s easy to use but doesn’t let you customize much. The alternative is to use, for example, the NGINX Ingress Controller, which is more down to earth. Here are my notes on configuring ingress-nginx with cert-manager on Google Kubernetes Engine.

This article takes much of its content from the great tutorial at DigitalOcean.

Deploying ingress-nginx to GKE

Provider-specific steps for installing ingress-nginx on GKE are quite simple.

First you need to initialize your user as a cluster-admin with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
   --clusterrole cluster-admin \
   --user $(gcloud config get-value account)

Then, if you are using a Kubernetes version prior to 1.14, you need to change kubernetes.io/os to beta.kubernetes.io/os at line 217 of mandatory.yaml.
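
If you need that change, one way to make it is a quick sed over a locally downloaded copy. This is just a sketch, assuming the manifest is saved as ingress-nginx_mandatory.yaml and still contains the default nodeSelector line:

sed -i 's|kubernetes.io/os: linux|beta.kubernetes.io/os: linux|' ingress-nginx_mandatory.yaml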

Now you’re ready to create the mandatory resources. Use kubectl apply with the -f flag, pointing it either at the manifest hosted on GitHub or at a local copy (such as ingress-nginx_mandatory.yaml) if you edited it in the previous step:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

$ kubectl apply -f ingress-nginx_mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

Create the LoadBalancer Service:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created

Verify installation:

$ kubectl get svc --namespace=ingress-nginx
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.10.10.1   1.1.1.1       80:30598/TCP,443:31334/TCP   40s

$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-6cb75cf6dd-f4cx7   1/1     Running   0          2m17s

Configure proxy settings

In some situations the request payload may exceed the default body size limit of ingress-nginx and you have to increase it. Add the “nginx.ingress.kubernetes.io/proxy-body-size” annotation to your Ingress metadata with the value you need; 0 disables the limit entirely.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"

Troubleshooting

Check the Ingress Resource Events:

$ kubectl get ing ingress
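
kubectl get only lists the resource; to see the events themselves (useful when a certificate or backend is not being picked up), describe it, assuming the Ingress is named ingress as in the manifest above:

$ kubectl describe ing ingress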

Check the Ingress Controller Logs:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-6cb75cf6dd-f4cx7   1/1     Running   0          149m

$ kubectl logs -n ingress-nginx nginx-ingress-controller-6cb75cf6dd-f4cx7

Check the Nginx Configuration:

kubectl exec -it -n ingress-nginx nginx-ingress-controller-6cb75cf6dd-f4cx7 -- cat /etc/nginx/nginx.conf

Check if used Services Exist:

kubectl get svc --all-namespaces

Promote ephemeral to static IP

If you want to keep the IP you got for ingress-nginx, promote it to a static address. Since we bound the ingress-nginx IP to a subdomain in DNS, we want to retain it.

First, pin the allocated IP in the Service by patching its loadBalancerIP field:

kubectl --namespace=ingress-nginx patch svc ingress-nginx -p '{"spec": {"loadBalancerIP": "1.1.1.1"}}'

Then reserve the IP as a static address in GKE/GCE:

gcloud compute addresses create ingress-nginx --addresses 1.1.1.1 --region europe-north1
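
You can check that the reservation exists with gcloud (the address name ingress-nginx matches the one created above):

gcloud compute addresses list --filter="name=ingress-nginx"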

Creating the Ingress Resource

Create your Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service, and apply it to the Kubernetes cluster.
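
As a minimal sketch, ingress.yaml could look like the following; the host echo.yourdomain.com and the backend Service echo on port 80 are placeholders for your own:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echo.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80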

$ kubectl apply -f ingress.yaml
ingress.extensions/ingress created

Verify that the ingress-nginx controller pods are still healthy:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

Installing and Configuring Cert-Manager

Next we’ll install cert-manager into our cluster. It’s a Kubernetes service that provisions TLS certificates from Let’s Encrypt and other certificate authorities and manages their lifecycles.

Create namespace:

kubectl create namespace cert-manager

Install cert-manager and its Custom Resource Definitions (CRDs), such as Issuers and ClusterIssuers:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.yaml

Verify the installation; you should see the cert-manager, cert-manager-cainjector and cert-manager-webhook pods running:

kubectl get pods --namespace cert-manager

Rolling Out Production Issuer

Create a production certificate ClusterIssuer, prod_issuer.yaml:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your-name@yourdomain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Apply the production issuer using kubectl:

kubectl create -f prod_issuer.yaml
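
You can check that the issuer registered with the ACME server by describing it; the Ready condition should become True:

kubectl describe clusterissuer letsencrypt-prod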

Update ingress.yaml to use the “letsencrypt-prod” issuer:

metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
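
For cert-manager to actually issue a certificate, the Ingress also needs a tls section naming the hosts to secure and the Secret to store the certificate in. A minimal sketch, assuming the placeholder host echo.yourdomain.com from above and a hypothetical Secret name echo-tls (keep your existing rules section as it is):

spec:
  tls:
  - hosts:
    - echo.yourdomain.com
    secretName: echo-tls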

Apply the changes:

kubectl apply -f ingress.yaml

Verify that things look good:

kubectl describe ingress
kubectl describe certificate

Done.

