K8s Nginx Ingress Handling TLS Traffic And Using Pod Readiness Probes

Posted by vmt1991 on 27 Mar 2021


*Configuring Ingress to handle TLS traffic:

- When a client opens a TLS connection to an Ingress controller, communication between the client and the controller is encrypted, whereas the communication between the controller and the backend pod isn’t.

- You can configure the Nginx Ingress controller to terminate TLS, so the application running in the pod doesn’t need to support TLS itself. To enable the controller to do that, you need to attach a certificate and a private key to the Ingress.

Example: Configure the Nginx Ingress controller to terminate TLS for an nginx pod:

- Create a self-signed certificate and private key for the nginx service’s TLS:

# openssl genrsa -out private.key 2048

# openssl req -new -x509 -key private.key -out mycert.crt -days 960 -subj /CN=myapp.com.vn

- Store the certificate and private key in a Secret resource:

# kubectl create secret tls my-tls-secret --cert=mycert.crt --key=private.key
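
To double-check, you can inspect the Secret; it should be of type kubernetes.io/tls and contain the tls.crt and tls.key entries:

# kubectl get secret my-tls-secret

# kubectl describe secret my-tls-secret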

- Create an nginx pod and a Service of type ClusterIP:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

- Create an Ingress resource that handles TLS traffic using the TLS Secret:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-tls-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - myapp.com.vn
    secretName: my-tls-secret
  rules:
  - host: "myapp.com.vn"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx-service
            port:
              number: 80

- Access the HTTPS nginx service through the Ingress using the TLS Secret:
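
A quick way to test this from a client machine is with curl; the -k flag is needed because the certificate is self-signed, and --resolve (or an /etc/hosts entry) points myapp.com.vn at the Ingress controller’s external IP, shown here as a placeholder:

# curl -k --resolve myapp.com.vn:443:<INGRESS_EXTERNAL_IP> https://myapp.com.vn/

Running curl with -kv should also show that the certificate presented by the controller has CN=myapp.com.vn, confirming that TLS is terminated using the Secret attached to the Ingress.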

*Signaling when a pod is ready to accept connections using readiness probes:

- When you deploy a service to expose pods to external clients, a pod may need time to load configuration or data, or may simply need a while before it is ready to accept connections (especially JVM-based applications). In such cases you don’t want the pod to start receiving requests until the process running in it has fully started and is ready.

- Similar to liveness probes, Kubernetes also lets you define a readiness probe for your pod. The readiness probe is invoked periodically to determine whether the specific pod should receive client requests or not.

- Example: the kubelet performs an HTTP GET request against the web service running inside the pod’s container to determine whether it’s ready.

- Like liveness probes, three types of readiness probes exist (sketched after this list):

+ An Exec probe, where a command is executed inside the container. The container’s readiness is determined by the command’s exit status code.

+ An HTTP GET probe, which sends an HTTP GET request to the container and the HTTP status code of the response determines whether the container is ready or not.

+ A TCP Socket probe, which opens a TCP connection to a specified port of the container. If the connection is established, the container is considered ready.
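
As a rough illustration, each probe type looks like this under a container spec; the command, path, and port values below are placeholders, not part of the example that follows:

# Exec probe: the container is ready when the command exits with status 0
readinessProbe:
  exec:
    command: ["cat", "/var/ready"]

# HTTP GET probe: the container is ready when the response status is 2xx or 3xx
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080

# TCP Socket probe: the container is ready if the connection can be opened
readinessProbe:
  tcpSocket:
    port: 5432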

- When a container is started, Kubernetes can be configured to wait a configurable amount of time before performing the first readiness check. Unlike with liveness probes, a container that fails the readiness check is not killed or restarted. Instead, if a pod’s readiness probe fails, the pod is removed from the Endpoints object of the related exposed Service, so clients connecting to the Service are no longer directed to that pod.

- An important real-life use case for readiness probes: a group of pods runs a service that acts as a frontend for external users, and these pods need to connect to a backend database. If at any point one of the frontend pods experiences connectivity problems and can no longer reach the database, its readiness probe should signal this so that the pod stops receiving client requests through the exposed Service, as sketched below.
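
For example, such a frontend pod could use an Exec readiness probe that runs a connectivity check; check-db-connection.sh is a hypothetical script shipped with the frontend image that exits non-zero when the database is unreachable:

readinessProbe:
  exec:
    # hypothetical script: exit 0 only if the backend database is reachable
    command: ["/opt/app/check-db-connection.sh"]
  initialDelaySeconds: 10
  periodSeconds: 5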

Example: Add an HTTP GET readiness probe to the nginx pods in a ReplicaSet:

- Deploy a ReplicaSet of nginx pods with an httpGet readiness probe and expose them with a Service of type NodePort:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        readinessProbe:
          initialDelaySeconds: 3
          httpGet:
            path: /
            port: 80
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32006
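
Assuming both manifests are saved in a single file, for example nginx-rs.yaml (the filename is arbitrary), apply them and watch the pods pass their first readiness check after the initial delay; the READY column should change from 0/1 to 1/1 once the probe succeeds:

# kubectl apply -f nginx-rs.yaml

# kubectl get pods -l app=nginx -w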


- Check that the Service now has 3 endpoints attached (one per ready pod):
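
For example:

# kubectl get endpoints nginx-service

The ENDPOINTS column should list three pod-IP:80 pairs, one for each ready replica.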


- Open a shell in one of the pods and remove the index file from the nginx web server’s document root:
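
For example, assuming the default document root of the official nginx image (/usr/share/nginx/html) and nginx-rs-xxxxx as a placeholder for one of your pod names:

# kubectl exec -it nginx-rs-xxxxx -- rm /usr/share/nginx/html/index.html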


- Sending an HTTP GET request to this pod now returns status code 404, so its readiness probe starts failing.

- Check the endpoints of the exposed Service again: only 2 remain (Kubernetes has removed endpoint 10.233.92.63 from the Service).
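
To reproduce this from a cluster node (or any machine that can reach the pod network), assuming 10.233.92.63 is the pod whose index file was removed:

# curl -I http://10.233.92.63/

# kubectl get endpoints nginx-service

The curl response should be 404 Not Found, and the ENDPOINTS column should now list only two entries.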

- Log in to the pod again and create a new index file in the nginx web server’s document root:

- Check the endpoints of the exposed Service once more: the count is back to 3 (endpoint 10.233.92.63 was added back to the Service because its readiness probe succeeds again).
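
For example, again assuming the default nginx document root and the same placeholder pod name; the file content is arbitrary:

# kubectl exec -it nginx-rs-xxxxx -- /bin/sh -c 'echo "Hello from nginx-rs" > /usr/share/nginx/html/index.html'

# kubectl get endpoints nginx-service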