Chapter 11

Deployments

In a Kubernetes cluster we normally do not run workloads as individual Pods. Instead, we use higher-level objects such as Deployments, which create and manage Pods for us.
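
For reference, a Deployment can also be defined declaratively. Below is a minimal manifest sketch (not used in the steps that follow; the name, label, and replica count are illustrative) describing the same kind of Nginx Deployment that we create imperatively in the next section. Applying it with kubectl apply -f would produce an equivalent object.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80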


Nginx Deployment

$ kubectl run nginx --image=nginx

Output

deployment.apps/nginx created
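
Note: on Kubernetes 1.18 and later, kubectl run creates a bare Pod instead of a Deployment, so you may see pod/nginx created rather than the output above. On a newer cluster, create the Deployment explicitly (be aware that this command labels the Pods with app=nginx instead of run=nginx, so adjust the label used later in this chapter accordingly):

$ kubectl create deployment nginx --image=nginx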

Verify that the Pod is running

$ kubectl get pods

Output

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-9xsms   1/1     Running   0          27s

Notice that the Pod name is not simply nginx: it has the form <deployment>-<pod-template-hash>-<random suffix>, because the Pod was created by a ReplicaSet that the Deployment manages.
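
We can confirm this chain of ownership by listing the Deployment and its ReplicaSet; the hash in the ReplicaSet name matches the pod-template-hash portion of the Pod name:

$ kubectl get deployments
$ kubectl get replicasets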

Let's delete the Pod and see what happens.

$ kubectl delete pod nginx-7cdbd8cdc9-9xsms

Output

pod "nginx-7cdbd8cdc9-9xsms" deleted

Verify Pod status

$ kubectl get pods

Output

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-vfbn8   1/1     Running   0          81s

A new Pod has been created automatically! The Deployment's ReplicaSet noticed that the actual number of Pods dropped below the desired count and replaced the deleted Pod.
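
The Deployment continuously reconciles the actual state with the desired state. You can observe this self-healing live by watching the Pods in one terminal while deleting a Pod in another:

$ kubectl get pods --watch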

Expose Nginx Deployment

We already know how to expose a Pod using a Service.

The Service endpoints are created based on the labels of the Pods.

Here is how we can create a Service that lets us access Nginx from outside the cluster.

First, let's check the labels of the Pod.

$ kubectl get pod nginx-7cdbd8cdc9-vfbn8 --show-labels

Output

NAME                     READY   STATUS    RESTARTS   AGE     LABELS
nginx-7cdbd8cdc9-vfbn8   1/1     Running   0          7m19s   pod-template-hash=7cdbd8cdc9,run=nginx

As you can see, one of the labels is run=nginx.

Next, write a Service spec that uses run: nginx as its selector.

$ vi nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx-svc
  name: nginx-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer

This Service will select Pods with the label run=nginx.
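
Instead of writing the spec by hand, a similar manifest can be generated from the Deployment itself (the exact flags may vary slightly between kubectl versions):

$ kubectl expose deployment nginx --name=nginx-svc --port=80 --target-port=80 --type=LoadBalancer --dry-run=client -o yaml > nginx-svc.yaml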

$ kubectl apply -f nginx-svc.yaml

Verify the service details

$ kubectl get svc

Output

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      172.168.0.1      <none>           443/TCP        103m
nginx-svc    LoadBalancer   172.168.47.182   192.168.31.201   80:32369/TCP   3s

Now we can reach the default Nginx welcome page at the external IP 192.168.31.201.
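
A quick way to verify this from the command line (the external IP comes from the kubectl get svc output above):

$ curl http://192.168.31.201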

Scaling

When load increases, we can scale out the Pods by scaling the Deployment.

$ kubectl scale deployment --replicas=3 nginx
$ kubectl get pods

Output

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-4lhh4   1/1     Running   0          6s
nginx-7cdbd8cdc9-mxhnl   1/1     Running   0          6s
nginx-7cdbd8cdc9-vfbn8   1/1     Running   0          14m
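
We can also check the replica counts at the Deployment level; the READY column should report 3/3 once all Pods are up:

$ kubectl get deployment nginx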

Let's look at the endpoints of the Service.

$ kubectl get ep nginx-svc

Output

NAME        ENDPOINTS                                         AGE
nginx-svc   10.10.36.201:80,10.10.36.202:80,10.10.36.203:80   5m40s

The endpoints are mapped automatically: when we scale the Deployment, the newly created Pods carry the same label, which matches the Service selector.
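
To confirm, list the Pod IPs with a label selector and compare them with the endpoint addresses shown above:

$ kubectl get pods -l run=nginx -o wide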