Chapter 3

Pods & Nodes

In this session, we will explore Pods and Nodes.

We will also create a Coffee application Pod.


Introduction

What is a Pod?

A Pod is the basic building block of Kubernetes – the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.


A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service–one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
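Such a file-server-plus-sidecar Pod could be sketched as follows (the image names, paths, and refresh command are illustrative, not part of this course):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-files            # storage shared by both containers
    emptyDir: {}
  containers:
  - name: web                     # serves files from the shared volume
    image: nginx
    volumeMounts:
    - name: shared-files
      mountPath: /usr/share/nginx/html
  - name: sidecar                 # periodically refreshes the shared files
    image: busybox
    command: ["sh", "-c", "while true; do date > /work/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-files
      mountPath: /work
```

Because both containers mount the same emptyDir volume, Kubernetes schedules them onto the same Node and manages them as one unit.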

What is a Node?

A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple Pods, and the Kubernetes Master automatically handles scheduling the Pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.

Create a Pod - Declarative

After completing this session, you will be able to create a Pod declaratively and to log in to a Pod to check services running on other Pods.

So let's get started.

Let's check the running Pods

k8s@k8s-master-01:~$ kubectl get pods
No resources found.
k8s@k8s-master-01:~$

Nothing is running yet.

Let's create one using a YAML file

$ vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: coffee-app
spec:
  containers:
  - image: ansilh/demo-coffee
    name: coffee

Apply the YAML using the kubectl command

$ kubectl apply -f pod.yaml

View status of Pod

Pod status is ContainerCreating

$ kubectl get pods

Output

NAME         READY   STATUS              RESTARTS   AGE
coffee-app   0/1     ContainerCreating   0          4s

Execute kubectl get pods after some time

Now Pod status will change to Running

$ kubectl get pods

Output

NAME         READY   STATUS    RESTARTS   AGE
coffee-app   1/1     Running   0          27s

Now we can see our first Pod

Get the IP address of the Pod

$ kubectl get pods -o wide

Output

NAME         READY   STATUS    RESTARTS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES
coffee-app   1/1     Running   0          2m8s   192.168.1.7   k8s-worker-01   <none>           <none>

Create a new CentOS container

$ vi centos-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: centos-pod
spec:
  containers:
  - image: tutum/centos
    name: centos

Apply the YAML spec

$ kubectl apply -f centos-pod.yaml

Verify the status of Pod

$ kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
centos-pod   0/1     ContainerCreating   0          12s
coffee-app   1/1     Running             0          5m31s

After some time status will change to Running

$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
centos-pod   1/1     Running   0          59s
coffee-app   1/1     Running   0          6m18s

Log in to the CentOS Pod

$ kubectl exec -it centos-pod -- /bin/bash

Verify the Coffee app using curl. Use the Pod IP reported by kubectl get pods -o wide; the IP can change between runs (in the transcript below it is 192.168.1.13).

$ curl -s 192.168.1.13:9090  |grep 'Serving'
<html><head></head><title></title><body><div> <h2>Serving Coffee from</h2><h3>Pod:coffee-app</h3><h3>IP:192.168.1.13</h3><h3>Node:172.16.0.1</h3><img src="data:image/png;base64,
[root@centos-pod /]#

Delete the Pods

$ kubectl delete pod coffee-app centos-pod
pod "coffee-app" deleted
pod "centos-pod" deleted

Make sure no Pods are running

$ kubectl get pods

Create a Pod - Imperative

Execute kubectl command to create a Pod.

$ kubectl run coffee --image=ansilh/demo-coffee --restart=Never
pod/coffee created

Verify Pod status

$ kubectl get pods -o wide
NAME     READY   STATUS              RESTARTS   AGE   IP       NODE            NOMINATED NODE   READINESS GATES
coffee   0/1     ContainerCreating   0          6s    <none>   k8s-worker-01   <none>           <none>
$ kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
coffee   1/1     Running   0          19s   192.168.1.15   k8s-worker-01   <none>           <none>

Start a CentOS container

$ kubectl run centos-pod --image=tutum/centos --restart=Never
pod/centos-pod created

Verify the status of the Pod; it should be Running

$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
centos-pod   1/1     Running   0          25s
coffee       1/1     Running   0          2m10s

Log on to the CentOS Pod

$ kubectl exec -it centos-pod -- /bin/bash
[root@centos-pod /]#

Verify Coffee App status

[root@centos-pod /]# curl -s 192.168.1.15:9090 |grep 'Serving Coffee'
<html><head></head><title></title><body><div> <h2>Serving Coffee from</h2><h3>Pod:coffee</h3><h3>IP:192.168.1.15</h3><h3>Node:172.16.0.1</h3><img src="data:image/png;base64,
[root@centos-pod /]# exit

Delete the Pods

k8s@k8s-master-01:~$ kubectl delete pod coffee centos-pod
pod "coffee" deleted
pod "centos-pod" deleted
k8s@k8s-master-01:~$ kubectl get pods
No resources found.
k8s@k8s-master-01:~$

Nodes

In this session, we will explore the Node details.

List nodes

k8s@k8s-master-01:~$ kubectl get nodes

Output

NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   38h   v1.13.1
k8s-worker-01   Ready    <none>   38h   v1.13.1

Extended listing

$ kubectl get nodes -o wide

Output

NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master-01   Ready    master   38h   v1.13.1   192.168.56.201   <none>        Ubuntu 16.04.5 LTS   4.4.0-131-generic   docker://18.9.0
k8s-worker-01   Ready    <none>   38h   v1.13.1   192.168.56.202   <none>        Ubuntu 16.04.5 LTS   4.4.0-131-generic   docker://18.9.0
k8s@k8s-master-01:~$

Details on a node

$ kubectl describe node k8s-master-01

Output

Name:               k8s-master-01
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-master-01
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.56.201/24
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 31 Dec 2018 02:10:05 +0530
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 01 Jan 2019 17:01:28 +0530   Mon, 31 Dec 2018 02:10:02 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 01 Jan 2019 17:01:28 +0530   Mon, 31 Dec 2018 02:10:02 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 01 Jan 2019 17:01:28 +0530   Mon, 31 Dec 2018 02:10:02 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 01 Jan 2019 17:01:28 +0530   Mon, 31 Dec 2018 22:59:35 +0530   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.56.201
  Hostname:    k8s-master-01
Capacity:
 cpu:                1
 ephemeral-storage:  49732324Ki
 hugepages-2Mi:      0
 memory:             2048168Ki
 pods:               110
Allocatable:
 cpu:                1
 ephemeral-storage:  45833309723
 hugepages-2Mi:      0
 memory:             1945768Ki
 pods:               110
System Info:
 Machine ID:                 96cedf74a821722b0df5ee775c291ea2
 System UUID:                90E04905-218D-4673-A911-9676A65B07C5
 Boot ID:                    14201246-ab82-421e-94f6-ff0d8ad3ba54
 Kernel Version:             4.4.0-131-generic
 OS Image:                   Ubuntu 16.04.5 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.0
 Kubelet Version:            v1.13.1
 Kube-Proxy Version:         v1.13.1
PodCIDR:                     192.168.0.0/24
Non-terminated Pods:         (6 in total)
  Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                     ------------  ----------  ---------------  -------------  ---
  kube-system                calico-node-nkcrd                        250m (25%)    0 (0%)      0 (0%)           0 (0%)         38h
  kube-system                etcd-k8s-master-01                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38h
  kube-system                kube-apiserver-k8s-master-01             250m (25%)    0 (0%)      0 (0%)           0 (0%)         38h
  kube-system                kube-controller-manager-k8s-master-01    200m (20%)    0 (0%)      0 (0%)           0 (0%)         38h
  kube-system                kube-proxy-tzznm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         38h
  kube-system                kube-scheduler-k8s-master-01             100m (10%)    0 (0%)      0 (0%)           0 (0%)         38h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                800m (80%)  0 (0%)
  memory             0 (0%)      0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:              <none>

We will discuss each of these fields in more detail in upcoming sessions. For now, let's discuss the Non-terminated Pods field.

Non-terminated Pods field

  • Namespace : The namespace in which the Pod is running. Pods that we create go to the default namespace by default.
  • Name : Name of the Pod.
  • CPU Requests : How much CPU the Pod requested during startup.
  • CPU Limits : How much CPU the Pod is allowed to use.
  • Memory Requests : How much memory the Pod requested during startup.
  • Memory Limits : How much memory the Pod is allowed to use.

Namespaces

What is a namespace?

We have seen namespaces in Linux, which isolate objects; here the concept is the same but serves a different purpose. Suppose you have two departments in your organization, and both departments have applications that need more fine-grained control. We can use namespaces to separate the workloads of the two departments.

By default, Kubernetes has three namespaces.

List namespace

$ kubectl get ns
NAME          STATUS   AGE
default       Active   39h
kube-public   Active   39h
kube-system   Active   39h

  • default : All Pods that we create manually go to this namespace (there are ways to change this, but for now that is what it is).
  • kube-public : Common workloads can be assigned to this namespace; most of the time no one uses it.
  • kube-system : Kubernetes system Pods run in this namespace.
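Custom namespaces (for example, one per department) can be created with a small manifest; the name dept-a below is hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dept-a      # hypothetical department namespace
```

After applying this manifest with kubectl apply -f, Pods can be placed in the namespace via metadata.namespace in their spec, or listed with kubectl get pods -n dept-a.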

List Pods in kube-system namespace

$ kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-n99tb                       2/2     Running   0          38h
calico-node-nkcrd                       2/2     Running   0          38h
coredns-86c58d9df4-4c22l                1/1     Running   0          39h
coredns-86c58d9df4-b49c2                1/1     Running   0          39h
etcd-k8s-master-01                      1/1     Running   0          39h
kube-apiserver-k8s-master-01            1/1     Running   0          39h
kube-controller-manager-k8s-master-01   1/1     Running   0          39h
kube-proxy-s6hc4                        1/1     Running   0          38h
kube-proxy-tzznm                        1/1     Running   0          39h
kube-scheduler-k8s-master-01            1/1     Running   0          39h

As you can see, there are many Pods running in the kube-system namespace. All these Pods run one or more containers. If you look at the calico-node-n99tb Pod, the READY column says 2/2, which means two containers are running fine in that Pod.

List all resources in a namespace

k8s@k8s-master-01:~$ kubectl get all -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/calico-node-kr5xg                       2/2     Running   0          13m
pod/calico-node-lcpbw                       2/2     Running   0          13m
pod/coredns-86c58d9df4-h8pjr                1/1     Running   6          26m
pod/coredns-86c58d9df4-xj24c                1/1     Running   6          26m
pod/etcd-k8s-master-01                      1/1     Running   0          26m
pod/kube-apiserver-k8s-master-01            1/1     Running   0          26m
pod/kube-controller-manager-k8s-master-01   1/1     Running   0          26m
pod/kube-proxy-fl7rj                        1/1     Running   0          26m
pod/kube-proxy-q6w9l                        1/1     Running   0          26m
pod/kube-scheduler-k8s-master-01            1/1     Running   0          26m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/calico-typha   ClusterIP   172.16.244.140   <none>        5473/TCP        13m
service/kube-dns       ClusterIP   172.16.0.10      <none>        53/UDP,53/TCP   27m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/calico-node   2         2         2       2            2           beta.kubernetes.io/os=linux   13m
daemonset.apps/kube-proxy    2         2         2       2            2           <none>                        27m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-typha   0/0     0            0           13m
deployment.apps/coredns        2/2     2            2           27m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-typha-5fc4874c76   0         0         0       13m
replicaset.apps/coredns-86c58d9df4        2         2         2       26m
k8s@k8s-master-01:~$

Self Healing - Readiness

Readiness Probe

We have seen that our coffee application listens on port 9090. Let's assume that the application is not coming up, but the Pod status shows Running. Everyone will think the application is up, and your entire application stack might be affected because of this.

So here comes the question: “How can I make sure my application is started, not just the Pod?”

Here we can use a field of the Pod spec: the readiness probe.

The official definition of readinessProbe is: “Periodic probe of container service readiness”.

Let's rewrite the Pod specification of the Coffee app and add a readiness probe.

$ vi pod-readiness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: coffee-app
spec:
  containers:
  - image: ansilh/demo-coffee
    name: coffee
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        port: 9090

Apply the YAML

$ kubectl apply -f pod-readiness.yaml
pod/coffee-app created

Verify Pod status

Try to identify the difference: the STATUS becomes Running, but READY stays 0/1 until the readiness probe succeeds.

$ kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
coffee-app   0/1     ContainerCreating   0          3s
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
coffee-app   0/1     Running   0          25s
$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
coffee-app   1/1     Running   0          32s

Delete the Pod

Yes, we can delete objects using the same YAML file we used to create/apply them.

$ kubectl delete -f pod-readiness.yaml
pod "coffee-app" deleted
$

Probe Tuning

failureThreshold     <integer>
  Minimum consecutive failures for the probe to be considered failed after
  having succeeded. Defaults to 3. Minimum value is 1.

initialDelaySeconds  <integer>
  Number of seconds after the container has started before liveness probes
  are initiated. More info:
  https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

periodSeconds        <integer>
  How often (in seconds) to perform the probe. Default to 10 seconds. Minimum
  value is 1.

timeoutSeconds       <integer>
  Number of seconds after which the probe times out. Defaults to 1 second.
  Minimum value is 1. More info:
  https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
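Putting these tuning fields together, a readiness probe for the coffee app might look like the following (the specific values are illustrative):

```yaml
readinessProbe:
  httpGet:
    port: 9090
  initialDelaySeconds: 10   # wait 10s after container start before probing
  periodSeconds: 5          # probe every 5 seconds
  timeoutSeconds: 2         # each probe attempt times out after 2 seconds
  failureThreshold: 3       # 3 consecutive failures mark the container not ready
```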

Self Healing - Liveness

Liveness Probe

Let's assume the application fails after the readiness probe execution completes. Again we are back to service unavailability.

To avoid this, we need a liveness check that performs a periodic health check after the Pod starts running and the readiness probe completes.

Let's rewrite the Pod specification of the Coffee app and add a liveness probe.

$ vi pod-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: coffee-app
spec:
  containers:
  - image: ansilh/demo-coffee
    name: coffee
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        port: 9090
    livenessProbe:
      periodSeconds: 5
      httpGet:
        port: 9090

Create Pod

$ kubectl create -f pod-liveness.yaml
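httpGet is not the only probe handler; probes can also run a command inside the container (exec) or check a TCP port (tcpSocket). A hedged sketch, with an illustrative file path:

```yaml
# exec variant - healthy while the command exits with status 0:
#   livenessProbe:
#     exec:
#       command: ["cat", "/tmp/healthy"]
#     periodSeconds: 5
# tcpSocket variant - healthy while the port accepts TCP connections:
livenessProbe:
  tcpSocket:
    port: 9090
  periodSeconds: 5
```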

Resource Allocation

Limits

We can limit the CPU and Memory usage of a container so that one container cannot exhaust the resources of the Node.

Let's create the coffee Pod again with CPU and Memory limits.

apiVersion: v1
kind: Pod
metadata:
  name: coffee-limits
spec:
  containers:
  - image: ansilh/demo-coffee
    name: coffee
    resources:
      limits:
        cpu: 100m
        memory: 123Mi

The resulting container will be allowed to use 100 millicores and 123 mebibytes (~129 megabytes). Note that the resource keys are lowercase cpu and memory.

CPU

One CPU core is equivalent to 1000m (one thousand millicpu, or one thousand millicores). CPU is always expressed as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
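For example, the following two notations in a container's resources section specify the same CPU quantity:

```yaml
resources:
  limits:
    cpu: 100m       # 100 millicores, i.e. one tenth of a core
    # cpu: "0.1"    # would mean exactly the same amount
```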

Memory

You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

128974848, 129e6, 129M, 123Mi

Mebibyte vs Megabyte

1 Megabyte (MB) = (1000)^2 bytes = 1000000 bytes.
1 Mebibyte (MiB) = (1024)^2 bytes = 1048576 bytes.
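The difference is easy to check with shell arithmetic (a quick illustration, not a kubectl command):

```shell
# 123Mi (mebibytes) in bytes:
echo $((123 * 1024 * 1024))   # 128974848
# 129M (megabytes) in bytes:
echo $((129 * 1000 * 1000))   # 129000000
```

This is why the list above calls 128974848, 129e6, 129M, and 123Mi roughly, but not exactly, the same value.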

Requests

We can request a specific amount of CPU and Memory when the container starts up.

Suppose a Java application needs at least 128MB of memory during startup; we can use a resource request in the Pod spec.

This will help the scheduler select a Node with enough memory.

Requests can be made for CPU as well.

Let's modify the Pod spec and add requests.

apiVersion: v1
kind: Pod
metadata:
  name: coffee-limits
spec:
  containers:
  - image: ansilh/demo-coffee
    name: coffee
    resources:
      requests:
        cpu: 100m
        memory: 123Mi
      limits:
        cpu: 200m
        memory: 244Mi

Extra

Once you complete the training, you can visit the URLs below to understand storage and network limits.

Storage Limit

Network bandwidth usage