Chapter 2

Installation

  • In this chapter we will install VirtualBox and set up networking.
  • We will learn how to install and configure Docker.
  • We will also install a two-node Kubernetes cluster using kubeadm.

Subsections of Installation

VirtualBox Network Configuration

  • Create a Host-Only network (default will be 192.168.56.0/24)
    • Open VirtualBox
    • Go to the menu and navigate to File -> Host Network Manager
    • Click “Create”. This will create a Host-Only network.

DHCP should be disabled on this network.
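If you prefer the command line, the Host-Only interface and its DHCP server can also be managed with VBoxManage; a sketch (the interface name vboxnet0 is an assumption - the create step prints the actual name it assigned):

```shell
# Create a Host-Only interface (prints the created name, e.g. vboxnet0)
VBoxManage hostonlyif create
# Assign the host side of the 192.168.56.0/24 network
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
# Remove the DHCP server attached to that interface
VBoxManage dhcpserver remove --ifname vboxnet0
```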

Internet access is needed on all VMs (for downloading needed binaries).

Make sure you can see the NAT network (if not, create one).

VBox Host Networking:
  • Host-Only: 192.168.56.0/24
  • NAT: VBox defined

Ubuntu 16.04 Installation

Create a template VM which will be used to clone all needed VMs

  • You need at least 50GB free space to host all VMs
  • All VMs will be placed in a directory called DRIVE_NAME:/VMs/ (replace DRIVE_NAME with a mount point or drive name; don’t create these directories manually now!)
  • Install Ubuntu 16.04 with latest patches
  • VM configuration
    • VM Name : k8s-master-01
    • Memory : 2 GB
    • CPU : 2
    • Disk : 100GB
    • HostOnly interface : 1 (ref. step 1).
    • NAT network interface : 1
Warning

By default, NAT will be first in the network adapter order; change it. The NAT interface should be the second interface and Host-Only should be the first one.
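The adapter order can also be fixed from the command line while the template VM is powered off; a sketch, assuming the default Host-Only interface name vboxnet0:

```shell
# nic1 becomes Host-Only, nic2 becomes NAT (run while the VM is powered off)
VBoxManage modifyvm "k8s-master-01" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "k8s-master-01" --nic2 nat
```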

  • Install Ubuntu on this VM and go ahead with all default options

  • When asked, provide the user name k8s and set a password


  • Make sure to select the NAT interface as primary during installation.

  • Select the below in the Software Selection screen
    • Manual Software Selection
    • OpenSSH Server

  • After restart, make sure the NAT interface is up

  • Login to the template VM with user k8s and execute the commands below to install the latest patches.

$ sudo apt-get update
$ sudo apt-get upgrade
  • Poweroff template VM
$ sudo poweroff

Clone VM

You may use the VirtualBox GUI to create a full clone - preferred. You can also use the commands below to clone a VM - execute them at your own risk ;)

  • Open CMD and execute the below commands to create all needed VMs. You can replace the value of DRIVE_NAME with a drive that has enough free space (~50GB)
  • Windows
 set DRIVE_NAME=D
 cd C:\Program Files\Oracle\VirtualBox
 VBoxManage.exe clonevm "k8s-master-01" --name "k8s-worker-01" --groups "/K8S Training" --basefolder "%DRIVE_NAME%:\VMs" --register
  • Mac or Linux (Need to test)
 DRIVE_NAME=${HOME}
 VBoxManage clonevm "k8s-master-01" --name "k8s-worker-01" --groups "/K8S Training" --basefolder "${DRIVE_NAME}/VMs" --register
Start the VMs one by one and perform the steps below
Execute below steps on both master and worker nodes
  • Assign IP address and make sure it comes up at boot time.
$ sudo systemctl stop networking
$ sudo vi /etc/network/interfaces
auto enp0s3 #<-- Make sure to use HostOnly interface (it can also be enp0s8)
iface enp0s3 inet static
    address 192.168.56.X #<--- Replace X with corresponding IP octet
    netmask 255.255.255.0
$ sudo systemctl restart networking
Note

You may access the VM via SSH using this IP and complete all remaining steps from that session (for copy-paste :) )

  • Change Host name
Execute below steps only on worker node
$ HOST_NAME=<host name> # <--- Replace <host name> with corresponding one
$ sudo hostnamectl set-hostname ${HOST_NAME} --static --transient
  • Regenerate SSH Keys
$ sudo /bin/rm -v /etc/ssh/ssh_host_*
$ sudo dpkg-reconfigure openssh-server
  • Change iSCSI initiator IQN
$ sudo vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:HOST_NAME  #<--- Append HostName to have unique iscsi iqn
  • Change Machine UUID
$ sudo rm /etc/machine-id /var/lib/dbus/machine-id
$ sudo systemd-machine-id-setup
Execute below steps on both master and worker nodes
  • Remove 127.0.1.1 entry from /etc/hosts
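The 127.0.1.1 line can be removed with a one-liner instead of editing the file by hand; a sketch using sed:

```shell
# Delete any line starting with 127.0.1.1 from /etc/hosts
sudo sed -i '/^127\.0\.1\.1/d' /etc/hosts
```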

  • Add needed entries in /etc/hosts

$ sudo bash -c  "cat <<EOF >>/etc/hosts
192.168.56.201 k8s-master-01
192.168.56.202 k8s-worker-01
EOF"
  • Add a public DNS server in case the local one is not responding over NAT
$ sudo bash -c  "cat <<EOF >>/etc/resolvconf/resolv.conf.d/tail
nameserver 8.8.8.8
EOF"
  • Disable swap by commenting out swap_1 LV
$ sudo vi /etc/fstab
# /dev/mapper/k8s--master--01--vg-swap_1 none            swap    sw              0       0
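The same edit can be scripted, and swap can be turned off immediately without waiting for the reboot; a sketch (the LV name in your fstab may differ):

```shell
# Comment out any uncommented swap line in /etc/fstab
sudo sed -i '/\sswap\s/s/^[^#]/#&/' /etc/fstab
# Turn off swap for the running system as well
sudo swapoff -a
```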
  • Reboot VMs
$ sudo reboot
Note

Do a ping test to make sure both VMs can reach each other.

Install Docker

In this session, we will install and set up Docker on Ubuntu 16.04 in a simple and easy way.

  • Add gpg key to aptitude
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  • Add repository
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  • Refresh repository
$ sudo apt-get update
  • Verify whether docker is available in the repo
$ sudo apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 5:18.09.0~3-0~ubuntu-xenial
  Version table:
     5:18.09.0~3-0~ubuntu-xenial 500
...
  • Install docker
$ sudo apt-get install -y docker-ce
  • Make sure docker is running
$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-12-26 17:14:59 UTC; 4min 27s ago
     Docs: https://docs.docker.com
 Main PID: 1191 (dockerd)
    Tasks: 10
   Memory: 76.4M
      CPU: 625ms
   CGroup: /system.slice/docker.service
           └─1191 /usr/bin/dockerd -H unix://
...
  • Add user to docker group so that this user can execute docker commands.
$ sudo usermod -aG docker ${USER}
Info

Log out of the session and log in again to refresh the group membership.

  • Verify docker by executing info command.
$ docker info |grep 'Server Version'
Server Version: 18.09.0

Setup Golang

  • Download Golang tarball
$ curl -O https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz
  • Extract the contents
$ tar -xvf go1.11.4.linux-amd64.tar.gz
  • Move the contents to /usr/local directory
$ sudo mv go /usr/local/
  • Add the environment variable GOPATH to .profile
$ cat <<EOF >>~/.profile
export GOPATH=\$HOME/work
export PATH=\$PATH:/usr/local/go/bin:\$GOPATH/bin
EOF
  • Create the work directory
$ mkdir $HOME/work
  • Load the profile
$ source ~/.profile
  • Verify Golang setup
$ go version
go version go1.11.4 linux/amd64
  • Create a directory tree to map to a github repository
$ mkdir -p $GOPATH/src/github.com/ansilh/golang-demo
  • Create a hello world golang program
$ vi $GOPATH/src/github.com/ansilh/golang-demo/main.go
  • Paste below code
package main

import "fmt"

func main() {
	fmt.Println("Hello World.!")
}
  • Build and install the program
$ go install github.com/ansilh/golang-demo
  • Execute the program to see the output
$ golang-demo
Hello World.!

Build a Demo WebApp

  • Create a directory for the demo app.
$ mkdir -p ${GOPATH}/src/github.com/ansilh/demo-webapp
  • Create demo-webapp.go file
$ vi ${GOPATH}/src/github.com/ansilh/demo-webapp/demo-webapp.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func demoDefault(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "404 - Page not found - This is a dummy default backend") // send data to client side
}

func main() {
	http.HandleFunc("/", demoDefault) // set router
	err := http.ListenAndServe(":9090", nil) // set listen port
	if err != nil {
		log.Fatal("ListenAndServe: ", err)
	}
}
  • Build a static binary
$ cd $GOPATH/src/github.com/ansilh/demo-webapp
$ CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -ldflags="-w -s" -o $GOPATH/bin/demo-webapp
  • Execute the program
$ demo-webapp

Open a browser and check whether you can see the response at IP:9090. The output “404 - Page not found - This is a dummy default backend” indicates that the program is working.

Press Ctrl+c to terminate the program
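If the VM has no browser, the same check can be done from another SSH session with curl; a sketch (replace 192.168.56.201 with your VM's IP):

```shell
# Expect the dummy 404 text in the body and an HTTP 200 status code
curl -s http://192.168.56.201:9090/
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.56.201:9090/
```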

Build a Docker image

Create a Docker Hub account

  • Let’s create a directory to store the Dockerfile
$ mkdir ~/demo-webapp
  • Copy the pre-built program
$ cp $GOPATH/bin/demo-webapp ~/demo-webapp/
  • Create a Dockerfile.
$ cd ~/demo-webapp/
$ vi Dockerfile
FROM scratch
LABEL maintainer="Ansil H"
LABEL email="ansilh@gmail.com"
COPY demo-webapp /
CMD ["/demo-webapp"]
  • Build the docker image
$ sudo docker build -t <docker login name>/demo-webapp .
Eg:-
$ sudo docker build -t ansilh/demo-webapp .
  • Login to Docker Hub using your credentials
$ docker login
  • Push image to Docker hub
$ docker push <docker login name>/demo-webapp
Eg:-
$ docker push ansilh/demo-webapp

Congratulations! The image you built is now available in Docker Hub, and we can use it to run containers in upcoming sessions.

Docker - Container management

Start a Container

  • Here we map port 80 of the host to port 9090 of the container
  • Verify the application from a browser
  • Press Ctrl+c to exit the container
$ docker run -p 80:9090 ansilh/demo-webapp
  • Start a Container in detach mode
$ docker run -d -p 80:9090 ansilh/demo-webapp
  • List running containers
$ docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                  NAMES
4c8364e0d031        ansilh/demo-webapp   "/demo-webapp"      11 seconds ago      Up 10 seconds       0.0.0.0:80->9090/tcp   zen_gauss
  • List all containers including stopped containers
$ docker ps -a
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS                     PORTS                  NAMES
4c8364e0d031        ansilh/demo-webapp   "/demo-webapp"      2 minutes ago       Up 2 minutes               0.0.0.0:80->9090/tcp   zen_gauss
acb01851c20a        ansilh/demo-webapp   "/demo-webapp"      2 minutes ago       Exited (2) 2 minutes ago                          condescending_antonelli
  • List resource usage (Press Ctrl+c to exit)
$ docker stats zen_gauss
  • Stop Container
$ docker stop zen_gauss
  • List images
$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
ansilh/demo-webapp   latest              b7c5e17ae85e        8 minutes ago       4.81MB
  • Remove containers
$ docker rm zen_gauss
  • Delete images
$ docker rmi ansilh/demo-webapp

Install kubeadm

Note

Verify the MAC address and product_uuid are unique for every node (ip link or ifconfig -a and sudo cat /sys/class/dmi/id/product_uuid)

  • Install prerequisites
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
  • Add gpg key for apt
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |sudo apt-key add -
  • Add apt repository
$ cat <<EOF |sudo tee -a /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
  • Install kubelet , kubeadm and kubectl
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl

Repeat the same steps on worker node

Deploy master Node

  • Initialize kubeadm with pod IP range
$ sudo kubeadm init --apiserver-advertise-address=192.168.56.201 --pod-network-cidr=10.10.0.0/16  --service-cidr=192.168.10.0/24
  • Configure kubectl
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Verify master node status
$ kubectl cluster-info
  • Output will be like below
Kubernetes master is running at https://192.168.56.201:6443
KubeDNS is running at https://192.168.56.201:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Info

Move to next session to deploy network plugin.

Deploy Network Plugin - Calico

  • Apply RBAC rules (more about RBAC will be discussed later)
$ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
  • Download Calico deployment YAML
$ wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
  • Edit CALICO_IPV4POOL_CIDR value to 10.10.0.0/16
- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"
  • Add name: IP_AUTODETECTION_METHOD & value: "can-reach=192.168.56.1" (this IP should be the host-only network IP on your laptop)
...
image: quay.io/calico/node:v3.3.2
env:
  - name: IP_AUTODETECTION_METHOD
    value: "can-reach=192.168.56.1"
...
  • Apply Deployment
$ kubectl apply -f calico.yaml
  • Make sure the READY column shows the same value on the left and right side of the / and the Pod STATUS is Running
$ kubectl get pods -n kube-system |nl
1  NAME                                    READY   STATUS    RESTARTS   AGE
2  calico-node-2pwv9                       2/2     Running   0          20m
3  coredns-86c58d9df4-d9q2l                1/1     Running   0          21m
4  coredns-86c58d9df4-rwv7r                1/1     Running   0          21m
5  etcd-k8s-master-01                      1/1     Running   0          20m
6  kube-apiserver-k8s-master-01            1/1     Running   0          20m
7  kube-controller-manager-k8s-master-01   1/1     Running   0          20m
8  kube-proxy-m6m9n                        1/1     Running   0          21m
9  kube-scheduler-k8s-master-01            1/1     Running   0          20m
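Instead of re-running kubectl get pods, you can block until everything in kube-system reports Ready; a sketch using kubectl wait (available in recent kubectl versions):

```shell
# Wait up to 5 minutes for all kube-system Pods to become Ready
kubectl wait --namespace kube-system --for=condition=Ready pods --all --timeout=300s
```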
Tip

Contact the trainer if the output is not as expected after a few minutes (~3-4 mins).

Add worker node to cluster

  • Get the discovery token CA cert hash from the Master node.
$ echo sha256:$(openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1)
  • Get node join token from Master node.
$ kubeadm token list |grep bootstra |awk '{print $1}'
  • Execute kubeadm command to add the Worker to cluster
$ sudo kubeadm join 192.168.56.201:6443 --token <token> --discovery-token-ca-cert-hash <discovery hash>
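Alternatively, the master can print a ready-made join command containing both the token and the discovery hash; a sketch using kubeadm token create:

```shell
# Run on the master; paste the printed command on the worker, prefixed with sudo
kubeadm token create --print-join-command
```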
  • Verify system Pod status
$ kubectl get pods -n kube-system |nl
  • Output
 1  NAME                                    READY   STATUS    RESTARTS   AGE
 2  calico-node-2pwv9                       2/2     Running   0          20m
 3  calico-node-hwnfh                       2/2     Running   0          19m
 4  coredns-86c58d9df4-d9q2l                1/1     Running   0          21m
 5  coredns-86c58d9df4-rwv7r                1/1     Running   0          21m
 6  etcd-k8s-master-01                      1/1     Running   0          20m
 7  kube-apiserver-k8s-master-01            1/1     Running   0          20m
 8  kube-controller-manager-k8s-master-01   1/1     Running   0          20m
 9  kube-proxy-m6m9n                        1/1     Running   0          21m
10  kube-proxy-shwgp                        1/1     Running   0          19m
11  kube-scheduler-k8s-master-01            1/1     Running   0          20m