Installation
- In this chapter we will install VirtualBox and set up networking.
- We will learn how to install and configure Docker.
- We will also install a two-node Kubernetes cluster using kubeadm.
Download the latest VBox installer and VBox Extension Pack
The installation procedure is available at the link below.
DHCP should be disabled on this network.
Internet access is needed on all VMs (for downloading the required binaries).
Make sure you can see the NAT network (if not, create one; see the sketch after the table below).
VBox Host Networking:

| Network | Subnet |
|---|---|
| Host-Only | 192.168.56.0/24 |
| NAT | VBox defined |
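If you prefer checking the networks from the command line, here is a minimal sketch (it assumes the host-only interface is named vboxnet0; names differ per host):
VBoxManage list natnets                                                      # NAT networks known to VirtualBox
VBoxManage list hostonlyifs                                                  # host-only interfaces and their subnets
VBoxManage hostonlyif create                                                 # create a host-only interface if none exists
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
VBoxManage dhcpserver remove --ifname vboxnet0                               # DHCP must stay disabled on this network (errors if no DHCP server exists)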
Create a template VM named k8s-master-01, which will be used to clone all needed VMs.
Store it under DRIVE_NAME:/VMs/ (replace DRIVE_NAME with a mount point or drive name).
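If you prefer the CLI over the GUI for creating the template VM, a rough sketch is below (the VM still needs memory, disk, and network adapters configured afterwards; Ubuntu_64 is an assumption about the OS type):
VBoxManage createvm --name "k8s-master-01" --groups "/K8S Training" --basefolder "${DRIVE_NAME}/VMs" --ostype Ubuntu_64 --register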
By default, NAT will be first in the network adapter order; change it so that Host-Only is the first interface and NAT is the second.
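A sketch of fixing the adapter order from the CLI while the VM is powered off (it assumes the host-only interface is called vboxnet0 and that plain NAT is used for the second adapter):
VBoxManage modifyvm "k8s-master-01" --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat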
Install Ubuntu on this VM and accept all default options.
When asked, provide the user name k8s and set a password.
Make sure to select the NAT interface as primary during installation.
Select the following on the Software Selection screen:
- Manual Software Selection
- OpenSSH Server
After the restart, make sure the NAT interface is up.
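A quick way to confirm the interface came up and got an address (the enp0s3/enp0s8 naming is an assumption; it depends on the adapter order):
$ ip addr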
Log in to the template VM as user k8s and execute the commands below to install the latest patches.
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo poweroff
You may use the VirtualBox GUI to create a full clone (preferred), or use the commands below to clone the VM (execute them at your own risk ;)).
Replace DRIVE_NAME with a drive that has enough free space (~50GB).
On Windows:
set DRIVE_NAME=D
cd C:\Program Files\Oracle\VirtualBox
VBoxManage.exe clonevm "k8s-master-01" --name "k8s-worker-01" --groups "/K8S Training" --basefolder "%DRIVE_NAME%:\VMs" --register
On Linux/macOS:
DRIVE_NAME=${HOME}
VBoxManage clonevm "k8s-master-01" --name "k8s-worker-01" --groups "/K8S Training" --basefolder "${DRIVE_NAME}/VMs" --register
$ sudo systemctl stop networking
$ sudo vi /etc/network/interfaces
auto enp0s3 #<-- Make sure to use HostOnly interface (it can also be enp0s8)
iface enp0s3 inet static
address 192.168.56.X #<--- Replace X with corresponding IP octet
netmask 255.255.255.0
$ sudo systemctl restart networking
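A quick check that the static address is applied (a sketch; the interface may be enp0s3 or enp0s8 on your VM):
$ ip addr show enp0s3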
You may access the VM over SSH using this IP and complete all remaining steps from that session (for copy-paste :)).
$ HOST_NAME=<host name> # <--- Replace <host name> with corresponding one
$ sudo hostnamectl set-hostname ${HOST_NAME} --static --transient
$ sudo /bin/rm -v /etc/ssh/ssh_host_*
$ sudo dpkg-reconfigure openssh-server
$ sudo vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:HOST_NAME #<--- Append HostName to have unique iscsi iqn
$ sudo rm /etc/machine-id /var/lib/dbus/machine-id
$ sudo systemd-machine-id-setup
Remove the 127.0.1.1 entry from /etc/hosts.
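A one-liner sketch for this (GNU sed; keeps a .bak backup of /etc/hosts):
$ sudo sed -i.bak '/^127\.0\.1\.1/d' /etc/hosts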
Add the needed entries to /etc/hosts:
$ sudo bash -c "cat <<EOF >>/etc/hosts
192.168.56.201 k8s-master-01
192.168.56.202 k8s-worker-01
EOF"
$ sudo bash -c "cat <<EOF >>/etc/resolvconf/resolv.conf.d/tail
nameserver 8.8.8.8
EOF"
Disable swap (kubelet requires swap to be off) by commenting out the swap entry in /etc/fstab:
$ sudo vi /etc/fstab
# /dev/mapper/k8s--master--01--vg-swap_1 none swap sw 0 0
$ sudo reboot
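After the reboot, a quick check that swap is really off; no output from swapon means no active swap:
$ swapon --show
$ free -h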
Do a ping test to make sure both VMs can reach each other.
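For example, using the addresses added to /etc/hosts earlier:
$ ping -c 3 192.168.56.202    # from k8s-master-01
$ ping -c 3 192.168.56.201    # from k8s-worker-01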
In this session, we will install and set up Docker in a simple way on Ubuntu 16.04.
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-cache policy docker-ce
docker-ce:
Installed: (none)
Candidate: 5:18.09.0~3-0~ubuntu-xenial
Version table:
5:18.09.0~3-0~ubuntu-xenial 500
...
$ sudo apt-get install -y docker-ce
$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-12-26 17:14:59 UTC; 4min 27s ago
Docs: https://docs.docker.com
Main PID: 1191 (dockerd)
Tasks: 10
Memory: 76.4M
CPU: 625ms
CGroup: /system.slice/docker.service
└─1191 /usr/bin/dockerd -H unix://
...
$ sudo usermod -aG docker ${USER}
Log out of the session and log in again to refresh the group membership.
$ docker info |grep 'Server Version'
Server Version: 18.09.0
$ curl -O https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz
$ tar -xvf go1.11.4.linux-amd64.tar.gz
$ sudo mv go /usr/local/
$ cat <<EOF >>~/.profile
export GOPATH=\$HOME/work
export PATH=\$PATH:/usr/local/go/bin:\$GOPATH/bin
EOF
$ mkdir $HOME/work
$ source ~/.profile
$ go version
go version go1.11.4 linux/amd64
$ mkdir -p $GOPATH/src/github.com/ansilh/golang-demo
$ vi $GOPATH/src/github.com/ansilh/golang-demo/main.go
package main

import "fmt"

func main() {
	fmt.Println("Hello World.!")
}
$ go install github.com/ansilh/golang-demo
$ golang-demo
Hello World.!
$ mkdir -p ${GOPATH}/src/github.com/ansilh/demo-webapp
$ vi ${GOPATH}/src/github.com/ansilh/demo-webapp/demo-webapp.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func demoDefault(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "404 - Page not found - This is a dummy default backend") // send data to client side
}

func main() {
	http.HandleFunc("/", demoDefault)        // set router
	err := http.ListenAndServe(":9090", nil) // set listen port
	if err != nil {
		log.Fatal("ListenAndServe: ", err)
	}
}
$ cd $GOPATH/src/github.com/ansilh/demo-webapp
$ CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -ldflags="-w -s" -o $GOPATH/bin/demo-webapp
$ demo-webapp
Open a browser and check that you get a response at IP:9090. If you see the output "404 - Page not found - This is a dummy default backend", the program is working.
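You can also check from a terminal on your laptop; a sketch assuming the VM's host-only IP is 192.168.56.201:
$ curl http://192.168.56.201:9090/
404 - Page not found - This is a dummy default backend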
Press Ctrl+c to terminate the program
Create a Docker Hub account
$ mkdir ~/demo-webapp
$ cp $GOPATH/bin/demo-webapp ~/demo-webapp/
$ cd ~/demo-webapp/
$ vi Dockerfile
FROM scratch
LABEL maintainer="Ansil H"
LABEL email="ansilh@gmail.com"
COPY demo-webapp /
CMD ["/demo-webapp"]
$ sudo docker build -t <docker login name>/demo-webapp .
Eg:-
$ sudo docker build -t ansilh/demo-webapp .
$ docker login
$ docker push <docker login name>/demo-webapp
Eg:-
$ docker push ansilh/demo-webapp
Congratulations! The image you built is now available in Docker Hub, and we can use it to run containers in upcoming sessions.
$ docker run -p 80:9090 ansilh/demo-webapp       # runs in the foreground; Ctrl+c stops it
$ docker run -d -p 80:9090 ansilh/demo-webapp    # -d runs the container detached in the background
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c8364e0d031 ansilh/demo-webapp "/demo-webapp" 11 seconds ago Up 10 seconds 0.0.0.0:80->9090/tcp zen_gauss
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c8364e0d031 ansilh/demo-webapp "/demo-webapp" 2 minutes ago Up 2 minutes 0.0.0.0:80->9090/tcp zen_gauss
acb01851c20a ansilh/demo-webapp "/demo-webapp" 2 minutes ago Exited (2) 2 minutes ago condescending_antonelli
$ docker stats zen_gauss
$ docker stop zen_gauss
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ansilh/demo-webapp latest b7c5e17ae85e 8 minutes ago 4.81MB
$ docker rm zen_gauss
$ docker rmi ansilh/demo-webapp
Verify that the MAC address and the product_uuid are unique on every node:
$ ip link          # or ifconfig -a
$ sudo cat /sys/class/dmi/id/product_uuid
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |sudo apt-key add -
$ cat <<EOF |sudo tee -a /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
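A quick sanity check that the tools are installed (a sketch; your versions will differ):
$ kubeadm version -o short
$ kubectl version --client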
Repeat the same steps on the worker node.
$ sudo kubeadm init --apiserver-advertise-address=192.168.56.201 --pod-network-cidr=10.10.0.0/16 --service-cidr=192.168.10.0/24
To start using kubectl as a regular user, run:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.201:6443
KubeDNS is running at https://192.168.56.201:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Move to the next session to deploy the network plugin.
$ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
$ wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Edit calico.yaml and change the CALICO_IPV4POOL_CIDR value to 10.10.0.0/16:
- name: CALICO_IPV4POOL_CIDR
value: "10.10.0.0/16"
Also add name: IP_AUTODETECTION_METHOD with value: "can-reach=192.168.56.1" under the env section of the calico/node container (this IP should be the host-only network IP on your laptop):
image: quay.io/calico/node:v3.3.2
env:
- name: IP_AUTODETECTION_METHOD
value: "can-reach=192.168.56.1"
...
$ kubectl apply -f calico.yaml
The READY column should show the same value on both sides of the / and the Pod STATUS should be Running.
$ kubectl get pods -n kube-system |nl
1 NAME READY STATUS RESTARTS AGE
2 calico-node-2pwv9 2/2 Running 0 20m
3 coredns-86c58d9df4-d9q2l 1/1 Running 0 21m
4 coredns-86c58d9df4-rwv7r 1/1 Running 0 21m
5 etcd-k8s-master-01 1/1 Running 0 20m
6 kube-apiserver-k8s-master-01 1/1 Running 0 20m
7 kube-controller-manager-k8s-master-01 1/1 Running 0 20m
8 kube-proxy-m6m9n 1/1 Running 0 21m
9 kube-scheduler-k8s-master-01 1/1 Running 0 20m
Contact the trainer if the output is not as expected after a few minutes (~3-4 mins).
$ echo sha256:$(openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1)
$ kubeadm token list |grep bootstra |awk '{print $1}'
$ sudo kubeadm join 192.168.56.201:6443 --token <token> --discovery-token-ca-cert-hash <discovery hash>
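Back on the master, the worker should now appear in the node list (it may stay NotReady for a minute or two while the Calico pod starts):
$ kubectl get nodes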
$ kubectl get pods -n kube-system |nl
1 NAME READY STATUS RESTARTS AGE
2 calico-node-2pwv9 2/2 Running 0 20m
3 calico-node-hwnfh 2/2 Running 0 19m
4 coredns-86c58d9df4-d9q2l 1/1 Running 0 21m
5 coredns-86c58d9df4-rwv7r 1/1 Running 0 21m
6 etcd-k8s-master-01 1/1 Running 0 20m
7 kube-apiserver-k8s-master-01 1/1 Running 0 20m
8 kube-controller-manager-k8s-master-01 1/1 Running 0 20m
9 kube-proxy-m6m9n 1/1 Running 0 21m
10 kube-proxy-shwgp 1/1 Running 0 19m
11 kube-scheduler-k8s-master-01 1/1 Running 0 20m