TLS Bootstrapping
In this session, we will discuss how a node can join a cluster using TLS bootstrapping.
Create a Token
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
Output
12c12e7eb3a9c3f9255bb74529c6768e
$ echo 12c12e7eb3a9c3f9255bb74529c6768e,kubelet-bootstrap,10001,\"system:bootstrappers\" |sudo tee -a /etc/kubernetes/config/bootstrap-token.conf
12c12e7eb3a9c3f9255bb74529c6768e,kubelet-bootstrap,10001,"system:bootstrappers"
Create a token auth file
$ cat /etc/kubernetes/config/bootstrap-token.conf
12c12e7eb3a9c3f9255bb74529c6768e,kubelet-bootstrap,10001,"system:bootstrappers"
Add the following flags to the API server:
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/config/bootstrap-token.conf \
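If your API server runs as a systemd unit (the unit path below is the one referenced later in this guide; the kube-apiserver binary path is an assumption for illustration), the flags are appended to its ExecStart along the lines of this sketch:
$ sudo vi /etc/systemd/system/kube-apiserver.service
ExecStart=/usr/local/bin/kube-apiserver \
  ...existing flags... \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/etc/kubernetes/config/bootstrap-token.conf \
  ...remaining flags...
$ sudo systemctl daemon-reload
$ sudo systemctl restart kube-apiserver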
Add RBAC policies for node TLS bootstrapping and certificate auto-renewal:
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
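You can confirm that all three bindings were created:
$ kubectl get clusterrolebinding create-csrs-for-bootstrapping auto-approve-csrs-for-group auto-approve-renewals-for-nodes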
Create a bootstrap-kubeconfig file that the kubelet can use:
TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/config/bootstrap-token.conf)
KUBERNETES_PUBLIC_ADDRESS=$(awk '/master/{print $1;exit}' /etc/hosts)
kubectl config --kubeconfig=bootstrap-kubeconfig set-cluster bootstrap --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --certificate-authority=ca.pem
kubectl config --kubeconfig=bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=${TOKEN}
kubectl config --kubeconfig=bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
kubectl config --kubeconfig=bootstrap-kubeconfig use-context bootstrap
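Review the generated file before copying it (credentials are redacted in the output unless you pass --raw):
$ kubectl config view --kubeconfig=bootstrap-kubeconfig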
Copy the bootstrap-kubeconfig to the worker node and execute the remaining steps from the worker node.
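For example, using scp (the user and worker hostname here are placeholders):
$ scp bootstrap-kubeconfig <user>@<worker-node>:~/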
$ sudo swapoff /dev/dm-1 ##<--- select appropriate swap device based on your OS config
Install and start the Docker service.
Once Docker is installed, execute the steps below to prepare it for kubelet integration.
$ sudo vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --bridge=none --iptables=false --ip-masq=false
$ sudo iptables -t nat -F
$ sudo ip link set docker0 down
$ sudo ip link delete docker0
$ sudo systemctl restart docker
Create the kubelet working directory /var/lib/kubelet/ and move the bootstrap-kubeconfig into it.
$ sudo mkdir /var/lib/kubelet/
$ sudo mv bootstrap-kubeconfig /var/lib/kubelet/
$ cat <<EOF |sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
--cert-dir=/var/lib/kubelet/ \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--rotate-certificates=true \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl start kubelet
Now execute kubectl get nodes on the master and verify that the new node is listed.
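From the master you can also inspect the certificate signing requests that were created and auto-approved during the bootstrap:
$ kubectl get csr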
You may configure kube-proxy as a DaemonSet so that it starts automatically once node registration completes.
A bootstrap token is a simple bearer token meant to be used when creating new clusters or joining new nodes to an existing cluster. It was built to support kubeadm, but can be used in other contexts for users who wish to start clusters without kubeadm. It is also built to work, via RBAC policy, with the kubelet TLS bootstrapping system.
You can read more about bootstrap tokens here
Bootstrap tokens take the form abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}.
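For example, a quick sanity check of the token generated later in this session against that pattern:
$ echo "80a6ee.fd219151288b08d8" | grep -E '^[a-z0-9]{6}\.[a-z0-9]{16}$'
80a6ee.fd219151288b08d8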
In this session, we will join a worker node to the cluster using bootstrap tokens. kubeadm uses a bootstrap token to join a node to a cluster; its implementation flow is as follows.
Implementation note: the API server doesn’t have to expose a new and special insecure HTTP endpoint.
(D)DoS concern: Before this flow is secure to use/enable publicly (when not bootstrapping), the API Server must support rate-limiting. There are a couple of ways rate-limiting can be implemented to work for this use-case, but defining the rate-limiting flow in detail here is out of scope. One simple idea is limiting unauthenticated requests to come from clients in RFC1918 ranges.
This ConfigMap is really public. Users don’t need to authenticate to read this ConfigMap. In fact, the client MUST NOT use a bearer token here as we don’t trust this endpoint yet.
The API server returns the ConfigMap with the kubeconfig contents as normal. Extra data items on that ConfigMap contain JWS signatures. kubeadm finds the correct signature based on the token-id part of the token (described below).
kubeadm verifies the JWS and can now trust the server. Further communication is simpler as the CA certificate in the kubeconfig file can be trusted.
You may read more about the proposal here
To support bootstrap-token-based discovery and node joining, we need to make sure the following flags are in place on the API server.
--client-ca-file=/var/lib/kubernetes/ca.pem
--enable-bootstrap-token-auth=true
If not present, add these flags to the /etc/systemd/system/kube-apiserver.service unit file.
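A quick way to check whether the API server flags are already present:
$ grep -E 'client-ca-file|enable-bootstrap-token-auth' /etc/systemd/system/kube-apiserver.service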
Also make sure the following flags are in place on the kube-controller-manager.
--controllers=*,bootstrapsigner,tokencleaner
--experimental-cluster-signing-duration=8760h0m0s
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem
If not present, add these flags to the /etc/systemd/system/kube-controller-manager.service unit file.
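Similarly, check the controller-manager unit file:
$ grep -E 'controllers=|cluster-signing' /etc/systemd/system/kube-controller-manager.service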
{
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver.service
sudo systemctl restart kube-controller-manager.service
}
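Confirm that both services came back up cleanly:
$ sudo systemctl is-active kube-apiserver kube-controller-manager
active
active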
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
$ echo $(openssl rand -hex 3).$(openssl rand -hex 8)
Output
80a6ee.fd219151288b08d8
$ vi bootstrap-token.yaml
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-80a6ee
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token."
  # Token ID and secret. Required.
  token-id: 80a6ee
  token-secret: fd219151288b08d8
  # Expiration. Optional.
  expiration: 2019-12-05T12:00:00Z
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
$ kubectl create -f bootstrap-token.yaml
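Verify that the token secret exists in the kube-system namespace:
$ kubectl -n kube-system get secret bootstrap-token-80a6ee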
Create a cluster-info ConfigMap that clients can download when they need it.
KUBERNETES_MASTER=$(awk '/master/{print $1;exit}' /etc/hosts)
$ kubectl config set-cluster bootstrap \
--kubeconfig=bootstrap-kubeconfig-public \
--server=https://${KUBERNETES_MASTER}:6443 \
--certificate-authority=ca.pem \
--embed-certs=true
$ kubectl -n kube-public create configmap cluster-info \
--from-file=kubeconfig=bootstrap-kubeconfig-public
$ kubectl -n kube-public get configmap cluster-info -o yaml
Allow anonymous access to the cluster-info ConfigMap:
$ kubectl create role anonymous-for-cluster-info --resource=configmaps --resource-name=cluster-info --namespace=kube-public --verb=get,list,watch
$ kubectl create rolebinding anonymous-for-cluster-info-binding --role=anonymous-for-cluster-info --user=system:anonymous --namespace=kube-public
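As a sanity check, the cluster-info ConfigMap should now be readable without any credentials. Assuming anonymous authentication is enabled on the API server (the default), you can fetch it directly; --insecure is used here only because the client does not yet trust the CA at this stage of the flow:
$ curl --insecure https://${KUBERNETES_MASTER}:6443/api/v1/namespaces/kube-public/configmaps/cluster-info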
Create bootstrap-kubeconfig for worker nodes
$ kubectl config set-cluster bootstrap \
--kubeconfig=bootstrap-kubeconfig \
--server=https://${KUBERNETES_MASTER}:6443 \
--certificate-authority=ca.pem \
--embed-certs=true
$ kubectl config set-credentials kubelet-bootstrap \
--kubeconfig=bootstrap-kubeconfig \
--token=80a6ee.fd219151288b08d8
$ kubectl config set-context bootstrap \
--kubeconfig=bootstrap-kubeconfig \
--user=kubelet-bootstrap \
--cluster=bootstrap
$ kubectl config --kubeconfig=bootstrap-kubeconfig use-context bootstrap
Copy the bootstrap-kubeconfig to the worker node and then execute the steps below from the worker node.
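As before, one way to copy the file is scp (user and hostname are placeholders):
$ scp bootstrap-kubeconfig <user>@<worker-node>:~/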
$ sudo swapoff /dev/dm-1 ##<--- select appropriate swap device based on your OS config
Install and start the Docker service.
Once Docker is installed, execute the steps below to prepare it for kubelet integration.
$ sudo vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --bridge=none --iptables=false --ip-masq=false
$ sudo iptables -t nat -F
$ sudo ip link set docker0 down
$ sudo ip link delete docker0
$ sudo systemctl restart docker
Create the kubelet working directory /var/lib/kubelet/ and move the bootstrap-kubeconfig into it.
$ sudo mkdir /var/lib/kubelet/
$ sudo mv bootstrap-kubeconfig /var/lib/kubelet/
$ cat <<EOF |sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
--cert-dir=/var/lib/kubelet/ \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--rotate-certificates=true \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl start kubelet
Now execute kubectl get nodes on the master and verify that the new node is listed.
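On the worker itself you should also find the kubeconfig and the client certificates that bootstrapping wrote into the kubelet directory (exact certificate file names vary by kubelet version):
$ ls -l /var/lib/kubelet/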
You may configure kube-proxy as a DaemonSet so that it starts automatically once node registration completes.