Accounts and RBAC
In this chapter, we will discuss how Kubernetes uses accounts and how to control resource access using RBAC (Role-Based Access Control).
In the RBAC API, a role contains rules that represent a set of permissions. Permissions are purely additive (there are no “deny” rules). A role can be defined within a namespace with a Role, or cluster-wide with a ClusterRole.
A ClusterRole can be used to grant the same permissions as a Role, but because they are cluster-scoped, they can also be used to grant access to:
- cluster-scoped resources (like nodes)
- non-resource endpoints (like /healthz)
- namespaced resources (like Pods) across all namespaces
A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted. Permissions can be granted within a namespace with a RoleBinding.
A ClusterRoleBinding may be used to grant permissions at the cluster level and in all namespaces.
A RoleBinding may also reference a ClusterRole to grant the permissions to namespaced resources defined in the ClusterRole within the RoleBinding’s namespace. This allows administrators to define a set of common roles for the entire cluster, then reuse them within multiple namespaces.
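For illustration, here is a minimal sketch of such a binding (the user dave, the ClusterRole secret-reader, and the namespace development are hypothetical): the RoleBinding grants the cluster-defined permissions only within its own namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-secrets          # hypothetical binding name
  namespace: development      # permissions apply only in this namespace
subjects:
- kind: User
  name: dave                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole           # a cluster-wide role, reused here
  name: secret-reader         # hypothetical ClusterRole
  apiGroup: rbac.authorization.k8s.io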
You can read more about RBAC, Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings in the Kubernetes documentation.
For this exercise we will use the following user and namespace:
User Name: podview
Namespace: monitoring
$ kubectl create ns monitoring
$ kubectl get ns
Output
NAME STATUS AGE
default Active 19h
kube-public Active 19h
kube-system Active 19h
monitoring Active 9s
cat <<EOF >podview-csr.json
{
  "CN": "podview",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IN",
      "L": "Bangalore",
      "OU": "Kubernetes from Scratch",
      "ST": "Karnataka"
    }
  ]
}
EOF
Copy the cluster CA certificate and key to the home directory so that cfssl can use them to sign the new certificate:
$ cp -p /var/lib/kubernetes/ca.pem /var/lib/kubernetes/ca-key.pem ~/
Generate Certificates
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
podview-csr.json | cfssljson -bare podview
Output
$ ls -lrt podview*
-rw-rw-r-- 1 k8s k8s 235 Feb 3 15:42 podview-csr.json
-rw-rw-r-- 1 k8s k8s 1428 Feb 3 15:48 podview.pem
-rw------- 1 k8s k8s 1675 Feb 3 15:48 podview-key.pem
-rw-r--r-- 1 k8s k8s 1037 Feb 3 15:48 podview.csr
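Optionally, verify the subject of the generated certificate; the API server takes the user name from the CN field (and group membership from O fields, if present):
$ openssl x509 -in podview.pem -noout -subject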
Now we can use this certificate to configure kubectl. kubectl reads ~/.kube/config, so you can either modify that file manually or use kubectl config commands. Let's cat the existing config first (certificate data snipped):
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: SWUZNY2UxeDZkOWtDMWlKQ1puc0VRL3lnMXBobXYxdkxvWkJqTGlBWkRvCjVJYVd
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: iUUsyU1hLT0lWQXYzR3hNMVRXTUhqVzcvSy9scEtSTFd
    client-key-data: BTCtwb29ic1oxbHJYcXFzTTdaQVN6bUJucldRUTRIU1VFYV
$ kubectl config set-credentials podview --client-certificate=podview.pem --client-key=podview-key.pem
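This records the certificate and key as file paths, as the cat below shows. If you prefer to embed the key material itself (handy when copying the config to another machine), kubectl supports an extra flag:
$ kubectl config set-credentials podview --client-certificate=podview.pem --client-key=podview-key.pem --embed-certs=true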
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: SWUZNY2UxeDZkOWtDMWlKQ1puc0VRL3lnMXBobXYxdkxvWkJqTGlBWkRvCjVJYVd
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: iUUsyU1hLT0lWQXYzR3hNMVRXTUhqVzcvSy9scEtSTFd
    client-key-data: BTCtwb29ic1oxbHJYcXFzTTdaQVN6bUJucldRUTRIU1VFYV
- name: podview
  user:
    client-certificate: /home/k8s/podview.pem
    client-key: /home/k8s/podview-key.pem
As we know, kubectl by default acts on the default namespace. Here we will create a context that uses the monitoring namespace instead:
$ kubectl config set-context podview-context --cluster=kubernetes-the-hard-way --namespace=monitoring --user=podview
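In this chapter we pass --context explicitly on each command, but you could also make it the current context:
$ kubectl config use-context podview-context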
$ kubectl get pods --context=podview-context
Output
Error from server (Forbidden): pods is forbidden: User "podview" cannot list resource "pods" in API group "" in the namespace "monitoring"
Authentication succeeded (the API server recognized the user podview), but authorization failed because no role grants podview any permissions yet. Let's create a Role that allows viewing Pods:
$ vi podview-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: monitoring
  name: podview-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
$ kubectl create -f podview-role.yaml
Output
role.rbac.authorization.k8s.io/podview-role created
Now we bind this Role to the user podview:
$ vi podview-role-binding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: podview-role-binding
  namespace: monitoring
subjects:
- kind: User
  name: podview
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: podview-role
  apiGroup: rbac.authorization.k8s.io
$ kubectl create -f podview-role-binding.yaml
rolebinding.rbac.authorization.k8s.io/podview-role-binding created
$ kubectl get pods --context=podview-context
Output
No resources found.
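An empty list (rather than a Forbidden error) confirms the binding works. You can also check permissions without switching contexts by impersonating the user, assuming your admin credentials allow impersonation:
$ kubectl auth can-i list pods --namespace=monitoring --as=podview
This should answer yes, while asking about delete pods the same way should answer no, since the Role only grants get, list, and watch.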
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
When you create a Pod, it is automatically assigned the default service account of its namespace. For an existing Pod (here, one named nginx), you can check which service account it uses:
$ kubectl get pods nginx --output=jsonpath={.spec.serviceAccount} && echo
You can access the API from inside a pod using automatically mounted service account credentials.
Let's start a Pod:
$ kubectl run debugger --image=ansilh/debug-tools --restart=Never
Log in to the Pod:
$ kubectl exec -it debugger -- /bin/sh
Kubernetes injects the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT_HTTPS variables into the Pod during object creation. We can use these variables to construct the API URL.
Also, there is a bearer token which Kubernetes mounts into the Pod at /run/secrets/kubernetes.io/serviceaccount/token. We can pass this token in the HTTP Authorization header.
APISERVER=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}
TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
Use curl to access the API:
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt
Output
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.136.102.232:6443"
    }
  ]
}
Where does this token come from, and how does Kubernetes know which user or group it belongs to?
Kubernetes uses service accounts and tokens to pass authentication and authorization data to objects. When you create a Pod object, Kubernetes uses the default service account and injects the token corresponding to it.
Let's look at the service accounts in the default namespace:
$ kubectl get serviceaccounts
Output
NAME SECRETS AGE
default 1 24h
Who creates this service account?
Kubernetes creates the default service account during namespace creation. This means that whenever you create a namespace, a default service account is created along with it.
In the RBAC scheme, a service account has the following naming convention:
system:serviceaccount:<namespace>:<serviceaccount-name>
Let's try to access another API endpoint:
curl $APISERVER/api/v1/pods --header "Authorization: Bearer $TOKEN" --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt
Output
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
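Note that /api/v1/pods lists Pods across all namespaces, which is why the message says "at the cluster scope". The namespace-scoped endpoint is forbidden as well, since the default service account has no role bound to it at all:
curl $APISERVER/api/v1/namespaces/default/pods --header "Authorization: Bearer $TOKEN" --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt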
Either way, the default service account has no view access to Pods. How can we grant access in this case?
We already discussed Roles and RoleBindings in the previous session, but we didn't discuss service accounts or how to use them. Let's demonstrate that now.
$ kubectl create serviceaccount podview
$ kubectl get serviceaccounts podview -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-02-03T16:53:32Z"
  name: podview
  namespace: default
  resourceVersion: "131763"
  selfLink: /api/v1/namespaces/default/serviceaccounts/podview
  uid: 3d601276-27d4-11e9-aa2d-506b8db54343
secrets:
- name: podview-token-4blzv
Here we can see a secret named podview-token-4blzv. Let's look at it:
$ kubectl get secrets podview-token-4blzv -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FUR
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXR
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: podview
    kubernetes.io/service-account.uid: 3d601276-27d4-11e9-aa2d-506b8db54343
  creationTimestamp: "2019-02-03T16:53:32Z"
  name: podview-token-4blzv
  namespace: default
  resourceVersion: "131762"
  selfLink: /api/v1/namespaces/default/secrets/podview-token-4blzv
  uid: 3d61d6ce-27d4-11e9-aa2d-506b8db54343
type: kubernetes.io/service-account-token
(Keys were snipped to fit the screen.)
The type is kubernetes.io/service-account-token, and we can see ca.crt, namespace (base64 encoded), and a token.
These fields will be injected into the Pod if we use the service account podview to create it.
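The base64-encoded fields decode easily; for example, the namespace field:
$ echo ZGVmYXVsdA== | base64 -d
default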
Since a Pod named debugger already exists, delete it first, then create a new spec that uses the podview service account:
$ kubectl delete pod debugger
$ vi pod-token.yaml
apiVersion: v1
kind: Pod
metadata:
  name: debugger
spec:
  containers:
  - image: ansilh/debug-tools
    name: debugger
  serviceAccountName: podview
$ kubectl create -f pod-token.yaml
$ kubectl exec -it debugger -- /bin/sh
APISERVER=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}
TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
curl $APISERVER/api/v1/pods --header "Authorization: Bearer $TOKEN" --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:default:podview\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
We got the same message as with the default account: the service account doesn't have access to the Pod object.
So we will create a ClusterRole first, which will allow this service account to list Pods across the cluster. Preview it with a dry run:
$ kubectl create clusterrole podview-role --verb=list,watch --resource=pods --dry-run -o yaml
Output
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: podview-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
$ kubectl create clusterrole podview-role --verb=list,watch --resource=pods
Output
clusterrole.rbac.authorization.k8s.io/podview-role created
Now we will bind this role to the service account podview. Preview first:
$ kubectl create clusterrolebinding podview-role-binding --clusterrole=podview-role --serviceaccount=default:podview --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: podview-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: podview-role
subjects:
- kind: ServiceAccount
  name: podview
  namespace: default
$ kubectl create clusterrolebinding podview-role-binding --clusterrole=podview-role --serviceaccount=default:podview
Output
clusterrolebinding.rbac.authorization.k8s.io/podview-role-binding created
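Before going back into the Pod, we can verify the binding from the admin context by impersonating the service account:
$ kubectl auth can-i list pods --as=system:serviceaccount:default:podview
This should now answer yes.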
Let's try to access the API from the Pod again:
k8s@k8s-master-ah-01:~$ kubectl exec -it debugger -- /bin/sh
/ # APISERVER=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}
/ # TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
/ #
/ # curl $APISERVER/api/v1/pods --header "Authorization: Bearer $TOKEN" --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt
(If the request still fails, delete the service account and the Pod and recreate them so that a fresh token is mounted.)
You can also create a single YAML file containing the ServiceAccount, ClusterRole, and ClusterRoleBinding:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: podview
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: podview-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: podview-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: podview-role
subjects:
- kind: ServiceAccount
  name: podview
  namespace: default
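Assuming you saved this as podview-rbac.yaml (a hypothetical file name), all three objects can then be created in one step:
$ kubectl apply -f podview-rbac.yaml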