Kubernetes
Creating users on your Kubernetes cluster
From the Kubernetes documentation: “Kubernetes does not have objects which represent normal user accounts.” In other words, there is no adduser
command. To “create a user” you create a TLS client certificate with the user’s name in the common name (CN) field. So you’ll need to create a key, generate a CSR, and then have that CSR signed by the certificate authority (CA) that Kubernetes creates during the bootstrap process. The ca.crt
file can be found in /etc/kubernetes/pki on the master node. It is also available as a ConfigMap in the default namespace called kube-root-ca.crt
. Run kubectl describe cm kube-root-ca.crt
and you’ll see the CA cert.
First let’s create a key and a certificate signing request. The organization (O) field in the subject maps to the user’s group memberships in Kubernetes, so /O=admins places this user in the admins group.
USER=cstevens
openssl req -newkey ed25519 -nodes -keyout ${USER}.key -out ${USER}.csr -subj /CN=${USER}/O=admins
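Before submitting it, you can sanity-check the CSR with openssl to confirm the subject came out as intended:

```shell
# Inspect the CSR subject; with the command above, expect CN = cstevens and O = admins
openssl req -in ${USER}.csr -noout -subject
```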
Kubernetes can sign certificate requests, so let’s submit the certificate signing request (CSR) for approval. The request needs to be base64-encoded.
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: ${USER}
spec:
  groups:
  - system:authenticated
  request: $(cat ${USER}.csr | base64 -w 0)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
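A note on the base64 -w 0 in the manifest above: GNU base64 wraps its output at 76 columns by default, which would corrupt the single-line request field, so -w 0 disables wrapping (BSD/macOS base64 doesn’t wrap and has no -w flag). A quick round-trip illustrates the encoding:

```shell
# Encode on a single line, then decode back
printf 'hello' | base64 -w 0   # prints aGVsbG8=
printf 'aGVsbG8=' | base64 -d  # prints hello
```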
You will now be able to see the CSR in a Pending
state in Kubernetes
kubectl get csr ${USER}
NAME       AGE   SIGNERNAME                            REQUESTOR          CONDITION
cstevens   88s   kubernetes.io/kube-apiserver-client   kubernetes-admin   Pending
You can now approve the certificate signing request by running
kubectl certificate approve ${USER}
Now if you run the same kubectl get csr ${USER}
command again, you’ll see that it’s been approved and issued:
NAME       AGE     SIGNERNAME                            REQUESTOR          CONDITION
cstevens   4m30s   kubernetes.io/kube-apiserver-client   kubernetes-admin   Approved,Issued
To view the approved certificate, you can run
kubectl describe csr ${USER}
Let’s grab the signed cert from Kubernetes, base64-decode it, and save it locally
kubectl get csr ${USER} -o jsonpath="{.status.certificate}" | base64 -d > ${USER}.crt
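As a quick check, you can inspect the issued certificate; the subject should match the CSR, and the issuer should be the cluster CA:

```shell
# Show who the certificate identifies, who signed it, and when it expires
openssl x509 -in ${USER}.crt -noout -subject -issuer -enddate
```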
Next let’s get the CA cert from the cluster. We’ll need it on disk when we generate the kubeconfig for the new user.
kubectl -n default get cm kube-root-ca.crt -o jsonpath="{.data.ca\.crt}" > kube-root-ca.crt
Now let’s generate the kubeconfig file. This is the file the Kubernetes client kubectl
will use to talk to the cluster. First let’s add the cluster to the kubeconfig (the --server address is this cluster’s API server; substitute your own).
KUBECONFIG=${USER}.kubeconfig
CONTEXT=${USER}@kubernetes
kubectl config set-cluster kubernetes --server=https://192.168.1.231:6443 --certificate-authority=kube-root-ca.crt --embed-certs=true --kubeconfig=${KUBECONFIG}
Then set our credentials
kubectl config set-credentials ${USER} --embed-certs=true --client-key=${USER}.key --client-certificate=${USER}.crt --kubeconfig=${KUBECONFIG}
Create the context
kubectl config set-context ${CONTEXT} --cluster=kubernetes --user=${USER} --kubeconfig=${KUBECONFIG}
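For reference, the three commands above produce a kubeconfig shaped roughly like this (certificate and key data elided; field order may vary):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://192.168.1.231:6443
    certificate-authority-data: <base64-encoded CA cert>
users:
- name: cstevens
  user:
    client-certificate-data: <base64-encoded client cert>
    client-key-data: <base64-encoded client key>
contexts:
- name: cstevens@kubernetes
  context:
    cluster: kubernetes
    user: cstevens
```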
Now you can take this ${USER}.kubeconfig file and copy it to ${HOME}/.kube/config, which is the default location kubectl reads it from. Once it’s copied, switch to the newly created context
kubectl config use-context ${CONTEXT}
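One caveat: authentication isn’t authorization. The new user can connect, but has no permissions until you grant some via RBAC. As a minimal sketch, assuming you want the admins group (from the cert’s O field) to have full cluster access, a ClusterRoleBinding like this would do it:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admins-cluster-admin   # hypothetical name for this example
subjects:
- kind: Group
  name: admins                 # matches the O= field in the client cert
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin          # built-in superuser role
  apiGroup: rbac.authorization.k8s.io
```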
Some basic pod functions
Create an nginx deployment
$ kubectl create deployment nginx --image nginx
Scale the deployment
$ kubectl scale deployment nginx --replicas 2
Expose the deployment as a NodePort
$ kubectl expose deployment nginx --type NodePort --port 80
To access the nginx deployment, use kubectl get service
to find the NodePort, then browse to http://<node>:<port>
where <node> is the IP/hostname of any of the Kubernetes nodes and <port> is the five-digit port listed for the nginx service (from the default NodePort range, 30000–32767).
Create an nginx pod and shell into it
$ kubectl run my-nginx-pod -it --image nginx -- sh
Delete the my-nginx-pod pod you just created
$ kubectl delete pod my-nginx-pod
Create a pod that is deleted automatically after it finishes running
$ kubectl run my-nginx-pod -it --rm --image nginx -- sh
Access the nginx pod you created
$ kubectl port-forward my-nginx-pod 8080:80
<browse to localhost:8080>
View logs of the nginx pod
$ kubectl logs my-nginx-pod
Diagnostics
Component status
kubectl get componentstatus
is deprecated as of Kubernetes v1.19. You can instead probe the API server’s health endpoints directly on a master node
curl -k "https://localhost:6443/livez?verbose"
The /readyz endpoint can be queried the same way to check readiness.