Free CKAD Practice Questions

Tarun Ghai
16 min read · May 20, 2021

Certified Kubernetes Application Developer (CKAD) is a hands-on, performance-based exam, and it's always good to practice as many CKAD sample questions as we can.

Below are some practice questions for the CKAD exam that will help you understand the concepts better and also help you increase your speed.

You may like to use the short notations below to save time.

export dr="--dry-run=client"
export ns=default
alias k="kubectl"
alias ka="k apply -f"
alias kc="k create"
alias kr="k run"
alias kg="k get"
alias kdd="k describe"
alias kdx="k delete --force --grace-period=0"

We will use these short notations in the practice questions below.
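Note that these aliases and variables last only for the current shell session. If you want them available in every new session, one option (assuming a bash shell) is to append them to your ~/.bashrc, for example:

echo 'alias k="kubectl"' >> ~/.bashrc
source ~/.bashrc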

1. Create a new namespace myns and then create a pod nginx1 in this namespace with image name nginx and tag alpine.

Ans:

kc ns myns
export ns=myns
kr -n $ns nginx1 --image=nginx:alpine
kg po -n $ns
kdd -n $ns pod nginx1

2. Create a pod named nginx2 with image nginx and expose it on port 8080 in default namespace.

Ans:

kr nginx2 --image=nginx --port=8080

3. Create a pod named nginx3 with an nginx container named ngcontainer that starts only once.

Ans:

kr nginx3 --image=nginx $dr -oyaml > pod3.yaml

Update the yaml file to change the container name and the restartPolicy as shown below.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx3
  name: nginx3
spec:
  containers:
  - image: nginx
    name: ngcontainer
  restartPolicy: Never

Use the kubectl apply command to create the pod.

ka pod3.yaml

4. Create the yaml file named pod4.yaml for the pod nginx with image nginx that runs command date in default namespace. (do not create the pod)

Ans:

kr nginx --image=nginx $dr -oyaml --command -- date > pod4.yaml 

5. Create the json file named pod5.json for pod nginx with image nginx with environment variable var1=val1. (do not create the pod)

Ans:

kr nginx --image=nginx $dr -ojson --env=var1=val1 > pod5.json

6. As a pre-work, run the below command.

kr nginx6 --image=nginx678

This will launch a new pod which will fail immediately.

Find the pod that is in an unhealthy state across all the namespaces. Find the root cause and fix the issue to bring the pod to a healthy state.

Ans:

kg po -A

This command will print all the pods in all the namespaces. Now observe the STATUS and READY columns to find the pod which is not in the Running state. The image nginx678 is wrong; update the image to nginx, as shown below.
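One way to apply the fix imperatively (kubectl run names the container after the pod, so the container here should be nginx6):

k set image pod/nginx6 nginx6=nginx
kg po nginx6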

7. Create a busybox pod named bbox7 that runs the command env on the terminal and gets deleted automatically after that.

Ans:

kr bbox7 --image=busybox --restart=Never --rm -it -- sh -c env

8. Create a pod nginx8 with image nginx with label version=v1.

Ans:

k run nginx8 --image=nginx -l version=v1

9. Add a new label app=mywebapp to the existing pod named nginx8. Also update the label version to v2.

Ans:

k label pod nginx8 app=mywebapp
k label pod nginx8 version=v2 --overwrite
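To verify both labels (for reference, a trailing dash such as k label pod nginx8 app- would remove a label):

kg po nginx8 --show-labels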

10. Create a pod named nginx10 labeled as app=webapp with image nginx, exposing port 8080, that runs the command "echo HelloWorld" with maximum resource limits of cpu=100m and memory=1024Mi. The pod should restart only in case it fails for any reason.

Ans:

kr nginx10 --image=nginx -l=app=webapp --port=8080 --restart=OnFailure --limits='cpu=100m,memory=1024Mi' -- sh -c "echo HelloWorld"

11. Create a pod nginx11 with image nginx and make sure it is deployed on node minikube.

Ans: Use the below yaml to create the pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx11
  name: nginx11
spec:
  containers:
  - image: nginx
    name: nginx11
  nodeName: minikube

12. Label the node minikube using the below command.

k label node minikube nodename=mynode

Now create a new pod nginx12 that will be deployed on the node with this label.

Ans:

Use the below yaml to create the pod.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx12
  name: nginx12
spec:
  containers:
  - image: nginx
    name: nginx12
  nodeSelector:
    nodename: mynode

13. For the nginx12 pod, find the IP address of the pod and the IP address of the node on which this pod is running.

Ans:

Use the below commands to find the IP addresses of the pod and the node.

kg pods -o wide
kg nodes -o wide
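Alternatively, you can pull just the two addresses with a jsonpath query; podIP and hostIP are standard fields under the pod's status:

kg pod nginx12 -o jsonpath='{.status.podIP}{"\n"}{.status.hostIP}{"\n"}'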

14. As a pre-work, run the below commands to create 4 pods with different labels.

kr ngtemp1 --image=nginx -l=app=v1
kr ngtemp2 --image=nginx -l=app=v1
kr ngtemp3 --image=nginx -l=app=v2
kr ngtemp4 --image=nginx -l=app=v2

Show all the pods with their labels. Delete all the pods with label app=v1 and then show all the remaining pods with their labels.

Ans:

kg po --show-labels
kdx po -l=app=v1
kg po --show-labels

15. Create a deployment called cncfapp with image nginx and tag alpine with 2 replicas. The nginx server should be running on port 8080 and should run command sleep 3600.

Ans:

kc deploy cncfapp --image=nginx:alpine --replicas=2 --port=8080 -- sh -c 'sleep 3600'

16. Roll out the cncfapp to the new image nginx:1.17 and scale up the replicas to 5. Check the status of the deployment and pods. Make sure the pods are running the nginx:1.17 image.

Ans:

k set image deploy cncfapp nginx=nginx:1.17 --record
k scale deploy cncfapp --replicas=5
k rollout history deploy cncfapp
kg po
kdd pod pod-name | grep -i image -A2

17. Roll out the cncfapp to image nginx:777. Observe that the new pods won't be created successfully as the image nginx:777 does not exist. Roll back cncfapp to the old image nginx:1.17.

Ans:

k set image deploy cncfapp nginx=nginx:777 --record
kg po
k rollout history deploy cncfapp
k rollout undo deploy cncfapp
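To confirm the rollback completed and the old image is back:

k rollout status deploy cncfapp
kdd deploy cncfapp | grep -i image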

18. Create a deployment called cloudapp with image nginx:1.16 having 4 replicas. Upgrade the deployment to use nginx:1.17 image and make sure all old pods get deleted immediately during upgrade.

Ans:

kc deploy cloudapp --image=nginx:1.16 --replicas=4 $dr -oyaml > dep18.yaml

Inside dep18.yaml, change the strategy to Recreate (instead of the default RollingUpdate) so that existing pods are killed before new ones are created.

spec:
  strategy:
    type: Recreate

Use the below commands to create the deployment and upgrade it.

ka dep18.yaml
kg po
kg deploy
k set image deploy cloudapp nginx=nginx:1.17
kg po
kg deploy
k rollout history deploy cloudapp --revision=2

19. Create a deployment called onlineapp with image nginx:1.10 having 4 replicas. Upgrade the deployment to use nginx:1.11 image and make sure all the new pods are created immediately and only half of the old pods go down in one go during upgrade.

Ans:

kc deploy onlineapp --image=nginx:1.10 --replicas=4 $dr -oyaml > dep19.yaml

Modify the yaml file with the below changes.

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 50%

Use the below commands to create the deployment and upgrade it.

ka dep19.yaml
kg po
kg deploy
k set image deploy onlineapp nginx=nginx:1.11
kg po
kg deploy
k rollout history deploy onlineapp --revision=2

20. Find the issue in below yaml file for a deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mywebapp
  name: mywebapp
spec:
  replicas: 50
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - image: nginx
        name: nginx

Ans:

The matchLabels in the selector section (app: myapp) does not match the pod template's label (app: mywebapp). They must match, otherwise the Deployment is rejected.
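A minimal fix, making the selector match the template label:

spec:
  selector:
    matchLabels:
      app: mywebapp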

21. Create a Deployment called secureapp with the nginx image and 5 replicas. The pods should use a service account called mysecuresa.

Ans:

kc deploy secureapp --image=nginx --replicas=5 $dr -oyaml > 21.yaml

Update the yaml file as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: secureapp
  name: secureapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: secureapp
  template:
    metadata:
      labels:
        app: secureapp
    spec:
      serviceAccountName: mysecuresa
      containers:
      - image: nginx
        name: nginx

Use the below commands to create the service account and the deployment.

kc sa mysecuresa
ka 21.yaml
kg deployment
kg po

22. Create a job called myjob that executes the command
echo Hello ; sleep 10 ; echo World ; sleep 20 ; echo DontPrint

This job should not run for more than 30 seconds.

Ans:

kc job myjob --image=busybox $dr -oyaml -- sh -c 'echo Hello ; sleep 10 ; echo World ; sleep 20 ; echo DontPrint' > job22.yaml

Edit the yaml file to add activeDeadlineSeconds of 30 as shown below.

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  activeDeadlineSeconds: 30
  template:
    metadata:
    spec:
      containers:
      - command:
        - sh
        - -c
        - 'echo Hello ; sleep 10 ; echo World ; sleep 20 ; echo DontPrint'
        image: busybox
        name: myjob
      restartPolicy: Never

Create the job using the below commands.

ka job22.yaml
kg jobs
kg po
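After roughly 30 seconds the job should be killed by the deadline, so DontPrint never appears in the logs. The exact wording varies by Kubernetes version, but the job's description should mention the exceeded deadline:

kdd job myjob | grep -i deadline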

23. Create a job called job23 that executes the command date. The job should run for a total of 20 times. At one time, 5 parallel jobs should be launched.

Ans:

kc job job23 --image=busybox $dr -oyaml -- sh -c date > job23.yaml

Update the completions and parallelism in the yaml file as shown below.

apiVersion: batch/v1
kind: Job
metadata:
  name: job23
spec:
  completions: 20
  parallelism: 5
  template:
    metadata:
    spec:
      containers:
      - command:
        - sh
        - -c
        - date
        image: busybox
        name: job23
      restartPolicy: Never

Run the below commands to create the job and view the logs.

ka job23.yaml
k logs jobs/job23
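You can also watch the COMPLETIONS column climb to 20/20:

kg jobs job23 -w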

24. Create a cronjob that runs every 5 minutes and executes date command. If the cronjob cannot start the job at its scheduled time for any reason, and if 60 seconds have passed after the schedule, do not start the job.

Ans:

kc cj mycronjob $dr -oyaml --image=busybox --schedule="*/5 * * * *" -- sh -c date > cj.yaml

Edit cj.yaml to add startingDeadlineSeconds as shown below.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mycronjob
spec:
  jobTemplate:
    metadata:
      name: mycronjob
    spec:
      template:
        metadata:
        spec:
          containers:
          - command:
            - sh
            - -c
            - date
            image: busybox
            name: mycronjob
          restartPolicy: OnFailure
  schedule: '*/5 * * * *'
  startingDeadlineSeconds: 60

Run the below commands to create the cronjob.

ka cj.yaml
kg cj
kg jobs

25. Create a file called name.txt with contents FName=Tarun and LName=Ghai. Create a config map called cmap from this file. Create a pod with image nginx that loads this configmap into a local volume at location /etc/cmaps/. Once the pod runs, show the contents of the file from /etc/cmaps on the terminal.

Ans:

Create name.txt with the below contents.

FName=Tarun
LName=Ghai

Run the below commands.

kc cm cmap --from-file=name.txt
kdd cm cmap
kr nginx25 --image=nginx $dr -oyaml > pod25.yaml

Update the yaml file as shown below.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx25
  name: nginx25
spec:
  volumes:
  - name: myvol
    configMap:
      name: cmap
  containers:
  - image: nginx
    name: nginx25
    volumeMounts:
    - name: myvol
      mountPath: /etc/cmaps

Run the below commands to create the pod and view the contents of the file.

ka pod25.yaml
k exec nginx25 -- cat /etc/cmaps/name.txt

26. Create a secret mysecret with variables uname=tarun@111 and passwd=welcome@111.

Create a configmap mycm with variables username=tarun@222 and password=welcome@222.

Create a pod with image nginx that loads above secret and config map as environment variables. Once the pod runs, display its environment variables on the terminal and confirm that all 4 variables above are loaded.

Ans:

kc secret generic mysecret --from-literal=uname=tarun@111 --from-literal=passwd=welcome@111
kc cm mycm --from-literal=username=tarun@222 --from-literal=password=welcome@222
kr nginx26 --image=nginx $dr -oyaml > pod26.yaml

Update the yaml file as shown below.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx26
  name: nginx26
spec:
  containers:
  - image: nginx
    name: nginx26
    envFrom:
    - configMapRef:
        name: mycm
    - secretRef:
        name: mysecret

Run the below commands.

ka pod26.yaml
k exec nginx26 -- env

27. Create a pod with a single nginx container. It should run with user id 1010, group id 2020 and filesystem group id 5000. It should also have the NET_ADMIN capability.

Ans:

kr nginx27 --image=nginx $dr -oyaml > pod27.yaml

Update the yaml file as shown below.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx27
  name: nginx27
spec:
  securityContext:
    fsGroup: 5000
  containers:
  - image: nginx
    name: nginx27
    securityContext:
      runAsUser: 1010
      runAsGroup: 2020
      capabilities:
        add: ["NET_ADMIN"]

Run the below command.

ka pod27.yaml

28. Create a pod nginx28 with three nginx containers ngcon1, ngcon2 and ngcon3. ngcon1 and ngcon2 should run with user id = 1010 but ngcon3 should run with user id 2020. All containers should have the NET_ADMIN capability.

Ans:

Please use the below yaml to create the pod with the above requirements.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx28
  name: nginx28
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - image: nginx
    name: ngcon1
    ports:
    - containerPort: 8081
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
  - image: nginx
    name: ngcon2
    ports:
    - containerPort: 8082
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
  - image: nginx
    name: ngcon3
    ports:
    - containerPort: 8083
    securityContext:
      runAsUser: 2020
      capabilities:
        add: ["NET_ADMIN"]

29. Run the below command to create the liveness.yaml file.

kr livenesspod --image=k8s.gcr.io/liveness $dr -oyaml -- /server > liveness.yaml

The liveness image runs a web server that serves /healthz on port 8080. It returns HTTP status code 200 for the first 10 seconds and HTTP status code 500 after that.
Add a relevant liveness probe. Observe that the liveness probe fails after about 10 seconds as the web server starts returning HTTP 500, and that the pod restarts after 10-15 seconds.

Ans:

Update the yaml file as shown below.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: livenesspod
  name: livenesspod
spec:
  containers:
  - args:
    - /server
    image: k8s.gcr.io/liveness
    name: livenesspod
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080

Run the below commands.

ka liveness.yaml 
kg po
kdd pod livenesspod

30. Run the below command to create the readiness.yaml file.

kr readinesspod --image=busybox $dr -oyaml -- sh -c 'touch /tmp/healthy; sleep 600' > readiness.yaml

Add a relevant readiness probe that checks the existence of the /tmp/healthy file in the pod. Assume that the application takes 20 seconds to start and its health should be checked every 5 seconds.

Ans:

Update the yaml file as shown below.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: readinesspod
  name: readinesspod
spec:
  containers:
  - args:
    - sh
    - -c
    - touch /tmp/healthy; sleep 600
    image: busybox
    name: readiness
    readinessProbe:
      exec:
        command: ["sh","-c","cat /tmp/healthy"]
      initialDelaySeconds: 20
      periodSeconds: 5

Run the below commands.

ka readiness.yaml 
kg po
kdd pod readinesspod

Observe that the pod goes to the Ready state only after 20+ seconds, as the readiness probe starts 20 seconds after the container has started.

31. Create an nginx31 pod with image nginx, and also create a ClusterIP service for this pod.

Ans:

kr nginx31 --image=nginx --port=80 --expose
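The --expose flag creates a ClusterIP service with the same name as the pod. Verify with:

kg svc nginx31
kdd svc nginx31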

32. For the above nginx31 pod, also expose a service named service32 of type ClusterIP at port 8080.

Ans:

k expose pod nginx31 --name=service32 --port=8080 --target-port=80

33. For the above nginx31 pod, expose a service named service33 of type NodePort at port 8080.

Ans:

k expose pod nginx31 --name=service33 --port=8080 --target-port=80 --type=NodePort

34. Create a NodePort service called service34 for the above nginx31 pod. The service should use nodePort 30010.

Ans:

k expose pod nginx31 --name=service34 --type=NodePort --port=80 $dr -oyaml > svc.yaml

Update the yaml as shown below to add nodePort 30010. Note that the selector must match the pod's label, which is run: nginx31 for a pod created with kubectl run.

apiVersion: v1
kind: Service
metadata:
  name: service34
spec:
  type: NodePort
  selector:
    run: nginx31
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30010

Run the below commands.

ka svc.yaml 
kg svc
kdd svc service34
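Assuming a minikube cluster (as used in earlier questions), you can hit the NodePort directly:

curl $(minikube ip):30010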

35. Create a Deployment called online35 with image nginx and 2 replicas. Expose a service service35 of type NodePort for this deployment.

Ans:

kc deployment online35 --image=nginx --replicas=2 
k expose deploy online35 --name=service35 --port=80 --type=NodePort
kg svc
kdd svc service35

36. Two pods are running in the k8s cluster, built using the below commands.

kr serverapp --image=nginx 
kr clientapp --image=busybox -- sh -c "sleep 600"

There is a network policy in the cluster with the below contents.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: server-access-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client

The clientapp is not able to talk to the serverapp. Without changing the network policy, fix the communication issue.

Ans:

The clientapp and serverapp pods don't have the right labels. Fix their labels with the below commands.

kg po --show-labels
k label pod serverapp app=server
k label pod clientapp app=client
kg po --show-labels

This should fix the communication issue. Now open a shell inside clientapp.

k exec clientapp -it -- sh

Use the below command to test the communication (replace serverapp-ipaddress with the pod IP of serverapp).

wget serverapp-ipaddress
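Busybox wget hangs when traffic is blocked, so its -T flag (read timeout in seconds) makes a blocked test fail fast:

wget -qO- -T 2 serverapp-ipaddress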

37. Create a PersistentVolume called mypv of 1Gi, with accessMode ReadWriteOnce and storageClassName 'manual'. It should be mounted on hostPath '/mnt/data'.

Create a PersistentVolumeClaim called mypvc that requests 500Mi with an accessMode of ReadWriteOnce and storageClassName 'normal'.

Ans:

Use the below yaml file to create the PersistentVolume.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Use the below yaml file to create the PersistentVolumeClaim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  storageClassName: normal
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

You will notice that the PVC does not get bound because the storage classes do not match. Fix the storage class and recreate the PVC, as shown below.
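One possible fix, updating the claim's storageClassName to match the volume:

spec:
  storageClassName: manual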

38. Create a pod pvpod with two busybox containers that run the command 'sleep 600' and mount the PersistentVolumeClaim at '/opt/path1' and '/opt/path2' respectively. Connect to the first container and write HelloWorld to /opt/path1/file.txt. Connect to the second container and view the contents of file.txt at /opt/path2/.

Ans:

kr pvpod --image=busybox $dr -oyaml -- sh -c "sleep 600" > pvpod.yaml

Update the yaml file as shown below.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvpod
  name: pvpod
spec:
  volumes:
  - name: mypv
    persistentVolumeClaim:
      claimName: mypvc
  containers:
  - image: busybox
    name: pvpod1
    command: ["sh","-c","sleep 600"]
    volumeMounts:
    - name: mypv
      mountPath: "/opt/path1"
  - image: busybox
    name: pvpod2
    command: ["sh","-c","sleep 600"]
    volumeMounts:
    - name: mypv
      mountPath: "/opt/path2"

Create pvpod with the above yaml file.

Run the below commands to create a file in the first container and view its contents from the second container.

k exec pvpod -c pvpod1 -it -- sh -c 'echo HelloWorld > /opt/path1/file.txt'
k exec pvpod -c pvpod2 -it -- sh -c 'ls -lrt /opt/path2/file.txt'
k exec pvpod -c pvpod2 -it -- sh -c 'cat /opt/path2/file.txt'

39. Create 3 PersistentVolumes mypv1, mypv2 and mypv3, all of 1Gi and with storageClassName normal. The access mode should be ReadWriteOnce (RWO).

They should use hostPath /opt/data1, /opt/data2 and /opt/data3 respectively.

Ans:

Use the below yaml files to create the three PersistentVolumes.

mypv1.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
  labels:
    name: mypv1
spec:
  storageClassName: normal
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/opt/data1"

mypv2.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv2
  labels:
    name: mypv2
spec:
  storageClassName: normal
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/opt/data2"

mypv3.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv3
  labels:
    name: mypv3
spec:
  storageClassName: normal
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/opt/data3"

40. Create a PersistentVolumeClaim with the right access mode and 500Mi size that always gets bound to mypv2.

Ans:

Use the below yaml file to create the PersistentVolumeClaim that always binds to mypv2.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc40
spec:
  storageClassName: normal
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      name: mypv2
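The selector restricts binding to PersistentVolumes labeled name=mypv2. After creating it, the VOLUME column should show mypv2:

kg pvc mypvc40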

41. Run the below command to create a pod with two containers.

ka https://k8s.io/examples/pods/two-container-pod.yaml

One of the containers is not in the Ready state. Make the necessary changes so both containers are Ready and the pod goes to the Running state. You may delete and recreate the pod.

Ans:

Get the yaml file of the pod and append sleep 300 to the args, so the debian container does not exit immediately after the echo statement.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html ; sleep 300"]

42. Place a new file called release.html containing the message "Release Notes from the debian container" at /usr/share/nginx/html of the nginx web server using the shared volume of the pod, and test that the nginx server is hosting release.html.

Ans:

Make the below changes to the yaml file and recreate the pod.

spec:
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: shared-data
  - name: debian-container
    image: debian
    volumeMounts:
    - mountPath: /pod-data
      name: shared-data
    command: ["/bin/sh"]
    args: ["-c", "echo 'Release Notes from the debian container' > /pod-data/release.html ; sleep 600"]

Now test release.html using the below commands.

Install curl on the debian-container using the below commands.

k exec two-containers -c debian-container -it -- sh
apt-get update
apt-get install curl

Run the below command to test that the nginx server is hosting release.html (containers in a pod share the network namespace, so localhost also works here).

k exec two-containers -c debian-container -it -- sh -c 'curl NGINX_POD_IP_ADDRESS/release.html'

43. Run the below commands to create a pod in the mem-example namespace.

kc ns mem-example
kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example

The pod fails to start successfully. Fix the issue to bring the pod to Running state.

Ans:

The pod goes to the OOMKilled state. As can be seen in the args, the container tries to allocate 250M of memory, but the memory limit is only 100Mi. Increase the limit to 300Mi as shown below.

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "300Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]

44. Run the below command to create a pod. The pod fails to start. Fix the issue and bring the pod to the Running state.

kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example

Ans:

The pod is in the Pending state as the node does not have sufficient memory for the pod's request. Update the resources section of the pod as shown below and recreate the pod.

resources:
  requests:
    memory: "1Gi"
  limits:
    memory: "1Gi"

45. Run the below commands to create a pod. The pod fails to start. Fix the issue and bring the pod to the Running state.

kc ns cpu-example
kubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example

Ans:

The pod is in the Pending state as the node does not have sufficient cpu for the pod's request. Update the resources section of the pod as shown below and recreate the pod.

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-2
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr-2
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "1"
    args:
    - -cpus
    - "2"

46. Without using an imperative command, create a yaml file for a secret with username=tom and password=cruise. Create the secret using this yaml file. Create an nginx pod that has access to the secret data through a volume.

Ans:

echo -n 'tom' | base64
echo -n 'cruise' | base64
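You can sanity-check the encoded values by decoding them back:

echo 'dG9t' | base64 -d
echo 'Y3J1aXNl' | base64 -d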

Create a yaml file for the secret like below.

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  username: dG9t
  password: Y3J1aXNl

Create a yaml file for the pod like below.

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: secret-pod
    image: nginx
    volumeMounts:
    - name: secretvolume
      mountPath: /opt/secretvolume
  volumes:
  - name: secretvolume
    secret:
      secretName: mysecret

Create the pod using the above yaml and verify the volume and secret contents.

kg po 
k exec -i -t secret-pod -- /bin/bash
#ls /opt/secretvolume
#cat /opt/secretvolume/username
#cat /opt/secretvolume/password

47. Create a secret called backend-user with backend-username='backend-admin'. Create another secret called db-user with db-username='db-admin'.

Create a pod called envvars-multiple-secrets with two environment variables BACKEND_USERNAME and DB_USERNAME. BACKEND_USERNAME should load backend-username from backend-user secret and
DB_USERNAME should load db-username from db-user secret.

Ans:

kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
kubectl create secret generic db-user --from-literal=db-username='db-admin'

Use the below yaml file to create the pod.

apiVersion: v1
kind: Pod
metadata:
  name: envvars-multiple-secrets
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: BACKEND_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-user
          key: db-username

Create the pod using the above yaml and view the environment variables.

kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c env

48. Use the below yaml file to create the pod.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ngpod48
  name: ngpod48
spec:
  containers:
  - image: nginx
    name: ngpod1
  - image: nginx
    name: ngpod2

The pod goes to the Error state after some time. Find the root cause.

Ans:

kdd pod ngpod48
k logs ngpod48 -c ngpod1
k logs ngpod48 -c ngpod2

The logs show that one of the nginx containers fails with an "Address already in use" error: both containers share the pod's network namespace, so the second nginx cannot bind to port 80.

49. Create a deployment redis with 5 replicas. Whenever containers go down for any reason, the deployment controller brings them back. Force the use of the local image during container creation and make sure the image is never pulled.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis
  name: redis
spec:
  replicas: 5
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Never
        name: redis

50. Run the below commands.

kr nginx50 --image=nginx -l=role=frontend
kr bbox50 --image=busybox -l=role=db -- sh -c 'sleep 600'

Create a network policy for the pod bbox50 to allow ingress network connections from:
1) the pod nginx50, which is in the same namespace as bbox50.
2) all the pods in the namespace dbadmin. The dbadmin namespace has label layer=dbadmin.
3) the redis pod with label role=cacheserver in the namespace cachens. The cachens namespace has label layer=cache.

Ans: Create the NetworkPolicy using the below yaml file. Note that in the third rule the namespaceSelector and podSelector sit in the same from entry, so both conditions must match; listing them as separate entries would allow traffic matching either one.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    - namespaceSelector:
        matchLabels:
          layer: dbadmin
    - namespaceSelector:
        matchLabels:
          layer: cache
      podSelector:
        matchLabels:
          role: cacheserver
