Kubernetes Interview Questions

  • A) CNI
  • B) kubelet
  • C) kube-proxy
  • D) CRI

kube-proxy is the component responsible for routing Service traffic to Pods. On Linux machines it can run in two modes: iptables mode or IPVS mode. In both cases, kube-proxy programs the underlying rules (iptables rules or IPVS virtual servers) to route traffic from the ClusterIP to the Pods.
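
As a minimal sketch, the mode can be selected in the kube-proxy configuration (kubeproxy.config.k8s.io/v1alpha1 API; in kubeadm clusters this typically lives in the kube-proxy ConfigMap in the kube-system namespace):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"    # or "iptables"; leaving it empty falls back to the default (iptables on Linux)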

  • A) Yes
  • B) No

Yes, we can. We can run a Kubernetes Pod in hostNetwork mode without installing any CNI plugin. When a Pod is defined with the hostNetwork option, no separate Linux network namespace is created for it; the host machine's network stack is shared with the Pod's containers. In that case, the containers do not need any CNI plugin to assign them network interfaces, configuration, and so on.
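
A minimal hostNetwork Pod sketch (the name and image are just placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: host-network-demo      # hypothetical name
    spec:
      hostNetwork: true            # reuse the node's network namespace, no CNI involved
      containers:
      - name: app
        image: nginx               # example image; it listens on the node's own interfaces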

  • A) kube-apiserver
  • B) kube-scheduler
  • C) kubelet
  • D) All of them

Although all components are necessary and should be installed to run a Pod in normal use cases, kubelet is the only mandatory component, and you can run Pods without having kube-apiserver and kube-scheduler installed. With only the kubelet and a container runtime, you can run static Pods on your machine.
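
For example, a static Pod is just a regular Pod manifest dropped into the kubelet's staticPodPath (commonly /etc/kubernetes/manifests on kubeadm-style setups); the kubelet picks it up and runs it on its own:

    # e.g. /etc/kubernetes/manifests/static-web.yaml (path depends on the kubelet's staticPodPath)
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web             # hypothetical name
    spec:
      containers:
      - name: web
        image: nginx               # example image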

  • A) Cluster will go out of reach
  • B) All Pods will go out of reach
  • C) All Pods will go to the Pending state
  • D) New Pods will remain in the Pending state

Although it depends on the CNI plugin, in most cases removing it (its binaries, config files, etc.) only affects new Pods, because they can't start without getting a network configuration and an IP address from the CNI plugin. Already existing Pods keep their configuration and continue to work without issue.

  • A) It’s a special container to pause deployments
  • B) It’s a special container to run the root privilege containers
  • C) It’s a special container to share network namespace with Pod’s containers
  • D) It’s a special container to pause cron jobs

If you look at the container runtime's output, you will find that every Kubernetes Pod has one "pause" container. When you run a Kubernetes Pod, at least two containers are started: the main container, as defined in the PodSpec, and the pause container, which starts before the main container and shares its network namespace (and optionally its PID namespace) with the main container.

  • A) By adding nodeName in Pod spec
  • B) By adding nodeSelector in Pod spec
  • C) By adding topologySpreadConstraints in Pod spec
  • D) By adding affinity in Pod spec

When the nodeName field of the Pod is not empty, the kube-scheduler skips it in its scheduling decisions, and the kubelet on the named node tries to run the Pod directly. If you provide a non-existent node, the Pod will remain in the Pending state.
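
A sketch of a Pod pinned directly to a node via nodeName (the node name here is hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-pod             # hypothetical name
    spec:
      nodeName: worker-1           # bypasses kube-scheduler; the kubelet on worker-1 runs it
      containers:
      - name: app
        image: nginx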

  • A) The Pod will be crashed, and PVC will be removed
  • B) All data will be gone as soon as we delete the PVC resource
  • C) We can’t delete a PVC during active use
  • D) It will be deleted as soon as Pod gets stopped

Kubernetes provides a way to protect data from accidental loss/deletion. If you delete a PVC that is in active use (one or more running Pods are mounting it), it is not removed immediately. Instead, Kubernetes waits until no Pod uses the PVC and only then removes it. This protection gives you a chance to back up the data just before deletion.
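
This protection is implemented with the kubernetes.io/pvc-protection finalizer; a PVC deleted while still mounted looks roughly like this (name and timestamp are hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-pvc                              # hypothetical name
      finalizers:
      - kubernetes.io/pvc-protection              # keeps the object around while Pods still use it
      deletionTimestamp: "2024-01-01T00:00:00Z"   # set on delete; removal waits for the finalizer
    status:
      phase: Bound                                # kubectl shows Terminating until the last Pod releases it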

  • A) All Deploymet’s Pods will be paused
  • B) Deployment will be paused, and changes will not take effect
  • C) Kube-apiserver will be paused to accept new Deployments
  • D) All traffic to Deployment’s Pods will be paused

When you pause a Deployment object, changes to the Deployment no longer go live or take effect; in particular, the rollout mechanism stops. So the action is pausing the Deployment, and the result is pausing the Deployment's rollouts, until you resume it.
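
As a sketch, pausing can be done with kubectl rollout pause deployment/<name>, or declaratively via the spec.paused field (names and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                    # hypothetical name
    spec:
      paused: true                 # template changes are recorded but no new rollout starts
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx           # changing this while paused takes effect only after resume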

  • A) Based on kube-apiserver timezone
  • B) Based on worker node timezone
  • C) Based on kube-controller-manager timezone
  • D) Based on kubelet timezone

The CronJob controller inside the kube-controller-manager component is responsible for creating the CronJob's Jobs/Pods on schedule, and because it runs inside the kube-controller-manager, it evaluates the schedule based on the kube-controller-manager's timezone. Since Kubernetes 1.24, a new timeZone field allows setting the time zone for each CronJob.
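
A sketch of a CronJob using the timeZone field (the name, schedule, and zone are just examples):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report         # hypothetical name
    spec:
      schedule: "0 2 * * *"
      timeZone: "Europe/Berlin"    # without this, the kube-controller-manager's local time zone is used
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: report
                image: busybox
                command: ["date"]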

  • A) Pod-to-Pod traffic will not be sent until readinessProbe gets passed
  • B) Service-to-Pod traffic will not be sent until readinessProbe gets passed
  • C) Pod will stay Pending until readinessProbe gets passed
  • D) Pod will be restarted until readinessProbe gets passed

The readinessProbe option helps Kubernetes (the Endpoints controller) determine whether the Pod is ready to accept connections. If the Pod's readinessProbe fails, the Pod's IP address is removed from the corresponding Endpoints object, which means traffic arriving at the Service will not be sent to that Pod.
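
A minimal readinessProbe sketch (the /healthz endpoint and port are assumptions about the application):

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-demo         # hypothetical name
    spec:
      containers:
      - name: app
        image: nginx
        readinessProbe:
          httpGet:
            path: /healthz         # hypothetical health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10        # the Pod is only added to Endpoints once this probe passes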

  • A) All components call kube-apiserver, and no one calls kubelet APIs
  • B) kube-apiserver may need to call kubelet APIs
  • C) kube-controller-manager may need to call kubelet APIs
  • D) kube-scheduler may need to call kubelet APIs

Although all components call kube-apiserver to do their jobs, kube-apiserver also needs to call the kubelet for two things: it calls the kubelet API to open a shell into a Pod for the kubectl exec command, and to fetch a Pod's logs for the kubectl logs command.

  • A) Pod-to-Pod traffic will not be sent until livenessProbe gets passed
  • B) Service-to-Pod traffic will not be sent until livenessProbe gets passed
  • C) Pod will stay Pending until livenessProbe gets passed
  • D) Pod will be restarted until livenessProbe gets passed

The livenessProbe's job is to ensure the container is healthy; if it fails, the kubelet keeps restarting the container until it becomes healthy. Both livenessProbe and readinessProbe should be defined for every container to ensure it works properly.

  • A) All of them are operational simultaneously
  • B) An odd number of them are operational simultaneously
  • C) An even number of them are operational simultaneously
  • D) One of them is only operational

The kube-apiserver is a fully stateless service that can be scaled out as much as we want. Its job is to receive requests and persist the resulting state in the etcd database. If you deploy more than one instance of kube-apiserver, all of them are operational and can accept requests and write to etcd without depending on each other.

  • A) All of them are operational simultaneously
  • B) An odd number of them are operational simultaneously
  • C) An even number of them are operational simultaneously
  • D) One of them is only operational

The kube-controller-manager service uses a leader-election algorithm, which means that if you have more than one instance, only one of them is elected as the leader, and the leader is the only one that operates. The other instances stay in standby mode (they don't operate or manage anything) until they are elected.
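
In clusters that use Lease-based leader election, you can see which instance currently holds the lock by inspecting the kube-controller-manager Lease in the kube-system namespace; a rough sketch of such an object (holder identity and timestamp are hypothetical):

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: kube-controller-manager
      namespace: kube-system
    spec:
      holderIdentity: control-plane-1_a1b2c3d4    # the instance currently acting as leader
      leaseDurationSeconds: 15
      renewTime: "2024-01-01T00:00:00.000000Z"    # the leader keeps renewing; standbys wait to take over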

  • A) All of them are operational simultaneously
  • B) An odd number of them are operational simultaneously
  • C) An even number of them are operational simultaneously
  • D) One of them is only operational

Again, the kube-scheduler service uses a leader-election algorithm, which means that if you have more than one instance, only one of them is elected as the leader, and the leader is the only one that operates. The other instances stay in standby mode (they don't operate or manage anything) until they are elected.

  • A) Cluster will be completely non-operational. kube-apiserver cannot provide any response. Pods and containers will be stopped.
  • B) Cluster will be completely non-operational. kube-apiserver cannot provide any response. Already created Pods and containers remain running.
  • C) Cluster will be semi-operational. kube-apiserver will be read-only. Pods and containers will be stopped.
  • D) Cluster will be semi-operational. kube-apiserver will be read-only. Already created Pods and containers remain running.

If, for any reason, the etcd cluster loses quorum (fewer than a majority of members, i.e. floor(n/2)+1, are available; for example, a 3-member cluster needs 2 healthy members), the Kubernetes cluster goes out of reach. None of the components can do anything, and kube-apiserver returns etcd timeout errors, but all the already-running Pods and containers keep running, although they may also face errors and timeouts.

  • A) Piece of code to extend kube-apiserver Authentication mechanism
  • B) Piece of code to extend kube-apiserver Authorization mechanism
  • C) Piece of code to intercept requests to kube-apiserver before Auth mechanism
  • D) Piece of code to intercept requests to kube-apiserver after Auth mechanism

Admission controllers are pieces of code (which can also be external applications outside the kube-apiserver core, in the case of admission webhooks) attached to the kube-apiserver to intercept incoming requests and extend the kube-apiserver's functionality by validating and/or mutating those requests. They are run/called after the request has been authenticated and authorized.
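
A rough sketch of how such an interceptor is registered as a validating admission webhook (all names and the service location are hypothetical):

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: pod-policy                   # hypothetical name
    webhooks:
    - name: pods.policy.example.com      # hypothetical webhook name
      admissionReviewVersions: ["v1"]
      sideEffects: None
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      clientConfig:
        service:
          namespace: default             # hypothetical namespace
          name: pod-policy-webhook       # hypothetical Service backing the webhook
          path: /validate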
