Kubernetes Interview Questions – Part 2

  • A) foreground
  • B) background
  • C) orphan
  • D) none

Kubernetes deletes dependent resources in background mode by default, which means that when you delete a resource, for example, a Deployment, the controller deletes the Deployment object, returns the result to you immediately, and removes the ReplicaSets and Pods in the background. In foreground mode, the deletion waits until all dependent resources are deleted, and in orphan mode, Kubernetes removes only the Deployment object and leaves the dependents untouched; you have to remove them yourself.
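
For illustration, the propagation policy can be chosen per delete request with kubectl (a sketch, assuming a Deployment named nginx exists):

```sh
# Background (the default): the Deployment is gone immediately,
# ReplicaSets and Pods are garbage-collected afterwards.
kubectl delete deployment nginx --cascade=background

# Foreground: the command blocks until the ReplicaSets and Pods are deleted first.
kubectl delete deployment nginx --cascade=foreground

# Orphan: only the Deployment is removed; ReplicaSets and Pods keep running.
kubectl delete deployment nginx --cascade=orphan
```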

  • A) Upgrade will fail until PSP resources are removed
  • B) Upgrade will be successful, and PSP resources will be removed automatically
  • C) Upgrade will be successful, and PSP resources will still be accessible after that
  • D) Upgrade will be successful, and PSP resources will become orphan

A cluster upgrade means running new versions of kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etc.: you stop the old versions and start the new ones. All previously created objects remain in the etcd database, even if the new Kubernetes version no longer supports some of them. So, if you upgrade the cluster, the upgrade will be successful, but the removed API resources become orphans in etcd, and you have to connect to etcd and remove them manually.
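
A rough sketch of that manual cleanup, assuming a kubeadm-style etcd, the default /registry key layout, and a hypothetical leftover PSP named my-old-psp (paths and certificate locations differ between clusters):

```sh
# Connection flags for a kubeadm-managed etcd (adjust paths for your cluster).
ETCD_FLAGS="--endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key"

# List leftover PodSecurityPolicy objects still stored in etcd.
ETCDCTL_API=3 etcdctl $ETCD_FLAGS get /registry/podsecuritypolicy --prefix --keys-only

# Delete one orphaned object by its full key.
ETCDCTL_API=3 etcdctl $ETCD_FLAGS del /registry/podsecuritypolicy/my-old-psp
```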

  • A) Keys to tell Kubernetes to wait until resource got deployed completely
  • B) Keys to tell Kubernetes the resource has some dependencies before deployment
  • C) Keys to tell Kubernetes to wait until conditions are met before deleting resource
  • D) Keys to tell Kubernetes to wait for some actions before deploying resource

Finalizers are special keys that tell Kubernetes to wait until certain conditions are met before deleting a resource. You can assign finalizers to Kubernetes objects when you create them, or a controller or Operator may add finalizers to resources to enforce certain guarantees. For example, when you try to delete a PVC, the controller adds a finalizer to the PVC to block the deletion process as long as any Pod uses that PVC's storage. When the condition is met and no one uses the PVC, the controller removes the finalizer, and the resource can be deleted safely.
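
For instance, this is roughly what a protected PVC looks like while in use (a sketch with a hypothetical name; the kubernetes.io/pvc-protection finalizer is added by the controller, not by you):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                         # hypothetical name
  finalizers:
    - kubernetes.io/pvc-protection       # blocks deletion while a Pod still uses the PVC
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```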

  • A) Service which returns actual Pods’ IP addresses
  • B) Service which returns multiple Service CIDR addresses
  • C) Service which has no DNS name
  • D) Service which has no Endpoints resource

A Headless Service is a special Service type with a DNS name but no ClusterIP. When you resolve a Headless Service, you get the actual Pods' IP addresses instead of a ClusterIP. Load balancing for Headless Services relies on DNS round-robin. In some scenarios, such as clustering stateful applications, they can be used to discover the number of active members and their actual IP addresses.
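
A minimal sketch (hypothetical names); the only difference from a regular ClusterIP Service is clusterIP: None:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None          # this is what makes the Service "headless"
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Resolving my-app.default.svc.cluster.local then returns the A records of the matching Pods directly.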

  • A) By deploying more than one replica
  • B) By adding proper PriorityClass to the Pods
  • C) By creating PDB resource for Pods
  • D) All of the above choices

All of the above choices together are needed to make your application resilient to node failure and disruption. By deploying the application with multiple replicas, it can be spread across several worker nodes. By using a proper PriorityClass, your application stays safe if a worker node comes under resource pressure, and by using a PDB, you can enforce the application's high availability during maintenance work such as node upgrades.
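
As a sketch of the third piece, a minimal PodDisruptionBudget (hypothetical names and numbers) that keeps at least 2 replicas running during voluntary disruptions such as node drains:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2          # never voluntarily disrupt below 2 ready Pods
  selector:
    matchLabels:
      app: my-app
```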

  • A) More than 10 worker nodes
  • B) More than 25 worker nodes
  • C) More than 50 worker nodes
  • D) More than 100 worker nodes

If your cluster has more than 50 worker nodes (this is the default value), Kubernetes considers it a large cluster. In large clusters, scheduling decisions, evictions, health checks, etc., follow some additional rules. This threshold can be configured in kube-controller-manager with the --large-cluster-size-threshold option.
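
On a kubeadm cluster, for instance, that would look roughly like this fragment of the kube-controller-manager static Pod manifest (a sketch; the file path and surrounding fields depend on your setup):

```yaml
# Fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        - --large-cluster-size-threshold=50   # default: clusters above this size are "large"
```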

  • A) At least 35% of worker nodes
  • B) At least 45% of worker nodes
  • C) At least 55% of worker nodes
  • D) At least 75% of worker nodes

By default, the value of the --unhealthy-zone-threshold option is 0.55, which means that if at least 55% of the worker nodes in a zone (minimum 3 nodes) are NotReady (unhealthy, down, failed, etc.), the zone is treated as unhealthy. If a zone becomes unhealthy and the cluster is a large cluster, the node eviction rate is reduced; otherwise, the eviction process is stopped to avoid misleading evictions.
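
As a sketch, the corresponding kube-controller-manager argument (assuming the same static Pod manifest layout as above):

```yaml
command:
  - kube-controller-manager
  - --unhealthy-zone-threshold=0.55   # default: a zone is unhealthy when at least 55% of its nodes are NotReady
```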

  • A) 1 node per second
  • B) 1 node per 10 seconds
  • C) 1 node per 100 seconds
  • D) 1 node per 5 minutes

In a normal situation, if a node fails, Kubernetes follows the --node-eviction-rate option, whose default value is 0.1, meaning 1 node per 10 seconds. So, if a couple of nodes fail at the same time, their Pods are evicted at a rate of 1 node per 10 seconds. This option limits the number of simultaneous changes and requests to the cluster.
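
Again as a fragment sketch of the kube-controller-manager arguments, with the rate-to-seconds arithmetic spelled out:

```yaml
command:
  - kube-controller-manager
  - --node-eviction-rate=0.1   # default: 0.1 nodes/second, i.e. 1 node every 10 seconds
```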

  • A) Eviction rate will be changed to 1 node per 10 seconds
  • B) Eviction rate will be changed to 1 node per 100 seconds
  • C) Node controller will be restarted to solve the issue
  • D) The eviction process will be stopped

Normally, the node controller evicts Pods from NotReady (failed) nodes at a rate of 1 node per 10 seconds. If a zone becomes unhealthy based on the defined threshold, the eviction rate is reduced to 1 node per 100 seconds in large clusters (clusters with more than 50 worker nodes). In small clusters, which are mostly not deployed across multiple zones, the threshold effectively applies to the entire cluster, and the node controller stops the eviction process altogether: if 55% of the nodes are down, the cluster obviously lacks the resources to reschedule those workloads, or the downtime is more likely a control-plane (master) issue than a worker-node one.
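
And the matching flag for the reduced rate, as a fragment sketch:

```yaml
command:
  - kube-controller-manager
  - --secondary-node-eviction-rate=0.01   # default: 0.01 nodes/second, i.e. 1 node every 100 seconds
```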

  • A) 10 seconds
  • B) 1 minute
  • C) 5 minutes
  • D) 10 minutes

After a node fails, Kubernetes waits for 5 minutes before deleting Pods from that node. This is the default deletion grace period for failed nodes. You can change this configuration through the --pod-eviction-timeout flag in old versions, or the podEvictionTimeout field in the kube-controller-manager configuration manifest. You can also change this behaviour per Pod using Taint-based Evictions within the Pod manifest.
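
Per Pod, that looks roughly like the following sketch (hypothetical names and image; the 30-second values are just examples overriding the 300-second default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-failover-pod
spec:
  containers:
    - name: app
      image: nginx             # example image
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30    # evict after 30s instead of the default 300s
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30
```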

  • A) Pod security and compliance
  • B) High availability and scaling
  • C) Upgrades and rollbacks
  • D) Application load balancing

The Deployment controller’s goal is to provide a way to handle application upgrades, rollouts, and rollbacks. Under the hood, this controller uses ReplicaSet resources to achieve the desired state, scaling, and high availability of applications. So, the Deployment controller itself has no mission other than upgrades, rollouts, and rollbacks.
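
As a sketch of the rollout-related knobs (hypothetical names, numbers, and image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  revisionHistoryLimit: 5        # old ReplicaSets kept around for rollbacks
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # at most 1 Pod down during an upgrade
      maxSurge: 1                # at most 1 extra Pod during an upgrade
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25      # example image
```

Rolling back is then a matter of `kubectl rollout undo deployment/my-app`.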

  • A) Split Pods into different slices for security
  • B) Reduce the system overhead of updating endpoints
  • C) Provide better observation for the Pods
  • D) Provide a way for cluster multi-tenancy management

When your application has tons of Pods, the legacy Endpoints controller keeps track of all of them in a single object. So, as the number of replicas grows, the legacy Endpoints resource gets bigger and bigger. In a stable situation everything is fine, but if something goes wrong with your application, or you change, scale, or upgrade it, Kubernetes has to update one large object, which creates a huge overhead on Kubernetes components. The newer EndpointSlice controller provides the same capabilities as the Endpoints controller, but it splits the endpoints into chunks of at most 100 endpoints by default to minimize object size and reduce the impact of updates.
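
For reference, a generated EndpointSlice looks roughly like this (a sketch with made-up names and addresses; these objects are created and managed by the controller, not by hand):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-app-abc12                     # hypothetical generated name
  labels:
    kubernetes.io/service-name: my-app   # ties the slice back to its Service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - 10.244.1.17                      # one of up to 100 Pod endpoints per slice
    conditions:
      ready: true
```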

  • A) It automatically restarts the init container if it exits the execution
  • B) It changes init container lifecycle behaviour to live as much as regular containers
  • C) It does not wait to complete to start the next init container
  • D) All of the above answers

This option is part of a new feature in Kubernetes 1.28 called Sidecar Containers. Before this feature, we had problems running sidecars in special situations, like Jobs and CronJobs. The feature allows you to set a restartPolicy per container in initContainers, and the only accepted value is Always, which marks that init container as a sidecar that should live as long as the regular containers. When you use this option, the normal init container behaviour changes: the sidecar init container does not exit, and the next init container does not wait for it to complete (normally, an init container must exit successfully before the next one runs, and the regular containers start only after all init containers have finished). If the sidecar init container exits, it is restarted.
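
A minimal sketch, assuming Kubernetes 1.28+ with the SidecarContainers feature enabled (names and images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper
      image: fluent/fluent-bit:2.1    # example sidecar image
      restartPolicy: Always           # marks this init container as a sidecar
  containers:
    - name: app
      image: nginx:1.25               # example main container
```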

  • A) A kind of toleration which evicts Pods with delay after taints get added
  • B) A kind of eviction which evicts Pods based on a pre-defined scheduling rule
  • C) A kind of toleration which bypasses all taints to prevent Pod evictions
  • D) A kind of eviction which happens after removing the node taint

Taint-based Eviction is a way to tell Kubernetes to evict Pods with some delay after a new taint is added to a node. We mostly use this method of eviction to customize the eviction time after node failures and bypass the default 5-minute eviction timeout.
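
For instance, a toleration like the following fragment sketch (60 seconds is just an example value) keeps a Pod on an unreachable node for 60 seconds before taint-based eviction kicks in:

```yaml
tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60   # evicted 60s after the taint appears (default is 300s)
```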

  • A) A language to develop and maintain Kubernetes core
  • B) A language to develop Kubernetes controllers and operators
  • C) A language to implement Policy-as-Code in Kubernetes
  • D) A language to test Kubernetes before releasing the new version

Common Expression Language (CEL) is an expression language evaluated directly within the Kubernetes API server to implement Policy-as-Code. Before this language was introduced, we had to use external systems like OPA/Rego, Kyverno, etc., to implement Policy as Code. Now, we can implement our Policy-as-Code logic natively in Kubernetes without any external tools.
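
For instance, a ValidatingAdmissionPolicy sketch with a CEL expression (hypothetical policy; the API is GA as admissionregistration.k8s.io/v1 in Kubernetes 1.30, beta before that):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"   # CEL, evaluated in the API server
      message: "Deployments may not have more than 5 replicas."
```

A ValidatingAdmissionPolicyBinding is still needed to put the policy into effect.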

  • A) By adding “immutable” option to the ConfigMap spec
  • B) By installing an additional controller to lock ConfigMap resources
  • C) By adding locks in the etcd database
  • D) By denying update access

Kubernetes natively provides a way to create an immutable ConfigMap: after creation, no one can change its data, and the only way to change it is to delete the current ConfigMap and create a new one. This option also works for Secrets, so we can create immutable Secrets as well.
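
A minimal sketch (hypothetical name and data); note that immutable is a top-level field:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
immutable: true           # data can never be changed after creation
data:
  LOG_LEVEL: info
```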

  • A) Create a ClusterIP Service resource
  • B) Create an ExternalName service resource
  • C) Create a ClusterIP Service without selectors and create Endpoints manually
  • D) Create an Ingress resource and use the Rewrite rule

To do so, we must create a Service without any selectors so that the Service controller doesn't create an Endpoints object for it; after that, we must create an Endpoints resource with the same name as the Service and put our external service's IP addresses in its subsets. With such a configuration, when a Pod sends traffic to the Service DNS name, the traffic is forwarded to the external service's IP addresses.
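
A sketch with made-up names and an example address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:                      # note: no selector, so no Endpoints are auto-created
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10     # the external service's IP (example address)
    ports:
      - port: 5432
```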

  • A) It’s just a definition to clarify this Service is related to this StatefulSet
  • B) The StatefulSet controller automatically adds subdomains to that Service
  • C) The StatefulSet identifies StatefulSet Pods with that Service
  • D) The StatefulSet uses that Service to find the Pods’ order

Normally, when you create a Service, it can be resolved using the DNS pattern <service_name>.<namespace_name>.svc.<cluster_address>, for example, nginx.default.svc.cluster.local. Headless Services do the same thing, but instead of resolving to a ClusterIP address, they return the Pods’ IP addresses. When you provide a Headless Service to a StatefulSet resource, the StatefulSet controller adds subdomains to this Service so that each Pod can be resolved as <statefulset_pod_name>.<service_name>.<namespace_name>.svc.<cluster_address>. For example, if you deploy Redis as a StatefulSet named redis with 3 replicas and assign a Headless Service named redis, the StatefulSet deploys redis-0, redis-1, and redis-2, and each Pod can be resolved individually, e.g., redis-0.redis.default.svc.cluster.local.
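
An abbreviated sketch of that Redis example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None           # headless
  selector:
    app: redis
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis        # ties Pods to the headless Service's DNS subdomain
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7    # example image
          ports:
            - containerPort: 6379
```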

  • A) It is automatically implemented by the StatefulSet resource
  • B) It can be implemented using Ingress resource
  • C) It needs to be implemented from the Application side
  • D) It can be implemented by sessionAffinity in Service

Kubernetes supports ClientIP-based session affinity to help you implement session stickiness for applications. All you need is to create a Service and set sessionAffinity and sessionAffinityConfig to enable this option. Note that each client will be bound to a specific Pod for a specific duration configured by timeoutSeconds.
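
A minimal sketch (hypothetical name; the default timeout is 10800 seconds, i.e., 3 hours):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sticky-app
spec:
  selector:
    app: sticky-app
  ports:
    - port: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600   # stick each client IP to one Pod for 10 minutes
```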

  • A) To authenticate another Pod with TokenReview resource
  • B) To provide a token to another Pod with TokenRequest resource
  • C) To provide identity to External applications
  • D) To provide identity to Pods

The system:auth-delegator role is a ClusterRole that helps deployed applications check the identity of, and authorize, another Pod before giving it access to resources. Kubernetes natively provides a way for microservices to authenticate each other by using the TokenReview and SubjectAccessReview resources.
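
For example, a sketch of granting that ClusterRole to an application's ServiceAccount (hypothetical names):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-service-auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator      # allows creating TokenReviews and SubjectAccessReviews
subjects:
  - kind: ServiceAccount
    name: my-service
    namespace: default
```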
