Prepare for your Kubernetes interview with these 30 carefully curated questions and answers. This guide progresses from basic concepts for freshers to advanced scenarios for experienced professionals with 1-6 years of experience. Master Kubernetes architecture, deployments, networking, security, and troubleshooting to excel in your next role at companies like Amazon, Zoho, or Atlassian.
Basic Kubernetes Questions (Freshers)
1. What is Kubernetes?
Kubernetes is an open-source platform for automating deployment, scaling, and management of containerized applications across clusters of hosts. It provides container orchestration features like scheduling, self-healing, and load balancing.[1][3]
2. Why is Kubernetes popular today?
Kubernetes is popular due to its scalability, self-healing capabilities, automated rollouts and rollbacks, and strong community support. It simplifies managing containerized workloads at scale.[1][4]
3. Describe the Kubernetes architecture.
Kubernetes architecture includes a Control Plane (API Server, etcd, Scheduler, Controller Manager) for cluster management and Worker Nodes (Kubelet, Container Runtime, Kube-proxy) for running pods. The Control Plane maintains desired state, while nodes execute workloads.[1][3]
4. What is the role of etcd in Kubernetes?
etcd is a distributed key-value store that serves as the single source of truth for all cluster data. It stores the cluster state and ensures data consistency across components.[1][3]
5. What is a Pod in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes, consisting of one or more containers that share storage and network resources. Pods are ephemeral and scheduled together on nodes.[2][3]
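A minimal Pod manifest might look like the following (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # illustrative name
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.27    # any container image works here
    ports:
    - containerPort: 80
```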
6. What is the difference between a Pod and a container?
A container runs a single application instance, while a Pod can contain multiple containers that need to share resources like network namespace and storage volumes.[3]
7. What are Kubernetes Services?
Kubernetes Services provide a stable IP address and DNS name to access a set of pods, enabling load balancing and service discovery. Types include ClusterIP, NodePort, and LoadBalancer.[2]
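A simple ClusterIP Service selecting pods by label could be written as (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP        # default; use NodePort/LoadBalancer for external access
  selector:
    app: web             # routes traffic to pods carrying this label
  ports:
  - port: 80             # port the Service exposes
    targetPort: 80       # port the container listens on
```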
8. What is a Deployment in Kubernetes?
A Deployment manages a set of identical pods through ReplicaSets, handling updates, scaling, and rollbacks declaratively. It ensures the desired number of replicas is always running.[5]
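A basic Deployment manifest might look like this (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3                 # desired number of pod replicas
  selector:
    matchLabels:
      app: web                # must match the pod template labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
```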
9. What is a ReplicaSet?
A ReplicaSet ensures a specified number of pod replicas are running at any time. It is used by Deployments to maintain desired pod counts and self-heal failures.[5]
10. What are Namespaces in Kubernetes?
Namespaces provide a way to divide cluster resources among multiple users or teams, enabling logical isolation of pods, services, and other objects within the same cluster.[2]
Intermediate Kubernetes Questions (1-3 Years Experience)
11. Explain the Kubernetes networking model.
Kubernetes assigns a unique IP to each pod, and every pod can reach every other pod without NAT. Services provide stable endpoints, Ingress manages external HTTP/HTTPS traffic, and Network Policies control traffic flow.[1][2]
12. What is Kube-proxy and how does it work?
Kube-proxy runs on each node and maintains the network rules that implement Service discovery and load balancing. It supports iptables and IPVS modes for routing traffic to backend pods (the legacy userspace mode has been removed in recent releases).[3]
13. What are Persistent Volumes (PV) and Persistent Volume Claims (PVC)?
Persistent Volumes provide durable storage independent of pods, while PVCs are requests for storage by users. PVCs bind to matching PVs for stateful applications.[5]
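A PVC requesting storage might look like this (the storage class name is an assumption and depends on the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard   # assumed class; varies per cluster
```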
14. How does the Kubernetes Scheduler work?
The Scheduler assigns pods to nodes through Filtering (predicates like resource availability, taints), Scoring (priorities), and Binding phases, selecting the best-fit node.[3]
15. What is Horizontal Pod Autoscaler (HPA)?
HPA automatically scales the number of pods in a Deployment based on CPU/memory utilization or custom metrics, ensuring optimal resource usage under varying loads.[2]
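An HPA targeting a Deployment could be sketched as follows (names and thresholds are illustrative; requires a working metrics server):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deploy           # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out above 70% average CPU
```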
16. Explain rolling updates in Kubernetes.
Rolling updates gradually replace old pods with new ones using Deployment strategies like maxUnavailable=1 to ensure zero downtime during application upgrades.[5]
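The rolling-update behavior is configured in the Deployment's strategy block, for example:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # at most one pod below the desired count during the update
    maxSurge: 1         # at most one extra pod above the desired count
```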
17. What are ConfigMaps and Secrets?
ConfigMaps store non-sensitive configuration data for pods, while Secrets handle sensitive data like passwords and tokens. Both can be mounted as volumes or exposed as environment variables; note that Secrets are only base64-encoded, not encrypted, unless encryption at rest is enabled.[1]
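A minimal ConfigMap and Secret side by side (names and values are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:                # plain values here; stored base64-encoded
  DB_PASSWORD: "example"   # placeholder; never commit real secrets
```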
18. What is a Headless Service?
A Headless Service (clusterIP: None) does not allocate a cluster IP or load-balance traffic; instead, DNS returns the individual pod IPs, which is useful for stateful applications that need direct pod discovery.[4]
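A headless Service differs from a normal one only in the clusterIP field (names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None    # headless: DNS resolves to individual pod IPs
  selector:
    app: db
  ports:
  - port: 5432
```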
19. How do you expose a Kubernetes application externally?
Use NodePort or LoadBalancer Services for external access, or Ingress resources with an Ingress Controller to manage HTTP/HTTPS traffic routing.[2]
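An Ingress routing HTTP traffic to a backend Service might be sketched as (host, class, and Service name are assumptions; requires an Ingress Controller to be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx      # assumes an NGINX Ingress Controller
  rules:
  - host: app.example.com      # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc      # hypothetical Service name
            port:
              number: 80
```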
20. What are Liveness and Readiness Probes?
Liveness probes check whether a container is still healthy; the kubelet restarts a container whose liveness probe fails. Readiness probes determine whether a pod is ready to receive traffic; a pod failing readiness is removed from Service endpoints.[1]
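Both probes are declared per container; the paths and port below are assumed application endpoints:

```yaml
# fragment of a container spec
livenessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080
  initialDelaySeconds: 10   # give the app time to start
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready            # assumed readiness endpoint
    port: 8080
  periodSeconds: 5
```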
Advanced Kubernetes Questions (3-6 Years Experience)
21. What are Taints and Tolerations?
Taints repel pods from nodes unless pods have matching Tolerations. Used for node scheduling control, like dedicating nodes for specific workloads.[1]
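For example, after tainting a node with kubectl taint nodes node1 dedicated=gpu:NoSchedule (node name and key are illustrative), only pods carrying a matching toleration can land on it:

```yaml
# fragment of a pod spec
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "gpu"
  effect: "NoSchedule"   # must match the taint's effect
```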
22. Explain Role-Based Access Control (RBAC) in Kubernetes.
RBAC uses Roles/ClusterRoles and RoleBindings/ClusterRoleBindings to grant permissions on resources, enforcing least-privilege access within namespaces or cluster-wide.[1][6]
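A namespaced Role granting read-only access to pods, bound to a hypothetical user, could look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev               # illustrative namespace
rules:
- apiGroups: [""]              # "" = core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```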
23. What are Custom Resource Definitions (CRDs)?
CRDs extend the Kubernetes API with custom resource types, allowing users to define new objects for operators or domain-specific workloads.[5]
24. How do you implement high availability in Kubernetes?
Use multi-node control planes, Pod Anti-Affinity for workload distribution, etcd clustering, and self-healing with probes and autoscaling for resilience.[1]
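Pod Anti-Affinity spreading replicas across nodes can be sketched as a pod-spec fragment (the label is illustrative):

```yaml
# fragment of a pod spec
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web                           # avoid co-locating same-app replicas
      topologyKey: kubernetes.io/hostname    # spread across distinct nodes
```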
25. What are Kubernetes security best practices?
Implement RBAC, Network Policies, Pod Security Standards, Secrets management, image scanning, and non-root containers to secure clusters.[1]
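A hardened container securityContext, as one concrete example of these practices (values are typical, not mandated):

```yaml
# fragment of a container spec
securityContext:
  runAsNonRoot: true               # refuse to start as root
  runAsUser: 1000                  # illustrative non-root UID
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]                  # drop all Linux capabilities
```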
26. How do you backup and restore a Kubernetes cluster?
Back up etcd with periodic snapshots for cluster state, and use tools such as Velero for Persistent Volume data. To restore, stop the API server, restore the etcd snapshot, and restart the control-plane components.[3]
27. Scenario: A Deployment at Paytm is not scaling as expected. What do you check?
Verify HPA configuration, resource requests/limits, metrics server status, and pod events with kubectl describe deployment. Check for quota limits.[2]
28. Scenario: At Salesforce, a pod is in CrashLoopBackOff. How do you debug?
Inspect the container's logs and the pod's events, then check for resource throttling (e.g. OOMKilled), failing liveness probes, and misconfigured commands or images:[1][2]
kubectl logs <pod-name> --previous
kubectl describe pod <pod-name>
29. Scenario: Pods at Swiggy are not reachable via Service. Troubleshoot.
Check Service selectors match pod labels, endpoint status with kubectl get endpoints, network policies, and pod readiness. Verify Kube-proxy status.[2]
30. Scenario: Design a HA cluster for Oracle workloads with increased traffic.
Deploy multi-master control plane, use Cluster Autoscaler, HPA for pods, Pod Anti-Affinity, and rolling updates with maxUnavailable=1 for zero-downtime scaling.[1][6]