Docker has become an essential skill for DevOps engineers, backend developers, and cloud infrastructure professionals. Whether you’re preparing for your first interview or advancing in your career, mastering Docker concepts is crucial. This guide covers 35 carefully selected interview questions spanning basic to advanced levels, designed to help you succeed in technical interviews.
Basic Docker Concepts
1. What is Docker?
Docker is a containerization platform that packages applications and their dependencies into isolated, lightweight containers. These containers can run consistently across different environments, from development machines to production servers. Docker uses images (blueprints) to create containers (runtime instances), enabling efficient resource utilization and simplified deployment workflows.
2. What is a Docker Container?
A Docker container is a lightweight, standalone, executable package that contains everything needed to run an application—including the code, runtime, system tools, libraries, and settings. Containers are runtime instances created from Docker images and provide isolated environments where applications run independently of the host system.
3. What is a Docker Image?
A Docker image is a read-only template or blueprint used to create Docker containers. It contains all the instructions and dependencies needed to build a container, including the application code, runtime environment, system libraries, and configuration files. Images are built layer-by-layer and can be stored in registries for sharing and reuse.
4. What is a Dockerfile?
A Dockerfile is a script containing a series of instructions for building a Docker image. Each command in the Dockerfile creates a new layer in the image. Here’s a basic example:
FROM python:3.9-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
The FROM instruction specifies the base image, WORKDIR sets the working directory, COPY transfers files, RUN executes commands, EXPOSE declares ports, and CMD defines the default command when the container starts.
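Assuming the example above sits in a project directory alongside app.py and requirements.txt (the file names carried over from the Dockerfile), a typical build-and-run sequence might look like:

```shell
# Build an image tagged "myapp" from the Dockerfile in the current directory
docker build -t myapp .

# Start a container from it, mapping host port 5000 to the exposed container port
docker run -d -p 5000:5000 --name myapp-container myapp
```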
5. What are the Three Main Docker Components?
Docker architecture consists of three primary components:
- Docker Client: The command-line interface that allows users to issue commands to build and run containers, establishing communication with the Docker Host.
- Docker Host: Contains the Docker daemon and hosts containers and their associated images. The daemon manages container lifecycle and communicates with registries.
- Docker Registry: A storage system for Docker images. Public registries like Docker Hub store images that can be accessed globally, while private registries store proprietary images within organizations.
6. How do you Check the Docker Client Version?
Use the command: docker version. This displays information about both the Docker client and server versions installed on your system.
7. What Command Lists the Status of All Docker Containers?
The command docker ps -a lists all containers including those that are running and stopped. Using docker ps without the -a flag shows only running containers.
8. When Would You Lose Data Stored in a Container?
Data stored in a container's writable layer persists across stops and restarts, as long as the container itself exists. You lose that data when the container is removed (with docker rm, or automatically if it was started with the --rm flag). To retain data beyond container deletion, use Docker volumes or bind mounts to store data outside the container filesystem.
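As a sketch (image names, volume names, and paths here are illustrative), a named volume and a bind mount look like this on the command line:

```shell
# Named volume "pgdata": its data survives even if the container is removed
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:15

# Bind mount: a host directory is mapped directly into the container
docker run -d --name web -v "$(pwd)/site:/usr/share/nginx/html" nginx
```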
9. What is a Docker Registry?
A Docker Registry is a storage and delivery system for Docker images. Public registries like Docker Hub allow anyone to push and pull images, making them ideal for open-source projects. Private registries store images within an organization's infrastructure, maintaining security and control over proprietary applications.
10. What is the Difference Between Docker Images and Layers?
A Docker image is the complete, immutable blueprint that contains all instructions to create a container. Layers are individual components that make up an image. Each instruction in a Dockerfile creates a new layer, and these layers stack on top of each other to form the final image. This layered architecture enables efficient storage and faster builds through caching.
Intermediate Docker Concepts
11. How do you Create a Docker Network and Why Would You Need One?
Create a Docker network using: docker network create [OPTIONS] NETWORK. Networks are essential for enabling communication between containers running different services. Key benefits include flexible network models, automatic service discovery, security through isolation, and dynamic scaling capabilities. Networks allow you to decouple services while maintaining reliable inter-container communication.
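A minimal sketch (the container and network names are made up for illustration):

```shell
# Create a user-defined bridge network
docker network create app-net

# Attach two containers to it
docker run -d --name db --network app-net postgres:15
docker run -d --name api --network app-net my-api-image

# On a user-defined network, "api" can reach the database at the
# hostname "db" via Docker's built-in DNS-based service discovery
```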
12. What is the Difference Between COPY and ADD Commands in a Dockerfile?
Both COPY and ADD transfer files from the host to the container. The primary difference is that ADD has additional functionality—it can extract tar archives automatically and fetch remote files from URLs. COPY is considered the best practice for simple file transfers because it’s more transparent and predictable. Use ADD only when you specifically need its advanced features.
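For example, in a Dockerfile (file names are illustrative):

```dockerfile
# COPY: plain, predictable file transfer (preferred)
COPY requirements.txt /app/

# ADD: also auto-extracts local tar archives into the destination
ADD vendor.tar.gz /app/vendor/
```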
13. What is the Difference Between CMD and ENTRYPOINT in a Dockerfile?
CMD provides the default command and arguments when a container starts, but these can be overridden from the command line. ENTRYPOINT specifies the container’s main executable and is harder to override. In practice, ENTRYPOINT is often used with CMD, where ENTRYPOINT defines the executable and CMD provides default arguments.
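A common pattern, sketched with the hypothetical app.py from the earlier example:

```dockerfile
# The fixed executable for this image
ENTRYPOINT ["python", "app.py"]

# Default arguments, easily overridden at run time
CMD ["--port", "5000"]
```

Running docker run myapp --port 8080 replaces only the CMD part, so the container still executes python app.py, just with different arguments.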
14. How do you Transfer a Docker Image Between Machines Without Using a Repository?
You can save and load images directly between machines using tar files. Save an image with: docker save image-name > image.tar. Transfer the tar file to another machine and load it with: docker load < image.tar. This method avoids the need for public or private registries when moving images between systems.
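Sketched end to end (the image name and the host remote-host are placeholders):

```shell
# Save the image to a tar archive
docker save -o myapp.tar myapp:latest

# Copy it to the target machine and load it there
scp myapp.tar user@remote-host:/tmp/
ssh user@remote-host "docker load -i /tmp/myapp.tar"
```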
15. What is Build Cache in Docker?
Docker uses build cache to speed up image construction. When building an image, Docker reuses layers from previous builds if the instructions and source files haven't changed. This significantly reduces build times for iterative development. You can bypass cache using the --no-cache flag if you need a completely fresh build.
16. What is the Difference Between 'docker run' and 'docker create'?
docker run combines image pulling (if needed), container creation, and container startup into a single command. docker create only creates a container from an image but doesn't start it. You must use docker start to begin execution. Use docker create when you want to set up a container for later use.
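For instance (container name illustrative):

```shell
# Creates the container but does not start it
docker create --name web -p 8080:80 nginx

# Starts the previously created container
docker start web
```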
17. What is the Difference Between "expose" and "publish" in Docker?
EXPOSE in a Dockerfile documents which ports the container listens on, but doesn't actually open them. Publishing ports using the -p flag in docker run actually maps container ports to host ports, making them accessible to external traffic. Publishing is required for external access; EXPOSE is documentation.
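A sketch, assuming an image (here called myapp) whose Dockerfile EXPOSEs port 5000:

```shell
# Map host port 8080 to container port 5000 (required for external access)
docker run -d -p 8080:5000 myapp

# Or publish every EXPOSEd port to a random high host port
docker run -d -P myapp
```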
18. What is the Difference Between a Repository and a Registry?
A registry is a service that stores and serves Docker images (like Docker Hub or a private registry server). A repository is a collection of related images within a registry, typically organized by name with different tags representing versions. For example, within Docker Hub registry, there might be multiple repositories like "nginx" with tags "latest," "1.19," etc.
19. How do you Stop a Running Container?
Use the command: docker stop container-id-or-name. This gracefully shuts down the container by sending a SIGTERM signal, allowing processes to clean up. If the container doesn't stop within the timeout period (10 seconds by default), Docker sends SIGKILL to force it to stop.
20. How do you Kill a Container?
Use the command: docker kill container-id-or-name. This immediately terminates the container by sending a SIGKILL signal, bypassing any graceful shutdown procedures. Use docker kill when a container is unresponsive to docker stop.
21. How do you Enter a Running Container?
Execute a command inside a running container using: docker exec -it container-id-or-name /bin/bash. The -i flag keeps stdin open for interaction, and -t allocates a pseudo-terminal. This is useful for debugging, inspecting files, or running administrative tasks without stopping the container.
22. What Command is Used to Remove Stopped Containers and Unused Resources?
The prune commands remove unused resources: docker container prune removes all stopped containers, docker image prune removes dangling images, and docker system prune removes stopped containers, unused networks, dangling images, and build cache in one pass.
23. What is an Orphan Volume and How do you Remove It?
An orphan (dangling) volume is a Docker volume no longer attached to any container, typically left behind when containers are deleted without removing their associated volumes. Find orphan volumes with: docker volume ls -f dangling=true. Remove one with: docker volume rm volume-name, or remove them in bulk with: docker volume prune (in recent Docker versions this removes only anonymous unused volumes unless you add the -a flag).
24. What is Docker Compose and When Would You Use It?
Docker Compose is a tool for defining and running multi-container Docker applications. You write a YAML file describing all services, networks, and volumes, then start everything with a single command. It's ideal for development environments, testing, and production deployments involving multiple interconnected containers.
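A minimal docker-compose.yml sketch for a web app with a database (service, image, and volume names are illustrative):

```yaml
services:
  web:
    build: .
    ports:
      - "8000:5000"
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Running docker compose up -d builds and starts both services together; docker compose down stops and removes them.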
25. How do you Control the Startup Order of Services in Docker Compose?
While Docker Compose doesn't enforce readiness ordering by itself, you can use the depends_on option to specify service dependencies. By default, depends_on only ensures one service starts before another, not that it's ready to accept traffic. For readiness, define a healthcheck for the dependency and use depends_on with condition: service_healthy so dependent services wait until the check passes.
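A sketch of the health-check pattern in Compose (service names illustrative):

```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
```

Here api will not start until db's health check has passed.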
Advanced Docker Concepts
26. How do you Secure Sensitive Data Inside Docker Containers?
Several strategies protect sensitive data in production Docker environments:
- Use Docker secrets for sensitive data in swarm mode.
- Run containers with limited privileges and a non-root user in your Dockerfile.
- Implement kernel security features like AppArmor or SELinux.
- Disable SSH access and use only necessary network ports.
- Enable TLS encryption for inter-container communication.
- Scan images for vulnerabilities using tools like Trivy.
- Limit resource usage (CPU, memory) to prevent resource exhaustion attacks.
- Use namespace remapping to isolate container users from host users.
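For the non-root-user point above, a Dockerfile sketch (user, group, and file names are illustrative):

```dockerfile
FROM python:3.9-slim

# Create an unprivileged user and group
RUN groupadd -r app && useradd -r -g app app

WORKDIR /app
COPY --chown=app:app . .
RUN pip install -r requirements.txt

# Drop root privileges for the running process
USER app
CMD ["python", "app.py"]
```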
27. What is the Default CPU Limit Set for a Container?
By default, Docker containers have unlimited CPU access—they can use all available CPU cores. You can limit CPU usage with the --cpus flag (e.g., docker run --cpus=0.5 limits to half a CPU) or --cpu-shares for relative priority allocation among containers.
28. When you Limit Memory for a Container, Does it Reserve Memory?
Memory limits set with -m or --memory define the maximum memory a container can use; they do not reserve memory, and the kernel can reclaim unused memory from containers. The --memory-reservation flag sets a soft limit instead: the container may use more than the reservation when the host has memory to spare, but is pushed back toward it under memory contention.
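For example (image name and values illustrative):

```shell
# Hard cap at 512 MB, soft reservation of 256 MB
docker run -d -m 512m --memory-reservation=256m myapp
```

The container can grow toward 512 MB while host memory is plentiful; under memory pressure it is reclaimed back toward 256 MB.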
29. How is Docker Different from Virtual Machines?
Docker containers are lightweight and share the host operating system kernel, enabling fast startup (milliseconds) and efficient resource usage. Virtual machines include entire operating systems, consuming more disk space and startup time (seconds). Containers provide faster deployment and better resource density, while VMs offer stronger isolation and support heterogeneous OS requirements. Many production systems use both technologies complementarily.
30. Is it Possible to Generate a Dockerfile from an Existing Image?
There's no native Docker command to extract a Dockerfile from an image, but you can reverse-engineer it by inspecting the image history using: docker history image-name (add --no-trunc to see the full creation commands). This shows all layers and the commands that produced them. However, the original Dockerfile may not be exactly recoverable, especially if the image was built from multiple sources or includes undocumented changes.
31. What is the ONBUILD Instruction in a Dockerfile?
ONBUILD registers instructions that execute when another image uses this image as its base. For example, a base application image might have: ONBUILD COPY requirements.txt /app/. When a child image inherits from it, that COPY command automatically executes. This is useful for creating reusable base images that standardize build processes across multiple derived images.
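A sketch of the pattern (the image name my-python-base is made up):

```dockerfile
# Base image, published as e.g. my-python-base
FROM python:3.9-slim
WORKDIR /app
ONBUILD COPY requirements.txt /app/
ONBUILD RUN pip install -r /app/requirements.txt
```

A child image that starts with FROM my-python-base triggers both ONBUILD instructions automatically at the start of its own build.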
32. How does Docker Run Containers on Non-Linux Systems?
Docker runs containers inside a lightweight Linux virtual machine on macOS and Windows. Docker Desktop includes a virtualization backend (Hyper-V or WSL 2 on Windows, the Apple Virtualization framework on macOS) running a minimal Linux distribution. Containers execute inside this Linux VM, and the Docker CLI on the host communicates with the daemon running in the VM. This approach provides near-native container performance while maintaining cross-platform compatibility.
33. What are Key Limitations of Containers Compared to Virtual Machines?
Containers have weaker isolation than VMs—they share the host kernel and can potentially compromise each other through kernel vulnerabilities. They require the host OS to be compatible with container applications, limiting OS heterogeneity. Persistent data management is more complex, and monitoring container resource usage requires different tools than VM monitoring. Security compliance may require the stronger isolation that VMs provide.
34. How do you Use Docker with Multiple Environments?
Manage multiple environments by using environment variables, configuration files, and Docker Compose override files. Define a base docker-compose.yml with shared services, then create environment-specific overrides like docker-compose.prod.yml. Inject environment-specific variables using the .env file or --env-file flag. This approach ensures consistent container behavior across development, staging, and production while managing environment-specific configurations.
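A typical invocation under this layout (file names follow the convention described above):

```shell
# Base config plus production overrides, with production env vars
docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.prod up -d
```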
35. What Steps Would You Take to Secure Containers in a Production Environment?
Implement a multi-layered security strategy:
- Use minimal base images (distroless images with only essential packages).
- Run containers as non-root users.
- Scan images for vulnerabilities before deployment.
- Implement network policies to restrict inter-container communication.
- Use read-only filesystems where possible and enable audit logging.
- Regularly update base images and dependencies.
- Set resource limits to prevent denial-of-service attacks.
- Use secrets management for sensitive credentials.
- Enable container runtime security monitoring.
At Atlassian, teams implement similar controls to maintain secure containerized application deployments.
Key Takeaways for Docker Interview Success
Docker interviews evaluate both conceptual understanding and practical experience. As you prepare, focus on:
- Understanding Core Concepts: Master the relationships between images, containers, registries, and Dockerfiles. Companies like Zoho emphasize candidates who can explain "why" Docker works the way it does, not just "how" to use commands.
- Practical Command Proficiency: Be comfortable with essential commands like docker run, docker build, docker ps, and networking commands. Practice creating Dockerfiles and running multi-container applications.
- Real-World Scenarios: Prepare examples from your experience—projects you've containerized, optimization challenges you've solved, or deployment issues you've debugged. Be ready to discuss how Docker improved your development workflow.
- Security and Best Practices: Understand security implications of containerization, including privilege isolation, image scanning, and secrets management. Interviewers at companies like Salesforce expect candidates to consider production security requirements.
- Performance Optimization: Discuss strategies for reducing image sizes, improving build times through layer caching, and efficient resource allocation for containers.
Docker expertise is highly valued across the industry, from startups like Swiggy building scaled platforms to established enterprises managing complex infrastructures. By thoroughly understanding these interview questions and related concepts, you'll be well-prepared to demonstrate your Docker competence and advance your career in containerization and DevOps.