Unlock the ultimate Docker & Docker Compose cheat sheet packed with essential commands and examples to supercharge your container management! From building images to scaling services, master key Docker Compose tips to boost your productivity. Don’t miss out—streamline your dev workflow and deploy multi-container apps like a pro today! #CentLinux #Linux #Docker
Introduction
If you’ve ever worked in a DevOps environment or dabbled with microservices, you’ve probably come across Docker — the powerhouse of containerization. Docker revolutionized the way developers build, ship, and run applications by allowing them to package everything needed to run an app — from code to dependencies — into a lightweight, portable container. But as applications grew more complex, managing multiple containers manually became a nightmare. That’s where Docker Compose steps in to save the day.
Docker Compose lets you define and manage multi-container applications easily. Instead of manually linking and starting containers, you define everything in a single YAML file, and Compose does the orchestration for you. Whether you’re deploying a LAMP stack, a full-fledged microservices architecture, or just testing a new app locally, Docker Compose simplifies it all.
In this ultimate cheat sheet, we’ll dive deep into Docker and Docker Compose — from installation to advanced orchestration techniques. You’ll learn the essential commands, YAML structures, and best practices to make your container workflows smooth and efficient. Think of this as your quick reference guide and hands-on manual rolled into one.

Understanding Docker Architecture
Before you start spinning up containers, it’s essential to understand what makes Docker tick under the hood. At its core, Docker is built on a client-server architecture that allows developers to build, share, and run applications seamlessly. Let’s break down its key components.
The Docker Client is what you, as a user, interact with. Every time you run a command like docker run or docker build, it communicates with the Docker Daemon (also known as dockerd). The daemon does the heavy lifting — building images, managing containers, handling networks, and maintaining volumes.
Then comes the Docker Registry, which acts as a library or repository where images are stored and retrieved. The default public registry is Docker Hub, but enterprises often set up private registries to store proprietary images securely. When you execute docker pull nginx, Docker fetches that image from the registry.
Docker Images are the blueprints for containers. They’re immutable, layered snapshots of an application’s filesystem and environment. Each image layer represents a modification — like installing dependencies or copying source code — and these layers are cached for efficiency.
Once you launch an image, it becomes a Docker Container, a running instance of that image with its own isolated environment. Containers are lightweight and portable, thanks to kernel-level features like namespaces and cgroups, which separate their resources from the host system.
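Since that isolation is enforced by cgroups, you can cap a container's resources directly from the CLI. A quick sketch (the limits and container name are arbitrary examples):
docker run -d --name capped --memory=256m --cpus=1.5 nginx
docker stats --no-stream capped
The first command starts Nginx with a 256 MB memory ceiling and 1.5 CPUs; the second prints a one-shot snapshot confirming the limits are in effect.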
In summary, Docker’s architecture revolves around four key concepts — client, daemon, image, and container — all working together to provide a seamless containerization experience. Understanding this flow helps you grasp how Docker simplifies deployment and scaling, especially when paired with Compose for orchestration.
Installing Docker and Docker Compose
Getting Docker up and running is the first step toward container mastery. The installation process varies slightly depending on your operating system, but Docker has made it incredibly straightforward.
Installing on Linux
On most Linux distributions, Docker can be installed from Docker's official repository. For example, on Ubuntu (after adding Docker's apt repository), you can run:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
Enable and start Docker automatically on boot:
sudo systemctl enable docker
sudo systemctl start docker
To avoid using sudo with every Docker command, add your user to the docker group:
sudo usermod -aG docker $USER
Log out and back in for the group change to take effect.
Installing on macOS
For macOS users, Docker Desktop is the simplest choice. Download it from the Docker website, install it like any other application, and it includes both Docker Engine and Docker Compose.
Installing on Windows
Windows users can also rely on Docker Desktop, which integrates seamlessly with WSL 2 (Windows Subsystem for Linux). Once installed, you can use Docker commands from PowerShell, CMD, or within WSL.
Installing Docker Compose
In older versions, Docker Compose was a separate binary, but now it’s integrated into the Docker CLI. You can check if it’s installed with:
docker compose version
If you’re using a standalone setup, you can install it manually:
sudo curl -L "https://github.com/docker/compose/releases/download/v2.20.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Once installed, verify:
docker-compose --version
You’re now ready to start containerizing applications with Docker and orchestrating them with Compose.
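Before moving on, a quick way to confirm the whole toolchain works is Docker's own test image:
docker run hello-world
If the daemon is running and your user has access, this pulls a tiny image and prints a confirmation message.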
Essential Docker Commands
Docker is command-line driven, and mastering its commands is the key to container efficiency. Once you get familiar with the basics, working with containers becomes second nature. Here’s a breakdown of essential commands every DevOps engineer or developer should have in their toolbox.
Working with Images
Docker images are the foundation of containers. You can pull, build, tag, and push them with a few simple commands.
Pull an image from Docker Hub
docker pull ubuntu
This downloads the latest Ubuntu image from Docker Hub.
List all available images
docker images
It displays the repository, tag, image ID, creation date, and size.
Build an image from a Dockerfile
docker build -t myapp:latest .
The -t flag tags the image, and the . refers to the current directory.
Remove an image
docker rmi myapp:latest
This command deletes the specified image from your local system.
Push an image to Docker Hub
docker push username/myapp:latest
You’ll need to be logged in with the following command before you can upload your image:
docker login
Working with Containers
Containers are running instances of images. You can start, stop, and manage them easily using these commands.
Run a container
docker run -d -p 8080:80 nginx
The -d flag runs it in detached mode, and -p maps port 8080 on your host to port 80 inside the container.
List running containers
docker ps
Add -a to show all containers, including stopped ones.
Stop and remove a container
docker stop container_id
docker rm container_id
View container logs
docker logs -f container_id
The -f flag streams logs in real time.
Networks and Volumes
Docker networks and volumes help with inter-container communication and persistent storage.
List all networks
docker network ls
Create a custom network
docker network create mynetwork
List all volumes
docker volume ls
Remove unused volumes
docker volume prune
Memorizing these core commands will help you move faster when managing Docker environments. With these tools, you can build, run, stop, and connect containers effortlessly.
Docker Image Management
Docker images are the heart of any containerized application. They define what your container will contain and how it will behave. Efficient image management ensures you maintain clean, lean, and reusable images.
Creating and Building Images
A Docker image is typically built from a Dockerfile, a text file with instructions that define what goes into your image. Here’s a simple example:
FROM ubuntu:20.04
RUN apt update && apt install -y nginx
COPY . /var/www/html
CMD ["nginx", "-g", "daemon off;"]To build this image, run:
docker build -t mynginx:1.0 .Each instruction creates a new layer, which Docker caches. The next time you rebuild the same image, unchanged layers are reused, saving build time.
Tagging and Versioning
When you tag an image, you essentially label it for easier management:
docker tag mynginx:1.0 myrepo/mynginx:latest
Tags like latest, v1.0, or stable make version tracking simple.
Removing Unused Images
Over time, unused images accumulate and consume disk space. Clean them up with:
docker image prune -a
This removes all dangling and unused images.
Inspecting and Exporting
To view detailed metadata:
docker inspect mynginx:1.0
If you want to export an image for backup or transfer:
docker save -o mynginx.tar mynginx:1.0
Docker images make your applications portable, reproducible, and consistent across environments. Proper tagging, pruning, and version management keep your environment clean and organized — critical for any serious DevOps setup.
Container Lifecycle Management
Managing container lifecycles efficiently ensures stability, consistency, and performance across development and production environments. Once you’ve built your image, you’ll frequently start, stop, inspect, and debug containers. Understanding these stages helps you keep your Docker environment clean and predictable.
Starting Containers
The most common operation is starting containers. Use the docker run command to spin up a new container:
docker run -d --name webserver -p 8080:80 nginx
Here’s what happens:
- -d runs the container in detached mode.
- --name webserver assigns a name to your container.
- -p 8080:80 maps the host port 8080 to container port 80.
- nginx is the image being used.
You can confirm it’s running with:
docker ps
This lists all active containers, including their names, ports, and status.
Stopping and Restarting Containers
Sometimes you need to pause or restart a container for maintenance or updates.
Stop a running container
docker stop webserver
Restart a container
docker restart webserver
Pause and unpause a container
docker pause webserver
docker unpause webserver
These lifecycle commands make it simple to manage uptime and apply configuration changes without redeploying entirely.
Inspecting Containers
Inspecting containers helps you dig into configuration details or troubleshoot issues:
docker inspect webserver
You’ll get a detailed JSON output with everything from environment variables to IP addresses.
Accessing Running Containers
Sometimes, you’ll need to jump inside a container to diagnose problems or manually run commands:
docker exec -it webserver bash
This opens an interactive shell session inside the running container. To detach, type exit.
Alternatively, you can attach directly to a container’s terminal using:
docker attach webserver
Unlike exec, attach connects you to the container’s main process — use with caution!
Viewing Logs
Logs are critical for debugging:
docker logs -f webserver
The -f flag streams the logs in real time, so you can monitor your container’s output as it runs.
Proper container lifecycle management ensures that your services remain stable, easy to maintain, and fully observable. It also helps prevent orphaned containers and dangling resources from consuming unnecessary system memory and CPU.
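On that note, two standard commands help reclaim resources from leftover containers:
docker container prune
docker system df
The first removes all stopped containers (after a confirmation prompt), and the second summarizes disk usage by images, containers, and volumes.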
Networking in Docker
Networking is what allows Docker containers to communicate with each other, the host machine, and the outside world. Docker’s flexible networking system lets you create isolated environments, connect containers across hosts, and even define overlay networks for clustered setups.
Default Networks
Docker creates several network types automatically:
- Bridge: The default network for standalone containers. Containers on the same bridge network can talk to each other using container names.
- Host: Removes network isolation, using the host’s networking directly. Useful for performance-critical applications.
- None: Disables networking completely. Ideal for highly isolated workloads.
View available networks with:
docker network ls
Creating a Custom Network
To improve organization and communication between containers, create your own network:
docker network create myapp_network
Then, when running containers, connect them to that network:
docker run -d --name db --network myapp_network mysql
docker run -d --name app --network myapp_network myapp:latest
Now, the app container can access the database container simply by its name (db).
Inspecting Networks
To inspect and troubleshoot network connections:
docker network inspect myapp_network
This shows details about connected containers, subnet configurations, and gateways.
Exposing and Publishing Ports
There’s a difference between exposing and publishing:
- Expose: Documents which port a container listens on for inter-container communication (declared in the Dockerfile with EXPOSE 80); it does not make the port reachable from the host.
- Publish: Makes a container port accessible from the host machine:
docker run -d -p 8080:80 nginx
This maps host port 8080 to the container’s port 80.
Connecting Containers to Multiple Networks
You can connect containers to multiple networks as needed:
docker network connect mysecond_network app
Docker’s networking model provides developers with flexibility and control over connectivity, isolation, and scalability — essential for building distributed systems.
Docker Volumes and Persistent Storage
One of the most common challenges when working with containers is data persistence. By default, when a container stops or is removed, all its internal data is lost. This is where Docker volumes come into play — they provide a way to store data persistently outside of the container’s lifecycle.
What Are Volumes?
A volume is a storage mechanism managed by Docker. It resides on the host system but is controlled entirely by Docker, meaning it can be shared between multiple containers or even reused when containers are destroyed and recreated.
There are three main ways to persist data:
- Volumes – Managed by Docker (recommended for most use cases).
- Bind Mounts – Link a host directory to a container path directly.
- tmpfs Mounts – Temporary in-memory storage (data is lost on container stop). All three are sketched just below.
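A hedged sketch of all three approaches using the --mount syntax (the volume name and host paths here are placeholders):
docker run -d --mount type=volume,source=mydata,target=/data nginx
docker run -d --mount type=bind,source=/srv/site,target=/usr/share/nginx/html nginx
docker run -d --mount type=tmpfs,target=/cache nginx
The first uses a Docker-managed volume, the second binds a host directory into the container, and the third creates in-memory storage that vanishes with the container.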
You can list all volumes on your system with:
docker volume ls
Creating and Managing Volumes
To create a named volume:
docker volume create mydata
Attach it to a container:
docker run -d --name db -v mydata:/var/lib/mysql mysql
This ensures your MySQL data is stored outside the container, meaning it remains intact even if the container is deleted.
To inspect a volume:
docker volume inspect mydata
And to remove unused volumes:
docker volume prune
Mounting Host Directories
If you want to mount a specific host folder:
docker run -v /home/user/appdata:/usr/src/appdata nginx
This setup is particularly useful during development when you want to modify code on the host and see changes reflected instantly in the container.
Best Practices for Persistent Storage
- Use named volumes for data that should survive container updates (a backup example follows this list).
- Use bind mounts only for development, not production.
- Regularly prune unused volumes to save disk space.
- Store sensitive data using Docker secrets or encrypted volumes.
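For the first practice, a common backup pattern (a sketch, assuming a named volume called mydata) uses a throwaway container to archive the volume’s contents:
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine tar czf /backup/mydata-backup.tar.gz -C /data .
The temporary Alpine container mounts the volume and your current directory, writes the archive, and removes itself when done.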
Docker volumes simplify managing persistent storage and ensure data reliability, making them indispensable for databases, logs, and application state management.
Working with Dockerfiles
A Dockerfile is a script containing instructions to build a Docker image. It’s like a recipe for creating your application’s container environment. Each line in the file represents a step that Docker executes sequentially to assemble the final image.
Basic Structure
Here’s a simple example of a Dockerfile:
# Start from an official image
FROM ubuntu:20.04
# Install dependencies
RUN apt-get update && apt-get install -y nginx
# Copy files from host to container
COPY . /var/www/html
# Expose port 80
EXPOSE 80
# Start Nginx when container launches
CMD ["nginx", "-g", "daemon off;"]Common Instructions
- FROM: Specifies the base image. Every Dockerfile starts with this.
- RUN: Executes commands in the shell (e.g., installing packages).
- COPY/ADD: Copies files from the host system into the image.
- WORKDIR: Sets the working directory for subsequent instructions.
- ENV: Sets environment variables.
- EXPOSE: Documents which ports the container listens on.
- CMD/ENTRYPOINT: Defines the default command to run.
Several of these instructions appear together in the sketch below.
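Here is a hedged example combining several of these instructions for a hypothetical Node.js app (the file names and port are placeholders, not from this article):
# Minimal base image keeps the final image small
FROM node:20-alpine
# All subsequent instructions run relative to this directory
WORKDIR /usr/src/app
ENV NODE_ENV=production
# Copy manifests first so dependency layers cache well
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]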
Building the Image
To build an image from your Dockerfile:
docker build -t mynginx:latest .
Docker caches each layer, so rebuilds are faster when parts of the image haven’t changed.
Optimizing Dockerfiles
- Combine multiple RUN commands to reduce image layers.
- Use .dockerignore to exclude unnecessary files (like logs, node_modules).
- Use minimal base images like alpine for smaller and faster builds.
- Always pin versions of dependencies for consistent builds.
Efficient Dockerfile design ensures reproducibility, smaller image sizes, and faster deployments — critical in continuous integration pipelines and large-scale environments.
Introduction to Docker Compose
While Docker handles single-container applications elegantly, real-world projects often rely on multiple containers—like a backend, frontend, and database—all working together. Managing these manually quickly becomes a headache. Enter Docker Compose, a tool that simplifies multi-container orchestration through a single YAML configuration file.
What is Docker Compose?
Docker Compose is a command-line tool that lets you define, configure, and manage multi-container applications. Instead of running several docker run commands, you write a docker-compose.yml file describing how containers interact, what networks they share, and which volumes they use.
For example, if your web application requires Nginx, PHP, and MySQL, Compose can start all three services with a single command:
docker compose up
This approach not only saves time but also ensures environment consistency across development, testing, and production.
Why Use Docker Compose?
Here’s why Compose is a DevOps favorite:
- Simplicity: Define entire stacks in one file.
- Consistency: The same configuration works across machines.
- Isolation: Each stack runs in its own network and volume context.
- Scalability: You can scale individual services easily using --scale.
- Automation: Perfect for CI/CD pipelines and local testing.
YAML Syntax Overview
Compose files use YAML (YAML Ain’t Markup Language) — a human-readable format designed for configuration. Each service, network, and volume is defined as a key-value pair. A minimal example:
version: '3.9'
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: mysql
environment:
MYSQL_ROOT_PASSWORD: root
Here, two services (web and db) are created. Docker Compose automatically builds a network so that the web container can talk to db by its service name.
Compose File Versions
Docker Compose files have used multiple schema versions (2.x and 3.x). Most users today rely on version 3.9, the latest of the 3.x line, which is compatible with modern Docker setups and Swarm mode. Newer Compose releases follow the Compose Specification and treat the version key as optional.
Docker Compose acts as your container conductor, orchestrating multiple services to perform in perfect harmony — much like a symphony. Whether you’re testing microservices locally or deploying on a cluster, Compose keeps everything in sync.
Docker Compose Cheat Sheet
Once you’ve written your docker-compose.yml file, you’ll primarily interact with it using Compose commands. These commands manage everything from starting containers to scaling services.
Starting and Stopping Containers
Start all services:
docker compose up
Add the -d flag to run in detached mode:
docker compose up -d
Stop all services:
docker compose down
This stops and removes the containers and networks created by Compose; add the -v flag to remove its volumes as well.
Inspecting and Managing Containers
List running services:
docker compose ps
View logs:
docker compose logs -f
The -f flag follows logs in real time.
Restart a specific service:
docker compose restart web
Building and Rebuilding
If your Dockerfile or configuration changes:
docker compose build
To rebuild specific services:
docker compose build db
Scaling Services
One of Compose’s biggest strengths is scaling:
docker compose up --scale web=3 -d
This launches three instances of the web service for load balancing or redundancy.
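One caveat: scaling fails if the service pins a fixed host port, since three replicas cannot all bind 8080. A common workaround (sketch) is to publish only the container port so Docker assigns each replica a random host port:
services:
  web:
    image: nginx
    ports:
      - "80"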
Other Handy Commands
Pause/Unpause services:
docker compose pause
docker compose unpause
Remove stopped service containers:
docker compose rm
Shortcut Summary Table
| Command | Description |
|---|---|
| docker compose up | Start all services |
| docker compose down | Stop and remove all services |
| docker compose ps | List services |
| docker compose logs | View logs |
| docker compose build | Build or rebuild services |
| docker compose restart | Restart services |
| docker compose exec | Run commands in a running container |
| docker compose stop | Stop running services |
| docker compose pull | Pull service images from registries |
With these commands, you can manage complex multi-container setups effortlessly — a huge productivity boost for developers and system admins alike.
Writing a Docker Compose File
The heart of Docker Compose lies in the docker-compose.yml file. This YAML configuration file defines how all containers (services) interact, what images they use, how they connect through networks, and where data is stored. Once this file is set up, orchestrating containers becomes as simple as running one command.
Structure of a Compose File
A typical docker-compose.yml file follows a clear hierarchical structure. Here’s a basic example:
version: "3.9"
services:
web:
image: nginx:latest
ports:
- "8080:80"
volumes:
- ./html:/usr/share/nginx/html
depends_on:
- db
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: appdb
volumes:
- db_data:/var/lib/mysql
volumes:
db_data:
Key Sections Explained
- version — Defines the Compose file format version. Use "3.9" for maximum compatibility with current Docker versions.
- services — Lists all containers and their configurations: web defines the frontend container, and db defines the backend database container.
- ports — Maps container ports to host machine ports (host:container format).
- volumes — Defines persistent storage locations.
- depends_on — Specifies dependencies between services, ensuring that dependent services start in the correct order (note that it controls startup order only, not readiness).
- environment — Sets environment variables inside containers.
- networks — Optionally defines communication between services (auto-created if not specified).
Defining Networks and Volumes
You can also define networks explicitly:
networks:
backend:
driver: bridge
And use them within services:
services:
app:
image: myapp:latest
networks:
- backend
This structure ensures clean isolation, making it easy to manage complex systems with many interconnected containers.
Environment Variables
You can externalize configuration using environment files:
services:
db:
image: postgres
env_file:
- .env
And your .env file might look like:
POSTGRES_USER=admin
POSTGRES_PASSWORD=secret
This approach keeps credentials out of your Compose file, improving security and flexibility.
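Beyond env_file, Compose also performs variable substitution: ${VAR} references inside the YAML are filled from the shell environment or the .env file. A small sketch (POSTGRES_TAG is a hypothetical variable, with 16 as the fallback default):
services:
  db:
    image: postgres:${POSTGRES_TAG:-16}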
Docker Compose files are like blueprints — once defined, you can replicate entire environments across different systems without manually setting up containers or networks. It’s declarative infrastructure at its finest.
Docker Compose Examples
Let’s look at a few practical examples that show how powerful Docker Compose can be. These setups cover real-world use cases ranging from basic web hosting to full-stack deployments.
Example 1: Nginx and PHP Setup
A simple setup for hosting a PHP website with Nginx and PHP-FPM:
version: "3.9"
services:
web:
image: nginx:latest
ports:
- "8080:80"
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf
- ./app:/var/www/html
depends_on:
- php
php:
image: php:8.2-fpm
volumes:
- ./app:/var/www/html
Run it with:
docker compose up -d
Now your Nginx server communicates directly with PHP-FPM, and your local code syncs automatically.
Example 2: Multi-Service Web Application
For a more complex application involving a backend API, frontend, and database:
version: "3.9"
services:
frontend:
build: ./frontend
ports:
- "3000:3000"
depends_on:
- backend
backend:
build: ./backend
ports:
- "5000:5000"
environment:
- DATABASE_URL=mysql://root:root@db/appdb
depends_on:
- db
db:
image: mysql:8
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: appdb
volumes:
- db_data:/var/lib/mysql
volumes:
db_data:
You can bring up the entire stack with:
docker compose up -d
Example 3: Using Environment Files
You can store configuration values in a .env file to keep your YAML clean:
version: "3.9"
services:
app:
image: myapp:latest
env_file:
- .env
And in .env:
APP_ENV=production
APP_DEBUG=false
This approach simplifies configuration management and improves security by separating secrets from code.
These examples show how Docker Compose can scale from a single developer project to a complex, production-grade multi-service environment — all with one command.
Debugging and Troubleshooting in Docker
Even the most well-designed containerized applications encounter issues. Docker provides powerful tools to debug containers, inspect configurations, and troubleshoot networking or resource-related problems. Learning how to diagnose these effectively can save hours of frustration during deployment or development.
Inspecting Containers
When something doesn’t behave as expected, your first step should be:
docker inspect <container_name_or_id>
This command displays detailed JSON output containing container metadata—network settings, environment variables, mounted volumes, and resource limits. You can extract specific details with the --format flag:
docker inspect --format='{{.State.Status}}' webserver
This shows whether your container is running, paused, or stopped.
Checking Logs
Logs are essential for debugging application behavior:
docker logs -f webserver
The -f flag streams logs in real time, similar to tail -f. Combine this with timestamps for better traceability:
docker logs --timestamps webserver
If multiple containers are managed by Compose, you can view logs for all of them:
docker compose logs -f
Accessing the Container Shell
Sometimes, you’ll need to “get inside” the container for manual inspection:
docker exec -it webserver bash
Inside, you can test network connectivity, read configuration files, or verify installed dependencies. For Alpine-based containers, replace bash with sh since Bash may not be installed.
Checking Resource Usage
If a container is consuming too many resources or running sluggishly:
docker stats
This live dashboard shows CPU, memory, and network usage per container, helping you spot performance bottlenecks instantly.
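Relatedly, to see which processes are running inside a container without entering it:
docker top webserver
This prints the container’s process list as seen from the host, handy for spotting runaway workers.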
Debugging Networks
To inspect Docker networks:
docker network inspect myapp_network
You can also check connectivity between containers using ping commands inside them:
docker exec -it app ping db
If services can’t communicate, ensure they’re on the same network and ports are exposed properly.
Common Docker Errors and Fixes
1. Error: Bind for 0.0.0.0:80 failed: port is already allocated
Fix: Stop the process occupying the port or change your mapping (e.g., -p 8081:80).
2. Error: Cannot connect to the Docker daemon
Fix: Start Docker service:
sudo systemctl start docker
3. Error: Image not found locally
Fix: Pull it manually:
docker pull image_name
With these techniques, debugging Docker environments becomes straightforward and predictable. A disciplined troubleshooting process—inspect, log, exec, and monitor—can quickly pinpoint most container issues.
Best Practices for Docker and Docker Compose
Whether you’re running a single container or orchestrating dozens with Docker Compose, following best practices ensures efficiency, maintainability, and security. Below are tried-and-tested guidelines used by professionals in production environments.
Optimize Image Size
1. Use smaller base images like alpine instead of full Ubuntu images.
Example:
FROM python:3.12-alpine
2. Combine multiple commands in one RUN statement:
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
3. Regularly prune unused images and containers:
docker system prune -af
Use a .dockerignore File
Just like .gitignore, a .dockerignore file prevents unnecessary files from being copied into your image:
node_modules
.git
*.log
tmp/
This reduces build time and image bloat.
Handle Secrets Securely
Never store passwords or keys in Dockerfiles or Compose files. Instead:
- Use environment files (.env).
- Or leverage Docker secrets (note that docker secret requires Swarm mode):
docker secret create db_pass ./password.txt
Tag Images Clearly
Avoid using the latest tag in production. Always specify versioned tags:
docker build -t myapp:v1.2 .
This ensures consistent deployments across environments.
Use Multi-Stage Builds
Multi-stage builds dramatically reduce image size by separating build and runtime environments:
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o app .
FROM alpine:latest
COPY --from=builder /app/app /usr/local/bin/app
CMD ["app"]Health Checks and Restart Policies
Add health checks to ensure your container is functioning correctly:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:80"]
interval: 30s
timeout: 10s
retries: 3
And define restart policies for resiliency:
restart: always
Use Compose Profiles for Flexibility
Profiles let you selectively run specific parts of your stack (e.g., only databases for testing):
services:
db:
image: postgres
profiles: ["db"]Then run:
docker compose --profile db upBy following these best practices, you’ll build faster, more secure, and more maintainable container workflows — ready for both local development and production deployment.
Advanced Docker Compose Features
Once you’ve mastered the basics of Docker Compose, it’s time to explore its advanced capabilities. These features give you finer control over configurations, deployments, and scaling, allowing you to handle production-grade environments effortlessly.
Using Profiles
Profiles enable you to define optional parts of your Compose configuration. This is perfect for environments where you don’t always need every service—like skipping monitoring or caching layers in local setups.
Here’s an example:
version: "3.9"
services:
app:
image: myapp:latest
ports:
- "8080:8080"
redis:
image: redis:latest
profiles: ["cache"]
grafana:
image: grafana/grafana
profiles: ["monitoring"]By default, only app runs. To include monitoring:
docker compose --profile monitoring up -dProfiles make complex environments modular and flexible—no need to maintain separate YAML files for each use case.
Extending Compose Files
For larger projects, you can extend configurations across multiple files. This helps separate base definitions from environment-specific overrides.
Example:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Your base docker-compose.yml might contain:
services:
app:
image: myapp:latest
ports:
- "8080:8080"While your docker-compose.prod.yml overrides or adds production settings:
services:
app:
environment:
- APP_ENV=production
deploy:
replicas: 3
This layering approach allows easy transitions between development, staging, and production configurations without duplicating entire files.
Compose Deploy Section (Swarm Mode)
If you’re using Docker Swarm, the deploy key in Compose files unlocks orchestration features like scaling, placement, and rolling updates:
deploy:
replicas: 3
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 10s
These settings make deployments more resilient, automatically recovering failed containers and performing controlled rollouts.
Environment and Secrets Management
Instead of hardcoding sensitive data, store it securely:
secrets:
db_password:
file: ./secrets/db_pass.txt
services:
db:
image: postgres
secrets:
- db_password
Docker mounts the secret inside the container at /run/secrets/db_password, keeping your credentials out of logs and images.
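Many official images can consume such secrets directly. The official Postgres image, for instance, supports *_FILE variants of its environment variables; a sketch building on the example above:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password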
Deploying with External Networks
You can connect Compose-managed containers to pre-existing Docker networks:
networks:
default:
external:
name: production_network
This feature is invaluable when multiple Compose stacks must communicate securely across shared infrastructure.
Auto-Recreate and Rebuild on Change
When you modify Dockerfiles or configurations, simply run:
docker compose up -d --build
It automatically detects changes and rebuilds only the affected services — no need to tear down the entire environment.
Parallel Service Startup
Compose intelligently starts dependent services in the right order using the depends_on directive. Combine this with health checks for even greater control:
depends_on:
db:
condition: service_healthy
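For service_healthy to work, the awaited service must define its own healthcheck. A minimal sketch, assuming a Postgres db service (the probe command and timings are illustrative):
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
Exporting Configurations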
To verify what Compose interprets from your YAML files:
docker compose config
This outputs the full, merged configuration — ideal for debugging multi-file setups.
With these advanced features, Docker Compose becomes a powerful orchestration layer—capable of managing everything from lightweight local environments to distributed production systems.
Docker Cheat Sheet PDF
The official Docker cheat sheet is available in the Docker documentation. Feel free to download and print it.
Conclusion
Docker and Docker Compose together form one of the most powerful duos in modern DevOps and cloud-native development. Docker simplifies containerization—making your applications portable, reproducible, and consistent across any environment. Docker Compose, on the other hand, streamlines orchestration—allowing you to define, deploy, and scale multi-container applications effortlessly.
By mastering Docker’s commands, understanding its architecture, and leveraging Compose’s YAML-driven configurations, you can build highly maintainable and scalable setups that mirror production environments locally. With best practices like image optimization, secrets management, and multi-stage builds, you’ll also ensure your infrastructure remains efficient and secure.
Think of Docker and Compose as your Swiss Army knife for application delivery—reliable, modular, and incredibly flexible. Whether you’re deploying microservices, web apps, or CI/CD pipelines, this cheat sheet gives you the quick reference and actionable knowledge you need to orchestrate containers like a pro.
Struggling with AWS or Linux server issues? I specialize in configuration, troubleshooting, and security to keep your systems performing at their best. Check out my Freelancer profile for details.
FAQs – Docker & Docker Compose Cheat Sheet
1. What’s the difference between Docker and Docker Compose?
Docker is a platform for building and running containers, while Docker Compose is a tool for defining and managing multi-container applications using a YAML file.
2. Can I use Docker Compose in production?
Yes. While Compose was initially designed for development, it’s often used in production for lightweight setups. For larger clusters, consider using Docker Swarm or Kubernetes.
3. How do I automatically restart containers on failure?
Add this to your Compose file:
restart: always
This ensures containers automatically restart if they crash or the Docker daemon restarts.
4. How do I connect containers from different Compose projects?
Create a shared Docker network and declare it as an external network in each project’s Compose file:
networks:
default:
external:
name: shared_network
5. How can I reduce Docker image size?
Use minimal base images (like alpine), multi-stage builds, and .dockerignore files to exclude unnecessary files.
Recommended Courses
If you’re serious about mastering containerization, Docker Mastery: with Kubernetes + Swarm from a Docker Captain by Bret Fisher is one of the best courses available. Taught by a seasoned Docker Captain, this course takes you from Docker fundamentals all the way to advanced topics like Kubernetes and Swarm orchestration.
Whether you’re a system administrator, developer, or DevOps enthusiast, this hands-on training will give you the real-world skills you need to excel in modern cloud environments. It’s a must-have investment in your career if you want to stay ahead in the competitive world of DevOps and cloud computing.
Disclaimer: This post contains affiliate links. If you purchase through these links, I may earn a small commission at no extra cost to you. This helps support my blog and allows me to continue creating valuable content for you.
