What is a Container?
A container packages an application along with everything it needs to run (code, libraries, configuration) into a single bundle.
VM vs Container
Virtual Machine (VM)
Runs a full guest OS on a hypervisor, making it heavy and slow to start
Container
Shares the host OS kernel, making it lightweight and fast to start
Docker Core Concepts
Image
The "blueprint" of a container. Defined by a Dockerfile, immutable once built.
Container
A running "instance" of an image. You can run many containers from one image; a container's writable data is lost when it is deleted (use volumes to persist it).
Registry
A "repository" for storing and sharing images. Docker Hub, GCR, GHCR, etc.
Why use containers?
- Environment consistency: Dev/test/production environments are identical
- Fast deployment: Just download the image and run
- Isolation: Runs independently without conflicting with other apps
- Scalability: Run multiple containers from the same image
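The scalability point above can be sketched with plain `docker run` — `my-api` is a placeholder image name, and each container gets its own host port (this requires a running Docker daemon):

```shell
# Two independent containers from the same image, on different host ports
docker run -d -p 8001:8000 --name api-1 my-api:latest
docker run -d -p 8002:8000 --name api-2 my-api:latest
# A load balancer (or Compose/Kubernetes) can then spread traffic across them
docker ps --filter "name=api-"
```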
Dockerfile Basics
A Dockerfile is a "recipe" for building images. Each instruction stacks a layer to create the final image.
Basic Instructions
| Instruction | Description | Example |
|---|---|---|
| FROM | Specify base image | FROM python:3.11-slim |
| WORKDIR | Set working directory | WORKDIR /app |
| COPY | Copy files | COPY . . |
| RUN | Execute command at build time | RUN pip install -r requirements.txt |
| EXPOSE | Document the listening port | EXPOSE 8000 |
| CMD | Default command at container start | CMD ["uvicorn", "main:app"] |
Python (FastAPI) Dockerfile
# Use Python 3.11 slim image (minimize size)
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Copy dependency file first (caching optimization)
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy source code
COPY . .
# Expose port (for documentation)
EXPOSE 8000
# Container run command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Node.js (Hono) Dockerfile
# Node.js 20 LTS Alpine image (lightweight)
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy package files first (caching optimization)
COPY package*.json ./
# Install dependencies (production only)
RUN npm ci --omit=dev
# Copy source code
COPY . .
# Expose port
EXPOSE 3000
# Container run command
CMD ["node", "src/index.js"]
Multi-stage Build (Optimization)
Separate build and runtime images to reduce the final image size.
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]
Build & Run Image
# Build image
docker build -t my-api:latest .
# Run container
docker run -p 8000:8000 my-api:latest
# Run in background
docker run -d -p 8000:8000 --name my-api my-api:latest
# View logs
docker logs my-api
# Stop & remove container
docker stop my-api && docker rm my-api
Docker Compose
Define and run multiple containers at once. Manage frontend, backend, and DB in a single file.
Why Do We Need Compose?
❌ Repeating docker run
docker run -d postgres...
docker run -d redis...
docker run -d my-api...
docker run -d my-frontend...
✅ One Compose command
docker compose up -d
All services started at once!
docker-compose.yml Example (Python Stack)
version: '3.8'  # optional; the 'version' key is obsolete in Compose v2
services:
  # PostgreSQL database
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  # FastAPI backend
  api:
    build: ./backend
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://myuser:mypassword@db:5432/mydb
    depends_on:
      - db
  # React frontend
  web:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - api
volumes:
  postgres_data:
docker-compose.yml Example (Node.js Stack)
version: '3.8'  # optional; the 'version' key is obsolete in Compose v2
services:
  # PostgreSQL database
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  # Hono backend
  api:
    build: ./backend
    ports:
      - "3001:3001"
    environment:
      DATABASE_URL: postgresql://myuser:mypassword@db:5432/mydb
    depends_on:
      - db
  # Vite frontend
  web:
    build: ./frontend
    ports:
      - "5173:5173"
    depends_on:
      - api
volumes:
  postgres_data:
Key Commands
| Command | Description |
|---|---|
| docker compose up -d | Start all services in background |
| docker compose down | Stop & remove all services |
| docker compose logs -f api | Stream api service logs in real time |
| docker compose exec api sh | Open a shell in the api container |
| docker compose build | Rebuild images |
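One caveat with the compose files above: `depends_on` only controls start order — it does not wait for Postgres to be ready to accept connections. A sketch of the common fix, adding a healthcheck (service names and credentials match the earlier examples):

```yaml
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      # pg_isready exits 0 once Postgres accepts connections
      test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: ./backend
    depends_on:
      db:
        condition: service_healthy  # wait until the healthcheck passes
```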
Image Registry
Store built images in the cloud so they can be pulled and run from anywhere.
Registry Comparison
Docker Hub
The most widely used public registry
- ✅ Unlimited public images
- ✅ Easy to use
- ⚠️ Only 1 private repo free
GCR / Artifact Registry
Google Cloud's registry
- ✅ Best integration with Cloud Run
- ✅ Private by default
- ⚠️ Storage costs apply
GitHub Container Registry
Registry integrated with GitHub
- ✅ GitHub Actions integration
- ✅ Public is free
- ⚠️ 500MB free storage for private packages
Push Image to GCR
# 1. Authenticate with gcloud
gcloud auth configure-docker asia-northeast3-docker.pkg.dev
# 2. Tag the image
docker tag my-api:latest \
asia-northeast3-docker.pkg.dev/PROJECT_ID/my-repo/my-api:latest
# 3. Push the image
docker push asia-northeast3-docker.pkg.dev/PROJECT_ID/my-repo/my-api:latest
# 4. Use with Cloud Run
gcloud run deploy my-api \
--image=asia-northeast3-docker.pkg.dev/PROJECT_ID/my-repo/my-api:latest \
--region=asia-northeast3
Push to GitHub Container Registry
# 1. Log in with GitHub Personal Access Token
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
# 2. Tag the image
docker tag my-api:latest ghcr.io/USERNAME/my-api:latest
# 3. Push the image
docker push ghcr.io/USERNAME/my-api:latest
Practical Tips & Best Practices
.dockerignore Setup
Exclude unnecessary files from the build context to speed up builds.
# Version control
.git
.gitignore
# Dependencies (installed fresh in image)
node_modules
__pycache__
.venv
venv
# Build artifacts
dist
build
*.pyc
# Development files
.env.local
.env.development
*.log
# IDE
.vscode
.idea
# Docker related
Dockerfile
docker-compose*.yml
.dockerignore
Layer Caching Optimization
❌ Inefficient
COPY . .
RUN pip install -r requirements.txt
Dependencies reinstalled on every source change
✅ Efficient
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
Cache used when dependency file hasn't changed
Security Considerations
- Avoid root user: Set a non-root user with the USER instruction
- Watch for secrets: Never hardcode passwords in Dockerfiles
- Updated base images: rebuild regularly so the base image carries current security patches
- Least privilege: Install only necessary packages
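The first point in the list can be sketched in a Dockerfile — the user name `appuser` is illustrative, and the rest mirrors the FastAPI example above:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Create an unprivileged user and drop root before the app starts
RUN useradd --create-home appuser
USER appuser
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```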
Debugging Tips
# List running containers
docker ps
# View container logs
docker logs -f my-container
# Access container shell
docker exec -it my-container sh
# Check image history (size per layer)
docker history my-api:latest
# Clean up unused resources
docker system prune -a
Next Steps
Now that you have completed containerization with Docker, set up CI/CD to automatically deploy with just a code push.