Docker
Package your application in containers - consistent deployments everywhere.
Why Docker?#
"It works on my machine" - the developer's curse.
Your application works perfectly on your laptop. You deploy to production, and it crashes. Why? Different Node.js version. Missing system library. Environment variable not set. The production server is subtly different from your development environment.
Docker solves this. It packages your application with everything it needs - Node.js version, system dependencies, environment configuration. The same container runs the same way everywhere: your laptop, CI/CD, staging, production.
Benefits:
- Consistency - Same environment everywhere
- Isolation - Apps don't interfere with each other
- Reproducibility - Build once, run anywhere
- Easy deployment - Ship a container, not instructions
Docker Basics#
Before diving into code, understand these key concepts:
| Concept | What It Is | Analogy |
|---|---|---|
| Image | Blueprint for containers. Read-only template with your app and dependencies. | Like a class definition |
| Container | Running instance of an image. Isolated process with its own filesystem. | Like an object instance |
| Dockerfile | Instructions to build an image. Step-by-step recipe. | Like source code |
| Docker Compose | Tool to run multiple containers together. Define your stack in YAML. | Like a project configuration |
The workflow:
- Write a `Dockerfile` (instructions)
- Build an `image` (snapshot)
- Run a `container` (instance)
Writing a Dockerfile#
Basic Dockerfile#
A minimal Dockerfile for a Node.js app:
# Base image - what OS and Node version to start with
FROM node:20-alpine
# Where to put the app inside the container
WORKDIR /app
# Copy package files first (for caching)
COPY package*.json ./
# Install production dependencies only
RUN npm ci --omit=dev
# Copy the rest of the application
COPY . .
# Document which port the app uses
EXPOSE 3000
# Command to start the application
CMD ["node", "src/index.js"]
Why this order matters:
Docker caches each step. If a step hasn't changed, Docker reuses the cached result. By copying package.json first and installing dependencies before copying source code, you avoid reinstalling dependencies when only your code changes.
package.json changes rarely → Cached most of the time
Source code changes frequently → Rebuilt each time
Better: Copy package.json → Install → Copy source
Worse: Copy everything → Install (reinstalls every time!)
Production-Optimized Dockerfile#
For production, you want security, smaller images, and health checks:
# Build stage - has all dev tools
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build # If you have a TypeScript/build step
# Production stage - only what's needed to run
FROM node:20-alpine AS production
WORKDIR /app
# Security: Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
# Copy only production dependencies
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
# Copy built application from builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/src ./src
# Set ownership to non-root user
RUN chown -R nodejs:nodejs /app
# Run as non-root user
USER nodejs
EXPOSE 3000
# Health check - Docker flags the container as unhealthy when this fails
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "src/index.js"]
Key Optimizations Explained#
# 1. Use Alpine (smaller image)
FROM node:20-alpine # ~50MB vs ~350MB for node:20
Why: Alpine Linux is minimal. Smaller images download faster, use less disk, and have fewer potential vulnerabilities.
# 2. Order for caching
COPY package*.json ./ # Changes rarely
RUN npm ci
COPY . . # Changes often
Why: Dependencies don't change often. By installing them before copying source code, you reuse the cached layer when only code changes.
# 3. Use npm ci (not npm install)
RUN npm ci --omit=dev
Why: npm ci is faster, installs exact versions from package-lock.json, and fails if the lock file is out of sync. Perfect for reproducible builds. (`--omit=dev` is the modern replacement for the deprecated `--only=production` flag.)
# 4. Multi-stage builds
FROM node:20-alpine AS builder # Has devDependencies for building
FROM node:20-alpine AS production # Only production dependencies
Why: Your build stage might need TypeScript, testing tools, etc. Your production image shouldn't include them. Multi-stage builds let you use dev tools for building but ship only what's needed.
# 5. Non-root user
USER nodejs
Why: If an attacker compromises your container, they get the permissions of the user running the process. Running as root is dangerous. Running as a limited user contains the damage.
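If you want to confirm the directive took effect, you can check from inside the container. A tiny sketch (uid 0 would mean root):

```js
// whoami.js - sanity-check that the container process is not root (sketch)
const os = require('node:os');

const uid = process.getuid ? process.getuid() : null; // POSIX only
if (uid === 0) {
  console.warn('WARNING: running as root - check the USER directive in your Dockerfile');
} else {
  console.log(`running as ${os.userInfo().username} (uid ${uid})`);
}
```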
# 6. Health checks
HEALTHCHECK CMD wget --spider http://localhost:3000/health || exit 1
Why: Docker flags the container as unhealthy when the check fails, and orchestrators or watchers acting on that status can restart it automatically (Kubernetes uses its own liveness probes instead of Docker's HEALTHCHECK). Without health checks, a hung process inside a running container goes unnoticed.
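The checks above assume the app actually serves a `/health` route. A minimal sketch, assuming Express (the path and port just need to match whatever the HEALTHCHECK probes):

```js
// health endpoint for the HEALTHCHECK above (sketch, assumes Express)
const express = require('express');
const app = express();

// Respond 200 while the process can serve requests. Extend with DB/cache
// connectivity checks if your definition of "healthy" depends on them.
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000, () => console.log('listening on 3000'));
```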
Docker Compose#
Docker Compose lets you run multiple containers together. Define your entire stack in a YAML file.
Production Compose#
# docker-compose.yml
services:
api:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- MONGODB_URI=mongodb://mongo:27017/myapp
- REDIS_URL=redis://redis:6379
depends_on:
- mongo
- redis
restart: unless-stopped
mongo:
image: mongo:7
volumes:
- mongo_data:/data/db
restart: unless-stopped
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
restart: unless-stopped
volumes:
mongo_data:
redis_data:
Key points:
- `depends_on` - Start mongo and redis before the api (start order only; it doesn't wait for them to be ready)
- `volumes` - Persist database data across container restarts
- `restart: unless-stopped` - Auto-restart if the container crashes
- Services communicate by name (`mongo`, `redis`) - Docker creates a network for them
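Inside that network, the service names resolve via Docker's internal DNS, so the api connects using the URLs from the environment block above. A sketch, assuming mongoose and the node-redis client:

```js
// db.js - connect to sibling services by their Compose service names (sketch)
const mongoose = require('mongoose');
const { createClient } = require('redis');

async function connect() {
  // 'mongo' and 'redis' in these URLs are hostnames resolved by Docker's DNS
  await mongoose.connect(process.env.MONGODB_URI); // mongodb://mongo:27017/myapp

  const redis = createClient({ url: process.env.REDIS_URL }); // redis://redis:6379
  await redis.connect();
  return redis;
}

module.exports = { connect };
```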
Development Compose#
For development, you want hot reloading and exposed ports for debugging tools:
# docker-compose.dev.yml
services:
api:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- .:/app # Mount source code for hot reload
- /app/node_modules # Don't overwrite node_modules
environment:
- NODE_ENV=development
- MONGODB_URI=mongodb://mongo:27017/myapp_dev
command: npm run dev
mongo:
image: mongo:7
ports:
- "27017:27017" # Expose for MongoDB Compass, etc.
# Dockerfile.dev - Simpler, includes devDependencies
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install # All dependencies, including dev
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
Volume mounts explained:
- `.:/app` - Your local source directory is mounted into the container. Changes on your laptop appear inside the container instantly (hot reload).
- `/app/node_modules` - Prevents the mount from overwriting the container's node_modules with your local one (which might be built for a different OS).
Common Commands#
# Building and running
docker build -t my-api . # Build image
docker run -p 3000:3000 my-api # Run container
docker run -d -p 3000:3000 my-api # Run in background
docker run -e NODE_ENV=production my-api # With environment variable
# Viewing and debugging
docker ps # List running containers
docker ps -a # List all containers (including stopped)
docker logs <container_id> # View logs
docker logs -f <container_id> # Follow logs (live)
docker exec -it <container_id> sh # Shell into running container
# Stopping and cleanup
docker stop <container_id> # Graceful stop
docker rm <container_id> # Remove stopped container
docker rmi my-api # Remove image
docker system prune                     # Remove stopped containers, unused networks, dangling images (add --volumes to include volumes)
# Docker Compose
docker compose up # Start all services
docker compose up -d # Start in background
docker compose up --build # Rebuild images first
docker compose down # Stop and remove containers
docker compose logs -f api # Follow logs for specific service
docker compose exec api sh # Shell into service
.dockerignore#
Like .gitignore, but for Docker. These files won't be copied into the image:
# .dockerignore
node_modules
npm-debug.log
.env
.env.*
.git
.gitignore
.dockerignore
Dockerfile*
docker-compose*
README.md
.vscode
coverage
.nyc_output
tests
*.test.js
Why this matters:
- Smaller images - Don't include unnecessary files
- Faster builds - Less to copy
- Security - Don't include `.env` files with secrets
- Correctness - Don't copy local `node_modules` (might be built for the wrong platform)
Environment Variables#
Three ways to set environment variables:#
1. In Dockerfile (defaults):
ENV NODE_ENV=production
ENV PORT=3000
2. In docker-compose.yml:
services:
api:
environment:
- NODE_ENV=production
- PORT=3000
env_file:
- .env.production # Load from file
3. At runtime:
docker run -e NODE_ENV=production -e JWT_SECRET=xxx my-api
Best practice: Sensitive values (API keys, secrets) should be passed at runtime, not baked into images.
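On the application side, fail fast when a required secret is missing rather than limping along with `undefined`. A sketch of a small config module (the variable names mirror the examples above; adapt them to your app):

```js
// config.js - read and validate environment variables at startup (sketch)
function required(name) {
  const value = process.env[name];
  if (!value) {
    console.error(`Missing required environment variable: ${name}`);
    process.exit(1); // fail fast so misconfiguration is caught at startup
  }
  return value;
}

module.exports = {
  nodeEnv: process.env.NODE_ENV || 'development',
  port: Number(process.env.PORT) || 3000,
  jwtSecret: required('JWT_SECRET'), // passed at runtime, never baked into the image
};
```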
Multi-Architecture Builds#
If you need to run on different CPU architectures (Intel Mac, M1/M2 Mac, AWS Graviton):
# Create a builder that can build for multiple platforms
docker buildx create --use
# Build and push for multiple platforms
docker buildx build --platform linux/amd64,linux/arm64 \
-t myregistry/my-api:latest \
--push .
This pushes a single multi-arch manifest, so the same tag works on both Intel (amd64) and ARM (arm64) processors.
Pushing to Registries#
Docker Hub#
docker login
docker tag my-api username/my-api:latest
docker push username/my-api:latest
GitHub Container Registry#
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
docker tag my-api ghcr.io/username/my-api:latest
docker push ghcr.io/username/my-api:latest
AWS ECR#
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker tag my-api 123456789.dkr.ecr.us-east-1.amazonaws.com/my-api:latest
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/my-api:latest
Production Checklist#
Before deploying to production, verify:
# ✓ Use specific version tags (not "latest")
FROM node:20.10-alpine # Not node:latest - could change unexpectedly
# ✓ Run as non-root user
USER nodejs # Security best practice
# ✓ Health check defined
HEALTHCHECK CMD wget --spider http://localhost:3000/health || exit 1
# ✓ Only production dependencies
RUN npm ci --omit=dev # No dev dependencies in production
# ✓ Multi-stage build (if you have a build step)
FROM node:20-alpine AS builder
FROM node:20-alpine AS production
# ✓ Use node directly (not npm start)
CMD ["node", "src/index.js"] # npm adds overhead and complicates signal handling
Why node instead of npm start? npm adds a wrapper process. When Docker sends shutdown signals, npm might not forward them properly to your Node.js process, causing ungraceful shutdowns.
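Running `node` directly means your process receives SIGTERM itself and can shut down cleanly. A minimal sketch, assuming an Express server:

```js
// index.js - graceful shutdown on `docker stop` (sketch, assumes Express)
const express = require('express');
const app = express();

const server = app.listen(3000, () => console.log('listening on 3000'));

// Docker sends SIGTERM on `docker stop`, then SIGKILL after a grace period (10s by default).
process.on('SIGTERM', () => {
  console.log('SIGTERM received, draining connections...');
  server.close(() => process.exit(0)); // stop accepting new requests, exit once idle
});
```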
Key Takeaways#
- Docker ensures consistency - Same environment from development to production.
- Order Dockerfile for caching - Package files before source code. Dependencies change less often than code.
- Use multi-stage builds - Dev tools for building, minimal image for production.
- Run as non-root - Security best practice. Limits damage if compromised.
- Docker Compose for local dev - One command starts your entire stack.
- Health checks enable recovery - Unhealthy containers get flagged and can be restarted automatically.
Start Here
docker compose up
That's it. Your entire stack starts with one command. Database, cache, API - all configured and connected. No more "install MongoDB, configure Redis, set environment variables..."