Nginx & Reverse Proxy
Configure Nginx as a reverse proxy, load balancer, and SSL terminator for Node.js applications.
What is a Reverse Proxy?#
A reverse proxy sits between the internet and your backend servers. Clients connect to Nginx, and Nginx forwards requests to your Node.js application.
Internet → Nginx (Reverse Proxy) → Node.js App

Along the way, Nginx handles:
- SSL termination
- Load balancing
- Caching
- Compression
Why not connect directly to Node.js?
Node.js can absolutely serve HTTP traffic directly. But in production, you want Nginx in front because:
- SSL/TLS - Nginx terminates HTTPS efficiently. Node.js can do it, but handling TLS and certificates in Nginx keeps that work out of your application code.
- Static files - Nginx serves files from disk much faster than Node.js
- Load balancing - Nginx can distribute traffic across multiple Node.js instances
- Security - Nginx hides your internal architecture and can block malicious requests
- Stability - If your Node.js app crashes, Nginx can show a friendly error page
Why Use Nginx?#
| Feature | Without Nginx | With Nginx |
|---|---|---|
| SSL | Node.js handles HTTPS (slower) | Nginx terminates SSL (faster) |
| Static Files | Node.js serves files (slow) | Nginx serves files (fast) |
| Multiple Instances | Manual routing | Automatic load balancing |
| Protection | App exposed directly | Hidden behind proxy |
| Crashes | Users see errors | Users see friendly page |
The pattern: Nginx handles everything HTTP-related. Node.js focuses on business logic.
How Nginx Works#
Nginx uses a configuration file to define its behavior. The main concepts:
- Server blocks - Define virtual hosts (like different domains)
- Location blocks - Define how to handle different URL paths
- Upstream blocks - Define groups of backend servers
- Directives - Individual settings (listen, server_name, proxy_pass, etc.)
Configuration files live in:
- /etc/nginx/nginx.conf - Main configuration
- /etc/nginx/sites-available/ - Individual site configs
- /etc/nginx/sites-enabled/ - Symlinks to active sites
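To see how these pieces fit together, here is a minimal sketch (the upstream name nodejs_backend and the domain example.com are placeholders, not part of any real config):

# Minimal sketch: one upstream, one server block, one location
upstream nodejs_backend {              # Upstream block: a named group of backend servers
    server localhost:3000;
}
server {                               # Server block: one virtual host
    listen 80;                         # Directive: which port to listen on
    server_name example.com;           # Directive: which domain(s) this block answers
    location / {                       # Location block: how to handle this URL path
        proxy_pass http://nodejs_backend;
    }
}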
Basic Installation#
# Ubuntu/Debian
sudo apt update
sudo apt install nginx
# Start and enable (starts on boot)
sudo systemctl start nginx
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
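If you want to confirm Nginx is answering over HTTP and not just running, a quick check from the server itself (this assumes curl is installed):

# Should return the default welcome page with "Server: nginx" in the headers
curl -I http://localhost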
After installation, visiting your server's IP shows the Nginx welcome page. Now let's configure it.
Your First Reverse Proxy#
Let's say your Node.js app runs on localhost:3000. We want:
- Users visit http://myapp.com
- Nginx forwards the request to localhost:3000
Create a configuration file:
# /etc/nginx/sites-available/myapp
server {
listen 80; # Listen on port 80 (HTTP)
server_name myapp.com www.myapp.com; # Domain names to respond to
location / {
proxy_pass http://localhost:3000; # Forward to Node.js
proxy_http_version 1.1; # Use HTTP/1.1 for WebSocket support
# Pass original client information to Node.js
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
Understanding the headers:
| Header | Purpose |
|---|---|
| Host | Original domain the client requested |
| X-Real-IP | Client's actual IP address |
| X-Forwarded-For | Chain of proxies the request passed through |
| X-Forwarded-Proto | Whether the original request was HTTP or HTTPS |
| Upgrade / Connection | Enables WebSocket connections |
Without these headers, your Node.js app would only see Nginx's information, not the actual client's.
Enable the site:
# Create symlink to enable
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
# Test configuration (catches syntax errors)
sudo nginx -t
# Reload to apply changes (no downtime)
sudo systemctl reload nginx
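You can then verify that requests are actually reaching the app rather than the default site. A quick sketch, assuming the app from above is listening on port 3000:

# Send a request with the right Host header; the response should come
# from your Node.js app, not the Nginx welcome page
curl -H "Host: myapp.com" http://localhost/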
Adding HTTPS with Let's Encrypt#
HTTPS is non-negotiable in production. Let's Encrypt provides free SSL certificates.
Install Certbot#
sudo apt install certbot python3-certbot-nginx
Get Certificate#
sudo certbot --nginx -d myapp.com -d www.myapp.com
Certbot will:
- Verify you own the domain
- Generate certificates
- Automatically modify your Nginx config
- Set up auto-renewal
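To confirm auto-renewal is actually in place, Certbot can simulate a renewal without touching your real certificates:

# Dry run: walks through the full renewal process against the staging environment
sudo certbot renew --dry-run

# On most installs, renewal is scheduled via a systemd timer (or a cron job)
systemctl list-timers | grep certbot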
What Changed?#
Your config now looks like:
# HTTP -> HTTPS redirect (Certbot added this)
server {
listen 80;
server_name myapp.com www.myapp.com;
return 301 https://$host$request_uri; # Redirect all HTTP to HTTPS, preserving the requested hostname
}
# HTTPS server
server {
listen 443 ssl http2; # HTTPS with HTTP/2
server_name myapp.com www.myapp.com;
# SSL certificates (Certbot added these)
ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://localhost:3000;
# ... same proxy settings as before
}
}
Key points:
- HTTP (port 80) redirects to HTTPS
- HTTPS (port 443) handles actual traffic
- http2 enables HTTP/2 for better performance
- Certificates renew automatically (Certbot installs a systemd timer or cron job)
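A quick way to check both behaviors from the command line (using the example domain from above):

# Should answer with a 301 redirect to https://
curl -I http://myapp.com

# Should answer over TLS with your Let's Encrypt certificate
curl -I https://myapp.com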
Load Balancing#
When one Node.js instance isn't enough, run multiple and let Nginx distribute traffic.
Define Backend Servers#
upstream nodejs_cluster {
least_conn; # Send to server with fewest active connections
server localhost:3001;
server localhost:3002;
server localhost:3003;
}
Use the Upstream#
server {
listen 80;
server_name myapp.com;
location / {
proxy_pass http://nodejs_cluster; # Nginx picks a server
# ... proxy headers
}
}
Load Balancing Methods#
# Pick ONE method per upstream block - they can't be combined.

# Round Robin (default): rotates through servers one by one
upstream backend_round_robin {
    server localhost:3001;
    server localhost:3002;
}

# Least Connections: sends to the server with the fewest active connections
# Best for requests with varying processing times
upstream backend_least_conn {
    least_conn;
    server localhost:3001;
    server localhost:3002;
}

# IP Hash: the same client IP always goes to the same server
# Useful for session affinity (sticky sessions)
upstream backend_ip_hash {
    ip_hash;
    server localhost:3001;
    server localhost:3002;
}

# Weighted: send more traffic to more powerful servers
upstream backend_weighted {
    server localhost:3001 weight=3;  # Gets 3x the traffic
    server localhost:3002 weight=1;
}
Health Checks#
Nginx can detect failed servers and stop sending traffic:
upstream backend {
server localhost:3001 max_fails=3 fail_timeout=30s;
server localhost:3002 max_fails=3 fail_timeout=30s;
server localhost:3003 backup; # Only used if others fail
}
- max_fails=3 - After 3 failed requests, mark the server as down
- fail_timeout=30s - Wait 30 seconds before trying it again
- backup - Only receives traffic when the primary servers are down
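These are passive checks: open-source Nginx only notices a backend is down when a real request to it fails (active, interval-based probes are an Nginx Plus feature). What counts as a failure, and whether the request is retried on another server, can be tuned with proxy_next_upstream. A sketch:

location / {
    proxy_pass http://backend;
    # Retry the next upstream server on connection errors, timeouts, or these 5xx responses
    proxy_next_upstream error timeout http_500 http_502 http_503;
    proxy_next_upstream_tries 2;   # Give up after trying two servers
}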
Serving Static Files#
Nginx serves static files much faster than Node.js. Let Nginx handle images, CSS, JavaScript, etc.
server {
listen 80;
server_name myapp.com;
# Static files - Nginx serves directly
location /static/ {
alias /var/www/myapp/public/; # Files live here
expires 30d; # Cache for 30 days
add_header Cache-Control "public, immutable";
}
# Uploaded files
location /uploads/ {
alias /var/www/myapp/uploads/;
expires 7d;
}
# Everything else - Forward to Node.js
location / {
proxy_pass http://localhost:3000;
}
}
Why this matters:
- Nginx serves static files with minimal CPU usage
- Node.js is freed to handle actual application logic
- Files are cached in the browser, reducing server load
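If your assets aren't grouped under a single /static/ prefix, a common alternative is matching by file extension. A sketch, assuming the files live under /var/www/myapp/public:

# Serve common asset types directly, wherever they appear in the URL
location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff2?)$ {
    root /var/www/myapp/public;
    expires 30d;
    add_header Cache-Control "public";
}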
Compression#
Compress responses to reduce bandwidth and speed up page loads.
# Add to nginx.conf http block
gzip on;
gzip_vary on; # Adds "Vary: Accept-Encoding" so caches keep compressed and uncompressed copies
gzip_min_length 1024; # Don't compress tiny files
gzip_proxied any; # Compress for proxied requests too
gzip_comp_level 6; # Compression level (1-9, higher = smaller but slower)
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml
image/svg+xml;
Note: Don't compress images (JPEG, PNG) - they're already compressed. Compressing them wastes CPU.
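To check that compression is actually being applied (assuming curl is installed; the URL is just an example), request a page while advertising gzip support and inspect the response headers:

# -D - prints response headers; look for "Content-Encoding: gzip"
curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null http://myapp.com/

Responses smaller than gzip_min_length (1 KB here) won't be compressed, so test against a reasonably large response.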
Rate Limiting#
Protect your API from abuse by limiting requests per client.
# Define rate limit zones (in http block)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;
What this means:
- $binary_remote_addr - Limit per client IP
- zone=api_limit:10m - Store state in 10 MB of shared memory
- rate=10r/s - Allow 10 requests per second
Apply to routes:
server {
# General API - 10 requests/second
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://localhost:3000;
}
# Login - Strict limit (prevent brute force)
location /api/auth/login {
limit_req zone=login_limit burst=5;
proxy_pass http://localhost:3000;
}
}
- burst=20 - Allow temporary bursts of up to 20 requests
- nodelay - Process the burst immediately (don't queue it)
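By default Nginx rejects rate-limited requests with a 503. If you'd rather send the more accurate 429 Too Many Requests, that's configurable with a small addition in the http or server block:

limit_req_status 429;          # Return 429 instead of the default 503
limit_req_log_level warn;      # Log rejected requests at "warn" rather than "error"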
Security Headers#
Add headers to protect against common attacks:
server {
# Prevent clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;
# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;
# Enable XSS filter (legacy browsers)
add_header X-XSS-Protection "1; mode=block" always;
# Control referrer information
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# HTTPS only (after you have SSL working)
add_header Strict-Transport-Security "max-age=31536000" always;
# Hide Nginx version (security through obscurity)
server_tokens off;
}
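One Nginx quirk to watch for: add_header directives are only inherited from the server block if a location block defines no add_header of its own. A sketch of the pitfall:

server {
    listen 80;
    add_header X-Frame-Options "SAMEORIGIN" always;

    location /static/ {
        # This location defines its own add_header, so it no longer inherits
        # X-Frame-Options from the server block...
        add_header Cache-Control "public, immutable";
        # ...unless the headers you still want are repeated here
        add_header X-Frame-Options "SAMEORIGIN" always;
    }
}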
WebSocket Support#
WebSockets need special handling because they upgrade from HTTP:
location /socket.io/ {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
# Required for WebSocket upgrade
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
# Long read/send timeouts so idle WebSocket connections aren't dropped
# (proxy_connect_timeout only covers the TCP handshake and is capped at ~75s, so it's left at the default)
proxy_send_timeout 7d;
proxy_read_timeout 7d;
}
Without the Upgrade headers, WebSocket connections will fail.
Common Mistakes#
1. Forgetting to reload after changes#
sudo nginx -t && sudo systemctl reload nginx
2. Not testing configuration#
Always run nginx -t before applying changes. A reload with a broken config is rejected (Nginx keeps the old config), but a restart with one takes your site down.
3. Wrong file permissions#
Nginx runs as www-data. Files it serves need to be readable:
sudo chown -R www-data:www-data /var/www/myapp
4. Proxy headers missing#
Without proper headers, your app sees Nginx's IP, not the client's.
5. Not handling WebSockets#
If you use Socket.io or WebSockets, add the upgrade headers.
Key Takeaways#
- Use Nginx for SSL - Let Nginx handle HTTPS. It's faster and easier to manage.
- Load balance - Run multiple Node.js instances for reliability and performance.
- Serve static files - Nginx is much faster than Node.js for static content.
- Enable compression - Reduce bandwidth with gzip.
- Add security headers - Protect against common web attacks.
- Test before reloading - Always run nginx -t first.
Common workflow:
# Edit config
sudo nano /etc/nginx/sites-available/myapp
# Test (catch errors)
sudo nginx -t
# Reload (apply changes, no downtime)
sudo systemctl reload nginx