Microservices Patterns
Essential patterns for microservices - API Gateway, service discovery, circuit breakers, and more.
API Gateway
A single entry point for all client requests:
Without Gateway:
┌────────┐
│ Client │──► User Service
│        │──► Product Service
│        │──► Order Service
└────────┘  (Client knows all services)
With Gateway:
┌────────┐    ┌─────────────┐    ┌──────────────┐
│ Client │───►│ API Gateway │───►│ User Service │
└────────┘    │             │───►│ Product      │
              │             │───►│ Order        │
              └─────────────┘    └──────────────┘
              (Single entry point)
Why Use an API Gateway?
- Single entry point - Clients only know one URL
- Authentication - Verify tokens in one place
- Rate limiting - Protect services from abuse
- Request routing - Route to correct service
- Response aggregation - Combine data from multiple services
- Protocol translation - REST to gRPC, etc.
Option 1: Express Gateway
Simple Node.js gateway:
// gateway/src/index.js
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';
import rateLimit from 'express-rate-limit';
import { authenticate } from './middleware/auth.js';

const app = express();

// Rate limiting
app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,
}));

// Authentication for protected routes
app.use('/api', authenticate);

// Route to User Service
app.use('/api/users', createProxyMiddleware({
  target: process.env.USER_SERVICE_URL,
  changeOrigin: true,
  pathRewrite: { '^/api/users': '/api/users' },
}));

// Route to Product Service
app.use('/api/products', createProxyMiddleware({
  target: process.env.PRODUCT_SERVICE_URL,
  changeOrigin: true,
}));

// Route to Order Service
app.use('/api/orders', createProxyMiddleware({
  target: process.env.ORDER_SERVICE_URL,
  changeOrigin: true,
}));
// Response aggregation endpoint
app.get('/api/dashboard', async (req, res) => {
  const [user, orders, recommendations] = await Promise.all([
    fetch(`${process.env.USER_SERVICE_URL}/api/users/${req.user.id}`).then(r => r.json()),
    fetch(`${process.env.ORDER_SERVICE_URL}/api/orders?userId=${req.user.id}`).then(r => r.json()),
    fetch(`${process.env.PRODUCT_SERVICE_URL}/api/recommendations/${req.user.id}`).then(r => r.json()),
  ]);

  res.json({
    data: { user: user.data, orders: orders.data, recommendations: recommendations.data },
  });
});

app.listen(3000);
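The `authenticate` middleware imported above isn't shown in the example. A minimal sketch, assuming JWT bearer tokens and a shared `JWT_SECRET` (both assumptions, not part of the original code), could look like this:

// gateway/src/middleware/auth.js (illustrative sketch)
import jwt from 'jsonwebtoken';

export function authenticate(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;

  if (!token) {
    return res.status(401).json({ error: 'Missing access token' });
  }

  try {
    // Verify once at the edge so downstream services can trust req.user
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}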
Option 2: Kong Gateway
Production-ready, plugin-based:
# kong.yml
_format_version: "3.0"

services:
  - name: user-service
    url: http://user-service:3001
    routes:
      - name: user-routes
        paths:
          - /api/users
    plugins:
      - name: rate-limiting
        config:
          minute: 100
      - name: jwt

  - name: product-service
    url: http://product-service:3002
    routes:
      - name: product-routes
        paths:
          - /api/products
Option 3: AWS API Gateway
Serverless, fully managed:
# serverless.yml
service: my-api-gateway

provider:
  name: aws
  runtime: nodejs20.x

functions:
  users:
    handler: handlers/users.handler
    events:
      - http:
          path: /api/users/{proxy+}
          method: ANY
  products:
    handler: handlers/products.handler
    events:
      - http:
          path: /api/products/{proxy+}
          method: ANY
Gateway Comparison
| Gateway | Best For | Complexity |
|---|---|---|
| Custom (Express) | Simple cases, learning | Low |
| Kong | Production, plugins | Medium |
| AWS API Gateway | Serverless, AWS | Low |
| Nginx | High performance | Medium |
| Traefik | Kubernetes, Docker | Medium |
Service Discovery
How services find each other in a dynamic environment:
The Problem
// Hardcoded URLs break when services move
const USER_SERVICE = 'http://192.168.1.10:3001'; // What if IP changes?
Option 1: Environment Variables (Simple)
// Works for small deployments
const USER_SERVICE = process.env.USER_SERVICE_URL || 'http://localhost:3001';
Option 2: DNS-Based (Docker/Kubernetes)
Docker Compose and Kubernetes provide built-in DNS:
# docker-compose.yml
services:
  user-service:
    build: ./user-service
    # Accessible as 'user-service' from other containers

  order-service:
    build: ./order-service
    environment:
      - USER_SERVICE_URL=http://user-service:3001 # Uses service name
// In Kubernetes, use service names
const USER_SERVICE = 'http://user-service.default.svc.cluster.local:3001';
// Or just 'http://user-service:3001' within same namespace
Option 3: Consul (Service Registry)
Services register themselves, clients query registry:
// User Service - Register on startup
import Consul from 'consul';

const consul = new Consul();

await consul.agent.service.register({
  name: 'user-service',
  id: `user-service-${process.env.INSTANCE_ID}`,
  address: process.env.HOST,
  port: parseInt(process.env.PORT),
  check: {
    http: `http://${process.env.HOST}:${process.env.PORT}/health`,
    interval: '10s',
  },
});

// Deregister on shutdown
process.on('SIGINT', async () => {
  await consul.agent.service.deregister(`user-service-${process.env.INSTANCE_ID}`);
  process.exit();
});
// Order Service - Discover User Service
async function getUserServiceUrl() {
  const services = await consul.health.service('user-service');
  const healthy = services.filter(s => s.Checks.every(c => c.Status === 'passing'));

  if (healthy.length === 0) {
    throw new Error('No healthy user-service instances');
  }

  // Simple load balancing (random)
  const instance = healthy[Math.floor(Math.random() * healthy.length)];
  return `http://${instance.Service.Address}:${instance.Service.Port}`;
}
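Putting discovery to use in the Order Service might look like the sketch below. The `getUserServiceUrl` helper is the one defined above; in practice you would cache the lookup rather than hit Consul on every request:

// Order Service - call a discovered instance (illustrative sketch)
import axios from 'axios';

async function fetchUser(userId) {
  const baseUrl = await getUserServiceUrl(); // resolve a healthy instance via Consul
  const response = await axios.get(`${baseUrl}/api/users/${userId}`);
  return response.data;
}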
Circuit Breaker Pattern
Prevent cascading failures when a service is down:
Without Circuit Breaker:

Order Service ──► User Service (down)
              └─► Timeout... Timeout... Timeout...
                  (All requests fail slowly)

With Circuit Breaker:

Order Service ──► Circuit Breaker ──► User Service (down)
                  └─► OPEN: Fail fast!
                      (Don't even try, return error immediately)
States

CLOSED ──────► OPEN ──────► HALF-OPEN
  │              │              │
  │ Failures     │ Timeout      │ Success = CLOSED
  │ exceed       │ expires      │ Failure = OPEN
  │ threshold    │              │
  └──────────────┴──────────────┘
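To make the state transitions concrete, here is a minimal hand-rolled breaker. It is a sketch for illustration only, using a consecutive-failure count rather than the error percentage a library tracks; the Opossum implementation below is what you would actually use:

// Minimal circuit breaker sketch (illustration only)
class SimpleCircuitBreaker {
  constructor(fn, { failureThreshold = 5, resetTimeout = 30000 } = {}) {
    this.fn = fn;
    this.failureThreshold = failureThreshold;
    this.resetTimeout = resetTimeout;
    this.failures = 0;
    this.state = 'CLOSED';
    this.openedAt = 0;
  }

  async fire(...args) {
    if (this.state === 'OPEN') {
      if (Date.now() - this.openedAt < this.resetTimeout) {
        throw new Error('Circuit is OPEN - failing fast');
      }
      this.state = 'HALF-OPEN'; // reset timeout expired - allow one trial call
    }

    try {
      const result = await this.fn(...args);
      this.failures = 0;
      this.state = 'CLOSED'; // success closes the circuit
      return result;
    } catch (error) {
      this.failures += 1;
      if (this.state === 'HALF-OPEN' || this.failures >= this.failureThreshold) {
        this.state = 'OPEN'; // trip the breaker
        this.openedAt = Date.now();
      }
      throw error;
    }
  }
}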
Implementation with Opossum
npm install opossum
import CircuitBreaker from 'opossum';
import axios from 'axios';

const USER_SERVICE_URL = process.env.USER_SERVICE_URL;

// Wrap the function that makes external calls
async function fetchUser(userId) {
  const response = await axios.get(`${USER_SERVICE_URL}/api/users/${userId}`);
  return response.data;
}

// Create circuit breaker
const userServiceBreaker = new CircuitBreaker(fetchUser, {
  timeout: 3000,                 // Timeout per request
  errorThresholdPercentage: 50,  // Open if 50% of requests fail
  resetTimeout: 30000,           // Try again after 30s
});

// Events
userServiceBreaker.on('open', () => {
  console.warn('User service circuit OPEN - failing fast');
});

userServiceBreaker.on('halfOpen', () => {
  console.info('User service circuit HALF-OPEN - testing');
});

userServiceBreaker.on('close', () => {
  console.info('User service circuit CLOSED - recovered');
});

// Fallback when circuit is open
userServiceBreaker.fallback((userId) => ({
  data: { id: userId, name: 'Unknown', cached: true },
}));

// Usage
router.get('/orders/:id', async (req, res) => {
  const order = await OrderService.findById(req.params.id);

  // This will fail fast if the circuit is open
  const user = await userServiceBreaker.fire(order.userId);

  res.json({ data: { ...order, user: user.data } });
});
Saga Pattern
Manage transactions across multiple services:
The Problem
// This doesn't work across services!
await db.transaction(async (tx) => {
await UserService.deductBalance(userId, amount); // Different DB!
await OrderService.createOrder(order); // Different DB!
await InventoryService.reserve(productId); // Different DB!
});
Choreography-Based Saga
Services react to events:
// 1. Order Service creates order and publishes event
router.post('/orders', async (req, res) => {
  const order = await Order.create({ ...req.body, status: 'pending' });
  await messageQueue.publish('order.created', order);
  res.status(201).json({ data: order });
});

// 2. Payment Service listens and processes
messageQueue.subscribe('order.created', async (order) => {
  try {
    await PaymentService.charge(order.userId, order.total);
    await messageQueue.publish('payment.completed', { orderId: order.id });
  } catch (error) {
    await messageQueue.publish('payment.failed', { orderId: order.id, error: error.message });
  }
});

// 3. Inventory Service listens for payment success
messageQueue.subscribe('payment.completed', async ({ orderId }) => {
  try {
    const order = await getOrder(orderId);
    await InventoryService.reserve(order.productId, order.quantity);
    await messageQueue.publish('inventory.reserved', { orderId });
  } catch (error) {
    await messageQueue.publish('inventory.failed', { orderId });
  }
});

// 4. Order Service listens for final result
messageQueue.subscribe('inventory.reserved', async ({ orderId }) => {
  await Order.updateOne({ _id: orderId }, { status: 'confirmed' });
});

// Compensation: Handle failures
messageQueue.subscribe('payment.failed', async ({ orderId }) => {
  await Order.updateOne({ _id: orderId }, { status: 'failed' });
});

messageQueue.subscribe('inventory.failed', async ({ orderId }) => {
  // Refund payment (compensation)
  await PaymentService.refund(orderId);
  await Order.updateOne({ _id: orderId }, { status: 'failed' });
});
Orchestration-Based Saga
Central coordinator manages the flow:
// Saga Orchestrator
class OrderSaga {
  async execute(orderData) {
    const steps = [];

    try {
      // Step 1: Create order
      const order = await OrderService.create(orderData);
      steps.push({ service: 'order', action: 'create', data: order });

      // Step 2: Process payment
      const payment = await PaymentService.charge(order.userId, order.total);
      steps.push({ service: 'payment', action: 'charge', data: payment });

      // Step 3: Reserve inventory
      await InventoryService.reserve(order.productId, order.quantity);
      steps.push({ service: 'inventory', action: 'reserve', data: order });

      // Step 4: Confirm order
      await OrderService.confirm(order.id);

      return { success: true, order };
    } catch (error) {
      // Compensate in reverse order
      await this.compensate(steps);
      return { success: false, error: error.message };
    }
  }

  async compensate(steps) {
    for (const step of steps.reverse()) {
      try {
        switch (step.service) {
          case 'payment':
            await PaymentService.refund(step.data.id);
            break;
          case 'inventory':
            await InventoryService.release(step.data.productId, step.data.quantity);
            break;
          case 'order':
            await OrderService.cancel(step.data.id);
            break;
        }
      } catch (error) {
        console.error(`Compensation failed for ${step.service}:`, error);
        // Log for manual intervention
      }
    }
  }
}
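Calling the orchestrator from a route handler could look like this (a sketch that reuses the Express-style routing from earlier examples):

// Order route - run the saga and report the outcome (illustrative sketch)
router.post('/orders', async (req, res) => {
  const saga = new OrderSaga();
  const result = await saga.execute(req.body);

  if (!result.success) {
    // Completed steps have already been compensated; surface the failure
    return res.status(409).json({ error: result.error });
  }

  res.status(201).json({ data: result.order });
});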
Distributed Tracing
Track requests across services:
OpenTelemetry Setup
npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-jaeger
// tracing.js - Run before your app
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';

const sdk = new NodeSDK({
  serviceName: 'order-service',
  traceExporter: new JaegerExporter({
    endpoint: 'http://jaeger:14268/api/traces',
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
# Start with tracing preloaded (use --require instead if tracing.js is CommonJS)
node --import ./tracing.js src/index.js
Now you can see the request flow across services in the Jaeger UI.
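Auto-instrumentation covers inbound and outbound HTTP plus common libraries; for your own business logic you can add custom spans with `@opentelemetry/api`. A sketch (the `chargeAndReserve` helper is hypothetical):

// Custom span around business logic (sketch using @opentelemetry/api)
import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('order-service');

async function processOrder(order) {
  return tracer.startActiveSpan('processOrder', async (span) => {
    try {
      span.setAttribute('order.id', order.id);
      return await chargeAndReserve(order); // hypothetical helper
    } finally {
      span.end(); // always end the span, even on error
    }
  });
}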
Key Takeaways
- API Gateway - Single entry point, handles cross-cutting concerns
- Service Discovery - Use DNS (Docker/K8s) or registry (Consul)
- Circuit Breaker - Fail fast when services are down
- Saga Pattern - Manage distributed transactions
- Distributed Tracing - Debug across service boundaries
Start Simple
Don't implement every pattern from day one:
- Start with a simple proxy gateway
- Use DNS-based discovery (Docker Compose)
- Add circuit breakers when you see cascading failures
- Add tracing when debugging becomes painful