# Message Brokers Introduction
Asynchronous communication between services with RabbitMQ, Kafka, and Redis.
## What are Message Brokers?
A message broker is middleware that translates messages between services. Instead of services calling each other directly (synchronous), they send messages through a broker (asynchronous).
```
Synchronous:   Service A → HTTP Request →    Service B
                         ← Response     ←

Asynchronous:  Service A → [Message Broker] → Service B
               (continues immediately)        (processes when ready)
```
Why this matters: In a synchronous world, if Service B is slow or down, Service A waits or fails. With a message broker, Service A sends the message and moves on. The broker ensures Service B gets the message eventually.
## Why Use Message Brokers?

### The Problem Without Brokers
Imagine an e-commerce checkout flow:
```
User clicks "Buy"
  → Process payment
  → Send confirmation email
  → Update inventory
  → Notify warehouse
  → Update analytics
```
If done synchronously, the user waits for all of these to complete. If email is slow, the user waits. If analytics is down, the checkout fails.
### The Solution With Brokers
```
User clicks "Buy"
  → Process payment
  → Send "OrderPlaced" event to broker
  → Return success to user (fast!)

Meanwhile, asynchronously:
  → Email service receives event     → sends email
  → Inventory service receives event → updates stock
  → Warehouse service receives event → creates pick list
  → Analytics service receives event → logs data
```
The user gets their response in milliseconds. The other services work in the background.
### Key Benefits
| Benefit | What It Means |
|---|---|
| Decoupling | Services don't need to know about each other. The order service doesn't know or care that email, inventory, and analytics are listening. |
| Reliability | Messages persist even if consumers are temporarily down. When they come back, they process the backlog. |
| Scalability | Need to handle more orders? Add more consumer instances. The broker distributes work automatically. |
| Load Leveling | Traffic spikes are absorbed by the queue. Backend services process at their own pace. |
| Async Processing | Slow operations (email, reports, image processing) don't block user-facing requests. |
## When to Use Message Brokers
Good use cases:
- Background jobs (email, SMS, PDF generation)
- Event-driven architectures (microservices communication)
- Data pipelines (logs, analytics, ETL)
- High-throughput systems (thousands of events per second)
- Reliability-critical operations (payments, order processing)
Not needed for:
- Simple, single-service applications
- Operations that must be synchronous (like authentication)
- Low-volume internal tools
- Prototypes and MVPs (add complexity when you need it)
## Broker Comparison
Each broker has different strengths. Choose based on your needs:
| Broker | Best For | Throughput | Complexity | When to Use |
|---|---|---|---|---|
| Redis (BullMQ) | Job queues, simple tasks | Medium | Low | Background jobs, simple pub/sub |
| RabbitMQ | Reliable messaging, routing | Medium-High | Medium | Complex routing, guaranteed delivery |
| Kafka | High throughput, event streaming | Very High | High | Real-time analytics, event sourcing |
| AWS SQS | Managed, serverless | Medium | Low | AWS-native apps, no infrastructure mgmt |
### Redis/BullMQ
Best for: Background jobs like sending emails, processing uploads, scheduled tasks.
Strengths:
- Simple to set up (if you already use Redis)
- Great for job queues with retries, delays, and priorities
- Low latency
Limitations:
- Messages can be lost if Redis crashes (unless configured for persistence)
- Not designed for complex routing
### RabbitMQ
Best for: Enterprise messaging with routing rules, guaranteed delivery.
Strengths:
- Extremely reliable (messages survive broker restarts)
- Flexible routing (direct, topic, fanout, headers)
- Built-in retry/dead-letter support
Limitations:
- Not optimized for very high throughput
- More operational complexity than Redis
### Kafka
Best for: High-volume event streaming, real-time analytics, event sourcing.
Strengths:
- Handles millions of events per second
- Messages are stored (you can replay history)
- Perfect for event sourcing and audit logs
Limitations:
- Significant operational complexity
- Overkill for simple job queues
- Steeper learning curve
## Common Messaging Patterns

### 1. Work Queue (Task Distribution)
Distribute work across multiple workers. Each message is processed by exactly one worker.
```
Producer → [Queue] → Consumer 1
                   → Consumer 2
                   → Consumer 3
```
How it works: When a message arrives, the broker assigns it to one available consumer. If you have 1000 jobs and 10 workers, each handles ~100.
Use cases:
- Image processing (resize, compress)
- Email sending
- Report generation
- Video encoding
Why it matters: Scales horizontally. Need to process faster? Add more workers. No code changes needed.
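The distribution step can be sketched in memory. This is illustrative only — a real broker (BullMQ, RabbitMQ) assigns messages to available consumers for you, and the round-robin loop, worker names, and job IDs here are all made up for the example:

```javascript
// In-memory sketch of a work queue: each job goes to exactly one worker.
// A real broker does this assignment; round-robin here stands in for it.
const queue = [];
for (let i = 1; i <= 9; i++) queue.push({ jobId: i });

const workers = [
  { name: 'worker-1', processed: [] },
  { name: 'worker-2', processed: [] },
  { name: 'worker-3', processed: [] },
];

// Dispatch: every message is handed to exactly one consumer
let next = 0;
while (queue.length > 0) {
  const job = queue.shift();
  workers[next % workers.length].processed.push(job.jobId);
  next++;
}

// Each worker handled ~1/3 of the jobs; no job was processed twice
const total = workers.reduce((sum, w) => sum + w.processed.length, 0);
console.log(total); // 9
```

With 1000 jobs and 10 workers the same logic gives each worker roughly 100 jobs, which is the scaling property the pattern exists for.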
### 2. Publish/Subscribe
Broadcast a message to all interested subscribers. Each subscriber gets a copy.
```
Producer → [Exchange] → Queue 1 → Consumer A (Email service)
                      → Queue 2 → Consumer B (Analytics)
                      → Queue 3 → Consumer C (Notifications)
```
How it works: When an order is placed, the event goes to an exchange. The exchange copies it to every subscribed queue. Each service processes independently.
Use cases:
- Event broadcasting ("UserCreated", "OrderPlaced")
- Logging (all services send to a log aggregator)
- Real-time notifications
- Cache invalidation
Why it matters: Services are completely decoupled. Adding a new listener requires zero changes to the publisher.
### 3. Request/Reply
Synchronous-style communication over messages. Used when you need a response but want broker benefits (retries, load balancing).
```
Client → [Request Queue] → Server
       ← [Reply Queue]   ←
```
How it works: Client sends a message with a correlation ID and reply-to address. Server processes and sends response back. Client matches the response by correlation ID.
Use cases:
- RPC over messages (distributed function calls)
- Long-running operations with status updates
When to avoid: If you need immediate response, just use HTTP. This pattern adds complexity.
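The correlation-ID bookkeeping can be sketched as below. Real implementations send the request over a queue with a reply-to address and resolve asynchronously; this synchronous sketch only shows the matching logic, and the ID format is made up:

```javascript
// Request/reply sketch: the client matches replies to requests by correlationId.
const pending = new Map(); // correlationId -> original request

function sendRequest(payload) {
  const correlationId = 'req_' + pending.size; // illustrative ID scheme
  pending.set(correlationId, payload);
  return correlationId; // in a real system, this travels in message metadata
}

function handleReply(correlationId, reply) {
  if (!pending.has(correlationId)) return null; // stale or unknown reply, ignore
  pending.delete(correlationId);
  return reply;
}

const id = sendRequest({ n: 20 });
const result = handleReply(id, { doubled: 40 });
console.log(result); // { doubled: 40 }
```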
### 4. Event Sourcing
Store every event that ever happened. Rebuild current state by replaying events.
```
Events: [UserCreated] → [NameChanged] → [AddressUpdated]
                              ↓
              Replay all events = Current user state
```
Why it's powerful:
- Complete audit trail (what happened, when, by whom)
- Time travel (see system state at any point)
- Debugging (replay events to reproduce bugs)
Use cases:
- Financial systems (every transaction recorded)
- Compliance-heavy industries
- Collaborative editing
- Game state management
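The replay step from the diagram above is just a fold over the event log. A minimal sketch (event types follow the diagram; the field names and values are invented for illustration):

```javascript
// Event sourcing sketch: current state = replay of the full event history.
const events = [
  { type: 'UserCreated', name: 'Ada', address: null },
  { type: 'NameChanged', name: 'Ada Lovelace' },
  { type: 'AddressUpdated', address: '12 St James Square' },
];

function replay(events) {
  return events.reduce((state, event) => {
    switch (event.type) {
      case 'UserCreated':
        return { name: event.name, address: event.address };
      case 'NameChanged':
        return { ...state, name: event.name };
      case 'AddressUpdated':
        return { ...state, address: event.address };
      default:
        return state; // unknown events are ignored, enabling schema evolution
    }
  }, {});
}

const user = replay(events);
console.log(user); // { name: 'Ada Lovelace', address: '12 St James Square' }
```

Replaying a prefix of the log gives the state at that point in time, which is where the "time travel" property comes from.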
## Key Concepts

### Messages
A message is just data with context. Structure it for clarity:
```javascript
const message = {
  type: 'order.created',      // What happened
  payload: {                  // The data
    orderId: '123',
    userId: 'user_456',
    total: 99.99,
    items: [/* ... */],
  },
  metadata: {                 // Context
    timestamp: Date.now(),
    correlationId: 'req_789', // Trace across services
    version: 1,               // For schema evolution
  },
};
```
Best practices:
- Include a message type for routing and handling
- Use correlation IDs to trace requests across services
- Version your messages for backward compatibility
- Keep payloads self-contained (consumers shouldn't need to fetch more data)
### Acknowledgments
Acknowledging a message tells the broker "I've processed this, delete it."
```javascript
consumer.on('message', async (message) => {
  try {
    await processOrder(message.payload);
    message.ack();  // Success - remove from queue
  } catch (error) {
    message.nack(); // Failure - requeue or send to dead letter
  }
});
```
Why this matters: If your consumer crashes mid-processing without acking, the broker redelivers the message. This prevents data loss.
Important: Only ack after processing completes. Acking before processing means you lose the message if you crash.
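The redelivery guarantee can be simulated in a few lines. This is a sketch of the broker's behaviour, not any particular broker's API — the message shape and "crash" are invented for the example:

```javascript
// Sketch of ack-after-processing: an unacked (nacked) message is
// redelivered, so a crash mid-processing does not lose data.
const queue = [{ id: 'msg-1', deliveries: 0 }];
const processed = [];

function deliver(handler) {
  const message = queue.shift();
  if (!message) return;
  message.deliveries++;
  try {
    handler(message);
    // ack: processing finished, the broker drops the message
  } catch (err) {
    queue.push(message); // nack: the broker requeues it for redelivery
  }
}

// First delivery "crashes" mid-processing; the redelivery succeeds
let attempt = 0;
const handler = (msg) => {
  attempt++;
  if (attempt === 1) throw new Error('consumer crashed');
  processed.push(msg.id);
};
deliver(handler);
deliver(handler);
console.log(processed); // ['msg-1']
```

Had the handler acked before processing, the first crash would have silently dropped `msg-1`.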
### Dead Letter Queue (DLQ)
Messages that fail repeatedly need somewhere to go. A DLQ collects failed messages for investigation.
```
Main Queue → [Process] → Success
                 ↓
             [Retry 3x]
                 ↓
     Dead Letter Queue → Manual Review / Alerting
```
Why it matters:
- Prevents poison messages from blocking the queue forever
- Preserves failed messages for debugging
- Enables alerting on failures
What to do with DLQ messages:
- Investigate the root cause
- Fix the bug
- Replay the messages through the main queue
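The retry-then-park flow from the diagram can be sketched like this (the retry limit and the always-failing "poison" handler are illustrative; real brokers such as RabbitMQ do this via queue policies rather than application loops):

```javascript
// DLQ sketch: retry a failing message a bounded number of times,
// then park it in a dead letter queue instead of blocking the main queue.
const MAX_RETRIES = 3;
const deadLetterQueue = [];

function processWithRetries(message, handler) {
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      handler(message);
      return 'processed';
    } catch (err) {
      // failed attempt: fall through and retry
    }
  }
  deadLetterQueue.push(message); // preserved for investigation and later replay
  return 'dead-lettered';
}

// A "poison" message that always fails ends up in the DLQ
const outcome = processWithRetries({ id: 'bad-msg' }, () => {
  throw new Error('cannot parse payload');
});
console.log(outcome, deadLetterQueue.length); // dead-lettered 1
```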
## Choosing Your Pattern
- Simple background jobs? → Redis/BullMQ (easiest)
- Need guaranteed delivery? → RabbitMQ (most reliable)
- Processing millions of events? → Kafka (highest throughput)
- Want managed infrastructure? → AWS SQS/SNS (zero ops)
Start with the simplest option that meets your needs. You can always migrate to something more sophisticated later.
## Key Takeaways
- **Use for decoupling** - Services communicate through messages, not direct calls. Changes to one service don't break others.
- **Choose based on needs** - Simple → Redis, Reliable → RabbitMQ, Scale → Kafka. Don't over-engineer.
- **Handle failures properly** - Implement retries, dead letter queues, and idempotent consumers.
- **Acknowledge properly** - Only confirm processing after you've actually processed. Ack-then-process loses data.
- **Monitor your queues** - Track queue depth, processing time, and failure rates. Rising backlogs indicate problems.
- **Design for idempotency** - Messages may be delivered more than once. Processing the same message twice should be safe.
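The idempotency point can be sketched with a processed-ID set (in production this set usually lives in a database or Redis so it survives restarts; the message ID and side effect here are invented):

```javascript
// Idempotent consumer sketch: remember processed message IDs so a
// redelivered message does not run its side effects twice.
const processedIds = new Set();
let emailsSent = 0; // the side effect we must not repeat

function handleMessage(message) {
  if (processedIds.has(message.id)) return; // duplicate delivery, skip
  emailsSent++;
  processedIds.add(message.id);
}

// At-least-once delivery: the broker delivers the same message twice
handleMessage({ id: 'order-123' });
handleMessage({ id: 'order-123' });
console.log(emailsSent); // 1
```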
The goal: Build systems where individual component failures don't cascade. A message broker is your shock absorber between services.