Job Queues
Process tasks in the background with BullMQ, Agenda, and other job queue solutions.
Why Background Jobs?#
Some operations shouldn't make users wait. Consider a signup flow:
// Bad: User waits for everything
app.post('/signup', async (req, res) => {
const user = await User.create(req.body);
await sendWelcomeEmail(user); // 2-3 seconds
await generateAvatar(user); // 5 seconds
await notifySlack(user); // 1 second
await syncToCRM(user); // 2 seconds
res.json({ data: user }); // Total: 10+ seconds!
});
The user stares at a loading spinner for 10 seconds. If any of these operations fail, the whole signup fails. The user might retry, creating duplicate accounts.
The solution: Queue these tasks and respond immediately:
// Good: Respond immediately, process later
app.post('/signup', async (req, res) => {
const user = await User.create(req.body);
await queue.add('user.created', { userId: user.id });
res.status(201).json({ data: user }); // Instant response!
});
The user gets their response in milliseconds. Background workers handle the rest.
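On the other side, a worker picks up the 'user.created' job and does the slow work. A minimal sketch of that worker, assuming BullMQ (covered below) and a queue named 'signup'; the helper functions are the same ones from the example above:
// signupWorker.js - hedged sketch of the background half (BullMQ, covered below)
import { Worker } from 'bullmq';

const worker = new Worker('signup', async (job) => {
  if (job.name !== 'user.created') return;
  const user = await User.findById(job.data.userId);
  // The slow steps now run off the request path and can be retried if they fail
  await sendWelcomeEmail(user);
  await generateAvatar(user);
  await notifySlack(user);
  await syncToCRM(user);
}, { connection: { host: 'localhost', port: 6379 } });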
When to Use Background Jobs#
Good candidates for background processing:
- Email sending - Welcome emails, notifications, newsletters
- Image/video processing - Resizing, thumbnails, encoding
- Report generation - PDFs, exports, analytics
- External API calls - CRM sync, webhooks, third-party integrations
- Data processing - Imports, exports, bulk operations
- Scheduled tasks - Daily cleanup, reminders, subscription renewals
Keep synchronous:
- Authentication - Users need immediate feedback
- Critical validation - Don't let invalid data through
- Simple database operations - Already fast enough
Option 1: BullMQ (Recommended)#
BullMQ is Redis-based, feature-rich, and production-ready. It's the go-to choice for Node.js applications.
Why BullMQ?
- Reliable - Jobs survive server restarts
- Fast - Redis is extremely fast
- Feature-rich - Retries, delays, priorities, progress tracking
- Battle-tested - Used in production by many companies
npm install bullmq
Setting Up Queues#
// src/queues/index.js
import { Queue } from 'bullmq';
const connection = {
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT) || 6379,
};
// Create queues for different job types
export const emailQueue = new Queue('email', { connection });
export const imageQueue = new Queue('image-processing', { connection });
export const reportQueue = new Queue('reports', { connection });
Why separate queues? Different job types have different needs. Email jobs might need high concurrency, while video processing might need to run one at a time. Separate queues let you configure and scale them independently.
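For example, each queue gets its own worker with its own settings, so you can tune them independently. A hedged sketch (the handler names and concurrency values are illustrative):
import { Worker } from 'bullmq';

// Emails are cheap and I/O-bound: process many at once
const emailWorker = new Worker('email', processEmailJob, {
  connection,
  concurrency: 20,
});

// Image processing is CPU-heavy: one job at a time per worker process
const imageWorker = new Worker('image-processing', processImageJob, {
  connection,
  concurrency: 1,
});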
Adding Jobs#
// src/services/userService.js
import { emailQueue } from '../queues/index.js';
export async function createUser(data) {
const user = await User.create(data);
// Add job to queue
await emailQueue.add('welcome-email', {
userId: user.id,
email: user.email,
name: user.name,
}, {
attempts: 3, // Try up to 3 times in total before marking the job as failed
backoff: {
type: 'exponential',
delay: 1000, // 1s, 2s, 4s between retries
},
removeOnComplete: 100, // Keep last 100 completed jobs (for debugging)
removeOnFail: 1000, // Keep last 1000 failed jobs (for investigation)
});
return user;
}
Job options explained:
- attempts - Total number of tries (the first run counts) before the job is marked as failed
- backoff - How long to wait between retries (exponential means 1s, 2s, 4s...)
- removeOnComplete - Clean up successful jobs to save Redis memory
- removeOnFail - Keep failed jobs for debugging
Processing Jobs (Workers)#
Workers are separate processes that consume jobs from queues:
// src/workers/emailWorker.js
import { Worker } from 'bullmq';
import { sendEmail } from '../services/emailService.js';
const connection = {
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT) || 6379,
};
const worker = new Worker('email', async (job) => {
console.log(`Processing job ${job.id}: ${job.name}`);
switch (job.name) {
case 'welcome-email':
await sendWelcomeEmail(job.data);
break;
case 'password-reset':
await sendPasswordResetEmail(job.data);
break;
case 'notification':
await sendNotificationEmail(job.data);
break;
default:
throw new Error(`Unknown job type: ${job.name}`);
}
return { success: true };
}, { connection });
async function sendWelcomeEmail({ email, name }) {
await sendEmail({
to: email,
subject: 'Welcome!',
html: `<h1>Hello ${name}!</h1><p>Welcome to our platform.</p>`,
});
}
// Event handlers for monitoring
worker.on('completed', (job) => {
console.log(`Job ${job.id} completed`);
});
worker.on('failed', (job, error) => {
console.error(`Job ${job.id} failed:`, error.message);
// Alert on Slack, send to error tracking, etc.
});
worker.on('error', (error) => {
console.error('Worker error:', error);
});
export { worker };
Running Workers#
Workers run as separate processes. This is crucial - if a worker crashes or hangs, your API keeps running:
{
"scripts": {
"start": "node src/index.js",
"worker": "node src/workers/emailWorker.js",
"worker:all": "node src/workers/index.js"
}
}
# Terminal 1: API server
npm start
# Terminal 2: Workers (can run multiple instances)
npm run worker
In production: Use process managers (PM2) or container orchestration (Kubernetes) to run multiple worker instances for each queue type.
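The worker:all script points at a src/workers/index.js file that isn't shown above; a minimal sketch is simply an entry point that imports every worker so they all start in one process (the image and report workers are assumed to exist alongside the email worker):
// src/workers/index.js - hypothetical entry point that boots all workers
import './emailWorker.js';
import './imageWorker.js';  // assumed, built like emailWorker.js
import './reportWorker.js'; // assumed, built like emailWorker.js

console.log('Workers started');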
Job Options Reference#
await queue.add('job-name', data, {
// Retry configuration
attempts: 5, // Maximum number of attempts (the first run counts)
backoff: {
type: 'exponential', // 'exponential' or 'fixed'
delay: 1000, // Base delay in milliseconds
},
// Scheduling
delay: 60000, // Don't run until 60 seconds from now
// Priority (lower number = higher priority)
priority: 1, // Process this before priority 2 jobs
// Timeout: unlike Bull v3, BullMQ has no per-job timeout option;
// enforce time limits inside your processor if you need them
// Unique job (prevent duplicates)
jobId: `user-${userId}-welcome`, // If this ID exists, don't create another
// Cleanup
removeOnComplete: true, // Remove from Redis when done
removeOnFail: false, // Keep failed jobs for investigation
});
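As a usage example, several of these options combine naturally for a delayed job; a hedged sketch of a reminder email scheduled 24 hours out (the job name and payload are illustrative):
// Send a reminder one day from now; the deterministic jobId prevents duplicates
await emailQueue.add('trial-reminder', { userId: user.id }, {
  delay: 24 * 60 * 60 * 1000, // 24 hours
  attempts: 3,
  backoff: { type: 'exponential', delay: 1000 },
  jobId: `trial-reminder-${user.id}`,
});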
Scheduled/Recurring Jobs#
// Run every day at 9 AM
await queue.add('daily-report', {}, {
repeat: {
pattern: '0 9 * * *', // Cron syntax: minute hour day month weekday
},
});
// Run every 5 minutes
await queue.add('health-check', {}, {
repeat: {
every: 5 * 60 * 1000, // Every 5 minutes (in milliseconds)
},
});
// Remove a recurring job
await queue.removeRepeatableByKey(repeatableKey);
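The repeatableKey above comes from the queue itself: list the repeatable jobs and pull the key from the entry you want to remove. A small sketch:
// Look up the key for a repeatable job before removing it
const repeatableJobs = await queue.getRepeatableJobs();
const dailyReport = repeatableJobs.find((job) => job.name === 'daily-report');
if (dailyReport) {
  await queue.removeRepeatableByKey(dailyReport.key);
}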
Cron syntax quick reference:
* * * * *
│ │ │ │ │
│ │ │ │ └─ Day of week (0-7, 0 and 7 are Sunday)
│ │ │ └──── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └────────── Hour (0-23)
└───────────── Minute (0-59)
Examples:
0 9 * * * = Every day at 9:00 AM
0 0 * * 0 = Every Sunday at midnight
*/15 * * * * = Every 15 minutes
Job Progress#
For long-running jobs, report progress so users can see status:
// Worker: Report progress
const worker = new Worker('video', async (job) => {
await job.updateProgress(0);
await extractAudio();
await job.updateProgress(25);
await processVideo();
await job.updateProgress(50);
await encodeOutput();
await job.updateProgress(75);
const outputUrl = await uploadToStorage();
await job.updateProgress(100);
return { url: outputUrl };
}, { connection });
// API: Check progress
app.get('/jobs/:id/progress', async (req, res) => {
const job = await videoQueue.getJob(req.params.id);
if (!job) {
return res.status(404).json({ error: 'Job not found' });
}
res.json({
id: job.id,
progress: job.progress,
state: await job.getState(),
});
});
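For context, the endpoint above assumes a videoQueue created alongside the other queues, and that the client received the job ID when the work was queued; a hedged sketch of that enqueue side:
// src/queues/index.js (assumed addition)
export const videoQueue = new Queue('video', { connection });

// API: queue the work and return an ID the client can poll
app.post('/videos', async (req, res) => {
  const job = await videoQueue.add('transcode', { videoId: req.body.videoId });
  res.status(202).json({ jobId: job.id, status: 'queued' });
});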
Option 2: Agenda (MongoDB-based)#
If you're already using MongoDB and don't want to add Redis, Agenda is a good alternative:
npm install agenda
import Agenda from 'agenda';
const agenda = new Agenda({
db: { address: process.env.MONGODB_URI },
processEvery: '30 seconds', // How often to check for new jobs
});
// Define job handlers
agenda.define('send welcome email', async (job) => {
const { userId, email } = job.attrs.data;
await sendWelcomeEmail(email);
});
// Schedule a one-time job
await agenda.schedule('in 5 minutes', 'send welcome email', {
userId: user.id,
email: user.email,
});
// Define and schedule recurring job
agenda.define('daily cleanup', async () => {
await cleanupOldData();
});
await agenda.every('0 0 * * *', 'daily cleanup'); // Daily at midnight
// Start processing
await agenda.start();
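Agenda can also run a job as soon as possible instead of on a schedule; a small sketch reusing the same job definition:
// Queue the welcome email immediately rather than in 5 minutes
await agenda.now('send welcome email', {
  userId: user.id,
  email: user.email,
});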
When to choose Agenda:
- Already using MongoDB, don't want Redis
- Simpler use cases
- Smaller scale applications
Option 3: node-cron (Simple Scheduling)#
For simple scheduled tasks without job persistence:
npm install node-cron
import cron from 'node-cron';
// Run every day at midnight
cron.schedule('0 0 * * *', async () => {
console.log('Running daily cleanup...');
await cleanupOldSessions();
});
// Run every hour
cron.schedule('0 * * * *', async () => {
await syncExternalData();
});
// Run every Monday at 9 AM
cron.schedule('0 9 * * 1', async () => {
await sendWeeklyReport();
});
When to use node-cron:
- Simple scheduled tasks
- No need for retries or persistence
- Tasks are idempotent (safe to miss or double-run)
Warning: If your server restarts, scheduled tasks might be missed. node-cron doesn't persist jobs.
Comparison#
| Feature | BullMQ | Agenda | node-cron |
|---|---|---|---|
| Backend | Redis | MongoDB | In-memory |
| Persistence | Yes | Yes | No |
| Retries | Yes | Yes | No |
| Delayed jobs | Yes | Yes | No |
| Recurring | Yes | Yes | Yes |
| Progress tracking | Yes | No | No |
| Dashboard | Bull Board | Agendash | No |
| Distributed | Yes | Yes | No |
| Complexity | Medium | Medium | Low |
| Best for | Production queues | MongoDB shops | Simple cron |
Recommendation:
- BullMQ for most production applications
- Agenda if you're MongoDB-only and have simpler needs
- node-cron for simple scheduled tasks where missing a run is acceptable
Dashboard with Bull Board#
Monitor your queues with a web UI:
npm install @bull-board/express @bull-board/api
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');
createBullBoard({
queues: [
new BullMQAdapter(emailQueue),
new BullMQAdapter(imageQueue),
new BullMQAdapter(reportQueue),
],
serverAdapter,
});
app.use('/admin/queues', serverAdapter.getRouter());
// Visit http://localhost:3000/admin/queues
The dashboard shows:
- Queue depths (how many jobs are waiting)
- Processing rates
- Failed jobs with error messages
- Ability to retry or delete jobs
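Because the dashboard can retry and delete jobs, don't expose it publicly. A minimal sketch, assuming you already have some admin auth middleware (requireAdmin here is hypothetical):
// Mount the dashboard behind your existing admin auth
app.use('/admin/queues', requireAdmin, serverAdapter.getRouter());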
Best Practices#
1. Make Jobs Idempotent#
Jobs might run multiple times (retries, crashes). Make sure running the same job twice is safe:
// Bad: Sends duplicate email on retry
await sendEmail(user.email, 'Welcome!');
// Good: Check if already sent
const alreadySent = await EmailLog.findOne({ userId, type: 'welcome' });
if (!alreadySent) {
await sendEmail(user.email, 'Welcome!');
await EmailLog.create({ userId, type: 'welcome' });
}
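With BullMQ you can also reduce duplicates at enqueue time by giving the job a deterministic jobId (see the options reference above); the handler-level check still matters, because retries re-run the same handler. A sketch:
// Same jobId = BullMQ won't add a second copy while one already exists
await emailQueue.add('welcome-email', { userId: user.id }, {
  jobId: `welcome-${user.id}`,
});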
2. Keep Jobs Small#
Don't put too much data in the job. Store it in the database and pass IDs:
// Bad: Large payload
await queue.add('process', { users: arrayOf10000Users });
// Good: Pass ID, fetch in worker
await queue.add('process', { batchId: 'batch-123' });
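The worker then loads the heavy data itself; a hedged sketch (the Batch model and processUser helper are illustrative):
import { Worker } from 'bullmq';

// Worker: fetch the real data by ID instead of carrying it in the payload
const worker = new Worker('process', async (job) => {
  const batch = await Batch.findById(job.data.batchId); // hypothetical model
  for (const user of batch.users) {
    await processUser(user); // hypothetical per-user work
  }
}, { connection });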
3. Handle Failures#
Always handle the failed event and alert yourself:
worker.on('failed', async (job, error) => {
logger.error({ jobId: job.id, error: error.message });
// Alert on too many failures
if (job.attemptsMade >= job.opts.attempts) {
await alertSlack(`Job ${job.name} failed permanently: ${error.message}`);
}
});
Key Takeaways#
- Don't block the response - Queue slow tasks, respond immediately.
- BullMQ for production - Feature-rich, reliable, well-maintained.
- Run workers separately - Workers should be separate processes from your API.
- Always retry - Network calls fail. Default to 3 retries with exponential backoff.
- Make jobs idempotent - Jobs might run multiple times. Design for it.
- Monitor your queues - Use Bull Board or similar. Rising queue depth means problems.
The Pattern
// API: Queue the job, respond fast
const job = await queue.add('task', data);
res.json({ jobId: job.id, status: 'queued' });
// Worker: Process in the background
worker.on('completed', (job) => { /* success */ });
worker.on('failed', (job, err) => { /* handle/alert */ });
Users get instant responses. Work happens reliably in the background.