# Redis
Learn how to use Redis for caching, sessions, and real-time features - the secret weapon behind fast applications.
## The Sticky Note System
Imagine you work at a busy office. Every time someone asks "What's Sarah's phone number?", you walk to the filing cabinet, search through folders, find her file, and read the number. Takes 30 seconds each time.
Now imagine you're smart. The first time someone asks, you find the answer and write it on a sticky note. Next time? You just glance at the sticky note. 1 second.
That's Redis. It's a super-fast sticky note system for your application - keeping frequently accessed data right where you need it.
## Why Redis is Ridiculously Fast
Redis stores everything in memory (RAM), not on disk. That's why it's fast - reading from RAM is like reading a sticky note on your desk. Reading from a database is like walking to a filing cabinet in another building.
How fast? Under 1 millisecond for most operations. Your database might take 10-50ms for the same data.
**The Tradeoff**
RAM is fast but limited. You can't store everything in Redis. Use it for frequently accessed data that benefits from speed: user sessions, cached queries, rate limiting counters.
## Getting Started
First, install Redis locally or use a cloud service like Upstash or Redis Cloud. Then add the Node.js client:
npm install redis
### Connecting to Redis
import { createClient } from 'redis';
const redis = createClient({
url: process.env.REDIS_URL || 'redis://localhost:6379'
});
// Handle connection events
redis.on('error', (err) => console.error('Redis error:', err));
redis.on('connect', () => console.log('Redis connected'));
// Connect before using
async function startApp() {
await redis.connect();
console.log('Ready to use Redis!');
}
export { redis };
Your connection URL format:
redis://localhost:6379 # Local
redis://user:password@host:port # Remote with auth
## The Five Data Types
Redis isn't just a key-value store. It has five core data types, each designed for specific use cases.
### Strings: The Basics
Strings are the simplest - store a value, get it back:
// Set a value
await redis.set('greeting', 'Hello, World!');
// Get it back
const greeting = await redis.get('greeting');
console.log(greeting); // "Hello, World!"
// Set with expiration (very important!)
await redis.set('temp-token', 'abc123', { EX: 3600 }); // Expires in 1 hour
// Set only if key doesn't exist (great for locks)
const wasSet = await redis.set('lock:resource', '1', { NX: true });
// Returns 'OK' if set, null if key already existed
// Counters (atomic, no race conditions!)
await redis.set('visitors', '0');
await redis.incr('visitors'); // Now 1
await redis.incr('visitors'); // Now 2
await redis.incrBy('visitors', 10); // Now 12
The EX option is crucial. Without it, your Redis memory slowly fills up forever.
### Hashes: Storing Objects
Hashes are like objects - they have fields:
// Store a user profile
await redis.hSet('user:123', {
name: 'Sarah Chen',
email: 'sarah@example.com',
role: 'admin',
loginCount: '0'
});
// Get the whole thing
const user = await redis.hGetAll('user:123');
// { name: 'Sarah Chen', email: 'sarah@example.com', ... }
// Get just one field
const name = await redis.hGet('user:123', 'name');
// 'Sarah Chen'
// Update one field
await redis.hSet('user:123', 'role', 'superadmin');
// Increment a numeric field
await redis.hIncrBy('user:123', 'loginCount', 1);
Why use hashes instead of stringified JSON? You can update individual fields without reading and rewriting the whole object.
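To make the difference concrete, here's a small sketch using plain `Map`s as an in-memory stand-in for the two storage styles (the real node-redis calls are async and would be awaited):

```javascript
// Approach 1: the whole object as one JSON string.
// Updating one field means read -> parse -> mutate -> stringify -> write.
const stringStore = new Map();
stringStore.set('user:123', JSON.stringify({ name: 'Sarah Chen', role: 'admin' }));

const parsed = JSON.parse(stringStore.get('user:123'));
parsed.role = 'superadmin';
stringStore.set('user:123', JSON.stringify(parsed));

// Approach 2: a hash - fields live separately, so one field
// can be written without parsing or touching the others.
const hashStore = new Map();
hashStore.set('user:123', new Map([['name', 'Sarah Chen'], ['role', 'admin']]));
hashStore.get('user:123').set('role', 'superadmin'); // one write, no parse

console.log(hashStore.get('user:123').get('role')); // 'superadmin'
```

The JSON round-trip also has a race: two concurrent updaters can each read the old blob and overwrite the other's change, while two `HSET`s on different fields never conflict.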
### Lists: Queues and Recent Items
Lists are ordered collections. Add to either end, read from either end:
// Add items to the left (front)
await redis.lPush('recent:searches', 'mongodb tutorial');
await redis.lPush('recent:searches', 'redis guide');
// Now the list is: ['redis guide', 'mongodb tutorial']
// Get items (0 = first, -1 = last)
const recent = await redis.lRange('recent:searches', 0, 4);
// Get first 5 items
// Keep only the 10 most recent
await redis.lTrim('recent:searches', 0, 9);
// Pop from the right (oldest item)
const oldest = await redis.rPop('recent:searches');
Perfect for: activity feeds, recent searches, job queues.
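As a sketch of the job-queue idea: producers push to the left, a worker pops from the right, so jobs come out in arrival order (FIFO). A tiny synchronous stand-in for `lPush`/`rPop` (the real client calls are async):

```javascript
// In-memory stand-in for two Redis list commands
const lists = new Map();
const lPush = (key, value) => {
  if (!lists.has(key)) lists.set(key, []);
  lists.get(key).unshift(value); // add to the left
  return lists.get(key).length;
};
const rPop = (key) => (lists.get(key) ?? []).pop() ?? null; // take from the right

// Producer: enqueue jobs
lPush('jobs:email', JSON.stringify({ to: 'a@example.com' }));
lPush('jobs:email', JSON.stringify({ to: 'b@example.com' }));

// Worker: dequeue the oldest job first
const job = JSON.parse(rPop('jobs:email'));
console.log(job.to); // 'a@example.com'
```

With a real client you'd typically use the blocking `BRPOP` (`brPop` in node-redis) so a worker waits for the next job instead of polling an empty list.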
### Sets: Unique Collections
Sets store unique values. No duplicates allowed:
// Track followers
await redis.sAdd('user:123:followers', 'user:456');
await redis.sAdd('user:123:followers', 'user:789');
await redis.sAdd('user:123:followers', 'user:456'); // No effect, already there
// Check if someone follows
const isFollowing = await redis.sIsMember('user:123:followers', 'user:456');
// true
// Get all followers
const followers = await redis.sMembers('user:123:followers');
// ['user:456', 'user:789']
// Find mutual followers (intersection)
const mutual = await redis.sInter(
'user:123:followers',
'user:456:followers'
);
Perfect for: tags, followers/following, tracking unique visitors.
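For instance, counting unique daily visitors takes just one set per day: duplicates are ignored automatically, and the set's size is the answer. A sketch with an in-memory stand-in for `sAdd`/`sCard` (hypothetical key names; real calls are async):

```javascript
// In-memory stand-in for two Redis set commands
const sets = new Map();
const sAdd = (key, member) => {
  if (!sets.has(key)) sets.set(key, new Set());
  const s = sets.get(key);
  const added = s.has(member) ? 0 : 1; // Redis returns how many were added
  s.add(member);
  return added;
};
const sCard = (key) => (sets.get(key) ?? new Set()).size;

const day = '2024-05-01'; // in practice: new Date().toISOString().slice(0, 10)
sAdd(`visitors:${day}`, 'user:123');
sAdd(`visitors:${day}`, 'user:456');
sAdd(`visitors:${day}`, 'user:123'); // duplicate, ignored

console.log(sCard(`visitors:${day}`)); // 2
```

With real Redis you'd also put a TTL on each day's key so old sets clean themselves up, and `SCARD` gives you the count without transferring the members.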
### Sorted Sets: Leaderboards
Sorted sets combine uniqueness with scores. Items are automatically sorted:
// Add players with scores
await redis.zAdd('leaderboard', [
{ score: 1500, value: 'player:alice' },
{ score: 2100, value: 'player:bob' },
{ score: 1800, value: 'player:charlie' }
]);
// Get top 10 (highest scores first)
const topPlayers = await redis.zRange('leaderboard', 0, 9, { REV: true });
// ['player:bob', 'player:charlie', 'player:alice']
// Get top 10 with scores
const topWithScores = await redis.zRangeWithScores('leaderboard', 0, 9, { REV: true });
// [{ value: 'player:bob', score: 2100 }, ...]
// Get someone's rank
const rank = await redis.zRevRank('leaderboard', 'player:alice');
// 2 (0-indexed, so 3rd place)
// Update a score
await redis.zIncrBy('leaderboard', 500, 'player:alice');
// Now alice has 2000 points
Perfect for: game leaderboards, "most viewed" lists, time-based rankings.
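A common follow-up is showing a player the entries around their own position. With node-redis you'd feed the result of `zRevRank` into `zRange` with `{ REV: true }`; the only fiddly part is the window math, sketched here as a hypothetical helper:

```javascript
// Given a 0-indexed rank, compute the start/stop indices of the
// `size`-entry window centred on it, clamped at the top of the board.
function windowAround(rank, size = 5) {
  const start = Math.max(0, rank - Math.floor(size / 2));
  return { start, stop: start + size - 1 };
}

console.log(windowAround(0));  // { start: 0, stop: 4 } - top player
console.log(windowAround(50)); // { start: 48, stop: 52 }
```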
## Real-World Use Cases

### Caching Database Queries
The most common use case. Query the database once, serve from cache thousands of times:
async function getUser(id) {
const cacheKey = `user:${id}`;
// Check cache first
const cached = await redis.get(cacheKey);
if (cached) {
console.log('Cache hit!');
return JSON.parse(cached);
}
// Cache miss - query database
console.log('Cache miss, querying database...');
const user = await User.findById(id);
if (user) {
// Store in cache for 5 minutes
await redis.set(cacheKey, JSON.stringify(user), { EX: 300 });
}
return user;
}
The first request takes 50ms (database query). The next 10,000 requests take 1ms each (cache hit).
### Invalidating the Cache
When data changes, delete the cached version:
async function updateUser(id, updates) {
// Update database
const user = await User.findByIdAndUpdate(id, updates, { new: true });
// Invalidate cache
await redis.del(`user:${id}`);
return user;
}
This is called "cache invalidation" - making sure stale data doesn't get served.
### A Reusable Caching Pattern
async function withCache(key, ttlSeconds, fetchFn) {
// Try cache
const cached = await redis.get(key);
if (cached) {
return JSON.parse(cached);
}
// Fetch fresh data
const data = await fetchFn();
// Cache it (if we got something)
if (data) {
await redis.set(key, JSON.stringify(data), { EX: ttlSeconds });
}
return data;
}
// Usage
const user = await withCache(
`user:${id}`,
300, // 5 minutes
() => User.findById(id)
);
const posts = await withCache(
`trending:posts`,
60, // 1 minute
() => Post.find().sort({ views: -1 }).limit(10)
);
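One pitfall with a helper like `withCache` is key construction: the same query with its parameters in a different order must produce the same key, or you end up caching the same data twice. A hypothetical `cacheKey` helper that sorts parameters first:

```javascript
// Build a deterministic cache key from a prefix and query params.
// Sorting the keys means argument order no longer matters.
function cacheKey(prefix, params = {}) {
  const parts = Object.keys(params)
    .sort()
    .map((k) => `${k}=${params[k]}`);
  return parts.length ? `${prefix}:${parts.join(':')}` : prefix;
}

console.log(cacheKey('posts', { limit: 10, tag: 'redis' }));
// 'posts:limit=10:tag=redis'
console.log(cacheKey('posts', { tag: 'redis', limit: 10 }));
// same key, despite the different argument order
```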
### Rate Limiting
Don't let users hammer your API. Count requests per time window:
async function rateLimiter(req, res, next) {
const ip = req.ip;
const key = `ratelimit:${ip}`;
const maxRequests = 100;
const windowSeconds = 60;
// Increment counter
const requests = await redis.incr(key);
// Set expiration on first request
if (requests === 1) {
await redis.expire(key, windowSeconds);
}
// Check if over limit
if (requests > maxRequests) {
const ttl = await redis.ttl(key);
return res.status(429).json({
error: 'Too many requests',
retryAfter: ttl
});
}
// Add helpful headers
res.set('X-RateLimit-Limit', maxRequests);
res.set('X-RateLimit-Remaining', Math.max(0, maxRequests - requests));
next();
}
app.use('/api', rateLimiter);
This is atomic - even if 100 requests arrive simultaneously, each gets a unique count.
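One subtle gap in the middleware above: if the process dies between `INCR` and `EXPIRE`, the counter key never expires. A common variant avoids depending on `EXPIRE` for correctness by baking the window start into the key itself, so each new time window naturally begins under a fresh key (a TTL then becomes cleanup, not correctness). A sketch of the key computation as a hypothetical helper:

```javascript
// Fixed-window rate-limit key: ratelimit:<ip>:<windowStartSeconds>.
// All requests in the same window hit the same key; the next window
// automatically uses a different one.
function windowKey(ip, windowSeconds, now = Date.now()) {
  const windowStart = Math.floor(now / 1000 / windowSeconds) * windowSeconds;
  return `ratelimit:${ip}:${windowStart}`;
}

const key1 = windowKey('1.2.3.4', 60, 999_960_000);   // start of a minute
const key2 = windowKey('1.2.3.4', 60, 1_000_019_000); // 59s later, same window
const key3 = windowKey('1.2.3.4', 60, 1_000_020_000); // next window
console.log(key1 === key2, key1 === key3); // true false
```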
### Session Storage
HTTP is stateless, so sessions are how your app remembers who a user is between requests:
import session from 'express-session';
import RedisStore from 'connect-redis';
app.use(session({
store: new RedisStore({ client: redis }),
secret: process.env.SESSION_SECRET,
resave: false,
saveUninitialized: false,
cookie: {
secure: process.env.NODE_ENV === 'production',
httpOnly: true,
maxAge: 24 * 60 * 60 * 1000 // 24 hours
}
}));
// Now you can use sessions
app.post('/login', async (req, res) => {
const user = await authenticateUser(req.body);
req.session.userId = user.id;
req.session.role = user.role;
res.json({ success: true });
});
app.get('/profile', async (req, res) => {
if (!req.session.userId) {
return res.status(401).json({ error: 'Not logged in' });
}
// User is authenticated
});
Why Redis for sessions? Sessions need to be fast (every request checks them) and can be stored in memory (they're temporary anyway).
### Real-Time with Pub/Sub
Redis can broadcast messages between different parts of your system:
// Create a separate connection for subscribing
// (subscribers can't run other commands)
const subscriber = redis.duplicate();
await subscriber.connect();
// Subscribe to a channel
await subscriber.subscribe('notifications', (message) => {
const data = JSON.parse(message);
console.log('Received notification:', data);
// Broadcast to WebSocket clients, etc.
});
// Publish from anywhere in your app
async function notifyUser(userId, message) {
await redis.publish('notifications', JSON.stringify({
userId,
message,
timestamp: Date.now()
}));
}
// In another part of your app
await notifyUser('123', 'You have a new follower!');
Perfect for: chat apps, live notifications, real-time dashboards.
### Distributed Locks
When multiple servers might do the same work, use locks:
async function acquireLock(resource, ttlSeconds = 30) {
const lockKey = `lock:${resource}`;
const lockId = `${Date.now()}-${Math.random()}`;
// NX = only set if not exists
// EX = auto-expire (safety net)
const acquired = await redis.set(lockKey, lockId, {
NX: true,
EX: ttlSeconds
});
return acquired ? lockId : null;
}
async function releaseLock(resource, lockId) {
const lockKey = `lock:${resource}`;
// Only release if we own the lock
// Use a Lua script for atomicity
const script = `
if redis.call("get", KEYS[1]) == ARGV[1] then
return redis.call("del", KEYS[1])
else
return 0
end
`;
await redis.eval(script, {
keys: [lockKey],
arguments: [lockId]
});
}
// Usage
async function processPayment(orderId) {
const lockId = await acquireLock(`payment:${orderId}`);
if (!lockId) {
console.log('Payment already being processed');
return;
}
try {
// Only one server can be here at a time
await chargeCard(orderId);
await updateOrder(orderId);
} finally {
await releaseLock(`payment:${orderId}`, lockId);
}
}
The TTL is a safety net - if your server crashes mid-process, the lock auto-releases.
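The `NX` flag is doing the heavy lifting here: only the first `SET` wins, and everyone else gets `null` back. A tiny in-memory stand-in to illustrate the semantics (the real call is async):

```javascript
// In-memory stand-in for SET ... NX: set only if the key doesn't exist
const store = new Map();
const setNX = (key, value) => {
  if (store.has(key)) return null; // key exists -> lock not acquired
  store.set(key, value);
  return 'OK';
};

const first = setNX('lock:payment:42', 'server-A');
const second = setNX('lock:payment:42', 'server-B'); // loses the race
console.log(first, second); // OK null
```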
## Key Naming Conventions
Use colons : to create namespaces:
// Good key names
'user:123' // User with ID 123
'user:123:profile' // User 123's profile
'cache:posts:trending' // Cached trending posts
'ratelimit:192.168.1.1' // Rate limit for an IP
'session:abc123' // Session data
'lock:payment:order:456' // Lock for processing order 456
This makes keys readable and avoids collisions.
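If keys are assembled in many places, it's easy for one template string to drift from the convention. A hypothetical helper that centralizes key building (and keeps user-supplied values from injecting extra `:` segments):

```javascript
// Join key parts with colons, escaping any colons inside a part
// so 'a:b' can't masquerade as two namespace segments.
function key(...parts) {
  return parts
    .map(String)
    .map((p) => p.replace(/:/g, '_'))
    .join(':');
}

console.log(key('user', 123, 'profile'));          // 'user:123:profile'
console.log(key('lock', 'payment', 'order', 456)); // 'lock:payment:order:456'
```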
## Handling Failures Gracefully
Redis going down shouldn't crash your app. Build fallbacks:
async function getUserWithFallback(id) {
try {
// Try cache first
const cached = await redis.get(`user:${id}`);
if (cached) return JSON.parse(cached);
} catch (error) {
// Redis is down - log and continue
console.error('Redis error:', error.message);
}
// Fallback to database
return await User.findById(id);
}
For critical operations, wrap Redis calls in try-catch and have a plan B.
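The same idea generalizes to a small wrapper that treats any Redis error as a cache miss. A sketch with a stand-in client that always throws, just to exercise the fallback path (a real client's `get` is async too):

```javascript
// Return the cached value, or null on any Redis error, so callers
// can always fall back to the database.
async function safeGet(client, key) {
  try {
    return await client.get(key);
  } catch (error) {
    console.error('Redis error:', error.message);
    return null;
  }
}

// Stand-in for a client whose connection is down
const downClient = {
  get: async () => { throw new Error('connection refused'); }
};

safeGet(downClient, 'user:123').then((value) => {
  console.log(value); // null - the app keeps working
});
```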
## Quick Reference
| Data Type | Use Case | Key Commands |
|---|---|---|
| String | Cache, counters, simple values | SET, GET, INCR |
| Hash | Objects, profiles | HSET, HGET, HGETALL |
| List | Queues, recent items | LPUSH, RPOP, LRANGE |
| Set | Tags, unique items | SADD, SMEMBERS, SINTER |
| Sorted Set | Leaderboards, rankings | ZADD, ZRANGE, ZREVRANK |
## Key Takeaways
Redis is simple but incredibly useful:
- Start with caching - Put frequently-accessed data in Redis
- Always set TTLs - Memory fills up if keys never expire
- Use the right data type - Hashes for objects, sorted sets for rankings
- Handle failures - Redis down shouldn't mean app down
- Name keys well - Use colons to namespace: `entity:id:attribute`
**The Redis Mindset**
Ask yourself: "Do I fetch this data often? Is it okay if it's slightly stale? Does it benefit from being fast?" If yes to all three, cache it in Redis.