# Fix: Redis OOM command not allowed when used memory > maxmemory
## Quick Answer

How to fix the Redis "OOM command not allowed when used memory > 'maxmemory'" error caused by low memory limits, missing eviction policies, large keys, and memory fragmentation.
## The Error
Your application gets this error from Redis:
```
OOM command not allowed when used memory > 'maxmemory'.
```

Or from client libraries:

```
# Python (redis-py)
redis.exceptions.ResponseError: OOM command not allowed when used memory > 'maxmemory'.

# Node.js
ReplyError: OOM command not allowed when used memory > 'maxmemory'.

# Java (Lettuce)
io.lettuce.core.RedisCommandExecutionException: OOM command not allowed when used memory > 'maxmemory'.
```

Redis has reached its configured memory limit and refuses to accept write commands. Read commands still work, but any command that would increase memory usage is rejected.
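If a cache write is best-effort, the application can treat OOM as a soft failure instead of crashing. A minimal sketch, assuming a redis-py-style client (the helper names are ours; in redis-py the error surfaces as `redis.exceptions.ResponseError`):

```python
def is_oom_error(exc: Exception) -> bool:
    # Redis includes this exact phrase in the error reply
    return "OOM command not allowed" in str(exc)

def best_effort_cache_set(client, key, value, ttl=3600):
    """Try to cache; skip silently if Redis is out of memory."""
    try:
        client.set(key, value, ex=ttl)
        return True
    except Exception as exc:  # redis.exceptions.ResponseError in practice
        if is_oom_error(exc):
            return False  # cache full: serve without caching, alert separately
        raise
```

This keeps the application serving traffic (uncached) while you apply one of the fixes below.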
## Why This Happens
Redis stores all data in memory. When `maxmemory` is configured and memory usage exceeds this limit, Redis must either reject new writes or evict existing keys, depending on the `maxmemory-policy`.

If the eviction policy is `noeviction` (the default), Redis rejects write commands when memory is full. No data is lost, but your application cannot write new data.
Common causes:
- `maxmemory` set too low. The data set has grown beyond the configured limit.
- `maxmemory-policy` set to `noeviction`. Redis cannot free memory by removing keys.
- Memory leak in application. Keys are added but never expired or deleted.
- Large keys. A few keys consume most of the memory (large lists, sets, or hashes).
- Memory fragmentation. Redis uses more OS memory than the logical data size.
- No TTL on keys. Keys persist forever, accumulating over time.
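The decision Redis makes can be sketched as a simple predicate (the helper name is ours, but the logic mirrors the server's check):

```python
def writes_blocked(policy: str, used_memory: int, maxmemory: int) -> bool:
    """Rough sketch of when Redis rejects memory-consuming commands.

    With maxmemory = 0 (no limit) writes always succeed; with an
    eviction policy other than noeviction, Redis tries to evict keys
    instead of failing (it only errors if eviction cannot free enough).
    """
    if maxmemory == 0:            # no limit configured
        return False
    if used_memory <= maxmemory:  # still under the limit
        return False
    return policy == "noeviction"

# A 4 GB limit that has been exceeded under the default policy:
print(writes_blocked("noeviction", 5 * 1024**3, 4 * 1024**3))   # True
print(writes_blocked("allkeys-lru", 5 * 1024**3, 4 * 1024**3))  # False: evicts instead
```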
## Fix 1: Increase maxmemory
The quickest fix if you have available server memory:
Check current memory usage:
```shell
redis-cli INFO memory
# used_memory_human:3.50G
# maxmemory_human:4.00G
# maxmemory_policy:noeviction
```

Increase at runtime (no restart needed):

```shell
redis-cli CONFIG SET maxmemory 8gb
```

In redis.conf (permanent):

```
maxmemory 8gb
```

Check available system memory first:
```shell
free -h
# Make sure Redis maxmemory leaves enough for the OS and other processes
```

Pro Tip: Set `maxmemory` to no more than 75% of available RAM. Redis needs extra memory for fork operations (RDB saves, AOF rewrites), background processes, and output buffers. If `maxmemory` is too close to total RAM, the OS might OOM-kill the Redis process.
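The 75% rule of thumb can be turned into a quick calculation (a sketch; the helper name is ours):

```python
def recommended_maxmemory(total_ram_bytes: int, headroom: float = 0.25) -> int:
    """Suggest a maxmemory value that leaves headroom for fork
    copy-on-write during RDB/AOF saves, client buffers, and the OS."""
    if not 0 < headroom < 1:
        raise ValueError("headroom must be a fraction between 0 and 1")
    return int(total_ram_bytes * (1 - headroom))

# On a 16 GB host, leave 25% free:
gib = 1024 ** 3
print(recommended_maxmemory(16 * gib) // gib)  # 12
```

Hosts that also run other services need proportionally more headroom.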
## Fix 2: Set an Eviction Policy
Change the `maxmemory-policy` so Redis can free memory by removing keys:
```shell
redis-cli CONFIG SET maxmemory-policy allkeys-lru
```

Available policies:
| Policy | Description |
|---|---|
| `noeviction` | Return error on writes (default) |
| `allkeys-lru` | Remove least recently used keys (recommended for caches) |
| `allkeys-lfu` | Remove least frequently used keys |
| `allkeys-random` | Remove random keys |
| `volatile-lru` | Remove least recently used keys that have a TTL set |
| `volatile-lfu` | Remove least frequently used keys that have a TTL set |
| `volatile-ttl` | Remove keys with the shortest TTL first |
| `volatile-random` | Remove random keys that have a TTL |
For caching workloads:
```
# redis.conf
maxmemory-policy allkeys-lru
```

`allkeys-lru` is the best choice for caches: it removes the least recently accessed keys to make room for new ones.
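To build intuition for what LRU eviction does, here is a toy in-memory simulation (illustrative only; Redis actually uses an approximated LRU based on sampling a few keys, not an exact LRU):

```python
from collections import OrderedDict

class ToyLRUCache:
    """Exact LRU eviction; Redis approximates this by sampling keys."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

cache = ToyLRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touch "a", so "b" becomes the LRU entry
cache.set("c", 3)      # over capacity: evicts "b"
print(list(cache.data))  # ['a', 'c']
```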
For session stores:
```
maxmemory-policy volatile-ttl
```

This removes only keys with TTLs, preferring those expiring soonest. Keys without TTLs are never evicted.
Common Mistake: Using `volatile-*` policies when most keys do not have a TTL. These policies only evict keys with an expiration set. If no keys have TTLs, Redis behaves like `noeviction` and still returns OOM errors.
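Before choosing a `volatile-*` policy, it is worth checking what fraction of your keys actually carry a TTL. A sketch that works on sampled TTL values (the function is ours; feed it results from `redis-cli TTL` or redis-py's `ttl()`):

```python
def ttl_coverage(sampled_ttls: list) -> float:
    """Fraction of sampled, existing keys that have an expiration set.

    TTL semantics: -2 = key missing, -1 = no expiration, >= 0 = seconds left.
    """
    existing = [t for t in sampled_ttls if t != -2]
    if not existing:
        return 0.0
    with_ttl = [t for t in existing if t >= 0]
    return len(with_ttl) / len(existing)

# Mostly persistent keys: a volatile-* policy would barely evict anything
print(ttl_coverage([-1, -1, -1, 3600]))  # 0.25
```

If coverage is low, prefer an `allkeys-*` policy or start setting TTLs (Fix 4).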
## Fix 3: Find and Remove Large Keys
A few large keys might be consuming most of the memory:
Scan for large keys:
```shell
redis-cli --bigkeys
# Shows the largest key of each type:
# Biggest string found 'session:abc123' has 15728640 bytes
# Biggest list found 'events:queue' has 2500000 items
```

Get memory usage of specific keys:
```shell
redis-cli MEMORY USAGE mykey
# (integer) 15728768   (bytes used by this key)
```

Find keys by pattern and check their size:
```shell
redis-cli --scan --pattern "cache:*" | head -20
```

Delete large keys safely (non-blocking):
```shell
# UNLINK is async and non-blocking (Redis 4.0+)
redis-cli UNLINK large-key-name

# DEL is synchronous and blocks Redis for large keys
# Avoid DEL on large keys in production
```

Trim large lists:
```shell
# Keep only the last 10000 items
redis-cli LTRIM events:queue -10000 -1
```

## Fix 4: Set TTL on Keys
Keys without expiration accumulate forever:
Check keys without TTL:
```shell
redis-cli TTL mykey
# -1 means no expiration
# -2 means the key doesn't exist
# a positive number is seconds remaining
```

Set TTL when creating keys:
```python
# Python
redis_client.setex("cache:user:123", 3600, user_data)   # Expires in 1 hour
redis_client.set("cache:user:123", user_data, ex=3600)  # Same thing
```

```javascript
// Node.js
await redis.set("cache:user:123", userData, "EX", 3600);
```

```go
// Go
rdb.Set(ctx, "cache:user:123", userData, time.Hour)
```

Add TTL to existing keys:
```shell
redis-cli EXPIRE cache:user:123 3600
```

Find keys without TTL and set defaults:
```shell
# Bash script to add a TTL to all keys matching a pattern
redis-cli --scan --pattern "cache:*" | while read -r key; do
  ttl=$(redis-cli TTL "$key")
  if [ "$ttl" = "-1" ]; then
    redis-cli EXPIRE "$key" 86400  # 24 hours
  fi
done
```

## Fix 5: Optimize Data Structures
Use more memory-efficient data structures:
Use hashes instead of individual keys for small objects:
```
# Inefficient: one key per field
SET user:123:name "Alice"
SET user:123:email "alice@example.com"
SET user:123:age "30"
# ~3 keys x per-key overhead

# Efficient: one hash
HSET user:123 name "Alice" email "alice@example.com" age "30"
# 1 key, compact encoding for small hashes
```

Enable compact (ziplist/listpack) encoding for small data structures:
```
# redis.conf
# Small hashes use a compact encoding (ziplist; renamed listpack
# and configured via hash-max-listpack-* in Redis 7+)
hash-max-ziplist-entries 128
hash-max-ziplist-value 64
# Lists: -2 caps each list node at 8 KB
list-max-ziplist-size -2
# Integer-only sets stay in a compact intset up to this many elements
set-max-intset-entries 512
```

Compress large string values:
```python
import zlib

import redis

r = redis.Redis()

# Compress before storing
data = zlib.compress(large_json_string.encode())
r.set("data:large", data, ex=3600)

# Decompress when reading
compressed = r.get("data:large")
original = zlib.decompress(compressed).decode()
```

## Fix 6: Fix Memory Fragmentation
Redis might use more OS memory than the logical data size:
```shell
redis-cli INFO memory
# mem_fragmentation_ratio:1.8
# A ratio > 1.5 indicates significant fragmentation
```

Enable active defragmentation (Redis 4.0+):
```
# redis.conf
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
```
active-defrag-threshold-upper 100Or restart Redis to eliminate fragmentation (requires persistence configured).
Use jemalloc (the default allocator, but verify):
```shell
redis-cli INFO memory
# mem_allocator:jemalloc-5.3.0
```

## Fix 7: Monitor Memory Usage
Set up proactive monitoring:
```shell
# Current memory stats
redis-cli INFO memory

# Key metrics to monitor:
# used_memory: total allocated by Redis
# used_memory_rss: total memory from the OS perspective
# maxmemory: configured limit
# evicted_keys: number of keys evicted (if using an eviction policy)
# mem_fragmentation_ratio: used_memory_rss / used_memory
```

Set up alerts:
```python
import redis

r = redis.Redis()
info = r.info("memory")

used_mb = info["used_memory"] / 1024 / 1024
max_mb = info["maxmemory"] / 1024 / 1024

if max_mb > 0:  # maxmemory = 0 means no limit; avoid dividing by zero
    usage_pct = (used_mb / max_mb) * 100
    if usage_pct > 80:
        # send_alert is your alerting hook (PagerDuty, Slack, email, ...)
        send_alert(f"Redis memory at {usage_pct:.1f}%: {used_mb:.0f}MB / {max_mb:.0f}MB")
```

## Fix 8: Scale Redis
When a single Redis instance is not enough:
Redis Cluster (horizontal scaling):
```shell
# Data is sharded across multiple nodes
redis-cli --cluster create node1:6379 node2:6379 node3:6379 \
  --cluster-replicas 1
```

Read replicas (offload reads):
```
# On the replica
replicaof primary-host 6379
```

Application-level sharding:
```python
import zlib

# Route keys to different Redis instances.
# Use a stable hash: Python's built-in hash() is randomized per process.
def get_redis(key: str):
    shard = zlib.crc32(key.encode()) % len(redis_instances)
    return redis_instances[shard]
```

## Still Not Working?
Check for client output buffer limits. Large MONITOR or Pub/Sub connections consume memory:
```shell
redis-cli CLIENT LIST
# Check for clients with large output buffers (omem)
```

Check for Lua script memory. Lua scripts can allocate significant memory inside Redis.
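`CLIENT LIST` returns one space-separated line of `field=value` pairs per connection, so flagging clients with large output buffers is a small parsing job. A sketch (the function and threshold are ours):

```python
def clients_with_large_buffers(client_list_output: str, omem_threshold: int = 1_000_000):
    """Return (addr, omem) for clients whose output buffer exceeds the threshold."""
    flagged = []
    for line in client_list_output.strip().splitlines():
        fields = dict(pair.split("=", 1) for pair in line.split() if "=" in pair)
        omem = int(fields.get("omem", 0))
        if omem > omem_threshold:
            flagged.append((fields.get("addr"), omem))
    return flagged

sample = (
    "id=3 addr=127.0.0.1:51234 name= omem=0 cmd=get\n"
    "id=4 addr=10.0.0.5:40120 name=worker omem=268435456 cmd=subscribe\n"
)
print(clients_with_large_buffers(sample))  # [('10.0.0.5:40120', 268435456)]
```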
Check for replica output buffers. During replication, the output buffer for replicas can grow large:
```
client-output-buffer-limit replica 256mb 64mb 60
```

For Redis connection issues, see Fix: Redis connection refused. For Redis type errors, see Fix: Redis WRONGTYPE Operation.