Fix: Docker Container Exited (137) OOMKilled / Killed Signal 9
The Error
Your Docker container stops unexpectedly. You check the status and see:
```
$ docker ps -a
CONTAINER ID   IMAGE   STATUS                       NAMES
a1b2c3d4e5f6   myapp   Exited (137) 2 minutes ago   myapp
```

Or you check the logs and find:

```
Killed
```

You inspect the container and see:

```
"State": {
    "OOMKilled": true,
    "ExitCode": 137
}
```

In Kubernetes, the pod status shows:

```
NAME    READY   STATUS      RESTARTS   AGE
myapp   0/1     OOMKilled   3          5m
```

All of these mean the same thing: your container was killed because it ran out of memory.
Why This Happens
Exit code 137 means the process received SIGKILL (signal 9). The formula is 128 + 9 = 137. When a container exceeds its memory limit, the Linux kernel’s OOM (Out of Memory) killer terminates it with SIGKILL. There’s no graceful shutdown — the process is killed immediately.
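The arithmetic is easy to check mechanically. A minimal Python sketch of the convention (the helper name `decode_exit_code` is ours, purely illustrative):

```python
def decode_exit_code(code: int) -> str:
    """Interpret a container exit code the way the shell does."""
    if code > 128:
        # 128 + N means the process was terminated by signal N
        return f"killed by signal {code - 128}"
    return f"exited normally with status {code}"

print(decode_exit_code(137))  # prints "killed by signal 9" (9 = SIGKILL)
```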
This happens for one of these reasons:
- The container has a memory limit and exceeded it. Docker's `--memory` flag or Kubernetes `resources.limits.memory` sets a hard cap. Once the container crosses that line, it's killed instantly.
- The Docker host itself is running out of memory. Even without container-level limits, the host OS has finite RAM. The kernel's OOM killer picks the biggest memory consumer and kills it — often your container.
- Docker Desktop has a memory ceiling. On macOS and Windows, Docker Desktop runs inside a VM with a fixed amount of RAM (default is usually 2 GB). All your containers share that allocation.
- Your application has a memory leak. The container’s memory usage climbs over time until it hits the limit.
Fix 1: Check What’s Using Memory
Before changing any limits, find out how much memory your container actually needs.
Check current memory usage of running containers:
```
docker stats
```

This shows a live view of CPU, memory, network, and I/O for every running container. Watch the MEM USAGE / LIMIT column.
Inspect a stopped container to confirm OOM:
```
docker inspect <container-id> | grep -i oom
```

If `OOMKilled` is `true`, you've confirmed the cause.
Check the host’s dmesg logs for OOM events:
```
dmesg | grep -i "oom\|killed process"
```

You'll see entries like:

```
Out of memory: Killed process 12345 (node) total-vm:1024000kB, anon-rss:512000kB
```

This tells you exactly which process was killed and how much memory it was using.
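If you're checking many hosts, it helps to pull the PID, process name, and resident memory out of such lines programmatically. A hedged Python sketch (the `parse_oom_line` helper and its regex are ours, written against the log format shown above):

```python
import re

def parse_oom_line(line):
    """Extract pid, process name, and resident memory from a kernel OOM log line."""
    m = re.search(r"Killed process (\d+) \((\S+)\).*anon-rss:(\d+)kB", line)
    if not m:
        return None
    return {
        "pid": int(m.group(1)),
        "name": m.group(2),
        "rss_mb": int(m.group(3)) // 1024,  # kB -> MB
    }

line = "Out of memory: Killed process 12345 (node) total-vm:1024000kB, anon-rss:512000kB"
print(parse_oom_line(line))  # {'pid': 12345, 'name': 'node', 'rss_mb': 500}
```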
Fix 2: Increase the Container Memory Limit
If your container genuinely needs more memory, raise the limit.
Docker run:
```
docker run --memory=2g --memory-swap=2g myapp
```

- `--memory=2g` sets the hard limit to 2 GB.
- `--memory-swap=2g` sets the total (RAM + swap) to the same value, effectively disabling swap. If you want to allow swap, set `--memory-swap` higher than `--memory`.
Docker Compose:
```yaml
services:
  myapp:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 2g
        reservations:
          memory: 512m
```

For Compose V2 without swarm mode, you can also use the top-level `mem_limit` (though `deploy.resources` is the preferred modern syntax):
```yaml
services:
  myapp:
    image: myapp
    mem_limit: 2g
    memswap_limit: 2g
```

Verify the limit is applied:

```
docker stats myapp
```

The LIMIT column should show your new value.
Fix 3: Increase Docker Desktop Memory
On macOS and Windows, Docker Desktop runs inside a VM with limited resources. The default is often 2 GB, which is not enough for multi-container setups.
Docker Desktop (macOS / Windows):
- Open Docker Desktop
- Go to Settings > Resources > Advanced
- Increase the Memory slider (4 GB or 8 GB is a reasonable starting point)
- Click Apply & Restart
WSL2 backend (Windows):
Docker Desktop with WSL2 uses the WSL2 VM’s memory allocation. You can configure this with a .wslconfig file:
```
# %USERPROFILE%\.wslconfig
[wsl2]
memory=8GB
swap=4GB
```

After saving, restart WSL2:

```
wsl --shutdown
```

Then restart Docker Desktop.
Fix 4: Reduce Your Application’s Memory Usage
Sometimes the fix isn’t more memory — it’s less waste.
Node.js
Node.js defaults to a heap limit around 1.5 GB (varies by version and system). In a container with a 512 MB limit, this is a problem.
Set the max heap explicitly:
```dockerfile
ENV NODE_OPTIONS="--max-old-space-size=384"
```

Or pass it at runtime:

```
docker run --memory=512m -e NODE_OPTIONS="--max-old-space-size=384" myapp
```

Why 384 and not 512? The heap limit should be ~75% of the container memory limit. The remaining memory is needed for V8 engine overhead, native code, buffers, and the OS.
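The same 75% rule of thumb scales to other container sizes. A quick calculator sketch (the function name and the 0.75 default are our illustration of the guideline, not a Node.js API):

```python
def node_heap_limit_mb(container_limit_mb, heap_fraction=0.75):
    """Suggested --max-old-space-size for a given container memory limit."""
    # The remaining ~25% covers V8 overhead, native code, buffers, and the OS.
    return int(container_limit_mb * heap_fraction)

print(node_heap_limit_mb(512))   # 384
print(node_heap_limit_mb(2048))  # 1536
```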
Detect Node.js memory leaks:
```
docker run --memory=1g -e NODE_OPTIONS="--max-old-space-size=768 --expose-gc" myapp
```

Use `process.memoryUsage()` in your code to log heap usage over time. If `heapUsed` grows continuously without flattening, you have a leak.
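Whatever language you log heap samples from, the leak signal is the same: usage rises monotonically instead of sawtoothing back down after garbage collection. A rough heuristic sketch (entirely our illustration; real leak detection needs longer observation windows):

```python
def looks_like_leak(heap_samples_mb, growth_factor=1.5):
    """Flag a heap that rises on every sample and has grown by growth_factor overall."""
    if len(heap_samples_mb) < 3:
        return False
    always_rising = all(b >= a for a, b in zip(heap_samples_mb, heap_samples_mb[1:]))
    return always_rising and heap_samples_mb[-1] >= growth_factor * heap_samples_mb[0]

healthy = [200, 240, 210, 250, 215]  # sawtooth: GC reclaims memory
leaking = [200, 260, 330, 410, 500]  # climbs without flattening
print(looks_like_leak(healthy), looks_like_leak(leaking))  # False True
```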
Java
Java is notorious for consuming excess memory in containers. The JVM allocates a heap that can exceed the container’s limit if not configured properly. For more on Java memory issues, see Fix: Java OutOfMemoryError: Java heap space.
Modern JVMs (Java 10+) detect container limits automatically with:
```dockerfile
ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
```

For older JVMs (Java 8u191+):

```dockerfile
ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMFraction=2"
```

For Java 8 before u191, container detection doesn't exist. You must set the heap explicitly:

```dockerfile
ENV JAVA_OPTS="-Xmx384m -Xms256m"
```

Python
Python processes can consume large amounts of memory with data-heavy libraries (Pandas, NumPy, ML frameworks).
Reduce memory usage:
- Use generators instead of loading entire datasets into memory.
- Process data in chunks with `pandas.read_csv(chunksize=10000)`.
- Use `del` and `gc.collect()` to free large objects explicitly.
- Set `MALLOC_TRIM_THRESHOLD_` to release memory back to the OS:

```dockerfile
ENV MALLOC_TRIM_THRESHOLD_=100000
```

Fix 5: Use Multi-Stage Builds to Reduce Image Size
A bloated image doesn’t directly cause OOM, but unnecessary build dependencies increase the container’s baseline memory footprint.
Before (single stage):
```dockerfile
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
```

This image includes npm, the entire Node.js development toolchain, and all devDependencies in node_modules.
After (multi-stage):
```dockerfile
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
CMD ["node", "dist/server.js"]
```

Even better, use `npm ci --omit=dev` in a production install step to strip devDependencies:
```dockerfile
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/server.js"]
```

For more on Dockerfile issues, see Fix: Docker COPY Failed: File Not Found in Build Context.
Fix 6: Kubernetes Resource Limits
In Kubernetes, OOMKilled happens when a pod exceeds its resources.limits.memory.
Check why the pod was killed:
```
kubectl describe pod <pod-name>
```

Look for:

```
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
```

Set appropriate resource requests and limits:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest
      resources:
        requests:
          memory: "256Mi"
        limits:
          memory: "512Mi"
```

- `requests` is what the scheduler uses to find a node with enough memory. Set this to your app's normal usage.
- `limits` is the hard cap. If the container exceeds this, Kubernetes kills it. Set this to your app's peak usage plus some headroom.
Common mistake: Setting requests equal to limits. This guarantees the memory is reserved but wastes resources if the app doesn’t always use it. Only do this for critical workloads where you need guaranteed QoS.
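The requests/limits combination is also what determines the pod's QoS class. A simplified, memory-only sketch of the rules (our illustration; the real classification also considers CPU and every container in the pod):

```python
def qos_class(mem_request_mi, mem_limit_mi):
    """Simplified Kubernetes QoS classification based on memory alone."""
    if mem_request_mi is None and mem_limit_mi is None:
        return "BestEffort"  # no requests or limits at all: first to be evicted
    if mem_request_mi == mem_limit_mi:
        return "Guaranteed"  # reserved == cap: last to be killed under pressure
    return "Burstable"       # can use spare memory, evicted before Guaranteed pods

print(qos_class(256, 512))  # Burstable
print(qos_class(512, 512))  # Guaranteed
```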
Check node-level memory pressure:
```
kubectl top nodes
kubectl describe node <node-name> | grep -A 5 "Allocated resources"
```

If the node is overcommitted (total requested memory exceeds available), pods will get evicted even if they're under their individual limits.
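Overcommit itself is simple arithmetic on the numbers those commands print. A sketch (the helper name and values are hypothetical, for illustration only):

```python
def node_overcommitted(pod_mem_requests_mi, allocatable_mi):
    """True if the sum of pod memory requests exceeds the node's allocatable memory."""
    return sum(pod_mem_requests_mi) > allocatable_mi

# Hypothetical node with three pods' memory requests, in Mi
print(node_overcommitted([256, 512, 1024], 1500))  # True  (1792 Mi requested)
print(node_overcommitted([256, 512, 1024], 2048))  # False
```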
For more on Kubernetes connectivity issues, see Fix: The Connection to the Server localhost:8080 Was Refused (kubectl).
Fix 7: Enable and Configure Swap
By default, Docker limits container swap to the same value as the memory limit. You can allow containers to use swap as a buffer.
```
docker run --memory=512m --memory-swap=1g myapp
```

This gives the container 512 MB of RAM and 512 MB of swap (1 GB total minus 512 MB memory).
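The `--memory-swap` flag trips people up because it is a total, not a swap amount. The arithmetic, sketched in Python (the helper name is ours):

```python
def container_swap_mb(memory_mb, memory_swap_mb):
    """Swap available to a container given --memory and --memory-swap (both in MB)."""
    if memory_swap_mb == -1:
        return None  # -1 means unlimited swap
    # --memory-swap is the TOTAL of RAM + swap, so swap is the difference
    return memory_swap_mb - memory_mb

print(container_swap_mb(512, 1024))  # 512 (the example above)
print(container_swap_mb(512, 512))   # 0   (swap effectively disabled)
```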
To allow unlimited swap:
```
docker run --memory=512m --memory-swap=-1 myapp
```

Warning: Swap delays OOM kills but causes severe performance degradation. Your container will slow to a crawl instead of crashing outright. This is a band-aid, not a fix. Use it to buy time while you find the real memory issue.
Check if swap is enabled on the host:
```
free -h
swapon --show
```

If no swap exists on the host, Docker's swap settings have no effect. On Docker Desktop, swap is configured through the Desktop settings (see Fix 3).
Fix 8: OOM During Docker Build
If the OOM kill happens during docker build rather than at runtime, the build process is consuming too much memory. This is common with:
- `npm install` / `npm ci` on large projects
- Webpack / Vite / esbuild bundling
- Java/Gradle/Maven compilation
Limit build memory:
```
docker build --memory=4g -t myapp .
```

Note that BuildKit, the default builder in recent Docker versions, may ignore `--memory`; in that case, cap memory inside the build commands themselves.

For Node.js builds:
```dockerfile
RUN NODE_OPTIONS="--max-old-space-size=3072" npm run build
```

For Java/Gradle builds:

```dockerfile
RUN GRADLE_OPTS="-Xmx2g" ./gradlew build
```

Reduce build parallelism:

```dockerfile
# Webpack
RUN NODE_OPTIONS="--max-old-space-size=2048" npx webpack --config webpack.prod.js

# Gradle
RUN ./gradlew build --max-workers=2
```

Fewer parallel workers means less peak memory at the cost of longer build times.
Still Not Working?
OOM kill without exceeding the visible limit
If docker stats shows the container using less memory than the limit, but it still gets OOMKilled, check for kernel memory (kmem). Kernel memory allocations (network buffers, filesystem cache, etc.) count toward the container’s limit but don’t show up in the standard memory metric.
Check the full memory breakdown:
```
# For cgroups v1
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.kmem.usage_in_bytes

# For cgroups v2
cat /sys/fs/cgroup/system.slice/docker-<container-id>.scope/memory.current
```

The container restarts but you can't catch it
If the container restarts too fast to see the OOM event, check Docker’s event stream:
```
docker events --filter event=oom
```

Run this in a separate terminal, then reproduce the issue.
Memory usage spikes during specific operations
Profile your application’s memory during the operation that triggers the OOM. Useful tools:
- Node.js: `--inspect` flag with Chrome DevTools Memory tab
- Java: `jmap -heap <pid>`, VisualVM, or `async-profiler`
- Python: `tracemalloc`, `memory_profiler`, or `objgraph`
- Go: `pprof` with `runtime.MemStats`
Cgroup v1 vs v2 differences
Docker on newer Linux distributions (Ubuntu 22.04+, Fedora 31+) uses cgroups v2, which handles memory accounting differently from v1. Check which version you’re using:
```
stat -f -c %T /sys/fs/cgroup
```

- `cgroup2fs` = cgroups v2
- `tmpfs` = cgroups v1
Cgroups v2 counts additional memory (like kernel stack memory) toward the container limit that v1 did not. A container that ran fine under v1 might get OOMKilled under v2 with the same memory limit. Increase the limit by 10-20% if you recently upgraded your host OS.
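The 10-20% padding is just multiplication, but it's easy to script when migrating many services. A sketch (the helper is ours; tune the headroom to what you actually observe):

```python
def v2_adjusted_limit_mb(v1_limit_mb, headroom=0.20):
    """Pad a cgroups-v1-era memory limit for the extra accounting under v2."""
    return int(v1_limit_mb * (1 + headroom))

print(v2_adjusted_limit_mb(512))  # 614
```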
Container keeps getting OOMKilled in a loop
If a container with --restart=always keeps getting killed and restarting:
```
docker run --restart=on-failure:5 --memory=1g myapp
```

The `on-failure:5` policy limits restart attempts to 5, so you can inspect the container's state instead of watching it crash indefinitely.
In Kubernetes, check the restart count and look at the previous container’s logs:
```
kubectl logs <pod-name> --previous
```

Host-level OOM vs container-level OOM
If no container has a memory limit set but containers are still getting killed, the host is running out of memory. The kernel’s OOM killer selects the process with the highest oom_score.
Check the OOM score of your container’s process:
```
# Find the container's PID
docker inspect --format '{{.State.Pid}}' <container-id>

# Check its OOM score
cat /proc/<pid>/oom_score
```

A higher score means the process is more likely to be killed. You can adjust this (lower = safer):
```
docker run --oom-score-adj=-500 myapp
```

Or completely disable OOM killing for a critical container (use with extreme caution — this can hang the entire host):

```
docker run --oom-kill-disable --memory=2g myapp
```

Never use `--oom-kill-disable` without a `--memory` limit. Without a limit, the container can consume all host memory and freeze the system.
Check for file watchers exhausting memory
On Linux, running many containers with file watchers (dev servers, hot-reload tools) can exhaust inotify limits, which indirectly causes memory pressure. See Fix: ENOSPC: System Limit for Number of File Watchers Reached for how to increase these limits.
Related: If you’re troubleshooting Docker socket issues, see Fix: Docker Permission Denied While Trying to Connect to the Docker Daemon Socket.
Related Articles
Fix: Docker Volume Permission Denied – Cannot Write to Mounted Volume
How to fix Docker permission denied errors on mounted volumes caused by UID/GID mismatch, read-only mounts, or SELinux labels.
Fix: E: Unable to locate package (apt-get install on Ubuntu/Debian)
How to fix the 'E: Unable to locate package' error in apt-get on Ubuntu and Debian, including apt update, missing repos, Docker images, PPA issues, and EOL releases.
Fix: Docker no space left on device (build, pull, or run)
How to fix the 'no space left on device' error in Docker when building images, pulling layers, or running containers, with cleanup and prevention strategies.
Fix: Kubernetes Pod CrashLoopBackOff (Back-off restarting failed container)
How to fix the Kubernetes CrashLoopBackOff error when a pod repeatedly crashes and Kubernetes keeps restarting it with increasing back-off delays.