Fix: Docker no space left on device (build, pull, or run)
The Error
You try to build a Docker image and it fails:
$ docker build -t myapp .
ERROR: failed to solve: failed to register layer: write /usr/lib/somelib.so: no space left on device
Or you pull an image and it breaks midway:
$ docker pull postgres:16
Error response from daemon: failed to register layer: Error processing tar file(exit status 1): write /usr/lib/x86_64-linux-gnu/libicudata.so.72: no space left on device
Or a running container crashes when it tries to write data:
$ docker logs myapp
Error: ENOSPC: no space left on device, write '/app/data/output.json'
Or Docker Compose fails entirely:
$ docker compose up
Error response from daemon: no space left on device
The message is always the same: Docker cannot write to disk because it has run out of space. This can happen on the host filesystem, inside the container’s writable layer, or within a volume.
Why This Happens
Docker consumes disk space in ways that are not always obvious. Every operation leaves artifacts behind, and without periodic cleanup, they accumulate until the disk is full.
Here is where the space goes:
- Images. Every image you pull or build is stored locally. A single image can be hundreds of megabytes or several gigabytes. Old versions, intermediate images, and images from abandoned projects pile up fast.
- Containers. Stopped containers are not deleted automatically. Each one retains its writable layer, which includes any files the container created or modified during its lifetime.
- Volumes. Named and anonymous volumes persist even after the container that created them is removed. Database containers are especially bad — a PostgreSQL or MySQL volume can grow to tens of gigabytes.
- Build cache. Docker caches every layer from every build. Multi-stage builds, repeated builds with changing dependencies, and CI/CD pipelines generate enormous amounts of cache data.
- Container logs. By default, Docker stores container logs as JSON files with no size limit. A noisy application can fill the disk with log data alone.
On Docker Desktop (macOS and Windows), all of this is stored inside a virtual disk image with a fixed maximum size. Once that virtual disk fills up, you get the error even if your host machine has plenty of free space.
Fix 1: Check What Docker Is Using
Before deleting anything, find out where the space is going.
Show Docker disk usage:
docker system df
Output looks like this:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 45 3 12.8GB 11.2GB (87%)
Containers 12 1 820MB 815MB (99%)
Local Volumes 23 2 8.4GB 7.1GB (84%)
Build Cache 78 0 5.3GB 5.3GB (100%)
The RECLAIMABLE column tells you how much space you can recover. In this example, over 23 GB is reclaimable.
For a detailed breakdown:
docker system df -v
This lists every image, container, and volume individually with its size. Use this to identify the biggest offenders.
Check the host filesystem:
df -h /var/lib/docker
This shows how much space is available on the partition where Docker stores its data. If this partition is separate from your root partition and is small, that is likely the bottleneck. For related filesystem issues on Linux, see Fix: bash: permission denied.
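If the data root has been moved from the default location, you can ask the daemon where it actually is instead of hard-coding the path. A small sketch, assuming the docker CLI is installed; it falls back to checking the root filesystem if the daemon is unreachable:

```shell
# Ask Docker for its data root, then check free space on that partition.
# Falls back to / if the daemon is not reachable.
docker_root=$(docker info -f '{{ .DockerRootDir }}' 2>/dev/null || true)
df -h "${docker_root:-/}"
```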
Fix 2: Docker System Prune
The fastest way to reclaim space is a system-wide prune.
Remove stopped containers, unused networks, dangling images, and build cache:
docker system prune
Docker will ask for confirmation and show you what will be removed.
Remove everything including unused images (not just dangling ones):
docker system prune -a
A “dangling” image is one with no tag (shows as <none>:<none>). An “unused” image is one not referenced by any container. The -a flag removes both.
Also remove unused volumes:
docker system prune -a --volumes
Warning: This deletes volume data permanently. If you have database volumes or other persistent data you care about, do not use --volumes blindly. Check which volumes exist first with docker volume ls.
Skip the confirmation prompt (useful in scripts):
docker system prune -a --volumes -f
Fix 3: Remove Unused Images
If you want more control than a full prune, remove images selectively.
List all images sorted by size:
docker images --format "{{.Repository}}:{{.Tag}}\t{{.Size}}\t{{.ID}}" | sort -k2 -h
Remove dangling images only:
docker image prune
Remove all unused images:
docker image prune -a
Remove specific images you no longer need:
docker rmi postgres:14 node:18 python:3.9
Remove images older than a certain age:
docker image prune -a --filter "until=720h"
This removes unused images created more than 30 days ago (720 hours). Useful for keeping recent images while cleaning old ones. If Docker refuses to remove an image because of a permissions issue, see Fix: Docker Permission Denied While Trying to Connect to the Docker Daemon Socket.
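To clear out every local tag of a single repository at once, you can filter the image list and feed the matches to docker rmi. A sketch; the repository name myapp is just an example:

```shell
# Delete every local tag of the "myapp" repository (name is an example).
# xargs -r (GNU) skips the rmi call entirely when nothing matches.
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep '^myapp:' \
  | xargs -r docker rmi
```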
Fix 4: Remove Stopped Containers
Stopped containers consume disk space for no reason unless you need their logs or filesystem state.
List all containers including stopped ones:
docker ps -a
Remove all stopped containers:
docker container prune
Remove specific containers:
docker rm container1 container2 container3
Remove a container and its anonymous volumes:
docker rm -v mycontainer
Prevent the problem going forward — always use --rm for throwaway containers:
docker run --rm -it ubuntu bash
The --rm flag automatically removes the container (and its anonymous volumes) when it exits. Make this a habit for development and one-off tasks.
Fix 5: Remove Unused Volumes
Volumes are the most commonly overlooked source of disk waste. They persist independently of containers and can grow very large.
List all volumes:
docker volume ls
List dangling volumes (not attached to any container):
docker volume ls -f dangling=true
Remove all unused volumes:
docker volume prune
Remove a specific volume:
docker volume rm my_database_volume
Check volume size on disk (Linux):
du -sh /var/lib/docker/volumes/*
Large volumes are usually databases, file uploads, or application caches. Before deleting them, make sure you have backups or can regenerate the data.
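To match sizes to volume names rather than raw directory paths, you can combine docker volume inspect with du. A sketch for Linux hosts (du needs root because the mountpoints live under /var/lib/docker):

```shell
# Print "<size>  <volume name>" for every named volume, smallest first.
for v in $(docker volume ls -q); do
  mp=$(docker volume inspect -f '{{ .Mountpoint }}' "$v")
  printf '%s\t%s\n' "$(sudo du -sh "$mp" | cut -f1)" "$v"
done | sort -h
```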
Fix 6: Clear the Build Cache
Docker BuildKit caches intermediate layers, source code mounts, and downloaded dependencies. On a machine that runs many builds, this cache can consume tens of gigabytes.
Remove all build cache:
docker builder prune
Remove all build cache without confirmation:
docker builder prune -a -f
Remove only cache entries older than 24 hours:
docker builder prune --filter "until=24h"
Set a cache size limit in the Docker daemon configuration. Edit /etc/docker/daemon.json (or the Docker Desktop settings):
{
"builder": {
"gc": {
"enabled": true,
"defaultKeepStorage": "10GB"
}
}
}
This tells Docker to automatically garbage-collect build cache when it exceeds 10 GB. Restart the Docker daemon after changing this file. If the daemon fails to start, see Fix: Docker Daemon Is Not Running.
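A malformed daemon.json prevents the daemon from starting at all, so it is worth validating the file before restarting. One way to do that, assuming python3 is installed:

```shell
# Fails with a parse error if the JSON is invalid; prints it back if valid.
# The guard keeps this safe on machines without a daemon.json.
if [ -f /etc/docker/daemon.json ]; then
  python3 -m json.tool /etc/docker/daemon.json
fi
```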
Fix 7: Use a .dockerignore File
A missing or incomplete .dockerignore file sends your entire project directory to the Docker daemon as build context. This wastes both time and disk space during builds.
Common offenders:
node_modules/ # can be hundreds of MB
.git/ # entire repo history
dist/ # previous build artifacts
*.tar.gz # leftover archives
data/ # local data files
.env # secrets (also a security issue)
Create a .dockerignore file in your project root:
node_modules
.git
.gitignore
dist
build
*.md
*.tar.gz
.env
.env.*
.vscode
.idea
__pycache__
*.pyc
.pytest_cache
coverage
.nyc_output
Check how large your build context is:
du -sh . --exclude=.git
Compare this with what you actually need in the image. If your build context is 500 MB but your final image only needs 20 MB of source code, your .dockerignore is missing entries.
A large build context is also a common cause of slow builds that consume excessive temporary disk space during the COPY or ADD step.
Fix 8: Use Multi-Stage Builds
Multi-stage builds reduce final image size, which means less disk space consumed per image. They also reduce the size of the build cache over time.
Before (single stage, 1.2 GB image):
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
After (multi-stage, 200 MB image):
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/server.js"]
The final image only contains the slim Node.js runtime and production dependencies. Build tools, devDependencies, and source files are left behind in the builder stage.
For Go applications, the savings are even more dramatic:
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server .
FROM scratch
COPY --from=builder /app/server /server
CMD ["/server"]
This produces a final image that is just the static binary — often under 20 MB. If containers still crash in Kubernetes after you switch to smaller images, the cause is likely memory rather than disk. See Fix: Kubernetes CrashLoopBackOff for help diagnosing those restarts.
Fix 9: Increase Docker Desktop Disk Size
On macOS and Windows, Docker Desktop stores all images, containers, and volumes inside a virtual disk image. This disk has a default maximum size (often 64 GB) that can fill up even when the host has plenty of free space.
Docker Desktop (macOS / Windows):
- Open Docker Desktop
- Go to Settings > Resources > Advanced
- Increase the Virtual disk limit slider
- Click Apply & Restart
Docker Desktop will resize the virtual disk image. This does not immediately consume the full amount of host disk space — the virtual disk file grows as needed up to the limit.
Check the virtual disk usage:
On macOS, the virtual disk is at:
~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
On Windows with WSL2:
%LOCALAPPDATA%\Docker\wsl\disk\docker_data.vhdx
Check the file size to see how much of the virtual disk is actually used.
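On macOS, Docker.raw is a sparse file, so its apparent size (the configured maximum) and the blocks it actually occupies on disk differ. A sketch that compares the two; the existence check keeps it safe to run on any machine:

```shell
# Apparent size vs. actually allocated blocks of the Docker Desktop disk.
raw="$HOME/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw"
if [ -f "$raw" ]; then
  ls -lh "$raw"   # apparent (maximum) size
  du -h "$raw"    # blocks actually allocated on the host
fi
```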
If the virtual disk is bloated but Docker reports low usage, the virtual disk file may not be reclaiming space from deleted data. Reset Docker Desktop’s disk image through Settings > Resources > Advanced > Purge data (this deletes all Docker data), or compact the VHDX on Windows (for example with the Optimize-VHD PowerShell cmdlet or diskpart’s compact vdisk command).
Fix 10: Move the Docker Data Directory
If the partition where Docker stores its data is too small, move Docker’s data root to a larger partition.
Check current data root:
docker info | grep "Docker Root Dir"
Default is /var/lib/docker on Linux.
Move to a different location:
- Stop the Docker daemon:
sudo systemctl stop docker
- Edit /etc/docker/daemon.json:
{
"data-root": "/mnt/large-disk/docker"
}
- Copy existing data to the new location:
sudo rsync -aP /var/lib/docker/ /mnt/large-disk/docker/
- Start Docker:
sudo systemctl start docker
- Verify the new location:
docker info | grep "Docker Root Dir"
Once confirmed, you can remove the old data at /var/lib/docker to reclaim space.
On Docker Desktop, you can move the virtual disk image through the Desktop settings without manual file operations.
Fix 11: Handle CI/CD Disk Space Issues
CI/CD runners (GitHub Actions, GitLab CI, Jenkins) frequently hit “no space left on device” because they build many images on shared or ephemeral machines.
GitHub Actions — free up disk space before the build:
- name: Free disk space
run: |
docker system prune -af
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
df -h
GitHub-hosted runners have about 14 GB of free space. Large builds, especially multi-platform builds with docker buildx, can exceed this.
Use --no-cache in CI builds to avoid cache accumulation:
docker build --no-cache -t myapp .
Limit BuildKit cache in CI:
docker buildx build --cache-to type=local,dest=/tmp/buildcache,mode=max \
--cache-from type=local,src=/tmp/buildcache \
-t myapp .
This stores the cache in a known location that you can size-limit or clear between runs.
GitLab CI — clear Docker data between jobs:
after_script:
- docker system prune -af --volumes
Jenkins — schedule periodic cleanup:
Add a cron job or Jenkins pipeline that runs cleanup on the build agents:
0 2 * * * docker system prune -af --volumes
This runs at 2 AM daily. Adjust frequency based on how often your builds run. If your Docker daemon stops between builds and you cannot start it again, see Fix: Docker Daemon Is Not Running.
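Instead of pruning unconditionally on a schedule, you can gate the cleanup on actual disk pressure. A sketch using GNU df; the 80% threshold is an arbitrary choice you should adjust to your agents:

```shell
#!/bin/sh
# Prune Docker data only when the data partition is more than 80% full.
usage=$(df --output=pcent /var/lib/docker 2>/dev/null | tail -1 | tr -dc '0-9')
if [ "${usage:-0}" -ge 80 ]; then
  docker system prune -af
fi
```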
Fix 12: Limit Container Log Size
Docker stores container logs as JSON files under /var/lib/docker/containers/<id>/. A container that writes heavy output can fill the disk with logs alone.
Check log file sizes:
sudo du -sh /var/lib/docker/containers/*/*-json.log | sort -h
Set log size limits per container:
docker run --log-opt max-size=50m --log-opt max-file=3 myapp
This limits each log file to 50 MB and keeps at most 3 rotated files (150 MB total per container).
Set global defaults in /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "50m",
"max-file": "5"
}
}
Restart Docker after changing this. Existing containers are not affected — only new containers use the new defaults.
Truncate a log file immediately without restarting the container:
sudo truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log
This zeroes out the log file while the container keeps running.
Still Not Working?
“No space left on device” but df shows free space
This usually means you have run out of inodes, not disk space. Each file on an ext4 filesystem consumes one inode, and the total number of inodes is fixed at filesystem creation time. Docker’s overlay2 storage driver creates many small files.
Check inode usage:
df -i /var/lib/docker
If the IUse% column is at 100%, you are out of inodes. The fix is to clean up Docker data (which removes the files and frees inodes) or reformat the partition with more inodes. Running docker system prune -a --volumes usually resolves this.
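To see which part of Docker's data root is eating the inodes, count files per top-level subdirectory. A sketch (run as root for complete counts; on a typical host most of them sit under overlay2):

```shell
# Print "<file count>  <directory>" for each subtree, biggest first.
for d in /var/lib/docker/*/; do
  printf '%s\t%s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn
```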
Overlay2 filesystem growing despite cleanup
Sometimes the overlay2 storage driver does not fully release space after removing images and containers. This can happen due to mount references or leaked layers.
Check for orphaned overlay2 directories:
ls /var/lib/docker/overlay2 | wc -l
Compare this number with the total layers Docker knows about. If there are significantly more directories than expected, you may have orphaned layers. The safest fix is:
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/overlay2 /var/lib/docker/image
sudo systemctl start docker
Warning: This deletes all image data. Remove the image metadata directory together with overlay2; deleting overlay2 alone leaves Docker referencing layers that no longer exist. Docker will recreate the directories, and you will need to re-pull all images.
Container-level filesystem is full but host has space
The container’s writable layer has a default size limit controlled by the storage driver. For overlay2, this is typically unlimited (bounded only by available host space). But for devicemapper in direct-lvm mode, the default per-container size is 10 GB.
Check if your storage driver limits container size:
docker info | grep "Storage Driver"
If you are using devicemapper, consider switching to overlay2, which is the default and recommended driver on modern Linux distributions.
The error happens only with Docker Compose
Docker Compose creates named volumes, networks, and containers with project-name prefixes. Running docker compose down does not remove volumes by default.
# Remove containers and networks only
docker compose down
# Remove containers, networks, AND volumes
docker compose down --volumes
# Remove containers, networks, volumes, AND images
docker compose down --volumes --rmi all
If you frequently bring Compose stacks up and down, the orphaned volumes accumulate. Run docker volume prune periodically. If containers in your Compose stack are also crashing with exit code 137, that is a different issue — see Fix: Docker Container Exited (137) OOMKilled.
Thin pool exhaustion on devicemapper
If you are running Docker with the devicemapper storage driver (common on older RHEL/CentOS systems), the thin pool can fill up independently of the host filesystem.
Check thin pool usage:
docker info | grep "Data Space"
If Data Space Used is close to Data Space Total, expand the thin pool or switch to overlay2.
Related: If Docker commands fail with a connection error instead of a space error, the daemon may not be running. See Fix: Docker Daemon Is Not Running.
Related Articles
Fix: Docker Volume Permission Denied – Cannot Write to Mounted Volume
How to fix Docker permission denied errors on mounted volumes caused by UID/GID mismatch, read-only mounts, or SELinux labels.
Fix: E: Unable to locate package (apt-get install on Ubuntu/Debian)
How to fix the 'E: Unable to locate package' error in apt-get on Ubuntu and Debian, including apt update, missing repos, Docker images, PPA issues, and EOL releases.
Fix: SSL certificate problem: unable to get local issuer certificate
How to fix 'SSL certificate problem: unable to get local issuer certificate', 'CERT_HAS_EXPIRED', 'ERR_CERT_AUTHORITY_INVALID', and 'self signed certificate in certificate chain' errors in Git, curl, Node.js, Python, Docker, and more. Covers CA certificates, corporate proxies, Let's Encrypt, certificate chains, and self-signed certs.
Fix: Nginx 502 Bad Gateway
How to fix Nginx 502 Bad Gateway errors caused by upstream server issues, wrong proxy_pass configuration, PHP-FPM socket problems, timeout settings, SELinux, Docker networking, and more.