# Fix: Kubernetes ImagePullBackOff - Failed to Pull Image

**Quick Answer:** How to fix the Kubernetes ImagePullBackOff and ErrImagePull errors when a pod fails to pull a container image from a registry.
## The Error

You deploy a pod to Kubernetes. It never starts. You check the status:

```
$ kubectl get pods
NAME                    READY   STATUS             RESTARTS   AGE
myapp-7c4b6d9f8-k3m2n   0/1     ImagePullBackOff   0          2m15s
```

Or you see the closely related status:

```
$ kubectl get pods
NAME                    READY   STATUS         RESTARTS   AGE
myapp-7c4b6d9f8-k3m2n   0/1     ErrImagePull   0          45s
```

You describe the pod and find something like:

```
$ kubectl describe pod myapp-7c4b6d9f8-k3m2n
...
Events:
  Warning  Failed   12s (x3 over 58s)  kubelet  Failed to pull image "myapp:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied
  Warning  Failed   12s (x3 over 58s)  kubelet  Error: ErrImagePull
  Normal   BackOff  1s (x4 over 57s)   kubelet  Back-off pulling image "myapp:latest"
  Warning  Failed   1s (x4 over 57s)   kubelet  Error: ImagePullBackOff
```

The pod is stuck. Kubernetes cannot pull the container image, and after repeated failures it backs off with increasing delays before retrying.
## Why This Happens
ImagePullBackOff is Kubernetes telling you that the kubelet tried to pull a container image and failed. After the initial failure (ErrImagePull), Kubernetes applies an exponential back-off — 10s, 20s, 40s, up to 5 minutes — before retrying. The pod stays in this state until the pull succeeds or you fix the underlying problem.
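The back-off schedule can be sketched as a doubling delay with a cap. The numbers below match the defaults described above; the kubelet's actual bookkeeping is internal, so treat this only as an illustration of the retry pattern you see in the pod events:

```shell
#!/bin/sh
# Illustration of the image pull back-off: the delay doubles after each
# failed attempt and is capped at 300 seconds (5 minutes).
delay=10
cap=300
for attempt in 1 2 3 4 5 6 7; do
  echo "attempt $attempt: wait ${delay}s before retrying"
  delay=$((delay * 2))
  if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
done
```

This is why a pod that has been failing for a while can sit for up to five minutes between pull attempts even after you fix the underlying problem; deleting the pod (or letting the Deployment recreate it) resets the back-off.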
The kubelet on the node is responsible for pulling images. When you create a pod, the kubelet checks if the image already exists locally. If it does not (or if imagePullPolicy forces a pull), it contacts the container registry, authenticates if needed, and downloads the image layers. Any failure in this chain results in ErrImagePull, which quickly becomes ImagePullBackOff after a few retries.
The root cause is always one of these:

- **Wrong image name or tag** — a typo, a tag that does not exist, or a missing registry prefix
- **Private registry without credentials** — the registry requires authentication and the pod has no `imagePullSecrets`
- **Docker Hub rate limits** — anonymous or free-tier pulls exceeded the rate limit
- **Network issues** — the node cannot reach the registry due to firewalls, proxies, or network policies
- **Architecture mismatch** — the image exists but not for the node's CPU architecture (e.g., arm64 vs amd64)
- **Local image with wrong pull policy** — the image exists on the node but Kubernetes tries to pull it from a remote registry anyway
## Fix 1: Check the Image Name and Tag

This is the most common cause. A single typo in the image name, registry URL, or tag breaks the pull.

Run `kubectl describe pod` and look at the exact image reference in the error:

```
kubectl describe pod myapp-7c4b6d9f8-k3m2n | grep -i image
```

You see something like:

```
Image: myapp:latest
```

Check for these common mistakes:

- **Typos in the image name:** `ngixn` instead of `nginx`, `postgress` instead of `postgres`
- **Missing registry prefix:** If your image is on a private registry like `ghcr.io` or `registry.example.com`, you need the full path: `ghcr.io/myorg/myapp:v1.2.0`
- **Tag does not exist:** You specified `myapp:v2.0` but only `v2.0.0` exists. Tags are case-sensitive.
- **Using `latest` when no `latest` tag exists:** Not every image has a `latest` tag. Some projects only publish versioned tags.

Verify the image exists by pulling it manually on your local machine:

```
docker pull myapp:latest
```

If it fails locally, the image reference is wrong. Fix it in your deployment:

```yaml
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:v1.2.0  # full path with valid tag
```

Apply the fix:

```
kubectl apply -f deployment.yaml
```

**Pro Tip:** Avoid using the `latest` tag in production. It is ambiguous — you never know which version is running, and it makes rollbacks difficult. Pin to a specific version like `v1.2.0` or a SHA digest like `myapp@sha256:abc123...`. This also prevents unexpected behavior when a new image gets pushed with the same `latest` tag.
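As a sketch of what digest pinning looks like in a manifest (the digest below is a made-up placeholder, not a real image):

```yaml
spec:
  containers:
    - name: myapp
      # Digest pinning: the reference is immutable, so the running image can
      # never change underneath you. This digest is a placeholder for
      # illustration only.
      image: registry.example.com/myapp@sha256:8f2e5c0d9b7a4e1f3c6d8a0b2e4f6c8d0a2b4c6e8f0a2c4e6b8d0f2a4c6e8b0d
```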
If you have dealt with image reference issues in Docker before, the troubleshooting steps overlap. See Fix: Docker Image Not Found for more on resolving image name and registry URL problems.
## Fix 2: Configure imagePullSecrets for Private Registries

If your image is in a private registry (Docker Hub private repos, AWS ECR, GCR, Azure ACR, GitHub Container Registry, or a self-hosted registry), Kubernetes needs credentials to pull it.

First, confirm this is the issue. Run `kubectl describe pod` and look for an error like:

```
Failed to pull image "registry.example.com/myapp:v1": unauthorized: authentication required
```

or:

```
pull access denied for myapp, repository does not exist or may require 'docker login'
```

**Step 1:** Create a Docker registry secret:

```
kubectl create secret docker-registry my-registry-creds \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=myuser@example.com \
  -n my-namespace
```

For Docker Hub, use `https://index.docker.io/v1/` as the server:

```
kubectl create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=myuser \
  --docker-password=mypassword \
  -n my-namespace
```

**Step 2:** Reference the secret in your pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:v1.2.0
  imagePullSecrets:
    - name: my-registry-creds
```

For Deployments, `imagePullSecrets` goes inside `spec.template.spec`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1.2.0
      imagePullSecrets:
        - name: my-registry-creds
```

**Step 3:** Verify the secret exists in the correct namespace:

```
kubectl get secret my-registry-creds -n my-namespace
```

**Common Mistake:** The secret must be in the same namespace as the pod. If your pod is in `production` but the secret is in `default`, the pull will fail with the same authentication error. Secrets are namespace-scoped — they do not span across namespaces.
If you want every pod in a namespace to use the same registry credentials without adding `imagePullSecrets` to every spec, attach the secret to the namespace's default service account:
```
kubectl patch serviceaccount default -n my-namespace \
  -p '{"imagePullSecrets": [{"name": "my-registry-creds"}]}'
```

### AWS ECR, GCR, and Azure ACR
Cloud-managed registries have their own authentication methods:
AWS ECR tokens expire every 12 hours. You need a cron job or a controller like ecr-credential-helper to refresh the secret. Alternatively, use IAM roles for service accounts (IRSA) with the ECR pull policy attached to the node’s IAM role.
GCR / Artifact Registry works automatically if your GKE nodes have the cloud-platform scope or the appropriate IAM permissions. For non-GKE clusters, create a service account key and use it as a Docker registry secret.
Azure ACR integrates with AKS through managed identity. For non-AKS clusters, create a service principal and use it as registry credentials.
## Fix 3: Handle Docker Hub Rate Limits

Docker Hub enforces pull rate limits:

- **Anonymous users:** 100 pulls per 6 hours per IP
- **Authenticated free users:** 200 pulls per 6 hours
- **Paid subscriptions:** Higher or unlimited

If your cluster has many nodes pulling images through the same public IP (common with NAT gateways), you hit these limits fast.

The error in `kubectl describe pod` looks like:

```
Failed to pull image "nginx:latest": toomanyrequests: You have reached your pull rate limit
```

Solutions:

- **Authenticate to Docker Hub** even for public images — this raises your limit to 200 pulls. Create a secret and add `imagePullSecrets` as shown in Fix 2.
- **Use a pull-through cache:** Set up a registry mirror (like Harbor or a cloud-provider registry proxy) that caches Docker Hub images. This reduces direct pulls to Docker Hub significantly.
- **Pre-pull images on nodes:** Use a DaemonSet to pull commonly used images to every node during off-peak hours. Once cached locally, the kubelet does not need to pull again (unless `imagePullPolicy: Always` is set).
- **Switch registries:** Many popular images are mirrored on other registries. For example, use `gcr.io/google-containers/nginx` or `public.ecr.aws/nginx/nginx` instead of pulling from Docker Hub directly.
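To see where you stand against these limits, Docker documents a check that requests an anonymous token for the `ratelimitpreview/test` repository and reads the rate-limit response headers. The live commands below need network access and `jq`, so they are shown as comments; the runnable part parses a sample header value in the `count;w=window_seconds` format those headers use:

```shell
#!/bin/sh
# Live check (needs network access and jq), per Docker's documented procedure:
#   TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
#   curl -s --head -H "Authorization: Bearer $TOKEN" \
#     https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

# The ratelimit-limit / ratelimit-remaining headers carry "count;w=window_seconds".
# Parsing a sample value:
header="100;w=21600"
limit=${header%%;*}      # pulls allowed in the window
window=${header##*w=}    # window length in seconds
echo "$limit pulls per $((window / 3600)) hours"
```

Note that HEAD requests to the manifest endpoint do not count against your limit, so this check is safe to run repeatedly.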
## Fix 4: Fix Network Issues Blocking the Pull

If the node cannot reach the container registry over the network, every pull fails. This is common in air-gapped environments, clusters behind corporate proxies, or when network policies are too restrictive.

Check if the node can reach the registry:

```
# SSH into the node or use a debug pod
kubectl run debug --rm -it --image=busybox -- sh

# Inside the debug pod, test connectivity
wget -qO- https://registry.example.com/v2/ --timeout=5
```

If this times out, investigate:

- **Firewall rules:** Ensure the node's egress allows HTTPS traffic (port 443) to the registry. In cloud environments, check security groups and firewall rules.
- **Proxy configuration:** If your cluster requires an HTTP proxy, configure the container runtime (Docker or containerd) to use it. For containerd, add proxy settings in `/etc/systemd/system/containerd.service.d/http-proxy.conf`.
- **Network policies:** A Kubernetes NetworkPolicy might be blocking egress from the pod's namespace. Check with `kubectl get networkpolicies -n my-namespace`. If a policy exists, make sure it allows egress to the registry's IP or domain.
- **DNS resolution:** The node must resolve the registry hostname. Test with `nslookup registry.example.com`. If DNS fails, check the node's `/etc/resolv.conf` and any custom CoreDNS configuration.
If you are troubleshooting broader connectivity issues with your cluster, see Fix: kubectl Connection Refused for diagnosing cluster-level network problems.
## Fix 5: Fix Architecture Mismatches (arm64 vs amd64)

You pull an image that exists, the credentials are correct, but the pull still fails. The error might say:

```
no matching manifest for linux/arm64 in the manifest list entries
```

This happens when the image was built only for one CPU architecture (usually amd64) but your node runs a different one (usually arm64). This is increasingly common with the rise of ARM-based nodes like AWS Graviton, Apple Silicon development environments, and ARM-based cloud instances.

Check your node's architecture:

```
kubectl get nodes -o wide
```

Look at the ARCH column. Or inspect a specific node:

```
kubectl describe node <node-name> | grep -i arch
```

You see something like `kubernetes.io/arch=arm64`.
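If you are debugging from a node shell rather than through kubectl, a small sketch like this maps the machine name from `uname -m` to the label values Kubernetes uses (the mapping covers the common cases only):

```shell
#!/bin/sh
# Map `uname -m` machine names to Kubernetes arch labels (common cases only).
machine=$(uname -m)
case "$machine" in
  x86_64)         k8s_arch=amd64 ;;
  aarch64|arm64)  k8s_arch=arm64 ;;
  *)              k8s_arch="unknown ($machine)" ;;
esac
echo "kubernetes.io/arch=$k8s_arch"
```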
Solutions:

- **Use multi-arch images:** Many popular images support multiple architectures. Check the image's registry page or inspect the manifest:

  ```
  docker manifest inspect nginx:latest
  ```

  This shows which platforms the image supports. If `linux/arm64` is listed, the image works on ARM nodes.

- **Build your image for multiple architectures** using Docker Buildx:

  ```
  docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1.2.0 --push .
  ```

- **Use node affinity** to schedule the pod only on nodes with the matching architecture:

  ```yaml
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64
  ```

- **Add a nodeSelector** for simpler cases:

  ```yaml
  spec:
    nodeSelector:
      kubernetes.io/arch: amd64
  ```

## Fix 6: Set the Correct imagePullPolicy for Local Images
If you built an image locally (for example during development with Minikube or kind) and the image only exists on the node — not in any remote registry — Kubernetes might still try to pull it.
The `imagePullPolicy` controls this behavior:

- **`Always`:** Always pull from the registry. This is the default when using the `latest` tag.
- **`IfNotPresent`:** Only pull if the image is not already on the node. This is the default for images with a specific tag (e.g., `myapp:v1.2.0`).
- **`Never`:** Never pull. Only use the local image.
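The defaulting rule (untagged or `latest` means `Always`, any other tag means `IfNotPresent`) can be sketched as a small function. This is a simplification for illustration: it ignores digest references and registry hosts that contain a port:

```shell
#!/bin/sh
# Simplified sketch of how imagePullPolicy defaults from the image reference.
default_policy() {
  image=$1
  tag=${image##*:}   # naive: breaks on registry hosts with a port
  if [ "$tag" = "$image" ] || [ "$tag" = "latest" ]; then
    echo Always        # no tag, or the :latest tag
  else
    echo IfNotPresent  # any other explicit tag
  fi
}

default_policy "myapp:latest"   # -> Always
default_policy "myapp"          # -> Always
default_policy "myapp:v1.2.0"   # -> IfNotPresent
```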
If you are using a local image with the `latest` tag, Kubernetes defaults to `Always` and tries to pull from a registry — which fails because the image is not there.
Fix it by setting the policy explicitly:

```yaml
spec:
  containers:
    - name: myapp
      image: myapp:latest
      imagePullPolicy: Never  # or IfNotPresent
```

For Minikube, point your Docker client to Minikube's Docker daemon so images you build are available inside the cluster:
```
eval $(minikube docker-env)
docker build -t myapp:latest .
```

For kind, load images into the cluster:

```
kind load docker-image myapp:latest --name my-cluster
```

**Why this matters:** In production, you almost always want `imagePullPolicy: IfNotPresent` with pinned version tags. Using `Never` is only appropriate for local development. Using `Always` with `latest` in production leads to unpredictable deployments and is a common source of ImagePullBackOff when the registry is temporarily unavailable.
## Fix 7: Debug with kubectl describe pod

When none of the above fixes are obvious, `kubectl describe pod` is your best diagnostic tool. It shows the full event history and reveals exactly what went wrong.

```
kubectl describe pod myapp-7c4b6d9f8-k3m2n
```

Focus on three sections:

**1. The Containers section** — shows the exact image reference being used:

```
Containers:
  myapp:
    Image:      registry.example.com/myapp:v1.2.0
    Image ID:
    State:      Waiting
      Reason:   ImagePullBackOff
```

**2. The Events section** — shows the timeline of what happened:

```
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  2m                 default-scheduler  Successfully assigned default/myapp to node-1
  Normal   Pulling    90s (x4 over 2m)   kubelet            Pulling image "registry.example.com/myapp:v1.2.0"
  Warning  Failed     89s (x4 over 2m)   kubelet            Failed to pull image "registry.example.com/myapp:v1.2.0": ...
  Warning  Failed     89s (x4 over 2m)   kubelet            Error: ErrImagePull
  Normal   BackOff    65s (x6 over 2m)   kubelet            Back-off pulling image "registry.example.com/myapp:v1.2.0"
  Warning  Failed     65s (x6 over 2m)   kubelet            Error: ImagePullBackOff
```

The Message column contains the actual error from the container runtime. Read it carefully — it tells you if the problem is authentication, a missing tag, a network timeout, or something else.
**3. The imagePullSecrets field** — check if secrets are attached:

```
...
Image Pull Secrets:  my-registry-creds
...
```

If this is empty and you are pulling from a private registry, that is your problem.

**Additional debugging commands:**

Check if the secret is correct by decoding it:

```
kubectl get secret my-registry-creds -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```

This prints the stored credentials. Verify the server URL, username, and password are correct.
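For reference, the decoded payload is a JSON map of registry to credentials. This sketch (reusing the made-up `myuser`/`mypassword` values from Fix 2) reproduces it locally, including the `auth` field, which is simply base64 of `username:password`:

```shell
#!/bin/sh
# Rebuild the .dockerconfigjson payload that a docker-registry secret stores.
# The credentials here are placeholder values, not real ones.
user=myuser
pass=mypassword
auth=$(printf '%s:%s' "$user" "$pass" | base64)
printf '{"auths":{"registry.example.com":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$user" "$pass" "$auth"
```

If the decoded secret's server key or `auth` value does not match what your registry expects, recreate the secret rather than trying to edit the base64 by hand.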
Check events across the namespace for broader patterns:

```
kubectl get events -n my-namespace --sort-by='.lastTimestamp'
```

Try pulling the image directly on the node (if you have SSH access):

```
crictl pull registry.example.com/myapp:v1.2.0
```

If `crictl pull` fails with the same error, the issue is at the container runtime or node level, not Kubernetes. If it succeeds, the problem is likely with `imagePullSecrets` configuration.
If your pod gets past the image pull but then crashes on startup, the issue is different — see Fix: Kubernetes CrashLoopBackOff for debugging container crashes.
## Still Not Working?
If you have checked everything above and the pod is still stuck in ImagePullBackOff, try these less obvious fixes:
**Expired credentials:** Registry tokens and service account keys expire. ECR tokens last 12 hours. GCR service account keys can be rotated or disabled. Regenerate the credentials and recreate the Kubernetes secret.

**Image was deleted from the registry:** Someone may have deleted the tag or the entire repository. Check the registry's web UI or API to confirm the image still exists.

**Registry is down:** Check the registry's status page. Docker Hub has had outages. Your private registry's storage backend might be full or unreachable.

**Node disk pressure:** If the node's disk is full, the container runtime cannot download image layers. Check node conditions:

```
kubectl describe node <node-name> | grep -i pressure
```

If `DiskPressure` is `True`, free up space or add more disk.

**Containerd or Docker daemon issues:** Restart the container runtime on the node:

```
sudo systemctl restart containerd
```

**Image manifest corruption:** Rarely, an image manifest in the registry can be corrupted. Try pushing the image again with a new tag and updating your deployment.

**Pod security policies or admission webhooks:** A webhook might be mutating or rejecting the pod spec before the kubelet sees it. Check for any admission controllers that modify image references:

```
kubectl get mutatingwebhookconfigurations
kubectl get validatingwebhookconfigurations
```

If you are also having trouble with your kubectl context configuration while debugging, see Fix: kubectl context not found. For Docker socket permission issues that might affect local builds, check Fix: Docker Permission Denied Socket.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.