Fix: The Connection to the Server localhost:8080 Was Refused (kubectl)
The Error
You run a kubectl command and get:
```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

Or one of these variations:

```
Unable to connect to the server: dial tcp 127.0.0.1:6443: connect: connection refused
error: You must be logged in to the server (Unauthorized)
Unable to connect to the server: x509: certificate signed by unknown authority
```

All of these mean kubectl cannot reach or authenticate with the Kubernetes API server.
Why This Happens
kubectl needs two things to work: a running API server and a valid kubeconfig that points to it. The localhost:8080 error specifically means kubectl has no kubeconfig at all — it falls back to the default localhost:8080, where nothing is listening.
The 127.0.0.1:6443 variant means kubectl has a kubeconfig, but the API server at that address is down or unreachable. The Unauthorized and x509 errors mean the server is reachable but your credentials or certificates are invalid. (For general SSL certificate issues, see Fix: SSL certificate problem: unable to get local issuer certificate.)
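The fallback behaviour can be sketched in shell. This mimics kubectl's documented lookup order (ignoring the `--kubeconfig` flag); the function name `resolve_kubeconfig` is just for illustration:

```shell
# Sketch of kubectl's kubeconfig lookup order (illustrative, not kubectl's real code):
# 1. $KUBECONFIG if set   2. ~/.kube/config   3. fall back to localhost:8080
resolve_kubeconfig() {
  first="${KUBECONFIG%%:*}"          # KUBECONFIG may hold a colon-separated list
  if [ -n "$KUBECONFIG" ] && [ -f "$first" ]; then
    echo "using \$KUBECONFIG: $KUBECONFIG"
  elif [ -f "$HOME/.kube/config" ]; then
    echo "using default: $HOME/.kube/config"
  else
    echo "no kubeconfig found -- kubectl falls back to http://localhost:8080"
  fi
}
resolve_kubeconfig
```

If the last branch fires, you will see the localhost:8080 error regardless of whether a cluster is running.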
Common causes:
- Your cluster isn’t running. Minikube is stopped, Docker Desktop Kubernetes is disabled, or your kind cluster was deleted.
- KUBECONFIG isn’t set or points to the wrong file. kubectl can’t find your cluster configuration.
- Wrong context selected. Your kubeconfig has multiple clusters and the active context points to one that’s unavailable.
- Expired credentials or certificates. Common with cloud clusters (EKS, GKE, AKS) where auth tokens have a limited lifespan.
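Each cause above maps to a quick check. The sketch below is illustrative: it degrades to plain messages rather than failing when kubectl is absent, and uses `--request-timeout` to avoid a long hang:

```shell
# Quick triage: which of the common causes applies? (illustrative sketch)
echo "kubeconfig in use: ${KUBECONFIG:-$HOME/.kube/config}"
if command -v kubectl >/dev/null 2>&1; then
  # Wrong or missing context?
  kubectl config current-context 2>/dev/null \
    || echo "no current context -- kubeconfig missing or empty"
  # Cluster down?
  if kubectl cluster-info --request-timeout=5s >/dev/null 2>&1; then
    echo "API server reachable"
  else
    echo "API server NOT reachable -- cluster down, or credentials invalid"
  fi
else
  echo "kubectl not found on PATH"
fi
```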
Fix 1: Start Your Cluster
The most common cause is a cluster that isn’t running. Start it based on your setup.
Minikube:
```
minikube start
```

Check status:

```
minikube status
```

If the status shows Stopped, minikube start will bring it back with your previous configuration intact.
Docker Desktop:
- Open Docker Desktop
- Go to Settings > Kubernetes
- Check Enable Kubernetes
- Click Apply & Restart
Wait for the Kubernetes indicator in the bottom-left corner to turn green. This can take a few minutes the first time you enable it.
kind (Kubernetes in Docker):
```
kind get clusters
```

If no clusters are listed, create one:

```
kind create cluster
```

If a cluster exists but its containers are stopped (e.g., after a Docker restart), delete and recreate it:

```
kind delete cluster
kind create cluster
```

Note: kind clusters don’t survive Docker daemon restarts. If you restarted Docker or your machine, you need to recreate the cluster.
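To see whether the kind node containers are still there after a restart, you can list them directly in Docker (the container name kind-control-plane assumes the default cluster name, kind):

```shell
# List kind node containers and their status (skipped if docker is absent)
if command -v docker >/dev/null 2>&1; then
  docker ps -a --filter "name=kind-control-plane" --format '{{.Names}}\t{{.Status}}'
else
  echo "docker not found on PATH"
fi
```

An Exited status here confirms the cluster needs to be recreated.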
k3d / k3s:
```
k3d cluster list
k3d cluster start <cluster-name>
```

After starting any of these, verify the connection:

```
kubectl cluster-info
```

Fix 2: Fix Your KUBECONFIG Path
If kubectl can’t find your kubeconfig file, it defaults to localhost:8080. Check what file it’s using:
```
kubectl config view
```

If this returns an empty config or an error, kubectl isn’t finding your kubeconfig.
The default path is ~/.kube/config. Verify the file exists:
```
ls -la ~/.kube/config
```

If your kubeconfig is at a different path, set the KUBECONFIG environment variable:
```
export KUBECONFIG=/path/to/your/kubeconfig
```

Add this to your ~/.bashrc or ~/.zshrc to make it permanent:

```
echo 'export KUBECONFIG=/path/to/your/kubeconfig' >> ~/.bashrc
source ~/.bashrc
```

Multiple kubeconfig files: You can merge multiple kubeconfigs by separating paths with a colon (semicolon on Windows):
```
# Linux / macOS
export KUBECONFIG=~/.kube/config:~/.kube/config-eks:~/.kube/config-gke

# Windows (PowerShell)
$env:KUBECONFIG = "$HOME\.kube\config;$HOME\.kube\config-eks"
```

To permanently merge them into a single file:
```
KUBECONFIG=~/.kube/config:~/.kube/config-other kubectl config view --flatten > ~/.kube/config-merged
mv ~/.kube/config-merged ~/.kube/config
```

Warning: Back up your existing kubeconfig before merging. A syntax error in the merged file can lock you out of all clusters.
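A minimal backup-and-verify wrapper around the merge (a sketch: the .bak filename is arbitrary, and the parse check is skipped if kubectl isn’t installed):

```shell
# 1. Back up the current kubeconfig before merging
if [ -f ~/.kube/config ]; then cp ~/.kube/config ~/.kube/config.bak; fi

# 2. (run the flatten + mv merge shown above)

# 3. Verify the merged file still parses; if not, restore the backup
if command -v kubectl >/dev/null 2>&1; then
  kubectl config get-contexts >/dev/null 2>&1 \
    || echo "merged kubeconfig did not parse -- run: cp ~/.kube/config.bak ~/.kube/config"
fi
```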
Fix 3: Fix Your kubectl Context
Your kubeconfig may contain multiple clusters. If the active context points to a cluster that no longer exists or is unreachable, you get a connection error.
List all available contexts:
```
kubectl config get-contexts
```

The current context is marked with *. Switch to the correct one:

```
kubectl config use-context minikube
```

Or for Docker Desktop:

```
kubectl config use-context docker-desktop
```

Verify the switch worked:

```
kubectl cluster-info
```

If you don’t know which context to use, check what each one points to:

```
kubectl config view -o jsonpath='{range .contexts[*]}{.name}{"\t"}{.context.cluster}{"\n"}{end}'
```

Fix 4: Docker Desktop Kubernetes Not Enabled or Running
Docker Desktop ships with a built-in Kubernetes cluster, but it’s disabled by default and can get into a bad state.
If Kubernetes is not enabled:
- Open Docker Desktop
- Go to Settings > Kubernetes
- Check Enable Kubernetes
- Click Apply & Restart
If Kubernetes is enabled but not working:
The Kubernetes status indicator in Docker Desktop should be green. If it’s orange or red:
- Go to Settings > Kubernetes
- Click Reset Kubernetes Cluster
- Wait for the status to turn green
If a reset doesn’t fix it, try a full Docker Desktop restart:
- Quit Docker Desktop completely
- Reopen Docker Desktop
- Wait for both Docker and Kubernetes indicators to turn green
If the kubectl context isn’t set to Docker Desktop:
Docker Desktop creates a context called docker-desktop. Make sure it’s active:
```
kubectl config use-context docker-desktop
```

Note: Docker Desktop Kubernetes uses port 6443, not 8080. If you see the localhost:8080 error with Docker Desktop, your kubeconfig is likely missing or not pointing to the Docker Desktop cluster.
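To confirm which API server address your active context actually targets, print the server field from the minified config (the jsonpath assumes the minified view contains exactly one cluster; for Docker Desktop you’d expect something like https://kubernetes.docker.internal:6443 or https://127.0.0.1:6443):

```shell
# Print the API server URL for the current context (skipped if kubectl is absent)
if command -v kubectl >/dev/null 2>&1; then
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
  echo
else
  echo "kubectl not found on PATH"
fi
```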
Fix 5: Certificate and Authentication Issues
The x509: certificate signed by unknown authority and Unauthorized errors mean kubectl reaches the server but can’t authenticate.
Expired or invalid certificates
Check your cluster’s certificate expiry:
```
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d | openssl x509 -text -noout | grep "Not After"
```

For kubeadm-managed clusters, check and renew certificates:

```
sudo kubeadm certs check-expiration
sudo kubeadm certs renew all
```

After renewing, restart the control plane components:

```
sudo systemctl restart kubelet
```

Then update your kubeconfig:

```
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
```

Cloud cluster auth tokens expired
AWS EKS:
Your AWS credentials or IAM token may have expired. Update the kubeconfig:
```
aws eks update-kubeconfig --region <region> --name <cluster-name>
```

Make sure your AWS CLI credentials are current:

```
aws sts get-caller-identity
```

If this fails, refresh your credentials with aws configure or update your AWS SSO session. For general AWS credential issues, see Fix: AWS unable to locate credentials.
Google GKE:
```
gcloud container clusters get-credentials <cluster-name> --region <region> --project <project-id>
```

If your gcloud auth is expired:

```
gcloud auth login
```

Azure AKS:

```
az aks get-credentials --resource-group <rg-name> --name <cluster-name>
```

If your Azure CLI session is expired:

```
az login
```

Wrong user credentials in kubeconfig
If someone manually edited the kubeconfig or you copied it from another machine, the user credentials may not match the cluster. Regenerate the kubeconfig using the commands above for your cluster type.
Still Not Working?
Check if the API server is actually running
On a self-managed cluster, check the API server pod or process:
```
# kubeadm clusters
sudo crictl ps | grep kube-apiserver

# Or check the kubelet service
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager -n 50
```

If the API server isn’t running, check its logs:

```
sudo journalctl -u kubelet --no-pager | grep "kube-apiserver"
```

Firewall or security group blocking the port
If your cluster is on a remote machine, verify the API server port is accessible:
```
curl -k https://<server-ip>:6443/healthz
```

An ok response means the server is reachable. A timeout means a firewall is blocking port 6443.
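curl’s exit code distinguishes the two failure modes more precisely than eyeballing the output: with --max-time set, exit 7 means the connection was actively refused (server down or wrong port) and exit 28 means the request timed out (typically a firewall silently dropping packets). The 127.0.0.1 address below is a placeholder for your server IP:

```shell
# Classify the failure mode by curl exit code (address is a placeholder)
SERVER="127.0.0.1"
curl -k --max-time 5 "https://${SERVER}:6443/healthz" >/dev/null 2>&1
case $? in
  0)  echo "reachable" ;;
  7)  echo "connection refused -- nothing listening on 6443" ;;
  28) echo "timed out -- likely a firewall dropping packets" ;;
  *)  echo "other failure (TLS, DNS resolution, ...)" ;;
esac
```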
For cloud clusters, check that the security group or firewall rules allow inbound traffic on the API server port from your IP address. EKS, GKE, and AKS all offer options to restrict API server access to specific CIDR ranges.
VPN blocking the connection
If you connect to your cluster over a VPN and the connection suddenly stopped working:
- Check your VPN is connected. Many corporate clusters are only reachable on the internal network.
- Check for split-tunnel issues. Some VPN configurations route all traffic through the VPN, while others only route specific subnets. If your cluster IP isn’t in the VPN’s routed subnets, traffic goes over the public internet and gets blocked.
- DNS resolution may differ. The cluster hostname might resolve to a different IP inside vs. outside the VPN. Compare the results while connected and while disconnected:

```
nslookup <cluster-hostname>
```
Proxy settings interfering
If you’re behind a corporate proxy, kubectl traffic might be getting routed through it. The API server is typically not reachable via a web proxy. Add your cluster IP or hostname to the no-proxy list:
```
export NO_PROXY=$NO_PROXY,<cluster-ip>,<cluster-hostname>
export no_proxy=$no_proxy,<cluster-ip>,<cluster-hostname>
```

For Minikube specifically:

```
export NO_PROXY=$NO_PROXY,$(minikube ip)
```

Get full diagnostic output
If you’ve partially fixed the issue and kubectl connects but things still seem wrong, run a full diagnostic dump:
```
kubectl cluster-info dump
```

If kubectl still can’t connect at all, start with these client-side checks:

```
kubectl cluster-info
kubectl version --client
kubectl config current-context
kubectl config view --minify
```

The --minify flag shows only the current context’s config, which makes it easier to spot issues with the server address, certificate path, or user credentials.
Related: If you’re getting Docker socket errors, see Fix: Docker Permission Denied.
Related Articles
Fix: Kubernetes Pod CrashLoopBackOff (Back-off restarting failed container)
How to fix the Kubernetes CrashLoopBackOff error when a pod repeatedly crashes and Kubernetes keeps restarting it with increasing back-off delays.
Fix: YAML 'mapping values are not allowed here' and Other YAML Syntax Errors
How to fix 'mapping values are not allowed here', 'could not find expected :', 'did not find expected key', and other YAML indentation and syntax errors in Docker Compose, Kubernetes manifests, GitHub Actions, and config files.
Fix: Docker Container Exited (137) OOMKilled / Killed Signal 9
How to fix Docker container 'Exited (137)', OOMKilled, and 'Killed' signal 9 errors caused by out-of-memory conditions in Docker, Docker Compose, and Kubernetes.
Fix: Docker Volume Permission Denied – Cannot Write to Mounted Volume
How to fix Docker permission denied errors on mounted volumes caused by UID/GID mismatch, read-only mounts, or SELinux labels.