Fix: SSH Connection Timed Out or Connection Refused

The Error

You try to connect to a remote server over SSH and get one of these errors:

ssh: connect to host 192.168.1.100 port 22: Connection timed out
ssh: connect to host 192.168.1.100 port 22: Connection refused
ssh: connect to host 192.168.1.100 port 22: No route to host

All three errors prevent you from reaching the server, but each one points to a different underlying problem:

  • Connection timed out means your packets left your machine but never got a response. Something between you and the server is silently dropping the traffic. This is almost always a firewall or security group issue.
  • Connection refused means your packets reached the target machine, but nothing is listening on port 22. Either the SSH daemon is not running, or it is configured to listen on a different port.
  • No route to host means your operating system cannot find a network path to the destination IP. The address is wrong, the host is powered off, or a firewall is actively rejecting packets with an ICMP unreachable response.

Understanding which error you have narrows down the fix dramatically. Read on for targeted solutions for each scenario.
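The error-to-cause mapping above is mechanical enough to script. Here is a minimal sketch; classify_ssh_error is a hypothetical helper name used for illustration, not part of OpenSSH:

```shell
# Sketch: map an ssh error message to its most likely cause.
# classify_ssh_error is a hypothetical helper, not part of OpenSSH.
classify_ssh_error() {
    case "$1" in
        *"Connection timed out"*)
            echo "likely a firewall or security group silently dropping packets" ;;
        *"Connection refused"*)
            echo "host reachable, but nothing listening on that port" ;;
        *"No route to host"*)
            echo "no network path: wrong IP, host down, or missing VPN" ;;
        *)
            echo "unrecognized error" ;;
    esac
}

classify_ssh_error "ssh: connect to host 192.168.1.100 port 22: Connection refused"
# prints: host reachable, but nothing listening on that port
```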

Why This Happens

SSH connections can fail at several layers, and the specific error message tells you which layer is broken.

Connection refused indicates the TCP handshake was explicitly rejected. The most frequent causes are:

  • The SSH daemon (sshd) is not installed or is not running on the server.
  • The sshd process is listening on a non-standard port (for example, 2222 instead of the default 22).
  • A host-level firewall like UFW, iptables, or firewalld is rejecting inbound connections on port 22.
  • The server ran out of disk space, preventing sshd from starting after a reboot.

Connection timed out indicates packets went out but nothing came back. This typically points to:

  • A cloud security group (AWS, GCP, Azure) that does not allow inbound traffic on port 22.
  • A network-level firewall silently dropping packets before they reach the server.
  • An incorrect IP address or hostname that resolves to the wrong destination.
  • The server sits on a private network and you are trying to connect from outside that network without a VPN or bastion host.

No route to host indicates your OS does not know how to reach the target IP. Common causes include:

  • A wrong or mistyped IP address.
  • The target machine is powered off or unreachable.
  • A missing or disconnected VPN connection to a private network.
  • A firewall sending ICMP “host unreachable” responses rather than silently dropping packets.

These same networking fundamentals apply to other connection errors too. If you have seen similar issues with Docker containers failing to connect, the debugging approach is much the same: verify the service is running, verify the port is open, and verify the network path exists.

Fix 1: Verify the Server Is Reachable

Before debugging SSH itself, confirm you can reach the server at all.

ping -c 4 192.168.1.100

If ping succeeds, the machine is up and reachable at the network level. If it fails, you may have a network-level problem that has nothing to do with SSH — but do not conclude the host is down just yet.

However, many cloud providers block ICMP (ping) by default. A failed ping does not necessarily mean the host is down. Use a TCP-level probe on port 22 instead:

# Using netcat
nc -zv -w 5 192.168.1.100 22

# Using telnet
telnet 192.168.1.100 22

# Using nmap (shows the port state)
nmap -p 22 192.168.1.100

With nc, a successful connection prints Connection to 192.168.1.100 22 port [tcp/ssh] succeeded!. A timeout or refusal confirms the port is not reachable.
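If none of those tools are installed, bash itself can make the probe through its built-in /dev/tcp pseudo-device. A sketch — check_port is a hypothetical helper, and /dev/tcp requires bash (which is why the probe is wrapped in an explicit bash -c):

```shell
# Sketch: TCP probe without netcat, using bash's /dev/tcp pseudo-device.
# check_port is a hypothetical helper name for illustration.
check_port() {
    host=$1
    port=$2
    # /dev/tcp is a bash feature, so invoke bash explicitly
    if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "open"
    else
        echo "closed or filtered"
    fi
}

check_port 192.168.1.100 22
```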

Check DNS Resolution

If you are connecting by hostname rather than IP, verify the hostname resolves correctly:

nslookup myserver.example.com
# or
dig myserver.example.com +short

If the returned IP address does not match the server you expect, you have a DNS problem, not an SSH problem. This is similar to the kind of issue you might see when environment variables resolve to unexpected values — the input looks right, but the underlying resolution is wrong.

Check /etc/hosts as well. A stale entry there overrides DNS:

grep myserver /etc/hosts
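On Linux you can also resolve through NSS with getent, which follows the same lookup order (including /etc/hosts) that ssh itself uses via getaddrinfo — so its answer reflects stale hosts entries as well as DNS:

```shell
# getent resolves through NSS, the same lookup path ssh uses, so its
# answer reflects /etc/hosts overrides as well as DNS.
host=myserver.example.com   # substitute your hostname

getent hosts "$host" || echo "no NSS result for $host"
```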

Fix 2: Check if sshd Is Running

If you get Connection refused, the SSH daemon most likely is not running. You need an alternative way to access the server — cloud console, out-of-band management, or physical access — and then check the service status.

# On systemd-based systems (Ubuntu, Debian, CentOS 7+, RHEL, Fedora)
sudo systemctl status sshd

# Some distributions name the service 'ssh' instead of 'sshd'
sudo systemctl status ssh

If the service is inactive or failed, start it:

sudo systemctl start sshd
sudo systemctl enable sshd

If OpenSSH server is not installed at all:

# Debian / Ubuntu
sudo apt update && sudo apt install openssh-server -y

# CentOS / RHEL / Fedora (use yum instead of dnf on CentOS/RHEL 7)
sudo dnf install openssh-server -y

# Arch Linux
sudo pacman -S openssh

After installing, start and enable the service:

sudo systemctl start sshd
sudo systemctl enable sshd

Confirm sshd is listening on the expected port:

sudo ss -tlnp | grep sshd

Expected output:

LISTEN  0  128  0.0.0.0:22  0.0.0.0:*  users:(("sshd",pid=1234,fd=3))

If the port number shown is not 22, the server uses a custom SSH port. See Fix 4.
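If you want to extract that port programmatically, the ss output can be parsed with awk. A sketch — parse_ssh_port is a hypothetical helper, demonstrated here against the sample line above:

```shell
# Sketch: pull the listening port out of an ss line like the sample above.
# parse_ssh_port is a hypothetical helper name for illustration.
parse_ssh_port() {
    # field 4 is the local address (e.g. 0.0.0.0:22 or [::]:22);
    # the port is whatever follows the last colon
    awk '/sshd/ { n = split($4, a, ":"); print a[n] }'
}

sudo ss -tlnp | parse_ssh_port
```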

Fix 3: Check Firewall Rules on the Server

A host-level firewall can block SSH connections even when sshd is running and healthy. You need to verify that port 22 (or your custom SSH port) is allowed through.

UFW (Ubuntu / Debian)

sudo ufw status

If UFW is active and port 22 is not listed as allowed:

sudo ufw allow 22/tcp
sudo ufw reload

Or allow by service name:

sudo ufw allow ssh

firewalld (CentOS / RHEL / Fedora)

sudo firewall-cmd --list-all

If SSH is not in the allowed services:

sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload

iptables

sudo iptables -L INPUT -n | grep 'dpt:22'

If there is no ACCEPT rule for port 22:

sudo iptables -I INPUT -p tcp --dport 22 -j ACCEPT

To persist across reboots:

# Debian / Ubuntu
sudo apt install iptables-persistent -y
sudo netfilter-persistent save

# CentOS / RHEL (requires the iptables-services package)
sudo service iptables save

nftables

sudo nft list ruleset | grep 22

If port 22 is not allowed, add the appropriate rule in your nftables configuration, or use firewalld or UFW as a frontend.
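Because each firewall has its own syntax, it can help to keep the equivalents side by side. The sketch below only prints the relevant command rather than running it; ssh_allow_cmd is a hypothetical helper, and the nft line assumes an inet filter table with an input chain already exists:

```shell
# Sketch: print the command that opens an SSH port for a given firewall.
# ssh_allow_cmd is a hypothetical helper name for illustration.
ssh_allow_cmd() {
    fw=$1
    port=${2:-22}
    case "$fw" in
        ufw)       echo "ufw allow $port/tcp" ;;
        firewalld) echo "firewall-cmd --permanent --add-port=$port/tcp && firewall-cmd --reload" ;;
        iptables)  echo "iptables -I INPUT -p tcp --dport $port -j ACCEPT" ;;
        # assumes an 'inet filter' table with an 'input' chain exists
        nft)       echo "nft add rule inet filter input tcp dport $port accept" ;;
        *)         echo "unknown firewall: $fw" >&2; return 1 ;;
    esac
}

ssh_allow_cmd ufw 2222
# prints: ufw allow 2222/tcp
```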

Fix 4: Connect on the Correct SSH Port

Many administrators change the SSH port from the default 22 to a non-standard port for security purposes. If the port was changed, connecting on 22 gives you Connection refused.

Check the server configuration (if you have access through another method):

sudo grep -i "^Port" /etc/ssh/sshd_config

If it shows Port 2222 (or any other number), connect using that port:

ssh -p 2222 user@192.168.1.100

To avoid specifying the port every time, add the host to your ~/.ssh/config:

Host myserver
    HostName 192.168.1.100
    User admin
    Port 2222

Then connect simply with:

ssh myserver
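To confirm the config entry is actually being picked up, ssh -G prints the final configuration ssh would use for a host, without connecting:

```shell
# ssh -G resolves ~/.ssh/config and defaults for a host without
# connecting -- useful for verifying HostName, Port, and User took effect.
ssh -G myserver | grep -Ei '^(hostname|port|user) '
```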

Fix 5: Fix Cloud Provider Security Groups

Cloud providers block all inbound traffic by default. If you launched a virtual machine and cannot SSH into it, you almost certainly need to update a security group or firewall rule. This is one of the most common causes of Connection timed out errors on cloud instances, similar to how misconfigured S3 bucket policies block access until the correct permissions are in place.

AWS Security Groups

  1. Open the EC2 console.
  2. Select your instance and click the Security tab.
  3. Click the security group link.
  4. Edit Inbound rules.
  5. Add a rule: Type SSH, Protocol TCP, Port 22, Source My IP.

Using the AWS CLI:

# Find the security group attached to your instance
aws ec2 describe-instances --instance-id i-0abcdef1234567890 \
  --query 'Reservations[].Instances[].SecurityGroups[].GroupId' --output text

# Allow SSH from your current IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr $(curl -s https://checkip.amazonaws.com)/32

Also check Network ACLs (NACLs) on the subnet. NACLs are stateless, so you need both an inbound rule allowing port 22 and an outbound rule allowing ephemeral ports (1024-65535) for the return traffic.

GCP Firewall Rules

# List firewall rules that affect port 22
gcloud compute firewall-rules list --filter="allowed[].ports:22"

# Create a rule allowing SSH from your IP
gcloud compute firewall-rules create allow-ssh \
  --direction=INGRESS \
  --priority=1000 \
  --network=default \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=$(curl -s https://checkip.amazonaws.com)/32

In GCP, the target instance must have the correct network tag that matches the firewall rule, or the rule must apply to all instances in the network.

Azure Network Security Groups (NSG)

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name allow-ssh \
  --protocol tcp \
  --direction inbound \
  --priority 1000 \
  --source-address-prefixes '<your-ip>/32' \
  --destination-port-ranges 22 \
  --access allow

Setting the source to 0.0.0.0/0 allows SSH from the entire internet. This works for quick debugging but is a security risk in production. Always restrict access to your IP or a VPN CIDR range.

Fix 6: Fix SSH Key Issues

If you can reach the server but authentication fails, the problem is with your SSH keys rather than the connection itself.

Check that your key is being offered:

ssh -vvv user@192.168.1.100 2>&1 | grep "Offering public key"

If no key is offered, SSH cannot find your private key. Common fixes:

# Explicitly specify the key
ssh -i ~/.ssh/my_key user@192.168.1.100

# Check that the key file has correct permissions
chmod 600 ~/.ssh/id_rsa
chmod 700 ~/.ssh

On the server side, verify the authorized_keys file:

cat ~/.ssh/authorized_keys

Make sure the public key is present and the file has correct permissions:

chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh
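You can audit those permissions mechanically instead of eyeballing them. A sketch — check_mode is a hypothetical helper, and stat -c is GNU coreutils syntax (Linux):

```shell
# Sketch: flag SSH files whose permissions differ from what sshd expects.
# check_mode is a hypothetical helper; stat -c is GNU coreutils (Linux).
check_mode() {
    path=$1
    want=$2
    have=$(stat -c %a "$path" 2>/dev/null)
    if [ "$have" = "$want" ]; then
        echo "$path: OK ($have)"
    else
        echo "$path: expected $want, found ${have:-missing}"
    fi
}

check_mode ~/.ssh 700
check_mode ~/.ssh/authorized_keys 600
```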

If you are dealing with Git over SSH and seeing Permission denied (publickey), the troubleshooting steps overlap significantly with the Git SSH permission denied guide.

Add the key to your SSH config so you do not have to specify -i every time:

Host myserver
    HostName 192.168.1.100
    User admin
    IdentityFile ~/.ssh/my_key

Fix 7: Adjust ConnectTimeout and Keep-Alive Settings

If connections work sometimes but time out on slow networks, or if established sessions drop after sitting idle, adjust your SSH client configuration.

Increase the connection timeout

By default, ssh sets no ConnectTimeout of its own and falls back to the operating system's TCP connection timeout, which can exceed two minutes on Linux. Set an explicit value so failures surface quickly, or raise it for servers on high-latency networks:

ssh -o ConnectTimeout=30 user@192.168.1.100

Prevent idle disconnects with keep-alive

Firewalls and NAT gateways often close idle TCP connections after a period of inactivity. Send periodic keep-alive packets to prevent this:

Add to ~/.ssh/config:

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
    ConnectTimeout 30

This sends a keep-alive packet every 60 seconds. If 3 consecutive packets receive no response (180 seconds total), SSH disconnects cleanly.

You can also set this per-connection:

ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 user@server

Server-side keep-alive

If you manage the server, add these to /etc/ssh/sshd_config:

ClientAliveInterval 60
ClientAliveCountMax 3

Then restart the SSH daemon:

sudo systemctl restart sshd

Fix 8: Configure ~/.ssh/config for Complex Setups

The ~/.ssh/config file saves you from typing long commands and helps manage connections to multiple servers. A well-organized config file prevents many common connection issues.

# Default settings for all hosts
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
    ConnectTimeout 20
    AddKeysToAgent yes

# Production server on custom port
Host prod
    HostName prod.example.com
    User deploy
    Port 2222
    IdentityFile ~/.ssh/prod_key

# Internal server via bastion host
Host internal
    HostName 10.0.1.50
    User admin
    ProxyJump bastion

# Bastion / jump host
Host bastion
    HostName bastion.example.com
    User jumpuser
    IdentityFile ~/.ssh/bastion_key

With this config, connecting to any host is a single command:

ssh prod
ssh internal

The ProxyJump directive handles multi-hop connections automatically. SSH connects to the bastion first, then tunnels through to the internal server. This is the recommended way to access servers on private networks without a VPN.

You can also chain multiple jump hosts:

ssh -J user@jump1.example.com,user@jump2.example.com user@10.0.1.50

Fix 9: Check sshd_config for Restrictions

The SSH daemon configuration file can restrict who is allowed to connect, from where, and how. If your connection is refused despite sshd running, review /etc/ssh/sshd_config for restrictive settings.

ListenAddress

ListenAddress 0.0.0.0

If this is set to 127.0.0.1, the SSH daemon only accepts connections from localhost. Change it to 0.0.0.0 to listen on all IPv4 interfaces, or set it to a specific interface IP.

AllowUsers / AllowGroups

AllowUsers admin deploy
AllowGroups sshusers

If these directives exist, only the listed users or group members can connect. Everyone else is rejected. Add your username or remove the restriction.

PasswordAuthentication

PasswordAuthentication no

If password authentication is disabled and you do not have a key configured, you get Permission denied (publickey). Set up key-based authentication or temporarily enable password auth to regain access.

MaxStartups

MaxStartups 10:30:60

This throttles concurrent unauthenticated connections. Under brute-force attacks or with many simultaneous connection attempts, legitimate connections can be dropped. The format is start:rate:full — once start unauthenticated connections are pending, new ones are dropped with a probability that begins at rate percent and rises until every new connection is refused at full.
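OpenSSH ramps that drop probability linearly: rate percent once start connections are pending, rising to 100 percent at full. A sketch of the arithmetic — drop_chance is a hypothetical helper for illustration:

```shell
# Sketch: percent chance sshd drops a new unauthenticated connection
# under MaxStartups start:rate:full, given n currently pending.
# drop_chance is a hypothetical helper; pure awk arithmetic.
drop_chance() {
    awk -v n="$1" -v start="$2" -v rate="$3" -v full="$4" 'BEGIN {
        if (n < start)  { print 0;   exit }
        if (n >= full)  { print 100; exit }
        # linear ramp from rate% at start to 100% at full
        print rate + (100 - rate) * (n - start) / (full - start)
    }'
}

drop_chance 35 10 30 60   # 35 pending connections, MaxStartups 10:30:60
# prints: 65
```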

After editing sshd_config, always validate before restarting:

sudo sshd -t

If there are no errors:

sudo systemctl restart sshd

Important: Always keep an existing SSH session open when changing sshd_config. If you break the configuration, you still have a working session to fix it. If you lock yourself out completely, use your cloud provider’s console or out-of-band access.

Fix 10: Troubleshoot Network and VPN Issues

If the server is on a private network behind a VPN, connectivity depends on the VPN tunnel being up and correctly configured.

  • Verify your VPN client is connected and the tunnel is established.
  • Check that traffic to the server’s subnet is routed through the VPN interface:
# Linux
ip route get 10.0.1.50

# macOS
route get 10.0.1.50
  • Some VPNs use split tunneling, where only certain subnets route through the VPN. If the server’s subnet is not included, your SSH traffic goes over the public internet and gets blocked.
  • Corporate VPNs sometimes block SSH (port 22) even within the tunnel. Contact your network administrator to confirm.

Your IP changed

If you restricted SSH access to a specific source IP in a security group or firewall rule, and your ISP assigned you a new IP address (common with residential connections), your old rule no longer matches. Check your current public IP and update the rule:

curl -s https://checkip.amazonaws.com

SELinux blocking a non-standard port

On RHEL, CentOS, or Fedora systems with SELinux enabled, changing the SSH port requires updating SELinux policy. Without this, sshd cannot bind to the new port:

# Check for SELinux denials
sudo ausearch -m avc | grep sshd

# Allow sshd on a custom port
sudo semanage port -a -t ssh_port_t -p tcp 2222

Still Not Working?

If none of the above fixes resolve your issue, run SSH with maximum verbosity to see exactly where the connection stalls:

ssh -vvv user@192.168.1.100

This outputs every step of the connection process. Look for where it hangs:

  • Stuck on Connecting to 192.168.1.100 port 22... — network or firewall problem. Revisit Fix 1, Fix 3, and Fix 5.
  • Connects but hangs on SSH2_MSG_KEX_ECDH_REPLY — possible MTU issue. Try reducing the MTU or using ssh -o IPQoS=none user@server.
  • Gets to authentication and fails — credential problem, not a connection problem. Check your keys (Fix 6) and sshd_config restrictions (Fix 9).

Check whether the server’s disk is full. A full disk can prevent sshd from starting or handling new connections:

df -h
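To spot the problem at a glance, you can parse df output for filesystems over a threshold. A sketch — disk_alerts is a hypothetical helper that reads `df -P` output on stdin:

```shell
# Sketch: warn about filesystems above a usage threshold.
# disk_alerts is a hypothetical helper; reads `df -P` output on stdin,
# threshold in percent as $1 (default 90).
disk_alerts() {
    awk -v t="${1:-90}" 'NR > 1 {
        use = $5
        sub(/%/, "", use)                     # strip the % sign
        if (use + 0 >= t)
            printf "%s is %s%% full\n", $6, use
    }'
}

df -P | disk_alerts 90
```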

If the server has an Nginx reverse proxy returning 502 errors on web traffic as well, the underlying issue might be that the entire server is under resource pressure — out of memory, out of disk, or CPU-saturated. Address the resource issue first.

As a last resort, check the system logs on the server for clues:

# The unit is named 'ssh' rather than 'sshd' on Debian/Ubuntu
sudo journalctl -u sshd --no-pager -n 50
sudo dmesg | tail -50

Share the verbose SSH output (with sensitive information redacted) and the server-side logs when asking for help in forums or from your team. These two pieces of information together almost always reveal the root cause.

Related Articles