
Fix: Ansible UNREACHABLE – Failed to Connect to the Host via SSH

FixDevs

Quick Answer

Ansible reports UNREACHABLE when it cannot open a connection to a host at all. The fixes below cover the usual causes: SSH connection failures, wrong credentials, host key issues, and Python interpreter problems on remote hosts.

The Error

You run an Ansible playbook or ad-hoc command and get:

fatal: [192.168.1.100]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.1.100 port 22: Connection timed out",
    "unreachable": true
}

Or one of these variations:

fatal: [webserver]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).",
    "unreachable": true
}
fatal: [dbserver]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Host key verification failed.",
    "unreachable": true
}

The UNREACHABLE status means Ansible could not establish a connection to the remote host at all. No tasks ran, no modules were executed — Ansible gave up before it even started doing work. This is different from a FAILED status, which means Ansible connected successfully but a specific task encountered an error during execution.
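
If you want a play to keep going past hosts it cannot reach and deal with them later, Ansible 2.7+ also supports the ignore_unreachable keyword; a minimal sketch:

```yaml
- hosts: webservers
  ignore_unreachable: true   # unreachable hosts no longer abort the play
  tasks:
    - name: Ping whatever is reachable
      ping:
```

This does not fix the underlying connection problem, but it keeps one dead host from halting a rollout across the rest of the group.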

Why This Happens

Ansible relies on SSH (or WinRM for Windows hosts) to communicate with remote machines. An UNREACHABLE error means the SSH connection itself failed before Ansible could send any commands. There are several layers where this can break down.

Network-level problems prevent your control node from reaching the target host at all. The host might be powered off, the IP address might be wrong, a firewall might be blocking port 22, or the host sits on a private network that your control node cannot route to. These are the same fundamental issues covered in the SSH connection timed out guide, but surfaced through Ansible’s error reporting.

Authentication failures mean the SSH connection reached the server, but the server refused your credentials. You might be using the wrong SSH key, the wrong username, or the remote host does not have your public key in its authorized_keys file. If password authentication is disabled on the server and you have no valid key, Ansible gets a Permission denied rejection.

Host key verification failures happen when the remote server’s SSH fingerprint is unknown or has changed since the last connection. By default, SSH refuses to connect to hosts whose fingerprint is not in your known_hosts file or has changed unexpectedly. In an automated Ansible environment, this is a frequent source of UNREACHABLE errors, especially when provisioning new servers or rebuilding existing ones.

Python interpreter problems on the remote host can cause Ansible to report UNREACHABLE even after a successful SSH connection. Ansible needs a Python interpreter on the target machine to run its modules. If Python is not installed, is installed at an unexpected path, or the wrong version is present, Ansible may fail with a confusing UNREACHABLE message instead of a clear module error.

Privilege escalation issues with become (sudo) can also surface as UNREACHABLE in some scenarios. If Ansible connects via SSH but then fails to escalate privileges because the sudo password is missing or incorrect, certain connection plugins report this as unreachable rather than as a task failure.

Fix 1: Verify Basic SSH Connectivity

Before debugging Ansible configuration, confirm you can SSH into the host manually from the same machine where you run Ansible:

ssh -o ConnectTimeout=10 user@192.168.1.100

If this works, Ansible should be able to connect too (assuming correct configuration). If this fails, you have an SSH problem that is independent of Ansible. Check the host’s network reachability, firewall rules, and SSH daemon status. The SSH connection timed out guide covers all of these scenarios in detail.

If manual SSH works but Ansible does not, the problem is in how Ansible is configured — the username, key file, port, or connection parameters differ from what you used in your manual test.

Fix 2: Check Your Inventory File

Ansible reads host information from an inventory file. A misconfigured inventory is one of the most common causes of UNREACHABLE errors.

INI format

[webservers]
192.168.1.100 ansible_user=deploy ansible_ssh_private_key_file=~/.ssh/deploy_key
192.168.1.101 ansible_user=deploy ansible_port=2222

[dbservers]
db1.example.com ansible_user=admin

YAML format

all:
  children:
    webservers:
      hosts:
        192.168.1.100:
          ansible_user: deploy
          ansible_ssh_private_key_file: ~/.ssh/deploy_key
        192.168.1.101:
          ansible_user: deploy
          ansible_port: 2222
    dbservers:
      hosts:
        db1.example.com:
          ansible_user: admin

Common inventory mistakes that lead to UNREACHABLE:

  • Wrong hostname or IP address. Double-check for typos.
  • Missing ansible_user. If not set, Ansible uses the current local username, which may not exist on the remote host.
  • Wrong ansible_port. If the remote SSH daemon listens on a non-standard port and you do not specify it, Ansible connects to port 22 and times out.
  • Wrong ansible_ssh_private_key_file. The path must point to a valid private key that the remote host trusts.

Test your inventory with a simple ping:

ansible all -i inventory.ini -m ping

If only some hosts are unreachable, the problem is host-specific. If all hosts fail, the issue is likely a global configuration problem in your ansible.cfg or group variables.

Real-world scenario: You spin up 10 new EC2 instances with Terraform, add them to your inventory, and immediately run a playbook. It fails with UNREACHABLE on all of them because the default inventory uses ansible_user=ubuntu but the AMI expects ec2-user.

Fix 3: Fix SSH Key Authentication

Ansible uses SSH keys by default. If the remote host does not have your public key, or Ansible is using the wrong key, you get Permission denied.

Check which key Ansible is using:

ansible webserver -i inventory.ini -m ping -vvvv 2>&1 | grep "SSH"

Explicitly specify the correct key in your inventory or command:

ansible all -i inventory.ini -m ping --private-key=~/.ssh/deploy_key

Or set it in your inventory file:

[webservers:vars]
ansible_ssh_private_key_file=~/.ssh/deploy_key

Make sure the key file has correct permissions on the control node:

chmod 600 ~/.ssh/deploy_key
chmod 700 ~/.ssh

On the remote host, verify the public key is in the authorized_keys file for the correct user:

cat /home/deploy/.ssh/authorized_keys

If the public key is missing, add it:

ssh-copy-id -i ~/.ssh/deploy_key.pub deploy@192.168.1.100

SSH key issues often overlap with those described in the Git SSH permission denied guide, since the underlying SSH authentication mechanism is the same.

Fix 4: Disable or Configure Host Key Checking

When Ansible connects to a host for the first time, SSH prompts you to verify the host’s fingerprint. In an automated context, nobody is there to type “yes”, so the connection fails with Host key verification failed.

The quickest fix is to disable host key checking in your ansible.cfg:

[defaults]
host_key_checking = False

Or set the environment variable:

export ANSIBLE_HOST_KEY_CHECKING=False

Or pass it as an SSH argument in your inventory:

[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

Security warning: Disabling host key checking removes protection against man-in-the-middle attacks. This is acceptable in development, test environments, and ephemeral cloud infrastructure, but in production you should pre-populate known_hosts instead:

# Scan and add the host key before running Ansible
ssh-keyscan -H 192.168.1.100 >> ~/.ssh/known_hosts

If a server was rebuilt and its host key changed, remove the old key:

ssh-keygen -R 192.168.1.100

Then add the new one with ssh-keyscan or connect manually and accept the new fingerprint.
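
If you manage a per-project known_hosts file (common in CI), ssh-keygen's -f flag targets that file instead of the default ~/.ssh/known_hosts. A self-contained demonstration — the throwaway key generated here merely stands in for the server's real host key:

```shell
# Generate a throwaway key to play the role of the server's host key.
ssh-keygen -t ed25519 -N '' -q -f ./demo_host_key
# Build a per-project known_hosts file containing a (now stale) entry.
printf '192.168.1.100 %s\n' "$(cut -d' ' -f1-2 ./demo_host_key.pub)" > ./known_hosts.ci
# -f points -R at that specific file rather than ~/.ssh/known_hosts.
ssh-keygen -R 192.168.1.100 -f ./known_hosts.ci
grep '192.168.1.100' ./known_hosts.ci || echo 'entry removed'
```

After the -R call, ssh-keygen leaves a backup of the previous file with a .old suffix.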

Fix 5: Set the Correct Python Interpreter

Ansible requires Python on the remote host to execute modules. If Python is not installed or is at a path Ansible does not expect, you can get UNREACHABLE or MODULE FAILURE errors. This is especially common on minimal server images, containers, and newer distributions that ship only Python 3.

Check if Python is installed on the remote host:

ssh deploy@192.168.1.100 "which python3; python3 --version"

If Python is not found, install it. The steps depend on the distribution — see the Python command not found guide for instructions across different operating systems.

Tell Ansible where to find Python by setting ansible_python_interpreter in your inventory:

[webservers:vars]
ansible_python_interpreter=/usr/bin/python3

Or use the auto discovery mode, which is the default in Ansible 2.8+ but can be set explicitly:

[webservers:vars]
ansible_python_interpreter=auto

The auto mode tries a list of common Python paths. If your system uses an unusual path (for example, /usr/local/bin/python3.11 on a custom build), set it explicitly.
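
That override can live directly in the inventory; reusing the hypothetical custom-build path from above:

```ini
[webservers]
192.168.1.100 ansible_python_interpreter=/usr/local/bin/python3.11
```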

For Alpine Linux or other minimal images where Python is not installed by default, you can use the raw module to bootstrap Python before running regular Ansible tasks:

- name: Bootstrap Python on minimal hosts
  hosts: alpine_servers
  gather_facts: false
  tasks:
    - name: Install Python
      raw: apk add --no-cache python3
      changed_when: true
    - name: Gather facts after Python is available
      setup:

Note the gather_facts: false setting. Without it, Ansible tries to gather facts (which requires Python) before running any tasks, creating a chicken-and-egg problem.

Fix 6: Fix become and sudo Issues

If your playbook uses become: yes to escalate privileges, several things can go wrong.

Missing sudo password

If the remote user requires a password for sudo, Ansible needs to know it:

ansible-playbook playbook.yml -i inventory.ini --ask-become-pass

Or set it in your inventory (less secure):

[webservers:vars]
ansible_become_password=mysudopassword

A better approach is to use Ansible Vault to encrypt the password:

ansible-vault encrypt_string 'mysudopassword' --name 'ansible_become_password'
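
The command prints a !vault block that you paste into a vars file. A sketch of where it typically lands — the group_vars path is one common convention, and the ciphertext shown is a placeholder, not real output:

```yaml
# group_vars/webservers/vault.yml (hypothetical location)
ansible_become_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <ciphertext produced by ansible-vault encrypt_string>
```

Runs that use the variable then need the vault password, supplied via --ask-vault-pass or --vault-password-file.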

User not in sudoers

If the remote user is not in the sudoers file, you get a privilege escalation failure. On the remote host, add the user:

sudo usermod -aG sudo deploy    # Debian/Ubuntu
sudo usermod -aG wheel deploy   # CentOS/RHEL/Fedora

Or create a sudoers entry for passwordless sudo (common for automation accounts). Validate it before logging out — a syntax error in a sudoers file can lock you out of sudo entirely:

echo "deploy ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/deploy
sudo chmod 440 /etc/sudoers.d/deploy
sudo visudo -cf /etc/sudoers.d/deploy

Wrong become method

Ansible defaults to sudo for privilege escalation, but some systems use su, pbrun, doas, or other methods. Specify the correct method:

[webservers:vars]
ansible_become_method=su

Fix 7: Fix Connection Timeout Settings

If hosts are reachable but the connection is slow (high-latency networks, overloaded servers, VPN tunnels), Ansible may time out before SSH finishes connecting.

Increase the timeout in ansible.cfg:

[defaults]
timeout = 30

The default is 10 seconds. For slow networks, 30 or 60 seconds may be necessary.

You can also set it per-command:

ansible all -i inventory.ini -m ping -T 30

For persistent SSH connections that reuse an existing channel (reducing handshake overhead on repeated tasks), enable SSH pipelining and multiplexing in ansible.cfg:

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r

Pipelining also improves performance by reducing the number of SSH operations per task. It requires requiretty to be disabled in /etc/sudoers on the remote host (comment out or remove the Defaults requiretty line).

Fix 8: Configure ansible.cfg Correctly

The ansible.cfg file controls global behavior. Ansible looks for this file in the following order:

  1. ANSIBLE_CONFIG environment variable
  2. ansible.cfg in the current directory
  3. ~/.ansible.cfg (home directory)
  4. /etc/ansible/ansible.cfg (system-wide)

A well-configured ansible.cfg for avoiding UNREACHABLE errors:

[defaults]
inventory = ./inventory.ini
remote_user = deploy
timeout = 30
host_key_checking = False
interpreter_python = auto

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no
retries = 3

The retries = 3 setting tells Ansible to retry failed SSH connections up to three times, which helps with transient network issues.

Check which configuration file Ansible is actually using:

ansible --version

This prints the config file path along with the Ansible version. If it is loading the wrong config file, set the ANSIBLE_CONFIG environment variable to point to the correct one:

export ANSIBLE_CONFIG=./ansible.cfg

Fix 9: Handle Windows Hosts with WinRM

Ansible’s UNREACHABLE error on Windows hosts usually means you are trying to connect via SSH to a machine that expects WinRM, or vice versa.

For Windows targets, set the connection type in your inventory:

[windows]
win-server1 ansible_host=192.168.1.200

[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_user=Administrator
ansible_password=YourPassword
ansible_port=5986
ansible_winrm_server_cert_validation=ignore

Make sure WinRM is enabled on the Windows host. Run this in an elevated PowerShell:

winrm quickconfig
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'

For production environments, use HTTPS (port 5986) with a valid certificate instead of unencrypted HTTP. Note that connectivity to Windows hosts is tested with the win_ping module rather than ping, which targets POSIX hosts and requires Python. The pywinrm library must be installed on the Ansible control node:

pip install pywinrm

If you prefer SSH for Windows (supported on Windows 10+ and Server 2019+), install OpenSSH Server on the Windows host and set:

[windows_ssh]
win-server2 ansible_host=192.168.1.201

[windows_ssh:vars]
ansible_connection=ssh
ansible_shell_type=powershell
ansible_user=Administrator

Fix 10: Debug with -vvvv

When none of the above fixes are obvious, run Ansible with maximum verbosity to see exactly what is happening at the SSH level:

ansible all -i inventory.ini -m ping -vvvv

Or for a playbook:

ansible-playbook playbook.yml -i inventory.ini -vvvv

The -vvvv flag (four v’s) shows the full SSH command Ansible constructs, including all arguments, the key files it tries, and the raw SSH output. This is the single most useful debugging step.

Look for these patterns in the verbose output:

  • Connection timed out — Network or firewall problem. The host is unreachable at the TCP level.
  • Connection refused — SSH daemon is not running or is on a different port.
  • Permission denied (publickey) — Authentication failure. Wrong key, wrong user, or the public key is not on the remote host.
  • Host key verification failed — The host fingerprint is unknown or changed. See Fix 4.
  • /usr/bin/python: not found — Python is missing on the remote host. See Fix 5.
  • No such file or directory — The SSH key path in your inventory does not exist.
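
When triaging many hosts, a small shell helper (not part of Ansible — just a sketch) can map the "msg" field from an UNREACHABLE result to its likely cause, using the same patterns as the list above:

```shell
# Tiny triage helper: classify an UNREACHABLE "msg" string by cause.
classify() {
  case "$1" in
    *"timed out"*)            echo "network or firewall problem" ;;
    *"Connection refused"*)   echo "sshd not running / wrong port" ;;
    *"Permission denied"*)    echo "authentication failure" ;;
    *"verification failed"*)  echo "host key problem" ;;
    *python*"not found"*)     echo "missing Python interpreter" ;;
    *)                        echo "unknown - read the full -vvvv output" ;;
  esac
}

classify "Failed to connect to the host via ssh: Permission denied (publickey,password)."
# → authentication failure
```

Pipe each failing host's msg through it to group hosts by failure class before diving into individual debugging.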

You can also test connectivity for a single host to narrow down the problem:

ansible webserver1 -i inventory.ini -m ping -vvvv

If some hosts work and others do not, compare the verbose output between a working and a failing host to spot the difference.

Still Not Working?

If you have worked through all the fixes above and Ansible still reports UNREACHABLE, try these additional checks.

Run the raw module

The raw module does not require Python on the remote host and bypasses most of Ansible’s module machinery. If this works but ping does not, the problem is Python-related, not SSH-related:

ansible webserver1 -i inventory.ini -m raw -a "echo hello"

Check for SSH agent forwarding issues

If your setup relies on SSH agent forwarding (for example, connecting through a bastion host), make sure the agent is running and the key is loaded:

ssh-add -l

If no keys are listed, add your key:

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/deploy_key

Verify DNS resolution

If your inventory uses hostnames instead of IP addresses, DNS resolution failures cause UNREACHABLE errors. Test resolution from the control node:

nslookup webserver1.example.com

If DNS is unreliable, use IP addresses directly in your inventory, or add entries to /etc/hosts on the control node.

Check for connection limits

If you are managing a large number of hosts and only some fail intermittently, you may be hitting SSH connection limits. Reduce the number of parallel connections with forks:

ansible-playbook playbook.yml -i inventory.ini -f 5

The default is 5, but if you increased it in ansible.cfg, the remote server’s MaxStartups setting in sshd_config might be dropping excess connections.
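
To make the lower parallelism the default instead of passing -f on every run, set forks in ansible.cfg:

```ini
[defaults]
forks = 5
```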

Docker container targets

If you are targeting Docker containers, the Docker Compose errors guide covers common networking pitfalls. For containers, consider using the docker connection plugin instead of SSH:

[containers]
my_container ansible_connection=docker

Inspect the full SSH command

Extract the exact SSH command Ansible is running from the -vvvv output and run it manually:

# Copy the SSH command from Ansible's verbose output, for example:
ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no \
  -o Port=22 -o User=deploy -o ConnectTimeout=10 \
  -i /home/you/.ssh/deploy_key 192.168.1.100 '/bin/sh -c "echo ~deploy"'

Running this manually tells you whether the issue is with SSH itself or with Ansible’s handling of the SSH output. If the manual command works, the problem is in Ansible’s configuration parsing or module execution.

Check the environment variable guide if you suspect environment variables like ANSIBLE_CONFIG, ANSIBLE_REMOTE_USER, or ANSIBLE_PRIVATE_KEY_FILE are overriding your intended settings. Environment variables take precedence over ansible.cfg and can silently change Ansible’s behavior.
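
A quick way to spot such overrides from the control node's shell:

```shell
# List any ANSIBLE_* environment variables currently set; these take
# precedence over ansible.cfg and can silently change behavior.
env | grep '^ANSIBLE_' || echo 'no ANSIBLE_* overrides set'
```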


Related: Fix: SSH Connection Timed Out or Connection Refused

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
