Fix: Nginx 502 Bad Gateway

The Error

You open your site or hit an API endpoint behind Nginx and get:

502 Bad Gateway

In your Nginx error log (/var/log/nginx/error.log), you see one of these:

connect() failed (111: Connection refused) while connecting to upstream
upstream prematurely closed connection while reading response header from upstream
connect() to unix:/run/php/php-fpm.sock failed (2: No such file or directory)
upstream timed out (110: Connection timed out) while reading response header from upstream

All of these mean the same thing: Nginx received a request, tried to forward it to a backend (upstream) server, and the backend either wasn’t reachable, didn’t respond, or returned an invalid response.

Why This Happens

Nginx is a reverse proxy. It doesn’t serve your application directly — it forwards requests to a backend process (Node.js, Python, PHP-FPM, a Docker container, etc.) and relays the response back to the client.

A 502 means the communication between Nginx and that backend broke down. Common causes:

  • The upstream server isn’t running. Your app process crashed, was never started, or the service manager failed to keep it alive.
  • Wrong proxy_pass address or port. Nginx is sending requests to a host/port where nothing is listening.
  • PHP-FPM socket misconfiguration. The socket path in the Nginx config doesn’t match the actual PHP-FPM socket.
  • The upstream is too slow. Your backend takes longer to respond than Nginx is willing to wait.
  • Response too large for Nginx buffers. The backend response headers or body exceed Nginx’s default buffer sizes.
  • SELinux is blocking the connection. On RHEL/CentOS/Fedora, SELinux prevents Nginx from making network connections by default.
  • Unix socket permissions. Nginx can’t read/write the socket file because of user/group mismatches.
  • Docker networking issues. Nginx can’t reach a backend container because of network isolation.
  • SSL/TLS mismatch with the backend. Nginx is connecting over plain HTTP but the backend expects HTTPS, or vice versa.

Fix 1: Make Sure the Upstream Server Is Running

The most common cause. Your backend process isn’t running.

Check if your app is listening:

# Check if anything is listening on the expected port (e.g., 3000)
ss -tlnp | grep 3000

If nothing shows up, your app isn’t running. Start it.

For a Node.js/Python/Go app managed by systemd:

sudo systemctl status myapp
sudo systemctl start myapp

For PHP-FPM:

sudo systemctl status php8.3-fpm
sudo systemctl start php8.3-fpm

Replace 8.3 with your PHP version. On Debian/Ubuntu, check installed versions:

ls /etc/php/

For a process managed by PM2:

pm2 list
pm2 start ecosystem.config.js

For a Gunicorn/uWSGI app:

sudo systemctl status gunicorn
sudo systemctl start gunicorn

After starting the backend, reload Nginx and test:

sudo nginx -t && sudo systemctl reload nginx
curl -I http://localhost

Related: If the port your app needs is occupied by another process, see Fix: Port 3000 Already in Use.
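
If the backend keeps dying, fix the supervision rather than restarting it by hand each time: a systemd unit with Restart=always brings it back automatically. A minimal sketch — the unit name myapp.service, the user, and the paths are placeholders for your own app:

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=My app behind Nginx
After=network.target

[Service]
User=appuser
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/node server.js
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now myapp so it also comes back after a reboot.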

Fix 2: Fix the proxy_pass Address and Port

Nginx is sending requests to the wrong place.

Open your Nginx site config:

# Check which config file is active
sudo nginx -T 2>&1 | grep "server_name\|proxy_pass\|upstream"

A typical reverse proxy block looks like this:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Common mistakes:

Wrong port. Your app runs on port 8080 but proxy_pass says 3000. Verify:

ss -tlnp | grep LISTEN

Wrong host. Using localhost vs 127.0.0.1 can matter. If your app binds to 127.0.0.1 only, use 127.0.0.1 in proxy_pass. If it binds to 0.0.0.0, either works.

Trailing slash mismatch. These behave differently:

# Passes /api/users to backend as /api/users
location /api/ {
    proxy_pass http://127.0.0.1:3000;
}

# Passes /api/users to backend as /users (strips /api/)
location /api/ {
    proxy_pass http://127.0.0.1:3000/;
}

A trailing slash on proxy_pass rewrites the URI. Getting this wrong can send requests to paths your backend doesn’t handle, causing errors that Nginx interprets as a 502.
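
The effect of that one character can be illustrated with plain shell string operations (this mimics the URI mapping; it is not Nginx itself):

```shell
uri="/api/users"

# proxy_pass http://127.0.0.1:3000;   (no URI part) -> request URI passed through
echo "$uri"               # prints /api/users

# proxy_pass http://127.0.0.1:3000/;  (trailing slash) -> the matched /api/
# prefix is replaced by /, so the backend sees a different path
echo "/${uri#/api/}"      # prints /users
```

The same /api/users request reaches the backend as two different paths depending on that trailing slash.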

After fixing, test and reload:

sudo nginx -t
sudo systemctl reload nginx

Fix 3: Fix PHP-FPM Socket Configuration

If you’re running PHP with Nginx, the socket path must match exactly between your Nginx config and PHP-FPM pool config.

Check what PHP-FPM is using:

# Find the pool config
sudo grep -r "^listen " /etc/php/

You’ll see something like:

/etc/php/8.3/fpm/pool.d/www.conf:listen = /run/php/php8.3-fpm.sock

Check what Nginx expects:

sudo grep -r "fastcgi_pass" /etc/nginx/

You’ll see:

fastcgi_pass unix:/run/php/php8.3-fpm.sock;

These paths must be identical. If Nginx says php8.2-fpm.sock but PHP-FPM creates php8.3-fpm.sock, you get a 502.
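
A quick way to catch the mismatch is to test whether the path Nginx points at actually exists as a socket. Set SOCK to whatever your fastcgi_pass line says:

```shell
# -S tests specifically for a socket file, not just any file
SOCK=/run/php/php8.3-fpm.sock
if [ -S "$SOCK" ]; then
    echo "socket exists: $SOCK"
else
    echo "missing: $SOCK"
fi
```

If it prints "missing", either PHP-FPM isn't running or it's creating its socket at a different path.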

Fix the Nginx config to match:

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Or use TCP instead of a socket:

In PHP-FPM pool config (/etc/php/8.3/fpm/pool.d/www.conf):

listen = 127.0.0.1:9000

In Nginx:

fastcgi_pass 127.0.0.1:9000;

Restart both:

sudo systemctl restart php8.3-fpm
sudo systemctl reload nginx

Fix 4: Fix Unix Socket Permissions

Nginx might not have permission to read the PHP-FPM (or other) socket file.

Check the socket ownership:

ls -la /run/php/php8.3-fpm.sock

You’ll see something like:

srw-rw---- 1 www-data www-data 0 Mar 22 10:00 /run/php/php8.3-fpm.sock

Nginx must run as a user that can access this socket. Check the Nginx worker user:

grep "^user" /etc/nginx/nginx.conf

If Nginx runs as nginx but the socket is owned by www-data, fix it.

Option 1: Change the PHP-FPM socket ownership (in /etc/php/8.3/fpm/pool.d/www.conf):

listen.owner = nginx
listen.group = nginx
listen.mode = 0660

Option 2: Change the Nginx worker user (in /etc/nginx/nginx.conf):

user www-data;

Option 3: Use a shared group. Add the nginx user to the www-data group:

sudo usermod -aG www-data nginx

Restart both services after any change:

sudo systemctl restart php8.3-fpm
sudo systemctl restart nginx

Fix 5: Increase Upstream Timeout Settings

Your backend is taking too long to respond. Nginx's proxy timeouts (proxy_read_timeout and friends) default to 60 seconds. If your app needs more time (heavy database queries, file processing, long API calls), increase these:

location / {
    proxy_pass http://127.0.0.1:3000;

    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
}

For FastCGI (PHP-FPM):

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;

    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Don’t just crank these up blindly. If your backend consistently takes 5 minutes to respond, the real fix is to optimize the backend or move long-running work to a background job queue. Timeouts are a safety net, not a solution.

Also check your backend’s own timeout. PHP has max_execution_time in php.ini. Gunicorn has --timeout. If the backend kills the request before Nginx’s timeout, you still get a 502.

Fix 6: Increase Buffer Sizes

If your backend sends large response headers (big cookies, long JWT tokens, many custom headers), Nginx’s default buffers might be too small. The error log will show:

upstream sent too big header while reading response header from upstream

Increase the buffer settings:

location / {
    proxy_pass http://127.0.0.1:3000;

    proxy_buffer_size 16k;
    proxy_buffers 4 16k;
    proxy_busy_buffers_size 32k;
}

For FastCGI:

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;

    fastcgi_buffer_size 16k;
    fastcgi_buffers 4 16k;
    fastcgi_busy_buffers_size 32k;

    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Start with 16k. If that doesn’t fix it, try 32k. If you need more than 32k for headers, something in your application is sending unusually large headers that should be investigated.
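
For context, proxy_buffer_size defaults to one memory page (4k or 8k depending on platform), so a single oversized cookie can blow past it. A self-contained simulation of how big one header line gets:

```shell
# Build a 5000-character cookie value and measure the resulting header line.
# 20 bytes of "Set-Cookie: session=" + 5000 bytes of value = 5020 bytes,
# well over a 4096-byte default buffer.
header=$(printf 'Set-Cookie: session=%s' "$(head -c 5000 /dev/zero | tr '\0' 'x')")
echo "${#header} bytes"   # prints "5020 bytes"
```

Large JWTs stored in cookies are a common real-world source of headers this size.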

Fix 7: Fix SELinux (RHEL/CentOS/Fedora)

On RHEL-based distros, SELinux blocks Nginx from making network connections by default. This is the most common cause of 502 errors on fresh CentOS/RHEL/Fedora servers.

Check if SELinux is enforcing:

getenforce

If it returns Enforcing, check the audit log for denials:

sudo grep nginx /var/log/audit/audit.log | grep denied

Allow Nginx to make network connections:

sudo setsebool -P httpd_can_network_connect 1

This allows Nginx to connect to any network port. If you want to be more restrictive (allow only specific ports), use:

# Allow connecting to a specific port (use -m instead of -a if the port type already exists)
sudo semanage port -a -t http_port_t -p tcp 3000

If Nginx relays requests to web ports on other hosts, you may also need:

sudo setsebool -P httpd_can_network_relay 1

Don’t disable SELinux entirely. Setting SELINUX=disabled in /etc/selinux/config is the wrong approach. Use the targeted booleans above instead.

Fix 8: Fix Docker Networking with Nginx Reverse Proxy

When Nginx runs on the host and proxies to a Docker container (or both run in containers), networking must be configured correctly.

Nginx on the host, app in a container

Make sure the container exposes its port to the host:

docker run -d -p 3000:3000 myapp

Then proxy_pass http://127.0.0.1:3000; works.
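
Before touching the Nginx config, confirm the host can actually reach the published port. A simple probe, using the example port 3000 from above:

```shell
# The same connection Nginx will attempt; --max-time keeps it from hanging
if curl -s -o /dev/null --max-time 2 http://127.0.0.1:3000/; then
    echo "reachable"
else
    echo "not reachable"
fi
```

If this prints "not reachable", the problem is the -p mapping or the container itself, not Nginx.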

Both Nginx and the app in Docker (docker-compose)

Use the service name as the hostname:

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - app

  app:
    build: .
    expose:
      - "3000"

In your nginx.conf:

server {
    listen 80;

    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The hostname is app (the service name), not localhost. Both containers must be on the same Docker network, which docker-compose handles automatically.

Common Docker 502 mistakes:

  • Using localhost or 127.0.0.1 instead of the service name.
  • Using ports instead of expose for inter-container communication (both work, but expose is more explicit).
  • The app container binding to 127.0.0.1 instead of 0.0.0.0. Inside a container, your app must listen on 0.0.0.0 to be reachable from other containers.
  • Missing depends_on. Nginx starts before the app is ready.
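
The last bullet deserves a concrete fix: depends_on alone only orders container startup, it doesn't wait for the app to be ready. With recent Docker Compose, a healthcheck plus a condition does. A sketch — it assumes the app image has wget and serves a /health endpoint, both hypothetical:

```yaml
services:
  app:
    build: .
    expose:
      - "3000"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://127.0.0.1:3000/health"]
      interval: 5s
      timeout: 3s
      retries: 5

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      app:
        condition: service_healthy
```

Nginx now starts only after the app has answered its healthcheck, so the first requests don't 502.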

Related: For Docker socket permission issues, see Fix: Docker Permission Denied.

Fix 9: Fix SSL/TLS Backend Issues

If your backend speaks HTTPS but Nginx connects to it over plain HTTP (or vice versa), you get a 502.

Backend expects HTTPS:

location / {
    proxy_pass https://127.0.0.1:8443;
}

If the backend uses a self-signed certificate, the quickest workaround is to skip certificate verification (see Fix: SSL certificate problem: unable to get local issuer certificate for more on certificate trust issues):

location / {
    proxy_pass https://127.0.0.1:8443;
    proxy_ssl_verify off;
}

Or provide the CA certificate:

location / {
    proxy_pass https://127.0.0.1:8443;
    proxy_ssl_trusted_certificate /etc/nginx/certs/backend-ca.crt;
    proxy_ssl_verify on;
}

Protocol mismatch. If your backend only speaks HTTP but you used https:// in proxy_pass, Nginx tries to do a TLS handshake and the backend sends back garbage. Result: 502.

Fix 10: Use Upstream Blocks with Health Checks

For production setups with multiple backends, use an upstream block. This gives you load balancing and automatic failover:

upstream backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;

    # Mark a server as down after 3 failures, check again after 30s
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 3;
    }
}

proxy_next_upstream tells Nginx to try the next backend if the current one returns a 502 or times out. This prevents a single crashed backend from taking down your entire site.

If you’re using Nginx Plus (commercial), you get active health checks:

upstream backend {
    zone backend 64k;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

# Active health check (Nginx Plus only)
match server_ok {
    status 200;
}

server {
    location / {
        proxy_pass http://backend;
        health_check match=server_ok interval=5s;
    }
}

Still Not Working?

Check the Nginx error log first

Every 502 fix starts here:

sudo tail -100 /var/log/nginx/error.log

Or, if a server block defines its own per-site error_log, find that path and tail it instead:

sudo grep -r "error_log" /etc/nginx/

The error log tells you exactly what went wrong. Match the error message to the fix above.
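
That matching can be sketched as a small case statement; paste in whatever line your log shows (the sample here is the first error from the top of this article):

```shell
# Triage: map the logged 502 message to the relevant fix above
line='connect() failed (111: Connection refused) while connecting to upstream'

case "$line" in
  *"Connection refused"*)  echo "backend down or wrong address: Fix 1 / Fix 2" ;;
  *"No such file"*)        echo "socket path mismatch: Fix 3" ;;
  *"Permission denied"*)   echo "socket permissions or SELinux: Fix 4 / Fix 7" ;;
  *"timed out"*)           echo "slow upstream: Fix 5" ;;
  *"too big header"*)      echo "buffer sizes: Fix 6" ;;
esac
```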

Your app crashes on certain requests

The 502 only happens on specific URLs or payloads. Your backend process receives the request, crashes, and Nginx gets a broken connection.

Check your app’s own logs:

# PM2
pm2 logs

# systemd service
sudo journalctl -u myapp --no-pager -n 100

# Docker container
docker logs myapp --tail 100

Common crash triggers: out-of-memory kills, unhandled exceptions, segfaults in native modules.

Check for OOM kills:

sudo dmesg | grep -i "oom\|killed process"

DNS resolution failure in proxy_pass

If proxy_pass uses a hostname (not an IP), Nginx resolves it at startup and caches the result. If the upstream IP changes (common with Docker, Kubernetes, or cloud services), Nginx keeps connecting to the old IP.

Fix this by using a variable and a resolver:

resolver 127.0.0.53 valid=30s;

server {
    location / {
        set $upstream http://my-backend-service:3000;
        proxy_pass $upstream;
    }
}

This makes Nginx re-resolve the hostname at request time instead of pinning the IP it resolved at startup (the resolver caches answers for 30 seconds here). Use your actual DNS server IP. For Docker's embedded DNS, use 127.0.0.11.

The backend is overloaded

Your backend can’t handle the request volume. Nginx sends requests faster than the backend can process them, connections pile up, and eventually the backend stops accepting new ones.

Signs: intermittent 502s that get worse under load. The backend’s CPU or memory is maxed out.

Solutions:

  • Scale horizontally (add more backend instances behind an upstream block).
  • Add rate limiting in Nginx:
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://backend;
    }
}
  • Enable caching for responses that don’t change often:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m max_size=1g;

server {
    location / {
        proxy_cache cache;
        proxy_cache_valid 200 10m;
        proxy_pass http://backend;
    }
}

SSH tunnel or bastion host breaks the connection

If Nginx proxies to a backend through an SSH tunnel, the tunnel might drop idle connections silently. Keepalive connections reduce this:

upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

Related: For SSH connection issues, see Fix: SSH Connection Timed Out.

Nginx worker process running out of file descriptors

Under heavy load, Nginx may exhaust its file descriptor limit. Check:

# Current limit
grep "worker_rlimit_nofile" /etc/nginx/nginx.conf

# How many file descriptors the Nginx master process has open
sudo ls /proc/$(cat /run/nginx.pid)/fd | wc -l

Increase it in /etc/nginx/nginx.conf:

worker_rlimit_nofile 65535;

events {
    worker_connections 16384;
}

Also increase the system limit. Note that /etc/security/limits.conf applies to login sessions; on systemd-based distros the Nginx service itself is governed by worker_rlimit_nofile above or its systemd unit:

# /etc/security/limits.conf
nginx soft nofile 65535
nginx hard nofile 65535
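
For a systemd-managed Nginx, the equivalent is a unit drop-in (standard systemd drop-in path shown):

```ini
# /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=65535
```

Apply it with sudo systemctl daemon-reload && sudo systemctl restart nginx.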

Related: If your PostgreSQL backend is refusing connections behind Nginx, see Fix: PostgreSQL Connection Refused. For Docker permission issues, see Fix: Docker Permission Denied. For SSH tunneling problems to your upstream server, see Fix: SSH Connection Timed Out.

Related Articles