Fix: Nginx 413 Request Entity Too Large

FixDevs

Quick Answer

Fix the Nginx 413 Request Entity Too Large error when uploading files by adjusting client_max_body_size, PHP limits, the Node.js body parser, proxy buffering, Docker and Kubernetes ingress settings, and more.

The Error

You try to upload a file through your web application and Nginx responds with:

413 Request Entity Too Large

Or in some clients, you see:

<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>

In your Nginx error log (/var/log/nginx/error.log), the entry looks like this:

client intended to send too large body: 15728640 bytes, client: 192.168.1.10, server: example.com, request: "POST /upload HTTP/1.1"

The request never reaches your backend. Nginx rejects it at the proxy layer before your application code even runs.

Why This Happens

Nginx enforces a maximum allowed size for the client request body. The directive that controls this is client_max_body_size, and its default value is 1 MB. Any request body larger than that limit gets rejected with a 413 status code.

This is a deliberate security measure. Without a body size limit, a single client could send a multi-gigabyte payload and exhaust your server’s memory or disk. But 1 MB is too small for most real-world applications — file uploads, image submissions, video processing, API payloads with base64-encoded data, and multipart form submissions all routinely exceed that default.
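
To see how quickly the 1 MB default is exceeded, consider a JSON API payload carrying a base64-encoded file: base64 inflates data by roughly a third, so even an 800 KB attachment overshoots the limit before any JSON overhead is added. A quick illustration in Python:

```python
import base64

ONE_MB = 1024 * 1024

# An 800 KB binary attachment, comfortably under Nginx's 1 MB default...
attachment = b"\x00" * (800 * 1024)

# ...but base64 encoding inflates it by ~33%, pushing the body over the limit
encoded = base64.b64encode(attachment)

print(len(attachment))        # 819200 bytes
print(len(encoded))           # 1092268 bytes
print(len(encoded) > ONE_MB)  # True -- Nginx would reject this with a 413
```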

The tricky part is that multiple layers can enforce their own size limits. Even after you fix the Nginx setting, the request might still fail because of:

  • PHP limits (upload_max_filesize, post_max_size)
  • Node.js/Express body parser defaults (100 KB for JSON, 100 KB for URL-encoded)
  • Reverse proxy buffer sizes that are too small for large uploads
  • Kubernetes ingress annotations that override Nginx settings
  • Django or Flask application-level upload limits
  • CDN or load balancer limits sitting in front of Nginx

You need to fix every layer in the chain, not just one. Here is how to do that.

Fix 1: Set client_max_body_size in Nginx

This is the fix for 90% of cases. Open your Nginx configuration file:

sudo nano /etc/nginx/nginx.conf

Add or update the client_max_body_size directive. You can place it in three different contexts, depending on how broadly you want it to apply.

In the http block (applies to all sites on this server):

http {
    client_max_body_size 100M;

    # ... rest of config
}

In a server block (applies to one virtual host):

server {
    listen 80;
    server_name example.com;
    client_max_body_size 100M;

    # ... rest of config
}

In a location block (applies to one specific route):

location /upload {
    client_max_body_size 500M;

    proxy_pass http://backend;
}

The most specific block wins. If you set 100M at the http level and 500M in a /upload location, uploads to /upload allow 500 MB while everything else allows 100 MB.

After editing, test and reload:

sudo nginx -t
sudo systemctl reload nginx

Note: Setting client_max_body_size to 0 disables the check entirely. Do not do this in production. Always set an explicit limit that matches your application’s actual requirements.

If you are working with Nginx virtual host files instead of the main nginx.conf, the config file is usually at /etc/nginx/sites-available/your-site or /etc/nginx/conf.d/your-site.conf. The directive works the same way in those files.

Pro Tip: If your Nginx setup uses multiple included config files, run nginx -T (capital T) to dump the full, merged configuration. This shows you every directive in effect and exactly which file it comes from. It is the fastest way to confirm your client_max_body_size change actually took effect and isn’t being overridden by another include.

Fix 2: Fix PHP Upload Limits

If your backend is PHP (WordPress, Laravel, Drupal, or plain PHP), fixing Nginx alone is not enough. PHP has its own upload size limits.

Find your active php.ini:

php --ini | grep "Loaded Configuration"

Or for PHP-FPM specifically:

php-fpm -i | grep "upload_max_filesize"

Open the file and update these two directives:

; Maximum size of an uploaded file
upload_max_filesize = 100M

; Maximum size of POST data (must be >= upload_max_filesize)
post_max_size = 120M

post_max_size must be larger than upload_max_filesize because the POST body includes not just the file but also the other form fields and the multipart boundaries. A good rule of thumb is to set post_max_size about 20% higher.
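
The 20% rule is easy to compute. Here is a small helper (hypothetical, shown only for illustration) that derives a post_max_size value, in megabytes, from an upload_max_filesize value:

```python
import math

def recommended_post_max_size(upload_max_mb: int, overhead: float = 0.20) -> int:
    """Return a post_max_size (in MB) roughly 20% above upload_max_filesize,
    rounded up to a whole megabyte."""
    return math.ceil(upload_max_mb * (1 + overhead))

print(recommended_post_max_size(100))  # 120 -> post_max_size = 120M
print(recommended_post_max_size(8))    # 10  -> post_max_size = 10M
```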

Also check memory_limit — PHP needs enough memory to handle the upload:

memory_limit = 256M

Restart PHP-FPM after making changes:

sudo systemctl restart php8.2-fpm

Replace php8.2-fpm with your actual PHP version. You can check which PHP-FPM service is running with:

systemctl list-units | grep php

If you are on shared hosting where you cannot edit php.ini, you can try setting these in a .htaccess file (if Apache is behind Nginx) or in a .user.ini file in your project root:

upload_max_filesize = 100M
post_max_size = 120M

For WordPress specifically, skip the commonly suggested ini_set() snippets in wp-config.php: upload_max_filesize and post_max_size are PHP_INI_PERDIR directives and cannot be changed at runtime, so those calls silently do nothing. Use php.ini, a .user.ini file, or .htaccess as shown above.

If you are troubleshooting a 502 Bad Gateway error instead of a 413, your PHP-FPM process may be crashing during the upload rather than rejecting it cleanly.

Fix 3: Fix Node.js/Express Body Parser Limits

If your backend is Node.js with Express, the default body size limit in express.json() and express.urlencoded() is 100 KB — far smaller than most file uploads.

Update your Express middleware:

const express = require('express');
const app = express();

// Increase JSON body limit
app.use(express.json({ limit: '100mb' }));

// Increase URL-encoded body limit
app.use(express.urlencoded({ limit: '100mb', extended: true }));

If you are using multer for file uploads (the more common approach for multipart form data), set the file size limit in the multer configuration:

const multer = require('multer');

const upload = multer({
  dest: 'uploads/',
  limits: {
    fileSize: 100 * 1024 * 1024, // 100 MB in bytes
  },
});

app.post('/upload', upload.single('file'), (req, res) => {
  res.json({ message: 'Upload complete' });
});

If you are using the older body-parser package separately:

const bodyParser = require('body-parser');

app.use(bodyParser.json({ limit: '100mb' }));
app.use(bodyParser.urlencoded({ limit: '100mb', extended: true }));

Restart your Node.js process after making the change. If you are running behind a process manager like PM2:

pm2 restart your-app

Fix 4: Fix Proxy Buffer Settings for Reverse Proxy Setups

When Nginx acts as a reverse proxy (which is the case for most Node.js, Python, and Go backends), large uploads can fail even after setting client_max_body_size if the proxy buffer settings are too small.

Add these directives to your server or location block:

location /upload {
    client_max_body_size 100M;

    # Disable request buffering -- stream directly to backend
    proxy_request_buffering off;

    # Increase proxy buffer sizes for the response
    proxy_buffers 16 32k;
    proxy_buffer_size 64k;

    # Increase timeout for large uploads
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
    proxy_connect_timeout 60s;

    proxy_pass http://backend;
}

The key directive here is proxy_request_buffering off. By default, Nginx buffers the entire client request body (in memory up to client_body_buffer_size, then in a temporary file on disk) before forwarding it to the backend. For large uploads, this causes:

  • Extra disk I/O
  • Increased memory usage
  • Slower upload times
  • Potential timeout issues if the upload is large and slow

With proxy_request_buffering off, Nginx streams the request body directly to the backend as it arrives. This is almost always what you want for file upload endpoints.

If you are seeing 504 Gateway Timeout errors on large uploads instead of 413, the upload is getting past the size check but the backend is taking too long to process it. Increase the proxy_read_timeout value.

Similarly, if your backend is timing out during large uploads, check the upstream timed out guide for more timeout-related fixes.

Test and reload:

sudo nginx -t
sudo systemctl reload nginx

Fix 5: Fix Docker and Kubernetes Ingress Annotations

If you are running Nginx inside Docker or using the Nginx Ingress Controller in Kubernetes, the configuration is different from editing nginx.conf directly.

Docker with Nginx

If your nginx.conf is mounted as a volume in Docker, edit the file and restart the container:

docker restart nginx-container

Or if you are using a custom Docker image, update the config and rebuild:

FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf

Kubernetes Nginx Ingress Controller

For the Kubernetes Nginx Ingress Controller, you set body size limits using annotations on your Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80

The proxy-body-size annotation is the equivalent of client_max_body_size in the Nginx Ingress Controller. The default is 1 MB, just like regular Nginx.

To set it globally for all ingresses, update the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "100m"

Apply and verify:

kubectl apply -f ingress.yaml
kubectl describe ingress my-app-ingress

Common Mistake: Setting client_max_body_size inside a custom Nginx config snippet via the nginx.ingress.kubernetes.io/configuration-snippet annotation but forgetting that the proxy-body-size annotation also needs to be set. The ingress controller applies its own client_max_body_size directive before your snippet, so the request gets rejected before your custom config runs.

Fix 6: Fix Django and Flask Upload Size Limits

Django

Django has its own upload size limit controlled by DATA_UPLOAD_MAX_MEMORY_SIZE and FILE_UPLOAD_MAX_MEMORY_SIZE in settings.py.

# settings.py

# Maximum size of request body (default: 2.5 MB)
DATA_UPLOAD_MAX_MEMORY_SIZE = 104857600  # 100 MB

# Maximum size of file upload (default: 2.5 MB)
FILE_UPLOAD_MAX_MEMORY_SIZE = 104857600  # 100 MB
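
Those raw byte counts are easy to mistype. A tiny helper (hypothetical, shown only to keep the settings readable) converts megabytes to bytes:

```python
def mb(n: int) -> int:
    """Convert megabytes to bytes for Django's upload size settings."""
    return n * 1024 * 1024

# Equivalent to the settings above, but harder to get wrong than 104857600:
# DATA_UPLOAD_MAX_MEMORY_SIZE = mb(100)
# FILE_UPLOAD_MAX_MEMORY_SIZE = mb(100)
print(mb(100))  # 104857600
```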

For uploads larger than FILE_UPLOAD_MAX_MEMORY_SIZE, Django automatically writes the file to a temporary directory instead of holding it in memory. This is controlled by FILE_UPLOAD_TEMP_DIR:

FILE_UPLOAD_TEMP_DIR = '/tmp/django-uploads'

Make sure that directory exists and has the correct permissions:

sudo mkdir -p /tmp/django-uploads
sudo chown www-data:www-data /tmp/django-uploads

Flask

Flask limits request size via MAX_CONTENT_LENGTH:

from flask import Flask

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 100 * 1024 * 1024  # 100 MB

Without this setting, Flask accepts requests of any size (no default limit), but Nginx still blocks them. With it set, Flask returns a 413 of its own if the limit is exceeded.

If you are using Gunicorn as your WSGI server in front of Django or Flask, Gunicorn does not enforce a body size limit by default, so you typically only need to worry about the framework and Nginx settings. However, if you are experiencing issues, you can set Gunicorn’s --limit-request-line and --limit-request-field_size options for header-related limits.

If your Python application is returning a 403 Forbidden instead of a 413, the issue is likely a permissions problem on the upload directory rather than a size limit.

Fix 7: Use Client-Side Chunked Uploads

For very large files (hundreds of megabytes or more), increasing size limits everywhere is not the best approach. Network interruptions, timeouts, and memory pressure all become problems. A better architecture is chunked uploading, where the client splits the file into smaller pieces and uploads them individually.

Here is a basic implementation using the browser’s File API:

async function uploadInChunks(file, chunkSize = 5 * 1024 * 1024) {
  const totalChunks = Math.ceil(file.size / chunkSize);

  for (let i = 0; i < totalChunks; i++) {
    const start = i * chunkSize;
    const end = Math.min(start + chunkSize, file.size);
    const chunk = file.slice(start, end);

    const formData = new FormData();
    formData.append('file', chunk);
    formData.append('chunkIndex', i);
    formData.append('totalChunks', totalChunks);
    formData.append('fileName', file.name);

    const response = await fetch('/upload/chunk', {
      method: 'POST',
      body: formData,
    });

    if (!response.ok) {
      throw new Error(`Chunk ${i} failed: ${response.statusText}`);
    }
  }
}

On the server side (Node.js example), you receive each chunk and assemble the file:

const fs = require('fs');
const path = require('path');

app.post('/upload/chunk', upload.single('file'), (req, res) => {
  // Note: form fields arrive as strings; parse them explicitly.
  // In production, also sanitize fileName before using it in a path.
  const { chunkIndex, totalChunks, fileName } = req.body;
  const index = parseInt(chunkIndex, 10);
  const total = parseInt(totalChunks, 10);
  const chunkPath = path.join('uploads', `${fileName}.part${index}`);

  // Move the chunk from multer's temp location to its numbered part file
  fs.renameSync(req.file.path, chunkPath);

  if (index === total - 1) {
    // All chunks received -- assemble the file in order
    const finalPath = path.join('uploads', fileName);
    const writeStream = fs.createWriteStream(finalPath);

    for (let i = 0; i < total; i++) {
      const partPath = path.join('uploads', `${fileName}.part${i}`);
      writeStream.write(fs.readFileSync(partPath));
      fs.unlinkSync(partPath);
    }

    writeStream.end();
  }

  res.json({ received: index });
});

With chunked uploads, each individual request is small (5 MB in this example), so you only need client_max_body_size 10M in Nginx — enough to cover one chunk plus overhead. This sidesteps the 413 error entirely for arbitrarily large files.
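
The arithmetic behind that claim is straightforward. This sketch computes how many chunks a given file needs and how large the final, partial chunk is:

```python
def chunk_plan(file_size: int, chunk_size: int = 5 * 1024 * 1024):
    """Return (number_of_chunks, size_of_last_chunk) for a chunked upload."""
    chunks = (file_size + chunk_size - 1) // chunk_size  # integer ceiling division
    last = file_size - (chunks - 1) * chunk_size
    return chunks, last

# A 2 GB file split into 5 MB chunks: every individual request stays tiny
chunks, last = chunk_plan(2 * 1024**3)
print(chunks)  # 410 requests
print(last)    # 3145728 -- the final chunk is only 3 MB
```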

Libraries like tus-js-client, Uppy, and Resumable.js provide production-ready chunked upload implementations with retry logic and progress tracking built in.

Fix 8: Fix SSL Buffer and Timeout Settings

When Nginx terminates SSL, there are additional buffer settings that can interfere with large uploads. The SSL layer adds overhead to each request, and if the buffers are not sized correctly, large uploads can fail.

Check your SSL configuration:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    # Increase buffer for SSL connections
    ssl_buffer_size 16k;

    # Ensure body size is set in the SSL server block too
    client_max_body_size 100M;

    # Increase timeouts for large uploads over SSL
    client_body_timeout 300s;
    send_timeout 300s;

    location / {
        proxy_pass http://backend;
    }
}

The ssl_buffer_size directive (default: 16 KB) controls the size of the buffer used for sending data over SSL. While this mainly affects responses, a misconfigured SSL setup can cause unexpected behavior with large request bodies.

More importantly, check client_body_timeout. This directive sets the timeout for reading the client request body — not the total upload time, but the time between two successive read operations. For large uploads over slow connections or SSL, the default 60 seconds might not be enough:

client_body_timeout 300s;

Also verify that your client_max_body_size is set in the correct server block. If you have separate server blocks for port 80 (HTTP) and port 443 (HTTPS), you need the directive in both blocks, or at least in the HTTPS block where the actual upload requests arrive:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    client_max_body_size 100M;  # Must be here, not just in the port 80 block

    # ... SSL and proxy config
}

If your SSL setup is causing handshake failures unrelated to upload size, see the SSL handshake failed guide.

Still Not Working?

If you have fixed Nginx, your backend framework, and your proxy settings, but uploads still fail, check these less obvious causes:

CDN Limits

If your traffic goes through a CDN, it enforces its own upload size limits before the request even reaches your Nginx server.

Cloudflare limits upload size based on your plan:

  • Free: 100 MB
  • Pro: 100 MB
  • Business: 200 MB
  • Enterprise: 500 MB by default (can be increased)

You cannot increase this limit through Nginx configuration. You either need to upgrade your Cloudflare plan, bypass Cloudflare for the upload endpoint (use a DNS-only record), or implement chunked uploads (Fix 7).

AWS CloudFront generally streams request bodies through to the origin, but edge functions can restrict them: if you attach Lambda@Edge with body access to the upload path, only a limited portion of the request body is exposed to the function. If uploads through CloudFront fail, review the distribution's behavior settings and any edge functions on the upload route.

Load Balancer Limits

AWS Application Load Balancer (ALB) does not enforce a request body size limit, but AWS API Gateway has a hard limit of 10 MB for payload size. If your requests route through API Gateway to Nginx, that 10 MB limit applies regardless of your Nginx settings.

Google Cloud's external HTTP(S) load balancer can also reject large requests depending on the backend: serverless backends such as Cloud Run cap request bodies at 32 MB, for example. Check the limits of whatever backend service sits behind the load balancer.

Application-Level Validation

Your application code might be rejecting the file independently of any server configuration. Check for:

  • File type validation that returns a misleading error message
  • Database column size limits if you are storing file data as BLOBs
  • Disk space on the server or temp directory (/tmp is often a small tmpfs mount)
  • Inode limits if the upload directory has too many files

Verify the Entire Chain

The fastest way to debug is to isolate each layer. Use curl to send a test file directly to each layer:

# Test Nginx directly (bypass CDN/load balancer)
curl -v -X POST -F "file=@large-test-file.bin" https://example.com/upload \
  --resolve example.com:443:your-server-ip

# Check the response headers for clues
curl -sS -o /dev/null -D - -X POST -F "file=@large-test-file.bin" https://example.com/upload

If the curl request to the server IP succeeds but the same request through your domain fails, the problem is in the CDN or load balancer layer, not Nginx.
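
For those curl tests you need a payload of a known, exact size. A quick sketch for generating one (the path and size are arbitrary; on Linux you would normally just use dd or truncate instead):

```python
def make_test_file(path: str, size_mb: int) -> int:
    """Write a zero-filled file of exactly size_mb megabytes; return its size in bytes."""
    block = b"\x00" * (1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
    return size_mb * 1024 * 1024

# make_test_file("large-test-file.bin", 10)  # 10 MB payload for the curl tests
```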

Check your Nginx error log for the exact rejection:

sudo tail -f /var/log/nginx/error.log

And your access log to confirm the 413 is coming from Nginx and not the backend:

sudo tail -f /var/log/nginx/access.log | grep 413

If the 413 shows up in the access log with Nginx as the source, the fix is in your Nginx config. If you see a different status code or the request reaches the backend, the problem is downstream.
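
If you want to quantify how often the limit is being hit, a few lines of Python can tally 413 responses from combined-format access log lines (the sample lines below are made up for illustration):

```python
import re

# Matches the status code field in nginx's combined log format:
# ... "POST /upload HTTP/1.1" 413 578 ...
STATUS_RE = re.compile(r'" (\d{3}) ')

def count_413(lines):
    """Count log lines whose response status is 413."""
    hits = 0
    for line in lines:
        m = STATUS_RE.search(line)
        if m and m.group(1) == "413":
            hits += 1
    return hits

sample = [
    '192.168.1.10 - - [01/Jan/2025:12:00:00 +0000] "POST /upload HTTP/1.1" 413 578 "-" "curl/8.5.0"',
    '192.168.1.10 - - [01/Jan/2025:12:00:05 +0000] "GET / HTTP/1.1" 200 1024 "-" "curl/8.5.0"',
]
print(count_413(sample))  # 1
```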

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
