
Nginx as a Reverse Proxy: Complete Tutorial 2026 (SSL, Load Balancing, Headers)

🟡 Intermediate

Configure Nginx as a reverse proxy on Ubuntu 24.04 LTS in 2026. Covers upstream blocks, SSL termination, load balancing, security headers, rate limiting, WebSocket proxying, and caching.

Author: Noah Choi, Linux & Cloud Native Infrastructure Engineer

Reading: 18 min
Build: 20 min


Key Takeaways

  • What a reverse proxy does: Clients connect to Nginx. Nginx connects to your application. Your application never directly touches the internet — it only talks to Nginx on a private socket or localhost port.
  • proxy_pass is the core directive: proxy_pass http://localhost:3000; routes every request to a local app. Use an upstream block to proxy to multiple servers with load balancing.
  • Set the real IP headers: proxy_set_header X-Real-IP $remote_addr; and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; are required so your application knows the actual client IP rather than Nginx’s loopback address.
  • Sovereign proxy advantages: Nginx is the only component exposed to the network. Applications bind to 127.0.0.1 only. SSL terminates at Nginx. Rate limits, bot blocking, and security headers apply before a single byte reaches your application code. The quick check below confirms this posture.
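
A quick way to verify this loopback-only posture on a running server (ss comes from iproute2, standard on Ubuntu 24.04):

# Nginx should listen on 0.0.0.0 (all interfaces); the app only on 127.0.0.1
sudo ss -tlnp | awk 'NR==1 || /nginx|:3000/'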

Introduction

Direct Answer: How do I configure Nginx as a reverse proxy on Ubuntu 24.04 in 2026?

Install Nginx with sudo apt-get install -y nginx, then create a server block in /etc/nginx/sites-available/myapp with a location / block containing proxy_pass http://127.0.0.1:3000; plus proxy_set_header Host $host;, proxy_set_header X-Real-IP $remote_addr;, and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;. Enable the site with sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/, test with sudo nginx -t, and reload with sudo systemctl reload nginx. For SSL, install Certbot and run sudo certbot --nginx -d yourdomain.com — Certbot automatically modifies your Nginx config to handle HTTPS. For multiple backend servers, define an upstream block and reference it in proxy_pass. The complete configuration with security headers, rate limiting, gzip, and caching takes approximately 20 minutes to set up on a fresh Ubuntu 24.04 server.

“Nginx as a reverse proxy is the simplest architectural decision you can make to dramatically improve the security, performance, and maintainability of any self-hosted application.”

This guide assumes Nginx is already installed on Ubuntu 24.04. If not, follow How to Install Nginx on Ubuntu 24.04 LTS first.


Part 1: Basic Reverse Proxy — Single Backend

The minimal reverse proxy configuration: Nginx on port 80 forwarding to an application on port 3000.

# Verify Nginx is installed and running
nginx -v
sudo systemctl status nginx --no-pager | grep "Active:"

Expected output:

nginx version: nginx/1.27.3
     Active: active (running)

Start a test backend (simulates your application):

# Simple Python HTTP server on port 3000
python3 -m http.server 3000 &
BACKEND_PID=$!
echo "Backend PID: $BACKEND_PID — running on port 3000"

Create the reverse proxy config:

sudo tee /etc/nginx/sites-available/myapp << 'EOF'
server {
    listen 80;
    server_name myapp.example.com;   # Replace with your domain or server IP

    # ── Logging ───────────────────────────────────────────────────────────
    access_log /var/log/nginx/myapp-access.log;
    error_log  /var/log/nginx/myapp-error.log warn;

    # ── Proxy to application ──────────────────────────────────────────────
    location / {
        proxy_pass         http://127.0.0.1:3000;

        # Pass the real client information to the backend
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout  10s;
        proxy_send_timeout     60s;
        proxy_read_timeout     60s;

        # Buffer settings — tune for your application's response sizes
        proxy_buffer_size          16k;
        proxy_buffers              8 16k;
        proxy_busy_buffers_size    32k;
    }
}
EOF

Enable and test:

# Enable the site
sudo ln -sf /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp

# Test config for syntax errors
sudo nginx -t

Expected output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

# Reload Nginx (no downtime)
sudo systemctl reload nginx

# Test the proxy
curl -s -I http://localhost/ | head -5

Expected output:

HTTP/1.1 200 OK
Server: nginx/1.27.3
Date: Tue, 22 Apr 2026 09:00:00 GMT
Content-type: text/html; charset=utf-8

# Confirm X-Forwarded headers reach the backend
kill $BACKEND_PID 2>/dev/null

# Start a headers-echoing backend instead
python3 -c "
import http.server, json

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        headers = {k: v for k, v in self.headers.items()}
        self.wfile.write(json.dumps(headers, indent=2).encode())
    def log_message(self, *args): pass

http.server.HTTPServer(('127.0.0.1', 3000), Handler).serve_forever()
" &
BACKEND_PID=$!

curl -s http://localhost/ | python3 -m json.tool | grep -E "X-Real|X-Forwarded|Host"
kill $BACKEND_PID 2>/dev/null

Expected output:

    "Host": "localhost",
    "X-Forwarded-For": "127.0.0.1",
    "X-Forwarded-Proto": "http",
    "X-Real-Ip": "127.0.0.1"

The backend receives the real client IP via the headers. In production, X-Real-Ip will show the actual visitor’s IP address.
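
One caveat worth seeing in action: $proxy_add_x_forwarded_for appends to any X-Forwarded-For header the client already sent, so applications must only trust the rightmost entry, the one added by Nginx itself. If you re-run the headers-echoing backend above, a spoofed header demonstrates this (203.0.113.9 is a documentation address used here for illustration):

# The client-supplied value is kept and Nginx appends the real client address
curl -s -H "X-Forwarded-For: 203.0.113.9" http://localhost/ \
  | python3 -m json.tool | grep "X-Forwarded-For"
# Expected: "X-Forwarded-For": "203.0.113.9, 127.0.0.1"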


Part 2: SSL Termination with Let’s Encrypt

Nginx handles HTTPS and forwards plain HTTP to the backend — the application never needs to manage certificates.

# Install Certbot Nginx plugin
sudo apt-get install -y certbot python3-certbot-nginx

# Obtain certificate and auto-configure Nginx
# Replace with your actual domain (must have DNS pointing to this server)
sudo certbot --nginx -d myapp.example.com --non-interactive --agree-tos \
  --email admin@example.com --redirect

Expected output:

Requesting a certificate for myapp.example.com
...
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/myapp.example.com/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/myapp.example.com/privkey.pem
...
Deploying certificate to VirtualHost /etc/nginx/sites-enabled/myapp
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/myapp
...
Congratulations! You have successfully enabled HTTPS on https://myapp.example.com

Certbot modifies your config automatically. Inspect what it added:

sudo cat /etc/nginx/sites-available/myapp
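
Certbot also installs a systemd timer that renews certificates automatically before they expire. A dry run confirms the renewal path works:

# Verify the renewal timer is scheduled, then simulate a renewal
systemctl list-timers | grep certbot
sudo certbot renew --dry-run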

For manual configuration (or self-signed certs for internal use):

sudo tee /etc/nginx/sites-available/myapp-ssl << 'EOF'
# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name myapp.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    server_name myapp.example.com;

    # SSL certificates
    ssl_certificate     /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;

    # Modern SSL config (TLS 1.2 + 1.3 only)
    ssl_protocols             TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers               ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_session_cache         shared:SSL:10m;
    ssl_session_timeout       1d;
    ssl_session_tickets       off;

    # HSTS — tell browsers to always use HTTPS (1 year)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Security headers
    add_header X-Frame-Options           "SAMEORIGIN" always;
    add_header X-Content-Type-Options    "nosniff" always;
    add_header Referrer-Policy           "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy        "geolocation=(), microphone=(), camera=()" always;

    # Reverse proxy to application
    location / {
        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_set_header   X-Forwarded-Host  $host;
        proxy_http_version 1.1;
        proxy_set_header   Connection "";    # Enable HTTP/1.1 keepalive to upstream
    }
}
EOF

sudo nginx -t && sudo systemctl reload nginx

Verify the HTTPS response and security headers:

# Test SSL configuration (requires a real domain with valid cert)
curl -sI https://myapp.example.com | grep -E "HTTP|Strict|X-Frame"

Expected output:

HTTP/2 200
strict-transport-security: max-age=31536000; includeSubDomains
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
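
You can also confirm that legacy protocol versions are refused, since the config pins TLSv1.2 as the floor (a quick check with openssl s_client; requires the real domain):

# TLS 1.1 should fail to negotiate (modern OpenSSL may also refuse it client-side)
openssl s_client -connect myapp.example.com:443 -tls1_1 < /dev/null 2>&1 | head -3

# TLS 1.3 should succeed and appear in the session summary
openssl s_client -connect myapp.example.com:443 -tls1_3 < /dev/null 2>&1 | grep "Protocol"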

Part 3: Load Balancing with Upstream Blocks

Route traffic across multiple backend instances with Nginx’s upstream module.

sudo tee /etc/nginx/sites-available/loadbalanced-app << 'EOF'
# ── Upstream group — defines the backend pool ─────────────────────────────
upstream app_backend {
    # Load balancing algorithm (choose one):
    # (default: round-robin — requests distributed sequentially)
    least_conn;     # Fewest active connections — best for variable response times

    # Backend servers (add as many as needed)
    server 127.0.0.1:3001 weight=3;   # weight=3: receives 3x more requests
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=1;

    # Backup server — only used when all primary servers are down
    server 127.0.0.1:3004 backup;

    # Passive health checks: add max_fails=3 fail_timeout=30s to a server line
    # to mark it down after 3 failures within 30s and retry it after 30s
}

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass         http://app_backend;   # Reference the upstream group
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;

        # Retry on failure — try next upstream server
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 3;
    }
}
EOF

Verify load balancing distributes requests:

# Start 3 backend servers on different ports
for port in 3001 3002 3003; do
  python3 -c "
import http.server

class H(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Backend on port $port')
    def log_message(self, *a): pass

http.server.HTTPServer(('127.0.0.1', $port), H).serve_forever()
" &
done

sudo ln -sf /etc/nginx/sites-available/loadbalanced-app /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sleep 1

# Make 9 requests — should distribute across backends
for i in $(seq 1 9); do curl -s http://localhost/; echo; done

Expected output (weighted roughly 3:2:1; exact order varies with least_conn):

Backend on port 3001
Backend on port 3001
Backend on port 3001
Backend on port 3002
Backend on port 3002
Backend on port 3003
Backend on port 3001
Backend on port 3001
Backend on port 3002

3001 receives approximately 3× more requests due to weight=3.
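
Before cleaning up, you can also verify failover: stop one backend and the proxy_next_upstream directive retries the remaining servers (a quick sketch using the test backends above):

# Stop the port-3001 backend; traffic should shift to 3002 and 3003
kill $(lsof -t -i:3001) 2>/dev/null
sleep 1
for i in $(seq 1 4); do curl -s http://localhost/; echo; done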

kill $(lsof -t -i:3001 -i:3002 -i:3003 2>/dev/null) 2>/dev/null || true

Part 4: WebSocket Proxying

WebSockets require specific headers to upgrade the HTTP connection:

location /ws/ {
    proxy_pass         http://127.0.0.1:3000;

    # Required for WebSocket upgrade
    proxy_http_version 1.1;
    proxy_set_header   Upgrade    $http_upgrade;
    proxy_set_header   Connection "upgrade";

    # Real IP headers (same as HTTP)
    proxy_set_header   Host           $host;
    proxy_set_header   X-Real-IP      $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;

    # Long timeout for WebSocket connections (they stay open)
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}

This pattern covers: Socket.IO, FastAPI WebSockets, Next.js hot reload, Ollama streaming responses, and Open WebUI.
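
For locations that serve both plain HTTP and WebSocket traffic on the same path, the standard idiom from the Nginx documentation is a map that only upgrades the connection when the client asks for it:

sudo tee /etc/nginx/conf.d/websocket-map.conf << 'EOF'
# Resolves to "upgrade" when the client sends an Upgrade header, "close" otherwise
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
EOF

Then use proxy_set_header Connection $connection_upgrade; in place of the hardcoded "upgrade" above.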


Part 5: Rate Limiting

Protect your backend from abuse and brute-force attacks:

sudo tee /etc/nginx/conf.d/rate-limiting.conf << 'EOF'
# Define rate limit zones in the http context
# Zone name: api_limit
# Key: client IP ($binary_remote_addr — compact binary format)
# Zone size: 10MB (stores ~160,000 IPs)
# Rate: 10 requests per second per IP

limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;  # Strict for auth
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
EOF

# Update your server block to use rate limiting. The location blocks below
# must live inside the server { } context, so strip the file's closing brace,
# append them, and re-close (assumes the file still ends with that brace)
sudo sed -i '$ d' /etc/nginx/sites-available/myapp
sudo tee -a /etc/nginx/sites-available/myapp << 'EOF'

    # General API rate limit: 10 req/s with a burst of 20; nodelay serves
    # burst requests immediately instead of queueing them
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Strict rate limit on auth endpoints: 5 req/min
    location /api/auth/ {
        limit_req zone=login_limit burst=5;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Connection limit: max 5 simultaneous connections per IP
    location /api/upload/ {
        limit_conn conn_limit 5;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Real-IP $remote_addr;
        client_max_body_size 100M;
    }
}
EOF

sudo nginx -t && sudo systemctl reload nginx

Test rate limiting:

# Send 25 rapid requests (the backend on port 3000 must be running).
# Expect roughly the first 20 to pass (burst), the rest to get 429
for i in $(seq 1 25); do
  code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost/api/)
  echo -n "$code "
done
echo ""

Expected output:

200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 429 429 429 429 429

Roughly the first 20 requests succeed (the burst allowance plus tokens refilled at 10 r/s while the loop runs); the remainder get 429 Too Many Requests. Exact counts vary with timing.
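
Rejected requests are also recorded in the error log, which helps when tuning zone rates (the "limiting requests" message comes from the limit_req module):

# Inspect recent rate-limit rejections
sudo grep "limiting requests" /var/log/nginx/myapp-error.log | tail -3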


Part 6: Proxy Caching

Cache backend responses in Nginx to reduce application load:

sudo tee /etc/nginx/conf.d/proxy-cache.conf << 'EOF'
# Define cache zone in http context
# Path: /var/cache/nginx/myapp (create this directory)
# Keys zone: 10MB (stores cache keys and metadata)
# Max size: 1GB of cached responses
# Inactive: remove if not accessed for 60 minutes
proxy_cache_path /var/cache/nginx/myapp
    levels=1:2
    keys_zone=myapp_cache:10m
    max_size=1g
    inactive=60m
    use_temp_path=off;
EOF

sudo mkdir -p /var/cache/nginx/myapp
sudo chown www-data:www-data /var/cache/nginx/myapp

# Add caching to your location block, again inside the server { } context.
# Use sudo tee rather than a bare redirect: writing to /etc/nginx needs root.
sudo sed -i '$ d' /etc/nginx/sites-available/myapp
sudo tee -a /etc/nginx/sites-available/myapp << 'NGINX'

    # Cache GET requests to /api/public/ for 5 minutes
    location /api/public/ {
        proxy_cache            myapp_cache;
        proxy_cache_key        "$scheme$request_method$host$request_uri";
        proxy_cache_valid      200 5m;     # Cache 200 responses for 5 minutes
        proxy_cache_valid      404 1m;
        proxy_cache_use_stale  error timeout updating http_500 http_502 http_503;
        proxy_cache_lock       on;         # Prevent thundering herd

        # Add cache status header (for debugging)
        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
NGINX

sudo nginx -t && sudo systemctl reload nginx

Verify caching:

# First request — MISS (fetches from backend)
curl -sI http://localhost/api/public/data | grep X-Cache-Status
# Second request — HIT (served from cache)
curl -sI http://localhost/api/public/data | grep X-Cache-Status

Expected output:

X-Cache-Status: MISS
X-Cache-Status: HIT
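
Cached responses live on disk in the hashed directory layout set by levels=1:2. Open-source Nginx has no built-in purge command, so clearing the cache is simply a matter of removing the files:

# List a few cached entries, then clear the cache entirely
sudo find /var/cache/nginx/myapp -type f | head -5
sudo rm -rf /var/cache/nginx/myapp/*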

Part 7: Complete Production Config

A single file combining all patterns — SSL, security headers, rate limiting, gzip, and reverse proxy:

sudo tee /etc/nginx/sites-available/production-proxy << 'NGINX'
# ── Rate limit zones (http context) ──────────────────────────────────────
limit_req_zone $binary_remote_addr zone=general:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=auth:10m    rate=5r/m;

server {
    listen 80;
    server_name myapp.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    http2 on;
    server_name myapp.example.com;

    # SSL
    ssl_certificate     /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1d;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options           "SAMEORIGIN" always;
    add_header X-Content-Type-Options    "nosniff" always;
    add_header Referrer-Policy           "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy   "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" always;

    # Gzip compression
    gzip              on;
    gzip_types        text/plain text/css application/json application/javascript text/xml application/xml;
    gzip_comp_level   5;
    gzip_min_length   256;
    gzip_vary         on;

    # Logging
    access_log /var/log/nginx/myapp-access.log combined;
    error_log  /var/log/nginx/myapp-error.log warn;

    # Block common attack patterns
    location ~* \.(php|asp|aspx|jsp)$ {
        return 404;
    }

    # API with rate limiting
    location /api/ {
        limit_req zone=general burst=60 nodelay;
        limit_req_status 429;

        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header   Connection "";
        proxy_read_timeout 60s;
    }

    # Stricter auth rate limiting
    location /api/auth/ {
        limit_req zone=auth burst=10;
        limit_req_status 429;

        proxy_pass         http://127.0.0.1:3000;
        proxy_set_header   X-Real-IP $remote_addr;
    }

    # WebSocket
    location /ws/ {
        proxy_pass         http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade    $http_upgrade;
        proxy_set_header   Connection "upgrade";
        proxy_read_timeout 3600s;
    }

    # Static files served directly (bypass application server)
    location /static/ {
        alias /opt/myapp/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}
NGINX

sudo nginx -t

Expected output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
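
Enable it the same way as the earlier sites (a sketch, assuming the test configs from Parts 1 and 3 are no longer needed):

# Disable the test sites and switch to the production config
sudo rm -f /etc/nginx/sites-enabled/myapp /etc/nginx/sites-enabled/loadbalanced-app
sudo ln -sf /etc/nginx/sites-available/production-proxy /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx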

Troubleshooting

502 Bad Gateway

Cause: Nginx cannot connect to the upstream backend — the application is not running or is listening on a different port. Fix:

# Check what's running on the proxied port
sudo ss -tlnp | grep 3000
# Check Nginx error log for the specific error
sudo tail -20 /var/log/nginx/myapp-error.log
# Verify proxy_pass URL matches where the app actually listens

413 Request Entity Too Large

Cause: The request body exceeds client_max_body_size (default: 1MB). Fix: Add to your server or location block:

client_max_body_size 100M;
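
A quick check that the new limit took effect (a sketch; assumes an endpoint that accepts POST bodies):

# A 2 MB body should now pass instead of returning 413
head -c 2M /dev/zero > /tmp/upload.bin
curl -s -o /dev/null -w "%{http_code}\n" -X POST \
  --data-binary @/tmp/upload.bin http://localhost/api/upload/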

Application sees 127.0.0.1 as client IP instead of real IP

Cause: Missing or misconfigured proxy_set_header directives. Fix: Ensure these three lines are in your location block:

proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

Also configure your application server to trust these headers. With FastAPI behind Uvicorn, start Uvicorn with --proxy-headers and --forwarded-allow-ips set to the proxy's address so the framework reports the real visitor IP.
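
A minimal sketch of that Uvicorn invocation (main:app stands in for your own module):

# Trust X-Forwarded-* headers only from the local Nginx instance
uvicorn main:app --host 127.0.0.1 --port 3000 \
  --proxy-headers --forwarded-allow-ips 127.0.0.1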

SSL_ERROR_RX_RECORD_TOO_LONG in browser

Cause: Browser connected via HTTPS but Nginx is serving plain HTTP on port 443 (missing ssl in listen directive). Fix: Ensure listen 443 ssl; (not just listen 443;) in your server block.


Conclusion

Nginx is now acting as a sovereign gateway for your application: SSL terminates at Nginx, the backend never touches the internet directly, security headers protect every response, rate limiting blocks abuse before it reaches your application, and load balancing distributes traffic across multiple backend instances.

This configuration serves as the entry point for the entire Dev Corner stack — see How to Install Nginx on Ubuntu 24.04 LTS for the base installation, Docker Compose Tutorial for integrating this proxy into a multi-container stack, and Docker Security Best Practices to harden the containers behind this proxy.


People Also Ask

What is the difference between a forward proxy and a reverse proxy?

A forward proxy sits between clients and the internet — clients configure their browser/system to route traffic through it (a VPN or corporate proxy). A reverse proxy sits in front of servers — clients don’t know it exists and think they’re talking directly to the application. Nginx in this guide is a reverse proxy: your users connect to https://myapp.example.com, Nginx receives the request, and forwards it to your application on localhost:3000. The client never sees or knows about the backend server.

How does Nginx proxy_pass handle trailing slashes?

This is one of the most common Nginx configuration traps. proxy_pass http://backend:3000; (no trailing slash) passes the full URI to the backend unchanged. proxy_pass http://backend:3000/; (with trailing slash) strips the location prefix from the URI. Example: for location /api/, a request to /api/users is forwarded as /api/users without trailing slash but as /users with trailing slash. For most APIs you want to strip the prefix — use the trailing slash version. For a full-site proxy at location / both are equivalent.
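
A side-by-side sketch of the two behaviors for location /api/:

# proxy_pass http://127.0.0.1:3000;    →  /api/users is forwarded as /api/users
# proxy_pass http://127.0.0.1:3000/;   →  /api/users is forwarded as /users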

Can Nginx proxy to a Unix socket instead of a TCP port?

Yes — and it’s faster than TCP for local connections (no TCP overhead). Use proxy_pass http://unix:/run/myapp/gunicorn.sock; in your location block, and configure Gunicorn/Uvicorn to bind to that socket: uvicorn main:app --uds /run/myapp/gunicorn.sock. Create the socket directory with the correct permissions: sudo mkdir -p /run/myapp && sudo chown www-data:www-data /run/myapp.




Tested on: Ubuntu 24.04 LTS (Hetzner CX22). Nginx 1.27.3. Last verified: April 22, 2026.
