# nginx-upstream-keepalive
This repository demonstrates the proper configuration for enabling HTTP keep-alive on upstream servers when using NGINX as a reverse proxy. Enabling HTTP keep-alive in this setup can significantly improve performance by:
- Reducing CPU load on upstream servers by minimizing the number of new connections required.
- Improving request latency by reusing connections, which also enhances the ability to handle high request volumes.
This repository was created to address questions raised in the following Pull Request.
## TL;DR
To configure NGINX optimally as a reverse proxy with HTTP keep-alive support, use the following configuration:
```nginx
server {
    location / {
        # Reference to "upstream" block with the name "backend" (see below)
        proxy_pass http://backend;

        # Use HTTP/1.1 instead of HTTP/1.0 for upstream connections
        proxy_http_version 1.1;

        # Remove any "Connection: close" header and handle WebSockets (see "map" below)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

# If the "Upgrade" header is present and non-empty, forward "Connection: Upgrade".
# Otherwise, do not forward the "Connection" header.
map $http_upgrade $connection_upgrade {
    default upgrade;
    "" "";
}

upstream backend {
    # 127.0.0.1:8080 is an example upstream server
    server 127.0.0.1:8080;

    # Maintain 2 idle keep-alive connections to upstream servers from each worker process
    keepalive 2;
}
```
## Overview
This repository includes the following files:

- `main.go`: A simple HTTP server implemented in Go, serving as the NGINX upstream. It listens on port 8080 and logs requests, making it easy to check if HTTP keep-alive is active.
- `nginx.conf`: A minimal NGINX configuration file with 4 `server` blocks:
  - The 1st block (port 8081) uses only the standard `proxy_pass`.
  - The 2nd block (port 8082) adds `proxy_http_version 1.1`.
  - The 3rd block (port 8083) adds `proxy_set_header Connection ""`.
  - The 4th block (port 8084) includes an `upstream` block with `keepalive` enabled.
- `docker-compose.yaml` & `Dockerfile`: A Docker Compose setup to run the Go server with NGINX as a reverse proxy.
## Prerequisites

To run this example, you’ll need Docker Compose and `curl` as the client.
## Running

To start the Go server and NGINX proxy, run:

```shell
docker compose up -d --build
```

To observe logs from both NGINX and the Go server, use:

```shell
docker compose logs -f
```

When finished, you can stop the applications with:

```shell
docker compose down
```
## Results

### Step 0: Verifying Go Server Supports HTTP Keep-Alive

First, ensure that the Go server supports HTTP keep-alive by making 3 consecutive requests directly to it on port 8080 using `curl`. Check if the connection is reused:

```shell
curl -sv http://localhost:8080 http://localhost:8080 http://localhost:8080
```
In the `curl` output, you should see:

```
* Connection #0 to host localhost left intact
...
* Re-using existing connection with host localhost
```
This indicates that `curl` opened a connection for the first request and reused it for the next two. Additionally, in the Go server logs, you should see:

```
Received request from 192.168.107.1:55694 | Protocol: HTTP/1.1 | Will be closed: false
...
Received request from 192.168.107.1:55694 | Protocol: HTTP/1.1 | Will be closed: false
...
Received request from 192.168.107.1:55694 | Protocol: HTTP/1.1 | Will be closed: false
Request headers:
  User-Agent: curl/8.7.1
  Accept: */*
```

Each request uses the same port (`55694`), confirming that the connection was reused.
### Step 1: NGINX with Standard `proxy_pass`

Next, let's test NGINX with only the `proxy_pass` directive:

```nginx
server {
    listen 8081;
    location / {
        proxy_pass http://golang:8080;
    }
}
```

Run 3 requests to port 8081:

```shell
curl -sv http://localhost:8081 http://localhost:8081 http://localhost:8081
```
```
Received request from 192.168.107.3:42110 | Protocol: HTTP/1.0 | Will be closed: true
...
Received request from 192.168.107.3:42122 | Protocol: HTTP/1.0 | Will be closed: true
...
Received request from 192.168.107.3:42136 | Protocol: HTTP/1.0 | Will be closed: true
Request headers:
  Connection: close
  User-Agent: curl/8.7.1
  Accept: */*
```
In the Go server logs, you will see that each request originates from a different port (`42110`, `42122`, `42136`), showing that connections were not reused. This happens because NGINX defaults to HTTP/1.0 for upstream connections, which does not keep connections open by default.
### Step 2: NGINX Upgraded to HTTP/1.1

Enable HTTP/1.1 by adding `proxy_http_version 1.1`:

```diff
 server {
-    listen 8081;
+    listen 8082;
     location / {
         proxy_pass http://golang:8080;
+        proxy_http_version 1.1;
     }
 }
```

Run 3 requests to port 8082:

```shell
curl -sv http://localhost:8082 http://localhost:8082 http://localhost:8082
```
```
Received request from 192.168.107.3:60914 | Protocol: HTTP/1.1 | Will be closed: true
...
Received request from 192.168.107.3:60918 | Protocol: HTTP/1.1 | Will be closed: true
...
Received request from 192.168.107.3:60926 | Protocol: HTTP/1.1 | Will be closed: true
Request headers:
  User-Agent: curl/8.7.1
  Accept: */*
  Connection: close
```
The Go server logs still show requests from different ports (`60914`, `60918`, `60926`), meaning connections were not reused. This happens because NGINX adds a `Connection: close` header by default, which instructs the upstream server to close the connection after each request.
### Step 3: NGINX Without the `Connection: close` Header

Remove the `Connection: close` header by adding `proxy_set_header Connection "";`:

```diff
 server {
-    listen 8082;
+    listen 8083;
     location / {
         proxy_pass http://golang:8080;
         proxy_http_version 1.1;
+        proxy_set_header Connection "";
     }
 }
```

Run 3 requests to port 8083:

```shell
curl -sv http://localhost:8083 http://localhost:8083 http://localhost:8083
```
```
Received request from 192.168.107.3:49260 | Protocol: HTTP/1.1 | Will be closed: false
...
Received request from 192.168.107.3:49270 | Protocol: HTTP/1.1 | Will be closed: false
...
Received request from 192.168.107.3:49274 | Protocol: HTTP/1.1 | Will be closed: false
Request headers:
  User-Agent: curl/8.7.1
  Accept: */*
```
Despite removing `Connection: close`, the requests still arrive from different ports (`49260`, `49270`, `49274`): NGINX does not reuse the connections and closes them automatically after each request.
### Step 3.1: Fixing WebSocket Support

A keen reader might notice that the `Connection` header is also essential for WebSocket connections. When establishing a WebSocket connection (e.g., in JavaScript: `let ws = new WebSocket("ws://localhost:8080")`), the client sends the following headers:

```
Connection: Upgrade
Upgrade: websocket
```
However, our current configuration removes the `Connection` header, which breaks WebSocket connections. Let's fix this issue with the following approach:

- If the `Upgrade` header is present and non-empty, forward `Connection: Upgrade`.
- Otherwise, do not forward the `Connection` header.
To achieve this, we use the `map` directive. The `map` directive in NGINX allows us to create a mapping between a variable's value (in this case, `$http_upgrade`) and the output value assigned to another variable (here, `$connection_upgrade`). This is useful for dynamically setting configuration values based on request properties.
Here’s the updated configuration which properly handles WebSocket connections:
```diff
 server {
     listen 8083;
     location / {
         proxy_pass http://golang:8080;
         proxy_http_version 1.1;
-        proxy_set_header Connection "";
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection $connection_upgrade;
     }
 }
+
+map $http_upgrade $connection_upgrade {
+    default upgrade;
+    "" "";
+}
```
### Step 4: NGINX with `keepalive`

To enable connection reuse, define an `upstream` block with `keepalive` (see NGINX docs: ngx_http_upstream_module). This directive specifies the number of idle keep-alive connections preserved per worker process. Here is the final configuration:
```diff
 server {
-    listen 8083;
+    listen 8084;
     location / {
-        proxy_pass http://golang:8080;
+        proxy_pass http://backend;
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection $connection_upgrade;
     }
 }

 map $http_upgrade $connection_upgrade {
     default upgrade;
     "" "";
 }
+
+upstream backend {
+    server golang:8080;
+    keepalive 2;
+}
```
> [!NOTE]
> Note that the `keepalive` directive does not limit the total number of connections to upstream servers that an NGINX worker process can open – this is a common misconception. So the parameter to `keepalive` does not need to be as large as you might think.
>
> We recommend setting the parameter to twice the number of servers listed in the `upstream` block. This is large enough for NGINX to maintain keepalive connections with all the servers, but small enough that upstream servers can process new incoming connections as well.
>
> Reference: NGINX blog: Avoiding the Top 10 NGINX Configuration Mistakes (Mistake 3: Not Enabling Keepalive Connections to Upstream Servers)
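Applying that rule of thumb, an `upstream` block with three servers would use `keepalive 6` (the hostnames here are illustrative, not part of this repository):

```nginx
upstream backend {
    # three example upstream servers
    server app1:8080;
    server app2:8080;
    server app3:8080;

    # 2 connections × 3 servers = 6 idle keep-alive connections per worker
    keepalive 6;
}
```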
Run 3 requests to port 8084:

```shell
curl -sv http://localhost:8084 http://localhost:8084 http://localhost:8084
```
```
Received request from 192.168.107.3:55980 | Protocol: HTTP/1.1 | Will be closed: false
...
Received request from 192.168.107.3:55980 | Protocol: HTTP/1.1 | Will be closed: false
...
Received request from 192.168.107.3:55980 | Protocol: HTTP/1.1 | Will be closed: false
Request headers:
  User-Agent: curl/8.7.1
  Accept: */*
```
Finally! In the Go server logs, all requests come from the same port (`55980`), confirming that the connection was reused.
## References
- NGINX blog: Avoiding the Top 10 NGINX Configuration Mistakes (Mistake 3: Not Enabling Keepalive Connections to Upstream Servers)
- NGINX blog: 10 Tips for 10x Application Performance (Tip 9 – Tune Your Web Server for Performance)
- NGINX blog: HTTP Keepalive Connections and Web Performance
- NGINX docs: ngx_http_upstream_module
## Bonus: Comparing NGINX with Other Reverse Proxies
I also tested several popular open-source projects commonly used as reverse proxies, each with its default configuration:
- Apache HTTP Server (port 9090)
- Caddy (port 9091)
- Envoy (port 9092)
- HAProxy (port 9093)
- Traefik (port 9094)
All configurations for these proxies can be found in the `bonus` directory.
Results: All of these proxies use HTTP keep-alive by default, unlike NGINX, which requires additional setup.
To run these tests yourself, start Docker Compose with the `bonus` profile:

```shell
docker compose --profile bonus up -d --build
```
To observe logs from all applications, use:

```shell
docker compose --profile bonus logs -f
```
When finished, stop all applications with:

```shell
docker compose --profile bonus down
```
To send a sequence of HTTP requests to each proxy:
```shell
for PORT in 9090 9091 9092 9093 9094; do
    curl -sv http://localhost:$PORT http://localhost:$PORT http://localhost:$PORT
done
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## License