OpenResty Benchmark

Posted by zhuizhuhaomeng Blog on November 17, 2025

upstream configuration

user  nobody;
worker_processes  2;
worker_cpu_affinity 0010 0100;  # bind to CPU1 and CPU2
error_log  logs/error.log error;
pid        logs/nginx.pid;

events {
    accept_mutex off;
    worker_connections  8192;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    server {
        listen 1880;
        location / {
            content_by_lua_block {
                ngx.say("Hello world!")
            }
        }
    }
}
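
To run this backend, the config above needs its own OpenResty instance. The following is a minimal sketch, assuming a stock OpenResty install under /usr/local/openresty and that the config is saved as upstream.conf; the prefix directory and file names are placeholders, not from the original post.

# Placeholder layout: give the instance its own prefix so its pid file and
# logs do not collide with the gateway started later.
mkdir -p /tmp/or-upstream/conf /tmp/or-upstream/logs
cp /usr/local/openresty/nginx/conf/mime.types /tmp/or-upstream/conf/
cp upstream.conf /tmp/or-upstream/conf/nginx.conf   # the config shown above

# Test the configuration, then start the upstream instance. Run as root if
# you keep "user nobody;"; otherwise nginx logs a warning and ignores it.
openresty -p /tmp/or-upstream/ -c conf/nginx.conf -t
openresty -p /tmp/or-upstream/ -c conf/nginx.conf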

OpenResty gateway configuration

user  nobody;
worker_processes  1;
worker_cpu_affinity auto;  # This will bind to CPU0
pid        logs/nginx.pid;
error_log  logs/error.log error;


events {
    accept_mutex off;
    worker_connections  8192;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    upstream backend_1880 {
        server 127.0.0.1:1880;
        keepalive 32;
    }

    server {
        listen       1881;
        server_name  localhost;

        location / {
            # The next two directives are required for proxy_pass to reuse
            # keepalive connections to the upstream.
            proxy_pass http://backend_1880;
            proxy_http_version 1.1;
            proxy_set_header Connection ""; 
        }
    }
}
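
The gateway can be started the same way under its own prefix. Once both instances are running, it is worth confirming that the keepalive pool is actually being used: count the gateway-to-upstream connections while traffic flows. Without proxy_http_version 1.1 and the cleared Connection header, every request would open and close its own upstream connection. A rough sketch, again with placeholder paths and assuming wrk and ss are installed:

# Placeholder layout for the gateway instance (mirrors the upstream setup).
mkdir -p /tmp/or-gateway/conf /tmp/or-gateway/logs
cp /usr/local/openresty/nginx/conf/mime.types /tmp/or-gateway/conf/
cp gateway.conf /tmp/or-gateway/conf/nginx.conf   # the config shown above
openresty -p /tmp/or-gateway/ -c conf/nginx.conf

# Push some load through the gateway, then look at established connections to
# the upstream port while wrk is still running. With keepalive reuse the count
# stays small and stable (bounded by "keepalive 32") instead of churning.
wrk -d 5s -c 10 -t 2 http://127.0.0.1:1881 &
sleep 2
ss -tn state established '( dport = :1880 )'
wait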

benchmark

  1. Test against the upstream first to make sure it works correctly.

     curl http://127.0.0.1:1880

  2. Benchmark the upstream directly to measure its raw performance; the upstream must not be the bottleneck.

     wrk -d 10s -c 10 -t 2 http://127.0.0.1:1880

  3. Test against the OpenResty gateway to make sure it works correctly.

     curl http://127.0.0.1:1881

  4. Benchmark the OpenResty gateway to measure the performance of the gateway itself (a CPU-pinned variant of this command is sketched after the list).

     wrk -d 10s -c 10 -t 2 http://127.0.0.1:1881
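
Both nginx instances are pinned to CPU0, CPU1, and CPU2 by the worker_cpu_affinity settings above, so it helps to keep the load generator off those cores; otherwise wrk competes with the workers and the numbers partly reflect scheduler contention. A possible CPU-pinned variant of step 4, assuming the machine has at least five cores (the core numbers are an assumption, not from the original post):

# Pin wrk to CPU3 and CPU4, away from the gateway worker (CPU0) and the
# upstream workers (CPU1 and CPU2) configured above.
taskset -c 3,4 wrk -d 10s -c 10 -t 2 http://127.0.0.1:1881

Comparing the Requests/sec reported in step 2 with the one from step 4 then gives a rough measure of the overhead added by the gateway layer.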