Tornado, Nginx, Apache ab - apr_socket_recv: Connection reset by peer (104)

I am running nginx and Tornado on a c1.medium instance.

When I run ab, my output is below. Nginx won't keep up. I have tried tweaking the nginx config file, with no success. If I bypass nginx and hit a single port directly, e.g.

  http://127.0.0.1:8050/pixel?tt=ff

then it is fast. See the bottom of this post. So this has to be an nginx problem; how do I fix it? The nginx config file is below.

root@ip-10-130-167-230:/etc/service# ab -n 10000 -c 50 http://127.0.0.1/pixel?tt=ff
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 9100 requests completed

This should be blazing fast, but it isn't.
I have set the following parameters:

ulimit is at 100000

# General gigabit tuning:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1
# this gives the kernel more memory for tcp
# which you need with many (100k+) open socket connections
net.ipv4.tcp_mem = 50576   64768   98152
net.core.netdev_max_backlog = 2500
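Assuming these settings are meant to survive a reboot, they would typically live in /etc/sysctl.conf (or a file under /etc/sysctl.d/) and be loaded with sysctl. A sketch of how they would usually be applied and verified (paths and the single-setting example are illustrative):

```shell
# Reload the persistent config after appending the tuning block to it:
sudo sysctl -p /etc/sysctl.conf

# Or apply one setting immediately, without editing any file:
sudo sysctl -w net.core.netdev_max_backlog=2500

# Verify the value the kernel actually picked up:
sysctl net.core.netdev_max_backlog
```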

Here is my nginx config file:

user www-data;
worker_processes 1;  # 2*number of cpus
pid /var/run/nginx.pid;
worker_rlimit_nofile 32768;
events {
         worker_connections  30000;
         multi_accept on;
         use epoll;
}

http {
        upstream frontends {
          server 127.0.0.1:8050;
          server 127.0.0.1:8051;
        }
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Only retry if there was a communication error, not a timeout
        # on the Tornado server (to avoid propagating "queries of death"
        # to all frontends)
        proxy_next_upstream error;

        server {
        listen   80;
        server_name 127.0.0.1;
                ##For tornado
                location / {
                    proxy_pass_header Server;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Scheme $scheme;
                    proxy_pass http://frontends;
                }
        }
}

If I run ab against Tornado directly, bypassing nginx:

ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff



root@ip-10-130-167-230:/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer# ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        TornadoServer/2.2.1
Server Hostname:        127.0.0.1
Server Port:            8050

Document Path:          /pixel?tt=ff
Document Length:        42 bytes

Concurrency Level:      1000
Time taken for tests:   52.436 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      31200000 bytes
HTML transferred:       4200000 bytes
Requests per second:    1907.08 [#/sec] (mean)
Time per request:       524.363 [ms] (mean)
Time per request:       0.524 [ms] (mean, across all concurrent requests)
Transfer rate:          581.06 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  411 1821.7      0   21104
Processing:    23   78 121.2     65    5368
Waiting:       22   78 121.2     65    5368
Total:         53  489 1845.0     65   23230

Percentage of the requests served within a certain time (ms)
  50%     65
  66%     69
  75%     78
  80%     86
  90%    137
  95%   3078
  98%   3327
  99%   9094
 100%  23230 (longest request)


2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"

Running ab with the -v 10 option outputs the following:
GIF89a
LOG: Response code = 200
LOG: header received:
HTTP/1.1 200 OK
Date: Wed, 16 May 2012 21:56:50 GMT
Content-Type: image/gif
Content-Length: 42
Connection: close
Etag: "d5fceb6532643d0d84ffe09c40c481ecdf59e15a"
Server: TornadoServer/2.2.1
Set-Cookie: rtbhui=867bccde-2bc0-4518-b422-8673e07e19f6; Domain=rtb.rtbhui.com; expires=Fri, 16 May 2014 21:56:50 GMT; Path=/
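For context, the /pixel endpoint under test just returns a tiny GIF with Connection: close. A minimal stand-in using only the Python standard library (not the asker's Tornado code; the handler name and GIF payload are hypothetical, and the real endpoint's body is 42 bytes per the headers above) can reproduce that behavior for local experiments:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Standard 1x1 transparent GIF; hypothetical payload, the real endpoint's
# 42-byte body is not shown in the question.
PIXEL_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
    b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
    b"\x00\x02\x02D\x01\x00;"
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/pixel"):
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL_GIF)))
            self.send_header("Connection", "close")
            self.end_headers()
            self.wfile.write(PIXEL_GIF)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        # Silence per-request logging so benchmarks stay readable.
        pass

def serve(port=8050):
    """Start the pixel server on 127.0.0.1 in a background thread."""
    server = HTTPServer(("127.0.0.1", port), PixelHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

This is single-threaded and far slower than Tornado behind nginx, but it is enough to sanity-check ab flags against a known-good endpoint.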

I found my problem... sadly.. haha... my chef run was restarting things every 30 seconds. I had a bug in there. Once I fixed it, the problem was resolved. - Tampa
2 Answers

I ran into the same problem; looking through the logs I found the following lines:

Oct 15 10:41:30 bal1 kernel: [1031008.706185] nf_conntrack: table full, dropping packet.
Oct 15 10:41:31 bal1 kernel: [1031009.757069] nf_conntrack: table full, dropping packet.
Oct 15 10:41:32 bal1 kernel: [1031009.939489] nf_conntrack: table full, dropping packet.
Oct 15 10:41:32 bal1 kernel: [1031010.255115] nf_conntrack: table full, dropping packet.

In my particular case, the conntrack module was in use by iptables because the same server runs a firewall.
One way to fix this is to unload the conntrack module; another, simpler way is to add these two rules to the firewall policy:

iptables -t raw -I PREROUTING -p tcp -j NOTRACK
iptables -t raw -I OUTPUT -p tcp -j NOTRACK
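Before disabling tracking for all TCP traffic, it may be worth checking how full the table actually is; raising the ceiling is an alternative to NOTRACK. These are config-administration commands, and the sysctl names assume a modern kernel with nf_conntrack loaded (older kernels used the net.ipv4.netfilter.ip_conntrack_* names):

```shell
# Current entry count versus the configured maximum:
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# If the count sits pinned near the max during load tests, raising the
# limit is an alternative to NOTRACK (the value here is illustrative):
sudo sysctl -w net.netfilter.nf_conntrack_max=262144
```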
