NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream

Submitted anonymously (unverified) on 2019-12-03 01:55:01

Question:

I have Puma running as the upstream app server and Riak as my background db cluster. When I send a request that map-reduces a chunk of data for about 25K users and returns it from Riak to the app, I get an error in the Nginx log:

upstream timed out (110: Connection timed out) while reading response header from upstream

If I query my upstream directly without nginx proxy, with the same request, I get the required data.

The Nginx timeout occurs once the proxy is put in.

**nginx.conf**

```nginx
http {
    keepalive_timeout      10m;
    proxy_connect_timeout  600s;
    proxy_send_timeout     600s;
    proxy_read_timeout     600s;
    fastcgi_send_timeout   600s;
    fastcgi_read_timeout   600s;
    include /etc/nginx/sites-enabled/*.conf;
}
```

**virtual host conf**

```nginx
upstream ss_api {
  server 127.0.0.1:3000 max_fails=0 fail_timeout=600;
}

server {
  listen 81;
  server_name xxxxx.com; # change to match your URL

  location / {
    # match the name of upstream directive which is defined above
    proxy_pass http://ss_api;

    proxy_set_header  Host $http_host;
    proxy_set_header  X-Real-IP  $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache cloud;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    proxy_cache_bypass $http_authorization;
    proxy_cache_bypass http://ss_api/account/;
    add_header X-Cache-Status $upstream_cache_status;
  }
}
```

Nginx has a bunch of timeout directives, and I don't know if I'm missing something important. Any help would be highly appreciated.

Answer 1:

I would recommend looking at the error logs, specifically at the upstream part, which shows the specific upstream that is timing out.

Then, based on that, you can adjust `proxy_read_timeout`, `fastcgi_read_timeout`, or `uwsgi_read_timeout`.

Also, make sure your config change is actually loaded (reload nginx after editing).

More details here: Nginx upstream timed out (why and how to fix)
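The error-log line itself tells you which timeout family applies: `http://` upstreams are governed by the `proxy_*` timeouts, `fastcgi://` by `fastcgi_*`, and `uwsgi://` by `uwsgi_*`. A minimal sketch of pulling that field out (the log line below is a stand-in for your real `/var/log/nginx/error.log`):

```shell
# Stand-in for one line of the nginx error log:
log_line='2019/12/03 01:55:01 [error] 123#123: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, upstream: "http://127.0.0.1:3000/"'

# Extract the upstream address; its scheme tells you which *_read_timeout to raise.
echo "$log_line" | grep -o 'upstream: "[^"]*"'
```

Against a real log you would replace the `echo` with `grep 'upstream timed out' /var/log/nginx/error.log` and pipe through `sort | uniq -c` to see which upstream fails most often.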



Answer 2:

You should generally refrain from increasing the timeouts; I doubt your backend server's response time is the issue here in any case.

I got around this issue by clearing the connection keep-alive flag and specifying the HTTP version, as per the answer here: https://stackoverflow.com/a/36589120/479632

```nginx
server {
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;

        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_pass http://localhost:5000;
    }
}
```

Unfortunately I can't explain why this works, and I didn't manage to decipher it from the docs mentioned in the linked answer either, so if anyone has an explanation I'd be very interested to hear it.



Answer 3:

In your case, a little optimization in the proxy helps; you can use the following "time out settings":

```nginx
location / {
    # time out settings
    proxy_connect_timeout 159s;
    proxy_send_timeout   600;
    proxy_read_timeout   600;
    proxy_buffer_size    64k;
    proxy_buffers     16 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_pass_header Set-Cookie;
    proxy_redirect     off;
    proxy_hide_header  Vary;
    proxy_set_header   Accept-Encoding '';
    proxy_ignore_headers Cache-Control Expires;
    proxy_set_header   Referer $http_referer;
    proxy_set_header   Host   $host;
    proxy_set_header   Cookie $http_cookie;
    proxy_set_header   X-Real-IP  $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```


Answer 4:

First figure out which upstream is slow by consulting the nginx error log file, and adjust the read timeout accordingly. In my case it was FastCGI:

```
2017/09/27 13:34:03 [error] 16559#16559: *14381 upstream timed out (110: Connection timed out) while reading response header from upstream, client: "xxxxxxxxxxxxxxxxxxxxxxxxx", upstream: "fastcgi://unix:/var/run/php/php5.6-fpm.sock", host: "xxxxxxxxxxxxxxx", referrer: "xxxxxxxxxxxxxxxxxxxx"
```

So I had to adjust the `fastcgi_read_timeout` in my server configuration:

```nginx
# ...
location ~ \.php$ {
    fastcgi_read_timeout 240;
    # ...
}
# ...
```

See: original post



Answer 5:

I think this error can happen for various reasons, and it can be specific to the module you're using. For example, I saw this when using the uwsgi module, so I had to set `uwsgi_read_timeout`.



Answer 6:

This happens because your upstream takes too long to answer the request, and nginx thinks the upstream has already failed to process it, so it responds with an error. Just include and increase `proxy_read_timeout` in the location block. The same thing happened to me, and I used a one-hour timeout for an internal app at work:

```nginx
proxy_read_timeout 3600;
```

With this, NGINX will wait for an hour for its upstream to return something.
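For context, a sketch of where the directive goes (the upstream name is a placeholder; per the nginx docs, the value is in seconds by default and bounds the gap between two successive reads from the upstream, not the whole response):

```nginx
location / {
    proxy_pass         http://slow_internal_app;  # placeholder upstream
    proxy_read_timeout 3600;                      # wait up to an hour between reads
}
```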



Answer 7:

Might be worth a look: http://howtounix.info/howto/110-connection-timed-out-error-in-nginx (he put the `proxy_read_timeout` in the location block).



Answer 8:

In our case, it was caused by using SPDY with the proxy cache. When the cache expires, we get this error until the cache has been updated.



Answer 9:

I had the same problem, and it turned out to be an everyday error in a Rails controller. I don't know why, but in production, Puma runs into the error again and again, causing the message:

upstream timed out (110: Connection timed out) while reading response header from upstream

Probably because nginx tries to get the data from Puma again and again. The funny thing is that the error caused the timeout message even when I called a different action in the controller, so a single typo blocked the whole app.

Check your log/puma.stderr.log file to see if that is the situation.
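A quick way to spot this pattern is a repeated identical exception in Puma's stderr log. A sketch (the `NameError` lines below are a hypothetical stand-in for the contents of `log/puma.stderr.log`):

```shell
# Stand-in for a few lines of log/puma.stderr.log:
puma_log='#<NameError: undefined local variable>
#<NameError: undefined local variable>
#<NameError: undefined local variable>'

# A burst of identical exceptions, rather than a genuinely slow query,
# can be the real cause behind nginx's upstream-timeout message:
echo "$puma_log" | grep -c 'NameError'
```

Against a real deployment you would replace the `echo` with `tail -n 200 log/puma.stderr.log` and grep for your app's exception class.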


