Question
I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote db) and then sends it as a download in the browser. Flask doesn't throw any errors. uWSGI doesn't complain.
But when I check nginx's error.log I see a lot of:
2014/12/10 05:06:24 [error] 14084#0: *239436 upstream prematurely closed connection while reading response header from upstream, client: 34.34.34.34, server: me.com, request: "GET /download/export.csv HTTP/1.1", upstream: "uwsgi://0.0.0.0:5002", host: "me.com", referrer: "https://me.com/download/export.csv"
I start uWSGI like this:
uwsgi --socket 0.0.0.0:5002 --buffer-size=32768 --module server --callable app
My nginx config:
server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
server {
    listen 443;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
Is this an nginx or uwsgi issue, or both?
Answer 1:
Change nginx.conf to include
sendfile on;
client_max_body_size 20M;
keepalive_timeout 0;
See the self-answer uwsgi upstart on amazon linux for a full example.
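For context, a rough sketch of where those directives sit, at the http level of nginx.conf (the server block details are borrowed from the question, with the uwsgi_pass address simplified to loopback; this is not the linked answer's exact config):
http {
    sendfile on;
    client_max_body_size 20M;
    keepalive_timeout 0;

    server {
        listen 80;
        server_name me.com;
        location / { try_files $uri @app; }
        location @app {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:5002;
        }
    }
}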
Answer 2:
As mentioned by @mahdix, the error can be caused by Nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for http packets.
When in the Nginx config you have something like:
upstream org_app {
    server 10.0.9.79:9597;
}

location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}
Nginx will use the uwsgi protocol. But if in uwsgi.ini you have something like this (or its equivalent on the command line):
http-socket=:9597
uwsgi will speak HTTP, and the error mentioned in the question appears. See native HTTP support.
A possible fix is to have instead:
socket=:9597
In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.
Side note: if Nginx and uwsgi are in the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
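A minimal sketch of that Unix-socket variant, with a hypothetical socket path (/tmp/uwsgi.sock) and permissions you would adjust to your deployment. In uwsgi.ini:
[uwsgi]
module = server
callable = app
socket = /tmp/uwsgi.sock
chmod-socket = 664
And in the nginx location:
location @app {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
}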
Answer 3:
In my case, the problem was that Nginx was sending requests with the uwsgi protocol while uWSGI was listening on that port for HTTP packets. So I had to either change the way Nginx connects to uWSGI, or change uWSGI to listen using the uwsgi protocol.
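Concretely, either side can be changed so that both speak the same protocol. A rough sketch of the two options, reusing the port from the question (only one pairing is needed; the addresses are illustrative):
Option 1 - keep nginx's uwsgi_pass, and make uWSGI speak the uwsgi protocol:
uwsgi --socket 127.0.0.1:5002 --module server --callable app
location @app { include uwsgi_params; uwsgi_pass 127.0.0.1:5002; }
Option 2 - keep uWSGI speaking HTTP, and make nginx proxy plain HTTP instead:
uwsgi --http-socket 127.0.0.1:5002 --module server --callable app
location @app { proxy_pass http://127.0.0.1:5002; }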
Answer 4:
Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or, better, use Unix sockets.
Answer 5:
It seems many causes can stand behind this error message. I know you are using uwsgi_pass, but for those having the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
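A rough sketch of that combination, with illustrative values (the port and the 300-second timeout are assumptions, not recommendations). In uwsgi.ini, with uWSGI serving HTTP behind nginx's proxy_pass:
[uwsgi]
http = :8080
http-timeout = 300
And on the nginx side, raising the matching read timeout for long requests:
location / {
    proxy_pass http://127.0.0.1:8080;
    # nginx's own read timeout should be at least as long
    proxy_read_timeout 300;
}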
Answer 6:
I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the environment's EC2 instance, the upstream configuration looks like:
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}
With this default upstream, a simple load test like:
siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'
...run against the EC2 instance led to availability of ~70%. The rest were 502 errors caused by upstream prematurely closed connection while reading response header from upstream.
The solution was either to remove the keepalive setting from the upstream configuration, or (easier and more reasonable) to enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
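Roughly, the keep-alive-on-both-sides variant could look like this (the proxy_http_version / Connection lines are the usual prerequisites for nginx to actually reuse upstream connections; the uWSGI command is an assumed example, not the poster's exact invocation):
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}

server {
    location / {
        proxy_pass http://docker;
        # needed so nginx keeps upstream connections open for reuse
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
And uWSGI started with keep-alive enabled:
uwsgi --http-socket :8080 --http-keepalive --module server --callable app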
Answer 7:
I fixed this issue by passing the socket-timeout = 65 option in uwsgi (in the uwsgi.ini file) or --socket-timeout=65 (on the uwsgi command line). You may need to try different values depending on your web traffic; socket-timeout = 65 in the uwsgi.ini file worked in my case.
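In ini form that might look like the following (the timeout value is copied from this answer, the socket line just mirrors the question's port; tune both to your setup):
[uwsgi]
socket = :5002
socket-timeout = 65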
Source: https://stackoverflow.com/questions/27396248/uwsgi-nginx-flask-upstream-prematurely-closed