Requests not being distributed across gunicorn workers


Question


I'm trying to write an app using Tornado, with gunicorn managing the worker processes. I've created the code shown below, but despite starting multiple workers, the requests aren't being distributed: one worker processes all of them, all of the time (it's not intermittent).

Code:

from tornado.web import RequestHandler, asynchronous, Application
from tornado.ioloop import IOLoop

import time
from datetime import timedelta
import os

# (asynchronous, IOLoop and timedelta are unused here; they are left over
# from the add_timeout experiment mentioned below.)

class MainHandler(RequestHandler):
    def get(self):
        # Block for 3 seconds so that concurrent requests overlap,
        # and report which worker process handled each one.
        print "GET start"
        print "pid: " + str(os.getpid())
        time.sleep(3)
        self.write("Hello, world.<br>pid: " + str(os.getpid()))
        print "GET finish"

app = Application([
    (r"/", MainHandler)
])
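
The log below shows gunicorn's tornado worker class with four workers; a launch command along these lines reproduces it (the module name app:app is my assumption, matching the file layout above):

gunicorn -k tornado -w 4 app:app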

Output in console (I refreshed 3 browser tabs well within the 3-second window, yet they all used the same process and ran sequentially):

2014-04-12 20:57:52 [30465] [INFO] Starting gunicorn 18.0
2014-04-12 20:57:52 [30465] [INFO] Listening at: http://127.0.0.1:8000 (30465)
2014-04-12 20:57:52 [30465] [INFO] Using worker: tornado
2014-04-12 20:57:52 [30474] [INFO] Booting worker with pid: 30474
2014-04-12 20:57:52 [30475] [INFO] Booting worker with pid: 30475
2014-04-12 20:57:52 [30476] [INFO] Booting worker with pid: 30476
2014-04-12 20:57:52 [30477] [INFO] Booting worker with pid: 30477
GET start
pid: 30474
GET finish
GET start
pid: 30474
GET finish
GET start
pid: 30474
GET finish

I've tried using IOLoop.add_timeout with the @asynchronous decorator as well; it was no better. From reading around, I started to suspect that gunicorn might somehow be inspecting the handler and treating the asynchronous decorator as permission to funnel everything down one worker, so I reverted to the synchronous version shown here. Just for my sanity's sake, I've pastebinned the unedited version of what I've done.
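
For reference, that variant looked roughly like this (a sketch of the pre-Tornado-4 callback pattern; the helper name _finish_request is illustrative, not my exact pastebinned code):

class MainHandler(RequestHandler):
    @asynchronous
    def get(self):
        # @asynchronous keeps the connection open until finish() is called;
        # add_timeout schedules the callback on the IOLoop without blocking.
        print "GET start"
        IOLoop.instance().add_timeout(timedelta(seconds=3), self._finish_request)

    def _finish_request(self):
        self.write("Hello, world.<br>pid: " + str(os.getpid()))
        self.finish()
        print "GET finish"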

In summary, why isn't gunicorn distributing my requests across workers?


Answer 1:


Well, apparently the browser is causing this. By looking at Wireshark, I determined that Firefox, at least (and I assume Chrome does the same), serializes requests to the same URL, presumably so that if the response turns out to be cacheable it can simply be reused.
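
One way to confirm this is to take the browser out of the picture and fire concurrent requests over separate connections; a minimal sketch (assuming the server from the question is listening on 127.0.0.1:8000):

import threading
import urllib2

def fetch():
    # Each thread opens its own connection, so gunicorn is free to
    # hand the requests to different workers.
    print urllib2.urlopen("http://127.0.0.1:8000/").read()

threads = [threading.Thread(target=fetch) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

With independent connections, the pids in the responses should vary and the requests should overlap instead of running back to back.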



Source: https://stackoverflow.com/questions/23038678/requests-not-being-distributed-across-gunicorn-workers
