Flask and Tornado Application does not handle multiple concurrent requests

Submitted by 久未见 on 2019-12-04 06:18:44

Question


I am running a simple Flask app with Tornado, but the view only handles one request at a time. How can I make it handle multiple concurrent requests?

The workaround I'm using is to fork and use multiple processes to handle requests (a sketch of that is shown after the code below), but I don't like that solution.

# flasky.py -- the Flask application imported by the Tornado script below
from flask import Flask

app = Flask(__name__)

@app.route('/flask')
def hello_world():
    return 'This comes from Flask ^_^'

# Tornado entry point: serve the Flask app through Tornado's WSGIContainer
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler, RequestHandler, Application
from flasky import app

class MainHandler(RequestHandler):
    def get(self):
        self.write("This message comes from Tornado ^_^")

tr = WSGIContainer(app)

application = Application([
    (r"/tornado", MainHandler),
    (r".*", FallbackHandler, dict(fallback=tr)),
])

if __name__ == "__main__":
    application.listen(8000)
    IOLoop.instance().start()
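For reference, the forking workaround I mentioned looks roughly like this. This is only a sketch, assuming Tornado 4.x's HTTPServer API; the port and process count are illustrative:

# Sketch of the forking workaround (Tornado 4.x-style API assumed)
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

server = HTTPServer(application)   # "application" is the Tornado Application above
server.bind(8000)
server.start(0)                    # fork one child process per CPU core
IOLoop.instance().start()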

Answer 1:


The immediate answer is that you should use a dedicated WSGI server, such as uWSGI or Gunicorn, and configure it to use multiple workers. Do not use Tornado as a WSGI server.
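A minimal sketch of what that could look like with Gunicorn, assuming the Flask module is named flasky as in your code (the bind address and worker count here are just illustrative):

# gunicorn.conf.py -- illustrative values only
bind = "0.0.0.0:8000"   # same port your Tornado example listens on
workers = 4             # number of worker processes; tune for your machine

# start it with:  gunicorn -c gunicorn.conf.py flasky:app

With the default synchronous worker, each worker is a separate process, so four workers can serve four requests at the same time.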


Your fix of spawning processes is correct inasmuch as using WSGI with Tornado is "correct". WSGI is a synchronous protocol: one worker handles one request at a time. Flask doesn't know about Tornado, so it can't play nicely with it by using coroutines: handling the request happens synchronously.

Tornado has a big warning about this exact situation in its docs:

WSGI is a synchronous interface, while Tornado’s concurrency model is based on single-threaded asynchronous execution. This means that running a WSGI app with Tornado’s WSGIContainer is less scalable than running the same app in a multi-threaded WSGI server like gunicorn or uwsgi. Use WSGIContainer only when there are benefits to combining Tornado and WSGI in the same process that outweigh the reduced scalability.

In other words: to handle more concurrent requests with a WSGI application, spawn more workers. The type of worker also matters: threads vs. processes vs. eventlets all have tradeoffs. You're spawning workers by creating processes yourself, but it's more common to use a WSGI server such as uWSGI or Gunicorn.
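For example, sticking with the hypothetical gunicorn.conf.py sketched above, switching worker types is a one-line change (the gthread class and thread count are illustrative, not something from your setup):

# gunicorn.conf.py -- choosing a worker type; values are illustrative
worker_class = "gthread"   # threaded workers: each process serves several requests
threads = 4                # threads per worker process
# async worker classes such as "gevent" exist too, but need the matching library installed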



Source: https://stackoverflow.com/questions/39644247/flask-and-tornado-applciation-does-not-handle-multiple-concurrent-requests
