How to get a concurrency of 1000 requests with Flask and Gunicorn [closed]

Submitted by 孤者浪人 on 2020-01-22 15:21:31

Question


I have 4 machine learning models of 2GB each, i.e. 8GB in total. I receive around 100 requests at a time, and each request takes around 1 second to serve.
My machine has 15GB of RAM. If I increase the number of Gunicorn workers, total memory consumption goes up, so I can't go beyond 2 workers.
I have a few questions about this:

  1. How can workers share the models, or memory, between them?
  2. Which worker type is suitable for this situation, sync or async?
  3. How do I use Gunicorn's preload option, if that is a solution? I tried it, but it didn't help; maybe I am using it the wrong way.

Here is the Flask code I am using:
https://github.com/rathee/learnNshare/blob/master/agent_api.py
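For reference, a minimal sketch of the kind of app in question (the model names, routes, and `load_model` stand-in below are hypothetical; the actual code is at the link above). Loading the models at module import time matters for question 3: with Gunicorn's preload option they are loaded once in the master process before workers are forked, so the read-only model pages can be shared between workers via copy-on-write.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def load_model(name):
    # Stand-in for loading a 2GB model from disk; the real loader
    # would deserialize the trained model here.
    return {"name": name}

# Loaded at import time, before Gunicorn forks workers (with --preload).
MODELS = {name: load_model(name) for name in ("a", "b", "c", "d")}

@app.route("/predict/<model_name>")
def predict(model_name):
    model = MODELS.get(model_name)
    if model is None:
        return jsonify(error="unknown model"), 404
    # Stand-in for the ~1 second inference call.
    return jsonify(model=model["name"], result=0.5)
```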


Answer 1:


Use the gevent worker (or another event-loop worker), not the default worker. The default sync worker handles one request at a time per worker process. An async worker can handle a practically unlimited number of concurrent requests per worker process, as long as each request is non-blocking.

gunicorn -k gevent myapp:app

Predictably, you need to install gevent for this: pip install gevent.
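Putting the pieces together, a sketch of a full invocation (`myapp:app` stands in for the actual module; the flags are standard Gunicorn settings, but the numbers here are assumptions to tune for the 15GB machine):

```shell
# Load the models once in the master process (--preload), then fork two
# gevent workers, each multiplexing up to 1000 concurrent connections.
gunicorn -k gevent -w 2 --worker-connections 1000 --preload myapp:app
```

Copy-on-write sharing via --preload only helps as long as the workers don't write to the model memory; if inference mutates the loaded objects, each worker's copy diverges and memory use grows again.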



Source: https://stackoverflow.com/questions/35914587/how-to-get-a-concurrency-of-1000-requests-with-flask-and-gunicorn
