The proper way to scale a Python Tornado application

Submitted by 孤人 on 2019-12-10 21:35:53

Question


I am looking for a way to scale a single instance of a Tornado application to many. I have 5 servers and want to run 4 instances of the application on each. The main issue I don't know how to resolve is how to make the instances communicate with each other in the right way. I see the following approaches:

  • Use memcached for sharing data. I don't think this approach is good, because a lot of traffic would go to the server running memcached, so there could be traffic-related issues in the future.
  • Open sockets between each pair of instances. For me, that kind of communication would be too hard to maintain.
  • Use tools like ZeroMQ. I am not familiar with this technology. Can it be a way to scale the application across servers? (A rough sketch of what that might look like follows this list.)
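For the ZeroMQ option, here is a minimal sketch of how instances could broadcast messages to each other with pyzmq; the addresses, ports and message text are placeholders I have assumed, not anything from the question:

    # Minimal pub/sub sketch with pyzmq (pip install pyzmq).
    # Every value here (ports, host names, message text) is a placeholder.
    import zmq

    def make_publisher(bind_addr="tcp://*:5556"):
        # Each instance binds a PUB socket that the other instances subscribe to.
        ctx = zmq.Context.instance()
        pub = ctx.socket(zmq.PUB)
        pub.bind(bind_addr)
        return pub

    def make_subscriber(peer_addrs=("tcp://server1:5556", "tcp://server2:5556")):
        # A SUB socket can connect to many publishers at once.
        ctx = zmq.Context.instance()
        sub = ctx.socket(zmq.SUB)
        for addr in peer_addrs:
            sub.connect(addr)
        sub.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filter
        return sub

    # One instance: make_publisher().send_string("invalidate user:42")
    # The others: make_subscriber().recv_string() inside their event loop.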

Answer 1:


I'm actually looking at something similar, and the thought I have come up with is this: use the Python multiprocessing module (http://docs.python.org/library/multiprocessing.html) to link the processes together on each individual server, then use a memcached server for session-specific data (session IDs, IP information, anything used to tie a session to a specific user and to their thread of activity). The rest of the data is driven from a DB instance.
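As a rough illustration of the memcached part of this idea, the sketch below stores session-specific data from a Tornado handler; the pymemcache client, the key scheme and the handler itself are my assumptions, not part of the answer:

    # Hedged sketch: keep session data in memcached so any worker process
    # can read it. Assumes pymemcache (pip install pymemcache) and a
    # memcached server on 127.0.0.1:11211; key names are made up.
    import json
    import tornado.web
    from pymemcache.client.base import Client

    memcache = Client(("127.0.0.1", 11211))

    class SessionHandler(tornado.web.RequestHandler):
        def get(self):
            session_id = self.get_cookie("session_id")
            raw = memcache.get(f"session:{session_id}") if session_id else None
            self.write(json.loads(raw) if raw else {})

        def post(self):
            session_id = self.get_cookie("session_id") or "demo-session"
            session = {"ip": self.request.remote_ip,
                       "user": self.get_argument("user", "")}
            memcache.set(f"session:{session_id}", json.dumps(session), expire=3600)
            self.set_cookie("session_id", session_id)
            self.write({"stored": True})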




Answer 2:


What you could do is run a memcached instance and a Tornado instance on each server. Make the memcached instances replicate to each other in master-master mode using repcached, so each Tornado instance can read memcached data from its own machine. Four servers run the Tornado and memcached instances, and the fifth runs HAProxy to load-balance the others.

www.haproxy.org/

repcached.lab.klab.org/
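To make the per-server setup concrete, here is one possible sketch (the ports, the pymemcache client and the counter key are my own assumptions): each machine runs several Tornado processes on different ports, all talking to the memcached instance on 127.0.0.1, which repcached keeps in sync with the other machines; HAProxy's backend would then list every Tornado host:port pair.

    # Assumed layout: run this as `python app.py 8001` ... `python app.py 8004`
    # on each application server; HAProxy balances across all of them.
    import sys
    import tornado.ioloop
    import tornado.web
    from pymemcache.client.base import Client

    local_cache = Client(("127.0.0.1", 11211))  # kept in sync by repcached

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            # Shared state lives in memcached, so any process on any server
            # sees the same value.
            hits = int(local_cache.get("hits") or 0) + 1
            local_cache.set("hits", str(hits))
            self.write(f"hit #{hits} from port {sys.argv[1]}")

    if __name__ == "__main__":
        port = int(sys.argv[1])
        app = tornado.web.Application([(r"/", MainHandler)])
        app.listen(port)
        tornado.ioloop.IOLoop.current().start()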



Source: https://stackoverflow.com/questions/8637366/the-proper-way-to-scale-python-tornado-application
