Sharing data between multiple tornado instances

Submitted by 我与影子孤独终老i on 2020-01-06 08:27:07

Question


I have nginx server proxying requests to a few tornado instances. Each tornado instance is based on the long-polling chat demo that comes with Tornado. The script has an array that stores the callbacks, which are then used to dispatch messages back to the client.

The problem I have is that when there are multiple tornado instances, nginx uses a round-robin strategy, so each long-polling request lands on an arbitrary instance. Since the callbacks are stored per instance (and not maintained centrally), when data has to be pushed, it only reaches the callbacks stored in the same tornado instance that received the push request.

Is there a standard practice for storing data between multiple tornado instances? I was thinking of using memcached, but then if I need to iterate all the keys in the store, that wouldn't be possible (although it's not something that I'd need all the time). I just wanted to find out if there is a standard practice for storing data between multiple Python processes. I also read about mmap but wasn't sure how it would work with storing callbacks (which are Python methods).
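The failure mode described above can be sketched in a few lines: each process keeps its own waiter list, so a message published on one instance never reaches clients parked on another. The class and names below are illustrative stand-ins, not code from the Tornado chat demo.

```python
# Two "instances" behind round-robin proxying, each with a private
# waiter list. A push that lands on one instance only fires that
# instance's callbacks.

class ChatInstance:
    def __init__(self):
        self.waiters = []          # long-poll callbacks local to this process

    def wait_for_messages(self, callback):
        self.waiters.append(callback)

    def new_message(self, message):
        # Only this instance's waiters are notified.
        for cb in self.waiters:
            cb(message)
        self.waiters = []

received = []
a, b = ChatInstance(), ChatInstance()   # two tornado processes behind nginx
a.wait_for_messages(lambda m: received.append(("a", m)))
b.wait_for_messages(lambda m: received.append(("b", m)))

a.new_message("hello")                  # the POST happened to land on instance a
# The waiter parked on b never fires: received == [("a", "hello")]
```

The client parked on instance `b` simply stays parked, which is exactly the symptom the question describes.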


Answer 1:


There is no ready-made recipe: you could use mmap, a message broker like RabbitMQ, or a simple NoSQL store like Redis. In your case I would try ZeroMQ, maybe.
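The common thread in these suggestions is routing every publish through a shared broker instead of a per-process list. A minimal in-process stand-in for that pattern, with all names illustrative (in production the `Broker` role would be played by Redis, RabbitMQ, or a ZeroMQ PUB socket shared across the real processes):

```python
# Broker pattern sketch: every instance subscribes to one shared broker,
# and a publish fans out to every instance's waiters, not just the
# publisher's own.

class Broker:
    def __init__(self):
        self.subscribers = []            # one delivery hook per tornado instance

    def subscribe(self, on_message):
        self.subscribers.append(on_message)

    def publish(self, message):
        for deliver in self.subscribers:
            deliver(message)

class ChatInstance:
    def __init__(self, broker):
        self.waiters = []
        self.broker = broker
        broker.subscribe(self.deliver)   # every instance hears every publish

    def wait_for_messages(self, callback):
        self.waiters.append(callback)

    def new_message(self, message):
        self.broker.publish(message)     # goes through the broker, not self.waiters

    def deliver(self, message):
        for cb in self.waiters:
            cb(message)
        self.waiters = []

received = []
broker = Broker()
a, b = ChatInstance(broker), ChatInstance(broker)
a.wait_for_messages(lambda m: received.append(("a", m)))
b.wait_for_messages(lambda m: received.append(("b", m)))
a.new_message("hello")                   # lands on a, but BOTH waiters fire
```

Replacing the in-process `Broker` with a network broker is what makes the fan-out work across separate tornado processes.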




Answer 2:


If it's a "chat" style application, you might be better off looking at Redis and the Pub/Sub handling that is implemented there.

This is a good question that was asked about pub/sub: What is the proper way to handle Redis connection in Tornado? (Async - Pub/Sub)



Source: https://stackoverflow.com/questions/10361018/sharing-data-between-multiple-tornado-instances
