Question
I have nginx server proxying requests to a few tornado instances. Each tornado instance is based on the long-polling chat demo that comes with Tornado. The script has an array that stores the callbacks, which are then used to dispatch messages back to the client.
The problem I have is that with multiple Tornado instances, nginx distributes requests round-robin. Since the callbacks are stored per instance (rather than maintained centrally), each long-poll request lands on whichever instance nginx happens to pick. As a result, when data has to be pushed, it only reaches the callbacks stored in that same Tornado instance.
Is there a standard practice for sharing data between multiple Tornado instances? I was thinking of using memcached, but then if I needed to iterate over all the keys in the store, that wouldn't be possible (although it's not something I'd need all the time). I just wanted to find out whether there is a standard practice for sharing data between multiple Python processes. I also read about mmap, but wasn't sure how it would work for storing callbacks (which are Python methods).
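The situation described above can be sketched in a few lines of plain Python (the class and method names below are illustrative, not the chat demo's exact code): each process keeps its own in-memory list of long-poll callbacks, so a message published through one instance never reaches waiters parked on another.

```python
class ChatInstance:
    """One Tornado process: waiters live in this process's memory only."""

    def __init__(self):
        self.waiters = []  # long-poll callbacks parked here

    def wait_for_messages(self, callback):
        self.waiters.append(callback)

    def new_message(self, message):
        # Only callbacks registered with *this* instance are notified.
        waiters, self.waiters = self.waiters, []
        for callback in waiters:
            callback(message)


# Two instances behind nginx round-robin:
a, b = ChatInstance(), ChatInstance()
received_a, received_b = [], []
a.wait_for_messages(received_a.append)  # client 1 landed on instance a
b.wait_for_messages(received_b.append)  # client 2 landed on instance b

a.new_message("hello")  # published via instance a
# received_a is ["hello"]; received_b stays [] -- client 2 never sees it
```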
Answer 1:
There is no ready-made recipe; you can use mmap, a message broker like RabbitMQ, or a simple NoSQL store like Redis. In your case I would perhaps try ZeroMQ.
Answer 2:
If it's a "chat" style application, you might be better off looking at Redis and the Pub/Sub handling that is implemented there.
This is a good question that was asked about pub/sub: What is the proper way to handle Redis connection in Tornado? (Async - Pub/Sub)
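The architecture this answer suggests can be sketched without a live Redis server. Below, an in-process `Broker` class stands in for Redis pub/sub (an assumption for illustration; in production each Tornado process would instead hold a real Redis subscription, e.g. via a redis-py `pubsub()` connection). The key change from the chat demo is that `new_message` publishes to the shared channel rather than dispatching to local waiters, so every instance's waiters get the message no matter which instance received the POST.

```python
class Broker:
    """Stand-in for the Redis server: fans every published message out
    to all subscribed processes, regardless of which one published it."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        for handler in self.subscribers:
            handler(message)


class ChatInstance:
    """One Tornado process: still keeps local waiters, but routes new
    messages through the shared broker instead of dispatching locally."""

    def __init__(self, broker):
        self.waiters = []
        self.broker = broker
        broker.subscribe(self.on_broker_message)

    def wait_for_messages(self, callback):
        self.waiters.append(callback)

    def new_message(self, message):
        self.broker.publish(message)  # reaches every instance

    def on_broker_message(self, message):
        # Fired in each process when the broker delivers a message.
        waiters, self.waiters = self.waiters, []
        for callback in waiters:
            callback(message)


broker = Broker()
a, b = ChatInstance(broker), ChatInstance(broker)
received_a, received_b = [], []
a.wait_for_messages(received_a.append)
b.wait_for_messages(received_b.append)

a.new_message("hello")  # published on instance a, delivered on both
```

Note that the callbacks themselves never leave their own process; only the message payload crosses the channel, which is why pub/sub sidesteps the question of serializing Python methods into a shared store.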
Source: https://stackoverflow.com/questions/10361018/sharing-data-between-multiple-tornado-instances