Load balancing WebSockets

Submitted by 半腔热情 on 2019-11-28 15:07:30

Put an L3 load balancer that distributes IP packets based on a hash of the source IP and port in front of your WebSocket server farm. Because the balancer keeps no state (the source IP/port hash decides the target), it will scale to wire speed on low-end hardware (say, 10 GbE). And because the distribution is deterministic, it works with TCP, and hence with WebSocket.
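
For illustration, here is a minimal Node.js sketch of the hashing idea, assuming a hypothetical list of backend addresses; a real L3/L4 balancer (for example LVS/IPVS with source hashing) does this at packet level rather than in application code:

// Illustrative only: deterministic backend selection by hashing source IP:port.
var crypto = require('crypto');

// Hypothetical backend pool; replace with your WebSocket server addresses.
var backends = ['10.0.0.1:9000', '10.0.0.2:9000', '10.0.0.3:9000'];

function pickBackend(srcIp, srcPort) {
  // The same source IP:port always hashes to the same backend,
  // so no per-connection state needs to be kept.
  var hash = crypto.createHash('md5').update(srcIp + ':' + srcPort).digest();
  return backends[hash.readUInt32BE(0) % backends.length];
}

console.log(pickBackend('198.51.100.7', 52344)); // always maps to the same backend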

Also note that the 64k hard limit applies only to outgoing TCP connections from a given (source) IP address; it does not apply to incoming connections. We have tested Autobahn (a high-performance WebSocket server) with 200k active connections on a 2-core, 4 GB RAM VM.

Also note that you can do L7 load balancing on the HTTP path announced during the initial WebSocket handshake. In that case the load balancer has to maintain state (which source IP/port pair is going to which backend node), but it will probably still scale to millions of connections on a decent setup.
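
As a rough sketch of path-based routing in Node.js (using the node-http-proxy package; the path-to-backend mapping here is made up for illustration):

var http = require('http');
var httpProxy = require('http-proxy');

// Hypothetical mapping of handshake paths to backend nodes.
var targets = {
  '/chat': 'http://10.0.0.1:9000',
  '/feed': 'http://10.0.0.2:9000'
};

var proxy = httpProxy.createProxyServer({});

var server = http.createServer(function (req, res) {
  // Ordinary HTTP requests.
  proxy.web(req, res, { target: targets[req.url] || 'http://10.0.0.1:9000' });
});

// The WebSocket handshake arrives as an HTTP Upgrade request; route it by path.
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, { target: targets[req.url] || 'http://10.0.0.1:9000' });
});

server.listen(8080);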

Disclaimer: I am the original author of Autobahn and work for Tavendo.

Note that if your WebSocket server logic runs on Node.js with Socket.IO, you can tell Socket.IO to use a shared Redis key/value store for synchronization. This way you don't even have to care about the load balancer: events will propagate among the server instances.

var io = require('socket.io')(3000);                  // start a Socket.IO server on port 3000
var redis = require('socket.io-redis');               // Redis adapter for multi-node setups
io.adapter(redis({ host: 'localhost', port: 6379 })); // propagate events via a shared Redis instance

See: http://socket.io/docs/using-multiple-nodes/

But at some point I guess Redis can become the bottleneck...

You can also achieve layer 7 load balancing with inspection and "routing functionality".

See "How to inspect and load-balance WebSockets traffic using Stingray Traffic Manager, and when necessary, how to manage WebSockets and HTTP traffic that is received on the same IP address and port." https://splash.riverbed.com/docs/DOC-1451
