load-balancing

Clustered EJBs not being balanced in JBoss AS 7

萝らか妹 submitted on 2019-12-03 08:31:00
I've successfully set up a cluster of 2 JBoss AS 7 instances and deployed the following SLSB:

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    import org.jboss.ejb3.annotation.Clustered; // JBoss-specific clustering annotation
    import org.jboss.logging.Logger;            // logger implementation assumed

    @Stateless
    @Remote(TestEJBRemote.class)
    @Clustered
    public class TestEJB implements TestEJBRemote {

        private static final long serialVersionUID = 1L;
        private static final Logger logger = Logger.getLogger(...); // argument elided in the original post

        @Override
        public void test() {
            String nodeName = System.getProperty("jboss.node.name");
            logger.info(nodeName);
        }
    }

From the log files I can see that the bean is correctly deployed on the cluster. On the client side I then create a number of threads that look up and invoke an instance of TestEJB.
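A minimal client-side sketch of the lookup described above, assuming the standard JBoss AS 7 remote JNDI name format; the application and module names (test-app, test-module) are hypothetical, and a jboss-ejb-client.properties with the cluster configuration is assumed to be on the client classpath:

    import java.util.Properties;

    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class TestClient {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Resolve the "ejb:" namespace through the JBoss EJB client API.
            props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
            Context ctx = new InitialContext(props);

            // Name format: ejb:<app>/<module>/<distinct-name>/<bean>!<view>
            TestEJBRemote bean = (TestEJBRemote) ctx.lookup(
                    "ejb:test-app/test-module//TestEJB!" + TestEJBRemote.class.getName());
            bean.test(); // logs the node name on whichever cluster member served the call
        }
    }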

Node.js + Socket.IO scaling with redis + cluster

早过忘川 submitted on 2019-12-03 07:34:42
Question: Currently I'm faced with the task of scaling a Node.js app using Amazon EC2. From what I understand, the way to do this is to have each child server use all available processes via cluster, and to use sticky connections so that every user connecting to the server is routed back to the worker that already holds their data from previous sessions. After doing this, the next best move from what I know is to deploy as many servers as needed and use nginx to load balance
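The "sticky" part usually comes down to deriving the worker index deterministically from the client address, so repeat connections from the same client land on the same worker. A minimal sketch of that hashing rule, written in Java purely for illustration (the worker count and address handling are assumptions, not part of the question):

    // Maps a client address to a fixed worker index so that the same
    // client is always routed to the same worker (sticky routing).
    public final class StickyRouter {
        private final int workerCount;

        public StickyRouter(int workerCount) {
            this.workerCount = workerCount;
        }

        public int workerFor(String clientAddress) {
            // Math.floorMod avoids negative indices for negative hash codes.
            return Math.floorMod(clientAddress.hashCode(), workerCount);
        }
    }

Hashing the remote IP like this is the commonly described approach for keeping Socket.IO clients pinned to one cluster worker.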

Scaling with sticky sessions and websockets

心已入冬 submitted on 2019-12-03 07:32:34
Initially we had two AWS EC2 instances running node.js behind a load balancer with sticky sessions. As the load increases, more instances are added. But we are facing problems with this approach: since our application is mainly used for workshops, the load usually spikes within a short period of time (workshop start), so every workshop participant ends up with a sticky session on the first two instances while the new ones get almost none. Because of this, performance stays bad. Our first thought was to disable the sticky sessions, but that breaks our websockets, because they need sticky sessions

What is the conceptual difference between Service Discovery tools and Load Balancers that check node health?

一个人想着一个人 submitted on 2019-12-03 07:30:49
Question: Recently several service discovery tools have become popular/"mainstream", and I'm wondering under what primary use cases one should employ them instead of traditional load balancers. With LBs, you cluster a bunch of nodes behind the balancer, and clients make requests to the balancer, which then (typically) round-robins those requests across all the nodes in the cluster. With service discovery (Consul, ZooKeeper, etc.), you let a centralized "consensus" service determine what nodes for particular
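For contrast, the round-robin behavior attributed to load balancers above amounts to something like this sketch (the node list and naming are illustrative):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal round-robin balancer: each request goes to the next node in turn,
    // with no knowledge of what service a node actually provides.
    public final class RoundRobinBalancer {
        private final List<String> nodes;
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinBalancer(List<String> nodes) {
            this.nodes = nodes;
        }

        public String pick() {
            return nodes.get(Math.floorMod(next.getAndIncrement(), nodes.size()));
        }
    }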

Difference between pool and cluster

泪湿孤枕 submitted on 2019-12-03 06:29:08
From a purist's perspective, they feel like almost identical concepts. Both manage sets of resources/nodes and control access to them from or by external components. With a pool, you borrow resources/nodes from the pool and return them to it. With a cluster, a load balancer sits in front of the resources/nodes and you hit the load balancer with a request. In both cases you have absolutely no control over which resource/node your request/borrow gets mapped to. So I pose the question: what's the fundamental difference between the "pool" pattern and a load-balanced cluster? A pool is
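To make the borrow-and-return half of the comparison concrete, here is a minimal pool sketch built on a bounded blocking queue (the resource type R is a placeholder):

    import java.util.Collection;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal object pool: callers borrow a resource, use it, and return it.
    public final class Pool<R> {
        private final BlockingQueue<R> available;

        public Pool(Collection<R> resources) {
            this.available = new ArrayBlockingQueue<>(resources.size(), false, resources);
        }

        public R borrow() throws InterruptedException {
            return available.take(); // blocks until some resource is free
        }

        public void giveBack(R resource) {
            available.offer(resource);
        }
    }

The structural difference shows up here: the pool hands the resource itself to the caller for exclusive use, whereas a load balancer forwards the request and the node is never exclusively held.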

How to lock an object when using load balancing

老子叫甜甜 submitted on 2019-12-03 06:04:28
Question: Background: I'm writing a function, in C#, that puts long-running operations in a queue. Each operation is divided into roughly three steps:

1. a database operation (update/delete/add data)
2. a long calculation using a web service
3. a database operation (saving the calculation result of step 2) against the same DB table as in step 1, together with a consistency check on that table, e.g. that the items are the same as in step 1 (please see below for a more detailed example)

In order to avoid dirty data or corruption,
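The question is posed in C#; as a language-neutral illustration (Java here, matching the only code elsewhere on this page), a per-key lock is the usual way to keep steps 1-3 from interleaving for the same data. Note the caveat: with several load-balanced servers an in-process lock is not enough, and a distributed lock or a database-level lock (e.g. SELECT ... FOR UPDATE) would be needed; this sketch covers the single-process case only.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    // One lock per business key: two operations touching the same key
    // cannot interleave their step-1..step-3 sequences in this process.
    public final class KeyedLocks {
        private final ConcurrentHashMap<String, ReentrantLock> locks =
                new ConcurrentHashMap<>();

        public void withLock(String key, Runnable operation) {
            ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
            lock.lock();
            try {
                operation.run(); // steps 1-3 run without interleaving for this key
            } finally {
                lock.unlock();
            }
        }
    }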

Understanding load balancing in ASP.NET

泪湿孤枕 submitted on 2019-12-03 05:47:18
Question: I'm writing a website that is going to start using a load balancer, and I'm trying to wrap my head around it. Does IIS just do all the balancing for you? Do you have a separate web layer on the distributing server that does some work, such as auth, before forwarding to a sub-server? A lot of the articles I keep reading don't really give me a straight answer, or I'm just not understanding them correctly; I'd like to get my head around how true load balancing

How to tell uWSGI to prefer processes to threads for load balancing

江枫思渺然 submitted on 2019-12-03 05:44:28
I've installed Nginx + uWSGI + Django on a VDS with 3 CPU cores. uWSGI is configured for 6 processes with 5 threads per process. Now I want to tell uWSGI to use processes for load balancing until all processes are busy, and only then to use threads if needed. It seems uWSGI prefers threads, and I have not found any config option to change this behaviour. The first process takes over 100% CPU time, the second one takes about 20%, and the other processes are mostly unused. Our site receives 40 r/s. Actually, even having 3 processes without threads is usually enough to handle all requests. But request processing
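For reference, a minimal uwsgi.ini fragment reproducing the setup described; processes and threads are standard uWSGI options, while whether thunder-lock helps in this particular case is an untested assumption:

    [uwsgi]
    ; 6 worker processes, each running 5 threads, as described above
    processes = 6
    threads = 5
    ; thunder-lock serializes accept() across workers; it changes how
    ; incoming requests are spread over processes and may be worth testing
    thunder-lock = true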

Heuristic algorithm for load balancing among threads

五迷三道 submitted on 2019-12-03 05:18:14
Question: I'm working on a multi-threaded program where I have a number of worker threads performing tasks of unequal length. I want to load-balance the tasks to ensure that the threads do roughly the same amount of work. For each task T_i I have a number c_i which provides a good approximation of the amount of work required for that task. I'm looking for an efficient algorithm (O(N), where N is the number of tasks, or better) which will give me a "roughly" good load balance given the values of c_i. It doesn't
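One standard heuristic matching this description is greedy assignment: give each task to the currently least-loaded worker, which is O(N log k) for k workers using a heap (sorting tasks by descending c_i first gives the classic LPT variant at O(N log N)). A sketch, with the cost array standing in for the c_i values:

    import java.util.PriorityQueue;

    // Greedy load balancing: each task goes to the worker with the
    // smallest total cost so far. O(N log k) for N tasks, k workers.
    public final class GreedyBalancer {

        // Returns assignment[i] = worker index chosen for task i.
        public static int[] assign(double[] costs, int workers) {
            // Heap entries are {accumulated cost, worker index}.
            PriorityQueue<double[]> heap =
                    new PriorityQueue<>((a, b) -> Double.compare(a[0], b[0]));
            for (int w = 0; w < workers; w++) {
                heap.add(new double[] {0.0, w});
            }
            int[] assignment = new int[costs.length];
            for (int i = 0; i < costs.length; i++) {
                double[] least = heap.poll();  // least-loaded worker
                assignment[i] = (int) least[1];
                least[0] += costs[i];          // add this task's cost c_i
                heap.add(least);
            }
            return assignment;
        }
    }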

How a load balancer works in RabbitMQ

人走茶凉 submitted on 2019-12-03 05:06:29
Question: I am new to RabbitMQ, so please excuse the trivial questions:

1) In the case of clustering in RabbitMQ, if a node fails, the load shifts to another node (without stopping the other nodes). Similarly, we can also add fresh nodes to the existing cluster without stopping the existing nodes in the cluster. Is that correct?

2) Assume that we start with a single RabbitMQ node and create 100 queues on it. Now producers start sending messages at a faster rate. To handle this load, we add more nodes and make a
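On the client side, failover across cluster nodes is typically handled by giving the connection factory every node's address, so it can connect to whichever node is reachable. A minimal sketch using the RabbitMQ Java client; the hostnames are hypothetical:

    import com.rabbitmq.client.Address;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class ClusterClient {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            // The client tries each address in turn until one accepts.
            Address[] cluster = {
                new Address("rabbit-node1", 5672),
                new Address("rabbit-node2", 5672)
            };
            Connection connection = factory.newConnection(cluster);
            System.out.println("Connected to " + connection.getAddress());
            connection.close();
        }
    }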