load-balancing

Does Node.js support load balancing across multiple servers?

丶灬走出姿态 submitted on 2019-11-29 06:53:14
I'm curious about horizontal scaling in Node.js. Is it possible to load balance across multiple virtual servers, like Rackspace cloud servers? I read about the cluster plugin, but I think it is only for a single server with a multi-core CPU. Try roundrobin.js for node-http-proxy:

    var httpProxy = require('http-proxy');
    //
    // A simple round-robin load balancing strategy.
    //
    // First, list the servers you want to use in your rotation.
    //
    var addresses = [
      { host: 'ws1.0.0.0', port: 80 },
      { host: 'ws2.0.0.0', port: 80 }
    ];
    httpProxy.createServer(function (req, res, proxy) {
      //
      // On each request, get the …
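The excerpt above is cut off inside the request handler. Purely as an illustration of the round-robin idea, here is a minimal sketch of how the classic node-http-proxy 0.x callback is typically completed (the host names are the placeholders from the excerpt, proxyRequest is the old 0.x API, and port 8000 is an arbitrary choice):

    var httpProxy = require('http-proxy');

    // The rotation list from the excerpt (placeholder hosts).
    var addresses = [
      { host: 'ws1.0.0.0', port: 80 },
      { host: 'ws2.0.0.0', port: 80 }
    ];

    httpProxy.createServer(function (req, res, proxy) {
      // Take the first server from the rotation...
      var target = addresses.shift();

      // ...forward the current request to it...
      proxy.proxyRequest(req, res, target);

      // ...and push it to the back of the list, so the next
      // request goes to the next server in the rotation.
      addresses.push(target);
    }).listen(8000);

Each request cycles through the address list in order; a dead backend is not detected here, so health checking would have to be layered on top.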

Elastic Load Balancing both internal and internet-facing

戏子无情 submitted on 2019-11-29 06:21:18
We are trying to use Elastic Load Balancing in AWS with auto-scaling so we can scale in and out as needed. Our application consists of several smaller applications; they are all on the same subnet and in the same VPC. We want to put our ELB between one of our apps and the rest. The problem is that we want the load balancer to work both internally, between the different apps via an API, and also to be internet-facing, because our application still has some usage that should happen externally rather than through the API. I've read this question, but I could not figure out exactly how to do it from there; it does not …
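An ELB is created as either internal or internet-facing, so one common pattern (a hedged sketch, not necessarily what the asker ended up doing; the names and subnet ID below are hypothetical) is to create one load balancer of each scheme in front of the same instances:

    # Internal ELB for app-to-app API traffic inside the VPC.
    aws elb create-load-balancer \
        --load-balancer-name app-internal-lb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --subnets subnet-aaaa1111 \
        --scheme internal

    # Internet-facing ELB for external users (internet-facing is the default scheme).
    aws elb create-load-balancer \
        --load-balancer-name app-public-lb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --subnets subnet-aaaa1111

Both load balancers can then be attached to the same Auto Scaling group so new instances register with each automatically.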

How to configure dotNetOpenId in a sessionless load-balancing environment

假如想象 submitted on 2019-11-29 03:15:40
Question: You've probably solved this before. I need to be able to use OpenID in an environment that does not have session stickiness. The servers do preserve the headers. I'm using ASP.NET MVC and dotNetOpenId version 3.2.0.9177. Although the authentication on the third-party web site goes without a hitch, when the response comes back I get an error and authentication fails. Any thoughts? Answer 1: Stateful: the most optimized method is to write a custom persistence store that implements …

Enabling sticky sessions on a load balancer

孤者浪人 submitted on 2019-11-29 02:37:27
Any advice on this one would be greatly appreciated; I've been researching all morning and I'm still scratching my head. I started at a new company a few weeks ago, where I'm the only .NET developer, as the development was originally done by an outsourcing company, and I've been asked to research this. My knowledge of the existing system is extremely limited, but from what I can gather the situation is as follows. We would like to enable sticky sessions on an ASP.NET web site. From my research, I have gathered that I need to do the following steps. We are using the ASP.NET State Service. The setup is a load …
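Since the excerpt mentions the ASP.NET State Service, here is a minimal web.config sketch of the out-of-process session setup that every server behind the load balancer would share (the host name is a placeholder; 42424 is the State Service's default port):

    <configuration>
      <system.web>
        <!-- Point every web server at the same out-of-process state service,
             so a session survives being routed to a different node. -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=state-server-host:42424"
                      timeout="20" />
        <!-- The machineKey (validationKey/decryptionKey) must also be identical
             on all servers so forms-auth and viewstate tokens validate everywhere. -->
      </system.web>
    </configuration>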

How to specify static IP address for Kubernetes load balancer?

大憨熊 submitted on 2019-11-28 22:51:44
I have a Kubernetes cluster running on Google Compute Engine and I would like to assign static IP addresses to my external services (type: LoadBalancer). I am unsure whether this is possible at the moment or not. I found the following sources on the topic: the Kubernetes Service documentation lets you define an external IP address, but it fails with "cannot unmarshal object into Go value of type []v1.LoadBalancerIngress"; the publicIPs field seems to let me specify external IPs, but it doesn't seem to work either; and this GitHub issue states that what I'm trying to do is not supported yet, but …
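For anyone hitting this today: newer Kubernetes releases expose a spec.loadBalancerIP field for exactly this case. A minimal sketch, assuming a static external address has already been reserved in GCE (the service name, selector, and IP below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: LoadBalancer
      # Placeholder: a regional static IP reserved beforehand,
      # e.g. with `gcloud compute addresses create`.
      loadBalancerIP: 203.0.113.10
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080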

How can I modify the Load Balancing behavior Jenkins uses to control slaves? [closed]

冷暖自知 submitted on 2019-11-28 19:44:58
We use Jenkins for our CI build system. We also use 'concurrent builds' so that Jenkins will build each change independently. This means we often have 5 or 6 builds of the same job running simultaneously. To accommodate this, we have 4 slaves, each with 12 executors. The problem is that Jenkins doesn't really 'load balance' among its slaves. It tries to build a job on the same slave that it previously built on (presumably to reduce the time spent syncing from source control). This is a problem because Jenkins will build all 6 instances of our build on the same slave (or, more likely, spread across only 2 slaves).

Load balancing a web application

China☆狼群 submitted on 2019-11-28 17:06:05
There are load-balanced Tomcat web servers, and every request could be served by a different Tomcat server. How can we take care of this while writing code for a J2EE (Struts) based web application? First of all, you'll want to set up your load balancer for session affinity/sticky sessions, so that it keeps forwarding requests to the same Tomcat (as long as it is up) based on the JSESSIONID. The Tomcat clustering doc states two important requirements for your application to successfully have its sessions replicated: all your session attributes must implement java.io.Serializable, and make sure …
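The excerpt is cut off at the second requirement; for context, Tomcat's clustering documentation also asks for the <distributable/> marker in web.xml so the container knows the webapp's sessions may be replicated. A minimal sketch:

    <!-- web.xml -->
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
      <!-- Declares that this webapp's sessions can be distributed
           across the cluster; session attributes must stay Serializable. -->
      <distributable/>
    </web-app>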

Load balancing web sockets

半腔热情 submitted on 2019-11-28 15:07:30
I have a question about how to load balance web sockets. I have a server that supports web sockets. Browsers connect to my site and each one opens a web socket to www.mydomain.com. That way, my social network app can push messages to the clients. Traditionally, using just HTTP requests, I would scale up by adding a second server and a load balancer in front of the two web servers. With web sockets, the connection has to be made directly with the web server, not the load balancer, because if a machine has a physical limit of, say, 64k open ports, and the clients were connecting to the load balancer …
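One common way to load balance WebSockets is to keep the balancer in the path but have it pass the WebSocket upgrade through and pin each client to a backend. A hedged nginx sketch (nginx is not mentioned in the excerpt; the backend addresses are placeholders, ip_hash is only one way to get stickiness, and the blocks go inside nginx's http context):

    upstream ws_backend {
        ip_hash;                    # same client IP -> same backend
        server 10.0.0.1:8080;       # placeholder backends
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://ws_backend;
            proxy_http_version 1.1;
            # Pass the WebSocket upgrade handshake through to the backend.
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }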

Database cluster and load balancing

不打扰是莪最后的温柔 submitted on 2019-11-28 13:17:14
Question: What is database clustering? If you allow the same database to be on 2 different servers, how do they keep the data synchronized between them? And how does this differ from load balancing, from a database server perspective? Answer 1: Database clustering is a bit of an ambiguous term: some vendors consider a cluster to be two or more servers sharing the same storage, while others call a set of replicated servers a cluster. Replication defines the method by which a set of servers remains synchronized without …

GCE LoadBalancer : Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1

随声附和 submitted on 2019-11-28 12:23:38
On one of my HTTP(S) load balancers, I want to change the backend configuration to increase the timeout from 30s to 60s (we have a few 502s with no server-side logs, and I want to check whether they come from the LB). But as I validate the change, I get an error saying Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1, even though I didn't change the namedPort. This issue seems to be the same, but the only solution is a workaround that does not work in my case. Thanks for your help. I'm sure the OP has resolved this by now, but for anyone else pulling their …
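This class of error usually points at the instance group behind the backend service having no named port set, so one workaround that is often suggested is to set it explicitly. A hedged sketch with placeholder names (an assumption, not confirmed to be the fix for the case above, which reports the standard workaround did not help):

    # Give the instance group an explicit named port so the backend
    # service no longer sees port 0.
    gcloud compute instance-groups set-named-ports my-instance-group \
        --named-ports=http:80 \
        --zone=us-central1-a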