load-balancing

nginx load balancer: Too many open files

会有一股神秘感。 Submitted on 2019-12-01 13:29:20
I have a load balancer and I get this kind of error:

2017/09/12 11:18:38 [crit] 22348#22348: accept4() failed (24: Too many open files)
2017/09/12 11:18:38 [alert] 22348#22348: *4288962 socket() failed (24: Too many open files) while connecting to upstream, client: x.x.x.x, server: example.com, request: "GET /xxx.jpg HTTP/1.1", upstream: "http://y.y.y.y:80/xxx.jpg", host: "example.com", referrer: "https://example.com/some-page"
2017/09/12 11:18:38 [crit] 22348#22348: *4288962 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files), client: x.x.x.x, server: example.com, request:
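Errors like `accept4() failed (24: Too many open files)` mean the nginx workers have exhausted their file-descriptor limit (each proxied request can hold two descriptors: one to the client, one to the upstream). A typical fix, sketched here with assumed values, is to raise both nginx's own limit and the OS limit for the nginx service user:

```nginx
# /etc/nginx/nginx.conf — values below are illustrative, size them to your load.
# Raise the per-worker descriptor limit (applies regardless of the shell ulimit).
worker_rlimit_nofile 65535;

events {
    # Keep worker_connections well under worker_rlimit_nofile, since a
    # proxied connection can consume two descriptors (client + upstream).
    worker_connections 16384;
}
```

On systemd hosts the service limit may also need raising (e.g. `LimitNOFILE=65535` in an override for the nginx unit); otherwise the OS cap can silently override `worker_rlimit_nofile`.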

Cloud Foundry / Bluemix load balancing

泄露秘密 Submitted on 2019-12-01 10:50:23
I know that by default, Bluemix / Cloud Foundry uses round-robin load balancing. Is there a way to change that? If I deploy 2 apps with the same route, and want 90% of my traffic to go to blue and 10% to green, is that possible? You would have to deploy more than two instances of the app to get finer than 50-50 control over who sees what. If you have 10 instances, for example, and you update 1, then you could get your 90-10 split. Check out this CF CLI plugin: https://github.com/krujos/scaleover-plugin Configuring the load balancer is not possible. One workaround you could use to "simulate"
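The instance-count trick works because the router round-robins across every instance mapped to the route, so the traffic share simply follows the instance share. A minimal simulation (purely illustrative, not a Cloud Foundry API):

```python
from collections import Counter
from itertools import cycle

# Hypothetical illustration: with 9 "blue" instances and 1 "green" instance
# behind the same route, plain round-robin yields a 90/10 split with no
# balancer configuration at all.
instances = ["blue"] * 9 + ["green"] * 1
router = cycle(instances)  # stand-in for the gorouter's round-robin

hits = Counter(next(router) for _ in range(1000))
print(hits["blue"], hits["green"])  # 900 100
```

This is also why the split is only as fine-grained as the instance count: with 10 instances the smallest step is 10%.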

How to identify which web server in a farm served a request?

末鹿安然 Submitted on 2019-12-01 07:01:42
Question: We're debugging intermittent issues with a website running on IIS7. Since we have many nodes behind the load balancer, we can't tell which host responded to a given request. Is there any way at the IIS level to specify which host served a request? For example, could IIS append a header to the response that indicates the IP of the host that sent it? Ideally, I would like a solution that does not require any coding. Answer 1: Without writing any code you could just configure a custom HTTP
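The no-code route the answer points at is an IIS custom response header, configured per node. A sketch of the `web.config` fragment (the header name `X-Served-By` and the value are made up; custom headers are static, so each node gets its own hard-coded value, e.g. its hostname):

```xml
<!-- web.config on each farm node; set a distinct value per node -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-Served-By" value="WEB01" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

Inspecting the response headers of a problematic request then reveals which node served it.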

How to make RabbitMQ scalable?

大憨熊 Submitted on 2019-12-01 06:38:12
I tried to test RabbitMQ, but I found that RabbitMQ has some problems: if I create a cluster of 3 nodes, I can't publish/deliver more than 6000/s. On the other hand, with one single node I can publish/deliver up to 25000/s. In other words, the more nodes I add, the worse performance gets. But according to this article: https://blog.pivotal.io/pivotal/products/rabbitmq-hits-one-million-messages-per-second-on-google-compute-engine they can publish more than 1 million, so how do they do that? I want to make RabbitMQ process more than 1 million messages per second. I resolved the
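The usual explanation for this slowdown is that a mirrored queue forces every node to coordinate on each message, so adding nodes adds coordination cost. High-throughput setups instead partition: each message goes to a queue that lives on exactly one node, chosen by a stable hash of a routing key (the idea behind the sharding and consistent-hash-exchange plugins). A sketch of the routing logic only (not the RabbitMQ API; node and queue names are made up):

```python
import hashlib

# Hypothetical cluster node names; each shard queue is homed on one node.
NODES = ["rabbit@node1", "rabbit@node2", "rabbit@node3"]

def queue_for(key: str) -> str:
    """Pick a shard queue deterministically from a message key."""
    digest = hashlib.sha256(key.encode()).digest()
    node = NODES[int.from_bytes(digest[:4], "big") % len(NODES)]
    return f"shard.{node}"

# The same key always lands on the same shard, so per-node queues never
# need cross-node replication on the hot path.
print(queue_for("order-1042"))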

AWS Pass traffic from NLB to an ALB?

风流意气都作罢 Submitted on 2019-12-01 06:08:10
Question: I am trying to pass incoming traffic from Amazon's Network Load Balancer to an Application Load Balancer. I am using the NLB since it supports an Elastic IP attachment, and I want it to serve as a proxy for the ALB. Is that even possible? Answer 1: It is possible, but it's slightly messy. The problem is that Application Load Balancers can scale up, out, in, and/or down, and in each case the internal IP addresses of the balancers can change... but NLB requires static addresses for its targets. So, at a low
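The classic workaround is a scheduled job that resolves the ALB's DNS name and reconciles the NLB target group's registered IPs against the result. The AWS registration calls are out of scope here; this sketch shows only the reconciliation logic (IPs are made up):

```python
# Sketch of the reconcile step in the NLB -> ALB workaround: given the IPs
# the ALB's DNS name currently resolves to and the IPs registered in the
# NLB's target group, compute what to register and deregister.
def reconcile(resolved: set, registered: set):
    """Return (ips_to_register, ips_to_deregister)."""
    return resolved - registered, registered - resolved

to_add, to_remove = reconcile(
    {"10.0.1.5", "10.0.2.9"},   # what DNS says the ALB is now
    {"10.0.1.5", "10.0.3.7"},   # what the target group still holds
)
print(sorted(to_add), sorted(to_remove))  # ['10.0.2.9'] ['10.0.3.7']
```

In practice this runs on a timer (e.g. a Lambda on a CloudWatch schedule) so the target group tracks the ALB as it scales.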

Basic Apache Camel LoadBalancer Failover Example

情到浓时终转凉″ Submitted on 2019-12-01 04:24:15
Question: To start, I just want to let you know I am new to Camel and only very recently grasped its main concepts. I am trying to create a basic working example using Apache Camel with ActiveMQ as a broker and the jms-component as a client of a load balancer using the failover construct. All of this is done using the Java DSL only (if possible). The example consists of 4 main apps, called MyApp-A, MyApp-B, MyApp-C and MyApp-D. In a normal scenario MyApp-A reads a file from my computer and then transforms it
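For reference, the failover construct in Camel's Java DSL looks roughly like this. A sketch only, not runnable standalone (it needs camel-core and camel-jms on the classpath, and all endpoint URIs below are invented to match the MyApp-A/B/C/D scenario):

```java
import org.apache.camel.builder.RouteBuilder;

public class FailoverRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:inbox")                 // MyApp-A picks up the file
            .loadBalance()
            // failover(maxAttempts, inheritErrorHandler, roundRobin):
            // -1 keeps failing over; round-robin moves on to the next
            // endpoint instead of always retrying the first one.
            .failover(-1, false, true)
            .to("jms:queue:appB",
                "jms:queue:appC",
                "jms:queue:appD");
    }
}
```

If MyApp-B's endpoint throws, the exchange is retried against MyApp-C, then MyApp-D.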

Curl to Google Compute load balancer gets error 502

末鹿安然 Submitted on 2019-12-01 03:16:31
If I curl a POST request with a file upload to my Google Compute load balancer (LB), I get a 502 error. If I do the same curl to the worker node behind the LB, it works. If I use a library like PHP Guzzle, it works either way. If I do a basic GET request on the LB, I get the correct response, but the worker log does not acknowledge receiving the request, as if the LB cached it. What is going on? FYI, Google LB newb. Thanks. Edit: I'm using the GCE HTTP LB. The curl command looks like this: curl http://1.2.3.4 -F "key=value" -F "data=@path/to/file" This curl command works when using the GCE VM IP but
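One hypothesis worth testing, given that curl fails while Guzzle succeeds: curl adds an `Expect: 100-continue` header on multipart POSTs, and some proxies and load balancers mishandle the 100-continue handshake, surfacing as a 502. Suppressing the header is a quick differential test (IP and file path taken from the question):

```
# Send the same upload with the Expect header suppressed; -v shows the
# request/response exchange so the 502 (or success) is visible.
curl http://1.2.3.4 -H "Expect:" -F "key=value" -F "data=@path/to/file" -v
```

If this succeeds where the plain command fails, the balancer's handling of `100-continue` is the culprit rather than the upload itself.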

service fabric URL routing

浪子不回头ぞ Submitted on 2019-12-01 00:58:14
I am using the Azure Load Balancer with Azure Service Fabric to host multiple self-hosted web applications, and I'd like to create a rule that routes based on the user's requested URL. So for example, if a user navigates to http://domain.com/Site1, the rule would route to http://domain.com:8181/Site1 within the cluster; if the user navigates to http://domain.com/Site2, the rule would route to http://domain.com:8282/Site2 within the cluster. Is this possible with Azure Service Fabric / Load Balancer? The Azure Load Balancer only forwards traffic it receives on a port to a
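Since the Azure Load Balancer works at layer 4 (it only maps ports), path-based routing needs a layer-7 proxy in front of the cluster, e.g. Azure Application Gateway or a gateway service you run yourself. The shape of the rule, sketched as a generic reverse-proxy config (illustrative only; the ports are the ones from the question):

```nginx
# Layer-7 path routing the Azure Load Balancer itself cannot do:
location /Site1/ { proxy_pass http://domain.com:8181; }
location /Site2/ { proxy_pass http://domain.com:8282; }
```

The proxy listens on port 80/443 behind the balancer and fans requests out to the per-site ports inside the cluster.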

SolrCloud load-balancing

陌路散爱 Submitted on 2019-12-01 00:37:42
Question: I'm working on a .NET application that uses Solr as its search engine. I have configured a SolrCloud installation with two servers (one as a replica) and I didn't split the index into shards (number of shards = 1). I have read that SolrCloud (via ZooKeeper) can do some load balancing, but I didn't understand how. If I call the specific address where an instance of Solr is deployed, the query appears only in the logs of that specific server. In the SolrCloud documentation I found that: Explicitly
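The key point is that SolrCloud's load balancing lives in the client: SolrJ's CloudSolrClient reads the cluster state from ZooKeeper and spreads requests across replicas. Hitting one node's URL directly sends every query to that node, which matches what the logs show. A .NET (or any non-SolrJ) client has to balance itself; a minimal client-side round-robin sketch (replica URLs are made up):

```python
from itertools import cycle

# Rotate across the known replicas of the collection; a real client would
# also refresh this list from ZooKeeper and skip unhealthy nodes.
replicas = cycle([
    "http://solr1:8983/solr/collection1",
    "http://solr2:8983/solr/collection1",
])

def next_base_url() -> str:
    """Return the base URL to send the next query to."""
    return next(replicas)

print(next_base_url())  # http://solr1:8983/solr/collection1
print(next_base_url())  # http://solr2:8983/solr/collection1
```

With shards = 1 and one replica, this alternates queries between the two servers instead of pinning them to whichever URL the application was configured with.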

Apache load balance tomcat websocket

有些话、适合烂在心里 Submitted on 2019-12-01 00:17:46
I am currently developing a websocket application, which is deployed on a Tomcat server. Because of the huge number of users I would like to distribute the workload across multiple Tomcat instances. I decided to use Apache for load balancing. Now I have a problem implementing Apache load balancing and sticky sessions for websocket requests. This is my Apache configuration:

ProxyRequests off
SSLProxyEngine on
RewriteEngine On
<Proxy balancer://http-localhost/>
    BalancerMember https://mcsgest1.desy.de:8443/Whiteboard/ route=jvm1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout
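One common gap in configurations like this is that an `https://` balancer only covers the HTTP side; the websocket upgrade needs mod_proxy_wstunnel and a separate `ws://`/`wss://` balancer, with stickiness keyed to the same route. A hedged sketch of the missing pieces (hostname and path taken from the question, session cookie name assumed to be Tomcat's default `JSESSIONID`):

```apache
# Requires mod_proxy, mod_proxy_wstunnel, mod_rewrite.
# Send Upgrade: websocket requests to a dedicated wss:// balancer.
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule ^/Whiteboard/(.*) balancer://ws-localhost/$1 [P,L]

<Proxy balancer://ws-localhost/>
    # Same route=jvm1 as the HTTP member, so HTTP and websocket traffic
    # from one session land on the same Tomcat instance.
    BalancerMember wss://mcsgest1.desy.de:8443/Whiteboard/ route=jvm1
    ProxySet stickysession=JSESSIONID lbmethod=byrequests
</Proxy>
```

Tomcat's `jvmRoute` attribute on each instance must match the `route=` values for the sticky-session cookie to carry the routing suffix.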