load-balancing

Uneven load between Aerospike instances in a cluster

我的未来我决定 submitted on 2019-12-12 03:19:13
Question: I have an application with a heavy batch-read workload. My Aerospike cluster (v3.7.2) has 14 servers, each with 7 GB RAM and 2 CPUs, in Google Cloud. Looking at the Google Cloud Monitoring graphs, I noticed a very unbalanced load between servers: some run at almost 100% CPU while others stay below 50% (image below). Even after hours of operation, the unbalanced pattern doesn't change. Is there any configuration I could change to make this cluster more balanced?
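For what it's worth, one server-side knob often checked for skewed batch-read load is the batch-index thread pool. A minimal aerospike.conf sketch, assuming the Aerospike 3.x `batch-index-threads` and `batch-max-buffers-per-queue` service parameters (verify both against your exact version's reference docs):

```
service {
    # More batch-index threads let each node drain its batch queue in
    # parallel; the 3.x-era default was low enough to leave CPUs idle.
    batch-index-threads 8
    # Cap buffered batch responses so one hot node doesn't hoard memory.
    batch-max-buffers-per-queue 255
}
```

If the imbalance persists, it may also be worth checking whether the batch keys hash to a narrow set of partitions, since batch sub-requests follow the partition map rather than being distributed round-robin.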

WebSphere: issue merging plugin-cfg.xml for load balancing

醉酒当歌 submitted on 2019-12-12 02:50:13
Question: I'm trying to merge and load balance several stand-alone WebSphere 6.1 Express servers. I'm using the instructions provided here: Merging plugin-cfg.xml files from multiple nodes (http://www-01.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.base.doc/ae/twsv_configsimplelb.html?lang=en) and here: Configuring simple load balancing across multiple application server profiles (http://www-01.ibm.com/support/knowledgecenter/SSAW57_6.1.0/com.ibm.websphere.base.doc/info/aes/ae/twsv…)
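The end state of that merge is a single ServerCluster element listing every stand-alone profile. A hypothetical fragment of a merged plugin-cfg.xml, following the stock plugin schema, with placeholder server names, hosts, ports, and URIs:

```xml
<!-- Hypothetical merged fragment; all names, hosts, and URIs are placeholders. -->
<ServerCluster Name="MergedCluster" LoadBalance="Round Robin" RetryInterval="60">
  <Server Name="profile1_server" LoadBalanceWeight="1">
    <Transport Hostname="host1.example.com" Port="9080" Protocol="http"/>
  </Server>
  <Server Name="profile2_server" LoadBalanceWeight="1">
    <Transport Hostname="host2.example.com" Port="9080" Protocol="http"/>
  </Server>
</ServerCluster>
<UriGroup Name="MergedCluster_URIs">
  <Uri Name="/myapp/*"/>
</UriGroup>
<Route ServerCluster="MergedCluster" UriGroup="MergedCluster_URIs" VirtualHostGroup="default_host"/>
```

The usual failure mode when merging by hand is two Server elements keeping the same Name or CloneID, so checking for duplicates is a good first step.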

Is there any loss of functionality if I use a load balancer which does not communicate with ZooKeeper in SolrCloud?

懵懂的女人 submitted on 2019-12-12 02:18:34
Question: In a SolrCloud setup there are 8 Solr nodes and 3 ZooKeeper nodes. A single load balancer receives all indexing and search queries and distributes them across the 8 Solr nodes. Before sending a query to a particular Solr node, it first checks whether that service endpoint is active; only if it is active does it send the request to that node. ZooKeeper handles leader election within each shard. In this setup, ZooKeeper is not handling the query…
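What an external load balancer can't replicate is SolrJ's ZooKeeper-aware routing. A sketch of the two client styles, assuming a SolrJ 6.x-era API (check the builder methods against your SolrJ version):

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SolrClients {
    public static void main(String[] args) {
        // ZooKeeper-aware client: reads live-node and leader state from ZK,
        // so updates go straight to each shard leader.
        CloudSolrClient cloud = new CloudSolrClient.Builder()
                .withZkHost("zk1:2181,zk2:2181,zk3:2181")
                .build();
        cloud.setDefaultCollection("mycollection");

        // Plain HTTP client pointed at the external load balancer: any node
        // accepts the request and forwards it internally, so updates sent to
        // a non-leader cost an extra hop, and dead-node detection falls
        // entirely on the load balancer's health checks.
        HttpSolrClient viaLb = new HttpSolrClient.Builder("http://loadbalancer:8983/solr")
                .build();
    }
}
```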

What does auto-scaling in WSO2 ELB mean?

不想你离开。 submitted on 2019-12-11 21:31:16
Question: Can WSO2 ELB automatically start another instance of ESB when the limit is reached? If yes, can I get a miniature example with a limit of 2 or 3? Answer 1: Yes, WSO2 ELB can automatically spawn instances using the configured cartridges. However, WSO2 ELB is no longer recommended and has been discontinued. We recommend WSO2 Private PaaS if you need an auto-scaling platform with WSO2 products; with WSO2 Private PaaS, you can use auto-scaling policies. Source: https://stackoverflow.com/questions
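For context, the old ELB drove auto-scaling from per-service cluster settings in its loadbalancer.conf. A hypothetical fragment with a max of 3 instances, property names as recalled from the ELB 2.x documentation (treat every name and value here as an assumption to verify against your ELB version):

```
esb {
    hosts   esb.cloud.example.com;
    domains {
        wso2.esb.domain {
            tenant_range            *;
            min_app_instances       1;
            max_app_instances       3;   # never spawn more than 3 instances
            queue_length_per_node   400;
            rounds_to_average       10;
            instances_per_scale_up  1;
        }
    }
}
```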

Envoy Pod-to-Pod communication within a Service in K8s

萝らか妹 submitted on 2019-12-11 17:41:59
Question: Is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes when Envoy is configured? Important: I have another question here that directed me to ask with Envoy-specific tags. E.g. Service name = UserService, 2 Pods (replicas = 2): Pod 1 --> Pod 2 (using the pod IP, not the load-balanced hostname) and Pod 2 --> Pod 1. The connection is a REST GET to 1.2.3.4:7079/user/1; the value for host + port is taken from kubectl get ep. Both of the pod IPs work…
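Independent of Envoy, Kubernetes itself offers a cleaner way to get per-pod addresses than copying IPs from kubectl get ep: a headless Service. A sketch, assuming a pod label of app: userservice and the port from the question:

```yaml
# Headless Service: clusterIP: None disables the virtual IP, so a DNS lookup
# of the service name returns the individual pod IPs instead of one VIP.
apiVersion: v1
kind: Service
metadata:
  name: userservice-headless
spec:
  clusterIP: None
  selector:
    app: userservice    # assumed pod label
  ports:
    - port: 7079
      targetPort: 7079
```

Whether an Envoy sidecar then intercepts such a direct pod-to-pod call depends on how its listeners and iptables redirection are set up, which is the crux of the question.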

Apache LoadBalancing: SSL/TLS settings for healthchecks

╄→гoц情女王★ submitted on 2019-12-11 17:40:17
Question: I'm trying to set up a load balancer with Apache. The communication to the backend servers is TLS-encrypted. When I enable health checks, this works as long as the SSLProxy* directives are set at VirtualHost level and not inside the Proxy section. When I move them inside the Proxy section, the SSL/TLS settings are no longer evaluated correctly (the connection to the backend uses the default SSL/TLS settings rather than the ones specified). But according to the documentation, it should be possible to define…
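A workaround consistent with what the poster observed is to keep the SSLProxy* directives at VirtualHost level while the health-check parameters live on the BalancerMember lines. A sketch using mod_proxy_hcheck's hcmethod/hcuri parameters, with placeholder backend names:

```apache
<VirtualHost *:443>
    # Vhost-level TLS client settings: the health checker appears to pick
    # these up, unlike directives placed inside the <Proxy> section.
    SSLProxyEngine on
    SSLProxyCACertificateFile /etc/ssl/certs/backend-ca.pem

    <Proxy "balancer://backends">
        BalancerMember "https://app1.internal:8443" hcmethod=GET hcuri=/health
        BalancerMember "https://app2.internal:8443" hcmethod=GET hcuri=/health
    </Proxy>

    ProxyPass        "/" "balancer://backends/"
    ProxyPassReverse "/" "balancer://backends/"
</VirtualHost>
```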

Is it possible to consolidate multiple responses and send one response in NGINX?

久未见 submitted on 2019-12-11 17:24:57
Question: I have Nginx/OpenResty and some other services running on one VM. Basically, the VM accepts requests on OpenResty, and OpenResty forwards each request to the appropriate service; e.g., the requests below get forwarded to ServiceA, ServiceB, and ServiceC respectively, and it is working fine: http://server:80/services/refA http://server:80/services/refB http://server:80/services/refC Now I need to expose a new endpoint that collects the responses from all services A, B, and C and then returns one consolidated response.
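One way to build such an aggregate endpoint in OpenResty is ngx.location.capture_multi, which fires internal subrequests in parallel. A sketch that assumes the three existing /services/* locations proxy to A, B, and C and that each returns a JSON body:

```nginx
location = /services/all {
    content_by_lua_block {
        -- Issue all three subrequests concurrently and wait for all of them.
        local a, b, c = ngx.location.capture_multi{
            { "/services/refA" },
            { "/services/refB" },
            { "/services/refC" },
        }
        -- Naive consolidation: wrap the three JSON bodies in one array.
        -- Real code should check a.status, b.status, c.status first.
        ngx.header["Content-Type"] = "application/json"
        ngx.say("[", a.body, ",", b.body, ",", c.body, "]")
    }
}
```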

Use Heroku server URL in nginx.conf.erb for load balancing

笑着哭i submitted on 2019-12-11 16:56:35
Question: I have 2 servers: Server 1 does the load balancing with Nginx (https://server1.herokuapp.com/) and Server 2 serves the RESTful APIs (https://server2.herokuapp.com/). Here is my nginx.conf.erb configuration on Server 1: https://gist.github.com/ntvinh11586/5b6fde3e804482aa400f3f7faca3d65f When I call https://server1.herokuapp.com/, instead of getting data back from https://server2.herokuapp.com/, I get a 400 Bad Request. I don't know whether something in my nginx.conf.erb is wrong or whether I need to implement…
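One common cause of a 400 from the Heroku router in this kind of setup is the Host header: Heroku routes requests by Host (and by SNI for TLS), so the proxy must present the backend app's hostname rather than forwarding the client's. A sketch of the relevant location block (directives here are an assumption to compare against the gist, not a quote from it):

```nginx
location / {
    proxy_pass            https://server2.herokuapp.com;
    # Heroku's router selects the target app by Host header.
    proxy_set_header Host server2.herokuapp.com;
    # Send SNI matching the upstream host during the TLS handshake.
    proxy_ssl_server_name on;
}
```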

Azure VNet peering with public IP load balancer

心已入冬 submitted on 2019-12-11 16:11:26
Question: I've got two VNets. VNet #1: 1 VM with a public (internet-facing) IP load balancer, for the internet-connected app VMs. VNet #2: 3 VMs with a public (internet-facing) IP load balancer, for internet-facing and private DB servers (the load balancer uses a public IP so that I can access the DBs). I set up peering between VNet1 and VNet2 so that the communication between them is private/internal and fast, with no internet routing. I want to access the DBs (through the load balancer) in VNet2 from VNet1, so in the…
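A pattern worth trying here: VNet peering carries only private-IP traffic, so hitting VNet2's public load-balancer frontend from VNet1 won't use the peering. Giving the DB tier an internal load balancer with a private frontend IP keeps the path inside the peering. A hypothetical Azure CLI sketch, with all names and the address as placeholders:

```bash
az network lb create \
  --resource-group myRG \
  --name db-internal-lb \
  --sku Standard \
  --vnet-name vnet2 \
  --subnet db-subnet \
  --frontend-ip-name dbFrontend \
  --private-ip-address 10.2.0.100 \
  --backend-pool-name dbBackendPool
# Supplying --vnet-name/--subnet/--private-ip-address (instead of a public IP)
# makes the frontend internal, reachable from peered VNets without internet routing.
```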

GCP load balancer 502 server error and “backend_connection_closed_before_data_sent_to_client” IIS 10

时光总嘲笑我的痴心妄想 submitted on 2019-12-11 15:53:29
Question: I have a GCP load balancer in front of 4 IIS 10 web servers. Sporadically it responds with a 502 Server Error, and the logs show the cause as backend_connection_closed_before_data_sent_to_client. I have read through the article https://cloud.google.com/compute/docs/load-balancing/http/ and it says the keepalive timeout needs to be set to 620 seconds for nginx and Apache. How do I do the same in IIS 10? Answer 1: Figured this out after raising a ticket with the Google Cloud team. I am putting it here so that others can…
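The documented remedy for this class of 502s is raising the backend's keep-alive idle timeout beyond the GCP load balancer's 600 seconds. In IIS 10 the keep-alive idle window is bounded by the site connection timeout; a sketch using appcmd, assuming a value of 630 seconds (00:10:30) to clear the recommended 620:

```
rem Raise the default connection timeout for all sites to 630 seconds,
rem which also extends how long idle keep-alive connections stay open.
%windir%\system32\inetsrv\appcmd.exe set config ^
  -section:system.applicationHost/sites ^
  /siteDefaults.limits.connectionTimeout:"00:10:30" /commit:apphost
```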