load-balancing

Google Cloud LB: Change “server error” default html page

安稳与你 · Submitted on 2019-12-12 10:48:16

Question: By default, if the load balancer can't find a backend to redirect traffic to, for example if all available backends are down, it shows this HTML page: Transcript: Error: Server Error. The server encountered a temporary error and could not complete your request. Please try again in 30 seconds. I would like to use my own static HTML page instead. I saw this on the LB + Cloud Storage page here: You can also configure a custom index page and a custom error page that will be served if the requested…
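When the backend is a Cloud Storage bucket, the bucket's website configuration controls which object is served for errors. A minimal sketch, assuming a bucket named `my-static-site` with `index.html` and `404.html` already uploaded (all three names are placeholders):

```shell
# Point the bucket's website configuration at custom index and error pages.
# "my-static-site", index.html and 404.html are placeholder names.
gsutil web set -m index.html -e 404.html gs://my-static-site
```

Note that this only affects requests served from the bucket itself; the generic "Server Error" page shown when no backend is healthy is produced by the load balancer, not the bucket.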

Load balancing web servers + keeping content synced

笑着哭i · Submitted on 2019-12-12 09:44:54

Question: I'm considering implementing EC2's Elastic Load Balancing features, but I'm running a web application (on the LAMP stack) that changes frequently, and I'm wondering what the most common strategy is for keeping the application in sync between the load-balanced servers. The database would live elsewhere, so I'm only worried (at this point) about keeping the actual scripts in sync when I make changes. Answer 1: One way of addressing this problem is to use a continuous integration pipeline, which can…
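Absent a full CI pipeline, a common low-tech approach is to push from a single source of truth to every node. A sketch, assuming SSH access to each web server (host names, path and the deploy user are placeholders):

```shell
# Hypothetical deploy script: rsync the application directory to each
# load-balanced web node. HOSTS, APP_DIR and the "deploy" user are placeholders.
HOSTS="web1.example.com web2.example.com"
APP_DIR=/var/www/app

for host in $HOSTS; do
  rsync -az --delete "$APP_DIR/" "deploy@$host:$APP_DIR/"
done
```

A CI server accomplishes the same thing with the added benefits of triggering on commit and rolling back failed deploys.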

Kubernetes load balancer SSL termination in google container engine?

早过忘川 · Submitted on 2019-12-12 09:37:00

Question: Background: I'm pretty new to Google's Cloud platform, so I want to make sure that I'm not missing anything obvious. We're experimenting with GKE and Kubernetes and we'd like to expose some services over HTTPS. I've read the documentation for HTTP(S) load balancing, which seems to suggest that you should maintain your own nginx instance that does SSL termination and load balancing. To me this looks quite complex (I'm used to working on AWS and its load balancer (ELB), which has supported SSL…
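An alternative to running your own nginx is to let a Kubernetes Ingress provision the GCE HTTP(S) load balancer and terminate SSL there. A sketch, assuming a Service named `my-service` on port 80 and a TLS secret created beforehand (all names are placeholders, and the API version reflects the Ingress API of this period):

```yaml
# Ingress that terminates TLS at the cloud load balancer.
# Create the secret first, e.g.:
#   kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - secretName: my-tls-secret
  backend:
    serviceName: my-service
    servicePort: 80
```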

Testing specific Azure Web Site Instance

时间秒杀一切 · Submitted on 2019-12-12 09:18:03

Question: I have an Azure Web Site configured to use multiple (2) instances. I have a service bus that should pass messages (i.e. cache evict) between the instances. I need to test this mechanism. In a conventional (on-premise) system I would point a browser to instance 1 (i.e. http://myserver1.example.com), perform an action, then point my browser to the other instance (http://myserver2.example.com) to test. However, in Azure I can't see a way to hit a specific instance. Is it possible? Or is there an…
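One commonly used workaround relies on the ARRAffinity cookie that Azure App Service issues for sticky sessions: each instance is identified by a distinct cookie value, so replaying a captured value pins your requests to that instance. A sketch with a placeholder site URL and cookie value:

```shell
# Capture the affinity cookie from a first response:
curl -sI https://mysite.azurewebsites.net/ | grep -i 'Set-Cookie: ARRAffinity'
# Replay it so every subsequent request lands on the same instance
# (replace <value> with the captured cookie value):
curl -s --cookie "ARRAffinity=<value>" https://mysite.azurewebsites.net/perform-action
```

Repeating the first request until a different cookie value appears gives you a handle on the other instance.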

Cross-region load balancing + routing on Google Container Engine

爷,独闯天下 · Submitted on 2019-12-12 07:32:48

Question: How do I achieve cross-region load balancing on Google Container Engine? I will have one Kubernetes cluster per region in several regions, and I need to route traffic from a single domain name to the geographically closest cluster. Some options I've investigated: Kubernetes LoadBalancers seem to be restricted to one cluster. I'm not sure how you get Kubernetes Ingress to talk to different clusters. (It sounds like this object is backed by Compute Engine HTTP load balancers, though.) Compute…
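The Compute Engine route can be sketched as: expose each cluster's pods via NodePort, wrap each cluster's nodes in an instance group, and attach groups from different regions to one global backend service behind a single anycast IP, which routes clients to the nearest healthy backend. All names and zones below are placeholders, and flags may vary by gcloud version:

```shell
# One global backend service fronting instance groups in two regions.
gcloud compute backend-services create web-bs --global \
    --protocol HTTP --port-name http --health-checks web-hc
gcloud compute backend-services add-backend web-bs --global \
    --instance-group us-ig --instance-group-zone us-central1-a
gcloud compute backend-services add-backend web-bs --global \
    --instance-group eu-ig --instance-group-zone europe-west1-b
```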

How to set maximum queue connection for nginx port in Windows?

别等时光非礼了梦想. · Submitted on 2019-12-12 06:48:13

Question: I am learning to design scalable systems, for now on a Windows machine. I created two servers that listen on ports 27016 and 27015; all they do is return a "HelloWorld!" response. I set listen(ListenSocket, SOMAXCONN) for both servers when creating them in Visual Studio, following the Winsock tutorial. Using JMeter, I performed a load test on each of them individually (1000 requests per second) and everything was OK. Now when I introduced nginx, which is listening on port 80 and load balancing the…
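On the nginx side, the accept-queue length for the listening socket is set with the `backlog` parameter of the `listen` directive; the operating system may still cap it at its own SOMAXCONN limit. A fragment for nginx.conf, with the upstream ports matching the two test servers (the value 1024 is an arbitrary example):

```nginx
upstream backend_pool {
    server 127.0.0.1:27015;
    server 127.0.0.1:27016;
}

server {
    # backlog= sets the pending-connection queue for this listening socket.
    listen 80 backlog=1024;
    location / {
        proxy_pass http://backend_pool;
    }
}
```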

WSO2 Dynamically Adding an EndPoint to LoadBalance Endpoint

强颜欢笑 · Submitted on 2019-12-12 06:47:49

Question: I have this configuration: 1) WSO2 ESB 4.7.0 2) WSO2 MB 2.1.0 3) a topic = MyTopic 4) one subscriber to MyTopic 5) N publishers on MyTopic 6) a static LoadBalance Endpoint deployed on the ESB. My goal is that when one of the N endpoints publishes a message on MyTopic, the subscriber on the ESB should be able to add an endpoint to the LoadBalanceEndpoint list. Is that possible? Do I need to use DynamicLoadBalanceEndpoint, and if so, how? Answer 1: OK, I found the answer myself. It can be done by…
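For reference, a static load-balance endpoint in Synapse configuration looks roughly like this (endpoint name and service URIs are placeholders); the dynamic variant replaces the fixed child endpoints with members discovered at runtime:

```xml
<!-- Sketch of a static Synapse load-balance endpoint; URIs are placeholders. -->
<endpoint name="MyLBEndpoint">
  <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <endpoint>
      <address uri="http://host1:8280/services/MyService"/>
    </endpoint>
    <endpoint>
      <address uri="http://host2:8280/services/MyService"/>
    </endpoint>
  </loadbalance>
</endpoint>
```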

Google Container Engine - How to auto-scale an instance group based on HTTP load?

半世苍凉 · Submitted on 2019-12-12 04:59:08

Question: In Google Container Engine, when using an L7 ingress, what's the correct way to auto-scale an instance group based on HTTP load? When I try to enable auto-scaling for my instance group, I get a warning that I must add the instance group to the L7 ingress's backend service. However, the backend service is already using a k8-ig group, for which I cannot enable auto-scaling. Answer 1: Autoscaling based on HTTP load is not currently supported through the Ingress. You can of course grow the size of your…
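Since HTTP-load-based scaling isn't exposed through the Ingress, a common fallback is to scale the pods themselves on CPU utilisation with a Horizontal Pod Autoscaler (the deployment name and thresholds below are placeholders):

```shell
# Scale the deployment between 2 and 10 replicas, targeting 70% CPU.
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70
```

The node pool can then grow separately via cluster-level autoscaling, while the load balancer keeps distributing HTTP traffic across whatever replicas exist.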

Ehcache Replication on JBoss in One Split Server

牧云@^-^@ · Submitted on 2019-12-12 04:36:01

Question: First, let me describe the environment we have. Currently, we are deploying a project on a JBoss AS 7 application server on a remote machine. This JBoss installation has been clustered into 4 nodes: 2 servers split into two sub-server groups: Server One (Server One - 1, Server One - 2) and Server Two (Server Two - 1, Server Two - 2). NOTE: Deploying the app in JBoss with these active effectively makes it available to all of them; it's like deploying the same app to all four of them. That said,…
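For context, replicating Ehcache across nodes like these is typically configured in ehcache.xml with RMI-based peer discovery; a sketch with placeholder multicast settings:

```xml
<!-- Peers find each other via multicast and replicate updates over RMI.
     The multicast address, port and TTL below are placeholder values. -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446,timeToLive=1"/>
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
```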

Unable to load balance using Docker, Consul and nginx

*爱你&永不变心* · Submitted on 2019-12-12 04:23:31

Question: What I want to achieve is load balancing using this stack: Docker, Docker Compose, Registrator, Consul, Consul Template, NGINX and, finally, a tiny service that prints "Hello world" in the browser. So, at this moment I have a docker-compose.yml file. It looks like this:

```yaml
version: '2'
services:
  accent:
    build:
      context: ./accent
    image: accent
    container_name: accent
    restart: always
    ports:
      - 80
  consul:
    image: gliderlabs/consul-server:latest
    container_name: consul
    hostname: ${MYHOST}
    restart: always
```