load-balancing

Load balancing web servers + keeping content synced

狂风中的少年 submitted on 2019-12-05 12:13:01
I'm considering implementing EC2's Elastic Load Balancing feature, but I'm running a web application (on the LAMP stack) that changes frequently, and I'm wondering what the most common strategy is for keeping the application in sync across the load-balanced servers. The database would live elsewhere, so I'm only worried (at this point) about keeping the actual scripts in sync when I make changes. One way of addressing this problem is using a continuous integration server, which can transfer your files with rsync and build the project on the servers, though this is not quite it yet. There are quite a
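A minimal sketch of the rsync-based approach mentioned above; the hostnames, user, and paths are placeholders, not from the question:

    #!/usr/bin/env bash
    # Push the application code from a build/staging box to every
    # web server behind the load balancer. Hostnames and paths are
    # hypothetical.
    set -euo pipefail
    SERVERS="web1.example.com web2.example.com"
    APP_DIR="/var/www/myapp/"
    for host in $SERVERS; do
        # --delete keeps each target in sync by removing files that
        # no longer exist in the source tree.
        rsync -az --delete --exclude '.git' "$APP_DIR" "deploy@${host}:${APP_DIR}"
    done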

502 response coming from errors in Google Cloud LoadBalancer

我们两清 submitted on 2019-12-05 11:48:59
I'm using the Google App Engine flexible environment (already migrated to env: flex) with the Python 3.4 runtime. Over the last month, I noticed multiple times (but on less than 5% of requests) that I, or automated processes, get a 502 (Bad Gateway) from the server. I couldn't reproduce it locally and couldn't find any trace of it in the GAE service logs. But searching for 502 errors across all services, I realized that they come from the Cloud HTTP Load Balancer service. Going over the jsonPayload of these 502 errors, I see this reason: statusDetails: "failed_to_pick_backend" @type:"type.googleapis.com/google.cloud
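To confirm where the 502s originate, the load balancer's request logs can be filtered from the CLI; a sketch using standard Cloud Logging filter syntax (the project ID and limit are placeholders):

    # List recent 502 responses recorded by the HTTP(S) load balancer.
    # "http_load_balancer" is the Cloud Logging resource type for
    # these entries.
    gcloud logging read \
      'resource.type="http_load_balancer" AND httpRequest.status=502' \
      --project my-project --limit 20 --format json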

WCF http service via https load balancer

你。 submitted on 2019-12-05 11:46:22
I have a WCF web service that can be accessed via an http endpoint. Now this service shall be published behind a load balancer via https. Clients are created in .NET via svcutil.exe, but the WSDL is also needed for a Java client. What I understand is: internally, the web service is an http service and nothing needs to be changed. The address is http://myserver.com/example.svc with WSDL ..?wsdl. Externally, the service has to show up as an https service with address https://loadbalancer.com/example.svc and WSDL ..?wsdl. From other posts I have learned that the load balancer problem can be solved with
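One commonly cited fix (assuming .NET 4.5 or later) is to let WCF build its metadata addresses from the incoming request headers, so the WSDL advertises the load balancer's https address instead of the internal one; a hedged web.config sketch:

    <!-- Hedged sketch: makes WSDL/metadata addresses follow the Host
         header the load balancer forwards, instead of the internal
         http address. Requires .NET 4.5+. -->
    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <serviceMetadata httpGetEnabled="true" />
            <useRequestHeadersForMetadataAddress />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>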

How do you do a rolling deploy with capistrano?

北城以北 submitted on 2019-12-05 10:40:16
We have 2 instances behind a load balancer running the same Rails app with Passenger. When we deploy, the server startup time causes requests to time out. As a result, we have a script that updates each web server individually: taking one off the LB, deploying with cap, testing a dynamic page load, and putting it back on the LB. How can we get Capistrano to do this for us with one command? I have been able to set it up to deploy to all instances simultaneously, but then they all restart at the same time and make the site unavailable for 20 seconds. What am I missing here? Seems like this should be
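A sketch of scripting that rolling loop around Capistrano, assuming Capistrano 2's HOSTS variable to restrict a run to one server; the LB helpers and health-check URL are hypothetical placeholders:

    #!/usr/bin/env bash
    # Rolling deploy, one server at a time. remove_from_lb/add_to_lb
    # stand in for whatever your load balancer's API provides.
    set -e
    for host in web1.example.com web2.example.com; do
        ./remove_from_lb "$host"               # hypothetical helper
        HOSTS="$host" bundle exec cap deploy   # deploy to this host only
        curl -fsS "http://$host/health" > /dev/null  # fail fast if the app is down
        ./add_to_lb "$host"                    # hypothetical helper
    done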

Can I have sticky sessions with HAProxy and socket.io with authentication?

假如想象 submitted on 2019-12-05 10:01:37
I have several instances of socket.io with authentication running under HAProxy, and I need to force the authentication request and the socket connection to go to the same instance. I've set up HAProxy based on this answer to a SO question, with some modifications, as follows:

    global
        maxconn 4096 # Total Max Connections. This is dependent on ulimit
        nbproc 2

    defaults
        mode http

    frontend all 0.0.0.0:80
        timeout client 86400000
        default_backend www_backend
        acl is_websocket hdr(Upgrade) -i WebSocket
        acl is_websocket hdr_beg(Host) -i ws
        use_backend socket_backend if is_websocket

    backend www_backend
        balance
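The excerpt is cut off before socket_backend is defined. One way to pin a client to the same instance, assuming source-IP stickiness is acceptable for this setup, is balance source on the websocket backend; a hedged sketch with placeholder server lines:

    backend socket_backend
        balance source          # stick clients to a server by source IP
        option forwardfor
        timeout server 86400000
        server node1 127.0.0.1:8001 weight 1 maxconn 1024 check
        server node2 127.0.0.1:8002 weight 1 maxconn 1024 check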

docker-compose --scale X nginx.conf configuration

白昼怎懂夜的黑 submitted on 2019-12-05 08:13:35
My nginx.conf file currently has the routes defined directly:

    worker_processes auto;
    events {
        worker_connections 1024;
    }
    http {
        upstream wordSearcherApi {
            least_conn;
            server api1:61370 max_fails=3 fail_timeout=30s;
            server api2:61370 max_fails=3 fail_timeout=30s;
            server api3:61370 max_fails=3 fail_timeout=30s;
        }
        server {
            listen 80;
            server_name 0.0.0.0;
            location / {
                proxy_pass http://wordSearcherApi;
            }
        }
    }

Is there any way to create just one service in docker-compose.yml so that, with docker-compose up --scale api=3, nginx does the load balancing automatically? It's not possible with your
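The usual workaround is to drop the hard-coded upstream and let Docker's embedded DNS (127.0.0.11) resolve the compose service name at request time, which round-robins across the scaled replicas; a sketch, assuming the service is named api as in the question:

    http {
        server {
            listen 80;
            location / {
                # Docker's embedded DNS; re-resolve every 10s so new
                # replicas created by --scale are picked up.
                resolver 127.0.0.11 valid=10s;
                # Using a variable forces nginx to resolve the name at
                # request time instead of once at startup.
                set $target http://api:61370;
                proxy_pass $target;
            }
        }
    }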

How to configure same context applications to use different machines with ModCluster and Wildfly10

天涯浪子 submitted on 2019-12-05 06:48:36
Question: I'm trying to use ModCluster to load balance some servers. We have one single EAR that needs to be load balanced under different DNS names. The scenario: we need to keep the same context 'system1' because of backward compatibility, with
- 4 servers for urla.com.br/system1/
- 2 servers for urlb.com.br/system1/
Using Wildfly 10.1.0 in domain mode, they are separated into two server groups: URLA and URLB. They share the same profile (URL-HA) and socket bindings (URL-HA-SOCKET). I have an Apache with
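One approach discussed for this kind of setup (a sketch only, assuming the modcluster subsystem exposes a balancer attribute and that server groups can carry their own system properties; all names are placeholders) is to give each server group its own balancer name, then route each DNS name to the matching balancer on the Apache side:

    # jboss-cli sketch: resolve the balancer name per server group,
    # since both groups share the URL-HA profile.
    /profile=URL-HA/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=balancer,value="${modcluster.balancer}")
    /server-group=URLA/system-property=modcluster.balancer:add(value=balancer-a)
    /server-group=URLB/system-property=modcluster.balancer:add(value=balancer-b)

Each Apache VirtualHost (urla.com.br, urlb.com.br) would then ProxyPass /system1 to its own balancer:// name, so the same context can be deployed in both groups without the workers mixing.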

PerformanceCounter creation take a LONG time

若如初见. submitted on 2019-12-05 05:03:30
I'm working on a load-balancing system, so I need to know the load of each machine. PerformanceCounter seems the way to go, but creating the first one takes between 38 and 60 seconds. Each subsequent counter creation or NextValue call is nearly instant, however. Here is the code I'm using:

    [TestClass]
    public class PerfMon
    {
        [TestMethod]
        public void SimpleCreationTest()
        {
            Stopwatch Time = new Stopwatch();
            Time.Start();
            Debug.WriteLine("Time is : " + Time.ElapsedMilliseconds);
            // Create
            PerformanceCounter RAM = new PerformanceCounter("Memory", "Available MBytes");
            Debug.WriteLine("Time is : " + Time.ElapsedMilliseconds);
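Since that cost is paid once per process (loading the counter categories), a common mitigation is to warm up a throwaway counter on a background thread during startup so later counters construct quickly; a hedged C# sketch, not from the question:

    using System.Diagnostics;
    using System.Threading.Tasks;

    static class PerfCounterWarmup
    {
        // Fire-and-forget warmup: the first PerformanceCounter the
        // process creates absorbs the slow category load. Call this
        // as early as possible at application startup.
        public static Task Start()
        {
            return Task.Run(() =>
            {
                using (var warm = new PerformanceCounter("Memory", "Available MBytes"))
                {
                    // The first NextValue() is also a known cold call.
                    warm.NextValue();
                }
            });
        }
    }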

One domain name “load balanced” over multiple regions in Google Compute Engine

北城以北 submitted on 2019-12-05 04:35:39
Question: I have a service running on Google Compute Engine. I've got a few instances in Europe in a target pool and a few instances in the US in another target pool. At the moment I have a domain name hooked up to the Europe target pool's IP, and it load balances between those two instances very nicely. Now, can I configure the Compute Engine load balancer so that the one domain name is connected to both regions? All load balancing rules seem to be tied to a single region, and I don't know how I could get
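Cross-region balancing is what the global HTTP(S) load balancer does: instance groups in several regions attach to one backend service behind a single global forwarding rule, instead of per-region target pools. A hedged gcloud sketch with placeholder resource names, zones, and instance groups:

    gcloud compute health-checks create http web-hc --port 80
    gcloud compute backend-services create web-bs \
        --protocol HTTP --health-checks web-hc --global
    gcloud compute backend-services add-backend web-bs \
        --instance-group eu-group --instance-group-zone europe-west1-b --global
    gcloud compute backend-services add-backend web-bs \
        --instance-group us-group --instance-group-zone us-central1-b --global
    gcloud compute url-maps create web-map --default-service web-bs
    gcloud compute target-http-proxies create web-proxy --url-map web-map
    gcloud compute forwarding-rules create web-fr \
        --global --target-http-proxy web-proxy --ports 80
    # Point the domain's A record at the forwarding rule's single
    # global (anycast) IP; requests route to the closest healthy region.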

Kubernetes load balancer SSL termination in google container engine?

纵饮孤独 submitted on 2019-12-05 03:34:40
Background: I'm pretty new to Google's Cloud platform, so I want to make sure that I'm not missing anything obvious. We're experimenting with GKE and Kubernetes, and we'd like to expose some services over https. I've read the documentation for http(s) load balancing, which seems to suggest that you should maintain your own nginx instance that does SSL termination and load balancing. To me this looks quite complex (I'm used to working on AWS, whose load balancer (ELB) has supported SSL termination for ages). Questions: Is creating and maintaining an nginx instance the way to go if all
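On GKE, a Kubernetes Ingress can terminate TLS at the Google HTTP(S) load balancer itself, so no self-managed nginx is needed for this. A hedged manifest sketch; the names are placeholders and the TLS secret is assumed to exist already:

    # TLS is terminated by the GCE HTTP(S) load balancer that GKE
    # provisions for the Ingress. Create the secret first, e.g.:
    # kubectl create secret tls web-tls --cert=cert.pem --key=key.pem
    apiVersion: extensions/v1beta1   # Ingress API group of that era
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      tls:
      - secretName: web-tls
      backend:
        serviceName: web-service     # placeholder Service (type NodePort)
        servicePort: 80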