load-balancing

What is a pass-through load balancer? How is it different from a proxy load balancer?

生来就可爱ヽ(ⅴ<●) · Submitted on 2019-12-03 02:00:53
Google Cloud Network load balancer is a pass-through load balancer, not a proxy load balancer ( https://cloud.google.com/compute/docs/load-balancing/network/ ). I cannot find any general resources on pass-through LBs. Both HAProxy and Nginx seem to be proxy LBs. I'm guessing that a pass-through LB would redirect clients directly to the servers. In what scenarios would that be beneficial? Are there any other types of load balancers besides pass-through and proxy? It's hard to find resources on pass-through load balancing because everyone seems to call it something different.
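As a sketch of the pass-through idea at the Linux level (the VIP 203.0.113.10 and backend addresses are invented for illustration), IPVS in direct-routing mode only rewrites the destination MAC of each packet; the backends carry the VIP on a loopback alias and answer the client directly, so return traffic never crosses the balancer:

```shell
# Assumes root and the ip_vs kernel module; addresses are illustrative.
ipvsadm -A -t 203.0.113.10:80 -s rr               # virtual service, round-robin
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.2:80 -g   # -g = direct routing (pass-through)
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.3:80 -g
```

Because replies bypass the balancer, this scales well for response-heavy traffic, which is one of the main reasons to prefer pass-through over a proxy.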

Enable HTTPS on GCE/GKE

有些话、适合烂在心里 · Submitted on 2019-12-03 01:56:10
I am running a web site with Kubernetes on Google Cloud. At the moment everything works well over http, but I need https. I have several services, and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in web.yaml, which I create. Creating the service gets stuck with: Error creating load balancer (will retry): Failed to create load balancer for service default/web: requested ip <IP> is …
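A sketch of the usual GKE route (the names web-ingress, web-ip, and web-tls are assumptions): keep web as a NodePort service and put an Ingress in front of it, which provisions Google's HTTP(S) load balancer, with TLS coming from a Secret holding the certificate. Note that this kind of error often comes from reserving a regional static IP where the HTTP(S) load balancer needs a global one:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Name of a reserved *global* static IP (not regional).
    kubernetes.io/ingress.global-static-ip-address: web-ip
spec:
  tls:
  - secretName: web-tls        # Secret containing tls.crt / tls.key
  defaultBackend:
    service:
      name: web
      port:
        number: 80
```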

How do I set up global load balancing using Digital Ocean DNS and Nginx?

耗尽温柔 · Submitted on 2019-12-03 01:54:51
UPDATE: See the answer I've provided below for the solution I eventually set up on AWS. I'm currently experimenting with methods to implement a global load-balancing layer for my app servers on Digital Ocean, and there are a few pieces I've yet to put together. The goal: offer highly available service to my users by routing all connections to the closest 'cluster' of servers in SFO, NYC, LON, and eventually Singapore. Additionally, I would eventually like to automate the maintenance of this …
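One common building block of such a layer (a sketch; all addresses are placeholders) is an Nginx reverse proxy in each region that prefers its local cluster and fails over to a remote one, while geo-aware DNS handles the "closest region" part of the routing:

```nginx
upstream app_sfo {
    server 10.10.0.1:8080 max_fails=3 fail_timeout=10s;
    server 10.10.0.2:8080 max_fails=3 fail_timeout=10s;
    server 10.20.0.1:8080 backup;   # NYC node, used only if all SFO nodes are down
}
server {
    listen 80;
    location / {
        proxy_pass http://app_sfo;
    }
}
```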

Distributed Concurrency Control

泪湿孤枕 · Submitted on 2019-12-03 01:43:33
I've been working on this for a few days now, and I've found several solutions, but none of them is especially simple or lightweight. The problem is basically this: we have a cluster of 10 machines, each running the same software on a multithreaded ESB platform. I can deal with concurrency issues between threads on the same machine fairly easily, but what about concurrency on the same data on different machines? Essentially the software receives requests to feed a customer's data from …
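One lightweight way to sidestep cross-machine locking entirely is to partition the data: hash each customer to exactly one of the 10 machines, so two machines never touch the same customer concurrently. A minimal sketch (the machine names are invented):

```python
import hashlib

def owner(customer_id: str, machines: list[str]) -> str:
    """Map a customer deterministically to one machine, so only that
    machine ever processes this customer's data."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return machines[int(digest, 16) % len(machines)]

machines = [f"esb-{i}" for i in range(10)]
print(owner("customer-42", machines))   # always the same machine for this customer
```

Each node can then apply this check on arrival and forward (or ignore) requests for customers it does not own; the trade-off is that partitions shift when the machine list changes, which consistent hashing mitigates.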

Zero downtime deployment for Java apps

半世苍凉 · Submitted on 2019-12-03 00:44:11
I am trying to build a very lightweight solution for zero-downtime deployment of Java apps. For the sake of simplicity, let's assume we have two servers. My solution is to use: on the "front", a software load balancer (I am thinking of HAProxy here); on the "back", two servers, both running Tomcat with the application deployed. When we are about to deploy a new release, we disable one of the servers in HAProxy, so only one server (call it server A, running the old release) is available. We deploy the new release on the other server (call it server B), run production unit …
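A minimal HAProxy sketch for that two-server setup (addresses and the socket path are assumptions); servers are drained one at a time over the admin socket while the new release is deployed:

```
global
    stats socket /var/run/haproxy.sock mode 600 level admin
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
frontend www
    bind *:80
    default_backend tomcats
backend tomcats
    server a 10.0.0.1:8080 check
    server b 10.0.0.2:8080 check
```

To take server A out of rotation: echo "disable server tomcats/a" | socat stdio /var/run/haproxy.sock, then "enable server tomcats/a" once the new release has passed its checks.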

Apache proxy load balancing backend server failure detection

江枫思渺然 · Submitted on 2019-12-03 00:41:20
Here's my scenario (designed by my predecessor): two Apache servers serving reverse-proxy duty for a number of mixed backend web servers (Apache, IIS, Tomcat, etc.). There are some sites for which we have multiple backend web servers, and in those cases we do something like:

<Proxy balancer://www.example.com>
    BalancerMember http://192.168.1.40:80
    BalancerMember http://192.168.1.41:80
</Proxy>
<VirtualHost *:80>
    ServerName www.example.com:80
    CustomLog /var/log/apache2/www.example.com.log combined
    <Location />
        Order allow,deny
        Allow from all
        ProxyPass balancer://www.example.com/ …

Using Erlang, how should I distribute load amongst a cluster?

妖精的绣舞 · Submitted on 2019-12-03 00:32:29
I was looking at the slave/pool modules, and they seem similar to what I want, but it also seems like I would have a single point of failure in my application (if the master node goes down). The client has a list of gateways (for the sake of fallback; all do the same thing) which accept connections, and one is chosen at random by the client. When the client connects, all nodes are examined to see which has the least load, and the IP of the least-loaded server is returned to the client. The client then connects to this server and everything is executed there. In summary, I want all nodes …
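The redirect step described above is easy to sketch outside Erlang (Python here; node names and load numbers are invented): the gateway queries each node's reported load and hands the client the least-loaded one.

```python
def least_loaded(nodes: dict[str, int]) -> str:
    """Given a mapping of node name -> current load, return the name
    of the node reporting the lowest load."""
    return min(nodes, key=nodes.get)

loads = {"gw1": 12, "gw2": 3, "gw3": 7}
print(least_loaded(loads))   # prints "gw2"
```

In Erlang terms each gateway would poll the candidate nodes (e.g. via rpc calls) for a load metric and reply to the client with the winner, so no single master holds the routing state.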

What is “Reverse Proxy” and “Load Balancing” in Nginx / Web server terms?

强颜欢笑 · Submitted on 2019-12-02 23:57:28
These are two phrases I hear about very often, mainly associated with Nginx. Can someone give me a layman's definition? Definitions are often difficult to understand; I guess you just need some explanation of their use case. A short explanation is: load balancing is one of the functionalities of a reverse proxy, and a reverse proxy is one kind of software that can do load balancing. A longer explanation is given below. For example, a service of your company has customers in the UK and Germany. Because policy differs between these two countries, your company has two web servers, uk.myservice.com …
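The two terms can be shown in one minimal Nginx sketch (hostnames and addresses invented): the server block answering for uk.myservice.com is the reverse proxy, and listing more than one server in the upstream is the load balancing.

```nginx
upstream uk_backend {
    server 10.0.1.1:8080;   # load balancing = more than one entry here
    server 10.0.1.2:8080;
}
server {
    listen 80;
    server_name uk.myservice.com;
    location / {
        proxy_pass http://uk_backend;   # reverse proxying = clients never see the backends
    }
}
```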

Why does Elastic Load Balancing report 'Out of Service'?

假如想象 · Submitted on 2019-12-02 21:42:22
I am trying to set up Elastic Load Balancing (ELB) in AWS to split requests between multiple instances. I have created several images of my web server based on the same AMI, and I am able to ssh into each individually and access the site via each distinct public DNS. I have added each of my instances to the load balancer, but they all come back with Status: Out of Service because they failed the health check. I'm mostly confused because I can access each instance from its public DNS, but I get a timeout whenever I visit the load balancer's DNS name. I've been trying to read through all …
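The two usual causes of this symptom are a health-check target that does not return HTTP 200, and an instance security group that does not allow traffic from the load balancer. As a sketch (load balancer name and path are assumptions), the classic-ELB health check can be pointed at a known-good URL with the AWS CLI:

```shell
# Assumes a classic ELB named my-lb and a /health endpoint returning 200.
aws elb configure-health-check \
    --load-balancer-name my-lb \
    --health-check Target=HTTP:80/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```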

What is the conceptual difference between Service Discovery tools and Load Balancers that check node health?

牧云@^-^@ · Submitted on 2019-12-02 21:04:43
Recently several service-discovery tools have become popular/"mainstream", and I'm wondering under what primary use cases one should employ them instead of traditional load balancers. With LBs, you cluster a bunch of nodes behind the balancer, and clients make requests to the balancer, which then (typically) round-robins those requests across all the nodes in the cluster. With service discovery ( Consul , ZK , etc.), you let a centralized "consensus" service determine which nodes for a particular service are healthy, and your app connects to the nodes the service deems healthy. So …
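The client-side pattern described above can be sketched in a few lines (Python; the node records stand in for whatever the discovery service returns): filter the registry down to healthy nodes, then round-robin over them in the client itself, with no balancer in the data path.

```python
import itertools

def healthy_round_robin(nodes):
    """Cycle over only the nodes the discovery service marks healthy."""
    healthy = [n["addr"] for n in nodes if n["healthy"]]
    return itertools.cycle(healthy)

nodes = [
    {"addr": "10.0.0.1", "healthy": True},
    {"addr": "10.0.0.2", "healthy": False},   # skipped: unhealthy
    {"addr": "10.0.0.3", "healthy": True},
]
rr = healthy_round_robin(nodes)
print(next(rr), next(rr), next(rr))   # 10.0.0.1 10.0.0.3 10.0.0.1
```

The conceptual difference this highlights: with a load balancer the balancing decision lives in a middlebox; with service discovery it moves into each client, which must re-query the registry as health changes.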