load-balancing

Dynamic wildcard subdomain ingress for Kubernetes

Submitted by 柔情痞子 on 2019-12-02 20:42:47
I'm currently using Kubernetes on GKE to serve the various parts of my product on different subdomains with the Ingress resource. For example: api.mydomain.com, console.mydomain.com, etc.

ingress.yml (current):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
      - host: api.mydomain.com
        http:
          paths:
          - backend:
              serviceName: api-service
              servicePort: 80
      - host: console.mydomain.com
        http:
          paths:
          - backend:
              serviceName: console-service
              servicePort: 80

That works wonderfully, with the L7 GCE load balancer routing to the appropriate places. What I would like to do,
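For reference, ingress controllers differ on this point: the GCE L7 controller does not match wildcard hosts, while the community nginx ingress controller does. A minimal sketch, assuming a hypothetical dispatch-service as the catch-all backend:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wildcard-ingress
  annotations:
    # GCE's controller ignores wildcard hosts, so route through nginx instead
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "*.mydomain.com"
    http:
      paths:
      - backend:
          serviceName: dispatch-service   # hypothetical catch-all service
          servicePort: 80
```

The catch-all service would then inspect the Host header itself to decide where each subdomain's traffic goes.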

How to use S3 as static web page and EC2 as REST API for it together? (AWS)

Submitted by 爷，独闯天下 on 2019-12-02 20:41:44
With AWS services we have a web application running from an S3 bucket and accessing its data through a REST API behind a Load Balancer (a set of Node.js applications running on EC2 instances). Currently we have specified the URLs as follows:

    API Load Balancer: api.somedomain.com
    Static Web App on S3: somedomain.com

But this setup brought us a set of problems, since requests are cross-origin (CORS) with it. We could work around CORS with special headers, but that doesn't work in all browsers. What we want to achieve is running the API on the same domain but under a different path: API Load
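One common way to serve both from a single domain, and so sidestep CORS entirely, is to put a reverse proxy (or a CDN with two origins and a path-based behavior) in front of both. A minimal nginx sketch, with hypothetical bucket and load-balancer hostnames:

```nginx
server {
    listen 80;
    server_name somedomain.com;

    # API calls go to the load balancer under a path prefix
    location /api/ {
        proxy_pass http://api-elb-123.us-east-1.elb.amazonaws.com/;
        proxy_set_header Host $host;
    }

    # Everything else is served from the S3 static website endpoint
    location / {
        proxy_pass http://somedomain.com.s3-website-us-east-1.amazonaws.com/;
    }
}
```

From the browser's point of view every request now goes to somedomain.com, so no cross-origin headers are needed.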

How is load balancing done in Docker-Swarm mode

Submitted by 自作多情 on 2019-12-02 20:27:57
I'm working on a project to set up a cloud architecture using docker-swarm. I know that with swarm I can deploy replicas of a service, which means multiple containers of that image will be running to serve requests. I also read that docker has an internal load balancer that manages this request distribution. However, I need help understanding the following: say I have a container that exposes a service as a REST API, or say it's a web app. And if I have multiple containers (replicas) deployed in the swarm, and I have other containers (running some apps) that talk to this HTTP/REST service.
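For context on how swarm's routing mesh behaves: each service gets a stable virtual IP, and connections to that VIP are spread across the replica tasks in roughly round-robin fashion, so callers never address individual replicas. A toy Python simulation of that behaviour (not Docker itself):

```python
from itertools import cycle

class SwarmVIP:
    """Toy model of swarm's service VIP: callers hit one stable
    address, and connections are spread over replicas round-robin."""
    def __init__(self, replicas):
        self._ring = cycle(replicas)

    def route(self):
        # Each new connection lands on the next replica in turn
        return next(self._ring)

vip = SwarmVIP(["task.1", "task.2", "task.3"])
hits = [vip.route() for _ in range(6)]
print(hits)  # each replica is hit twice, in rotation
```

In a real swarm this happens transparently: other containers just resolve the service name via swarm's DNS and the mesh does the spreading.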

Heuristic algorithm for load balancing among threads

Submitted by 梦想的初衷 on 2019-12-02 19:43:07
I'm working on a multi-threaded program where I have a number of worker threads performing tasks of unequal length. I want to load-balance the tasks to ensure that they do roughly the same amount of work. For each task T_i I have a number c_i which provides a good approximation of the amount of work required for that task. I'm looking for an efficient (O(N), N = number of tasks, or better) algorithm which will give me a "roughly" good load balance given the values of c_i. It doesn't have to be optimal, but I would like to be able to have some theoretical bounds on how bad the resulting
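A standard heuristic for this problem is greedy longest-processing-time (LPT) scheduling: sort the tasks by c_i descending, then always hand the next task to the least-loaded worker. It costs O(N log N) rather than O(N), but it comes with the kind of bound asked for: the resulting makespan is provably within 4/3 of optimal. A sketch:

```python
import heapq

def lpt_schedule(costs, n_workers):
    """Greedy LPT: largest tasks first, each to the least-loaded worker.
    O(N log N); makespan is at most 4/3 of the optimal balance."""
    loads = [(0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(n_workers)]
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, w = heapq.heappop(loads)           # least-loaded worker
        assignment[w].append(i)
        heapq.heappush(loads, (load + costs[i], w))
    return assignment

costs = [7, 5, 4, 3, 3, 2]
print(lpt_schedule(costs, 2))  # [[0, 3, 5], [1, 2, 4]] -> loads 12 and 12
```

If true O(N) matters more than the bound, dropping the sort (plain greedy list scheduling) still guarantees a makespan within 2x of optimal.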

understanding load balancing in asp.net

Submitted by 两盒软妹~` on 2019-12-02 19:06:38
I'm writing a website that is going to start using a load balancer, and I'm trying to wrap my head around it. Does IIS just do all the balancing for you? Do you have a separate web layer sitting on the distributing server that does some work before sending to the sub-server, like auth or other work? A lot of the articles I keep reading don't really give me a straight answer, or I'm just not understanding them correctly. I'd like to get my head around how true load balancing works from a technical side, and if anyone has any code to share that would also be nice. I understand
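One piece worth understanding concretely is session affinity: the balancer usually sits in front of identical web servers and picks one per request, but if the ASP.NET app keeps session state in memory, a given user must land on the same backend every time. A hypothetical Python sketch of cookie-hash affinity (server names are made up):

```python
import hashlib

SERVERS = ["web1", "web2", "web3"]  # hypothetical identical backends

def pick_server(session_id):
    """Stable hash of the session cookie: the same user always
    lands on the same backend (simple cookie affinity)."""
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Same cookie -> same server, every time
print(pick_server("abc123"), pick_server("abc123"))
```

The alternative, which lets the balancer spread requests freely, is to move session state out of process (e.g. into a shared state server or database) so any backend can serve any request.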

How load balancer works in RabbitMQ

Submitted by 那年仲夏 on 2019-12-02 18:19:23
I am new to RabbitMQ, so please excuse the trivial questions: 1) In the case of clustering in RabbitMQ, if a node fails, the load shifts to another node (without stopping the other nodes). Similarly, we can also add fresh nodes to the existing cluster without stopping the existing nodes. Is that correct? 2) Assume that we start with a single RabbitMQ node and create 100 queues on it. Now producers start sending messages at a faster rate. To handle this load, we add more nodes and make a cluster. But the queues exist on the first node only. How is the load balanced among nodes now? And if we need to
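One detail behind question 2: in a classic RabbitMQ cluster, a queue's contents live entirely on the node where the queue was declared; other nodes only proxy connections to it. So adding nodes does not move existing queues, and one way to spread load is to spread queue declarations across nodes, for example by hashing the queue name. An illustrative Python sketch (node names are hypothetical):

```python
from zlib import crc32

NODES = ["rabbit@node1", "rabbit@node2", "rabbit@node3"]

def home_node(queue_name):
    """Pick a deterministic 'home' node for a queue by hashing its name.
    Declaring the queue while connected to that node places it there."""
    return NODES[crc32(queue_name.encode()) % len(NODES)]

counts = {n: 0 for n in NODES}
for i in range(100):
    counts[home_node(f"queue-{i}")] += 1
print(counts)  # the 100 queues end up spread across the three nodes
```

RabbitMQ also ships a consistent-hash exchange plugin that does this kind of spreading at the message level rather than by queue placement.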

Percentage load balance thread requests

Submitted by 你。 on 2019-12-02 17:11:10
I have a pool of worker threads to which I send requests based on percentages. For example, worker 1 must process 60% of total requests, worker 2 must process 31% of total requests, and lastly worker 3 processes 9%. I need to know, mathematically, how to scale down the numbers while maintaining the ratio, so I don't have to send 60 requests to worker 1 before starting to send requests to worker 2. It sounds like a "linear scale" math approach. In any case, all input on this issue is appreciated.
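This interleaving problem is exactly what smooth weighted round-robin solves (the scheme nginx uses for weighted upstreams): every pick, each worker accumulates its weight, the highest accumulator wins, and the winner pays back the total. Over 100 picks the counts come out exactly 60/31/9, but interleaved rather than in runs. A sketch:

```python
def weighted_sequence(weights, n):
    """Smooth weighted round-robin: picks follow the given weights
    exactly over sum(weights) rounds, without long runs on one worker."""
    current = [0] * len(weights)
    total = sum(weights)
    out = []
    for _ in range(n):
        for i, w in enumerate(weights):
            current[i] += w                       # everyone accumulates
        best = max(range(len(weights)), key=lambda i: current[i])
        current[best] -= total                    # winner pays back the total
        out.append(best)
    return out

seq = weighted_sequence([60, 31, 9], 100)
print([seq.count(i) for i in range(3)])  # [60, 31, 9]
```

Scaling the percentages down by their greatest common divisor first (when it is greater than 1) shortens the repeating cycle without changing the ratio.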

Elastic Load Balancing in EC2 [closed]

Submitted by 自闭症网瘾萝莉.ら on 2019-12-02 16:57:23
It's been on the cards for a while, but now that Amazon has released Elastic Load Balancing (ELB), what are your thoughts on deploying this solution for a high-traffic web application? Should we replace HAProxy, or consider ELB as a complementary service in front of HAProxy?

Answer 1 (arfon): I've been running an ELB instead of HAProxy for about a month now on a site that gets about 100,000 visits per day, and I've been pretty pleased with the results. A gotcha though (UPDATE: this issue has been fixed by Amazon AWS, see comments below): you can't load balance the root of a domain, as you have to create a

Why is Azure not dispatching HTTP requests on one of my two instances?

Submitted by 怎甘沉沦 on 2019-12-02 15:13:04
I have an Azure web role with two instances. Both instances are "ready" and running okay. On my desktop I have four instances of the same program running simultaneously, hitting the web role URL with HTTP requests. Yet according to the logs, all requests are dispatched to instance 0 only. I need requests to be dispatched to both instances to test concurrent operation. Why are requests not dispatched to the second instance, and how do I make them go there? Answer 1: This is likely from the

How do I set up global load balancing using Digital Ocean DNS and Nginx?

Submitted by 不羁岁月 on 2019-12-02 14:04:52
UPDATE: See the answer I've provided below for the solution I eventually set up on AWS. I'm currently experimenting with methods to implement a global load-balancing layer for my app servers on Digital Ocean, and there are a few pieces I've yet to put together. The Goal: offer a highly-available service to my users by routing all connections to the closest 'cluster' of servers in SFO, NYC, LON, and eventually Singapore. Additionally, I would eventually like to automate the maintenance of this by writing a daemon that can monitor, scale, and heal any of the servers in the system. Or I'll combine
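For reference, the "closest cluster" part has to happen at the DNS layer (geo- or latency-based records pointing each user at the nearest region's balancer); nginx then balances within a region and can fail over to remote clusters. A per-region sketch with hypothetical hostnames:

```nginx
# Balancer for one region (SFO shown); hostnames are illustrative.
upstream app_servers {
    least_conn;
    server app1.sfo.internal:8080;
    server app2.sfo.internal:8080;
    # Remote clusters only take traffic if the local servers are down
    server lb.nyc.mydomain.com:8080 backup;
    server lb.lon.mydomain.com:8080 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```

Plain round-robin DNS (what Digital Ocean's DNS offers) cannot do proximity routing by itself, which is why latency-based DNS services tend to end up in this picture.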