load-balancing

Custom client domains for my web service

Submitted by 你。 on 2019-12-11 10:34:09

Question: I have a web service running on EC2 behind an Elastic Load Balancer. I would like to allow my clients to point their A record to my web service so they can have their own domain on my server, similar to Shopify or GitHub Pages. However, I don't want to give them the IP of the web service; I'd like requests to go through the load balancer. How can I achieve this? Should I create a small server to forward requests? How does that work? Many thanks! Answer 1: If you are running your service behind an
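In this SaaS pattern, customers typically CNAME their domain to the load balancer's DNS name (an ELB has no fixed IP), and the application maps the incoming Host header to a tenant. A minimal sketch of that lookup; the domain names and tenant IDs here are invented for illustration:

```python
# Hypothetical tenant table: customer domains that CNAME to our load balancer.
TENANTS = {
    "shop.customer-a.com": "tenant-a",
    "blog.customer-b.com": "tenant-b",
}

def resolve_tenant(host_header):
    """Map the incoming Host header to a tenant, ignoring any port suffix."""
    host = host_header.split(":")[0].lower()
    return TENANTS.get(host)  # None means: domain not registered with us
```

Every request that reaches the service through the balancer carries the customer's domain in the Host header, so no per-customer IP ever needs to be exposed.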

How to configure apache with active passive setup

Submitted by 和自甴很熟 on 2019-12-11 08:51:45

Question: I have two servers, Server1 and Server2, both running Apache httpd with identical configurations. I want to create an active/passive setup for these servers. Server1 (lbserver.my.com), IP 192.168.10.88 (active); Server2 (lbserver.my.com), IP 192.168.10.89 (passive). Server1 should respond to HTTP requests; if Server1 goes down, Server2 should become the active server and respond to HTTP requests. Can anyone suggest how to achieve this? I tried this with keepalived configured on both the servers
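A common way to get this active/passive behaviour is keepalived managing a floating virtual IP via VRRP: clients talk to the VIP, which Server2 claims when Server1 fails. A sketch of keepalived.conf on the active node; the interface name and the VIP 192.168.10.90 are assumptions, not values from the question:

```conf
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on Server2
    interface eth0          # assumption: NIC carrying the service traffic
    virtual_router_id 51
    priority 100            # lower value (e.g. 90) on Server2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        192.168.10.90       # hypothetical shared VIP for lbserver.my.com
    }
}
```

DNS for lbserver.my.com would then point at the VIP rather than either server's own address.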

Kubernetes loadbalancer stops serving traffic if using local traffic policy

Submitted by 拟墨画扇 on 2019-12-11 07:58:15

Question: I am currently having an issue with one of my services, which is set to be a load balancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that causes this to fail? Load balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
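For reference, with externalTrafficPolicy: Local only nodes that actually run a matching pod pass the cloud load balancer's health check on the service's healthCheckNodePort, so a selector that matches no pods, or a firewall blocking the health-check range, drops all traffic. A sketch of the shape such a Service usually takes; the port numbers are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-test
  namespace: default
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: loadbalancer            # must match the pods' labels exactly
  ports:
  - port: 80
    targetPort: 8080             # assumption: container listen port
```

A quick check is whether `kubectl get endpoints lb-test` lists any addresses; an empty endpoints list with Local policy produces exactly this "no traffic" symptom.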

Forward specific urls on same domain to different servers

Submitted by 拜拜、爱过 on 2019-12-11 07:12:03

Question: During the rollout of a large site (IIS, .NET, EPiServer) with multiple markets, we want to forward a market to the new server once it has been added to the new web platform, but we still want to use the same domain.
www.customer.com/marketA -> old server, IP 1.1.1.1
www.customer.com/marketB -> old server, IP 1.1.1.1
www.customer.com/marketC -> new server, IP 2.2.2.2
What is best practice for this? Should we add a load balancer in front of the servers that, based on the URL, sends the
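One common approach is a reverse proxy in front of both servers that routes on the URL path, so the domain never changes while markets migrate one by one. A hedged nginx sketch of the mapping above (the proxy itself is an addition; only the IPs and paths come from the question):

```nginx
upstream old_server { server 1.1.1.1; }
upstream new_server { server 2.2.2.2; }

server {
    listen 80;
    server_name www.customer.com;

    # Migrated market goes to the new platform...
    location /marketC/ {
        proxy_pass http://new_server;
        proxy_set_header Host $host;
    }
    # ...everything else stays on the old one until it is moved.
    location / {
        proxy_pass http://old_server;
        proxy_set_header Host $host;
    }
}
```

Moving another market is then a one-line change: add a `location` block for it pointing at `new_server`.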

How to handle websocket connections on load balanced servers

Submitted by 丶灬走出姿态 on 2019-12-11 06:45:00

Question: Our .NET Core web app currently accepts websocket connections and pushes data out to clients on certain events (edit, delete, or create of some of our entities). We would like to load-balance this application now, but foresee a problem with how we handle the socket connections. Basically, if I understand correctly, only the node that handles a specific event will push data out to its clients, and none of the clients connected to the other nodes will get the update. What is a generally accepted way
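A generally accepted answer is a message "backplane": each node publishes entity events to a shared bus, and every node forwards bus messages to its own websocket clients, so clients on other nodes also get the update. In production the bus would typically be Redis pub/sub (this is what SignalR's Redis backplane does for .NET); the sketch below uses an in-memory stand-in just to show the fan-out pattern:

```python
class Backplane:
    """In-memory stand-in for a shared bus such as Redis pub/sub."""
    def __init__(self):
        self.nodes = []

    def register(self, node):
        self.nodes.append(node)

    def publish(self, event):
        # Every registered node receives every event, not just the publisher.
        for node in self.nodes:
            node.deliver(event)

class Node:
    """One load-balanced app instance holding its own websocket clients."""
    def __init__(self, backplane):
        self.clients = []          # stand-ins for local websocket connections
        self.backplane = backplane
        backplane.register(self)

    def on_entity_changed(self, event):
        # Publish to the bus instead of notifying only local clients.
        self.backplane.publish(event)

    def deliver(self, event):
        for client in self.clients:
            client.append(event)
```

The key point is that `on_entity_changed` never writes to sockets directly; it only publishes, and delivery happens uniformly on every node.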

reverse proxy confusion

Submitted by 前提是你 on 2019-12-11 06:41:36

Question: Currently I use nginx + Passenger to serve my Rails app. I have been doing some research on reverse proxies, and a few names keep coming up (mostly Squid, Varnish and nginx). Now, if I am using nginx as my web server, can I still use it as my reverse proxy? The general sense is that most sites use nginx for proxying static content and Apache/Mongrel or something similar for dynamic content. If I want to stick with my nginx + Passenger setup, what would my architecture look like when I introduce a
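Yes, one nginx instance can play both roles in a single server block: Passenger handles the dynamic Rails requests while nginx serves static assets itself. A sketch, with the filesystem paths being assumptions about the app layout:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/app/public;    # assumption: Rails public directory

    passenger_enabled on;        # dynamic requests go to the Rails app

    location ~ ^/assets/ {       # static content served by nginx directly
        expires max;
        add_header Cache-Control public;
    }
}
```

If an HTTP cache such as Varnish were added later, it would sit in front of this nginx on port 80, with nginx moved to an internal port as Varnish's backend.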

How to load-balance the workload of a service in .NET

Submitted by 扶醉桌前 on 2019-12-11 06:36:50

Question: I am thinking of building an application using a service-oriented architecture (SOA). This architecture is not as complex and messy as a microservices solution (I think), but I am facing similar design problems. Imagine I have services of type ServiceA that send work to services of type ServiceB. I guess that if I use a queue, load balancing will not be a problem (since consumers will take from the queue only what they can handle). But queues tend to generate some bad asynchrony in the code that
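The queue idea the question describes is the "competing consumers" pattern: ServiceB instances pull work at their own pace, so load balances itself without a dedicated balancer. A language-neutral sketch (in .NET the queue would be a broker such as RabbitMQ or Azure Service Bus; this in-memory version just shows the mechanics):

```python
import queue
import threading

def consumer(work_queue, results, name):
    """One ServiceB instance: drain items as fast as this instance can."""
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            return                      # no more work pending
        results.append((name, item))    # stand-in for processing the item
        work_queue.task_done()

# ServiceA side: enqueue work; any free consumer will pick it up.
def run_competing_consumers(n_items, n_consumers):
    work = queue.Queue()
    for i in range(n_items):
        work.put(i)
    results = []
    workers = [
        threading.Thread(target=consumer, args=(work, results, f"B{n}"))
        for n in range(n_consumers)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

Each item is processed exactly once, by whichever consumer got to it first; a slow instance simply takes fewer items.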

Kubernetes Pod warm-up for load balancing

Submitted by 梦想的初衷 on 2019-12-11 06:36:35

Question: We have a Kubernetes service whose pods take some time to warm up on their first requests. Basically, the first incoming requests read some cached values from Redis and may take a bit longer to process. When newly created pods become ready and receive full traffic, they can be unresponsive for up to 30 seconds before everything is correctly loaded from Redis and cached. I know we should definitely restructure the application to prevent this,
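One common mitigation, short of restructuring the app, is to gate readiness on the warm-up itself: the pod only joins the load balancer once a health endpoint confirms the Redis-backed cache is primed. A sketch of such a readinessProbe; the /warmup path and port are assumptions about an endpoint the app would have to expose:

```yaml
# Pod stays out of Service endpoints until /warmup reports the cache is hot.
readinessProbe:
  httpGet:
    path: /warmup          # hypothetical endpoint: pre-loads and checks cache
    port: 8080             # assumption: container listen port
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 12     # tolerate up to ~60s of warm-up before giving up
```

The /warmup handler itself can issue the expensive Redis reads, so by the time the probe succeeds, the first real requests hit an already-populated cache.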

Scaling issue while sending push notifications (in bulk) to all the devices subscribed to a topic using FCM

Submitted by ╄→гoц情女王★ on 2019-12-11 06:35:30

Question: I have subscribed all the devices to a topic, i.e. around 1 million users. When a notification is received on a device, there is an action button which calls a REST API. Now, if I trigger a notification to all the devices subscribed to that topic, all the users receive the notification and tap the action button, which calls the REST API to fetch data. Too many REST API calls push CPU utilisation to 100% and my server stops responding. Is there any way I can make FCM send
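Whatever FCM-side options exist, the usual defence against this thundering-herd pattern is client-side jitter: each device waits a random delay before making its follow-up API call, spreading one million calls over a window instead of one instant. A minimal sketch; the 5-minute window is an assumption to be tuned against server capacity:

```python
import random

MAX_DELAY_S = 300  # assumption: spread the follow-up calls over 5 minutes

def scheduled_delay():
    """Delay each client should wait before calling the REST API.

    With N clients and a uniform delay over MAX_DELAY_S seconds, the
    expected request rate drops from N at once to ~N / MAX_DELAY_S per
    second.
    """
    return random.uniform(0, MAX_DELAY_S)
```

On Android this delay would feed a scheduled job on notification receipt, so even simultaneous taps on the action button do not arrive at the server simultaneously.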

FAILED_NOT_VISIBLE Google-managed SSL certificate in Load Balancing

Submitted by 跟風遠走 on 2019-12-11 06:05:54

Question: I am working with Load Balancing to add HTTPS to my static website, and my domain is registered with GoDaddy. Initially I only had HTTP, so I pointed my domain via a CNAME to c.storage.googleapis.com with the domain name as the storage bucket name, made it public, and it worked. Now, to get HTTPS through Load Balancing, I created two frontend configurations. One is HTTP with a static IP and CDN enabled, pointing to my Cloud Storage bucket; I can reach my website with that static IP. With the same configuration I have
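For context, FAILED_NOT_VISIBLE typically means Google could not see the domain's DNS resolving to the load balancer when it tried to provision the certificate: the domain's A record must point at the HTTPS frontend's static IP (replacing the old CNAME to c.storage.googleapis.com) before provisioning can succeed. The per-domain status can be inspected with gcloud; the certificate name below is a placeholder:

```
gcloud compute ssl-certificates describe my-managed-cert \
    --global \
    --format="get(managed.status,managed.domainStatus)"
```

Once DNS propagates and the A record is visible, the status normally moves from PROVISIONING/FAILED_NOT_VISIBLE to ACTIVE on its own.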