load-balancing

Dynamic wildcard subdomain ingress for Kubernetes

Submitted by 自古美人都是妖i on 2019-12-20 10:13:03

Question: I'm currently using Kubernetes on GKE to serve the various parts of my product on different subdomains with the Ingress resource. For example: api.mydomain.com, console.mydomain.com, etc.

ingress.yml (current):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: api.mydomain.com
    http:
      paths:
      - backend:
          serviceName: api-service
          servicePort: 80
  - host: console.mydomain.com
    http:
      paths:
      - backend:
          serviceName: console-service
          servicePort: 80
```

That works …
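The dynamic wildcard the title asks about is expressible in newer API versions: the `networking.k8s.io/v1` Ingress schema (Kubernetes 1.19+) accepts a `*.` prefix in `host`. A minimal sketch, assuming a hypothetical `default-service` as the catch-all backend:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-ingress
spec:
  rules:
  - host: "*.mydomain.com"          # matches api.mydomain.com, console.mydomain.com, ...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: default-service   # hypothetical catch-all Service
            port:
              number: 80
```

A wildcard rule matches exactly one DNS label, so `*.mydomain.com` covers `foo.mydomain.com` but not `mydomain.com` itself or `a.b.mydomain.com`.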

What is “Reverse Proxy” and “Load Balancing” in Nginx / Web server terms?

Submitted by 房东的猫 on 2019-12-20 10:04:48

Question: These are two phrases I hear about very often, mainly associated with Nginx. Can someone give me a layman's definition?

Answer 1: Definitions are often difficult to understand. I guess you just need some explanation of their use case. A short explanation is: load balancing is one of the functionalities of a reverse proxy, and a reverse proxy is one of the pieces of software that can do load balancing. A longer explanation is given below. For example, a service of your company has customers in the UK and Germany. …
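Both terms show up in one place in a minimal nginx sketch (server names and addresses are hypothetical): the `server` block is the reverse proxy that fronts the backends, and the `upstream` block is where the load balancing across them happens.

```nginx
# nginx acting as a reverse proxy that load-balances two backends
upstream app_backend {
    server 10.0.0.11:8080;   # backend 1 (hypothetical address)
    server 10.0.0.12:8080;   # backend 2
    # default algorithm is round-robin; `least_conn;` is another option
}

server {
    listen 80;
    server_name service.example.com;

    location / {
        proxy_pass http://app_backend;   # forward requests to the pool
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```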

How to use S3 as static web page and EC2 as REST API for it together? (AWS)

Submitted by 末鹿安然 on 2019-12-20 09:49:25

Question: With AWS services we have a web application running from an S3 bucket and accessing data through a REST API behind a Load Balancer (a set of Node.js applications running on EC2 instances). Currently we have the URLs specified as follows: API Load Balancer: api.somedomain.com; static web app on S3: somedomain.com. But this setup brought us a set of problems, since requests are cross-origin (CORS) with it. We could work around CORS with special headers, but that doesn't work with all …
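Until both origins are unified behind a single domain, the API tier has to answer with CORS headers itself. A minimal sketch of that decision in Python (the allowlist is hypothetical, not from the question):

```python
# Sketch: deciding CORS response headers for an api.somedomain.com backend.
# The allowlist below is illustrative; adjust it to your own origins.
ALLOWED_ORIGINS = {"https://somedomain.com", "https://www.somedomain.com"}

def cors_headers(origin):
    """Return the CORS headers the API should attach for a given Origin."""
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type, Authorization",
            "Vary": "Origin",   # caches must key on the Origin header
        }
    return {}  # unknown origin: send no CORS headers at all
```

Echoing the specific allowed `Origin` (rather than `*`) is what lets credentialed requests through.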

Elastic Load Balancing in EC2 [closed]

Submitted by 早过忘川 on 2019-12-20 08:39:50

Question: Closed. This question is off-topic. It is not currently accepting answers. Want to improve this question? Update the question so it's on-topic for Stack Overflow. Closed 6 years ago. It's been on the cards for a while, but now that Amazon has released Elastic Load Balancing (ELB), what are your thoughts on deploying this solution for a high-traffic web application? Should we replace HAProxy, or consider ELB as a complementary service in front of HAProxy?

Answer 1: I've been running an ELB instead …
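For readers weighing the "ELB in front of HAProxy" option: ELB terminates incoming traffic and spreads it across the HAProxy instances, which then do the fine-grained routing and health checking. A minimal sketch of that inner HAProxy tier (names, addresses, and the health path are hypothetical):

```
# haproxy.cfg fragment -- the tier behind ELB (illustrative only)
frontend http_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    option httpchk GET /health        # active per-server health check
    server web1 10.0.1.10:8080 check
    server web2 10.0.1.11:8080 check
```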

Master/Slave Mysql Architecture vs Server/Read DB and Separate DB for writes only [closed]

Submitted by 烈酒焚心 on 2019-12-20 04:59:12

Question: It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 8 years ago. What are the advantages and disadvantages of a master/slave MySQL architecture versus load-balanced web servers, each with its own read-only DB, plus a separate server holding a MySQL database just for writes?
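The second architecture in the question implies the application must route statements itself: writes go to the single write database, reads to the read-only copies. A minimal sketch of that routing in Python (hostnames are hypothetical):

```python
import random

# Sketch of application-side read/write splitting, assuming one writer
# and a pool of read replicas. Hostnames are illustrative.
WRITER = "db-writer.internal"
READERS = ["db-read-1.internal", "db-read-2.internal"]

def pick_host(sql):
    """Route writes to the write DB and reads to a random read replica."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
        return WRITER
    return random.choice(READERS)
```

The classic trade-off either way is replication lag: a read that immediately follows a write may not yet see it on a replica, so some applications pin "read your own writes" queries to the writer.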

Does an Azure Web App care if its instances are healthy/unhealthy?

Submitted by 廉价感情. on 2019-12-20 03:53:14

Question: If I deploy a web app (formerly known as an Azure WebSite) to an App Hosting Plan in Azure with a couple of instances (scale = 2), will the load balancer in front of the instances care if any of the instances is unhealthy? I'm troubleshooting an issue that sometimes causes my site to return an HTTP 503 about 50% of the time. My thinking here is that one of my two instances has failed but the load balancer hasn't noticed. If the load balancer does care, what does it look for? I can't find any way …
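Whatever Azure's built-in behavior, the rule a health-probing load balancer typically applies is: probe an endpoint periodically and take an instance out of rotation only after several consecutive failures. A minimal sketch of that rule (the threshold is illustrative):

```python
class HealthTracker:
    """Marks an instance unhealthy after `threshold` consecutive failed probes."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def record(self, status_code):
        """Record one probe result; return True while still considered healthy."""
        if 200 <= status_code < 400:
            self.failures = 0          # any success resets the streak
        else:
            self.failures += 1
        return self.failures < self.threshold
```

A 50% failure rate across two instances is exactly the pattern you would see if one instance is dead and the balancer keeps round-robining to it, which is why the consecutive-failure eviction matters.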

Google load balancer force https

Submitted by 一世执手 on 2019-12-20 02:55:09

Question: I'm not sure if this is possible (it wasn't last year according to the internet), but I'm hoping it's available now. Is there any way of using the Google load balancer to force HTTPS connections only, i.e. to get the load balancer to redirect HTTP requests? I can do it at the backend server, but I would rather have this handled by the load balancer. Thanks in advance, Max

Answer 1: I think not. As far as I know, forcing HTTPS is not a managed feature of Google Cloud Load Balancer. It will not redirect HTTP to …
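The backend-side workaround the asker mentions relies on the protocol header the load balancer forwards; Google's HTTP(S) load balancer sets `X-Forwarded-Proto` on proxied requests. A sketch of the backend rule:

```python
def https_redirect(headers, host, path):
    """Return the HTTPS URL to 301-redirect to when the load balancer
    forwarded a plain-HTTP request, or None if it was already HTTPS."""
    if headers.get("X-Forwarded-Proto", "http") == "https":
        return None
    return "https://" + host + path
```

The backend trusts `X-Forwarded-Proto` rather than its own socket, because the LB-to-backend hop may be plain HTTP even when the client connected over TLS.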

nginx loadbalancer Too many open files

Submitted by 爷,独闯天下 on 2019-12-19 10:28:37

Question: I have a load balancer and I get these kinds of errors:

```
2017/09/12 11:18:38 [crit] 22348#22348: accept4() failed (24: Too many open files)
2017/09/12 11:18:38 [alert] 22348#22348: *4288962 socket() failed (24: Too many open files) while connecting to upstream, client: x.x.x.x, server: example.com, request: "GET /xxx.jpg HTTP/1.1", upstream: "http://y.y.y.y:80/xxx.jpg", host: "example.com", referrer: "https://example.com/some-page"
2017/09/12 11:18:38 [crit] 22348#22348: *4288962 open() "/usr/local
```
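Error 24 is EMFILE: the worker process ran out of file descriptors, and a proxying nginx burns roughly two per connection (one client-side, one upstream-side). A common remedy is raising the limit in nginx itself; the values below are illustrative, not prescriptive:

```nginx
# /etc/nginx/nginx.conf fragment -- illustrative values
worker_rlimit_nofile 65536;    # per-worker FD limit nginx sets for itself

events {
    worker_connections 16384;  # keep well below worker_rlimit_nofile,
                               # since proxying uses ~2 FDs per connection
}
```

The system-wide limit (`fs.file-max`, and any systemd `LimitNOFILE` for the nginx unit) must also be high enough, or the nginx setting has no effect.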

Curl to Google Compute load balancer gets error 502

Submitted by 江枫思渺然 on 2019-12-19 05:34:39

Question: If I curl a POST request with a file upload to my Google Compute load balancer (LB), I get a 502 error. If I do the same curl to the worker node behind the LB, it works. If I use a library like PHP Guzzle, it works either way. If I do a basic GET request on the LB, I get the correct response, but the worker log does not acknowledge receiving the request, as if the LB had cached it. What is going on? FYI, I'm a Google LB newbie. Thanks. Edit: I'm using the GCE HTTP LB. The curl command looks like this: curl http:/
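One frequent culprit for 502s that appear only with curl file uploads is curl's automatic `Expect: 100-continue` header, which it adds for larger POST bodies and which some proxies handle poorly; libraries such as Guzzle typically don't send it, which would match the "works with Guzzle" symptom. A hedged test (URL and form field are placeholders) is to suppress the header:

```
curl -H 'Expect:' -F 'file=@upload.jpg' http://LB_IP/upload
```

Passing `-H 'Expect:'` with an empty value tells curl to omit the header entirely, so the body is sent immediately instead of waiting for a `100 Continue` from the proxy.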