load-balancing

Service Fabric URL routing

本秂侑毒 submitted on 2019-12-19 04:12:48
Question: I am using the Azure Load Balancer with Azure Service Fabric to host multiple self-hosted web applications, and I'd like to create a rule that lets me route based on the user's URL request. So, for example, if a user navigates to http://domain.com/Site1, the rule would route to http://domain.com:8181/Site1 within the cluster; if the user navigates to http://domain.com/Site2, the rule would route to http://domain.com:8282/Site2 within the cluster. Is this possible with Azure

symfony2 behind Amazon ELB: always trust proxy data?

孤街浪徒 submitted on 2019-12-19 03:59:16
Question: I'm running a Symfony2 web application on AWS and am using an Elastic Load Balancer. In a controller method, I need to do the following to get the IP of a user requesting a web page: `$request->trustProxyData(); $clientIp = $request->getClientIp(True);` Does this present any security risks? I'm not using the client IP for privilege escalation; I'm just logging it. Is there some way to force trustProxyData() always, or otherwise reconfigure $request->getClientIp() to DWIM? My app will always be
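Setting Symfony's API aside, the security question comes down to what "trusting proxy data" means: the client IP is read from the `X-Forwarded-For` header, which any client can forge unless you only honor it when the direct TCP peer is a known proxy (here, the ELB). A minimal sketch of that logic, with all names illustrative rather than Symfony's:

```javascript
// remoteAddr:     the TCP peer address (the ELB, when behind a load balancer)
// xff:            the X-Forwarded-For header, e.g. "client, proxy1, proxy2"
// trustedProxies: addresses we accept forwarded headers from
function clientIp(remoteAddr, xff, trustedProxies) {
  // Only honor X-Forwarded-For if the direct peer is a trusted proxy;
  // otherwise any client could spoof the header.
  if (trustedProxies.indexOf(remoteAddr) === -1 || !xff) {
    return remoteAddr;
  }
  // The left-most entry is the originating client.
  return xff.split(',')[0].trim();
}
```

So the risk in trusting the header unconditionally is spoofed log entries, not privilege escalation; since the app sits permanently behind an ELB, trusting the proxy for logging purposes is the intended use.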

ASP.NET Forms Authentication on Load Balanced Servers

那年仲夏 submitted on 2019-12-18 13:26:56
Question: Are there any possible issues with using the default Forms Authentication (see below) on load-balanced servers? If there can be, what can I do to prevent the issues?

```xml
<authentication mode="Forms">
  <forms loginUrl="~/Login/" protection="All" timeout="30" />
</authentication>
```

Can I use cookies (used by default)? Do I have to go cookieless? etc. Also, does Microsoft (or VMware) have a Virtual PC download that is an instant load-balanced testing environment?

Answer 1: There is one issue. The cookies
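The answer is cut off, but the classic pitfall it is heading toward is this: forms-authentication cookies are encrypted and signed with the server's `machineKey`, which is auto-generated per machine by default, so tickets issued by one server in the farm are rejected by another. The usual fix is to configure an identical, explicit `machineKey` on every server. A hedged sketch of the config (the key values below are placeholders; generate your own, and algorithm choices vary by framework version):

```xml
<system.web>
  <!-- Must be identical on every server in the farm; the values below
       are placeholders, not real keys. -->
  <machineKey validationKey="[64+ hex characters]"
              decryptionKey="[48 hex characters]"
              validation="SHA1"
              decryption="AES" />
  <authentication mode="Forms">
    <forms loginUrl="~/Login/" protection="All" timeout="30" />
  </authentication>
</system.web>
```

With matching keys, cookie-based forms authentication works fine across load-balanced servers and there is no need to go cookieless.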

How do I cluster ServiceMix?

孤者浪人 submitted on 2019-12-18 13:17:11
Question: I am looking for some initial pointers on how to cluster a ServiceMix solution. Basically, what I need is: having 2 (or more) ServiceMix instances serving my routing needs and sharing the load; if one instance fails, the other(s) continue to serve; if the failed one is brought back to life, it rejoins the party. Searching for information confuses me, since some references (e.g. http://trenaman.blogspot.fi/2010/04/four-things-you-need-to-know-about-new.html) talk about a "JBI cluster engine". I don't want

How to Scale Node.js WebSocket Redis Server?

戏子无情 submitted on 2019-12-18 10:52:42
Question: I'm writing a chat server for Acani, and I have some questions about scaling Node.js and WebSockets with a load balancer. What exactly does it mean to load balance Node.js? Does that mean there will be n independent instances of my server application running, each on a separate server? To allow one client to broadcast a message to all the others, I store a set of all the WebSocket connections opened on the server. But if I have n independent instances of my server application running,
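The question stops just before the usual answer: yes, load balancing means n independent instances, and a per-instance set of WebSocket connections only covers that instance's clients. The standard pattern (which is where Redis comes in) is to relay broadcasts through a shared pub/sub channel so every instance delivers to its own local sockets. A minimal sketch of that fan-out, using an in-memory stand-in for the Redis channel and plain objects for WebSockets (all names here are illustrative):

```javascript
// In-memory stand-in for a Redis pub/sub channel.
function Channel() {
  this.subscribers = [];
}
Channel.prototype.subscribe = function (fn) { this.subscribers.push(fn); };
Channel.prototype.publish = function (msg) {
  this.subscribers.forEach(function (fn) { fn(msg); });
};

// One "server instance": keeps only its own connections and relays
// broadcasts through the shared channel.
function Instance(channel) {
  this.connections = []; // stand-ins for open WebSockets
  this.channel = channel;
  var self = this;
  channel.subscribe(function (msg) {
    // Deliver to every locally connected client.
    self.connections.forEach(function (conn) { conn.received.push(msg); });
  });
}
Instance.prototype.connect = function () {
  var conn = { received: [] };
  this.connections.push(conn);
  return conn;
};
Instance.prototype.broadcast = function (msg) {
  // Publish instead of writing only to local sockets, so clients
  // connected to other instances receive the message too.
  this.channel.publish(msg);
};
```

In production the `Channel` would be a Redis client with `PUBLISH`/`SUBSCRIBE`, and the connections would be real WebSocket objects; the shape of the logic is the same.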

Explanation of Tenant Load Balancer in SaaS maturity model level 4

一笑奈何 submitted on 2019-12-18 05:12:29
Question: I've already done some research on SaaS maturity levels based on Gianpaolo's SaaS maturity model. Right now I'm confused about SaaS maturity level 4. It is said to have a "tenant load balancer" that dynamically starts a new application instance to balance the load for a SaaS application. I want to know what this "tenant load balancer" really means. How do we implement this "tenant load balancer" in the real world, or in an application server? Can anyone give me a good explanation and an
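One plausible reading of the model (this sketch is my interpretation, not something defined by Gianpaolo's paper) is a router that assigns tenants to shared application instances and "dynamically calls a new instance" when the existing ones are at capacity. All names below are hypothetical:

```javascript
// Illustrative tenant load balancer: routes each tenant to an
// application instance, starting a new instance when all are full.
function TenantBalancer(capacityPerInstance) {
  this.capacity = capacityPerInstance;
  this.instances = [];   // each: { id, tenants: [] }
  this.assignments = {}; // tenantId -> instance
}
TenantBalancer.prototype.route = function (tenantId) {
  // Sticky per tenant: reuse an existing assignment.
  if (this.assignments[tenantId]) return this.assignments[tenantId];
  // Find an instance with spare capacity.
  var inst = null;
  for (var i = 0; i < this.instances.length; i++) {
    if (this.instances[i].tenants.length < this.capacity) {
      inst = this.instances[i];
      break;
    }
  }
  // All full (or none yet): dynamically "start" a new instance.
  if (!inst) {
    inst = { id: this.instances.length, tenants: [] };
    this.instances.push(inst);
  }
  inst.tenants.push(tenantId);
  this.assignments[tenantId] = inst;
  return inst;
};
```

In a real deployment, "starting an instance" would mean asking the platform (an app server pool, a container orchestrator, or a cloud scale set) for a new node rather than pushing an object into an array; the tenant-to-instance bookkeeping is the part the model calls the tenant load balancer.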

Tomcat load balancer solutions

心不动则不痛 submitted on 2019-12-18 04:54:12
Question: I'm looking for a good load balancer to use with Tomcat. Our application doesn't store anything in the session context, so it isn't important to redirect the same user to the same server. I'd simply like something that can queue requests round-robin style, or based on each server's individual load. I'd also like to be able to add application servers to those available to handle requests without having to restart the load balancer. We're running the application on Linux, if that matters.

Answer 1: If
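For illustration, the setup the question asks for can be sketched as an nginx configuration, one common choice for fronting Tomcat on Linux (the hostnames and ports below are hypothetical):

```nginx
# Round-robin across backends by default; uncomment least_conn to
# route by each server's current connection load instead.
upstream tomcat_cluster {
    # least_conn;
    server app1.example.com:8080;
    server app2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_cluster;
    }
}
```

Adding a server means appending another `server` line to the upstream block and running `nginx -s reload`, which re-reads the configuration without dropping in-flight requests, so no full restart of the load balancer is needed.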

Does Node.js support load balancing across multiple servers?

不羁岁月 submitted on 2019-12-18 04:52:17
Question: I'm curious about horizontal scaling in Node.js. Is it possible to load balance across multiple virtual servers, like Rackspace cloud servers? I read about the cluster plugin, but I think it is only for a single server with a multi-core CPU.

Answer 1: Try roundrobin.js for node-http-proxy:

```javascript
var httpProxy = require('http-proxy');

//
// A simple round-robin load balancing strategy.
//
// First, list the servers you want to use in your rotation.
//
var addresses = [
  { host: 'ws1.0.0.0', port: 80 },
  { host: 'ws2.0.0.0', port: 80 }
];

httpProxy.createServer(function (req, res, proxy) {
  // Take the next target, proxy the request to it, and rotate it
  // to the back of the list.
  var target = addresses.shift();
  proxy.proxyRequest(req, res, target);
  addresses.push(target);
}).listen(8000);
```

Enabling sticky sessions on a load balancer

魔方 西西 submitted on 2019-12-18 03:48:24
Question: Any advice on this one would be greatly appreciated; I've been researching all morning and I'm still scratching my head. I started at a new company a few weeks ago, where I'm the only .NET developer, as the development was originally done by an outsourcing company, and I've been asked to research this. My knowledge of the existing system is extremely limited, but from what I can gather the situation is as follows: we would like to enable sticky sessions on an ASP.NET web site. From my research I have
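Whatever the specific load balancer in use, cookie-based sticky sessions follow one simple mechanic, which may help with the research: on a client's first request the balancer picks a backend (round-robin, say) and records the choice in an affinity cookie; later requests that present the cookie are routed to the same backend. A sketch of that logic, with all names hypothetical (real balancers such as IIS ARR or hardware LBs implement this for you):

```javascript
// Cookie-based sticky routing, as a load balancer implements it.
function StickyRouter(backends) {
  this.backends = backends;
  this.next = 0; // round-robin cursor for first-time clients
}
// cookie: the value of a hypothetical affinity cookie, or null.
// Returns { backend, setCookie }; setCookie is non-null when the
// balancer must issue the affinity cookie to the client.
StickyRouter.prototype.route = function (cookie) {
  var idx = parseInt(cookie, 10);
  if (!isNaN(idx) && idx >= 0 && idx < this.backends.length) {
    // Returning client: honor the cookie.
    return { backend: this.backends[idx], setCookie: null };
  }
  // New client (or stale cookie): assign the next backend in rotation.
  idx = this.next;
  this.next = (this.next + 1) % this.backends.length;
  return { backend: this.backends[idx], setCookie: String(idx) };
};
```

Note that for an ASP.NET site, stickiness is often only a workaround: sharing session state properly (StateServer or SQL session state) removes the need for it.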

GCE LoadBalancer : Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1

天大地大妈咪最大 submitted on 2019-12-17 20:12:06
Question: In one of my HTTP(S) load balancers, I wish to change my backend configuration to increase the timeout from 30s to 60s (we have a few 502s that don't have any server-side logs; I want to check whether they come from the LB). But as I validate the change, I get an error saying `Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1`, even though I didn't change the namedPort. This issue seems to be the same, but the only solution is a workaround that does not work in my case