HAProxy

The HAProxy Scheduler in Detail

Submitted by 余生长醉 on 2019-11-28 08:16:04
Test environment — scheduler (HAProxy): 192.168.4.5/24; client: 192.168.4.10/24.
1. HAProxy characteristics:
  1) It is a free, fast, and reliable solution.
  2) It suits heavily loaded web sites, which typically also need session persistence or layer-7 processing.
  3) It provides high availability, load balancing, and proxying for TCP- and HTTP-based applications.
2. HAProxy working modes:
  1) mode http — client requests are inspected in depth before being forwarded to the servers.
  2) mode tcp — layer-4 scheduling; layer-7 information is not examined.
  3) mode health — performs health checks only; rarely used.
3. HAProxy configuration file: the file can be built from the following sections:
  1) defaults — sets default parameters for the sections that follow; those defaults can be overridden later.
  2) frontend — describes the set of listening sockets that accept client connections.
  3) backend — describes the set of servers that connections are forwarded to.
  4) listen — a complete declaration combining a frontend and a backend.
4. Configuration file layout (/etc/haproxy/haproxy.cfg):
  global
    log 127.0.0.1 local2   ##[err warning info debug]
    chroot /usr/local/haproxy
    pidfile /var/run/haproxy.pid
    #
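To make that section layout concrete, here is a minimal sketch of how the pieces fit together, assuming the 192.168.4.x test addresses above and a hypothetical backend server at 192.168.4.100; it is an illustration, not the article's full configuration:

```
global
    log 127.0.0.1 local2            # send logs to the local syslog daemon
    chroot /usr/local/haproxy
    pidfile /var/run/haproxy.pid
    daemon

defaults
    mode http                       # layer-7 processing, as described above
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_in
    bind 192.168.4.5:80             # the scheduler's address
    default_backend web_out

backend web_out
    server web1 192.168.4.100:80 check   # hypothetical backend server
```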

An Introduction to HAProxy

Submitted by Deadly on 2019-11-28 07:23:56
Contents: introduction to load balancing · why use load balancing · types of load balancing · application scenarios · introduction to HAProxy · HAProxy features.

Introduction to load balancing: load balancing (Load Balance, LB) is a high-availability reverse-proxy technique implemented as a service or on dedicated hardware. It distributes a given workload (web service, network traffic, and so on) across one or more designated backend servers or devices, which raises the business's concurrent processing capacity, keeps the service highly available, and makes later horizontal scaling straightforward.

Why use load balancing:
- Dynamic horizontal scaling of web servers, transparent to users
- Greater concurrency and processing capacity, removing the single-server bottleneck
- Fewer public IP addresses, lowering IT costs
- Hidden internal server IPs, improving internal security
- Simple configuration through a fixed-format configuration file
- Rich features: layer-4 and layer-7 modes, dynamic removal of hosts
- Strong performance: tens or even hundreds of thousands of concurrent connections

Types of load balancing:
Layer 4: 1. LVS (Linux Virtual Server) 2. HAProxy (High Availability Proxy) 3. Nginx
Layer 7: 1. HAProxy 2. Nginx
Hardware: 1. F5: https://f5.com/zh 2. Netscaler: https://www.citrix.com.cn/products/citrix-adc/ 3. Array: https://www.arraynetworks.com.cn/ 4. 深信服 (Sangfor)

Compiling and Installing HAProxy

Submitted by £可爱£侵袭症+ on 2019-11-28 07:23:51
Contents: compiling HAProxy 2.0.4 · the Lua scripting language · HAProxy.

Lua scripting language:
Download: curl -R -O http://www.lua.org/ftp/lua-5.3.5.tar.gz
Build dependencies: yum -y install libtermcap-devel ncurses-devel libevent-devel readline-devel gcc gcc-c++
Install:
  cd /usr/local/src
  tar xvf ~/lua-5.3.5.tar.gz
  cd lua-5.3.5
  make linux test
Check the version: ./src/lua -v

HAProxy:
wget http://www.haproxy.org/download/2.0/src/haproxy-2.0.4.tar.gz
Build environment: yum -y install gcc gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel net-tools vim iotop bc zip unzip zlib-devel lrzsz tree screen lsof tcpdump wget ntpdate
Compile: make ARCH=x86_64 \
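The excerpt cuts off inside the make command. For orientation only, a typical HAProxy 2.0 build with OpenSSL and Lua support looks roughly like the sketch below; the exact flags depend on the environment, and the Lua paths assume the /usr/local/src/lua-5.3.5 tree built above:

```
# Illustrative sketch; adjust flags to your environment.
cd /usr/local/src/haproxy-2.0.4
make ARCH=x86_64 TARGET=linux-glibc \
     USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 \
     USE_LUA=1 LUA_INC=/usr/local/src/lua-5.3.5/src \
     LUA_LIB=/usr/local/src/lua-5.3.5/src
make install PREFIX=/usr/local/haproxy
```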

The HAProxy Configuration File in Detail

Submitted by 天涯浪子 on 2019-11-28 07:23:44
Configuration details: HAProxy's configuration file, /etc/haproxy/haproxy.cfg, consists of two major parts, global and proxies.

global — the global configuration section:
- process and security parameters
- performance-tuning parameters
- debug parameters

proxies — the proxy configuration section, made up of several sub-sections:
- defaults: provides defaults for frontend, backend, and listen
- listen: holds both frontend and backend configuration
The following are used less often here:
- frontend: the front end, analogous to a server { } block in nginx
- backend: the back end, analogous to an upstream { } block in nginx

global parameters (official documentation: https://cbonte.github.io/haproxy-dconv/2.0/intro.html):
global
    # lock the runtime directory
    chroot /usr/local/haproxy
    # global syslog servers; at most two may be defined
    log 127.0.0.1 local3 info
    # pid file path
    pidfile /var/run/haproxy.pid
    # run as a daemon
    daemon
    # maximum concurrent connections per haproxy process
    maxconn 4000
    # user and group that haproxy runs as
    user haproxy
    group haproxy
    # or: uid 99
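To make the nginx analogy above concrete, a minimal hedged sketch (section names and addresses are hypothetical):

```
# frontend ~ nginx "server { listen 80; ... }"
frontend main
    bind *:80
    default_backend app

# backend ~ nginx "upstream app { server ...; }"
backend app
    server app1 192.168.7.101:8080 check
    server app2 192.168.7.102:8080 check
```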

HAProxy Scheduling Algorithms

Submitted by 核能气质少年 on 2019-11-28 07:23:41
Contents: HAProxy scheduling algorithms — 1. static algorithms: static-rr, first; 2. dynamic algorithms: roundrobin, leastconn; 3. hybrid algorithms: source, uri, url_param, hdr, rdp-cookie, random; the difference between layer 4 and layer 7; IP passthrough: layer-4 IP passthrough, layer-7 IP passthrough.

HAProxy scheduling algorithms: HAProxy selects the scheduling algorithm for the backend servers through the balance parameter, which can be configured in a listen or backend section. The algorithms divide into static and dynamic ones, although some algorithms can switch between static and dynamic behavior depending on their parameters (see the sketch after this excerpt).

1. Static algorithms
Static algorithms schedule fairly by rules defined in advance; they do not take the backend servers' current load, connection count, or response speed into account, and weights cannot be changed at runtime — changes only take effect after a HAProxy restart.

1.1 static-rr
Weighted round-robin scheduling. It supports neither runtime weight adjustment nor slow start of backend servers; there is no limit on the number of backend hosts.

listen web_host
    bind 192.168.7.101:80,:8801-8810,192.168.7.101:9001-9010
    mode http
    log global
    balance static-rr
    server web1 192.168.7.103:80 weight 1 check inter
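The "switch between static and dynamic" remark refers to the hash-based algorithms (source, uri, url_param, hdr): with hash-type map-based they behave statically, while hash-type consistent makes them dynamic. A minimal sketch with hypothetical addresses:

```
backend web_hosts
    balance source            # hash clients by their source IP
    hash-type consistent      # consistent hashing -> dynamic behavior
    server web1 192.168.7.103:80 weight 1 check
    server web2 192.168.7.104:80 weight 1 check
```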

Duplicate TCP traffic with a proxy

Submitted by 自作多情 on 2019-11-28 04:58:32
I need to send (duplicate) traffic from one machine (port) to two different machines (ports), and I need to take care of the TCP sessions as well. In the beginning I used em-proxy, but its overhead seems quite large (it runs at over 50% CPU). Then I installed haproxy and managed to redirect traffic (though not to duplicate it); the overhead is reasonable (less than 5%). The problem is that I could not express the following in the haproxy config file: listen on a specific address:port, send whatever arrives to the two different machines:ports, and discard the answers from one of them. Em-proxy
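Stock HAProxy balancing forwards each connection to exactly one server, so a plain listen section can express the redirect the asker got working but not the duplication. A minimal sketch of that working redirect, with hypothetical addresses:

```
# Each connection goes to a single backend; standard HAProxy
# load balancing cannot mirror traffic to two servers at once.
listen tcp_in
    bind 0.0.0.0:5000
    mode tcp
    server primary 10.0.0.2:5000 check
```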

http keep-alive in the modern age

Submitted by 走远了吗. on 2019-11-28 02:36:46
So according to the haproxy author, who knows a thing or two about http: Keep-alive was invented to reduce CPU usage on servers when CPUs were 100 times slower. But what is not said is that persistent connections consume a lot of memory while not being usable by anybody except the client who opened them. Today in 2009, CPUs are very cheap and memory is still limited to a few gigabytes by the architecture or the price. If a site needs keep-alive, there is a real problem. Highly loaded sites often disable keep-alive to support the maximum number of simultaneous clients. The real downside of not
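For context, the trade-off the quote describes is exposed directly in HAProxy's own connection-handling options; a minimal sketch of disabling keep-alive (the option names are real HAProxy directives, the surrounding section is illustrative):

```
defaults
    mode http
    option httpclose            # close the connection after each response
    # option http-server-close  # alternative: keep the client side open,
                                # close only the server side
```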

Installing HAProxy on CentOS 7

Submitted by 半城伤御伤魂 on 2019-11-27 21:35:38
[Environment]
CentOS 7.2
web1: 192.168.136.170
web2: 192.168.136.166
HAProxy: 192.168.136.173

[Web servers 1 and 2]
Install Nginx on both web servers (see, for example, the separate guide "YUM快速搭建LNMP"). To tell the two servers apart, edit their index pages:

web1:
echo 'This is web 1' > /usr/share/nginx/html/index.html
systemctl restart nginx

web2:
echo 'This is web 2' > /usr/share/nginx/html/index.html
systemctl restart nginx

[HAProxy server]
yum -y install haproxy
Edit the haproxy configuration file with vim /etc/haproxy/haproxy.cfg (the parts marked with # below must be commented out; the entries at the bottom are the web servers' addresses — if there are more nodes, keep adding lines)
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full
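The excerpt stops inside the stock example configuration. The edit it describes usually ends in a frontend/backend pair pointing at the two Nginx nodes; a hedged sketch using the addresses above:

```
frontend main
    bind *:80
    default_backend webservers

backend webservers
    balance roundrobin
    server web1 192.168.136.170:80 check
    server web2 192.168.136.166:80 check
```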

Load Balancing (HAProxy or other) - Sticky Sessions

Submitted by 断了今生、忘了曾经 on 2019-11-27 19:08:16
I'm working on scaling my app out to multiple servers, and one requirement is that a client always communicates with the same server (too much live data is involved to allow bouncing between servers efficiently). My current setup is a small server cluster (on Linode). I have a frontend node running HAProxy with "balance source" so that a given IP is always directed to the same node. I'm noticing that "balance source" does not distribute connections very evenly. With my current test setup (2 backend servers), one server often carries 3-4x as many connections over a sample of 80-100 source IPs. Is
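The usual alternative to source-IP hashing for stickiness is cookie-based persistence, which pins each client to a server without depending on how the source IPs hash; a minimal sketch with hypothetical addresses:

```
backend app
    balance roundrobin
    cookie SERVERID insert indirect nocache   # HAProxy inserts a persistence cookie
    server app1 10.0.0.11:80 check cookie app1
    server app2 10.0.0.12:80 check cookie app2
```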

Openshift Layer4 connection, App Won't Start

Submitted by 烂漫一生 on 2019-11-27 18:40:34
Question: I recently pushed a set of node.js changes to an app on OpenShift. The app runs fine on my local machine and is pretty close to the vanilla example deployed by OpenShift. The OpenShift haproxy log has this final line: [fbaradar-hydrasale.rhcloud.com logs]> [WARNING] 169/002631 (93881) : Server express/local-gear is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.