
Keepalived for HAProxy high availability


Introduction to HAProxy

Software: HAProxy is mainly used for layer-7 load balancing, and can also do layer-4 load balancing. Apache can also do layer-7 load balancing, but it is cumbersome and rarely used for this in practice. Load balancing is classified by the OSI layer it operates at: layer-7 load balancing works on the HTTP protocol, while layer-4 load balancing works on TCP plus a port number.

HAProxy overview

HAProxy is a high-performance load balancer. Because it focuses solely on load balancing, it does this one job better and more thoroughly than Nginx.

Features of HAProxy

As one of the most popular load balancers today, HAProxy has to stand out somewhere. Its advantages over LVS, Nginx and other load balancers include:
• Load balancing at both the TCP and HTTP layers, which makes its feature set very rich.
• Around eight load-balancing algorithms; in HTTP mode in particular there are several very practical algorithms to suit different needs.
• Excellent performance; its single-process processing model (similar to Nginx) makes it very fast.
• A capable monitoring page that shows the current state of the system in real time.
• Powerful ACL support, which is very convenient for users.

HAProxy algorithms:

1. roundrobin: weighted round robin. When backend response times are evenly distributed this is the most balanced and fair algorithm. It is dynamic, meaning server weights can be adjusted at runtime.

2. static-rr: weighted round robin, similar to roundrobin, but static: changing a server's weight at runtime has no effect. On the other hand, it imposes no limit on the number of connections per backend server.

3. leastconn: new connection requests are dispatched to the backend server with the fewest active connections.
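As a minimal illustration (not part of the setup below; the backend and server names here are placeholders), the algorithm is chosen with the balance directive inside a backend section:

backend webpool
    balance roundrobin        # or: balance static-rr / balance leastconn
    server web1 192.168.119.153:80 weight 2 check
    server web2 192.168.119.155:80 weight 1 check

With roundrobin, changing a server weight at runtime (for example via the stats socket) takes effect immediately; with static-rr it does not.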

Project preparation

Prepare four virtual machines: two as proxy servers and two as real servers (the real servers are only used to serve web pages for testing).

1. Use two HAProxy servers as proxies (one master, one backup). The real servers run Nginx to provide the web service used for testing.

2. Install Keepalived on both proxy servers to provide high availability and a floating VIP.

3. Configure load balancing on the proxies. The configuration files on the two proxy servers are identical, and requests are dispatched to the backend pool according to URL-based rules.

master  192.168.119.156    primary node
backup  192.168.119.157    standby node
RS1     192.168.119.153    first real server
RS2     192.168.119.155    second real server

Host name resolution

Configure this on every virtual machine:

[root@master ~]# cat /etc/hosts
127.0.0.1 localhost
192.168.119.156 master
192.168.119.157 backup
192.168.119.153 RS1
192.168.119.155 RS2

Installing Nginx

Install Nginx only on the two real servers, RS1 and RS2. Disable the firewall and SELinux on all machines.

[root@RS1 ~]# systemctl stop firewalld && setenforce 0
[root@RS1 ~]# yum install yum-utils -y
[root@RS1 ~]# yum install nginx -y
[root@RS1 ~]# systemctl start nginx
[root@RS1 ~]# echo "this is first RS1" > /usr/share/nginx/html/index.html
[root@RS1 ~]# vim /etc/nginx/nginx.conf    # line 27 controls keep-alive; the default is keepalive_timeout 65; change 65 to 0
[root@RS1 ~]# nginx -s reload

[root@RS2 ~]# systemctl stop firewalld && setenforce 0
[root@RS2 ~]# yum install yum-utils -y
[root@RS2 ~]# yum install nginx -y
[root@RS2 ~]# systemctl start nginx
[root@RS2 ~]# echo "this is first RS2" > /usr/share/nginx/html/index.html
[root@RS2 ~]# vim /etc/nginx/nginx.conf    # line 27: keepalive_timeout 0;  (default is keepalive_timeout 65; change 65 to 0)
[root@RS2 ~]# nginx -s reload

Configuring HAProxy on the schedulers (run on both master and backup)

[root@master ~]# systemctl stop firewalld && setenforce 0
[root@master ~]# yum -y install haproxy
[root@master ~]# cp -rf /etc/haproxy/haproxy.cfg{,.bak}    # back up the original configuration
[root@master ~]# sed -i -r '/^[ ]*#/d;/^$/d' /etc/haproxy/haproxy.cfg    # strip comments and blank lines, or simply copy the configuration below
[root@master haproxy]# echo $?
0    # the previous command succeeded
[root@master ~]# vim /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local2 info
    pidfile /var/run/haproxy.pid
    maxconn 4000                 # lowest priority (overridden by the sections below)
    user haproxy
    group haproxy
    daemon                       # run HAProxy in the background
    nbproc 2                     # number of worker processes; match the number of CPU cores (check with lscpu)
defaults
    mode http                    # working mode: tcp is layer 4, http is layer 7
    log global
    retries 3                    # health check: after 3 failed connections the server is considered down (together with the check options below)
    option redispatch            # redispatch requests to another healthy server when one becomes unavailable
    maxconn 4000                 # medium priority
    contimeout 5000              # timeout for connections from HAProxy to the backends, in milliseconds
    clitimeout 50000             # client timeout
    srvtimeout 50000             # backend server timeout
listen stats
    bind *:81
    stats enable
    stats uri /haproxy           # visit http://192.168.119.156:81/haproxy in a browser to see the server status
    stats auth admin:admin       # user authentication; not applied when the client is the elinks browser
frontend web
    mode http
    bind *:80                    # IP and port to listen on
    option httplog               # HTTP log format
    acl html url_reg -i \.html$          # 1. ACL named "html": matches URLs ending in .html (optional)
    use_backend httpservers if html      # 2. if the "html" ACL matches, send the request to the httpservers backend
    default_backend httpservers          # default backend group
backend httpservers              # the name must match the one referenced above
    balance roundrobin           # load-balancing algorithm
    server http1 192.168.119.153:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
    server http2 192.168.119.155:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
[root@master ~]# systemctl start haproxy.service
[root@master ~]#

[root@backup ~]# systemctl stop firewalld && setenforce 0
[root@backup haproxy]# yum -y install haproxy
[root@backup ~]# cd /etc/haproxy/
[root@backup haproxy]# ls
haproxy.cfg
[root@backup haproxy]# cp -rf /etc/haproxy/haproxy.cfg{,.bak}
[root@backup haproxy]# ls
haproxy.cfg  haproxy.cfg.bak
[root@backup haproxy]# vim haproxy.cfg
global
    log 127.0.0.1 local2 info
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    nbproc 2
defaults
    mode http
    log global
    retries 3
    option redispatch
    maxconn 4000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
listen stats
    bind *:81
    stats enable
    stats uri /haproxy
    stats auth admin:admin
frontend web
    mode http
    bind *:80
    option httplog
    acl html url_reg -i \.html$
    use_backend httpservers if html
    default_backend httpservers
backend httpservers
    balance roundrobin
    server http1 192.168.119.153:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
    server http2 192.168.119.155:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
[root@backup haproxy]#
[root@backup ~]# systemctl start haproxy.service
[root@backup ~]#

If we now visit http://192.168.119.156:81/haproxy, the HAProxy statistics page is displayed.
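The same information can be pulled from the command line; a quick sketch using curl with the stats credentials configured above (admin:admin):

[root@master ~]# curl -u admin:admin http://192.168.119.156:81/haproxy
[root@master ~]# curl -u admin:admin "http://192.168.119.156:81/haproxy;csv"    # append ;csv to get the same counters in CSV form

The first command returns the HTML status page, the second a machine-readable dump of the same data.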

Key fields on the statistics page

Queue – Cur: current number of queued requests; Max: maximum number of queued requests; Limit: queue size limit. Errors – Req: request errors; Conn: connection errors.

Server list – Status: the server state, either up (the backend is alive) or down (the backend is dead); LastChk: the most recent health check of the backend server; Wght (weight): the server's weight.

If we visit 192.168.119.156:80,

the first request is served by RS1 and the second by RS2; requests are rotated across the backends in round-robin fashion.
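A quick way to watch the rotation without a browser is a small curl loop (a sketch; run it from any host that can reach the proxy):

[root@master ~]# for i in 1 2 3 4; do curl -s http://192.168.119.156:80/; done
# the output should alternate between "this is first RS1" and "this is first RS2"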

Keepalived for scheduler HA (install on both nodes)

Note: both the master and the backup scheduler must be able to dispatch requests normally.

1. Install the software on the master/backup schedulers

[root@master ~]# yum install -y keepalived
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# cp keepalived.conf /etc/keepalived/keepalived.conf.bak    # back up in case of mistakes
[root@master keepalived]# ls
keepalived.conf  keepalived.conf.bak
[root@master keepalived]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb01               # just a name; use a different one on the standby node (the two must not be the same)
}
vrrp_instance VI_1 {
    state MASTER                 # MASTER on the primary, BACKUP on the standby
    interface ens33              # interface the VIP binds to; note the NIC name may differ per machine
    virtual_router_id 51         # must be identical across the whole cluster (same cluster)
    priority 100                 # priority; the backup must use a lower value (90 in the config below)
    advert_int 1                 # advertisement interval
    authentication {
        auth_type PASS
        auth_pass longling
    }
    virtual_ipaddress {
        192.168.119.250/24       # the VIP (pick one in your own subnet)
    }
}
[root@master keepalived]# systemctl start keepalived
[root@master keepalived]# systemctl enable keepalived

[root@backup ~]# yum install -y keepalived
[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# cp keepalived.conf keepalived.conf.bak
[root@backup keepalived]# ls
keepalived.conf  keepalived.conf.bak
[root@backup keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb02               # set to lb02
}
vrrp_instance VI_1 {
    state BACKUP                 # set to BACKUP
    interface ens33
    nopreempt                    # set on the backup: do not take the VIP back when the old master returns
    virtual_router_id 51
    priority 90                  # lower than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass longling
    }
    virtual_ipaddress {
        192.168.119.250
    }
}
[root@backup keepalived]#
[root@backup keepalived]# systemctl start keepalived
[root@backup keepalived]# systemctl enable keepalived
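Optionally, before checking where the VIP landed, you can watch the VRRP advertisements on either node (a sketch, assuming tcpdump is installed):

[root@master ~]# tcpdump -i ens33 vrrp -c 5
# you should see advertisements from the current MASTER for vrid 51 with priority 100, roughly once per second (advert_int 1)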

Check both nodes and you will find the VIP on the master node.

master

[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:ca:73:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.156/24 brd 192.168.119.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.250/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8cc8:a95b:7bf5:8e2a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@master ~]#

backup

[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:19:0e:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.157/24 brd 192.168.119.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8cc8:a95b:7bf5:8e2a/64 scope link dadfailed tentative noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::ed56:6f83:9078:cf92/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@backup ~]#

Test whether the VIP fails over to the standby node when the master goes down

If the master node now goes down (here we simply stop the service to simulate this), the VIP will automatically float over to the standby node.

master

[root@master ~]# systemctl stop keepalived.service
[root@master ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon -10-10 11:42:23 CST; 6s ago
 Main PID: 33685 (code=exited, status=0/SUCCESS)
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:ca:73:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.156/24 brd 192.168.119.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8cc8:a95b:7bf5:8e2a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@master ~]#

backup

[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:19:0e:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.119.157/24 brd 192.168.119.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.119.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8cc8:a95b:7bf5:8e2a/64 scope link dadfailed tentative noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::ed56:6f83:9078:cf92/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@backup ~]#

And browser access still works fine; the web pages remain available. That is high availability.
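You can confirm this from the command line by always addressing the VIP rather than either node IP; a quick sketch:

[root@master ~]# curl http://192.168.119.250/
[root@master ~]# curl http://192.168.119.250/index.html    # .html URLs also match the "html" ACL and land on the same backend pool

Whichever node currently holds 192.168.119.250 answers, so clients never need to know which proxy is active.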

High availability based on Nginx

So far we have only achieved node-level high availability, which assumes the HAProxy service itself is healthy. If something unexpected keeps the service from starting while keepalived is still running normally, users can no longer reach the site, yet the VIP will not automatically float to the standby node.

So we need a small script that checks whether the HAProxy service is healthy; if it is not, we shut down keepalived on that node so that the VIP floats to the standby, and users never lose access.

The idea: have Keepalived run an external script at a fixed interval; when the service has failed, the script shuts down Keepalived on the local machine.

[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 143 Oct 8 23:07 check_nginx.sh
[root@localhost scripts]#
[root@master scripts]# vim notify.sh
#!/bin/bash
VIP=$2
case "$1" in
master)
    nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
    if [ $nginx_status -lt 1 ];then
        systemctl start nginx
    fi
    ;;
backup)
    nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
    if [ $nginx_status -gt 0 ];then
        systemctl stop nginx
    fi
    ;;
*)
    echo "Usage:$0 master|backup VIP"
    ;;
esac
[root@master scripts]# chmod +x notify.sh
[root@master scripts]# ll
total 8
-rwxr-xr-x 1 root root 168 Oct 19 23:38 check_n.sh
-rwxr-xr-x 1 root root 594 Oct 20 03:24 notify.sh

# test whether the scripts run correctly
[root@master scripts]# ./notify.sh backup
[root@master scripts]# ss -anlt
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        128      0.0.0.0:22           0.0.0.0:*
LISTEN  0        128      [::]:22              [::]:*
[root@master scripts]# ./notify.sh master
[root@master scripts]# ss -anlt
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        128      0.0.0.0:22           0.0.0.0:*
LISTEN  0        128      0.0.0.0:80           0.0.0.0:*
LISTEN  0        128      [::]:22              [::]:*
LISTEN  0        128      [::]:80              [::]:*

Note: HAProxy must be started before keepalived. It is recommended to put these scripts on the standby node as well.

[root@master scripts]# scp notify.sh 192.168.119.157:/scripts
The authenticity of host '192.168.119.156 (192.168.119.156)' can't be established.
ECDSA key fingerprint is SHA256:EgQ8hBfCkJDrUIWEwLcNl83tFecnCfsGfeRONx68g3o.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.119.156' (ECDSA) to the list of known hosts.
root@192.168.119.156's password:
notify.sh                                   100%  441   570.4KB/s   00:00
[root@master scripts]#
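Note that the check and notify scripts above follow the author's original naming and monitor the nginx process, while the surrounding text talks about HAProxy; on the proxy nodes the process you actually want to watch is haproxy. A minimal variant under that assumption (check_haproxy.sh is just a suggested name, not part of the original setup):

[root@master scripts]# vim check_haproxy.sh
#!/bin/bash
# count running haproxy processes, excluding this script and the grep itself
haproxy_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bhaproxy\b' | wc -l)
if [ "$haproxy_status" -lt 1 ]; then
    # no haproxy process left: stop keepalived so the VIP floats to the standby node
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_haproxy.sh

If you use this variant, point the script path referenced in the keepalived configuration below at it instead of check_nginx.sh.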

Add the monitoring script to the Keepalived configuration

Configure the master keepalived

[root@master scripts]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb01
}
vrrp_script nginx_check {            # defines the script referenced by track_script below
    script "/scripts/check_nginx.sh"
    interval 1                       # run the check every second
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass longling
    }
    virtual_ipaddress {
        192.168.119.250/24
    }
    # the script hooks are inserted here
    track_script {
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.119.250"
#   notify_backup "/scripts/notify.sh backup 192.168.119.250"
}
[root@master ~]# systemctl restart keepalived.service

Configure the backup keepalived

The backup does not need to check whether nginx is healthy itself; it starts nginx when promoted to MASTER and stops it when demoted to BACKUP.

[root@backup scripts]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lb02
}
vrrp_script nginx_check {            # required so that track_script below resolves; assumes the check script was also copied to /scripts here
    script "/scripts/check_nginx.sh"
    interval 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass longling
    }
    virtual_ipaddress {
        192.168.119.250/24
    }
    track_script {
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.119.250"
    notify_backup "/scripts/notify.sh backup 192.168.119.250"
}
[root@backup ~]# systemctl restart keepalived.service
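To verify the whole chain, stop the monitored service on the master and watch the VIP move; a sketch of the test sequence (with the original check_nginx.sh you would stop nginx, with the haproxy variant sketched earlier you would stop haproxy):

[root@master ~]# systemctl stop haproxy        # the check script should then stop keepalived on this node
[root@master ~]# ip a show dev ens33           # the VIP 192.168.119.250 should no longer be listed here
[root@backup ~]# ip a show dev ens33           # the VIP should now appear on the standby
[root@backup ~]# curl http://192.168.119.250/  # requests to the VIP are still answered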
