1 Similarities and differences between Nginx and HAProxy
Similarities:
- Reverse proxy: both Nginx and HAProxy are reverse proxies that forward client requests to backend servers.
- High availability: both support load balancing, distributing requests across multiple backend servers to improve application availability and reliability.
- High performance: both are very fast and can handle large numbers of concurrent connections.
- Scalability: both scale out; capacity is added by putting more servers or instances behind the load balancer.
Differences:
- Protocol support: Nginx can proxy HTTP, HTTPS, mail protocols (SMTP, POP3, IMAP) and generic TCP/UDP streams, while HAProxy proxies TCP and HTTP(S).
- Load-balancing algorithms: Nginx supports round-robin (optionally weighted), ip_hash, least_conn, generic hash and random; HAProxy supports roundrobin, static-rr, leastconn, first, source, uri, url_param, hdr and random.
- Features: besides reverse proxying, Nginx also works as a web server, cache server and media streaming server, while HAProxy focuses on load balancing and proxying.
- Configuration style: Nginx uses nested configuration blocks (directives inside http/server/location), while HAProxy uses flat named sections (global, defaults, frontend, backend, listen).
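The configuration-style difference is easiest to see side by side. A minimal round-robin setup for the same two backends might look like this (addresses are illustrative):

```text
# Nginx: nested blocks (inside the http { } context)
upstream web_nodes {
    server 10.0.0.100:80;
    server 10.0.0.18:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://web_nodes;
    }
}

# HAProxy: flat sections
listen web_nodes
    bind *:80
    balance roundrobin
    server web1 10.0.0.100:80 check
    server web2 10.0.0.18:80 check
```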
2 Implementing layer-4 client address passthrough with HAProxy
In a traditional layer-4 load balancer such as LVS, the balancer rewrites the destination address of the client's packets (originally the balancer's own IP) to the IP of the web server chosen by its scheduling rules; the client then establishes the TCP connection directly with that server and exchanges data, while the layer-4 balancer itself does not take part in the connection.
HAProxy, unlike LVS, is a pseudo layer-4 load balancer: it establishes separate connections with the frontend client and with the backend server.
2.1 First, set up two web servers load-balanced by HAProxy
Environment:
- DNS server: 192.168.10.200
- internet server: 192.168.10.123, simulating the client
- web1: 10.0.0.100 (nginx installed)
- web2: 10.0.0.18 (nginx installed)
- HAProxy server: 10.0.0.8 (two NICs: eth0 in NAT mode, on the internal network; eth1 host-only, 192.168.10.129, on the external network)
- Set up the DNS server
[root@dns ~]$ cat install_dns.sh
#!/bin/bash
DOMAIN=yanlinux.org
HOST=www
HOST_IP=192.168.10.129
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
. /etc/os-release
color () {
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \E[0m"
echo -n "$1" && $MOVE_TO_COL
echo -n "["
if [ $2 = "success" -o $2 = "0" ] ;then
${SETCOLOR_SUCCESS}
echo -n $" OK "
elif [ $2 = "failure" -o $2 = "1" ] ;then
${SETCOLOR_FAILURE}
echo -n $"FAILED"
else
${SETCOLOR_WARNING}
echo -n $"WARNING"
fi
${SETCOLOR_NORMAL}
echo -n "]"
echo
}
install_dns () {
if [ $ID = 'centos' -o $ID = 'rocky' ];then
yum install -y bind bind-utils
elif [ $ID = 'ubuntu' ];then
color "不支持Ubuntu操作系统,退出!" 1
exit
#apt update
#apt install -y bind9 bind9-utils
else
color "不支持此操作系统,退出!" 1
exit
fi
}
config_dns () {
sed -i -e '/listen-on/s/127.0.0.1/localhost/' -e '/allow-query/s/localhost/any/' /etc/named.conf
cat >> /etc/named.rfc1912.zones <<EOF
zone "$DOMAIN" IN {
type master;
file "$DOMAIN.zone";
};
EOF
cat > /var/named/$DOMAIN.zone <<EOF
\$TTL 1D
@ IN SOA master admin (
1 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS master
master A `hostname -I`
$HOST A $HOST_IP
EOF
chmod 640 /var/named/$DOMAIN.zone
chgrp named /var/named/$DOMAIN.zone
}
start_service () {
systemctl enable --now named
systemctl is-active named.service
if [ $? -eq 0 ] ;then
color "DNS 服务安装成功!" 0
else
color "DNS 服务安装失败!" 1
exit 1
fi
}
install_dns
config_dns
start_service
#Run the install script
[root@dns ~]$ sh install_dns.sh
#Test from the internet server
[root@internet ~]$ ping www.yanlinux.org
PING www.yanlinux.org (192.168.10.129) 56(84) bytes of data.
64 bytes from 192.168.10.129 (192.168.10.129): icmp_seq=1 ttl=64 time=0.358 ms
64 bytes from 192.168.10.129 (192.168.10.129): icmp_seq=2 ttl=64 time=0.475 ms
^C
--- www.yanlinux.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1008ms
rtt min/avg/max/mdev = 0.358/0.416/0.475/0.061 ms
- Set up the two web servers
#Set up web1
[root@web1 ~]$ apt -y install nginx
[root@web1 ~]$ cat /var/www/html/index.html
<h1>10.0.0.100 www.yanlinux.org</h1>
#Set up web2
[root@web2 ~]$ yum -y install nginx
[root@web2 ~]$ cat > /var/www/html/index.html
<h1>10.0.0.18 www.yanlinux.org</h1>
- Set up the HAProxy server
[root@haproxy ~]$ cat install_haproxy.sh
#!/bin/bash
HAPROXY_VERSION=2.6.9
HAPROXY_FILE=haproxy-${HAPROXY_VERSION}.tar.gz
LUA_VERSION=5.4.4
LUA_FILE=lua-${LUA_VERSION}.tar.gz
HAPROXY_INSTALL_DIR=/apps/haproxy
SRC_DIR=/usr/local/src
CWD=`pwd`
CPUS=`lscpu|awk '/^CPU\(s\)/{print $2}'`
LOCAL_IP=$(hostname -I|awk '{print $1}')
STATS_AUTH_USER=admin
STATS_AUTH_PASSWD=123456
. /etc/os-release
color () {
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \E[0m"
echo -n "$1" && $MOVE_TO_COL
echo -n "["
if [ $2 = "success" -o $2 = "0" ] ;then
${SETCOLOR_SUCCESS}
echo -n $" OK "
elif [ $2 = "failure" -o $2 = "1" ] ;then
${SETCOLOR_FAILURE}
echo -n $"FAILED"
else
${SETCOLOR_WARNING}
echo -n $"WARNING"
fi
${SETCOLOR_NORMAL}
echo -n "]"
echo
}
check_file (){
if [ ! -e ${HAPROXY_FILE} ];then
color "请下载${HAPROXY_FILE}文件!" 1
exit
elif [ ! -e ${LUA_FILE} ];then
color "请先下载${LUA_FILE}文件!" 1
exit
else
color "相关文件已准备" 0
fi
}
install_haproxy (){
#Install build dependencies
if [ $ID = "centos" -o $ID = "rocky" ];then
yum -y install gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel libtermcap-devel ncurses-devel libevent-devel readline-devel
elif [ $ID = "ubuntu" ];then
apt update
apt -y install gcc make openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev libreadline-dev libsystemd-dev
else
color "不支持此操作系统!" 1
exit
fi
#Build the Lua environment
tar xf ${LUA_FILE} -C ${SRC_DIR}
LUA_DIR=${LUA_FILE%.tar*} #parameter expansion: strip the trailing .tar* suffix
cd ${SRC_DIR}/${LUA_DIR}
make all test
#Compile and install haproxy
cd ${CWD}
tar xf ${HAPROXY_FILE} -C ${SRC_DIR}
HAPROXY_DIR=${HAPROXY_FILE%.tar*}
cd ${SRC_DIR}/${HAPROXY_DIR}
make -j ${CPUS} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC=${SRC_DIR}/${LUA_DIR}/src/ LUA_LIB=${SRC_DIR}/${LUA_DIR}/src/ PREFIX=${HAPROXY_INSTALL_DIR}
make install PREFIX=${HAPROXY_INSTALL_DIR}
[ $? -eq 0 ] && color "HAProxy编译安装成功" 0 || { color "HAProxy编译安装失败,退出" 1;exit; }
[ -L /usr/sbin/haproxy ] || ln -s ${HAPROXY_INSTALL_DIR}/sbin/haproxy /usr/sbin/ &> /dev/null
[ -d /etc/haproxy ] || mkdir /etc/haproxy &> /dev/null
[ -d /var/lib/haproxy/ ] || mkdir -p /var/lib/haproxy &> /dev/null
#Generate the configuration file
cat > /etc/haproxy/haproxy.cfg <<EOF
global
maxconn 100000
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth ${STATS_AUTH_USER}:${STATS_AUTH_PASSWD}
EOF
#Create the haproxy user and group
groupadd -g 99 haproxy
useradd -u 99 -g haproxy -d /var/lib/haproxy -M -r -s /sbin/nologin haproxy
#Create the systemd service file
cat > /lib/systemd/system/haproxy.service <<EOF
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 \$MAINPID
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now haproxy
systemctl is-active haproxy &> /dev/null && color "HAProxy安装完成" 0 || { color "HAProxy安装失败" 1;exit; }
echo "-------------------------------------------------------------------"
echo -e "请访问链接: \E[32;1mhttp://${LOCAL_IP}:9999/haproxy-status\E[0m"
echo -e "用户和密码: \E[32;1m${STATS_AUTH_USER}/${STATS_AUTH_PASSWD}\E[0m"
}
main (){
check_file
install_haproxy
}
main
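Note that the script above writes the unit file through an unquoted heredoc, where the shell expands `$MAINPID` at script run time (to an empty string). Inside the heredoc it must be escaped as `\$MAINPID` so that systemd, not the installing shell, sees the variable. A quick demonstration (writes a throwaway demo file, not a real unit):

```shell
# Unquoted heredocs expand $VAR immediately; \$VAR survives as a literal.
MAINPID=12345   # pretend variable in the installing shell
cat > /tmp/demo.service <<EOF
ExecReload=/bin/kill -USR2 $MAINPID
ExecReloadFixed=/bin/kill -USR2 \$MAINPID
EOF
# The first line now contains 12345; only the escaped form kept $MAINPID.
cat /tmp/demo.service
```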
#Install haproxy
[root@haproxy ~]$ sh install_haproxy.sh
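The install script derives the source directory name with the shell's suffix-stripping parameter expansion, `${HAPROXY_FILE%.tar*}`. A minimal sketch of how that works:

```shell
# ${VAR%pattern} removes the shortest match of "pattern" from the end of
# the value, so the .tar.gz (or .tar.xz) extension is stripped off.
HAPROXY_FILE=haproxy-2.6.9.tar.gz
HAPROXY_DIR=${HAPROXY_FILE%.tar*}
echo "$HAPROXY_DIR"
```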
#Configure the proxies
#Keep the proxy configuration in sub-configuration files
#When there are many services, putting every configuration in one file makes maintenance hard. Splitting the configuration per service into sub-configuration files keeps it manageable.
#Note: sub-configuration files must use the .cfg suffix
#Create the sub-configuration directory
[root@haproxy ~]$ mkdir /etc/haproxy/conf.d
#Add the sub-configuration directory to the service file
[root@haproxy ~]$ vi /lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/conf.d -c -q #add -f <sub-config directory> on this line
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/conf.d -p /var/lib/haproxy/haproxy.pid #add -f <sub-config directory> on this line
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
#Create the sub-configuration file
[root@haproxy ~]$ cat /etc/haproxy/conf.d/www.yanlinux.org.cfg
listen yanlinux_http_80
bind 192.168.10.129:80
server web1 10.0.0.100:80 check inter 3000 fall 3 rise 5
server web2 10.0.0.18:80 check inter 3000 fall 3 rise 5
#Restart the service
[root@haproxy ~]$ systemctl restart haproxy.service
#The ports are listening
[root@haproxy ~]$ ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:9999 0.0.0.0:*
LISTEN 0 128 192.168.10.129:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
- Test the connection from the internet server
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.100 www.yanlinux.org</h1>
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.18 www.yanlinux.org</h1>
- Health checks
#Stop the service on web1
[root@web1 ~]$ systemctl stop nginx.service
#Test from internet: requests are no longer scheduled to web1
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.18 www.yanlinux.org</h1>
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.18 www.yanlinux.org</h1>
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.18 www.yanlinux.org</h1>
The stats page also shows that web1 is down.
2.2 Implementing layer-4 address passthrough
- Modify the HAProxy configuration: change mode to tcp and add send-proxy to each server line
[root@haproxy ~]$ cat /etc/haproxy/conf.d/www.yanlinux.org.cfg
listen www.yanlinux.org_http_80
mode tcp #in TCP mode HAProxy no longer parses the HTTP protocol
bind 192.168.10.129:80
server web1 10.0.0.100:80 send-proxy check inter 3000 fall 3 rise 5
server web2 10.0.0.18:80 send-proxy check inter 3000 fall 3 rise 5
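`send-proxy` works by prepending a small text header (PROXY protocol v1) to each backend connection; that header is what nginx later parses into `$proxy_protocol_addr`. A sketch of the header for this lab's addresses (the source port 55432 is an arbitrary example of a client ephemeral port):

```shell
client_ip=192.168.10.123   # real client address, preserved for the backend
proxy_ip=10.0.0.8          # HAProxy's address facing the backend
header="PROXY TCP4 $client_ip $proxy_ip 55432 80"
echo "$header"
```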
- On both backend servers, modify the nginx configuration so the access log records the passed-through client IP via the $proxy_protocol_addr variable
#Edit the main configuration file
[root@web1 ~]$ vi /etc/nginx/nginx.conf
......
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$proxy_protocol_addr"';
access_log /var/log/nginx/access.log main;
......
server {
listen 80 default_server proxy_protocol; #with proxy_protocol enabled, the site can no longer be accessed directly, only through the layer-4 proxy
listen [::]:80 default_server proxy_protocol;
......
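Optionally, nginx can substitute the passed-through address into `$remote_addr` itself rather than only logging it. A sketch (this assumes the stock realip module, which distribution nginx packages compile in):

```nginx
# Trust the PROXY protocol header only when it comes from HAProxy.
set_real_ip_from 10.0.0.8;
real_ip_header proxy_protocol;
```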
- Test the passthrough
#Access from the client
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.100 www.yanlinux.org</h1>
[root@internet ~]$ curl www.yanlinux.org
<h1>10.0.0.18 www.yanlinux.org</h1>
#web1 log shows the passed-through client IP
[root@web1 ~]$ tail -f /var/log/nginx/access.log
10.0.0.8 - - [06/Mar/2023:05:51:35 +0000] "GET / HTTP/1.1" 200 37 "-" "curl/7.61.1" "-" "192.168.10.123"
#web2 log shows the passed-through client IP
[root@web2 ~]$ tail -f /var/log/nginx/access.log
10.0.0.8 - - [06/Mar/2023:13:51:37 +0800] "GET / HTTP/1.1" 200 36 "-" "curl/7.61.1" "-" "192.168.10.123"
3 Cookie-based session persistence
Implement session persistence on top of the architecture from section 2.1
frontend www.yanlinux.org_http_80
bind 192.168.10.129:80
use_backend www.yanlinux.org_http_nodes
backend www.yanlinux.org_http_nodes
balance roundrobin
cookie test-cookie insert indirect nocache
server web1 10.0.0.100:80 check inter 3000 fall 3 rise 5 cookie web1 #cookie value assigned per server
server web2 10.0.0.18:80 check inter 3000 fall 3 rise 5 cookie web2 #cookie value assigned per server
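The mechanism can be pictured as follows: on the first response HAProxy inserts `Set-Cookie: test-cookie=web1` (or `web2`), and subsequent requests carrying that cookie bypass the `roundrobin` choice. A toy illustration in shell (not HAProxy internals, just the decision it makes):

```shell
route_request() {
    # $1: the Cookie value sent by the client, may be empty
    case "$1" in
        test-cookie=web1) echo 10.0.0.100 ;;  # pinned to web1
        test-cookie=web2) echo 10.0.0.18  ;;  # pinned to web2
        *)                echo roundrobin ;;  # no cookie yet: balance normally
    esac
}
route_request "test-cookie=web1"
route_request ""
```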
Verify the cookie information
Using a different browser, the request is scheduled to the other server
4 Custom error pages and HTTPS
Architecture: reuses the setup built in section 2.1
4.1 Custom error pages
Redirect the specified errors so that a friendlier page is shown. Either of two directives, errorfile or errorloc, can be used to customize the various error pages.
By default, when all backend servers are down, the page below is displayed.
[root@haproxy ~]$ cat /etc/haproxy/haproxy.cfg
global
maxconn 100000
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
nbthread 4
cpu-map 1/all 0-3
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
errorfile 503 /etc/haproxy/html/503.http #custom 503 page
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats
mode http
bind 10.0.0.8:9999
stats enable
log global
stats uri /haproxy-status
stats auth admin:123456
#Create the 503 page
[root@haproxy ~]$ mkdir /etc/haproxy/html
[root@haproxy ~]$ cat /etc/haproxy/html/503.http
HTTP/1.1 503 Service Unavailable
Content-Type:text/html;charset=utf-8
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Error Page</title>
</head>
<body>
<center><h1>Site under maintenance... please try again later</h1></center>
<center><h2>Hotline: 400-123-4567</h2></center>
<center><h3>503 Service Unavailable</h3></center>
</body>
</html>
[root@haproxy ~]$ systemctl restart haproxy.service
#Bring the backend servers down and observe the page
The custom error page message is displayed.
4.2 Implementing HTTPS
HAProxy can provide https in two ways:
- https from the client to haproxy, with plain http from haproxy to the backend servers
- certificates deployed on the backend servers, with haproxy forwarding the client connection in tcp mode to the backends, which reduces the load on the haproxy server
4.2.1 First approach
This approach terminates https on the haproxy server itself, so the certificate is built there.
4.2.1.1 Creating the certificate
[root@haproxy ~]$ mkdir /etc/haproxy/ssl
[root@haproxy ~]$ cd /etc/haproxy/ssl
#-newkey generates the private key itself, so a separate genrsa step is unnecessary
[root@haproxy ssl]$ openssl req -x509 -newkey rsa:2048 -subj "/CN=www.yanlinux.org" -keyout haproxy.key -nodes -days 365 -out haproxy.crt
#HAProxy expects the private key and certificate concatenated into a single file
[root@haproxy ssl]$ cat haproxy.key haproxy.crt > haproxy.pem
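A quick sanity check before handing the combined file to HAProxy is to confirm that the key and certificate actually belong together. A sketch, assuming the `openssl` CLI (it generates a throwaway pair the same way as above, then compares the certificate's embedded public key with the private key's):

```shell
cd "$(mktemp -d)"
# Self-signed pair, as in the steps above
openssl req -x509 -newkey rsa:2048 -subj "/CN=www.yanlinux.org" \
    -keyout haproxy.key -nodes -days 365 -out haproxy.crt 2>/dev/null
cat haproxy.key haproxy.crt > haproxy.pem
# The two public keys must be identical
key_pub=$(openssl pkey -in haproxy.key -pubout)
crt_pub=$(openssl x509 -in haproxy.crt -pubkey -noout)
[ "$key_pub" = "$crt_pub" ] && echo "key matches certificate"
```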
4.2.1.2 HTTPS configuration
[root@haproxy ~]$ vi /etc/haproxy/conf.d/www.yanlinux.org.cfg
listen www.yanlinux.org_http_80
bind 192.168.10.129:80
bind 192.168.10.129:443 ssl crt /etc/haproxy/ssl/haproxy.pem
redirect scheme https if !{ ssl_fc } #note the spaces inside { }
http-request set-header X-forwarded-Port %[dst_port]
http-request add-header X-forwarded-Proto https if { ssl_fc }
mode http
server web1 10.0.0.100:80 check inter 3000 fall 3 rise 5
server web2 10.0.0.18:80 check inter 3000 fall 3 rise 5
[root@haproxy ~]$ systemctl restart haproxy.service
[root@haproxy ~]$ ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 10.0.0.8:9999 0.0.0.0:*
LISTEN 0 128 192.168.10.129:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 192.168.10.129:443 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Verify https
4.2.2 HTTPS in TCP mode
In this mode the certificates are set up on the backend servers, and haproxy works in tcp mode
#Copy the key and certificate made earlier to the two web servers and enable https there
##Create a certificate directory on each web server
[root@web1 ~]$ mkdir /etc/nginx/ssl
[root@web2 ~]$ mkdir /etc/nginx/ssl
##Copy the certificate files
[root@haproxy ssl]$ scp haproxy.crt haproxy.key 10.0.0.100:/etc/nginx/ssl
[root@haproxy ssl]$ scp haproxy.crt haproxy.key 10.0.0.18:/etc/nginx/ssl
##Add the ssl settings to the nginx configuration
[root@web1 ssl]$ mv haproxy.crt www.yanlinux.org.crt
[root@web1 ssl]$ mv haproxy.key www.yanlinux.org.key
[root@web1 ssl]$ vi /etc/nginx/sites-enabled/default
server {
listen 80 default_server;
listen [::]:80 default_server;
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
ssl_certificate "/etc/nginx/ssl/www.yanlinux.org.crt";
ssl_certificate_key "/etc/nginx/ssl/www.yanlinux.org.key";
......
[root@web2 ssl]$ mv haproxy.crt www.yanlinux.org.crt
[root@web2 ssl]$ mv haproxy.key www.yanlinux.org.key
[root@web2 ~]$ vi /etc/nginx/nginx.conf
server {
listen 80 default_server;
listen [::]:80 default_server;
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
ssl_certificate "/etc/nginx/ssl/www.yanlinux.org.crt";
ssl_certificate_key "/etc/nginx/ssl/www.yanlinux.org.key";
......
#Adjust the proxies configuration on haproxy
[root@haproxy ~]$ vi /etc/haproxy/conf.d/www.yanlinux.org.cfg
listen www.yanlinux.org_https
bind 192.168.10.129:443
mode tcp #must be tcp mode
server web1 10.0.0.100:443 check inter 3000 fall 3 rise 5
server web2 10.0.0.18:443 check inter 3000 fall 3 rise 5
listen www.yanlinux.org_http_80
bind 192.168.10.129:80
redirect scheme https if !{ ssl_fc } #always true on this plain-HTTP listener, so every request is redirected
http-request set-header X-forwarded-Port %[dst_port]
http-request add-header X-forwarded-Proto https if { ssl_fc }
server web1 10.0.0.100:80 check inter 3000 fall 3 rise 5
server web2 10.0.0.18:80 check inter 3000 fall 3 rise 5
[root@haproxy ~]$ systemctl restart haproxy.service
5 Keepalived: unicast, non-preemptive, multi-host high-availability VIP with failover email notification
The default mode is preemptive (preempt): when a higher-priority host comes back online it reclaims the master role from the lower-priority host, causing the VIP to flap. Setting nopreempt (non-preemptive mode) is recommended: a recovered higher-priority host then does not take the master role back.
Note: even in non-preemptive mode, if the original host goes down and the VIP migrates to a new host, and that new host later goes down as well, the VIP still migrates back to the original host.
Note: to disable VIP preemption, every keepalived server must be configured with state BACKUP.
By default keepalived nodes advertise to each other via multicast, which can congest the network; switching to unicast reduces the traffic.
Note: unicast cannot be combined with vrrp_strict.
#ka1 configuration
[root@ka1 ~]$ vi /etc/keepalived/conf.d/www.yanlinux.org.conf
vrrp_instance VI_1 {
state BACKUP #set to BACKUP
interface eth0
virtual_router_id 51
priority 100
advert_int 1
nopreempt #non-preemptive mode
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.200 dev eth0 label eth0:1
}
unicast_src_ip 10.0.0.8 #local IP
unicast_peer {
10.0.0.18 #peer IP; with more keepalived nodes, add their IPs here too
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
#ka2 configuration; in production ka2 stays preemptive (no nopreempt), otherwise the VIP would not move to ka2 even when ka1's priority drops
[root@ka2 ~]$ cat /etc/keepalived/conf.d/www.yanlinux.org.conf
vrrp_instance VI_1 {
state BACKUP #BACKUP
interface eth0
virtual_router_id 51
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.200 dev eth0 label eth0:1
}
unicast_src_ip 10.0.0.18 #local IP
unicast_peer {
10.0.0.8 #peer IP
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
Configure the email notification script on all keepalived nodes
#Same configuration on every keepalived node
[root@ka1 ~]$ cat /etc/keepalived/notify.sh
#!/bin/bash
contact='lgq6579@163.com'
email_send='1499214187@qq.com'
email_passwd='zzvjrqnkrkafbaec'
email_smtp_server='smtp.qq.com'
. /etc/os-release
msg_error() {
echo -e "\033[1;31m$1\033[0m"
}
msg_info() {
echo -e "\033[1;32m$1\033[0m"
}
msg_warn() {
echo -e "\033[1;33m$1\033[0m"
}
color () {
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \E[0m"
echo -n "$1" && $MOVE_TO_COL
echo -n "["
if [ $2 = "success" -o $2 = "0" ] ;then
${SETCOLOR_SUCCESS}
echo -n $" OK "
elif [ $2 = "failure" -o $2 = "1" ] ;then
${SETCOLOR_FAILURE}
echo -n $"FAILED"
else
${SETCOLOR_WARNING}
echo -n $"WARNING"
fi
${SETCOLOR_NORMAL}
echo -n "]"
echo
}
install_sendemail () {
if [[ $ID =~ rhel|centos|rocky ]];then
rpm -q sendemail &> /dev/null || yum -y install sendemail
elif [ $ID = 'ubuntu' ];then
dpkg -l | grep -q sendemail || { apt update; apt -y install libio-socket-ssl-perl libnet-ssleay-perl sendemail; }
else
color "不支持此操作系统,退出!" 1
exit
fi
}
send_mail() {
local email_receive="$1"
local email_subject="$2"
local email_message="$3"
sendemail -f $email_send -t $email_receive -u $email_subject -m $email_message -s $email_smtp_server -o message-charset=utf-8 -o tls=yes -xu $email_send -xp $email_passwd
[ $? -eq 0 ] && color "邮件发送成功" 0 || color "邮件发送失败" 1
}
notify() {
if [[ $1 =~ ^(master|backup|fault)$ ]];then
mailsubject="$(hostname) to be $1, vip floating"
mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
send_mail "$contact" "$mailsubject" "$mailbody"
else
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
fi
}
install_sendemail
notify $1
#Make the script executable
[root@ka1 ~]$ chmod +x /etc/keepalived/notify.sh
#Copy it to the second keepalived server
[root@ka1 ~]$ scp /etc/keepalived/notify.sh 10.0.0.18:/etc/keepalived/
Simulate a failure
#Stop the keepalived service on the ka1 node
[root@ka1 ~]$ systemctl stop keepalived.service
6 LVS + Keepalived + Nginx high availability
Implement single-master LVS-DR mode
Prepare the client (host-only network)
#Edit the network configuration file, using the router's eth1 address as the gateway
[root@ubuntu2004 ~]$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
renderer: networkd
version: 2
ethernets:
eth0:
addresses:
- 192.168.10.131/24
gateway4: 192.168.10.130 #router's eth1 IP as the gateway
nameservers:
addresses: [180.76.76.76, 114.114.114.114]
[root@ubuntu2004 ~]$ netplan apply
Prepare the router (two NICs, eth0: NAT mode, eth1: host-only) and enable ip_forward
[root@router ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:24:e2:5d brd ff:ff:ff:ff:ff:ff
inet 10.0.0.48/24 brd 10.0.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe24:e25d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:24:e2:67 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.130/24 brd 192.168.10.255 scope global dynamic noprefixroute eth1
valid_lft 1342sec preferred_lft 1342sec
inet6 fe80::dadb:8a41:7913:8c16/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#Enable ip_forward
[root@router ~]$ vi /etc/sysctl.conf
net.ipv4.ip_forward=1
[root@router ~]$ sysctl -p
net.ipv4.ip_forward = 1
[root@router ~]$ sysctl -a|grep ip_for
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
Prepare the web servers: use a script to bind the VIP to the lo interface and point their gateway at the router's eth0 address
#Prepare the two backend RS hosts
[root@web1 ~]$ cat lvs_dr_rs.sh
#!/bin/bash
vip=10.0.0.200
mask='255.255.255.255'
dev=lo:1
rpm -q nginx &> /dev/null || yum -y install nginx &>/dev/null
systemctl enable --now nginx &> /dev/null && echo "The nginx Server is Ready!"
echo "<h1>`hostname`</h1>" > /usr/share/nginx/html/index.html
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $dev $vip netmask $mask
echo "The RS Server is Ready!"
;;
stop)
ifconfig $dev down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "The RS Server is Canceled!"
;;
*)
echo "Usage: $(basename $0) start|stop"
exit 1
;;
esac
[root@web1 ~]$ sh lvs_dr_rs.sh start
The nginx Server is Ready!
The RS Server is Ready!
[root@web1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.0.0.200/32 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:d3:27:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.28/24 brd 10.0.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed3:2718/64 scope link
valid_lft forever preferred_lft forever
#Change the gateway
[root@web1 ~]$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
IPADDR=10.0.0.28
PREFIX=24
GATEWAY=10.0.0.48 #points to the router's eth0 IP
DNS1=223.5.5.5
DNS2=114.114.114.114
ONBOOT=yes
#Update the routing table
[root@web1 ~]$ ip route del default via 10.0.0.2 dev eth0 proto static metric 100
[root@web1 ~]$ ip route add default via 10.0.0.48 dev eth0 proto static metric 100
[root@web1 ~]$ ip route
default via 10.0.0.48 dev eth0 proto static metric 100
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.28 metric 100
[root@web2 ~]$ sh lvs_dr_rs.sh start
The nginx Server is Ready!
The RS Server is Ready!
[root@web2 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.0.0.200/32 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:d4:26:8f brd ff:ff:ff:ff:ff:ff
inet 10.0.0.38/24 brd 10.0.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed4:268f/64 scope link
valid_lft forever preferred_lft forever
#Change the gateway
[root@web2 ~]$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
IPADDR=10.0.0.38
PREFIX=24
GATEWAY=10.0.0.48
DNS1=223.5.5.5
DNS2=114.114.114.114
ONBOOT=yes
#Update the routing table
[root@web2 ~]$ ip route del default via 10.0.0.2 dev eth0 proto static metric 100
[root@web2 ~]$ ip route add default via 10.0.0.48 dev eth0 proto static metric 100
[root@web2 ~]$ ip route
default via 10.0.0.48 dev eth0 proto static metric 100
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.38 metric 100
#Test direct access to both RSs
[root@client ~]$ curl 10.0.0.28
<h1>10.0.0.28 </h1>
[root@client ~]$ curl 10.0.0.38
<h1>10.0.0.38 </h1>
Configure keepalived
#ka1 configuration
[root@ka1 ~]$ vi /etc/keepalived/conf.d/www.yanlinux.org.conf
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.200 dev eth0 label eth0:1
}
unicast_src_ip 10.0.0.8
unicast_peer {
10.0.0.18
}
#notify_master "/etc/keepalived/notify.sh master"
#notify_backup "/etc/keepalived/notify.sh backup"
#notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 10.0.0.200 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 10.0.0.28 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 10.0.0.38 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
#ka2 configuration
[root@ka2 ~]$ vi /etc/keepalived/conf.d/www.yanlinux.org.conf
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.200 dev eth0 label eth0:1
}
unicast_src_ip 10.0.0.18
unicast_peer {
10.0.0.8
}
# notify_master "/etc/keepalived/notify.sh master"
# notify_backup "/etc/keepalived/notify.sh backup"
# notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 10.0.0.200 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 10.0.0.28 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 10.0.0.38 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
Access test results
[root@client ~]$ curl 10.0.0.200
<h1>10.0.0.28 </h1>
[root@client ~]$ curl 10.0.0.200
<h1>10.0.0.38 </h1>
Simulate failures
#web1 fails: traffic automatically shifts to RS2
[root@web1 ~]$ systemctl stop nginx.service
[root@client ~]$ curl 10.0.0.200
<h1>10.0.0.38 </h1>
[root@client ~]$ curl 10.0.0.200
<h1>10.0.0.38 </h1>
[root@client ~]$ curl 10.0.0.200
<h1>10.0.0.38 </h1>
[root@ka1 ~]$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.200:80 rr
-> 10.0.0.38:80 Route 1 0 5
#ka1 fails: automatic failover to ka2
[root@ka2 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:c5:f0:b9 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.18/24 brd 10.0.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 10.0.0.200/32 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fec5:f0b9/64 scope link
valid_lft forever preferred_lft forever