Week 8 Assignment

1 Redis Sentinel: how it works and how to build the cluster.

1.1 How Sentinel works

Sentinel is a distributed system: a sentinel process runs on each of several nodes at the same time. The Sentinel processes use gossip protocols to exchange information about whether the master is down, and use agreement protocols (voting) to decide whether to perform automatic failover and which slave to promote to the new master.

Each Sentinel process periodically sends messages to the other Sentinels, the master, and the slaves to confirm they are alive. If a node does not respond within the configured time, that Sentinel considers it offline; this is Subjectively Down, abbreviated SDOWN.

If a majority of the Sentinel processes in the cluster consider the master to be in SDOWN, they notify each other using the is-master-down-by-addr command, and the master is then considered Objectively Down, abbreviated ODOWN.

Next, a voting algorithm picks a suitable slave from all the slave nodes and promotes it to the new master, and the relevant configuration of the other slaves is automatically rewritten to point at the new master node, completing the failover.
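
The resulting state can be inspected by hand with sentinel commands (a sketch; mymaster is the monitor name configured in 1.2.2 below, and 26379 is the default sentinel port):

redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster   #IP and port of the current master
redis-cli -p 26379 SENTINEL master mymaster                    #full master state, including num-slaves and quorum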

1.2 Building the Sentinel architecture

Build a highly available Redis deployment with one master, two slaves, and Sentinel.

1.2.1 Prepare the master-slave replication environment

#Install redis on every node with the compile-and-install script
[20:24:34 root@rocky8 ~]$ cat install_redis.sh
read -p "$(echo -e '\033[1;32m请输入下载的版本号:\033[0m')" NUM
read -p "$(echo -e '\033[1;32m请输入设置的密码: \033[0m')" PASSWORD
REDIS_VERSION=redis-${NUM}
INSTALL_DIR=/apps/redis
CPU=`lscpu|awk '/^CPU\(s\)/{print $2}'`
. /etc/os-release

color () {
    RES_COL=60
    MOVE_TO_COL="echo -en \\033[${RES_COL}G"
    SETCOLOR_SUCCESS="echo -en \\033[1;32m"
    SETCOLOR_FAILURE="echo -en \\033[1;31m"
    SETCOLOR_WARNING="echo -en \\033[1;33m"
    SETCOLOR_NORMAL="echo -en \E[0m"
    echo -n "$1" && $MOVE_TO_COL
    echo -n "["
    if [ $2 = "success" -o $2 = "0" ] ;then
        ${SETCOLOR_SUCCESS}
        echo -n $"  OK  "    
    elif [ $2 = "failure" -o $2 = "1"  ] ;then 
        ${SETCOLOR_FAILURE}
        echo -n $"FAILED"
    else
        ${SETCOLOR_WARNING}
        echo -n $"WARNING"
    fi
    ${SETCOLOR_NORMAL}
    echo -n "]"
    echo 
}

prepare(){
    if [ $ID = "centos" -o $ID = "rocky" ];then
        yum -y install gcc make jemalloc-devel systemd-devel
    else
        apt update
        apt -y install gcc make libjemalloc-dev libsystemd-dev
    fi
    if [ $? -eq 0 ];then
        color "依赖包安装成功" 0
    else
        color "依赖包安装失败,请检查网络配置" 1
    fi
}

install(){
    #Download the source code
    if [ ! -f ${REDIS_VERSION}.tar.gz ];then
        wget http://download.redis.io/releases/${REDIS_VERSION}.tar.gz || { color "Redis source download failed" 1 ; exit; }
    fi
    tar xf ${REDIS_VERSION}.tar.gz -C /usr/local/src
    cd /usr/local/src/${REDIS_VERSION}

    #Compile and install
    make -j $CPU USE_SYSTEMD=yes PREFIX=${INSTALL_DIR} install && color "Redis compiled and installed" 0 || { color "Redis compile/install failed" 1 ; exit; }

    #Put the redis binaries on the PATH
    ln -s ${INSTALL_DIR}/bin/redis-* /usr/bin/

    #Prepare the directories and the config file
    mkdir ${INSTALL_DIR}/{etc,log,data,run}
    cp redis.conf ${INSTALL_DIR}/etc/

    #Modify the config file
    sed -i -e 's/bind 127.0.0.1/bind 0.0.0.0/'  -e "/# requirepass/a requirepass $PASSWORD"  -e "/^dir .*/c dir ${INSTALL_DIR}/data/"  -e "/logfile .*/c logfile ${INSTALL_DIR}/log/redis-6379.log"  -e  "/^pidfile .*/c  pidfile ${INSTALL_DIR}/run/redis_6379.pid" ${INSTALL_DIR}/etc/redis.conf

    if id redis &> /dev/null ;then
        color "Redis user already exists" 1
    else
        useradd -r -s /sbin/nologin redis
        color "Redis user created" 0
    fi

    #Fix ownership of the install directory
    chown -R redis.redis ${INSTALL_DIR}

    #Silence the warnings printed at startup
    cat >> /etc/sysctl.conf <<EOF
net.core.somaxconn = 1024
vm.overcommit_memory = 1
EOF
    sysctl -p
    if [ $ID = "centos" -o $ID = "rocky" ];then
        echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.d/rc.local
        chmod +x /etc/rc.d/rc.local
        /etc/rc.d/rc.local
    else
        echo -e '#!/bin/bash\necho never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
        chmod +x /etc/rc.local
        /etc/rc.local
    fi

    #创建redis服务service文件
    cat > /lib/systemd/system/redis.service <<EOF
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=${INSTALL_DIR}/bin/redis-server ${INSTALL_DIR}/etc/redis.conf --supervised systemd
ExecStop=/bin/kill -s QUIT \$MAINPID
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=1000000

[Install]
WantedBy=multi-user.target
EOF

    #Start the service
    systemctl daemon-reload
    systemctl enable --now redis &> /dev/null
    if [ $? -eq 0 ];then
        color "Redis service started; server info follows:" 0
    else
        color "Redis failed to start" 1
        exit
    fi
    sleep 2
    redis-cli -a ${PASSWORD} INFO Server 2> /dev/null
}

prepare
install


#Then add the following two lines to redis.conf on the slaves to enable replication; every node uses the same password
replicaof 10.0.0.8 6379
masterauth lgq123456
#Then restart the redis service on both slave nodes

#Add the following line to the master's config file
masterauth lgq123456

#Check replication status on the master
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:2
min_slaves_good_slaves:2
slave0:ip=10.0.0.18,port=6379,state=online,offset=868,lag=1
slave1:ip=10.0.0.28,port=6379,state=online,offset=868,lag=1
master_failover_state:no-failover
master_replid:2b34ad02ae4648196573139f495053803bdb972e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:882
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:882

1.2.2 Edit the sample sentinel config on all master and slave nodes

#For a source install, the source tree ships sentinel.conf; copy it into the install directory,
#e.g. /apps/redis/etc/sentinel.conf
[11:01:09 root@rocky ~]$ ls /usr/local/src/redis-6.2.6/
00-RELEASENOTES  CONTRIBUTING  INSTALL    README.md   runtest-cluster    sentinel.conf  TLS.md
BUGS             COPYING       Makefile   redis.conf  runtest-moduleapi  src            utils
CONDUCT          deps          MANIFESTO  runtest     runtest-sentinel   tests
#Copy sentinel.conf into the install path on all master and slave nodes
[11:01:44 root@rocky ~]$ cp /usr/local/src/redis-6.2.6/sentinel.conf /apps/redis/etc/
[11:02:47 root@rocky ~]$ chown redis.redis /apps/redis/etc/sentinel.conf
#Edit the sample config file on all master and slave nodes
##Edit the config file on the master
[11:17:28 root@rocky ~]$ vi /apps/redis/etc/sentinel.conf
#The lines below are the ones that were changed
bind 0.0.0.0
logfile "/apps/redis/log/sentinel.log"
sentinel monitor mymaster 10.0.0.8 6379 2
sentinel auth-pass mymaster lgq123456
sentinel down-after-milliseconds mymaster 3000
##No other options were changed

##Edit the config file on slave 1
[11:22:20 root@rocky8 ~]$ vi /apps/redis/etc/sentinel.conf
#The lines below are the ones that were changed
bind 0.0.0.0
logfile "/apps/redis/log/sentinel.log"
sentinel monitor mymaster 10.0.0.8 6379 2
sentinel auth-pass mymaster lgq123456
sentinel down-after-milliseconds mymaster 3000
##No other options were changed

##Edit the config file on slave 2
[11:26:59 root@rocky8 ~]$ vi /apps/redis/etc/sentinel.conf
#The lines below are the ones that were changed
bind 0.0.0.0
logfile "/apps/redis/log/sentinel.log"
sentinel monitor mymaster 10.0.0.8 6379 2
sentinel auth-pass mymaster lgq123456
sentinel down-after-milliseconds mymaster 3000
##No other options were changed

1.2.3 Start the sentinel service

#For a compiled installation, create the service file
#Generate the new service file on all nodes
[11:32:43 root@rocky ~]$ vi /lib/systemd/system/redis-sentinel.service
[Unit]
Description=Redis Sentinel
After=network.target

[Service]
ExecStart=/apps/redis/bin/redis-sentinel /apps/redis/etc/sentinel.conf --supervised systemd
ExecStop=/bin/kill -s QUIT $MAINPID
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target

#Copy the service file to the two slave nodes
[11:37:17 root@rocky ~]$ scp /lib/systemd/system/redis-sentinel.service 10.0.0.18:/lib/systemd/system/
[11:37:33 root@rocky ~]$ scp /lib/systemd/system/redis-sentinel.service 10.0.0.28:/lib/systemd/system/

#Start the sentinel service
[11:35:29 root@rocky ~]$ systemctl enable --now redis-sentinel.service

1.2.4 Verify the sentinel service

#1 Check the sentinel listening port
[12:29:26 root@rocky ~]$ ss -ntl
State      Recv-Q     Send-Q           Local Address:Port            Peer Address:Port     Process     
LISTEN     0          511                    0.0.0.0:26379                0.0.0.0:*                    
LISTEN     0          511                    0.0.0.0:6379                 0.0.0.0:*

#2 Current sentinel state
##In the sentinel state, the last line in particular matters: the master IP, the number of slaves, and the number of sentinels must match the full set of servers
127.0.0.1:26379> INFO sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=10.0.0.8:6379,slaves=2,sentinels=3 #two slaves and three sentinel servers; if the sentinel count is wrong, check for conflicting myid values

1.2.5 Stop the Master to trigger failover

#1 Stop the Master node
[12:37:00 root@rocky ~]$ systemctl stop redis

#2 Check the sentinel information on each node
#On the former slave node 10.0.0.18
[12:39:38 root@rocky8 ~]$ redis-cli -a lgq123456
127.0.0.1:6379> INFO replication
# Replication
role:master #formerly a slave, now promoted to master
connected_slaves:1
slave0:ip=10.0.0.28,port=6379,state=online,offset=269753,lag=1
master_failover_state:no-failover
master_replid:9d20221ff15a2033d123538a784a8265590f62db
master_replid2:785de2f5d49695821aa6153f2bfdf8aa3aa89d3a
master_repl_offset:269753
second_repl_offset:241513
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:269753

#Observe the state on 10.0.0.28
127.0.0.1:6379> INFO replication
# Replication
role:slave
master_host:10.0.0.18 #its master is now 10.0.0.18
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_read_repl_offset:306356
slave_repl_offset:306356
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:9d20221ff15a2033d123538a784a8265590f62db
master_replid2:785de2f5d49695821aa6153f2bfdf8aa3aa89d3a
master_repl_offset:306356
second_repl_offset:241513
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:15
repl_backlog_histlen:306342

#3 Sentinel log messages during the failover
[12:29:30 root@rocky ~]$ tail -f /apps/redis/log/sentinel.log
2752:X 27 Oct 2022 12:37:46.644 # +sdown master mymaster 10.0.0.8 6379
2752:X 27 Oct 2022 12:37:46.701 # +new-epoch 1
2752:X 27 Oct 2022 12:37:46.703 # +vote-for-leader bb0bb2967f9d97cf868e7a658238cc24dc8ab47a 1
2752:X 27 Oct 2022 12:37:46.709 # +odown master mymaster 10.0.0.8 6379 #quorum 3/2
2752:X 27 Oct 2022 12:37:46.709 # Next failover delay: I will not start a failover before Thu Oct 27 12:43:46 2022
2752:X 27 Oct 2022 12:37:47.772 # +config-update-from sentinel bb0bb2967f9d97cf868e7a658238cc24dc8ab47a 10.0.0.18 26379 @ mymaster 10.0.0.8 6379
2752:X 27 Oct 2022 12:37:47.773 # +switch-master mymaster 10.0.0.8 6379 10.0.0.18 6379
2752:X 27 Oct 2022 12:37:47.774 * +slave slave 10.0.0.28:6379 10.0.0.28 6379 @ mymaster 10.0.0.18 6379
2752:X 27 Oct 2022 12:37:47.774 * +slave slave 10.0.0.8:6379 10.0.0.8 6379 @ mymaster 10.0.0.18 6379
2752:X 27 Oct 2022 12:37:50.813 # +sdown slave 10.0.0.8:6379 10.0.0.8 6379 @ mymaster 10.0.0.18 6379

1.2.6 Verify the failover

#After the failover, the master IP in the replicaof line of redis.conf is rewritten
#Check on 10.0.0.28
[12:47:44 root@rocky8 ~]$ grep ^replicaof /apps/redis/etc/redis.conf 
replicaof 10.0.0.18 6379 #switched to 10.0.0.18

#The sentinel monitor IP in the sentinel config file is rewritten as well
[12:48:02 root@rocky8 ~]$ grep "^[a-Z]" /apps/redis/etc/sentinel.conf 
bind 0.0.0.0
port 26379
daemonize no
pidfile "/var/run/redis-sentinel.pid"
logfile "/apps/redis/log/sentinel.log"
dir "/tmp"
sentinel monitor mymaster 10.0.0.18 6379 2 #the master IP has changed here too
sentinel auth-pass mymaster lgq123456
sentinel down-after-milliseconds mymaster 3000
acllog-max-len 128
sentinel deny-scripts-reconfig yes
sentinel resolve-hostnames no
sentinel announce-hostnames no
protected-mode no
supervised systemd
user default on nopass ~* &* +@all
sentinel myid 50d92394837a013456af8a5b36931d37b2a5f0c5
sentinel config-epoch mymaster 1
sentinel leader-epoch mymaster 1
sentinel current-epoch 1
sentinel known-replica mymaster 10.0.0.28 6379
sentinel known-replica mymaster 10.0.0.8 6379
sentinel known-sentinel mymaster 10.0.0.8 26379 1baea43d8ae61c17d90577b63034595d2d7e0080
sentinel known-sentinel mymaster 10.0.0.18 26379 bb0bb2967f9d97cf868e7a658238cc24dc8ab47a

1.2.7 Rejoin the original master to the Redis cluster

#Restart the redis service on the original master node
[12:37:43 root@rocky ~]$ systemctl start redis
#Notice its config file now holds the current master's information
[12:51:52 root@rocky ~]$ grep ^replicaof /apps/redis/etc/redis.conf 
replicaof 10.0.0.18 6379
[12:55:56 root@rocky ~]$ grep ^[a-z] /apps/redis/etc/sentinel.conf 
bind 0.0.0.0
port 26379
daemonize no
pidfile "/var/run/redis-sentinel.pid"
logfile "/apps/redis/log/sentinel.log"
dir "/tmp"
sentinel monitor mymaster 10.0.0.18 6379 2
sentinel auth-pass mymaster lgq123456
sentinel down-after-milliseconds mymaster 3000
acllog-max-len 128
sentinel deny-scripts-reconfig yes
sentinel resolve-hostnames no
sentinel announce-hostnames no
protected-mode no
supervised systemd
user default on nopass ~* &* +@all
sentinel myid 1baea43d8ae61c17d90577b63034595d2d7e0080
sentinel config-epoch mymaster 1
sentinel leader-epoch mymaster 1
sentinel current-epoch 1
sentinel known-replica mymaster 10.0.0.8 6379
sentinel known-replica mymaster 10.0.0.28 6379
sentinel known-sentinel mymaster 10.0.0.18 26379 bb0bb2967f9d97cf868e7a658238cc24dc8ab47a
sentinel known-sentinel mymaster 10.0.0.28 26379 50d92394837a013456af8a5b36931d37b2a5f0c5

#Check its state
127.0.0.1:6379> INFO replication
# Replication
role:slave #it has become a slave
master_host:10.0.0.18 #the master's IP
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_read_repl_offset:431834
slave_repl_offset:431834
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
min_slaves_good_slaves:0
master_failover_state:no-failover
master_replid:9d20221ff15a2033d123538a784a8265590f62db
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:431834
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:409238
repl_backlog_histlen:22597


#Observe the state on the new master
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:2
slave0:ip=10.0.0.28,port=6379,state=online,offset=466831,lag=1
slave1:ip=10.0.0.8,port=6379,state=online,offset=466831,lag=1
master_failover_state:no-failover
master_replid:9d20221ff15a2033d123538a784a8265590f62db
master_replid2:785de2f5d49695821aa6153f2bfdf8aa3aa89d3a
master_repl_offset:466964
second_repl_offset:241513
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:466964

2 How the common LVS models work, and their implementation.

  • lvs-nat: essentially multi-target DNAT; the destination IP and port of the request are rewritten to the RIP and PORT of a selected RS (see the sketch after this list)
  • lvs-dr: Direct Routing, the LVS default and most widely used mode; the request is forwarded by re-encapsulating it with a new MAC header: the source MAC is that of the interface holding the DIP, and the destination MAC is that of the interface holding the selected RS's RIP; source and destination IP/PORT are left unchanged
  • lvs-tun: the request's IP header is not modified (source IP is CIP, destination IP is VIP); instead an extra IP header is wrapped around the original packet (source IP is DIP, destination IP is RIP) and the packet is sent to the selected RS; the RS responds to the client directly (source IP is VIP, destination IP is CIP)
  • lvs-fullnat: rewrites both the source and destination IP of the request; not supported by the stock kernel
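
As referenced in the lvs-nat item above, a minimal ipvsadm sketch of the NAT model (the VIP and RIPs here are made-up addresses; -m selects masquerading, i.e. NAT):

ipvsadm -A -t 10.0.0.100:80 -s rr                   #define the virtual service
ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.7:80 -m   #add RS1 in NAT mode
ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.17:80 -m  #add RS2 in NAT mode
#in NAT mode the RS must use the DIP as their gateway so that replies return through the LVS host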

3 LVS scheduling policies, where each applies, and implementing one or two scenarios with LVS DR.

3.1 Scheduling policies

3.1.1 Static methods

Scheduling is based on the algorithm alone:

  1. RR: roundrobin, round robin; common, every RS gets an equal turn
  2. WRR: Weighted RR, weighted round robin; common
  3. SH: Source Hashing, implements session sticky by hashing the source IP address; requests from the same IP address are always sent to the RS picked the first time, which gives session binding
  4. DH: Destination Hashing; the first request is scheduled to an RS round-robin, and later requests for the same destination address are always forwarded to that same RS; the typical use case is load balancing in forward-proxy cache scenarios, e.g. web caches

3.1.2 Dynamic methods

Scheduling is based mainly on each RS's current load plus the scheduling algorithm; the RS with the smaller Overhead value is chosen (a worked sketch of the formulas follows this list).

  1. LC: least connections, suited to long-lived connection applications

    Overhead=activeconns*256+inactiveconns
  2. WLC: Weighted LC, the default scheduling method; common

    Overhead=(activeconns*256+inactiveconns)/weight
  3. SED: Shortest Expected Delay, initial connections favor higher weights; only active connections are counted, inactive ones are ignored

    Overhead=(activeconns+1)*256/weight
  4. NQ: Never Queue, the first round is distributed evenly, after that SED

  5. LBLC: Locality-Based LC, a dynamic DH algorithm; use case: forward proxying scheduled by load state, e.g. Web Cache

  6. LBLCR: LBLC with Replication; fixes the load imbalance of LBLC by replicating content from heavily loaded to lightly loaded RS; Web Cache etc.
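
As referenced above, a worked sketch of the WLC formula with made-up connection counts and weights:

#RS1: 10 active, 100 inactive, weight 1  ->  (10*256+100)/1 = 2660
#RS2: 20 active,  50 inactive, weight 4  ->  (20*256+50)/4  = 1292.5
echo "scale=1; (10*256+100)/1" | bc   #2660.0
echo "scale=1; (20*256+50)/4" | bc    #1292.5
#RS2 has the smaller Overhead, so WLC sends the next connection to RS2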

3.2 DR examples

3.2.1 LVS-DR single-subnet example

1 Network configuration of each server

#1 RS1 network configuration
##Install and configure httpd
[16:05:40 root@centos7 ~]$ yum -y install httpd mod_ssl redis
[16:07:02 root@centos7 ~]$ systemctl enable --now httpd redis
[16:07:47 root@centos7 ~]$ hostnamectl set-hostname web1
[16:25:26 root@web1 ~]$ hostname > /var/www/html/index.html

[10:31:03 root@web1 /etc/sysconfig/network-scripts]$ cat ifcfg-eth0 
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
IPADDR=10.0.0.7
PREFIX=24
GATEWAY=10.0.0.200
ONBOOT=yes
[10:38:28 root@web1 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.200      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0

#2 RS2 network configuration
[16:05:40 root@centos7 ~]$ yum -y install httpd mod_ssl redis
[16:07:02 root@centos7 ~]$ systemctl enable --now httpd redis
[16:07:47 root@centos7 ~]$ hostnamectl set-hostname web2
[16:25:26 root@web2 ~]$ hostname > /var/www/html/index.html
[11:25:20 root@web2 /etc/sysconfig/network-scripts]$ vi ifcfg-eth0 
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
IPADDR=10.0.0.17
PREFIX=24
GATEWAY=10.0.0.200
ONBOOT="yes"
[11:25:53 root@web2 /etc/sysconfig/network-scripts]$ nmcli connection reload
[11:26:09 root@web2 /etc/sysconfig/network-scripts]$ nmcli connection up eth0
[11:26:21 root@web2 /etc/sysconfig/network-scripts]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.200      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0


#3 internet host network configuration
[10:55:23 root@rocky ~]$ hostnamectl set-hostname internet
[10:56:32 root@internet /etc/sysconfig/network-scripts]$ cat ifcfg-eth0 
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
IPADDR=192.168.10.6
PREFIX=24
GATEWAY=192.168.10.200
ONBOOT=yes
[10:57:13 root@internet ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.10.200  0.0.0.0         UG    100    0        0 eth0
192.168.10.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0

#4 Router network configuration
##Enable ip_forward
[10:50:21 root@router ~]$ echo 'net.ipv4.ip_forward=1' >>/etc/sysctl.conf 
[10:59:39 root@router ~]$ sysctl -p
net.ipv4.ip_forward = 1
[11:11:57 root@router netplan]$ cat 00-installer-config.yaml 
# This is the network config written by 'subiquity'
network: 
  version: 2 
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 10.0.0.200/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [180.76.76.76, 114.114.114.114]
    eth1:
      addresses:
        - 192.168.10.200/24
[11:12:00 root@router netplan]$ netplan apply
##Test connectivity
[11:14:18 root@router ~]$ curl 10.0.0.7
web1
[11:14:26 root@router ~]$ curl 10.0.0.17
web2
[11:14:28 root@router ~]$ ping 192.168.10.6
PING 192.168.10.6 (192.168.10.6) 56(84) bytes of data.
64 bytes from 192.168.10.6: icmp_seq=1 ttl=64 time=0.526 ms
64 bytes from 192.168.10.6: icmp_seq=2 ttl=64 time=0.733 ms
64 bytes from 192.168.10.6: icmp_seq=3 ttl=64 time=0.924 ms


#5 LVS server network configuration
[11:28:20 root@LVS network-scripts]$ vi ifcfg-eth0
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
IPADDR=10.0.0.8
PREFIX=24
GATEWAY=10.0.0.200
ONBOOT=yes
[11:30:03 root@LVS network-scripts]$ nmcli connection reload
[11:30:20 root@LVS network-scripts]$ nmcli connection up eth0
[11:29:53 root@LVS network-scripts]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    100    0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0

2 IPVS-related configuration on the backend RS

#RS1 configuration
##The settings below only take effect for the current boot; to persist them, write them into /etc/sysctl.conf
[11:42:27 root@web1 ~]$ echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[11:43:58 root@web1 ~]$ echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[11:44:06 root@web1 ~]$ echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[11:44:15 root@web1 ~]$ echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
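##A sketch of persisting the four settings above; the sysctl keys mirror the /proc paths just used
cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
EOF
sysctl -p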
##Add the VIP to the loopback interface
[11:47:17 root@web1 ~]$ ip a a 10.0.0.100/32 dev lo label lo:1
[11:47:46 root@web1 ~]$ ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:feba:abf1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:ba:ab:f1  txqueuelen 1000  (Ethernet)
        RX packets 1585  bytes 134157 (131.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1262  bytes 135781 (132.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 144  bytes 12736 (12.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 144  bytes 12736 (12.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)

#RS2 configuration
[11:44:46 root@web2 ~]$ echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[11:44:51 root@web2 ~]$ echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[11:44:59 root@web2 ~]$ echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[11:45:05 root@web2 ~]$ echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[11:46:23 root@web2 ~]$ ifconfig lo:1 10.0.0.100/32
[11:50:07 root@web2 ~]$ ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.17  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fef0:b9b8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f0:b9:b8  txqueuelen 1000  (Ethernet)
        RX packets 1319  bytes 112281 (109.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 949  bytes 111613 (108.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 53  bytes 4212 (4.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53  bytes 4212 (4.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.0.0.100  netmask 0.0.0.0
        loop  txqueuelen 1000  (Local Loopback)

3 LVS host configuration

#Add the VIP on the LVS host
[11:59:02 root@LVS ~]$ ifconfig lo:1 10.0.0.100/32
[12:00:58 root@LVS ~]$ ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.8  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:feb9:2bb4  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b9:2b:b4  txqueuelen 1000  (Ethernet)
        RX packets 180  bytes 15534 (15.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 141  bytes 12191 (11.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 12  bytes 1020 (1020.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 1020 (1020.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.0.0.100  netmask 0.0.0.0
        loop  txqueuelen 1000  (Local Loopback)

#Add the LVS rules
[12:01:03 root@LVS ~]$ ipvsadm -A -t 10.0.0.100:80 -s rr
[12:03:13 root@LVS ~]$ ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.7:80 -g
[12:03:41 root@LVS ~]$ ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.17:80 -g
[12:03:45 root@LVS ~]$ ipvsadm -Ln 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 rr
  -> 10.0.0.7:80                   Route   1      0          0         
  -> 10.0.0.17:80                  Route   1      0          0 

4 Test access

[12:04:55 root@internet ~]$ curl 10.0.0.100
web1
[12:05:57 root@internet ~]$ curl 10.0.0.100
web2
[12:05:58 root@internet ~]$ curl 10.0.0.100
web1
[12:05:59 root@internet ~]$ curl 10.0.0.100
web2

3.2.2 LVS-DR multi-subnet example

Single-subnet DR mode easily exposes the addresses of the backend RS servers; a cross-subnet DR model can be used instead for better security.

1 Network configuration of each server

#1 RS1 network configuration
##Install and configure httpd
[16:05:40 root@centos7 ~]$ yum -y install httpd mod_ssl redis
[16:07:02 root@centos7 ~]$ systemctl enable --now httpd redis
[16:07:47 root@centos7 ~]$ hostnamectl set-hostname web1
[16:25:26 root@web1 ~]$ hostname > /var/www/html/index.html
[10:31:03 root@web1 /etc/sysconfig/network-scripts]$ cat ifcfg-eth0 
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
IPADDR=10.0.0.7
PREFIX=24
GATEWAY=10.0.0.200
ONBOOT=yes
[10:38:28 root@web1 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.200      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0


#2 RS2 network configuration
[16:05:40 root@centos7 ~]$ yum -y install httpd mod_ssl redis
[16:07:02 root@centos7 ~]$ systemctl enable --now httpd redis
[16:07:47 root@centos7 ~]$ hostnamectl set-hostname web2
[16:25:26 root@web2 ~]$ hostname > /var/www/html/index.html
[11:25:20 root@web2 /etc/sysconfig/network-scripts]$ vi ifcfg-eth0 
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
IPADDR=10.0.0.17
PREFIX=24
GATEWAY=10.0.0.200
ONBOOT="yes"
[11:25:53 root@web2 /etc/sysconfig/network-scripts]$ nmcli connection reload
[11:26:09 root@web2 /etc/sysconfig/network-scripts]$ nmcli connection up eth0
[11:26:21 root@web2 /etc/sysconfig/network-scripts]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.200      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0


#3 internet host network configuration
[10:55:23 root@rocky ~]$ hostnamectl set-hostname internet
[10:56:32 root@internet /etc/sysconfig/network-scripts]$ cat ifcfg-eth0 
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
IPADDR=192.168.10.6
PREFIX=24
GATEWAY=192.168.10.200
ONBOOT=yes
[10:57:13 root@internet ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.10.200  0.0.0.0         UG    100    0        0 eth0
192.168.10.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0

#4 Router network configuration
##Enable ip_forward
[10:50:21 root@router ~]$ echo 'net.ipv4.ip_forward=1' >>/etc/sysctl.conf 
[10:59:39 root@router ~]$ sysctl -p
net.ipv4.ip_forward = 1
[11:11:57 root@router netplan]$ cat 00-installer-config.yaml 
# This is the network config written by 'subiquity'
network: 
  version: 2 
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 10.0.0.200/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [180.76.76.76, 114.114.114.114]
    eth1:
      addresses:
        - 192.168.10.200/24
[11:12:00 root@router netplan]$ netplan apply
##Add a VIP on eth0
[16:37:22 root@router ~]$ ip a a 172.16.0.200/24 dev eth0 label eth0:1
[16:37:26 root@router ~]$ ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.200  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fedb:8e62  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:db:8e:62  txqueuelen 1000  (Ethernet)
        RX packets 9242  bytes 5198530 (5.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6292  bytes 472480 (472.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:db:8e:62  txqueuelen 1000  (Ethernet)

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.200  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::20c:29ff:fedb:8e6c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:db:8e:6c  txqueuelen 1000  (Ethernet)
        RX packets 969  bytes 77498 (77.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 599  bytes 73578 (73.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1333  bytes 97114 (97.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1333  bytes 97114 (97.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
##Test connectivity
[11:14:18 root@router ~]$ curl 10.0.0.7
web1
[11:14:26 root@router ~]$ curl 10.0.0.17
web2
[11:14:28 root@router ~]$ ping 192.168.10.6
PING 192.168.10.6 (192.168.10.6) 56(84) bytes of data.
64 bytes from 192.168.10.6: icmp_seq=1 ttl=64 time=0.526 ms
64 bytes from 192.168.10.6: icmp_seq=2 ttl=64 time=0.733 ms
64 bytes from 192.168.10.6: icmp_seq=3 ttl=64 time=0.924 ms


#5 LVS server network configuration
[11:28:20 root@LVS network-scripts]$ vi ifcfg-eth0
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
IPADDR=10.0.0.8
PREFIX=24
GATEWAY=10.0.0.200
ONBOOT=yes
[11:30:03 root@LVS network-scripts]$ nmcli connection reload
[11:30:20 root@LVS network-scripts]$ nmcli connection up eth0
[11:29:53 root@LVS network-scripts]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    100    0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0

2 LVS configuration on the backend RS

[16:55:48 root@web1 ~]$ cat lvs_DR_RS.sh
#!/bin/bash
#
#***********************************************************
#Author:            yanli
#url:               www.yanlinux.cn
#Date:              2022-10-29
#FileName:          lvs_DR_RS.sh
#Description:        
#***********************************************************
read -p "$(echo -e '\033[1;32m请输入要设置的VIP地址:\033[0m')" vip
mask='255.255.255.255'
dev=lo:1

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask
    echo "The RS Server is Ready!"
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac

#Run this script on both RS servers
[16:56:35 root@web1 ~]$ sh lvs_DR_RS.sh start
Enter the VIP address to set: 172.16.0.100
The RS Server is Ready!
[16:58:15 root@web2 ~]$ sh lvs_DR_RS.sh start
Enter the VIP address to set: 172.16.0.100
The RS Server is Ready!

3 LVS server configuration

[17:03:15 root@LVS ~]$ cat lvs_DR_VS.sh 
#!/bin/bash
#
#***********************************************************
#Author:            yanli
#Date:              2022-10-29
#FileName:          lvs_DR_VS.sh
#Description:        
#***********************************************************
read -p "$(echo -e '\033[1;32m请输入要设置的VIP地址:\033[0m')" vip
iface='lo:1'
mask='255.255.255.255'
port='80'
rs1='10.0.0.7'
rs2='10.0.0.17'
scheduler='wrr'
type='-g'
rpm -q ipvsadm &> /dev/null || yum -y install ipvsadm &> /dev/null

case $1 in
start)
    ifconfig $iface $vip netmask $mask #broadcast $vip up
    iptables -F

    ipvsadm -A -t ${vip}:${port} -s $scheduler
    ipvsadm -a -t ${vip}:${port} -r ${rs1} $type -w 1
    ipvsadm -a -t ${vip}:${port} -r ${rs2} $type -w 1
    echo "The VS Server is Ready!"
    ;;
stop)
    ipvsadm -C
    ifconfig $iface down
    echo "The VS Server is Canceled!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac

#Run the script
[17:04:26 root@LVS ~]$ sh lvs_DR_VS.sh start
[17:04:59 root@LVS ~]$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.0.100:80 wrr
  -> 10.0.0.7:80                  Route   1      0          0         
  -> 10.0.0.17:80                 Route   1      0          0


#Test from the internet host
[17:02:57 root@internet ~]$ while true;do curl 172.16.0.100;sleep 1;done
web2
web1
web2
web1
web2
web1
web2
web1
web2

4 The HTTP communication process and a summary of related technical terms.

4.1 The HTTP communication process

  • Establish a TCP connection: before the HTTP work begins, the web browser establishes a TCP connection to the server over the network, normally to port 80
  • The web browser sends the request line to the server: GET /sample/hello.jsp HTTP/1.1
  • The web browser sends the request headers: after the request line, the browser sends some information to the server in the form of headers, then sends one blank line to tell the server that it has finished sending headers
  • The web server answers: HTTP/1.1 200 OK
  • The web server sends the response headers: just as the browser sends information about itself, the server sends data about itself and about the requested document along with the response
  • The web server sends data to the browser: after the headers it sends one blank line to mark the end of the headers, then sends the actual data the client requested, in the format described by the Content-Type response header
  • The web server closes the connection: normally, once the web server has sent the requested data to the browser, it closes the TCP connection. If the browser or the server adds Connection: keep-alive to its headers, the TCP connection stays open after the response, and the browser can keep sending requests over the same connection. (The raw exchange is sketched after this list.)
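
As referenced above, a sketch of driving this exchange by hand with nc (the address reuses the web1 server from section 3; Connection: close just ends the exchange after one response):

printf 'GET /index.html HTTP/1.1\r\nHost: 10.0.0.7\r\nConnection: close\r\n\r\n' | nc 10.0.0.7 80
#the reply is the status line, then the response headers, a blank line, and the body:
#HTTP/1.1 200 OK
#Content-Type: text/html
#...
#web1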

4.2 Related technical terms

4.2.1 Web development languages

  • html: Hyper Text Markup Language, a markup language; mainly responsible for the structure of the page
  • css: Cascading Style Sheets, which define how HTML elements are displayed (dressed up), e.g. font size and color properties. Styles are usually kept in external .css files that hold properties shared by several HTML files, so by editing one simple CSS document you can change the layout and appearance of every page on the site at once
  • javascript: implements animation effects on the page, but still counts as a static resource

4.2.2 MIME

MIME: Multipurpose Internet Mail Extensions.

MIME messages can contain text, images, audio, video, and other application-specific data.

4.2.3 URI and URL

  • URI: Uniform Resource Identifier, which subdivides into URL and URN

  • URN: Uniform Resource Naming

  • Example: the magnet links used for P2P downloads are one implementation of URN: magnet:?xt=urn:btih:660557A6890EF888666

  • URL: Uniform Resource Locator, which describes the location of a specific resource on a specific server

The difference between them: a URN is like a person's name, while a URL is like that person's address. In other words, a URN defines the identity of a thing, while a URL provides a way of finding it; a URN only names, it does not specify a location.

4.2.4 Website traffic metrics

Key metrics in website traffic statistics

  • IP (unique IPs): Internet Protocol, the count of distinct IPs. Visits from the same client IP address within one day are counted once; it records how many times machines, identified by remote client IP, accessed the site, and is an important measure of site traffic
  • PV (page views): Page View, page views or clicks, counted once for every refresh by a user. PV reflects how many pages of a site were viewed; it grows with the number of visitors, but it is not the number of visitors, it is the number of pages the site served
  • UV (unique visitors): Unique Visitor; one computer visiting the site counts as one visitor, and the same client is counted once per day, so it can be read as the number of computers that visited the site. The site identifies a visiting machine through cookies: if a client changes its IP but does not clear cookies and visits the same site again, the site's UV count does not change. (A sketch of computing IP and PV from an access log follows this list.)
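
As referenced above, a sketch of deriving the IP and PV counts from an nginx access log (the log path reuses the one from section 6; in the default log format, field 1 is the client address):

awk '{print $1}' /apps/nginx/logs/pc-access.log | sort -u | wc -l   #IP: distinct client addresses
wc -l < /apps/nginx/logs/pc-access.log                              #PV: one log line per request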

Traffic volume calculations

  • QPS: requests per second
  • Conversion formulas between PV, QPS, and concurrent connections
    • QPS = PV * derived connections per page / seconds in the period (86400)
    • concurrent connections = QPS * average http response time
  • Peak hours: 80% of each day's visits fall within 20% of the day; that 20% is the peak period
  • Peak requests per second (peak QPS) = (total PV * derived connections per page * 80%) / (seconds per day * 20%), worked below
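
A worked sketch of the peak-QPS formula with made-up figures:

#assume 10,000,000 PV per day and 10 derived connections per page view
echo "scale=1; (10000000 * 10 * 0.8) / (86400 * 0.2)" | bc   #4629.6 requests per second at peak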

5 Summary of network I/O models and the nginx architecture.

5.1 Network I/O models

5.1.1 Blocking I/O model (blocking IO)

The blocking I/O model is the simplest I/O model.

When the application calls recvfrom to read data, the system call does not return until the kernel has prepared the packet and copied it into the application's buffer, or an error occurs; the process stays waiting the whole time, i.e. it is blocked for that period.

5.1.2 Non-blocking I/O model (nonblocking IO)

With non-blocking I/O, when the application calls recvfrom to read and the buffer holds no data, the call immediately returns an EWOULDBLOCK error instead of making the application wait. Since an error marker comes back at once when there is no data, the application that wants to read has to keep calling recvfrom until the data it wants is ready. In plain terms: when the application asks to read and the kernel's data is not ready, the kernel says so immediately rather than leaving the application waiting there.

5.1.3 I/O multiplexing model (I/O multiplexing)

The process hands one or more fds to select and blocks on the select call; select watches the fds for readiness on our behalf, and when some fd becomes ready, select returns a data-readable status and the application then calls recvfrom to read the data.

The basic idea of multiplexed I/O is to monitor many fds through select/poll/epoll, continuously polling all the sockets it is responsible for and notifying the user process when some socket has data. This avoids creating one monitoring thread per fd and thus saves thread resources.
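
nginx is a concrete user of this model: on Linux its event module picks epoll automatically, and the choice can also be made explicit in the events block (a config sketch; the surrounding directives are covered in section 6):

events {
    use epoll; #the I/O multiplexing method: select, poll or epoll
    worker_connections 65536;
}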

5.1.4 Signal-driven I/O model (signal-driven IO)

First the socket's signal-driven I/O function is enabled and a signal handler is installed through the sigaction system call; this call returns immediately. When the data is ready, the kernel generates a SIGIO signal for the process, and the signal callback notifies the application thread to call recvfrom to read the data.

Although select in the multiplexing model can monitor many fds, its essence is still to poll the fds repeatedly for data status, and most of those polls are wasted. Signal-driven I/O therefore builds a signal association instead, so that after issuing the request the application only needs to wait for the data-ready notification, avoiding large numbers of useless status polls.

5.1.5 Asynchronous I/O (asynchronous IO)

The application tells the kernel to start some operation and has the kernel notify it once the whole operation has completed. The main difference from the signal-driven model is that signal-driven I/O has the kernel tell us when we may start the next I/O operation, while asynchronous I/O has the kernel tell us when the operation has finished.

The optimization of asynchronous I/O is that it removes the two-phase pattern in which the application first sends a status inquiry and then a request to receive the data: in asynchronous mode, one request to the kernel covers both the status check and the data copy.

5.2 The nginx architecture

One master process, which spawns one or more worker processes.

  • master: loads and parses the configuration file, manages the worker processes, performs smooth upgrades, ...
  • worker: handles and responds to user requests

One master has multiple workers; each worker can respond to n requests, and each worker consists of the core module plus many surrounding modules.

The nginx architecture in outline:

  • 1. After nginx starts it creates a master process, which after a series of setup steps spawns one or more worker processes;
  • 2. When a client requests a dynamic site, the nginx server also communicates with backend servers: nginx proxies the received web request to a backend server, which does the data processing and assembly;
  • 3. To respond to requests faster and reduce network pressure, nginx uses a caching mechanism, storing historical response data locally to guarantee fast access to cached files. (The master/worker split can be observed directly, as sketched below.)
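
A quick way to observe the master/worker relationship described above (a sketch; the install path is the one used throughout this document):

ps -ef --forest | grep '[n]ginx'
#root   ...  nginx: master process /apps/nginx/sbin/nginx
#nginx  ...   \_ nginx: worker process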

6 Summary of nginx core configuration and tuning.

6.1 Global tuning options

user nginx nginx; #user and group the nginx worker processes run as; when creating the nginx account it is best to pin the same uid across the whole cluster so the ids correspond on every machine

worker_processes [number | auto]; #number of nginx worker processes, generally set equal to the number of CPU cores. Because machines differ in core count, a hard-coded worker count wastes resources on some machines, so instead of fixing the number you can write auto to match the machine's core count automatically.

worker_cpu_affinity 00000001 00000010 00000100 00001000 | auto ; #bind the nginx worker processes to specific CPU cores. By default nginx does no binding. Binding does not mean the nginx process owns that core exclusively, but it guarantees the process will not run on other cores, which greatly reduces the workers hopping between CPU cores and cuts the CPU's cost of scheduling, resource allocation and reclamation, and memory management, so it can noticeably improve nginx server performance.
CPU MASK: 00000001: CPU 0
          00000010: CPU 1
          10000000: CPU 7
#Example:
worker_cpu_affinity 0001 0010 0100 1000; #CPU 0 through CPU 3

worker_priority 0; #worker process priority, -20 to 20 (19)
worker_rlimit_nofile 65536; #upper limit on the number of files all worker processes can open, covering all of nginx's connections (such as connections to proxied servers), not just client connections. Another consideration is that the actual number of concurrent connections cannot exceed the system-level open-file limit, so it is best to keep this consistent with ulimit -n or limits.conf; the kernel limit must be raised as well (a check is sketched after the events block below).

events {
    worker_connections 65536; #maximum concurrent connections per worker process
    accept_mutex on; #on means each request is handled by one worker in turn, preventing all workers from being woken at once and thus avoiding waking many sleeping processes; the default off wakes every worker process for each new request, a phenomenon known as the "thundering herd", which is why a freshly installed nginx should be tuned. Recommended: on
    multi_accept on; #on lets each nginx worker process accept several new network connections at once; the default is off, i.e. one worker accepts only one new connection at a time. Recommended: on
}
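
As referenced in the worker_rlimit_nofile note above, a sketch of checking the system-side file-descriptor limits it has to stay within:

ulimit -n                   #per-process limit, raised via /etc/security/limits.conf
cat /proc/sys/fs/file-max   #kernel-wide limit, raised via fs.file-max in /etc/sysctl.conf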

6.2 http configuration and tuning

http {
    include mime.types; #import the supported file types; the path is relative to the /apps/nginx/conf directory
    default_type application/octet-stream; #default type for files outside mime.types; visiting such an unmatched file type prompts a download

#Logging section
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

#Custom tuning parameters
    sendfile on;
    #tcp_nopush on; #with sendfile enabled, merge responses and send them to the client together; requires sendfile to be on
    #tcp_nodelay off; #whether connections in keepalive mode use the TCP_NODELAY option; off delays sending by 0.2s, while the default on sends the user's response immediately without delay.

    #keepalive_timeout 0;
    keepalive_timeout 65 65; #session keep-alive time; the second value is sent as the response header Keep-Alive: timeout=65 and may differ from the first value
    #gzip on; #enable file compression

    server {
        listen 80; #listen address and port
        server_name localhost; #server name; several may be given separated by spaces, and regular expressions are supported, e.g.: *.magedu.com www.magedu.* ~^www\d+\.magedu\.com$ default_server
        #charset koi8-r; #character set; the default is a Russian encoding, utf-8 is recommended
        #access_log logs/host.access.log main;
        location / {
            root html;
            index index.html index.htm;
        }

        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html; #define the error page
        location = /50x.html {
            root html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ { #forward php requests to the given web server over http
        #     proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ { #forward php requests to php for processing via fastcgi
        #     root html;
        #     fastcgi_pass 127.0.0.1:9000;
        #      fastcgi_index index.php;
        #     fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #     include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht { #deny web access to the named files; many sites use .htaccess files to change their own redirects and similar behavior.
        #     deny all;
        #}
        location ~ /passwd.html {
            deny all;
        }
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server { #custom virtual server
    #     listen 8000;
    #     listen somename:8080;
    #     server_name somename alias another.alias;

    #     location / {
    #         root html;
    #         index index.html index.htm; #default page files; this directive is provided by the ngx_http_index_module module

    #     }
    #}

    # HTTPS server
    #
    #server { #https server configuration
    #     listen 443 ssl;
    #      server_name localhost;

    #     ssl_certificate cert.pem;
    #     ssl_certificate_key cert.key;

    #     ssl_session_cache shared:SSL:1m;
    #     ssl_session_timeout 5m;

    #     ssl_ciphers HIGH:!aNULL:!MD5;
    #     ssl_prefer_server_ciphers on;

    #     location / {
    #         root html;
    #          index index.html index.htm;
    #     }
    #}

#whether the Server response header shows the nginx version; setting it to off is recommended
server_tokens on | off | build | string;

6.3 Enabling compression

#enable or disable gzip compression, off by default
gzip on | off;

#compression level from 1 (lowest) to 9 (highest), default 1
gzip_comp_level level;

#disable gzip for IE6
gzip_disable "MSIE [1-6]\.";

#minimum file size for gzip; files smaller than this value are not compressed
gzip_min_length 1k;

#minimum protocol version when compression is enabled, default HTTP/1.1
gzip_http_version 1.0 | 1.1;

#number and size of the buffers nginx requests for compression; platform dependent, default 32 4k or 16 8k
gzip_buffers number size;

#which resource types to compress; the default is gzip_types text/html, which must not be specified explicitly or an error results
gzip_types mime-type ...;

#if compression is enabled, whether to insert "Vary: Accept-Encoding" into the response headers; generally recommended on
gzip_vary on | off;

#pre-compression: serve the corresponding .gz compressed version of a file straight from disk, costing no server CPU
#note: provided by the ngx_http_gzip_static_module module
gzip_static on | off;
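
A sketch of verifying compression from a client (the host and file reuse earlier examples in this document; look for the Content-Encoding and Vary headers):

curl -I -H 'Accept-Encoding: gzip' http://www.yanlinux.org/m.html
#HTTP/1.1 200 OK
#Content-Encoding: gzip
#Vary: Accept-Encoding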

6.4 Custom log format

The default nginx access log records relatively little, and its default format is inconvenient for later log statistics and analysis, so production environments usually convert the nginx log to JSON and use it with ELK for log collection, statistics, and analysis.

[17:13:05 root@rocky ~]$ vi /apps/nginx/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  testlog '$remote_addr  [$time_local] "$request" $status "$http_user_agent" ';
    log_format  access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,' #总的处理时间
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'   #后端应用服务器处理时间
        '"http_host":"$host",'
        '"uri":"$uri",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"tcp_xff":"$proxy_protocol_addr",'
        '"http_user_agent":"$http_user_agent",'
        '"status":"$status"}';
......

[17:20:25 root@rocky ~]$ vi /apps/nginx/conf/conf.d/pc.conf
vhost_traffic_status_zone;
server {
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
    access_log /apps/nginx/logs/pc-access.log testlog;
    access_log /apps/nginx/logs/pc-access-json.log access_json; #this line uses the JSON format
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
    location = /nginx_status {
        stub_status;
    }
    location /echo {
        set $name yanli;
        echo $name;
        set $my_port $server_port;
        echo $my_port;
    }
}

[17:24:02 root@rocky ~]$ cat /apps/nginx/logs/pc-access-json.log |jq 
{
  "@timestamp": "2022-11-07T17:22:43+08:00",
  "host": "10.0.0.8",
  "clientip": "10.0.0.18",
  "size": 28,
  "responsetime": 0.000,
  "upstreamtime": "-",
  "upstreamhost": "-",
  "http_host": "www.yanlinux.org",
  "uri": "/index.html",
  "xff": "-",
  "referer": "-",
  "tcp_xff": "-",
  "http_user_agent": "curl/7.61.1",
  "status": "200"
}
{
  "@timestamp": "2022-11-07T17:22:44+08:00",
  "host": "10.0.0.8",
  "clientip": "10.0.0.18",
  "size": 28,
  "responsetime": 0.000,
  "upstreamtime": "-",
  "upstreamhost": "-",
  "http_host": "www.yanlinux.org",
  "uri": "/index.html",
  "xff": "-",
  "referer": "-",
  "tcp_xff": "-",
  "http_user_agent": "curl/7.61.1",
  "status": "200"
}
{
  "@timestamp": "2022-11-07T17:22:44+08:00",
  "host": "10.0.0.8",
  "clientip": "10.0.0.18",
  "size": 24,
  "responsetime": 0.000,
  "upstreamtime": "-",
  "upstreamhost": "-",
  "http_host": "www.yanlinux.org",
  "uri": "/echo",
  "xff": "-",
  "referer": "-",
  "tcp_xff": "-",
  "http_user_agent": "curl/7.61.1",
  "status": "200"
}

6.5 Enabling the cache

6.5.1 Load test without caching

#Prepare the backend server
[17:03:54 root@rocky8 html]$ cat /var/log/messages > m.html

#Proxy configuration
[16:49:33 root@rocky conf.d]$ vi pc.conf
server {
    listen 443 ssl;
    ssl_certificate /apps/nginx/ssl/www.yanlinux.org.crt;
    ssl_certificate_key /apps/nginx/ssl/www.yanlinux.org.key;
    ssl_session_cache shared:sslcache:20m;
    ssl_session_timeout 10m;
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
}
server {
    listen 80;
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
    access_log /apps/nginx/logs/www.yanlinux.org-access.log access_json;
    location / {                                                                                       
        proxy_pass http://10.0.0.28;
        proxy_connect_timeout 10s;
    }
}

#Run a load test from the client
[17:08:32 root@rocky8 ~]$ ab -c 100 -n 2000 http://www.yanlinux.org/m.html
Server Software:        nginx/1.22.1
Server Hostname:        www.yanlinux.org
Server Port:            80

Document Path:          /m.html
Document Length:        605907 bytes

Concurrency Level:      100
Time taken for tests:   10.414 seconds
Complete requests:      2000
Failed requests:        0
Total transferred:      1212290000 bytes
HTML transferred:       1211814000 bytes
Requests per second:    192.05 [#/sec] (mean) #requests per second
Time per request:       520.699 [ms] (mean)
Time per request:       5.207 [ms] (mean, across all concurrent requests)
Transfer rate:          113681.48 [Kbytes/sec] received

6.5.2 Prepare the cache configuration

#Define the cache by adding the line below to the http block of the main config file nginx.conf
[17:12:06 root@rocky conf.d]$ vi ../nginx.conf
proxy_cache_path /data/nginx/proxycache levels=1:2:2 keys_zone=proxycache:20m inactive=120s max_size=1g; #the proxycache directory under /data/nginx/proxycache is created automatically

#Modify the child config file
server {
    listen 443 ssl;
    ssl_certificate /apps/nginx/ssl/www.yanlinux.org.crt;
    ssl_certificate_key /apps/nginx/ssl/www.yanlinux.org.key;
    ssl_session_cache shared:sslcache:20m;
    ssl_session_timeout 10m;
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
}
server {
    listen 80;
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
    access_log /apps/nginx/logs/www.yanlinux.org-access.log access_json;
    location / {
        proxy_pass http://10.0.0.28;
        proxy_connect_timeout 10s;
        proxy_cache proxycache; #enable the cache; the name must match the cache defined above
        proxy_cache_key $request_uri; 
        proxy_cache_valid any 5m; #the response codes to cache must be specified; here every response code is cached for five minutes
    }
}
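
#Optional debugging sketch: the $upstream_cache_status variable exposes hit/miss status; adding the line below inside location / makes a repeated request show X-Cache: HIT
#    add_header X-Cache $upstream_cache_status;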

#The cache directory is generated automatically
[17:22:42 root@rocky conf.d]$ ll /data/nginx/proxycache/
total 0

6.5.3 Access and verify the cached files

[17:09:39 root@rocky8 ~]$ ab -c 100 -n 2000 http://www.yanlinux.org/m.html
Server Software:        nginx/1.22.1
Server Hostname:        www.yanlinux.org
Server Port:            80

Document Path:          /m.html
Document Length:        605907 bytes

Concurrency Level:      100
Time taken for tests:   5.465 seconds
Complete requests:      2000
Failed requests:        0
Total transferred:      1212290000 bytes
HTML transferred:       1211814000 bytes
Requests per second:    365.94 [#/sec] (mean) #throughput nearly doubled
Time per request:       273.269 [ms] (mean)
Time per request:       2.733 [ms] (mean, across all concurrent requests)
Transfer rate:          216613.56 [Kbytes/sec] received

#Verify the cache directory structure and file size
[17:23:03 root@rocky conf.d]$ tree /data/nginx/proxycache/
/data/nginx/proxycache/
└── 7
    └── fd
        └── 7e
            └── 23dcf7c2b96327ee9899fc28a847efd7

3 directories, 1 file
[17:26:02 root@rocky conf.d]$ lh /data/nginx/proxycache/7/fd/7e/23dcf7c2b96327ee9899fc28a847efd7
-rw------- 1 nginx nginx 593K Nov  9 17:24 /data/nginx/proxycache/7/fd/7e/23dcf7c2b96327ee9899fc28a847efd7

7 A script for one-command compile-and-install of any nginx version.

#One-command install script
[16:26:15 root@rocky8 ~]$ cat install_nginx.sh 
#!/bin/bash
#
#***********************************************************
#Author:            yanli
#Date:              2022-11-01
#FileName:          install_nginx.sh
#Description:        
#***********************************************************

OS_TYPE=`awk -F'[ "]' '/^NAME/{print $2}' /etc/os-release`
OS_VERSION=`awk -F'[".]' '/^VERSION_ID/{print $2}' /etc/os-release`
CPU=`lscpu |awk '/^CPU\(s\)/{print $2}'`
SRC_DIR=/usr/local/src
read -p "$(echo -e '\033[1;32m请输入下载的版本号:\033[0m')" NUM
NGINX_FILE=nginx-${NUM}
NGINX_INSTALL_DIR=/apps/nginx

color () {
    RES_COL=60
    MOVE_TO_COL="echo -en \\033[${RES_COL}G"
    SETCOLOR_SUCCESS="echo -en \\033[1;32m"
    SETCOLOR_FAILURE="echo -en \\033[1;31m"
    SETCOLOR_WARNING="echo -en \\033[1;33m"
    SETCOLOR_NORMAL="echo -en \E[0m"
    echo -n "$1" && $MOVE_TO_COL
    echo -n "["
    if [ $2 = "success" -o $2 = "0" ] ;then
        ${SETCOLOR_SUCCESS}
        echo -n $"  OK  "
    elif [ $2 = "failure" -o $2 = "1"  ] ;then
        ${SETCOLOR_FAILURE}
        echo -n $"FAILED"
    else
        ${SETCOLOR_WARNING}
        echo -n $"WARNING"
    fi
    ${SETCOLOR_NORMAL}
    echo -n "]"
    echo
}

#Download the source
wget_package(){
    [ -e ${NGINX_INSTALL_DIR} ] && { color "nginx is already installed; uninstall it before reinstalling" 1; exit; }
    cd ${SRC_DIR}
    if [ -e ${NGINX_FILE}.tar.gz ];then
        color "Source package already present" 0
    else
        color "Downloading the source package" 0
        wget http://nginx.org/download/${NGINX_FILE}.tar.gz
        [ $? -ne 0 ] && { color "Failed to download ${NGINX_FILE}.tar.gz" 1; exit; }
    fi
}

#Compile and install
install_nginx(){
    color "Starting nginx installation" 0
    if id nginx &> /dev/null;then
        color "nginx user already exists" 1
    else
        useradd -s /sbin/nologin -r nginx
        color "nginx user account created" 0
    fi

    color "Installing nginx dependencies" 0
    if [ $OS_TYPE == "CentOS" -a ${OS_VERSION} == '7' ];then
        yum -y install make gcc pcre-devel openssl-devel zlib-devel perl-ExtUtils-Embed
    elif [ $OS_TYPE == "CentOS" -a ${OS_VERSION} == '8' ];then
        yum -y install make gcc-c++ libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel perl-ExtUtils-Embed
    elif [ $OS_TYPE == "Rocky" ];then
        yum -y install make gcc libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel perl-ExtUtils-Embed
    elif [ $OS_TYPE == "Ubuntu" ];then
        apt update
        apt -y install make gcc libpcre3 libpcre3-dev openssl libssl-dev zlib1g-dev 
    else
        color 'This system is not supported!' 1
        exit
    fi

    #Start the compile and install
    color "Compiling and installing nginx" 0
    cd $SRC_DIR
    tar xf ${NGINX_FILE}.tar.gz
    cd ${SRC_DIR}/${NGINX_FILE}
    ./configure --prefix=${NGINX_INSTALL_DIR} --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module
    make -j ${CPU} && make install
    [ $? -eq 0 ] && color "nginx compiled and installed" 0 ||  { color "nginx compile/install failed, exiting!" 1 ;exit; }
    echo "PATH=${NGINX_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/nginx.sh

    #Create the service file
    cat > /lib/systemd/system/nginx.service <<EOF
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=${NGINX_INSTALL_DIR}/logs/nginx.pid
ExecStartPre=/bin/rm -f ${NGINX_INSTALL_DIR}/logs/nginx.pid
ExecStartPre=${NGINX_INSTALL_DIR}/sbin/nginx -t
ExecStart=${NGINX_INSTALL_DIR}/sbin/nginx
ExecReload=/bin/kill -s HUP \$MAINPID
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true                                                                                        
LimitNOFILE=100000

[Install]
WantedBy=multi-user.target
EOF

    #Start the service
    systemctl enable --now nginx &> /dev/null
    systemctl is-active nginx &> /dev/null ||  { color "nginx failed to start, exiting!" 1 ; exit; }
    color "nginx installation complete" 0

}

wget_package
install_nginx

#Run on rocky
[16:18:19 root@rocky8 ~]$ sh install_nginx.sh 
Enter the nginx version to download: 1.20.2
Downloading the source package                             [  OK  ]
--2022-11-01 16:18:54--  http://nginx.org/download/nginx-1.20.2.tar.gz
Resolving nginx.org (nginx.org)... 52.58.199.22, 3.125.197.172, 2a05:d014:edb:5704::6, ...
Connecting to nginx.org (nginx.org)|52.58.199.22|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://183.207.33.36:9011/nginx.org/c3pr90ntc0td/download/nginx-1.20.2.tar.gz [following]
--2022-11-01 16:18:54--  http://183.207.33.36:9011/nginx.org/c3pr90ntc0td/download/nginx-1.20.2.tar.gz
Connecting to 183.207.33.36:9011... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1062124 (1.0M) [application/octet-stream]
Saving to: ‘nginx-1.20.2.tar.gz’

nginx-1.20.2.tar.gz                                  100%[==============================================

2022-11-01 16:18:56 (684 KB/s) - ‘nginx-1.20.2.tar.gz’ saved [1062124/1062124]

Starting nginx installation                                [  OK  ]
nginx user account created                                 [  OK  ]
Installing nginx dependencies                              [  OK  ]
Last metadata expiration check: 0:58:41 ago on Tue 01 Nov 2022 03:20:16 PM CST.
Package make-1:4.2.1-11.el8.x86_64 is already installed.
Package pcre-8.42-6.el8.x86_64 is already installed.
Package zlib-1.2.11-17.el8.x86_64 is already installed.
Package openssl-1:1.1.1k-4.el8.x86_64 is already installed.
Dependencies resolved.
========================================================================================================
 Package                                                  Architecture                        Version   
========================================================================================================
Installing:
 gcc                                                      x86_64                              8.5.0-10.1
 libtool                                                  x86_64                              2.4.6-25.e
 openssl-devel                                            x86_64                              1:1.1.1k-7
 pcre-devel                                               x86_64                              8.42-6.el8
 perl-ExtUtils-Embed                                      noarch                              1.34-421.e
 zlib-devel                                               x86_64                              1.2.11-19
 ......
 Complete!
Compiling and installing nginx                             [  OK  ]
checking for OS
 + Linux 4.18.0-348.el8.0.2.x86_64 x86_64
checking for C compiler ... found
 + using GNU C compiler
 + gcc version: 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC) 
checking for gcc -pipe switch ... found
checking for -Wl,-E switch ... found
checking for gcc builtin atomic operations ... found
checking for C99 variadic macros ... found
checking for gcc variadic macros ... found
checking for gcc builtin 64 bit byteswap ... found
......
test -d '/apps/nginx/logs' \
    || mkdir -p '/apps/nginx/logs'
make[1]: Leaving directory '/usr/local/src/nginx-1.20.2'
nginx compiled and installed                               [  OK  ]
nginx installation complete                                [  OK  ]
[16:24:30 root@rocky8 ~]$ ss -ntl
State      Recv-Q      Send-Q           Local Address:Port           Peer Address:Port     Process     
LISTEN     0           128                    0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0           128                    0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0           128                       [::]:22                     [::]:*                    
[16:24:53 root@rocky8 ~]$ systemctl status nginx.service 
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-11-01 16:24:30 CST; 33s ago
  Process: 16014 ExecStart=/apps/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 16003 ExecStartPre=/apps/nginx/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 15992 ExecStartPre=/bin/rm -f /apps/nginx/logs/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 16016 (nginx)
    Tasks: 2 (limit: 11218)
   Memory: 1.9M
   CGroup: /system.slice/nginx.service
           ├─16016 nginx: master process /apps/nginx/sbin/nginx
           └─16017 nginx: worker process

Nov 01 16:24:30 rocky8.yanlinux.cn systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 01 16:24:30 rocky8.yanlinux.cn nginx[16003]: nginx: the configuration file /apps/nginx/conf/nginx.>
Nov 01 16:24:30 rocky8.yanlinux.cn nginx[16003]: nginx: configuration file /apps/nginx/conf/nginx.conf>
Nov 01 16:24:30 rocky8.yanlinux.cn systemd[1]: Started The nginx HTTP and reverse proxy server.

#Run on Ubuntu
[16:19:52 root@ubuntu2004 ~]$ sh install_nginx.sh 
Enter the nginx version to download: 1.20.2
Downloading the source package                             [  OK  ]
--2022-11-01 16:20:02--  http://nginx.org/download/nginx-1.20.2.tar.gz
Resolving nginx.org (nginx.org)... 3.125.197.172, 52.58.199.22, 2a05:d014:edb:5704::6, ...
Connecting to nginx.org (nginx.org)|3.125.197.172|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1062124 (1.0M) [application/octet-stream]
Saving to: ‘nginx-1.20.2.tar.gz’

nginx-1.20.2.tar.gz                                  100%[==============================================

2022-11-01 16:20:05 (657 KB/s) - ‘nginx-1.20.2.tar.gz’ saved [1062124/1062124]

Starting nginx installation                                [  OK  ]
nginx user account created                                 [  OK  ]
Installing nginx dependencies                              [  OK  ]
Hit:1 https://mirrors.aliyun.com/ubuntu focal InRelease
Get:2 https://mirrors.aliyun.com/ubuntu focal-security InRelease [114 kB]
Get:3 https://mirrors.aliyun.com/ubuntu focal-updates InRelease [114 kB]
Get:4 https://mirrors.aliyun.com/ubuntu focal-backports InRelease [108 kB]
......
Compiling and installing nginx                             [  OK  ]
checking for OS
 + Linux 5.4.0-125-generic x86_64
checking for C compiler ... found
 + using GNU C compiler
 + gcc version: 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1) 
checking for gcc -pipe switch ... found
checking for -Wl,-E switch ... found
......
test -d '/apps/nginx/html' \
    || cp -R html '/apps/nginx'
test -d '/apps/nginx/logs' \
    || mkdir -p '/apps/nginx/logs'
make[1]: Leaving directory '/usr/local/src/nginx-1.20.2'
nginx compiled and installed                               [  OK  ]
nginx installation complete                                [  OK  ]
[16:30:37 root@ubuntu2004 ~]$ ss -ntl
State      Recv-Q     Send-Q           Local Address:Port            Peer Address:Port     Process     
LISTEN     0          511                    0.0.0.0:80                   0.0.0.0:*                    
[16:34:03 root@ubuntu2004 ~]$ systemctl status nginx.service 
● nginx.service - The nginx HTTP and reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-11-01 16:30:37 CST; 3min 44s ago
    Process: 9589 ExecStartPre=/bin/rm -f /apps/nginx/logs/nginx.pid (code=exited, status=0/SUCCESS)
    Process: 9594 ExecStartPre=/apps/nginx/sbin/nginx -t (code=exited, status=0/SUCCESS)
    Process: 9598 ExecStart=/apps/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
   Main PID: 9600 (nginx)
      Tasks: 2 (limit: 2236)
     Memory: 2.3M
     CGroup: /system.slice/nginx.service
             ├─9600 nginx: master process /apps/nginx/sbin/nginx
             └─9601 nginx: worker process

Nov 01 16:30:37 ubuntu2004 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 01 16:30:37 ubuntu2004 nginx[9594]: nginx: the configuration file /apps/nginx/conf/nginx.conf synt>
Nov 01 16:30:37 ubuntu2004 nginx[9594]: nginx: configuration file /apps/nginx/conf/nginx.conf test is >
Nov 01 16:30:37 ubuntu2004 systemd[1]: Started The nginx HTTP and reverse proxy server.

8 Compile a third-party nginx module and use it.

8.1 Traffic monitoring with the nginx-module-vts module

#Download and unpack the module
[15:40:18 root@rocky src]$ wget https://github.com/vozlt/nginx-module-vts/archive/refs/tags/v0.2.1.tar.gz
[15:40:26 root@rocky src]$ tar xf v0.2.1.tar.gz 
[15:40:31 root@rocky src]$ ll
total 176
drwxr-xr-x 9 1001 1001    186 Nov  1 14:50 nginx-1.20.2
drwxrwxr-x 8 root root    165 Sep 17 04:42 nginx-module-vts-0.2.1

#Check the compile options of the installed nginx
[15:43:56 root@rocky nginx-1.22.1]$ nginx -V
nginx version: nginx/1.22.1
built by gcc 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC) 
built with OpenSSL 1.1.1k  FIPS 25 Mar 2021
TLS SNI support enabled
configure arguments: --prefix=/apps/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module

#Reuse the options above, adding --add-module to compile the module in
[15:45:18 root@rocky nginx-1.22.1]$ ./configure --prefix=/apps/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --add-module=/usr/local/src/nginx-module-vts-0.2.1
[15:48:54 root@rocky nginx-1.22.1]$ make -j 2 && make install

[15:49:07 root@rocky nginx-1.22.1]$ ll /apps/nginx/sbin/
total 15580
-rwxr-xr-x 1 root root 8362544 Nov  7 15:48 nginx
-rwxr-xr-x 1 root root 7587592 Nov  2 10:26 nginx.old

#Config file
[15:52:15 root@rocky ~]$ vi /apps/nginx/conf/conf.d/pc.conf
vhost_traffic_status_zone;
server {
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;                                                      
    }
    location = /nginx_status {
        stub_status;
    }
}
[15:54:37 root@rocky ~]$ systemctl restart nginx.service
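
#A sketch of querying the status page configured above; vts also serves machine-readable output under /status/format/json
curl -s http://www.yanlinux.org/status/format/json | head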

8.2 Displaying information with the echo module

#Download and unpack the module
[16:00:09 root@rocky src]$ wget https://github.com/openresty/echo-nginx-module/archive/refs/tags/v0.63.tar.gz
[16:00:20 root@rocky src]$ tar xf v0.63.tar.gz 
[16:00:31 root@rocky src]$ ll
total 232
drwxrwxr-x 5 root root    174 Aug  1 07:52 echo-nginx-module-0.63

[16:00:59 root@rocky ~]$ cd nginx-1.22.1/
[16:01:34 root@rocky nginx-1.22.1]$ ./configure --prefix=/apps/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --add-module=/usr/local/src/nginx-module-vts-0.2.1 --add-module=/usr/local/src/echo-nginx-module-0.63
[16:02:49 root@rocky nginx-1.22.1]$ make -j 2 && make install
[16:03:28 root@rocky nginx-1.22.1]$ nginx -V
nginx version: nginx/1.22.1
built by gcc 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC) 
built with OpenSSL 1.1.1k  FIPS 25 Mar 2021
TLS SNI support enabled
configure arguments: --prefix=/apps/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --add-module=/usr/local/src/nginx-module-vts-0.2.1 --add-module=/usr/local/src/echo-nginx-module-0.63

[16:07:09 root@rocky ~]$ vi /apps/nginx/conf/conf.d/pc.conf
vhost_traffic_status_zone;
server {
    server_name www.yanlinux.org;
    root /apps/nginx/html/pc;
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
    location = /nginx_status {
        stub_status;
    }
    location /echo {
        echo "hello yanli";
        echo $remote_addr; #print the remote host's ip
        echo $uri;         #print the uri
    }
}
[16:08:28 root@rocky ~]$ systemctl restart nginx.service


#Test
[15:15:40 root@rocky8 ~]$ curl www.yanlinux.org/echo
hello yanli
10.0.0.18
/echo
