NGINX Load Balancing
1. Preparing the Environment
1.1 System environment
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
1.2 Prepare five virtual machines:
lb01 192.168.56.32 lb02 192.168.56.33
web01 192.168.56.34 web02 192.168.56.35 web03 192.168.56.36
Disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service
1.3 Install nginx
#!/bin/bash
yum install -y epel-release
yum install -y pcre pcre-devel openssl openssl-devel
mkdir -p /home/dilusense/tools
cd /home/dilusense/tools
yum install -y wget
wget http://nginx.org/download/nginx-1.10.2.tar.gz
tar xf nginx-1.10.2.tar.gz
cd nginx-1.10.2/
useradd -s /sbin/nologin -M www
./configure --prefix=/usr/local/nginx-1.10.2 --user=www --group=www \
    --with-http_stub_status_module --with-http_ssl_module
make && make install
ln -s /usr/local/nginx-1.10.2 /usr/local/nginx
echo 'export PATH=$PATH:/usr/local/nginx/sbin/' >>/etc/profile
. /etc/profile
/usr/local/nginx/sbin/nginx
1.4 nginx on web01, web02 and web03
Configuration file:
[root@web01 ~]# cat /usr/local/nginx/conf/nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    server {
        listen       80;
        server_name  bbs.openstack.org;
        location / {
            root   html/bbs;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
    server {
        listen       80;
        server_name  blog.openstack.org;
        location / {
            root   html/blog;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
    }
}
1.5 nginx on lb01 and lb02
Configuration file:
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    upstream server_pools {
        server 192.168.56.34 weight=2;
        server 192.168.56.35 weight=1;
        server 192.168.56.36 weight=1;
    }
    server {
        listen       80;
        server_name  bbs.openstack.org;
        location / {
            proxy_pass http://server_pools;
        }
    }
}
Check the syntax, reload the configuration, and test:
[root@lb01 ~]# nginx -t
nginx: the configuration file /usr/local/nginx-1.10.2/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx-1.10.2/conf/nginx.conf test is successful
[root@lb01 ~]# nginx -s reload
[root@lb01 conf]# for i in {1..30}; do curl 192.168.56.32; sleep 1; done
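With weight=2/1/1 in the pool above, roughly half of the 30 test requests should land on web01. nginx's default scheduling is smooth weighted round-robin; the idea can be sketched in a few lines of Python (a simulation for illustration only, not nginx's actual implementation):

```python
def smooth_wrr(servers, n):
    """Pick n backends using smooth weighted round-robin.

    servers: list of (address, weight) tuples.
    Each round every server's running score grows by its weight;
    the highest score wins and is then reduced by the total weight,
    which spreads picks evenly instead of bursting them.
    """
    current = {addr: 0 for addr, _ in servers}
    total = sum(w for _, w in servers)
    picks = []
    for _ in range(n):
        for addr, weight in servers:
            current[addr] += weight
        winner = max(servers, key=lambda s: current[s[0]])[0]
        current[winner] -= total
        picks.append(winner)
    return picks

pool = [("192.168.56.34", 2), ("192.168.56.35", 1), ("192.168.56.36", 1)]
print(smooth_wrr(pool, 4))  # web01 (weight 2) is picked twice out of every 4 requests
```

Running the curl loop against lb01 should show the same 2:1:1 pattern, provided each web server returns a distinguishable page.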
2. Nginx Load Balancing Core Components
2.1 The Nginx upstream module
Nginx's load balancing functionality relies on the ngx_http_upstream_module module. The proxying directives that can target an upstream include proxy_pass, fastcgi_pass, memcached_pass, and others.
The ngx_http_upstream_module module lets Nginx define one or more groups of backend (node) servers. Requests for a site can then be forwarded via proxy_pass to a previously defined upstream group by name, written as "proxy_pass http://www_server_pools", where www_server_pools is the name of an upstream server group.
Official documentation for the ngx_http_upstream_module module:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html
2.2 upstream module syntax
http {
    upstream server_pools {
        server 192.168.56.34 weight=2 max_fails=1 fail_timeout=10s;  #==> retry a failed server after 10s
        server 192.168.56.35 weight=1;
        server 192.168.56.36 weight=1 backup;
        server 192.168.56.37 weight=1 down;
    }
    server {
        listen       80;
        server_name  bbs.openstack.org;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;                    #==> pass the original Host header to the backend
            proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
        }
    }
    server {
        listen       80;
        server_name  blog.openstack.org;
        location / {
            proxy_pass http://server_pools;
            proxy_set_header Host $host;                    #==> pass the original Host header to the backend
            proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
        }
    }
}
#==> Notes:
# backup marks a hot-standby server (high availability for the RS nodes): it is only brought
# into service automatically after all of the active RS have failed; requests are forwarded
# to it only once every primary server is down. Note that when the scheduling algorithm is
# ip_hash, backend servers may not use the weight and backup states.
# down can be used together with ip_hash: mark a server down during a code release so that
# it receives no user requests while it is being updated.
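The backup and down states described above reduce to one selection rule: down servers are never eligible, and backup servers become eligible only after every primary has failed. A small Python sketch of that rule (the server dicts and field names here are invented for illustration):

```python
def eligible_servers(servers):
    """Return the servers that requests may currently go to.

    servers: list of dicts with keys addr, alive, backup, down
    (field names made up for this sketch).
    """
    usable = [s for s in servers if s["alive"] and not s["down"]]
    primaries = [s for s in usable if not s["backup"]]
    # backup servers only take traffic after all primaries have failed
    return primaries if primaries else [s for s in usable if s["backup"]]

pool = [
    {"addr": "192.168.56.34", "alive": True, "backup": False, "down": False},
    {"addr": "192.168.56.35", "alive": True, "backup": False, "down": False},
    {"addr": "192.168.56.36", "alive": True, "backup": True,  "down": False},
    {"addr": "192.168.56.37", "alive": True, "backup": False, "down": True},
]
print([s["addr"] for s in eligible_servers(pool)])  # only the two primaries
```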
3. Serving Different Web Server Functions with upstream
Multiple matching rules in one virtual host:
http {
    upstream upload_servers {
        server 192.168.56.35:80;
    }
    upstream static_servers {
        server 192.168.56.36:80;
    }
    upstream default_servers {
        server 192.168.56.37:80;
    }
    server {
        listen       80;
        server_name  bbs.openstack.org;
        location /upload {
            proxy_pass http://upload_servers;
            proxy_set_header Host $host;                    #==> pass the original Host header to the backend
            proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
        }
        location /static {
            proxy_pass http://static_servers;
            proxy_set_header Host $host;                    #==> pass the original Host header to the backend
            proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
        }
        location / {
            proxy_pass http://default_servers;
            proxy_set_header Host $host;                    #==> pass the original Host header to the backend
            proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
        }
    }
}
Content of nginx.conf on load balancer lb01:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
upstream upload_servers {
server 192.168.56.35:80;
}
upstream static_servers {
server 192.168.56.36:80;
}
upstream default_servers {
server 192.168.56.37:80;
}
server {
listen 80;
server_name bbs.openstack.org;
location /upload {
proxy_pass http://upload_servers;
proxy_set_header Host $host;                    #==> pass the original Host header to the backend
proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
}
location /static {
proxy_pass http://static_servers;
proxy_set_header Host $host;                    #==> pass the original Host header to the backend
proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
}
location / {
proxy_pass http://default_servers;
proxy_set_header Host $host;                    #==> pass the original Host header to the backend
proxy_set_header X-Forwarded-For $remote_addr;  #==> record the client's real IP
}
}
}
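The three location blocks above route by URI prefix, and for plain prefix locations nginx picks the longest matching prefix. That matching rule can be sketched in Python (a simplification that ignores regex, exact-match, and `^~` locations):

```python
def match_location(uri, locations):
    """Longest-prefix match, as nginx applies to plain prefix locations.

    locations: list of (prefix, upstream_pool) pairs.
    """
    best = None
    for prefix, pool in locations:
        if uri.startswith(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, pool)
    return best[1] if best else None

locations = [
    ("/upload", "upload_servers"),
    ("/static", "static_servers"),
    ("/",       "default_servers"),
]
print(match_location("/upload/index.html", locations))  # upload_servers
```

Because "/" matches every URI but is the shortest prefix, it only wins when neither /upload nor /static matches, which is exactly the fall-through behavior the config relies on.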
4. Sending Different Browsers to Different Backends
4.1 Forwarding by client device (user_agent) in practice
In production, to give users on different client devices a better experience, different backend servers are deployed for different clients. For example, mobile visitors are better served by dedicated mobile servers and applications, and mobile further splits into iPhone, Android, iPad, and so on. Traditionally this can be solved in the following ways:
1) Conventional layer-4 load balancing architecture
Users must remember two domain names; a layer-4 balancer cannot inspect the http_user_agent content, so the backend is selected by domain name instead.
Under a conventional layer-4 load balancing architecture, different domain names satisfy the requirement: for example, mobile clients are pointed at m.xionghaizei.com and PC clients at www.xionghaizei.com, and the domain name itself steers users to the intended backend servers.
2) Forwarding requests by client device (user_agent) in practice
Users only need to remember a single domain name; the proxy inspects user_agent and forwards accordingly.
For testing convenience, separate domains (sites) are not used here; the static_servers and upload_servers pools serve as the backend pools for this experiment.
Below, matching rules are set according to the PC client's browser:
location / {
    if ($http_user_agent ~* "MSIE") {
        proxy_pass http://static_servers;   #==> IE browsers (MSIE) are handled by the static_servers pool
    }
    if ($http_user_agent ~* "Chrome") {
        proxy_pass http://upload_servers;   #==> Google Chrome is handled by the upload_servers pool
    }
    proxy_pass http://default_servers;      #==> all other clients are handled by the default_servers pool
}
Complete configuration file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
upstream upload_servers {
server 192.168.56.35:80;
}
upstream static_servers {
server 192.168.56.36:80;
}
upstream default_servers {
server 192.168.56.37:80;
}
server {
listen 80;
server_name bbs.openstack.org;
location / {
if ($http_user_agent ~* "MSIE")
{
proxy_pass http://static_servers; #==> IE browsers (MSIE) are handled by the static_servers pool
}
if ($http_user_agent ~* "Chrome")
{
proxy_pass http://upload_servers; #==> Google Chrome is handled by the upload_servers pool
}
proxy_pass http://default_servers; #==> all other clients are handled by the default_servers pool
}
}
}
Access with curl:
curl -A chrome bbs.openstack.org/upload/index.html
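The routing decision made by the location block can be mirrored in Python to reason about which pool a given User-Agent lands in. `~*` is a case-insensitive regex match, which is why `curl -A chrome` matches "Chrome":

```python
import re

def pick_pool(user_agent):
    """Mirror the if ($http_user_agent ~* ...) checks from the config above."""
    if re.search("MSIE", user_agent, re.IGNORECASE):
        return "static_servers"   # IE goes to the static pool
    if re.search("Chrome", user_agent, re.IGNORECASE):
        return "upload_servers"   # Chrome goes to the upload pool
    return "default_servers"      # everything else

print(pick_pool("chrome"))        # upload_servers, like `curl -A chrome`
print(pick_pool("curl/7.29.0"))   # default_servers
```

Note the check order matters: a User-Agent containing both "MSIE" and "Chrome" would hit the first matching `if` and go to static_servers.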
5. High Availability with keepalived
Install and start
Download the Aliyun repo files (https://opsx.alibaba.com/mirror):
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache    #==> build the yum cache
Install keepalived and start it:
yum install -y keepalived
systemctl start keepalived.service
Note: the firewall and SELinux must be disabled.
Configuration file:
[root@lb01 keepalived]# cat -n /etc/keepalived/keepalived.conf
     1  global_defs {
     2     router_id LVS_01
     3  }
     4
     5  vrrp_instance VI_1 {
     6      state MASTER
     7      interface eth0
     8      virtual_router_id 51
     9      priority 150
    10      advert_int 1
    11      authentication {
    12          auth_type PASS
    13          auth_pass 1111
    14      }
    15      virtual_ipaddress {
    16          192.168.0.3/24 dev eth0 label eth0:1
    17      }
    18  }
Parameter notes:
Line 5 defines a vrrp_instance named VI_1. Each vrrp_instance can be thought of as one instance of the keepalived service, or one business service, and a keepalived configuration may contain several. Note that every vrrp_instance present on the master node must also exist on the backup node, otherwise failover cannot be managed.
Line 6, state MASTER, is the role of instance VI_1 on this node. Only MASTER and BACKUP are allowed, and they must be written in upper case. MASTER is the active working state and BACKUP the standby state; when the server holding MASTER fails, the server holding BACKUP takes over and continues providing service.
Line 7, interface, is the network interface used for communication and for serving traffic, e.g. eth0 or eth1.
Line 8, virtual_router_id, identifies the virtual router. It should be a number and must be unique within one keepalived.conf, but the same instance on MASTER and BACKUP must use an identical virtual_router_id, otherwise split-brain problems occur.
Line 9 is the priority; a larger value means higher priority. The MASTER is usually set about 50 higher than the BACKUP.
Line 10, advert_int, is the advertisement interval: the time in seconds between MASTER/BACKUP health-check messages; the default is 1.
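The MASTER/BACKUP election the notes describe boils down to: among live nodes of one virtual_router_id, the highest priority wins, and per the VRRP specification a tie goes to the higher primary IP address. A Python sketch of that decision (the node structure is invented for illustration):

```python
def elect_master(nodes):
    """nodes: list of dicts with name, priority, ip, alive (invented fields)."""
    live = [n for n in nodes if n["alive"]]
    if not live:
        return None
    # highest priority wins; on a tie, the higher IP address wins
    best = max(live, key=lambda n: (n["priority"],
                                    tuple(int(x) for x in n["ip"].split("."))))
    return best["name"]

cluster = [
    {"name": "lb01", "priority": 150, "ip": "192.168.56.32", "alive": True},
    {"name": "lb02", "priority": 100, "ip": "192.168.56.33", "alive": True},
]
print(elect_master(cluster))  # lb01 holds the VIP while it is up
```

This is why lb01 (priority 150) owns the VIP in normal operation and lb02 (priority 100) takes it over only when lb01 stops advertising.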
How users reach the VIP:
① Resolve the domain name to the VIP  ② Make nginx listen on the VIP
By default nginx cannot listen on an IP address that does not exist locally (the VIP may currently be held by the peer balancer). Solution:
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p    #==> apply the configuration file
ll /proc/sys/net/ipv4/ip_nonlocal_bind
-rw-r--r-- 1 root root 0 Apr 20 05:57 /proc/sys/net/ipv4/ip_nonlocal_bind
Monitoring the nginx service from keepalived.conf with a check script
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_01
}
vrrp_script chk_nginx_proxy {
    script "/scriptspath/chk_nginx_proxy.sh"   #<== the script stops keepalived when nginx is down; it must be executable
    interval 2                                 #<== run every 2 seconds
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.3/24 dev eth0 label eth0:1
    }
    track_script {
        chk_nginx_proxy
    }
}
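The config references /scriptspath/chk_nginx_proxy.sh, but the script itself is not shown here. Its typical logic is: count nginx processes, and if there are none, stop keepalived so the VIP fails over to the peer. The decision can be sketched as follows; this is a hypothetical illustration, not the author's actual script:

```python
def keepalived_action(nginx_process_count):
    """Decide what the check script should do.

    If nginx has no processes left, keepalived must be stopped on this node
    so that VRRP lets the peer take over the VIP.
    """
    if nginx_process_count == 0:
        return "systemctl stop keepalived"   # trigger failover to the peer
    return None                              # nginx is healthy, do nothing

print(keepalived_action(0))   # systemctl stop keepalived
print(keepalived_action(2))   # None
```

In a real shell script the process count would typically come from something like `ps -C nginx --no-header | wc -l`, and the script must have execute permission or keepalived cannot run it.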
Multi-instance configuration file on lb01
[root@lb01 keepalived]# cat keepalived.conf
global_defs {
   router_id LVS_01
   vrrp_mcast_group4 224.0.0.19   #<== custom multicast address; avoids VRRP conflicts between multiple keepalived pairs on one network (the default virtual_router_id multicast group is 224.0.0.18)
}
vrrp_script chk_nginx_proxy {
    script "/scriptspath/chk_nginx_proxy.sh"   #<== the script stops keepalived when nginx is down
    interval 2                                 #<== run every 2 seconds
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.3/24 dev eth0 label eth0:1
    }
    track_script {
        chk_nginx_proxy
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.4/24 dev eth0 label eth0:2
    }
    track_script {
        chk_nginx_proxy
    }
}
- Please credit the source when reposting: NGINX Load Balancing and High Availability
- Permanent link to this article: https://www.xionghaier.cn/archives/70.html