cluster(3)
(1) LVS FULLNAT
Although FULLNAT mode does not perform as well as DR mode, it supports multiple VLANs; when the real servers sit in different VLANs, FULLNAT is clearly the better choice.
Basic principle of FULLNAT: suppose the client's IP is CIP. The VS (virtual server) holds two kinds of addresses, the VIP plus one or more LIPs (local IPs, which can be a pool of addresses on different subnets); the real server's IP is RIP. When a client request passes through the VS, an SNAT happens on the VS, translating CIP to LIP; on the way from the VS to the RS, a DNAT translates VIP to RIP. The reply undergoes the same two translations in reverse, so each request/response pair crosses four NATs in total, and this is what makes multi-VLAN deployments possible.
client (cip) ---> vs (vip,lip) ---> rs (rip)
             <---               <---
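The two forward-path translations can be traced with a toy sketch (all addresses here are invented for illustration):

```shell
# Follow one request through the VS; only the address fields are modeled.
CIP=10.1.1.1; VIP=172.25.78.100; LIP=192.168.10.1; RIP=192.168.20.10
pkt="src=$CIP dst=$VIP"; echo "client -> vs : $pkt"
pkt="src=$LIP dst=$VIP"; echo "after SNAT   : $pkt"   # SNAT on the vs: cip -> lip
pkt="src=$LIP dst=$RIP"; echo "vs -> rs     : $pkt"   # DNAT on the vs: vip -> rip
# The reply is translated back the same way, giving four NATs per
# request/response pair.
```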
With only a single VS, however, that VS becomes overloaded. Adding keepalived only improves availability; it does not solve the throughput problem, so an LVS cluster is usually built instead. Keepalived performs health checks on the real servers, and a router or switch running OSPF sits in front of the LVS cluster. When a user request reaches the router (or switch), a hash over the source address/port and destination address/port assigns the connection to one LVS in the cluster; that LVS forwards the request to the back end over the internal network, the back end returns the data to the user, and the session completes. The session tables should be synchronized between the LVS nodes periodically, so that the failure of one node does not cause session loss.
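The router's per-connection hashing can be mimicked with a toy sketch (the node names and the `cksum`-based hash are invented for illustration; a real router uses its own ECMP hash):

```shell
# Hashing the 4-tuple keeps every packet of one flow on the same LVS node.
lvs_nodes=(lvs1 lvs2 lvs3)
pick_node() {   # args: client-ip client-port vip vport
    local h
    h=$(printf '%s:%s:%s:%s' "$1" "$2" "$3" "$4" | cksum | cut -d' ' -f1)
    echo "${lvs_nodes[h % ${#lvs_nodes[@]}]}"
}
# The same 4-tuple always lands on the same LVS node:
pick_node 10.1.1.1 40000 172.25.78.100 80
pick_node 10.1.1.1 40000 172.25.78.100 80
```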
FULLNAT mode requires recompiling the kernel; the steps are as follows:
I. Compile the kernel:
1 Download the kernel source RPM from the official site and rebuild it into binary packages with rpmbuild:
kernel-2.6.32-220.23.1.el6.src.rpm
Lvs-fullnat-synproxy.tar.gz
URL: kb.linuxvirtualserver.org
1 rpm -ivh kernel-2.6.32-220.23.1.el6.src.rpm ### install the source RPM ###
2 cd rpmbuild/ ### after the install, an rpmbuild/ directory appears under the current directory ###
3 yum install -y rpm-build
4 cd /root/rpmbuild/SPECS/
5 rpmbuild -bp kernel.spec ### unpack the source and apply the patches ###
**************************************************************************
6 yum install redhat-rpm-config patchutils xmlto asciidoc binutils-devel newt-devel python-devel perl-ExtUtils-Embed hmaccalc ### build dependencies ###
**************************************************************************
7 yum install asciidoc-8.4.5-4.1.el6.noarch.rpm newt-devel-0.52.11-3.el6.x86_64.rpm slang-devel-2.2.1-1.el6.x86_64.rpm
cd rpmbuild/SPECS/
**************************************************************************
8 rpmbuild -bp kernel.spec ### run the unpack/patch step again once the dependencies are resolved ###
9 tar zxf Lvs-fullnat-synproxy.tar.gz
10 cd lvs-fullnat-synproxy/
11 cp lvs-2.6.32-220.23.1.el6.patch ~/rpmbuild/BUILD/kernel-2.6.32-220.23.1.el6/linux-2.6.32-220.23.1.el6.x86_64/
12 cd ~/rpmbuild/BUILD/kernel-2.6.32-220.23.1.el6/linux-2.6.32-220.23.1.el6.x86_64/
13 patch -p1 < lvs-2.6.32-220.23.1.el6.patch ### apply the FULLNAT patch ###
14 vim Makefile
Content:
EXTRAVERSION = -220.23.1.el6 ### the extra version string of the kernel you are compiling ###
15 make
16 make modules_install ### install the kernel modules ###
17 make install ### installs the boot files; /boot now contains the compiled kernel image vmlinuz-2.6.32-220.23.1.el6 ###
18 vim /boot/grub/grub.conf ### the boot configuration must be edited: the newly compiled kernel entry is added above the original one, so default must be set to 0 for the machine to boot into the new kernel ###
Content:
default=0 ### set to 0 so the system boots into the compiled kernel ###
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-220.23.1.el6) ### the kernel you compiled ###
root (hd0,0)
kernel /vmlinuz-2.6.32-220.23.1.el6 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-220.23.1.el6.img
title Red Hat Enterprise Linux (2.6.32-431.el6.x86_64) ### the original system kernel ###
root (hd0,0)
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-431.el6.x86_64.img
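After rebooting into the new entry, the kernel should identify itself with the EXTRAVERSION set in the Makefile. A quick sketch of how the release string is assembled (the VERSION/PATCHLEVEL/SUBLEVEL values are those of this 2.6.32 tree):

```shell
# The kernel release string is VERSION.PATCHLEVEL.SUBLEVEL followed by
# EXTRAVERSION, all taken from the top-level Makefile.
VERSION=2; PATCHLEVEL=6; SUBLEVEL=32
EXTRAVERSION=-220.23.1.el6
EXPECTED="${VERSION}.${PATCHLEVEL}.${SUBLEVEL}${EXTRAVERSION}"
echo "$EXPECTED"    # 2.6.32-220.23.1.el6
# After reboot, `uname -r` should print exactly this string.
```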
II. Real-server kernel
The steps are the same as the kernel build above, except that the patch applied is the TOA patch. TOA lets the real server obtain the client's real IP address; if you do not need that, you can skip this build.
III. Compile keepalived
1 cd lvs-fullnat-synproxy/
2 tar zxf lvs-tools.tar.gz
3 cd tools/keepalived/
4 ./configure --with-kernel-dir="/lib/modules/`uname -r`/build"
5 make
6 make install
### Compile ipvsadm ###
1 cd lvs-fullnat-synproxy/tools/ipvsadm/
2 make
3 make install
4 ipvsadm --help ### the list of LVS forwarding modes now includes fullnat ###
5 ipvsadm -l ### the connection hash table size is now 4194304 (2^22) instead of the default 4096; with the small table, packets may be dropped under high throughput ###
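With the patched tools in place, a FULLNAT virtual service can be defined roughly as follows. This is a sketch only: the VIP/LIP/RIP addresses are invented, and the `-b` (fullnat forwarding) and `-P`/`-z` (local-address) flags are the ones added by the FULLNAT patch, so verify them against `ipvsadm --help` on your build:

```shell
ipvsadm -A -t 172.25.78.100:80 -s rr                   # virtual service on the VIP
ipvsadm -a -t 172.25.78.100:80 -r 192.168.20.10:80 -b  # real server, fullnat forwarding
ipvsadm -P -t 172.25.78.100:80 -z 192.168.10.1         # add a LIP used for the SNAT step
```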
(2) Keepalived + Nginx
Perform the same steps on both nodes:
1 vim /usr/local/lnmp/nginx/conf/nginx.conf
Content (added inside the existing http {} block):
upstream westos {
    server 172.25.78.3:80;
    server 172.25.78.4:80;
}
server {
    listen 80;
    server_name www.westos.org;
    location / {
        proxy_pass http://westos;
    }
}
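With no weights configured, nginx cycles through the two upstream servers in turn. A toy sketch of that default round-robin (illustration only; nginx does this internally):

```shell
# Requests alternate between the two back ends defined in the upstream block.
servers=("172.25.78.3:80" "172.25.78.4:80")
for i in 0 1 2 3; do
    echo "request $i -> ${servers[i % 2]}"
done
```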
2 vim /opt/nginx_check.sh (then make it executable: chmod +x /opt/nginx_check.sh)
#!/bin/bash
# If the local page is unreachable, try to start nginx; if nginx cannot be
# started either, stop keepalived so the VIP fails over to the other node.
curl http://127.0.0.1/index.html -o /dev/null -s || nginx
if [ $? -ne 0 ]; then
    /etc/init.d/keepalived stop &> /dev/null
fi
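The control flow of the check script can be traced with stubbed-out commands (the stubs below replace curl, nginx, and the keepalived init script so nothing real is touched):

```shell
check_page()      { return 1; }  # stub for: curl http://127.0.0.1/index.html -o /dev/null -s
start_nginx()     { return 1; }  # stub for: nginx (pretend it fails to start)
stop_keepalived() { echo "keepalived stopped"; }  # stub for: /etc/init.d/keepalived stop

check_page || start_nginx        # page down -> try to bring nginx back
if [ $? -ne 0 ]; then            # nginx could not be started either
    stop_keepalived              # release the VIP so the backup node takes over
fi
```

With both stubs failing, the sketch prints "keepalived stopped", which is exactly the failover path exercised in test 3 below.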
3 vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
vrrp_script nginx_check {
    script "/opt/nginx_check.sh"
    interval 2
}
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@server1 ### keepalived@server2 on the other node ###
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER ### BACKUP on the other node ###
interface eth0
virtual_router_id 51
priority 100 ### the other node must use a number lower than 100 ###
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.78.100
}
track_script {
nginx_check
}
}
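For reference, only these lines differ in /etc/keepalived/keepalived.conf on the second node (the priority value 50 here is an assumption; any value below 100 works):

notification_email_from keepalived@server2
state BACKUP
priority 50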
4 /etc/init.d/keepalived start
5 nginx
Test:
1 Round-robin access through the VIP (www.westos.org resolves to 172.25.78.100):
[root@foundation78 Desktop]# curl www.westos.org
<h1>server3-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server3-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server4-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server3-www.westos.org</h1>
2 When keepalived on server1 is stopped, server2 takes over the resources:
[root@server2 conf]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:ac:6c:6d brd ff:ff:ff:ff:ff:ff
inet 172.25.78.2/24 brd 172.25.78.255 scope global eth0
inet 172.25.78.100/32 scope global eth0
inet6 fe80::5054:ff:feac:6c6d/64 scope link
valid_lft forever preferred_lft forever
[root@foundation78 Desktop]# curl www.westos.org
<h1>server4-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server3-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server3-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server4-www.westos.org</h1>
3 When nginx on server1 is stopped:
[root@server1 conf]# nginx -s stop
[root@foundation78 Desktop]# curl www.westos.org
<h1>server3-www.westos.org</h1>
[root@foundation78 Desktop]# curl www.westos.org
<h1>server4-www.westos.org</h1>