Greenplum 6.7 Installation Manual


1 Environment Preparation
1.1 Hardware Configuration
Master:
10.0.61.1
16 cores, 64 GB RAM, 100 GB system disk, 1000 GB data disk, CentOS 7.6 64-bit
Segment:
10.0.61.2
16 cores, 64 GB RAM, 100 GB system disk, 1000 GB data disk, CentOS 7.6 64-bit
Segment:
10.0.61.3
16 cores, 64 GB RAM, 100 GB system disk, 1000 GB data disk, CentOS 7.6 64-bit
GP version:
greenplum-db-6.7.0-rhel7-x86_64.rpm


1.2 Disable the Firewall
When setting up the machines, make sure they can all reach each other over the network and that the firewall is disabled on every host, to avoid connectivity problems.
Check firewall status:

sudo systemctl status firewalld

Disable the firewall at boot:

sudo systemctl disable firewalld
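
Note that disable only keeps firewalld from starting at boot; if it is currently running, stop it too:

sudo systemctl stop firewalld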

1.3 Disable SELinux
Edit /etc/selinux/config to disable SELinux; after rebooting, use sestatus to verify that it is off.

sudo vim /etc/selinux/config

SELINUX=disabled
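
After the reboot (section 1.14), sestatus should report:

sestatus
SELinux status:                 disabled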

1.4 Disable IPC Removal
Uncomment the RemoveIPC=no line in logind.conf so that systemd does not remove the shared-memory IPC objects Greenplum depends on:

sudo vim /etc/systemd/logind.conf

RemoveIPC=no
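
For the change to take effect without waiting for the reboot in 1.14, restart the login service:

sudo systemctl restart systemd-logind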

1.5 Disable THP
Transparent Huge Pages (THP) was introduced in CentOS 6, and from CentOS 7 onward the feature is enabled by default.
Although THP is intended to improve memory performance, several database vendors (e.g., Oracle, MariaDB, MongoDB) recommend disabling it outright, because it can actually degrade performance.

First, check whether THP is enabled:

[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

The bracketed [always] shows that THP is currently enabled for both settings.

We could disable THP by writing to those two files one at a time, but to make the change permanent across reboots, follow the steps below.

Edit the rc.local file:

[root@localhost ~]# vim /etc/rc.d/rc.local

Add the following:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Save and exit, then make rc.local executable:

[root@localhost ~]# chmod +x /etc/rc.d/rc.local

After a reboot, checking again should show THP disabled:

[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]

1.6 Configure hosts
When configuring /etc/hosts, the convention is to name the Master machine mdw and the Segment machines sdw1, sdw2, and so on. Once configured, use ping to confirm every hostname is reachable (example below).

10.0.61.1    mdw
10.0.61.2    sdw1
10.0.61.3    sdw2
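
For example, from each host:

ping -c 3 mdw
ping -c 3 sdw1
ping -c 3 sdw2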

Copy the hosts file to the other two nodes (this prompts for the receiving servers' root password):

scp /etc/hosts sdw1:/etc
scp /etc/hosts sdw2:/etc

Set the hostname on each host to its matching name (mdw on the master, sdw1/sdw2 on the segments):

sudo hostnamectl set-hostname mdw

On CentOS 7, hostnamectl persists the hostname across reboots; setting HOSTNAME in /etc/sysconfig/network is a CentOS 6 convention and is no longer sufficient on its own.

1.7 Kernel Parameters
Since GP 5.0 the official documentation provides formulas for sizing some of these parameters to the machine. Here we start from the officially recommended values (a sizing sketch follows the parameter list).

sudo vim /etc/sysctl.conf

kernel.shmmax = 5000000000
kernel.shmmni = 4096
kernel.shmall = 400000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
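
The Greenplum documentation derives kernel.shmall and kernel.shmmax from physical memory (shmall = half the physical pages; shmmax = shmall × page size). Values suited to a given host can be computed with:

echo "kernel.shmall = $(expr $(getconf _PHYS_PAGES) / 2)"
echo "kernel.shmmax = $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))"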



After saving, reload the parameters:

sudo sysctl -p

1.8 System Resource Limits
sudo vim /etc/security/limits.conf

Add the following parameters:

* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072

sudo vim /etc/security/limits.d/20-nproc.conf

Change it to:

* soft nproc 131072
root soft nproc unlimited
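
Log out and back in, then verify that the limits took effect:

ulimit -n    # expect 524288
ulimit -u    # expect 131072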

1.9 Disk I/O Settings
Use lsblk to inspect the disks and their mount points:

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─centos-root 253:0 0 91G 0 lvm /
└─centos-swap 253:1 0 8G 0 lvm [SWAP]
sdb 8:16 0 1000G 0 disk /data

Set a larger read-ahead for the disks at boot. Note that sudo does not apply across a shell redirection, so use tee:

sudo chmod +x /etc/rc.d/rc.local
echo '/sbin/blockdev --setra 16384 /dev/sda' | sudo tee -a /etc/rc.local
echo '/sbin/blockdev --setra 16384 /dev/sdb' | sudo tee -a /etc/rc.local
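
To apply the setting immediately and check it (blockdev also has a --getra flag for reading the current value):

sudo /sbin/blockdev --setra 16384 /dev/sdb
sudo /sbin/blockdev --getra /dev/sdb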

Set the disk I/O scheduler to deadline:

sudo grubby --update-kernel=ALL --args="elevator=deadline"
grubby --info=ALL

1.10 SSH Connection Limits
Raise the MaxStartups and MaxSessions parameters in sshd_config (the file is /etc/ssh/sshd_config, not /etc/sshd_config):

sudo vim /etc/ssh/sshd_config

MaxStartups 200
MaxSessions 200

Restart sshd for the parameters to take effect:

service sshd restart

1.11 Synchronize Cluster Clocks (NTP)
First set the master server's clock to the correct time, then edit /etc/ntp.conf on the other nodes so that they follow the master.

sudo vi /etc/ntp.conf

server mdw prefer    # prefer the master node
server smdw          # then the standby node; with no standby, point this at the data center's time server
service ntpd restart # restart the NTP service after editing

If the ntpd service is missing:
1. Install it:

yum install ntp -y

2. Start and enable it:

systemctl start ntpd; systemctl enable ntpd
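
Peer synchronization status can then be checked with:

ntpq -p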

1.12 Check the Locale
echo $LANG
en_US.UTF-8
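
If a host does not report a UTF-8 locale, it can be set with, for example:

sudo localectl set-locale LANG=en_US.UTF-8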

1.13 Create the gpadmin User
Create the gpadmin user on every node to manage and run the GP cluster, and grant it sudo privileges.
Alternatively, create it on the master node only; once GP is installed on the master, use gpssh to create it on the other nodes in one batch (sketch below).

sudo groupadd gpadmin
sudo useradd gpadmin -r -m -g gpadmin

Set gpadmin's password to gpadmin:

sudo passwd gpadmin
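
If you defer the segment-side creation until gpssh is available (after section 2.4), a sketch of the batch version, run as root from the master (passwd --stdin is assumed present, as it is on CentOS):

gpssh -f /usr/local/greenplum-db/seg_host -e 'groupadd gpadmin; useradd gpadmin -r -m -g gpadmin'
gpssh -f /usr/local/greenplum-db/seg_host -e 'echo gpadmin | passwd --stdin gpadmin'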

1.14 Reboot
sudo reboot

2 Installation
2.1 Install the Package
Install with yum or rpm; by default it installs under /usr/local/:

sudo yum install -y ./greenplum-db-6.7.0-rhel7-x86_64.rpm

On success, yum reports:

Dependency Installed:
apr.x86_64 0:1.4.8-5.el7 apr-util.x86_64 0:1.5.2-6.el7
bzip2.x86_64 0:1.0.6-13.el7 keyutils-libs-devel.x86_64 0:1.5.8-3.el7
krb5-devel.x86_64 0:1.15.1-46.el7 libcom_err-devel.x86_64 0:1.42.9-17.el7
libevent.x86_64 0:2.0.21-4.el7 libkadm5.x86_64 0:1.15.1-46.el7
libselinux-devel.x86_64 0:2.5-15.el7 libsepol-devel.x86_64 0:2.5-10.el7
libverto-devel.x86_64 0:0.2.5-4.el7 pcre-devel.x86_64 0:8.32-17.el7
rsync.x86_64 0:3.1.2-10.el7 zip.x86_64 0:3.0-11.el7

Dependency Updated:
e2fsprogs.x86_64 0:1.42.9-17.el7 e2fsprogs-libs.x86_64 0:1.42.9-17.el7
krb5-libs.x86_64 0:1.15.1-46.el7 libcom_err.x86_64 0:1.42.9-17.el7
libselinux.x86_64 0:2.5-15.el7 libselinux-python.x86_64 0:2.5-15.el7
libselinux-utils.x86_64 0:2.5-15.el7 libss.x86_64 0:1.42.9-17.el7

Complete!


2.2 Create hostfile_exkeys
all_host: all cluster hostnames or IPs, including the master, segments, standby, etc.
seg_host: all segment hostnames or IPs.

sudo vim /usr/local/greenplum-db/all_host

mdw
sdw1
sdw2

sudo vim /usr/local/greenplum-db/seg_host

sdw1
sdw2

sudo chown -R gpadmin:gpadmin /usr/local/greenplum*

2.3 Set Up Passwordless Login
Generate a key pair on the mdw node:

ssh-keygen

Copy the public key into each node's authorized_keys file and verify that ssh then connects without a password.
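
ssh-copy-id does the copying, for example:

ssh-copy-id sdw1
ssh-copy-id sdw2

Then the connection test: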

ssh sdw1

2.4 Use gpssh-exkeys to Establish n-to-n Passwordless Login
From the /usr/local/greenplum-db directory:

source ./greenplum_path.sh
gpssh-exkeys -f all_host

Sample output:

[STEP 1 of 5] create local ID and authorize on local host
... /home/longlele/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] retrieving credentials from remote hosts
... send to sdw1
... send to sdw2

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
... finished key exchange with sdw1
... finished key exchange with sdw2

[INFO] completed successfully


2.5 Verify gpssh
gpssh -f /usr/local/greenplum-db/all_host -e 'ls /usr/local/'

[ mdw] ls /usr/local/
[ mdw] bin games greenplum-db-6.7.0 lib libexec share
[ mdw] etc greenplum-db include lib64 sbin src
[sdw1] ls /usr/local/
[sdw1] bin etc games include lib lib64 libexec sbin share src
[sdw2] ls /usr/local/
[sdw2] bin etc games include lib lib64 libexec sbin share src

2.6 gpadmin Setup
Establish passwordless login for the gpadmin user:
su - gpadmin
source /usr/local/greenplum-db/greenplum_path.sh
ssh-keygen
ssh-copy-id sdw1
ssh-copy-id sdw2
gpssh-exkeys -f /usr/local/greenplum-db/all_host

Set the environment variables:
cat >> /home/gpadmin/.bash_profile << EOF
source /usr/local/greenplum-db/greenplum_path.sh
EOF

cat >> /home/gpadmin/.bashrc << EOF
source /usr/local/greenplum-db/greenplum_path.sh
EOF

Distribute the environment files to the segments:

gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bash_profile gpadmin@=:/home/gpadmin/.bash_profile
gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc


2.7 Deploy to the Segments
# run as root
# variable setup
link_name='greenplum-db'                    # symlink name
binary_dir_location='/usr/local'            # install path
binary_dir_name='greenplum-db-6.7.0'        # install directory
binary_path='/usr/local/greenplum-db-6.7.0' # full path
chown -R gpadmin:gpadmin $binary_path
rm -f ${binary_path}.tar; rm -f ${binary_path}.tar.gz
cd $binary_dir_location; tar cf ${binary_dir_name}.tar ${binary_dir_name}
gzip ${binary_path}.tar



The following requires the root password:

source /usr/local/greenplum-db/greenplum_path.sh
gpssh -f ${binary_path}/seg_host -e "mkdir -p ${binary_dir_location};rm -rf ${binary_path};rm -rf ${binary_path}.tar;rm -rf ${binary_path}.tar.gz"
gpscp -f ${binary_path}/seg_host ${binary_path}.tar.gz root@=:${binary_path}.tar.gz
gpssh -f ${binary_path}/seg_host -e "cd ${binary_dir_location};gzip -f -d ${binary_path}.tar.gz;tar xf ${binary_path}.tar"
gpssh -f ${binary_path}/seg_host -e "rm -rf ${binary_path}.tar;rm -rf ${binary_path}.tar.gz;rm -f ${binary_dir_location}/${link_name}"
gpssh -f ${binary_path}/seg_host -e ln -fs ${binary_dir_location}/${binary_dir_name} ${binary_dir_location}/${link_name}
gpssh -f ${binary_path}/seg_host -e "chown -R gpadmin:gpadmin ${binary_dir_location}/${link_name};chown -R gpadmin:gpadmin ${binary_dir_location}/${binary_dir_name}"
gpssh -f ${binary_path}/seg_host -e "source ${binary_path}/greenplum_path.sh"
gpssh -f ${binary_path}/seg_host -e "cd ${binary_dir_location};ls -l"

If the root password is not available, copy the prepared greenplum-db-6.7.0.tar.gz to the segment hosts and create the directories by hand, or set /usr/local on each segment to mode 777 and run the script above:

scp /usr/local/greenplum-db-6.7.0.tar.gz sdw1:greenplum-db-6.7.0.tar.gz
scp /usr/local/greenplum-db-6.7.0.tar.gz sdw2:greenplum-db-6.7.0.tar.gz

Unpack the file on the two segment hosts:

mkdir -p /usr/local
sudo mv greenplum-db-6.7.0.tar.gz /usr/local/
cd /usr/local
sudo gzip -f -d /usr/local/greenplum-db-6.7.0.tar.gz
sudo tar xf greenplum-db-6.7.0.tar

sudo rm -rf /usr/local/greenplum-db-6.7.0.tar; sudo rm -rf /usr/local/greenplum-db-6.7.0.tar.gz; sudo rm -f /usr/local/greenplum-db
sudo ln -fs /usr/local/greenplum-db-6.7.0 /usr/local/greenplum-db
sudo chown -R gpadmin:gpadmin /usr/local/greenplum-db; sudo chown -R gpadmin:gpadmin /usr/local/greenplum-db-6.7.0
source /usr/local/greenplum-db-6.7.0/greenplum_path.sh

2.8 Create the Data Directories
Create the Master's storage directory. Since there is no spare server, a standby directory under /data will serve as the backup:

sudo mkdir -p /data/master
sudo chown gpadmin:gpadmin /data/*

Create the segment storage directories (note that source is a shell builtin and cannot be run under sudo):

source /usr/local/greenplum-db/greenplum_path.sh
gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data/primary'
gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data/mirror'
gpssh -f /usr/local/greenplum-db/seg_host -e 'chown -R gpadmin /data/*'

2.9 Initialize the Database
Copy the configuration template:

mkdir -p /home/gpadmin/gpconfigs
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config
sudo vim /home/gpadmin/gpconfigs/gpinitsystem_config

In the configuration file, set the segments' DATA_DIRECTORY and MIRROR_DATA_DIRECTORY to the paths created above:

declare -a DATA_DIRECTORY=(/data/primary /data/primary)


To set up mirrors, also uncomment MIRROR_PORT_BASE (the file is sourced by bash, so there must be no spaces around the = sign):

MIRROR_PORT_BASE=7000
declare -a MIRROR_DATA_DIRECTORY=(/data/mirror /data/mirror)

Note: the official template configures 4 segments per host; we want only two per host, so the directory is listed twice. In general, repeating a directory N times yields N segments per host. A sketch of the remaining key fields follows.
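
For reference, here is how the other key fields in gpinitsystem_config would look for this cluster. The field names come from the shipped template; the values are assumptions matching the layout above, so verify them against your copy:

ARRAY_NAME="Greenplum Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=6000
declare -a DATA_DIRECTORY=(/data/primary /data/primary)
MASTER_HOSTNAME=mdw
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
ENCODING=UNICODE
MIRROR_PORT_BASE=7000
declare -a MIRROR_DATA_DIRECTORY=(/data/mirror /data/mirror)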

2.10 Run the Cluster Initialization
If a standby host is configured, append -s standby_master_hostname -S to the command:

gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /usr/local/greenplum-db/seg_host -D

gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /usr/local/greenplum-db/seg_host -D \
-s standby_master_hostname -S

If permission errors appear during the run, fix the permissions on the affected segment; answer y to the confirmation prompts.
On success, the output ends with a line like 20200602:17:27:36:009347 gpstart:mdw:gpadmin-[INFO]:-Database successfully started.

Startup logs are written under /home/gpadmin/gpAdminLogs/; consult them when troubleshooting.
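
After a successful initialization, overall cluster health can be checked with:

gpstate -s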

2.11 Set Environment Variables
vi ~/.bashrc

source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export LD_PRELOAD=/lib64/libz.so.1

source ~/.bashrc

If a standby server is configured, synchronize this configuration to the standby as well.

2.12 Standby Configuration
1. Since there is no spare server, segment host sdw1 doubles as the standby. Configure its environment variables as in 2.11.
2. Create the master data directory on it as in 2.8.
3. Initialize the standby:

gpinitstandby -s sdw1

Check whether the standby is configured correctly:

gpstate -f

To promote the standby to master (not needed during installation):

gpactivatestandby -d $MASTER_DATA_DIRECTORY

2.13 Access Settings
Allow external connections:

vi $MASTER_DATA_DIRECTORY/pg_hba.conf

host all gpadmin 0.0.0.0/0 trust

This new rule lets any IP connect as gpadmin. Note that trust performs no password check at all; use md5 instead if a password should be required. Then reload the configuration:

gpstop -u

vi $MASTER_DATA_DIRECTORY/postgresql.conf

Here, confirm that listen_addresses allows external interfaces (on a Greenplum master it typically defaults to '*').
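
A remote connection can then be tested from another machine with any PostgreSQL client, for example:

psql -h 10.0.61.1 -p 5432 -U gpadmin -d postgres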

With that, Greenplum 6.7 is installed.

2.14 Reinstalling
To reinitialize from scratch, the current cluster must first be removed entirely with gpdeletesystem:

gpdeletesystem -d /data/master/gpseg-1 -f

-d takes MASTER_DATA_DIRECTORY (the master's data directory) and removes all master and segment data directories.
-f forces deletion, terminating all processes first.

3 Performance Testing
Greenplum ships with the gpcheckperf performance-testing utility. The official documentation recommends running it before initializing the system; we did not, so we use it now to benchmark the hardware.

3.1 Network Performance
gpcheckperf -f /usr/local/greenplum-db/seg_host -r N -d /tmp

/usr/local/greenplum-db/./bin/gpcheckperf -f /usr/local/greenplum-db/seg_host -r N -d /tmp

-------------------
-- NETPERF TEST
-------------------
NOTICE: -t is deprecated, and has no effect
NOTICE: -f is deprecated, and has no effect
NOTICE: -t is deprecated, and has no effect
NOTICE: -f is deprecated, and has no effect

====================
== RESULT 2020-06-03T11:16:59.999500
====================
Netperf bisection bandwidth test
sdw1 -> sdw2 = 1117.510000
sdw2 -> sdw1 = 1117.580000

Summary:
sum = 2235.09 MB/sec
min = 1117.51 MB/sec
max = 1117.58 MB/sec
avg = 1117.55 MB/sec
median = 1117.58 MB/sec


3.2 Disk I/O and Memory Performance
gpcheckperf -f /usr/local/greenplum-db/seg_host -r ds -D \
-d /data/data1/primary -d /data/data2/primary \
-d /data/data1/mirror -d /data/data2/mirror

Testing every directory is slow; since the machines are identically configured, we test just one (the /data/data1/... paths above follow the documentation's example; substitute your actual directories, e.g., /data/primary):

gpcheckperf -f /usr/local/greenplum-db/seg_host -r ds -D -d /data/data1/primary

--------------------
-- DISK WRITE TEST
--------------------

--------------------
-- DISK READ TEST
--------------------

--------------------
-- STREAM TEST
--------------------

====================
== RESULT 2020-06-04T12:05:33.590979
====================

disk write avg time (sec): 1487.68
disk write tot bytes: 269539934208
disk write tot bandwidth (MB/s): 179.19
disk write min bandwidth (MB/s): 72.66 [sdw2]
disk write max bandwidth (MB/s): 106.52 [sdw1]
-- per host bandwidth --
disk write bandwidth (MB/s): 106.52 [sdw1]
disk write bandwidth (MB/s): 72.66 [sdw2]


disk read avg time (sec): 1213.67
disk read tot bytes: 269539934208
disk read tot bandwidth (MB/s): 211.81
disk read min bandwidth (MB/s): 105.23 [sdw2]
disk read max bandwidth (MB/s): 106.58 [sdw1]
-- per host bandwidth --
disk read bandwidth (MB/s): 106.58 [sdw1]
disk read bandwidth (MB/s): 105.23 [sdw2]


stream tot bandwidth (MB/s): 26629.10
stream min bandwidth (MB/s): 12755.90 [sdw1]
stream max bandwidth (MB/s): 13873.20 [sdw2]
-- per host bandwidth --
stream bandwidth (MB/s): 12755.90 [sdw1]
stream bandwidth (MB/s): 13873.20 [sdw2]

Source: https://blog.csdn.net/qq_34386723/article/details/106470879
