TigerGraph Cluster Installation

2022-04-15


## 1. Install Required Packages
# Run on every node
sudo su -

yum install -y tar curl cronie iproute util-linux-ng net-tools coreutils openssh-clients openssh-server sshpass

## 2. Configure Passwordless SSH
# Run on every node
ssh-keygen
# Press Enter through all prompts to accept the defaults
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# Copy the output of the following command
cat /root/.ssh/id_rsa.pub

## Then, on each of the other two nodes
# Paste in the public key copied above and save the file
vi /root/.ssh/authorized_keys

# Verify that each node can SSH into every node in the cluster
[root@elk7001 ~]# ssh elk7001
The authenticity of host 'elk7001 (10.27.21.37)' can't be established.
ECDSA key fingerprint is SHA256:wEEJTkkNBIyCRMC4Lvi/HzGw+V3FqT/loIktEDFeWFQ.
ECDSA key fingerprint is MD5:c3:6c:57:d8:72:21:9e:0d:eb:97:b7:4d:eb:ef:6a:05.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'elk7001,10.27.21.37' (ECDSA) to the list of known hosts.
Last login: Tue Jan 12 07:59:52 2021
[root@elk7001 ~]# exit
logout
Connection to elk7001 closed.
[root@elk7001 ~]# ssh elk7002
The authenticity of host 'elk7002 (10.27.21.38)' can't be established.
ECDSA key fingerprint is SHA256:V7bkpVRFSUmbc0jpRR0Jg8jBbimBbraE7xwVkzSvdM0.
ECDSA key fingerprint is MD5:aa:65:84:d4:17:de:49:dd:b2:19:ae:81:35:f8:92:50.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'elk7002,10.27.21.38' (ECDSA) to the list of known hosts.
Last login: Tue Jan 12 07:59:56 2021
[root@elk7002 ~]# exit
logout
Connection to elk7002 closed.
[root@elk7001 ~]# ssh elk7003
The authenticity of host 'elk7003 (10.27.20.198)' can't be established.
ECDSA key fingerprint is SHA256:dazqXrBVb1fSEftw8TPX0Aa0YWZmxNporkbOHSDjMxA.
ECDSA key fingerprint is MD5:91:1f:3e:1f:8d:b5:97:2f:eb:34:9b:bc:f4:55:f1:61.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'elk7003,10.27.20.198' (ECDSA) to the list of known hosts.
Last login: Tue Jan 12 07:59:57 2021
[root@elk7003 ~]# exit
logout
Connection to elk7003 closed.
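The manual logins above can be scripted once the keys are in place. A minimal sketch (`check_ssh` is a hypothetical helper, not part of any tool; the hostnames come from this guide). Its first argument overrides the ssh command, which keeps the function testable:

```shell
# Attempt a non-interactive (BatchMode) login to each cluster node and summarize.
# Usage: check_ssh [ssh_command]   (ssh_command defaults to ssh)
check_ssh() {
  local ssh_cmd=${1:-ssh} ok=0 fail=0 h
  for h in elk7001 elk7002 elk7003; do
    if $ssh_cmd -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
      ok=$((ok+1))
    else
      fail=$((fail+1))
      echo "ssh to $h failed" >&2
    fi
  done
  echo "$ok reachable, $fail unreachable"
}
```

Run `check_ssh` on each node in turn; continue only once all three hosts report reachable.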

## 3. Set Up the NTP Service
# Run on the node chosen as the NTP server
yum install -y ntp

# Edit the NTP configuration file
vim /etc/ntp.conf

Comment out these four lines in the configuration file:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

Then add these lines below them:
server 0.cn.pool.ntp.org iburst
server 1.cn.pool.ntp.org iburst
server 2.cn.pool.ntp.org iburst
server 3.cn.pool.ntp.org iburst

# Start the NTP service and enable it at boot
systemctl start ntpd
systemctl enable ntpd
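The edits above can also be applied non-interactively. A sketch (`patch_ntp_conf` is a hypothetical helper; it assumes the stock CentOS `server` lines and leaves a `.bak` backup next to the file):

```shell
# Comment out the default CentOS pool servers and append the cn pool servers.
# Usage: patch_ntp_conf /etc/ntp.conf
patch_ntp_conf() {
  local conf=$1 i
  # Prefix a '#' to each default server line (keeps a .bak backup).
  sed -i.bak 's/^server [0-3]\.centos\.pool\.ntp\.org iburst/#&/' "$conf"
  # Append the replacement cn pool servers.
  for i in 0 1 2 3; do
    echo "server $i.cn.pool.ntp.org iburst" >> "$conf"
  done
}
```

Review the resulting file before restarting ntpd, since a non-stock ntp.conf may use different server lines.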

## 4. Configure Each Cluster Node as a Client of the NTP Server Above
# Run on every node of the TigerGraph cluster
yum install -y ntp
vim /etc/ntp.conf

# Comment out the default time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Set the time server to the locally built NTP server
server elk7001

# Restrict what the NTP server host may do locally (it serves time, but may not reconfigure or query this daemon)
restrict elk7001 nomodify notrap noquery

# Synchronize with the NTP server once:
ntpdate -u elk7001

# Check the NTP synchronization status
ntpq -p
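In the `ntpq -p` output, the peer the daemon has actually selected is marked with `*`, and the `offset` column is in milliseconds. A small parsing sketch (`check_offset` is a hypothetical helper; the column position assumes the standard `ntpq -p` layout):

```shell
# Print the offset (in ms) of the currently selected peer from `ntpq -p` output.
# Usage: ntpq -pn | check_offset
check_offset() {
  # The selected peer line starts with '*'; field 9 is the offset column.
  awk '$1 ~ /^\*/ { printf "offset %.3f ms\n", $9 }'
}
```

An offset in the low milliseconds is comfortably inside the 1-second tolerance required below.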

## 5. Keep Clock Skew Between All Nodes Under 1 Second
# Use a crontab entry to resynchronize with the NTP server periodically; the example below syncs every 5 minutes.
crontab -e
# Add one line to the crontab (per-user crontabs take no user field, so `root` must not appear here)
*/5 * * * * /usr/sbin/ntpdate elk7001
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
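To spot-check the under-1-second requirement, compare epoch-second timestamps collected from two nodes. A sketch (`drift_ok` is a hypothetical helper, not part of any tool):

```shell
# Succeed when two epoch-second timestamps differ by at most 1 second.
# Usage: drift_ok "$(date +%s)" "$(ssh elk7002 date +%s)"
drift_ok() {
  local d=$(( $1 - $2 ))
  # ${d#-} strips a leading minus sign, giving the absolute difference.
  [ "${d#-}" -le 1 ]
}
```

Note the ssh round trip itself adds some delay, so a borderline failure here is worth re-checking with `ntpq -p`.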

## 6. Cluster Installation Steps
# Download and unpack the installation package (running install.sh on one node is enough;
# as the transcript below shows, the installer uploads the package to the rest of the cluster)
wget http://dl.tigergraph.com/tigergraph-freetrial-latest.tar.gz
tar xzf tigergraph-freetrial-latest.tar.gz
cd tigergraph-3.1.0-offline/
./install.sh

# The interactive input during installation looks like this
[root@elk7001 tigergraph-3.1.0-offline]# ./install.sh

   _______                 ______                 __
  /_  __(_)___ ____  _____/ ____/________ _____  / /_
   / / / / __ `/ _ \/ ___/ / __/ ___/ __ `/ __ \/ __ \
  / / / / /_/ /  __/ /  / /_/ / /  / /_/ / /_/ / / / /
 /_/ /_/\__, /\___/_/   \____/_/   \__,_/ .___/_/ /_/
        /____/                         /_/


Welcome to the TigerGraph platform installer!

[PROGRESS]: 08:32:45 Fresh install TigerGraph platform ...
TigerGraph
Software Subscription Agreement

Do you accept the license agreement? (y/N): y

Do you have a customized enterprise license? (If you don’t have a customized License Key, Free default license key will be used.) (y/N)y
Please enter the license key, or hit ENTER to set it later:
The default user who will own and run TigerGraph platform: tigergraph. Please enter the new value, or hit ENTER to skip:
[NOTE ]: The TigerGraph user: tigergraph
The default TigerGraph user's password: tigergraph. Please enter the new value, or hit ENTER to skip:
[NOTE ]: Entered TigerGraph user password
The default AppRoot of TigerGraph platform: /home/tigergraph/tigergraph/app. Please enter the new value, or hit ENTER to skip: /data1/tigergraph/app
[NOTE ]: Input TigerGraph AppRoot: /data1/tigergraph/app
The default DataRoot of TigerGraph platform: /home/tigergraph/tigergraph/data. Please enter the new value, or hit ENTER to skip: /data1/tigergraph/data
[NOTE ]: Input TigerGraph DataRoot: /data1/tigergraph/data
The default LogRoot of TigerGraph platform: /home/tigergraph/tigergraph/log. Please enter the new value, or hit ENTER to skip: /data1/tigergraph/log
[NOTE ]: Input TigerGraph LogRoot: /data1/tigergraph/log
The default TempRoot of TigerGraph platform: /home/tigergraph/tigergraph/tmp. Please enter the new value, or hit ENTER to skip: /data1/tigergraph/tmp
[NOTE ]: Input TigerGraph TempRoot: /data1/tigergraph/tmp
The default ssh port used in TigerGraph platform: 22. Please enter the new value, or hit ENTER to skip:
[NOTE ]: Input SSH Port: 22

[PROGRESS]: 09:55:58 Input the platform node(s) information ...
Please enter the number of node(s) that the TigerGrpah platform will be installed on (e.g. 3): 3
[NOTE ]: Input number of nodes: 3
Please enter the IP address of node m1 (e.g. 192.168.1.1): 10.27.20.112
Please enter the IP address of node m2 (e.g. 192.168.1.1): 10.27.20.125
Please enter the IP address of node m3 (e.g. 192.168.1.1): 10.27.20.126

Please enter the sudo user of the nodes(s) (the same sudo user will be used to do installation on all nodes): root
[NOTE ]: Input sudo user: root
[NOTE ]: Checking if ssh password login is enabled
[NOTE ]: Detected that SSH with password is not allowed in your node(s), will use SSH with key file instead.
Please enter the path of ssh key file: /root/.ssh/id_rsa
Would you like to install high-availability cluster? (y/N): y
Please enter the replication factor in the range [2, 3]: 2

[NOTE ]: The installer will make the following changes to system:
(it is recommended to accept the changes, but the installation will continue if they are rejected.)
1. Set NTP system time synchronization, do you accept?
(If rejected, it is user's responsiblity to synchronize the system time among cluster nodes) (y/N): y
Accept
2. Set iptables (firewall) rules among cluster nodes, do you accept?
(If rejected, it is user's responsiblity to make tcp ports open among cluster nodes) (y/N): y
Accept

[PROGRESS]: 09:56:31 Checking the cluster/node environment and configuration ...
------------------------------------------------------------
[PROGRESS]: 09:56:32 Checking login and scp functionality in the cluster
------------------------------------------------------------
[PROGRESS]: 09:56:32 Waiting 'Checking login and scp functionality' to be done on nodes (m1,m2,m3), this may take a while ...
[NOTE ]: Job 'Checking login and scp functionality' on node m1 succeeded
[NOTE ]: Job 'Checking login and scp functionality' on node m2 succeeded
[NOTE ]: Job 'Checking login and scp functionality' on node m3 succeeded
------------------------------------------------------------
[PROGRESS]: 09:56:34 Prechecking each node in background concurrently ...
------------------------------------------------------------
[NOTE ]: Retrieve the internal IP of m1 (10.27.20.112)
[NOTE ]: Internal IP obtained: 10.27.20.112
[NOTE ]: Retrieve the internal IP of m2 (10.27.20.125)
[NOTE ]: Internal IP obtained: 10.27.20.125
[NOTE ]: Retrieve the internal IP of m3 (10.27.20.126)
[NOTE ]: Internal IP obtained: 10.27.20.126

[PROGRESS]: 09:56:35 Wait until Prechecking on each node to finish, this may take a while ...

[NOTE ]: Job Prechecking on node m1 done

[NOTE ]: Job Prechecking on node m2 done

[NOTE ]: Job Prechecking on node m3 done
------------------------------------------------------------
[NOTE ]: Prechecking on all nodes succeeded



[PROGRESS]: 09:56:45 Setup the firewall rules on m1 (10.27.20.112) in background, you may check the log at:
/root/tigergraph-3.1.0-offline/logs/setup_firewall.log.m1

[PROGRESS]: 09:56:45 Setup the firewall rules on m2 (10.27.20.125) in background, you may check the log at:
/root/tigergraph-3.1.0-offline/logs/setup_firewall.log.m2

[PROGRESS]: 09:56:45 Setup the firewall rules on m3 (10.27.20.126) in background, you may check the log at:
/root/tigergraph-3.1.0-offline/logs/setup_firewall.log.m3
[PROGRESS]: 09:56:45 Wait until setup firewall on each node to finish, this may take a while ...
[PROGRESS]: 09:56:51 Checking ports access in the cluster...
------------------------------------------------------------
[NOTE ]: Node m1 (10.27.20.112) port check passed
[NOTE ]: Node m2 (10.27.20.125) port check passed
[NOTE ]: Node m3 (10.27.20.126) port check passed
------------------------------------------------------------
[PROGRESS]: 09:56:56 Upload offline package to all cluster nodes
------------------------------------------------------------
[PROGRESS]: 09:56:56 Waiting 'uploading package' to be done on nodes (m1,m2,m3), this may take a while ...
[NOTE ]: Job 'uploading package' on node m1 succeeded
[NOTE ]: Job 'uploading package' on node m2 succeeded
[NOTE ]: Job 'uploading package' on node m3 succeeded
------------------------------------------------------------
[NOTE ]: Successfully uploaded offline package to all cluster nodes.

[PROGRESS]: 09:57:28 Installing TigerGraph platform on each node in background concurrently, this may take approximately 10 minutes...
------------------------------------------------------------
[NOTE ]: Installing TigerGraph platform on node m1 is done
[NOTE ]: Installing TigerGraph platform on node m2 is done
[NOTE ]: Installing TigerGraph platform on node m3 is done
------------------------------------------------------------
[NOTE ]: Installation on all nodes succeeded

===============================================================
Congratulations! Installation Finished!
===============================================================

[PROGRESS]: 10:01:31 Start tigergraph service, this may take a while ...
[ Info] Starting EXE
[ Info] Starting CTRL
[ Info] Generating config files to all machines
[ Info] Successfully applied configuration change. Please restart services to make it effective immediately.
[ Info] Initializing KAFKA
[ Info] Starting EXE
[ Info] Starting CTRL
[ Info] Starting ZK ETCD DICT KAFKA ADMIN GSE NGINX GPE RESTPP KAFKASTRM-LL KAFKACONN TS3SERV GSQL TS3 IFM GUI
[ Info] Applying config
[Warning] No difference from staging config, config apply is skipped.
[ Info] Successfully applied configuration change. Please restart services to make it effective immediately.
[ Info] Cluster is initialized successfully
===============================================================
TigerGraph is successfully started!
===============================================================

[NOTE ]: Time synchronization check passed
Thank you for using TigerGraph platform!
[NOTE ]: Please login to node m1 (10.27.20.112) of the platform to continue.


## 7. Follow-up Steps
# Open http://m1_ip:14240 in a browser,
# where m1_ip is the IP of the node where the installer ran, e.g. http://tigergraph:14240/
# If the page cannot be reached, the firewall is a likely cause; disable it:
systemctl stop firewalld
systemctl disable firewalld
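Before disabling the firewall, it can help to confirm whether the port answers at all. A bash-only probe using the `/dev/tcp` pseudo-device (`port_open` is a hypothetical helper; the IP and port come from this guide):

```shell
# Return success if a TCP connection to host:port can be opened.
# Uses bash's /dev/tcp pseudo-device, so it works without nc or curl.
# Usage: port_open HOST PORT
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
```

Usage: `port_open 10.27.20.112 14240 && echo "GraphStudio port is open"`. If the port is open but the page still fails from your workstation, the block is between you and the cluster rather than on the node itself.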
