MariaDB Galera cluster setup issue

2022-01-15 mariadb mysql galera

I am trying to get a MariaDB cluster up and running, but it is not working out for me. Right now I am using MariaDB Galera 5.5.36 on a 64-bit Red Hat ES6 machine. I installed MariaDB through this repo:

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5-galera/rhel6-amd64/
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
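
For reference, with that repo file saved under /etc/yum.repos.d/, the Galera-enabled server ships as its own package; something like the following should install it (package names as published in the 5.5-galera repo; verify with yum search mariadb if in doubt):

sudo yum install MariaDB-Galera-server MariaDB-client galera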

In server.cnf, I have the following on server 1:

[mariadb]
log_error=/var/log/mariadb.log
query_cache_size=0
query_cache_type=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.211.133
wsrep_cluster_name='cluster'
wsrep_node_address='192.168.211.132'
wsrep_node_name='cluster1'
wsrep_sst_method=rsync

On server 2 I have:

[mariadb]
log_error=/var/log/mariadb.log
query_cache_size=0
query_cache_type=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.211.132
wsrep_cluster_name='cluster'
wsrep_node_address='192.168.211.133'
wsrep_node_name='cluster2'
wsrep_sst_method=rsync

When I start server 1 with the command sudo service mysql start --wsrep-new-cluster, it starts up just fine; if I open up mysql and check the status of wsrep, it says everything is up and running, which is good. But when I try sudo service mysql start on the second server, I get the following in the error logs:

140609 14:47:55 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140609 14:47:56 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.i5qfm2' --pid-file='/var/lib/mysql/localhost.localdomain-recover.pid'
140609 14:47:57 mysqld_safe WSREP: Recovered position 85448d73-ebe8-11e3-9c20-fbc1995fee11:0
140609 14:47:57 [Note] WSREP: wsrep_start_position var submitted: '85448d73-ebe8-11e3-9c20-fbc1995fee11:0'
140609 14:47:57 [Note] WSREP: Read nil XID from storage engines, skipping position init
140609 14:47:57 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
140609 14:47:57 [Note] WSREP: wsrep_load(): Galera 25.3.2(r170) by Codership Oy <info@codership.com> loaded successfully.
140609 14:47:57 [Note] WSREP: CRC-32C: using hardware acceleration.
140609 14:47:57 [Note] WSREP: Found saved state: 85448d73-ebe8-11e3-9c20-fbc1995fee11:-1
140609 14:47:57 [Note] WSREP: Passing config to GCS: base_host = 192.168.211.133; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.proto_max = 5
140609 14:47:57 [Note] WSREP: Assign initial position for certification: 0, protocol version: -1
140609 14:47:57 [Note] WSREP: wsrep_sst_grab()
140609 14:47:57 [Note] WSREP: Start replication
140609 14:47:57 [Note] WSREP: Setting initial position to 85448d73-ebe8-11e3-9c20-fbc1995fee11:0
140609 14:47:57 [Note] WSREP: protonet asio version 0
140609 14:47:57 [Note] WSREP: Using CRC-32C (optimized) for message checksums.
140609 14:47:57 [Note] WSREP: backend: asio
140609 14:47:57 [Note] WSREP: GMCast version 0
140609 14:47:57 [Note] WSREP: (0c085f34-efe5-11e3-9f6b-8bfd1706e2a4, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
140609 14:47:57 [Note] WSREP: (0c085f34-efe5-11e3-9f6b-8bfd1706e2a4, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
140609 14:47:57 [Note] WSREP: EVS version 0
140609 14:47:57 [Note] WSREP: PC version 0
140609 14:47:57 [Note] WSREP: gcomm: connecting to group 'cluster', peer '192.168.211.132:,192.168.211.134:'
140609 14:48:00 [Warning] WSREP: no nodes coming from prim view, prim not possible
140609 14:48:00 [Note] WSREP: view(view_id(NON_PRIM,0c085f34-efe5-11e3-9f6b-8bfd1706e2a4,1) memb {
        0c085f34-efe5-11e3-9f6b-8bfd1706e2a4,0
} joined {
} left {
} partitioned {
})
140609 14:48:01 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.50775S), skipping check
140609 14:48:31 [Note] WSREP: view((empty))
140609 14:48:31 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
         at gcomm/src/pc.cpp:connect():141
140609 14:48:31 [ERROR] WSREP: gcs/src/gcs_core.c:gcs_core_open():196: Failed to open backend connection: -110 (Connection timed out)
140609 14:48:31 [ERROR] WSREP: gcs/src/gcs.c:gcs_open():1291: Failed to open channel 'cluster' at 'gcomm://192.168.211.132,192.168.211.134': -110 (Connection timed out)
140609 14:48:31 [ERROR] WSREP: gcs connect failed: Connection timed out
140609 14:48:31 [ERROR] WSREP: wsrep::connect() failed: 7
140609 14:48:31 [ERROR] Aborting

140609 14:48:31 [Note] WSREP: Service disconnected.
140609 14:48:32 [Note] WSREP: Some threads may fail to exit.
140609 14:48:32 [Note] /usr/sbin/mysqld: Shutdown complete

140609 14:48:32 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdomain.pid ended

I am at a loss as to why the second server cannot detect that a cluster is up and running. These machines can communicate with each other just fine: I can SSH from one to the other, and they can ping each other. I have tried deleting the galera cache, downgrading my version of MariaDB Galera, disabling SELinux, running the mysql service as a different user, verifying that the correct ports are open, running the nodes on two VMs on separate computers with different IP addresses, etc. Does anyone have any idea what is going on here? I have been searching for three days trying to fix this, but no solution seems to work for me.
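
For anyone retracing my steps, this is roughly what I mean by checking the wsrep status and verifying the ports (these are the standard wsrep status variables and the usual Galera ports; adjust the IP for your layout):

# On a healthy bootstrapped node, expect wsrep_ready = ON,
# wsrep_cluster_status = Primary, wsrep_local_state_comment = Synced.
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_ready';
SHOW STATUS LIKE 'wsrep_cluster_status';
SHOW STATUS LIKE 'wsrep_local_state_comment';"

# Galera needs 4567 (group communication), 4568 (IST) and 4444 (SST)
# open in addition to 3306; probe server 1 from the other node:
nc -zv 192.168.211.132 4567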

Recommended Answer

Here is how I fixed my similar issue.

CentOS 7 w/ MariaDB Galera 10.1.

On node2 I saw this:

2016-12-27 15:40:38 140703512762624 [Warning] WSREP: no nodes coming from prim view, prim not possible

After doing some reading, I tried running this on node1.

service mysql start --wsrep-new-cluster

But this failed, and in the logs, I found this...

2016-12-27 15:44:08 140438853814528 [ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .

So I edited the file /var/lib/mysql/grastate.dat, changing safe_to_bootstrap to 1.
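
For illustration, grastate.dat looks roughly like this (the uuid and seqno below are placeholders for whatever your node recorded; only the last line needs to change):

# GALERA saved state
version: 2.1
uuid:    <your-cluster-state-uuid>
seqno:   -1
safe_to_bootstrap: 1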

I was then able to start the Primary node using:

service mysql start --wsrep-new-cluster

Then for the other nodes, I just used:

service mysql start
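
Once the other nodes were up, a quick sanity check from any node confirms they actually joined:

# wsrep_cluster_size should match the number of nodes you started.
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"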

Note: This was in a demo pre-production environment. I promptly broke it after getting everything to work by rebooting all servers at the same time :P, but I knew there were no writes and that the DBs were in sync. If you are in production and this happens, you can use the following to figure out which node to run "new-cluster" on, which is akin to saying: make me the primary.

mysqld_safe --wsrep-recover
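
The idea, roughly: run the recovery on every stopped node, compare the uuid:seqno pairs it reports, and bootstrap the node with the highest seqno, since it holds the most recent commits (a sketch; the log path is the one from the configs above, so adjust to yours):

# On each stopped node, recover the last committed position;
# it is printed on a "WSREP: Recovered position" line.
mysqld_safe --wsrep-recover
grep 'Recovered position' /var/log/mariadb.log | tail -1
# Start the highest-seqno node with --wsrep-new-cluster,
# then plain "service mysql start" on the rest.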

If this is a production issue, I highly recommend reading this article and making a backup with CloneZilla before throwing commands at the broken clients!

https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
