MariaDB High Availability: Replication Manager

This is a follow-up blog post that expands on the subject of highly available clusters, discussed in MariaDB MaxScale High Availability: Active-Standby Cluster.
MariaDB Replication Manager is a tool that manages MariaDB 10 clusters. It supports both interactive and automated failover of the master server. It verifies the integrity of the slave servers before promoting one of them as the replacement master, and it also protects the slaves by automatically setting them into read-only mode. You can find more information on the replication-manager in the replication-manager GitHub repository.
Using the MariaDB Replication Manager allows us to automate the replication failover. This reduces the amount of manual work required to adapt to changes in the cluster topology and makes for a more highly available database cluster.
In this blog post, we'll cover the topic of backend database HA and use the MariaDB Replication Manager to create a complete HA solution. We build on the setup described in the earlier blog post and integrate the MariaDB Replication Manager (MRM) into it. We're using CentOS 7 as our OS and version 0.7.0-rc2 of the replication-manager.
  Setting Up MariaDB Replication Manager
The Replication Manager allows us to manage the replication topology of the cluster without having to change it manually. The easiest way to integrate it into our Corosync setup is to build it from source and use the systemd service file it provides.
  sudo yum install go git
  export GOPATH=~/gocode && mkdir ~/gocode && cd ~/gocode
  go get github.com/tanji/replication-manager
  go install github.com/tanji/replication-manager
  sudo cp src/github.com/tanji/replication-manager/service/replication-manager.service /lib/systemd/system/replication-manager.service
  sudo ln -s $GOPATH/bin/replication-manager /usr/bin/replication-manager
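Since go get fetches the latest master by default, it may also be worth pinning the checkout to the 0.7.0-rc2 tag used in this post and doing a quick sanity check that both the symlinked binary and the systemd unit are visible. A minimal sketch, assuming that tag exists in the repository and the paths from the steps above:
  # Optional: build the release used in this post instead of the latest master
  cd $GOPATH/src/github.com/tanji/replication-manager
  git checkout 0.7.0-rc2
  go install github.com/tanji/replication-manager

  # Sanity checks: the binary should resolve and systemd should know about the unit
  sudo systemctl daemon-reload
  which replication-manager
  systemctl list-unit-files | grep replication-manager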
Now, we should have a working replication-manager installation. The next step is to configure the servers that it manages. The replication-manager reads its configuration file from /etc/replication-manager/config.toml. Create the file and add the following lines to it.
  logfile = "/var/log/replication-manager.log"
  verbose = true
  hosts = "192.168.56.1:3000,192.168.56.1:3001,192.168.56.1:3002,192.168.56.1:3003"
  user = "maxuser:maxpwd"
  rpluser = "maxuser:maxpwd"
  interactive = false
  failover-limit = 0
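The hosts option lists the MariaDB servers to monitor, user is the management account the replication-manager connects with, and rpluser is the account used for replication when slaves are repointed at a new master, so these credentials need to exist on the backends with sufficient privileges. If they are not already in place from the MaxScale setup in the previous post, something along these lines creates them on the current master (a rough sketch; the broad grant is a simplification, the exact minimal privilege set is documented in the replication-manager repository):
  # Run on the current master (192.168.56.1:3000); the grants replicate to the slaves.
  mysql -h 192.168.56.1 -P 3000 -u root -p <<'SQL'
  -- Single account reused for both "user" and "rpluser" in config.toml.
  -- A global grant is used here for simplicity; trim it down in production.
  GRANT ALL PRIVILEGES ON *.* TO 'maxuser'@'%' IDENTIFIED BY 'maxpwd';
  FLUSH PRIVILEGES;
  SQL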
Once the configuration file is in place, we can add the replication-manager as a cluster resource and colocate it with the clusterip resource we created in the previous blog post.
  sudo pcs resource create replication-manager systemd:replication-manager op monitor interval=1s
  sudo pcs constraint colocation add replication-manager with clusterip INFINITY
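The colocation constraint ties the replication-manager to whichever node currently owns clusterip, so only one instance runs at a time and it always runs alongside the active MaxScale. To double-check that the constraint was registered, something like the following can be used (pcs 0.9 syntax as shipped with CentOS 7; newer pcs releases phrase this subcommand slightly differently):
  # List the configured constraints; the colocation rule should show
  # replication-manager paired with clusterip at score INFINITY
  sudo pcs constraint show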
After the resource has been added, we can see that it has started on node1.
  $ sudo pcs resource
   Clone Set: maxscale-clone [maxscale]
       Started: [ node1 node2 ]
   clusterip        (ocf::heartbeat:IPaddr2):        Started node1
   replication-manager        (systemd:replication-manager):        Started node1
Looking at the maxadmin output on the server where the active MaxScale is running, we see that the server at 192.168.56.1:3000 is currently the master.
  $ sudo maxadmin list servers
  Servers.
  ---------+-----------------+-------+-------------+--------------
  Server   | Address         | Port  | Connections | Status
  ---------+-----------------+-------+-------------+--------------
  server1  | 192.168.56.1    |  3000 |           0 | Master, Running
  server2  | 192.168.56.1    |  3001 |           0 | Slave, Running
  server3  | 192.168.56.1    |  3002 |           0 | Slave, Running
  server4  | 192.168.56.1    |  3003 |           0 | Slave, Running
  ---------+-----------------+-------+-------------+--------------
Now if we kill the master, the replication-manager should pick that up and perform a master failover. But first, to verify that the failover is performed correctly, we need to insert some data, and to make it a bit of a challenge we'll do those inserts continuously. First, we create an extremely simple table.
  CREATE TABLE test.t1 (id INT);
For this, a mysql client has been started in a loop on a remote server.
  i=0; while true; do mysql -ss -u maxuser -pmaxpwd -h 192.168.56.220 -P 4006 -e "INSERT INTO test.t1 VALUES (1);SELECT NOW()"; sleep 1; done
Now, we’ll have a constant stream of inserts going to our cluster and we can see what happens when we kill the current master at 192.168.56.1:3000.
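How the master is stopped doesn't really matter, as long as the mysqld process behind 192.168.56.1:3000 goes away abruptly; a hard kill is a simple way to simulate a crash. A hypothetical example, assuming all four instances run on the same host and can be told apart by their port (fuser comes from the psmisc package):
  # Hard-kill the mysqld instance listening on port 3000 to simulate a master crash
  sudo kill -9 $(sudo fuser 3000/tcp 2>/dev/null)
The replication-manager log at /var/log/replication-manager.log then shows the failure being detected and the master switch being performed: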
  2016/10/01 19:56:05 WARN : Master Failure detected! Retry 1/5
  2016/10/01 19:56:05 INFO : INF00001 Server 192.168.56.1:3000 is down
  2016/10/01 19:56:07 WARN : Master Failure detected! Retry 2/5
  2016/10/01 19:56:09 WARN : Master Failure detected! Retry 3/5
  2016/10/01 19:56:11 WARN : Master Failure detected! Retry 4/5
  2016/10/01 19:56:13 WARN : Master Failure detected! Retry 5/5
  2016/10/01 19:56:13 WARN : Declaring master as failed
  2016/10/01 19:56:13 INFO : Starting master switch
  2016/10/01 19:56:13 INFO : Electing a new master
  2016/10/01 19:56:13 INFO : Slave 192.168.56.1:3001 [0] has been elected as a new master
  2016/10/01 19:56:13 INFO : Reading all relay logs on 192.168.56.1:3001
  2016/10/01 19:56:13 INFO : Stopping slave thread on new master
  2016/10/01 19:56:14 INFO : Resetting slave on new master and set read/write mode on
  2016/10/01 19:56:14 INFO : Switching other slaves to the new master
  2016/10/01 19:56:14 INFO : Change master on slave 192.168.56.1:3003
  2016/10/01 19:56:14 INFO : Change master on slave 192.168.56.1:3002
  2016/10/01 19:56:15 INFO : Master switch on 192.168.56.1:3001 complete
The replication-manager successfully detected the failure of the master and performed a failover. MaxScale will detect this and adapt accordingly.
  $ sudo maxadmin list servers
  Servers.
  ---------+---------------+-------+-------------+--------------
  Server   | Address       | Port  | Connections | Status
  ---------+---------------+-------+-------------+--------------
  server1  | 192.168.56.1  |  3000 |           0 | Down
  server2  | 192.168.56.1  |  3001 |           0 | Master, Running
  server3  | 192.168.56.1  |  3002 |           0 | Slave, Running
  server4  | 192.168.56.1  |  3003 |           0 | Slave, Running
  ---------+---------------+-------+-------------+--------------
From the remote server’s terminal, we can see that there was a small window where writes weren’t possible.
  2016-10-01 19:56:01
  2016-10-01 19:56:02
  2016-10-01 19:56:03
  2016-10-01 19:56:04
  ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  ERROR 1045 (28000): failed to create new session
  2016-10-01 19:56:17
  2016-10-01 19:56:18
  2016-10-01 19:56:19
Now our whole cluster is highly available and ready for all kinds of disasters.
  Summary
After integrating the replication-manager into our Corosync/Pacemaker setup, our cluster is highly available. If the server where the replication-manager is running were to go down, it would be started up on another node. Database server outages will be managed by the replication-manager, and access to the cluster will be handled by MaxScale.
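One quick way to convince yourself of that is to put the node currently running the replication-manager into standby and watch Pacemaker move the resources to the other node (a sketch using the pcs 0.9 syntax from CentOS 7; newer pcs releases use "pcs node standby" instead):
  sudo pcs cluster standby node1      # clusterip and replication-manager move to node2
  sudo pcs status resources           # verify where the resources are now running
  sudo pcs cluster unstandby node1    # bring node1 back into the cluster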
As was mentioned in the previous blog post, high availability is a critical part of any modern system. Even a comfortable Saturday afternoon can turn into a nightmare when a service isn't highly available.