mysql-mha

Basic principles of high availability

By default, once the SQL thread has finished applying the events in a relay log file, that relay log is automatically purged.
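
You can confirm the default with:

mysql> show variables like 'relay_log_purge';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| relay_log_purge | ON    |
+-----------------+-------+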

MHA environment

hostname  Public IP    Internal IP    MySQL ver.  OS version
db01      10.0.0.51    172.16.1.51    5.6.40      CentOS7.5
db02      10.0.0.52    172.16.1.52    5.6.40      CentOS7.5
db03      10.0.0.53    172.16.1.53    5.6.40      CentOS7.5
db04      10.0.0.54    172.16.1.54    5.6.40      CentOS7.5

Initialize all the slaves:
stop slave;
reset slave all;

GTID-based master-slave replication

UUID: dbbf22c5-f830-11e8-afef-000c293e6a42. Every MySQL instance has its own UUID,
stored in auto.cnf under the data directory.
TID: the transaction commit number.
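
The UUID can be confirmed on disk (the path assumes this document's /usr/local/mysql layout):

[root@db01 ~]# cat /usr/local/mysql/data/auto.cnf
[auto]
server-uuid=dbbf22c5-f830-11e8-afef-000c293e6a42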

Traditional master-slave replication:
On the master: enable binlog, set server-id
    master-log-file: mysql-bin.000001
    master-log-pos: 120
    Create a replication account

On the slave: server-id must differ from the master's

GTID: 3E11FA47-71CA-11E1-9E33-C80AA9429562:23

23 is the transaction number; it increments by one with each commit.
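
A quick way to watch it grow (a minimal sketch; the database name demo is hypothetical):

mysql> select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 3E11FA47-71CA-11E1-9E33-C80AA9429562:1-23
mysql> create database demo;
mysql> select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 3E11FA47-71CA-11E1-9E33-C80AA9429562:1-24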

GTID-based replication is only available from MySQL 5.6 onward.

How to enable GTID

mysql> show variables like '%gtid%';
+---------------------------------+-----------+
| Variable_name                   | Value     |
+---------------------------------+-----------+
| binlog_gtid_simple_recovery     | OFF       |
| enforce_gtid_consistency        | OFF       |
| gtid_executed                   |           |
| gtid_mode                       | OFF       |
| gtid_next                       | AUTOMATIC |
| gtid_owned                      |           |
| gtid_purged                     |           |
| simplified_binlog_gtid_recovery | OFF       |
+---------------------------------+-----------+
8 rows in set (0.00 sec)

[root@db01 ~]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 1
log_bin = mysql-bin
gtid_mode=ON
enforce_gtid_consistency

[root@db01 ~]# less /usr/local/mysql/data/db01.err 
2018-12-06 10:18:31 8129 [ERROR] --gtid-mode=ON or UPGRADE_STEP_1 or UPGRADE_STEP_2 requires --log-bin and --log-slave-updates
2018-12-06 10:18:31 8129 [ERROR] Aborting

[root@db01 ~]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 1
log_bin = mysql-bin
gtid_mode=ON
enforce_gtid_consistency
log-slave-updates

[root@db01 ~]# /etc/init.d/mysqld restart

mysql> show variables like '%gtid%';
+---------------------------------+-----------+
| Variable_name                   | Value     |
+---------------------------------+-----------+
| binlog_gtid_simple_recovery     | OFF       |
| enforce_gtid_consistency        | ON        |
| gtid_executed                   |           |
| gtid_mode                       | ON        |
| gtid_next                       | AUTOMATIC |
| gtid_owned                      |           |
| gtid_purged                     |           |
| simplified_binlog_gtid_recovery | OFF       |
+---------------------------------+-----------+
8 rows in set (0.00 sec)

Enabling GTID on MySQL 5.6 requires three parameters:
gtid_mode=ON
enforce_gtid_consistency
log-slave-updates

Situations that require log-slave-updates:
    Dual-master setups
    Cascading replication
    GTID (on 5.6)

MySQL 5.7 can enable GTID without log-slave-updates.

log-slave-updates makes a slave write the updates it applies from replication into its own binlog.


The slaves must also enable GTID and be restarted:
gtid_mode=ON
enforce_gtid_consistency
log-slave-updates
log_bin = mysql-bin

With GTID enabled on the master and all three slaves, create a replication account on the master:
mysql> grant replication slave on *.* to slave@'172.16.1.%' identified by '1';
Query OK, 0 rows affected (0.00 sec)

On each slave, configure the connection information for the master:
mysql> change master to 
    master_host='172.16.1.51',
    master_user='slave',
    master_password='1',
    master_auto_position=1;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.1.51
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 407
               Relay_Log_File: db02-relay-bin.000002
                Relay_Log_Pos: 617
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            ................
             Master_Server_Id: 1
                  Master_UUID: 6b2721ff-f7a1-11e8-ac47-000c29d7a44f
             Master_Info_File: /usr/local/mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 6b2721ff-f7a1-11e8-ac47-000c29d7a44f:1
            Executed_Gtid_Set: 6b2721ff-f7a1-11e8-ac47-000c29d7a44f:1
                Auto_Position: 1
1 row in set (0.00 sec)


If GTID mode is not enabled, this command fails with the following error:
mysql> change master to master_host='172.16.1.51',master_user='slave',master_password='1',master_auto_position=1;
ERROR 1777 (HY000): CHANGE MASTER TO MASTER_AUTO_POSITION = 1 can only be executed when @@GLOBAL.GTID_MODE = ON.


The master status output now includes a GTID field:
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 407
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 6b2721ff-f7a1-11e8-ac47-000c29d7a44f:1
1 row in set (0.00 sec)


To skip reverse DNS resolution, add `skip_name_resolve` to the mysqld section of the config file.
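
A minimal snippet (merge it into the existing [mysqld] section rather than adding a second one):

[mysqld]
skip_name_resolve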

With GTID-based replication in place, MHA deployment can begin.
MHA also works on top of traditional position-based replication.

Replication prerequisites for MHA:
Master:
    Enable binlog
    server-id = 5
    Create a replication user.

Slaves:
    Enable binlog
    server-id != 5, ideally greater than 5
    Slaves must not share a server-id with one another, because any slave may later be promoted to master.
    Also create the replication user here.

## Disable SSH DNS resolution to speed up logins

    [root@phpmyadmin ~]# grep "DNS" /etc/ssh/sshd_config
    #UseDNS yes
    [root@phpmyadmin ~]# vim /etc/ssh/sshd_config
    UseDNS no
    [root@phpmyadmin ~]# grep "DNS" /etc/ssh/sshd_config
    UseDNS no
    [root@phpmyadmin ~]# systemctl restart sshd

sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
systemctl restart sshd

Deploying MHA

0 Recommended my.cnf configuration

[root@db03 ~]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 3
gtid_mode = ON
enforce_gtid_consistency
log_slave_updates
log_bin = mysql-bin
read_only = 1
skip_name_resolve
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

1 Install the Perl dependency on all nodes (all 4 machines)

yum install perl-DBD-MySQL -y
...
Installed:
perl-DBD-MySQL.x86_64 0:4.023-6.el7                                                                   

Dependency Installed:
perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7        perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7       
perl-DBI.x86_64 0:1.627-4.el7                       perl-IO-Compress.noarch 0:2.061-2.el7             
perl-Net-Daemon.noarch 0:0.48-5.el7                 perl-PlRPC.noarch 0:0.2020-14.el7                 
...

2 Install the MHA node package on all nodes

rpm -ivh /root/mha4mysql-node-0.56-0.el6.noarch.rpm

3 Create the MHA management account on the master

mysql> GRANT ALL ON *.* TO mha@'172.16.1.%' IDENTIFIED BY 'mha';

Since GTID-based replication is already set up, the slaves will get this account automatically.
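
A minimal check on any slave confirms the grant replicated:

mysql> select user,host from mysql.user where user='mha';
+------+------------+
| user | host       |
+------+------------+
| mha  | 172.16.1.% |
+------+------------+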

4 Create command symlinks (on all 4 machines)

MHA calls mysql and mysqlbinlog from /usr/bin, which is where a yum install of MySQL would put them; since MySQL here lives under /usr/local/mysql, create symlinks:

ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
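
Verify that the links resolve on each machine:

[root@db01 ~]# which mysql mysqlbinlog
/usr/bin/mysql
/usr/bin/mysqlbinlog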

5 Deploy the manager node (here we install it on machine 4)

The EPEL repository is required for the dependencies. The manager package depends on the node package, so the node package must be installed first.

yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
rpm -ivh /root/mha4mysql-manager-0.56-0.el6.noarch.rpm

6 Create the directories and the configuration file

Config directory (the config file itself should have no trailing whitespace)

[root@db04 ~]# mkdir /etc/mha
Log directory
[root@db04 ~]# mkdir -p /var/log/mha/app1

[root@db04 ~]# cat /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager.log
manager_workdir=/var/log/mha/app1
master_binlog_dir=/usr/local/mysql/data
user=mha
password=mha
repl_user=slave
ping_interval=2
ssh_user=root
repl_password=1

[server1]
hostname=172.16.1.51
port=3306

[server2]
#candidate_master=1
#check_repl_delay=0
hostname=172.16.1.52
port=3306

[server3]
hostname=172.16.1.53
port=3306

[server4]
hostname=172.16.1.54
port=3306

[server default] holds the global settings; the [serverN] sections are per-host settings, and where the two overlap the per-host setting wins.
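
For example (the ssh_user=admin value is purely illustrative):

[server default]
ssh_user=root

[server2]
# applies to server2 only, overriding the global ssh_user
ssh_user=admin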

7 Configure passwordless SSH login

All four machines must be able to SSH to one another without a password, including each machine to itself.

yum install -y sshpass
The form below creates the key without prompting for any input:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa &> /dev/null  

sshpass -p 123456 ssh-copy-id -i /root/.ssh/id_dsa.pub -o "StrictHostKeyChecking no" root@172.16.1.51
sshpass -p 123456 ssh-copy-id -i /root/.ssh/id_dsa.pub -o "StrictHostKeyChecking no" root@172.16.1.52
sshpass -p 123456 ssh-copy-id -i /root/.ssh/id_dsa.pub -o "StrictHostKeyChecking no" root@172.16.1.53
sshpass -p 123456 ssh-copy-id -i /root/.ssh/id_dsa.pub -o "StrictHostKeyChecking no" root@172.16.1.54


Verify that passwordless login works:

ssh 172.16.1.51 "echo ok"
ssh 172.16.1.52 "echo ok"
ssh 172.16.1.53 "echo ok"
ssh 172.16.1.54 "echo ok"

8 Final checks before starting

SSH connectivity check:

[root@db04 ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Thu Dec  6 12:11:11 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Dec  6 12:11:11 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Thu Dec  6 12:11:11 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
Thu Dec  6 12:11:11 2018 - [info] Starting SSH connection tests..
.....................
Thu Dec  6 12:11:13 2018 - [debug]  Connecting via SSH from root@172.16.1.54(172.16.1.54:22) to root@172.16.1.52(172.16.1.52:22)..
Thu Dec  6 12:11:14 2018 - [debug]   ok.
Thu Dec  6 12:11:14 2018 - [debug]  Connecting via SSH from root@172.16.1.54(172.16.1.54:22) to root@172.16.1.53(172.16.1.53:22)..
Thu Dec  6 12:11:15 2018 - [debug]   ok.
Thu Dec  6 12:11:15 2018 - [info] All SSH connection tests passed successfully.

Replication health check:

[root@db04 ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
Thu Dec  6 12:16:22 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Dec  6 12:16:22 2018 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Thu Dec  6 12:16:22 2018 - [info] Reading server configuration from /etc/mha/app1.cnf..
Thu Dec  6 12:16:22 2018 - [info] MHA::MasterMonitor version 0.56.
Thu Dec  6 12:16:23 2018 - [info] GTID failover mode = 1
Thu Dec  6 12:16:23 2018 - [info] Dead Servers:
Thu Dec  6 12:16:23 2018 - [info] Alive Servers:
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.51(172.16.1.51:3306)
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.52(172.16.1.52:3306)
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.53(172.16.1.53:3306)
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.54(172.16.1.54:3306)
Thu Dec  6 12:16:23 2018 - [info] Alive Slaves:
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.52(172.16.1.52:3306)  Version=5.6.40-log (oldest major version between slaves) log-bin:enabled
Thu Dec  6 12:16:23 2018 - [info]     GTID ON
Thu Dec  6 12:16:23 2018 - [info]     Replicating from 172.16.1.51(172.16.1.51:3306)
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.53(172.16.1.53:3306)  Version=5.6.40-log (oldest major version between slaves) log-bin:enabled
Thu Dec  6 12:16:23 2018 - [info]     GTID ON
Thu Dec  6 12:16:23 2018 - [info]     Replicating from 172.16.1.51(172.16.1.51:3306)
Thu Dec  6 12:16:23 2018 - [info]   172.16.1.54(172.16.1.54:3306)  Version=5.6.40-log (oldest major version between slaves) log-bin:enabled
Thu Dec  6 12:16:23 2018 - [info]     GTID ON
Thu Dec  6 12:16:23 2018 - [info]     Replicating from 172.16.1.51(172.16.1.51:3306)
Thu Dec  6 12:16:23 2018 - [info] Current Alive Master: 172.16.1.51(172.16.1.51:3306)
Thu Dec  6 12:16:23 2018 - [info] Checking slave configurations..
Thu Dec  6 12:16:23 2018 - [info]  read_only=1 is not set on slave 172.16.1.52(172.16.1.52:3306).
Thu Dec  6 12:16:23 2018 - [info]  read_only=1 is not set on slave 172.16.1.53(172.16.1.53:3306).
Thu Dec  6 12:16:23 2018 - [info]  read_only=1 is not set on slave 172.16.1.54(172.16.1.54:3306).
Thu Dec  6 12:16:23 2018 - [info] Checking replication filtering settings..
Thu Dec  6 12:16:23 2018 - [info]  binlog_do_db= , binlog_ignore_db= 
Thu Dec  6 12:16:23 2018 - [info]  Replication filtering check ok.
Thu Dec  6 12:16:23 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Dec  6 12:16:23 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Dec  6 12:16:23 2018 - [info] HealthCheck: SSH to 172.16.1.51 is reachable.
Thu Dec  6 12:16:23 2018 - [info] 
172.16.1.51(172.16.1.51:3306) (current master)
 +--172.16.1.52(172.16.1.52:3306)
 +--172.16.1.53(172.16.1.53:3306)
 +--172.16.1.54(172.16.1.54:3306)

Thu Dec  6 12:16:23 2018 - [info] Checking replication health on 172.16.1.52..
Thu Dec  6 12:16:23 2018 - [info]  ok.
Thu Dec  6 12:16:23 2018 - [info] Checking replication health on 172.16.1.53..
Thu Dec  6 12:16:23 2018 - [info]  ok.
Thu Dec  6 12:16:23 2018 - [info] Checking replication health on 172.16.1.54..
Thu Dec  6 12:16:23 2018 - [info]  ok.
Thu Dec  6 12:16:23 2018 - [warning] master_ip_failover_script is not defined.
Thu Dec  6 12:16:23 2018 - [warning] shutdown_script is not defined.
Thu Dec  6 12:16:23 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

9 Start MHA

[root@db04 ~]# nohup masterha_manager \
--conf=/etc/mha/app1.cnf \
--remove_dead_master_conf \
--ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[1] 3160
Check that it started successfully
[root@db04 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:3160) is running(0:PING_OK), master:172.16.1.51

OR

**Start**

nohup masterha_manager  --conf=/etc/mha/app1.cnf  --remove_dead_master_conf  --ignore_last_failover &>> /var/log/mha/app1/manager.log &

**Stop**

masterha_stop --conf=/etc/mha/app1.cnf

A detailed write-up on nohup: https://www.cnblogs.com/jinxiao-pu/p/9131057.html

Using MHA

How MHA reads its config file at startup

Top to bottom, line by line. After a failover it removes the dead host's section and rewrites the whole config file (comments are simply dropped).

MHA recovery steps

1 Repair the crashed old master
2 Find the change master to statement in the MHA log
    grep -i 'change master to' /var/log/mha/app1/manager.log
3 Start the old master and run the change master to statement to attach it to the new master
4 Start the IO and SQL threads (start slave)
5 Add the old master's server section back into the MHA config file
6 Restart MHA

When the master dies and the failover finishes, the MHA manager process exits on its own.

MHA's failover throttle
1 After a failover, a state file is created in MHA's working directory
2 Before the next failover, MHA first checks whether that file exists
3 If it exists, no failover is performed
4 If it does not exist, the failover proceeds
5 The file is honored for 8 hours, so no second failover happens within 8 hours (see the sketch below)
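
The state file is named <app name>.failover.complete under manager_workdir. To allow another failover inside the 8-hour window, remove it by hand, or start the manager with --ignore_last_failover as done above (the listing is illustrative):

[root@db04 ~]# ls /var/log/mha/app1/
app1.failover.complete  manager.log
[root@db04 ~]# rm -f /var/log/mha/app1/app1.failover.complete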

MHA recovery walkthrough

1 On the manager machine, run the replication check to view the current cluster topology

[root@db04 ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
....
172.16.1.51(172.16.1.51:3306) (current master)
 +--172.16.1.52(172.16.1.52:3306)
 +--172.16.1.53(172.16.1.53:3306)
 +--172.16.1.54(172.16.1.54:3306)
...

As shown, 51 is currently the master and 52, 53, 54 are the slaves.

2 Simulate a master failure.

[root@db01 ~]# /etc/init.d/mysqld stop

3 Check the MHA log

[root@db04 ~]# less /var/log/mha/app1/manager.log 
....
From:
172.16.1.51(172.16.1.51:3306) (current master)
 +--172.16.1.52(172.16.1.52:3306)
 +--172.16.1.53(172.16.1.53:3306)
 +--172.16.1.54(172.16.1.54:3306)

To:
172.16.1.52(172.16.1.52:3306) (new master)
 +--172.16.1.53(172.16.1.53:3306)
 +--172.16.1.54(172.16.1.54:3306)
.....

MHA has automatically failed over: 52 is the new master, and 53 and 54 now replicate from 52.

A little later, the MHA manager process has exited on its own, and the old master's section (51) has been removed from the config file automatically.

[root@db04 ~]# cat /etc/mha/app1.cnf 
[server default]
manager_log=/var/log/mha/app1/manager.log
manager_workdir=/var/log/mha/app1
master_binlog_dir=/usr/local/mysql/data
password=mha
ping_interval=2
repl_password=1
repl_user=slave
ssh_user=root
user=mha

[server2]
hostname=172.16.1.52
port=3306

[server3]
hostname=172.16.1.53
port=3306

[server4]
hostname=172.16.1.54
port=3306
[root@db04 ~]# ps -ef | grep mha
root       9343   2596  0 16:21 pts/1    00:00:00 grep --color=auto mha

4 Now repair the crashed old master, 51, by hand

After repairing it, manually point 51 at 52, restore its section in the config file, restart the MHA manager, and check the new cluster state.

The statement that makes 51 a slave of 52 can be found in the MHA log.

[root@db04 ~]# grep -i 'change master to' /var/log/mha/app1/manager.log 
Thu Dec  6 16:15:22 2018 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.16.1.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='slave', MASTER_PASSWORD='xxx';

[root@db01 ~]# /etc/init.d/mysqld start
[root@db01 ~]# mysql
mysql> CHANGE MASTER TO MASTER_HOST='172.16.1.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='slave', MASTER_PASSWORD='1';
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.1.52
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 191
               Relay_Log_File: db01-relay-bin.000002
                Relay_Log_Pos: 361
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

5 Restore the MHA configuration file as follows

[root@db04 ~]# cat /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager.log
manager_workdir=/var/log/mha/app1
master_binlog_dir=/usr/local/mysql/data
password=mha
ping_interval=2
repl_password=1
repl_user=slave
ssh_user=root
user=mha

[server1]
hostname=172.16.1.51
port=3306

[server2]
hostname=172.16.1.52
port=3306

[server3]
hostname=172.16.1.53
port=3306

[server4]
hostname=172.16.1.54
port=3306

As shown, the section for the crashed 51 has been filled back in.

6 Restart the MHA manager

[root@db04 ~]# nohup masterha_manager \
--conf=/etc/mha/app1.cnf \
--remove_dead_master_conf \
--ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[1] 3160

Check that it started successfully

[root@db04 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:9375) is running(0:PING_OK), master:172.16.1.52

Check the log to view the new cluster topology

[root@db04 ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
172.16.1.52(172.16.1.52:3306) (current master)
 +--172.16.1.51(172.16.1.51:3306)
 +--172.16.1.53(172.16.1.53:3306)
 +--172.16.1.54(172.16.1.54:3306)

The switchover to the new topology is now complete.

Disable automatic relay log deletion on all four machines

# disable automatic relay log purging
mysql> set global relay_log_purge = 0;
# set read-only
mysql> set global read_only=1;
# edit the config file
[root@mysql-db02 ~]# vim /etc/my.cnf
# add under the [mysqld] section
[mysqld]
# disable automatic relay log purging permanently
relay_log_purge = 0
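
With automatic purging disabled, relay logs have to be cleaned up on a schedule. The MHA node package ships a purge_relay_logs tool that can run from cron; the schedule and log path below are just an example:

[root@db01 ~]# crontab -l
0 4 * * * /usr/bin/purge_relay_logs --user=mha --password=mha --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1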

How does MHA choose the new master when no slave has more data than the others?

The lower a [serverN] section number in the config file, the higher its priority; when the slaves are equally up to date, the lowest-numbered one is promoted first.
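
To pin a preferred candidate regardless of section order, enable the candidate_master/check_repl_delay pair that appears commented out in the config above:

[server2]
candidate_master=1
check_repl_delay=0
hostname=172.16.1.52
port=3306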

MHA on traditional position-based replication

1 Confirm the MySQL config file on each machine

[root@db01 ~]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 1
log_bin = mysql-bin
read_only=1
skip_name_resolve

[root@db02 ~]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 2
log_bin = mysql-bin
read_only=1
skip_name_resolve

[root@db03 ~]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 3
log_bin = mysql-bin
read_only=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

[root@db04 app1]# egrep -v '^$|^#' /etc/my.cnf
[mysqld]
server_id = 4
log_bin = mysql-bin
read_only = 1
skip_name_resolve
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

2 Configure 52, 53, and 54 as slaves of 51, the master.

change master to 
    master_host='172.16.1.51',
    master_user='slave',
    master_password='1',
    master_log_file='mysql-bin.000002',
    master_log_pos=120;
start slave;
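
The master_log_file and master_log_pos values come from the master: run this on 51 first and copy the File/Position fields (output trimmed to the relevant fields):

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 120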

3 Start the MHA manager

[root@db04 ~]# nohup masterha_manager \
--conf=/etc/mha/app1.cnf \
--remove_dead_master_conf \
--ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

4 Check the status

[root@db04 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:9375) is running(0:PING_OK), master:172.16.1.52
[root@db04 ~]# masterha_check_repl --conf=/etc/mha/app1.cnf