Building a Highly Available MySQL Cluster with MHA + Keepalived
Environment: CentOS 7.4, MariaDB 5.5, Keepalived 1.3, mha4mysql 0.56
I. Environment Preparation
1. Host information

Hostname | IP address | Packages required | Role |
---|---|---|---|
master | 172.20.10.7 | mariadb mha-node keepalived | MySQL master |
manager | 172.20.10.8 | mariadb mha-node keepalived mha-manager | MySQL slave, MHA manager |
slave | 172.20.10.9 | mariadb mha-node | MySQL slave |
2. Set the hostname on each machine to match the table above:

```
echo "master" > /etc/hostname     # on master
echo "manager" > /etc/hostname    # on manager
echo "slave" > /etc/hostname      # on slave
```
3. Configure hosts (add the entries on every host):

```
# vim /etc/hosts
172.20.10.7 master
172.20.10.8 manager
172.20.10.9 slave
```
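The hosts entries can also be pushed out with a small idempotent script, so re-running it never duplicates lines. A minimal sketch; `add_hosts_entries` is a hypothetical helper name, and you would run it as root on each machine:

```shell
# add_hosts_entries FILE: append each cluster mapping to FILE only if the
# hostname is not already present, so the function is safe to re-run.
add_hosts_entries() {
  file="$1"
  while read -r ip name; do
    grep -q "[[:space:]]$name\$" "$file" || printf '%s %s\n' "$ip" "$name" >> "$file"
  done <<'EOF'
172.20.10.7 master
172.20.10.8 manager
172.20.10.9 slave
EOF
}
# usage: add_hosts_entries /etc/hosts
```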
4. Set up passwordless SSH login
```
# ssh-keygen
# ssh-copy-id master
# ssh-copy-id manager
# ssh-copy-id slave
```
Note: run these on every host, including against the host itself (important); otherwise the environment checks later will fail.
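The per-host `ssh-copy-id` commands above can be wrapped in a loop and run on each of the three machines. A sketch; the `echo` makes it a dry run that only prints the commands, so drop it to actually distribute the keys:

```shell
# copy_keys: print (or, without the echo, run) ssh-copy-id for every cluster host.
copy_keys() {
  for host in master manager slave; do
    echo ssh-copy-id "$host"   # dry run; remove `echo` to execute
  done
}
copy_keys
```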
II. Configuring MySQL Master-Slave Replication
1. Install MariaDB on every node

```
# yum -y install mariadb mariadb-server
```
2. Node configuration
- master node

```
[root@master ~]# vim /etc/my.cnf
[mysqld]
server-id=1
log-bin=master-bin
relay-log=relay-log
```
- slave nodes

```
[root@manager ~]# vim /etc/my.cnf
[mysqld]
server-id=2
log-bin=master-log
relay-log=relay-log
relay_log_purge=0
read_only=1
skip_name_resolve=1
innodb_file_per_table=1
```

```
[root@slave ~]# vim /etc/my.cnf
[mysqld]
server-id=3
log-bin=master-log
relay-log=relay-log
relay_log_purge=0
read_only=1
skip_name_resolve=1
innodb_file_per_table=1
```
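Each node must have a distinct `server-id` (1, 2, 3 above), or replication misbehaves. A quick sanity check over the concatenated my.cnf fragments can catch a copy-paste slip; `unique_server_ids` is a hypothetical helper:

```shell
# unique_server_ids: read my.cnf content on stdin and fail if any
# server-id value appears more than once.
unique_server_ids() {
  awk -F= '/^server-id=/ { if (seen[$2]++) bad = 1 } END { exit bad }'
}
# usage: cat node1.cnf node2.cnf node3.cnf | unique_server_ids || echo "duplicate server-id!"
```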
3. Start the service on every node

```
# systemctl start mariadb && systemctl enable mariadb
```
4. Grants
- master node

```
MariaDB [(none)]> grant replication slave,replication client on *.* to 'repluser'@'172.20.10.%' identified by '123456';   # replication user: repluser
MariaDB [(none)]> grant all on *.* to 'mhaadmin'@'172.20.10.%' identified by 'mhapass';   # MHA admin user: mhaadmin
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000001 |      245 |              |                  |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
```
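The `File` and `Position` values above are exactly what the slaves' `change master to` statement needs, so the statement can be generated instead of typed by hand. A sketch; `change_master_sql` is a hypothetical helper with this tutorial's host and credentials hard-coded:

```shell
# change_master_sql: read one data row of `mysql -N -e 'show master status'`
# (file<TAB>position ...) on stdin and print the matching CHANGE MASTER statement.
change_master_sql() {
  awk -v q="'" '{printf "change master to master_host=%s172.20.10.7%s,master_user=%srepluser%s,master_password=%s123456%s,master_log_file=%s%s%s,master_log_pos=%s;\n", q, q, q, q, q, q, q, $1, q, $2}'
}
# usage: mysql -N -e 'show master status' | change_master_sql
```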
- slave nodes

```
MariaDB [(none)]> change master to master_host='172.20.10.7',master_user='repluser',master_password='123456',master_log_file='master-bin.000001',master_log_pos=245;
MariaDB [(none)]> start slave;
MariaDB [(none)]> show slave status\G
        Relay_Master_Log_File: master-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
```
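Both threads must report `Yes` before moving on. That check is easy to script against the `show slave status\G` output; `replication_ok` is a hypothetical helper:

```shell
# replication_ok: read `show slave status\G` output on stdin and succeed
# only if both the IO and SQL threads report Yes.
replication_ok() {
  awk '/Slave_IO_Running:/  {io=$2}
       /Slave_SQL_Running:/ {sql=$2}
       END {exit !(io=="Yes" && sql=="Yes")}'
}
# usage: mysql -e 'show slave status\G' | replication_ok && echo "replication healthy"
```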
III. Configuring Keepalived for High Availability
1. Install keepalived (on the master and manager nodes)

```
# yum -y install keepalived
```
2. Edit the configuration
- master node

```
[root@master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id mysql
    vrrp_mcast_group4 224.0.88.88          # multicast address
}

vrrp_script chk_mysqld {
    script "/etc/keepalived/check_mysql.sh"   # health-check script
    interval 1
    weight -5
    fall 2
}

vrrp_instance mysql {
    state BACKUP
    interface eno16777736                  # heartbeat / monitored interface
    virtual_router_id 51
    priority 100                           # priority; manager must be slightly below 100
    advert_int 1
    nopreempt                              # non-preemptive mode; omit on the backup node
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_mysqld
    }
    virtual_ipaddress {
        172.20.10.10/24 dev eno16777736    # VIP address
    }
}
```
- backup node

```
[root@manager ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id mysql
    vrrp_mcast_group4 224.0.88.88          # multicast address
}

vrrp_script chk_mysqld {
    script "/etc/keepalived/check_mysql.sh"   # health-check script
    interval 1
    weight -5
    fall 2
}

vrrp_instance mysql {
    state BACKUP
    interface eno16777736                  # heartbeat / monitored interface
    virtual_router_id 51
    priority 90                            # priority; slightly below the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_mysqld
    }
    virtual_ipaddress {
        172.20.10.10/24 dev eno16777736    # VIP address
    }
}
```
3. Health-check script (master and manager nodes)

```
# vim /etc/keepalived/check_mysql.sh
#!/bin/sh
stat=$(ps -C mysqld --no-header | wc -l)
if [ "$stat" -eq 0 ]; then
    killall keepalived >/dev/null 2>&1
fi
```

Make the script executable so keepalived can run it: `chmod +x /etc/keepalived/check_mysql.sh`
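The script above kills keepalived on the first empty `ps` result, so a single transient blip can trigger a failover. A hedged variation that retries before giving up; the probe command is passed in as arguments (a choice made here mainly so the logic can be exercised without a running mysqld):

```shell
# mysql_alive TRIES CMD...: run CMD up to TRIES times; succeed as soon as
# CMD prints a process count greater than zero, otherwise fail.
mysql_alive() {
  tries="$1"; shift
  while [ "$tries" -gt 0 ]; do
    [ "$("$@")" -gt 0 ] && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}
# in check_mysql.sh:
# mysql_alive 3 sh -c 'ps -C mysqld --no-header | wc -l' || killall keepalived
```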
4. Start the keepalived cluster

```
# systemctl start keepalived && systemctl status keepalived
```

Note: start the master node first, then the backup node.
5. Verify the VIP is up

```
[root@master ~]# ip add
    inet 172.20.10.10/24 scope global secondary eno16777736
```
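Spotting the VIP by eye works, but the same check is handy in scripts (for example in a monitoring cron job). A tiny sketch; `has_vip` is a hypothetical helper reading `ip addr` output on stdin:

```shell
# has_vip: succeed if the cluster VIP appears in `ip addr` output on stdin.
has_vip() {
  grep -q 'inet 172\.20\.10\.10/'
}
# usage: ip addr | has_vip && echo "VIP is bound here"
```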
IV. Configuring MHA
MHA project: https://github.com/yoshinorim/mha4mysql-manager
Installer packages used here: https://pan.baidu.com/s/1W74oIbVp2y91MClI–Ge8w password: bw9t
1. Install dependencies

```
yum -y install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes perl-CPAN perl-devel -y
```
2. Install mha-node (all nodes)

```
tar xf mha4mysql-node-0.56.tar.gz && cd mha4mysql-node-0.56/
perl Makefile.PL
make && make install
```
3. Install mha-manager on the manager node

```
tar xf mha4mysql-manager-0.56.tar.gz && cd mha4mysql-manager-0.56/
perl Makefile.PL
make && make install
```
4. Create the configuration file (manager node)

```
[root@manager ~]# mkdir /etc/masterha
[root@manager ~]# mkdir -p /data/masterha/app1
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
user=mhaadmin                                 # MHA admin user
password=mhapass                              # MHA admin password
manager_workdir=/data/masterha/app1           # working directory
manager_log=/data/masterha/app1/manager.log   # log file
remote_workdir=/data/masterha/app1
ssh_user=root                                 # SSH login user
repl_user=repluser                            # replication user
repl_password=123456
ping_interval=1

[server1]
hostname=172.20.10.7
candidate_master=1

[server2]
hostname=172.20.10.8
candidate_master=1

[server3]
hostname=172.20.10.9
```
5. Test the MHA cluster
• Check SSH connectivity between the hosts

```
[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Thu Jun 21 00:15:49 2018 - [info] All SSH connection tests passed successfully.
```

Note: the check must end with "All SSH connection tests passed successfully." before you continue.
• Check replication health

```
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
MySQL Replication Health is OK.
```

Note: the status must read "MySQL Replication Health is OK." before you continue.
6. Start MHA

```
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &
```
7. Check the running state

```
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:12264) is running(0:PING_OK), master:172.20.10.7
```
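For monitoring, the current master's IP can be pulled out of that status line; `mha_master_ip` is a hypothetical helper:

```shell
# mha_master_ip: read masterha_check_status output on stdin and print the
# IP address reported after "master:".
mha_master_ip() {
  sed -n 's/.*master:\([0-9.]*\).*/\1/p'
}
# usage: masterha_check_status --conf=/etc/masterha/app1.cnf | mha_master_ip
```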
8. Failover test
• Stop the mysql process on master

```
[root@master ~]# systemctl stop mariadb
```

• Verify the VIP has moved to manager

```
[root@manager ~]# ip add
    inet 172.20.10.10/24 scope global secondary eno16777736
```

• Inspect the masterha_manager log

```
[root@manager ~]# tail /data/masterha/app1/manager.log
Started automated(non-interactive) failover.
The latest slave 172.20.10.8(172.20.10.8:3306) has all relay logs for recovery.
Selected 172.20.10.8(172.20.10.8:3306) as a new master.
172.20.10.8(172.20.10.8:3306): OK: Applying all logs succeeded.
172.20.10.9(172.20.10.9:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
172.20.10.9(172.20.10.9:3306): OK: Applying all logs succeeded. Slave started, replicating from 172.20.10.8(172.20.10.8:3306)
172.20.10.8(172.20.10.8:3306): Resetting slave info succeeded.
Master failover to 172.20.10.8(172.20.10.8:3306) completed successfully
```
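When automating recovery, the promoted master can be extracted from that log instead of read by hand; `new_master_from_log` is a hypothetical helper:

```shell
# new_master_from_log: read manager.log on stdin and print the IP that MHA
# promoted in its "Master failover to ... completed successfully" line.
new_master_from_log() {
  sed -n 's/.*Master failover to \([0-9.]*\)(.*completed successfully.*/\1/p'
}
# usage: new_master_from_log < /data/masterha/app1/manager.log
```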