
Adding a node to an MGR cluster:
Adding a node to a production MGR cluster online involves four major steps. The version tested here is MySQL 5.7.18.
1. Take a full and incremental backup of the source production database
2. Restore the data on the target server
3. Adjust the target instance's parameters
4. Add the new node to the MGR group and let it catch up
Full and incremental backup of the source database:
Take a full backup on the source production instance, then take an incremental backup based on it.
innobackupex --defaults-file=/etc/my.cnf --no-timestamp --user=root --password=rootroot /msdata/xtrabackup/full
innobackupex --defaults-file=/etc/my.cnf --incremental /msdata/xtrabackup/inc1 --no-timestamp --incremental-basedir=/msdata/xtrabackup/full --user=root --password=rootroot
Stop the target database and copy the backup data over:
# on the target:
service mysqld stop
# on the source, from the backup directory (/msdata/xtrabackup):
scp -r * 192.168.234.140:/msdata/xtrabackup
# back on the target, move the old datadir aside:
cd /msdata
mv datas datas.bk
Restore the data on the target database:
Prepare (apply the logs to) the backup on the target. Note that --redo-only belongs on the base prepare and on every incremental except the last; the final incremental is applied without it so the rollback phase can run.
innobackupex --apply-log --redo-only /msdata/xtrabackup/full --user=root --password=rootroot
innobackupex --apply-log /msdata/xtrabackup/full --incremental-dir=/msdata/xtrabackup/inc1 --user=root --password=rootroot
innobackupex --defaults-file=/etc/my.cnf --copy-back /msdata/xtrabackup/full
Modify the UUID (the restored datadir still contains the source's auto.cnf, so the new node needs its own server-uuid):
vi /msdata/datas/auto.cnf
server-uuid=aaaaaaaa-bbbb-cccc-dddd-000000000140
Edit the parameter file:
The three existing nodes are 192.168.234.136:1361, 192.168.234.137:1371 and 192.168.234.138:1381 (these are the group_replication_local_address ports, not the client ports).
[client]
port=1400
[mysqld]
server-id=140
port=1400
# mgr
loose-group_replication_local_address="192.168.234.140:1401"
loose-group_replication_group_seeds="192.168.234.136:1361,192.168.234.137:1371,192.168.234.138:1381,192.168.234.140:1401"
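For completeness, a hedged sketch of the other MGR-related settings the new node usually needs besides the two lines above; the group_name value is inferred from the group's GTID uuid in the status output later, and every value should be checked against the existing members' my.cnf:

```ini
# Assumed remaining MGR settings for the 192.168.234.140 node
# (verify against the existing members' my.cnf before use).
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "aaaabbbb-cccc-dddd-eeee-111122223333"
loose-group_replication_start_on_boot = OFF
```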
Start the target database:
service mysqld start
Check the GTID information recorded by the backup:
more /msdata/xtrabackup/full/xtrabackup_binlog_info
binlog.000006 14284 aaaaaaaa-bbbb-cccc-dddd-000000000136:1-32,
aaaabbbb-cccc-dddd-eeee-111122223333:1-121:1000072-1000078
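The line-wrapped GTID set in xtrabackup_binlog_info can be assembled into the GTID_PURGED statement mechanically; a small sketch, where the demo file path is an assumption and the file layout is binlog file, position, then the GTID set, tab-separated:

```shell
# Sketch: rebuild the (wrapped) GTID set from xtrabackup_binlog_info and
# emit the GTID_PURGED statement. The demo file stands in for
# /msdata/xtrabackup/full/xtrabackup_binlog_info.
INFO=/tmp/xtrabackup_binlog_info.demo
printf 'binlog.000006\t14284\taaaaaaaa-bbbb-cccc-dddd-000000000136:1-32,\naaaabbbb-cccc-dddd-eeee-111122223333:1-121:1000072-1000078\n' > "$INFO"
# Join the wrapped lines, then drop the binlog file and position fields.
GTID_SET=$(tr -d '\n' < "$INFO" | cut -f3-)
echo "SET @@GLOBAL.GTID_PURGED='${GTID_SET}';"
```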
Add the node to MGR:
The starting point for GLOBAL.GTID_PURGED must come from the Xtrabackup recovery info (xtrabackup_binlog_info above). Reading the GTID set from the restored instance itself (e.g. via show master status) gives inaccurate information, and seeding replication from it causes data problems. This looks like a bug; the version tested is 5.7.18.
mysql>
reset master;
SET @@GLOBAL.GTID_PURGED='aaaaaaaa-bbbb-cccc-dddd-000000000136:1-32,aaaabbbb-cccc-dddd-eeee-111122223333:1-121:1000072-1000078';
change master to master_user='rpl_user', master_password='rpl_passIEkdfiwekadfkIEFKA' for channel 'group_replication_recovery';
start group_replication;
Once the node has been added, if there is a lot of data to catch up on, the member will stay in the RECOVERING state; how long depends on how much data has to be applied.
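A member stuck in RECOVERING can be watched with a small polling loop; a sketch, where mysql_query is a hypothetical stub standing in for a real client call such as `mysql -uroot -p... -N -e "$1"`:

```shell
# Hypothetical polling loop: wait for this member to reach ONLINE.
# mysql_query is a stub here; replace its body with a real mysql client call.
mysql_query() { echo "ONLINE"; }   # stub: pretend the member is already ONLINE

SQL="SELECT MEMBER_STATE FROM performance_schema.replication_group_members
     WHERE MEMBER_ID = @@server_uuid"
while :; do
  STATE=$(mysql_query "$SQL")
  echo "member state: $STATE"
  [ "$STATE" = "ONLINE" ] && break
  sleep 5
done
```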
Check the current status:
mysql> show master status\G
*************************** 1. row ***************************
File: binlog.000001
Position: 11420
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: aaaaaaaa-bbbb-cccc-dddd-000000000136:1-36,
aaaabbbb-cccc-dddd-eeee-111122223333:1-162:1000072-1000078
1 row in set (0.00 sec)
Member states (mgr memstat and mgr mem appear to be local shell aliases over performance_schema.replication_group_member_stats and performance_schema.replication_group_members):
[root@mic5 datas]# mgr memstat
CHANNEL_NAME: group_replication_applier
VIEW_ID: 15077880673075621:36
MEMBER_ID: aaaaaaaa-bbbb-cccc-dddd-000000000140
COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_CHECKED: 0
COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-bbbb-cccc-dddd-000000000136:1-32,
aaaabbbb-cccc-dddd-eeee-111122223333:1-162:1000072-1000078
LAST_CONFLICT_FREE_TRANSACTION:
# mgr mem
CHANNEL_NAME: group_replication_applier
MEMBER_ID: aaaaaaaa-bbbb-cccc-dddd-000000000136
MEMBER_HOST: mic1
MEMBER_PORT: 1360
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: aaaaaaaa-bbbb-cccc-dddd-000000000137
MEMBER_HOST: mic2
MEMBER_PORT: 1370
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: aaaaaaaa-bbbb-cccc-dddd-000000000138
MEMBER_HOST: mic3
MEMBER_PORT: 1380
MEMBER_STATE: ONLINE
*************************** 4. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: aaaaaaaa-bbbb-cccc-dddd-000000000140
MEMBER_HOST: mic5
MEMBER_PORT: 1400
MEMBER_STATE: ONLINE