Although it is currently only available as a Labs release, a Group Replication plugin for MySQL 5.7 is being prepared that supports multi-master (active/active), synchronous-style replication. Since it is still a Labs build, it needs further feature work and bug fixes, but it should gradually stabilize as it progresses from Labs to DMR, RC, and finally GA, so once the next Labs release comes out, please do try it in a test environment.
It looks useful for providing HA for the master and, in environments with many slaves, for distributing the master's replication load.
The Group Replication blog post linked below covers the basic installation steps, so please refer to it if you want to try this yourself.
http://mysqlhighavailability.com/getting-started-with-mysql-group-replication/
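For reference, the prerequisites described there boil down to roughly the following (a minimal sketch based on the linked post; the plugin file name is the one shipped with the Labs build, and rpl_user / rpl_pass are simply the account used later in this article):

-- Load the Group Replication plugin (Labs build) and create the
-- replication account referenced below as the recovery user.
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';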
Starting Group Replication on NODE1
* If these settings are written in the option file, the SET statements below are not necessary.
* The port used for XCOM communication must be different from the normal MySQL port 3306.
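For example, instead of the SET GLOBAL statements below, NODE1 could carry roughly the following in my.cnf (a sketch; the option names are the ones listed in the variables output at the end of this article and may change in later releases):

[mysqld]
# Group Replication (Labs build) settings for NODE1
group_replication_group_name        = "00000000-1111-2222-3333-123456789ABC"
group_replication_local_address     = "192.168.56.101:13001"   # XCOM port, kept separate from 3306
group_replication_peer_addresses    = "192.168.56.101:13001,192.168.56.102:13001"
group_replication_recovery_user     = rpl_user
group_replication_recovery_password = rpl_pass
group_replication_start_on_boot     = OFF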
root@localhost [mysql]> SET GLOBAL group_replication_group_name= "00000000-1111-2222-3333-123456789ABC";
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_bootstrap_group= 1;
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_local_address="192.168.56.101:13001";
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_peer_addresses= "192.168.56.101:13001,192.168.56.102:13001";
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_recovery_user='rpl_user';
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_recovery_password='rpl_pass';
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.59 sec)
root@localhost [mysql]> SET GLOBAL group_replication_bootstrap_group= 0;
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:1-4
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [mysql]>
Now let's have NODE2 join the group.
root@localhost [mysql]> SET GLOBAL group_replication_group_name= "00000000-1111-2222-3333-123456789ABC";
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_local_address="192.168.56.102:13001";
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_peer_addresses= "192.168.56.101:13001,192.168.56.102:13001";
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_recovery_user='rpl_user';
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> SET GLOBAL group_replication_recovery_password='rpl_pass';
Query OK, 0 rows affected (0.00 sec)
root@localhost [mysql]> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.04 sec)
root@localhost [mysql]> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 29ea17bc-3848-11e6-9900-0800279ca844
MEMBER_HOST: misc01
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: 5b07d5d8-4057-11e6-a315-0800279cea3c
MEMBER_HOST: misc02
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
2 rows in set (0.01 sec)
root@localhost [mysql]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [mysql]>

MEMBER_STATE is ONLINE on both nodes, so let's run some DDL and DML to confirm that they are synchronized.
First, create a database and a table on NODE1 and insert one row.
Objects and data created on NODE1 can also be seen on NODE2.
Likewise, data inserted on NODE2 can be seen on NODE1.
root@localhost [mysql]> CREATE DATABASE GR_TEST;
Query OK, 1 row affected (0.03 sec)
root@localhost [mysql]> use GR_TEST;
Database changed
root@localhost [GR_TEST]> CREATE TABLE GR_TEST.T01 (
-> ID INT NOT NULL PRIMARY KEY,
-> MEMO varchar(30) COLLATE utf8_bin NOT NULL DEFAULT ''
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Query OK, 0 rows affected (0.07 sec)
root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (1,@@hostname);
Query OK, 1 row affected (0.07 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
+----+--------+
1 row in set (0.01 sec)
root@localhost [GR_TEST]>
Check the data on NODE2.
root@localhost [mysql]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| GR_TEST |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.00 sec)
root@localhost [mysql]> use GR_TEST
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
+----+--------+
1 row in set (0.00 sec)
root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (2,@@hostname);
Query OK, 1 row affected (0.04 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
| 2 | misc02 |
+----+--------+
2 rows in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01 | 3306 | ONLINE |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [GR_TEST]>
The data inserted on NODE2 is also visible on NODE1.
This confirms that replication is working in both directions.
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
| 2 | misc02 |
+----+--------+
2 rows in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01 | 3306 | ONLINE |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [GR_TEST]>
Checking the GTID state
Transactions executed locally are not reflected in RECEIVED_TRANSACTION_SET, so check @@GLOBAL.GTID_EXECUTED to see how far transactions have actually been applied (a GTID_SUBTRACT() sketch for comparing the two sets follows the output below).
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7:9
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [GR_TEST]> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED |
+-------------------------------------------+
| 00000000-1111-2222-3333-123456789abc:1-10 |
+-------------------------------------------+
1 row in set (0.00 sec)
root@localhost [GR_TEST]>
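To list exactly which transactions were executed locally rather than received from the group, the two sets can be compared with GTID_SUBTRACT() (a simple sketch; GTID_SUBTRACT() is a standard GTID function, not something GR-specific):

-- GTIDs present in GTID_EXECUTED but missing from RECEIVED_TRANSACTION_SET
-- correspond to transactions that originated on this node.
SELECT GTID_SUBTRACT(
         @@GLOBAL.GTID_EXECUTED,
         (SELECT RECEIVED_TRANSACTION_SET
            FROM performance_schema.replication_connection_status
           WHERE CHANNEL_NAME = 'group_replication_applier')
       ) AS locally_executed_gtids;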
When a transaction conflict occurs between NODE1 and NODE2
(i.e. both try to update the same row at the same time)
First, start a transaction on NODE1 and run an update. Then, before committing it, update the same row on NODE2.
NODE1, which started first, completes without a problem, while the COMMIT on NODE2 fails with an error.
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
| 2 | misc02 |
| 3 | misc01 |
| 4 | misc02 |
+----+--------+
4 rows in set (0.00 sec)
root@localhost [GR_TEST]> start transaction;update T01 set MEMO = @@hostname where ID = 4;
Query OK, 0 rows affected (0.00 sec)
Query OK, 1 row affected (0.03 sec)
Rows matched: 1 Changed: 1 Warnings: 0
root@localhost [GR_TEST]> commit;
Query OK, 0 rows affected (0.01 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
| 2 | misc02 |
| 3 | misc01 |
| 4 | misc01 |
+----+--------+
4 rows in set (0.00 sec)
root@localhost [GR_TEST]>
NODE2 fails with ERROR 1180 (HY000): Got error 149 during COMMIT.
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
| 2 | misc02 |
| 3 | misc01 |
| 4 | misc02 |
+----+--------+
4 rows in set (0.00 sec)
root@localhost [GR_TEST]> start transaction;update T01 set MEMO = 'MISC02' where ID = 4;
Query OK, 0 rows affected (0.00 sec)
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
root@localhost [GR_TEST]> commit;
ERROR 1180 (HY000): Got error 149 during COMMIT
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO |
+----+--------+
| 1 | misc01 |
| 2 | misc02 |
| 3 | misc01 |
| 4 | misc01 |
+----+--------+
4 rows in set (0.00 sec)
root@localhost [GR_TEST]>
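After a conflict like this, the certification statistics can also be inspected from performance_schema (a hedged sketch; the replication_group_member_stats table and its conflict counter are present in this build, but column names may still change while GR is in Labs):

-- How many transactions were certified and how many conflicts were rolled back
SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_CHECKED,
       COUNT_CONFLICTS_DETECTED
  FROM performance_schema.replication_group_member_stats;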

Running ALTER as online DDL
In practice, DDL statements also execute without problems and are propagated. However, the current GR specification does not recommend online schema changes, so the safer approach seems to be: 1) run the DDL with binary logging disabled, 2) run the same DDL on the other nodes, also with binary logging disabled, and 3) finally update the application to use the new schema (a sketch of this procedure follows).
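A rough sketch of that procedure, using the same ALTER as the test below purely as an example (and assuming the application can run against both the old and the new schema in the meantime):

-- Repeat on NODE1, NODE2, and any other member, one node at a time
SET SESSION sql_log_bin = 0;   -- keep the DDL out of the binary log so it is not propagated
ALTER TABLE GR_TEST.T01 ADD COLUMN created_time datetime DEFAULT CURRENT_TIMESTAMP;
SET SESSION sql_log_bin = 1;
-- Once every node has the new definition, switch the application over.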
Run a DDL statement on NODE1 to add a column.
root@localhost [GR_TEST]> ALTER TABLE T01 add column created_time datetime DEFAULT CURRENT_TIMESTAMP;
Query OK, 0 rows affected (0.12 sec)
Records: 0 Duplicates: 0 Warnings: 0
root@localhost [GR_TEST]> desc T01;
+--------------+-------------+------+-----+-------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+-------------------+-------+
| ID | int(11) | NO | PRI | NULL | |
| MEMO | varchar(30) | NO | | | |
| created_time | datetime | YES | | CURRENT_TIMESTAMP | |
+--------------+-------------+------+-----+-------------------+-------+
3 rows in set (0.03 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO | created_time |
+----+--------+---------------------+
| 1 | misc01 | 2016-07-17 15:40:29 |
| 2 | misc02 | 2016-07-17 15:40:29 |
| 3 | misc01 | 2016-07-17 15:40:29 |
| 4 | misc02 | 2016-07-17 15:40:29 |
+----+--------+---------------------+
4 rows in set (0.00 sec)
root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (5,@@hostname);
Query OK, 1 row affected (0.01 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO | created_time |
+----+--------+---------------------+
| 1 | misc01 | 2016-07-17 15:40:29 |
| 2 | misc02 | 2016-07-17 15:40:29 |
| 3 | misc01 | 2016-07-17 15:40:29 |
| 4 | misc02 | 2016-07-17 15:40:29 |
| 5 | misc01 | 2016-07-17 15:42:33 |
+----+--------+---------------------+
5 rows in set (0.00 sec)
root@localhost [GR_TEST]>
On NODE2, confirm the result of the DDL executed on NODE1, and then insert a row from NODE2.
root@localhost [GR_TEST]> desc T01;
+--------------+-------------+------+-----+-------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+-------------------+-------+
| ID | int(11) | NO | PRI | NULL | |
| MEMO | varchar(30) | NO | | | |
| created_time | datetime | YES | | CURRENT_TIMESTAMP | |
+--------------+-------------+------+-----+-------------------+-------+
3 rows in set (0.01 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO | created_time |
+----+--------+---------------------+
| 1 | misc01 | 2016-07-17 15:40:29 |
| 2 | misc02 | 2016-07-17 15:40:29 |
| 3 | misc01 | 2016-07-17 15:40:29 |
| 4 | misc02 | 2016-07-17 15:40:29 |
| 5 | misc01 | 2016-07-17 15:42:33 |
+----+--------+---------------------+
5 rows in set (0.00 sec)
root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (6,@@hostname);
Query OK, 1 row affected (0.01 sec)
root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO | created_time |
+----+--------+---------------------+
| 1 | misc01 | 2016-07-17 15:40:29 |
| 2 | misc02 | 2016-07-17 15:40:29 |
| 3 | misc01 | 2016-07-17 15:40:29 |
| 4 | misc02 | 2016-07-17 15:40:29 |
| 5 | misc01 | 2016-07-17 15:42:33 |
| 6 | misc02 | 2016-07-17 15:44:03 |
+----+--------+---------------------+
6 rows in set (0.00 sec)
root@localhost [GR_TEST]> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED |
+-------------------------------------------+
| 00000000-1111-2222-3333-123456789abc:1-15 |
+-------------------------------------------+
1 row in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7:9:11:13-14
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01 | 3306 | ONLINE |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)
root@localhost [GR_TEST]>
After inserting data on NODE2, check the state of NODE1.
The data has been applied without any problems.
Replication is row-based, so there is no need to worry about non-deterministic, time-related functions either (a quick check of the relevant settings is sketched after the output below).
root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO | created_time |
+----+--------+---------------------+
| 1 | misc01 | 2016-07-17 15:40:29 |
| 2 | misc02 | 2016-07-17 15:40:29 |
| 3 | misc01 | 2016-07-17 15:40:29 |
| 4 | misc02 | 2016-07-17 15:40:29 |
| 5 | misc01 | 2016-07-17 15:42:33 |
| 6 | misc02 | 2016-07-17 15:44:03 |
+----+--------+---------------------+
6 rows in set (0.01 sec)
root@localhost [GR_TEST]> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED |
+-------------------------------------------+
| 00000000-1111-2222-3333-123456789abc:1-15 |
+-------------------------------------------+
1 row in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:1-4:8:10:12:15
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)
root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01 | 3306 | ONLINE |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)
root@localhost [GR_TEST]>
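Since Group Replication relies on row-based binary logging and GTIDs (which is why the time-related functions above are not a problem), the relevant settings can be double-checked with standard server variables (a simple sketch):

SHOW GLOBAL VARIABLES
 WHERE Variable_name IN ('binlog_format', 'gtid_mode', 'enforce_gtid_consistency');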
Aside: every Group Replication node is a master, so SHOW SLAVE STATUS is not needed.
/*** NODE1 ***/
root@localhost [GR_TEST]> show slave status\G
Empty set (0.00 sec)
root@localhost [GR_TEST]> show master status;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000001 | 4391 | | | 00000000-1111-2222-3333-123456789abc:1-15 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)
root@localhost [GR_TEST]>
/*** NODE2 ***/
root@localhost [GR_TEST]> show slave status\G
Empty set (0.02 sec)
root@localhost [GR_TEST]> show master status;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000001 | 4391 | | | 00000000-1111-2222-3333-123456789abc:1-15 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)
root@localhost [GR_TEST]>
Related parameters (for reference)
root@localhost [GR_TEST]> show global variables like '%group_repli%';
+---------------------------------------------------+-------------------------------------------+
| Variable_name | Value |
+---------------------------------------------------+-------------------------------------------+
| group_replication_allow_local_lower_version_join | OFF |
| group_replication_auto_increment_increment | 7 |
| group_replication_bootstrap_group | OFF |
| group_replication_components_stop_timeout | 31536000 |
| group_replication_gcs_engine | xcom |
| group_replication_group_name | 00000000-1111-2222-3333-123456789ABC |
| group_replication_local_address | 192.168.56.101:13001 |
| group_replication_peer_addresses | 192.168.56.101:13001,192.168.56.102:13001 |
| group_replication_pipeline_type_var | STANDARD |
| group_replication_recovery_complete_at | TRANSACTIONS_CERTIFIED |
| group_replication_recovery_password | |
| group_replication_recovery_reconnect_interval | 120 |
| group_replication_recovery_retry_count | 2 |
| group_replication_recovery_ssl_ca | |
| group_replication_recovery_ssl_capath | |
| group_replication_recovery_ssl_cert | |
| group_replication_recovery_ssl_cipher | |
| group_replication_recovery_ssl_crl | |
| group_replication_recovery_ssl_crlpath | |
| group_replication_recovery_ssl_key | |
| group_replication_recovery_ssl_verify_server_cert | OFF |
| group_replication_recovery_use_ssl | OFF |
| group_replication_recovery_user | rpl_user |
| group_replication_start_on_boot | OFF |
+---------------------------------------------------+-------------------------------------------+
24 rows in set (0.01 sec)
root@localhost [GR_TEST]>
[References]
Labs site
http://labs.mysql.com/
Group Replication blog
http://mysqlhighavailability.com/getting-started-with-mysql-group-replication/
MySQL seminar slides
http://downloads.mysql.com/presentations/20160510_06_MySQL_57_ReplicationEnhancements.pdf
Auto-increment handling
http://mysqlhighavailability.com/mysql-group-replication-auto-increment-configuration-handling/