I set up InnoDB Cluster, and since Group Replication runs in single-primary mode by default,
I decided to also set the AUTO_INCREMENT increment back to 1, the value I am used to working with.

Note: InnoDB Cluster = MySQL Group Replication + MySQL Router + MySQL Shell

If you plan to run Group Replication in single-primary mode, it is probably best to change this value
up front, during the initial server setup. In multi-primary mode the increments need to be configured
so that values generated on different members do not collide, so it is safer to start with the
default of 7 (see the illustration below).
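
As a rough illustration (my understanding is that Group Replication sets auto_increment_increment to 7
and derives auto_increment_offset from each member's server_id), each member allocates AUTO_INCREMENT
values in its own stride, so concurrently generated ids never collide:

  member with offset 1 -> 1, 8, 15, 22, ...
  member with offset 2 -> 2, 9, 16, 23, ...
  member with offset 3 -> 3, 10, 17, 24, ...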

Just to be sure, confirm that the group is configured in single-primary mode.


mysql> show variables like 'group_replication_single_primary_mode';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON    |
+---------------------------------------+-------+
1 row in set (0.01 sec)

mysql> show variables like 'group_replication_enforce_update_everywhere_checks';
+----------------------------------------------------+-------+
| Variable_name                                      | Value |
+----------------------------------------------------+-------+
| group_replication_enforce_update_everywhere_checks | OFF   |
+----------------------------------------------------+-------+
1 row in set (0.00 sec)


mysql> SELECT * FROM performance_schema.global_status WHERE VARIABLE_NAME='group_replication_primary_member';
+----------------------------------+--------------------------------------+
| VARIABLE_NAME                    | VARIABLE_VALUE                       |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | bc653b5a-3b8b-11e7-94cd-080027d65c57 |
+----------------------------------+--------------------------------------+
1 row in set (0.00 sec)

mysql> 
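
Since global_status only reports the primary's UUID, a query along these lines (an untested sketch)
resolves it to a host and port:

mysql> SELECT m.MEMBER_HOST, m.MEMBER_PORT
    ->   FROM performance_schema.replication_group_members m
    ->   JOIN performance_schema.global_status s
    ->     ON s.VARIABLE_VALUE = m.MEMBER_ID
    ->  WHERE s.VARIABLE_NAME = 'group_replication_primary_member';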

Current group configuration


-bash-4.2$ ./2_gr_status.sh 
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | bc653b5a-3b8b-11e7-94cd-080027d65c57 | replications |       63301 | ONLINE       |
| group_replication_applier | c68819f0-3b8b-11e7-958b-080027d65c57 | replications |       63302 | ONLINE       |
| group_replication_applier | d0a3d2c8-3b8b-11e7-97ef-080027d65c57 | replications |       63303 | ONLINE       |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
-bash-4.2$ 

Checking the replication topology view in MySQL Enterprise Monitor

Here you can also confirm that auto_increment_increment is 7 by default.

Default values set when Group Replication is configured.
In multi-primary mode I would use them as they are, but personally I want AUTO_INCREMENT values
to advance one by one, so I changed them back to the usual MySQL settings.

mysql> show variables like '%increment%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| auto_increment_increment                   | 7     |
| auto_increment_offset                      | 1     |
| div_precision_increment                    | 4     |
| group_replication_auto_increment_increment | 7     |
| innodb_autoextend_increment                | 64    |
+--------------------------------------------+-------+
5 rows in set (0.00 sec)

Change the value: group_replication_auto_increment_increment = 1, so that AUTO_INCREMENT values advance by 1.
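
A minimal sketch of the change itself, applied on every member. As far as I know this variable cannot
be changed while Group Replication is running on the member, so either stop the plugin first as below,
or put group_replication_auto_increment_increment = 1 in my.cnf before the server starts:

mysql> STOP GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_auto_increment_increment = 1;
mysql> START GROUP_REPLICATION;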


mysql> show variables like '%increment%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| auto_increment_increment                   | 1     |
| auto_increment_offset                      | 1     |
| div_precision_increment                    | 4     |
| group_replication_auto_increment_increment | 1     |
| innodb_autoextend_increment                | 64    |
+--------------------------------------------+-------+
5 rows in set (0.01 sec)

After the change, MySQL Enterprise Monitor also shows, as expected, that the auto_increment value is now 1.

Confirming AUTO_INCREMENT behavior after the configuration change


mysql> CREATE TABLE `T_MEMO` (
    -> `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
    -> `comment` varchar(100) NOT NULL,
    -> PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
Query OK, 0 rows affected (0.39 sec)

mysql> insert into T_MEMO(comment) values('Change group_replication_auto_increment_increment from 7 to 1');
Query OK, 1 row affected (0.77 sec)

mysql> select * from T_MEMO;
+----+---------------------------------------------------------------+
| id | comment                                                       |
+----+---------------------------------------------------------------+
|  1 | Change group_replication_auto_increment_increment from 7 to 1 |
+----+---------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> insert into T_MEMO(comment) values('Change id 1 possible only on Singale Master Mode');
Query OK, 1 row affected (0.11 sec)

mysql> select * from T_MEMO;
+----+---------------------------------------------------------------+
| id | comment                                                       |
+----+---------------------------------------------------------------+
|  1 | Change group_replication_auto_increment_increment from 7 to 1 |
|  2 | Change id 1 possible only on Singale Master Mode              |
+----+---------------------------------------------------------------+
2 rows in set (0.00 sec)

Note: since this is single-primary mode, the transaction isolation level can stay at REPEATABLE-READ.


mysql> show variables like 'tx_isolation';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| tx_isolation  | REPEATABLE-READ |
+---------------+-----------------+
1 row in set (0.01 sec)
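
In multi-primary mode, on the other hand, READ COMMITTED is commonly recommended because certification
does not take gap locks into account. A sketch of changing it (affects new connections only; add
transaction-isolation = READ-COMMITTED to my.cnf to make it permanent):

mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';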


As for monitoring MySQL Group Replication, you can check the replication state through performance_schema and build your own monitoring on top of it, but MySQL Enterprise Monitor 3.4 now simplifies this with a Group Replication topology view, advisors, and so on, helping keep the system running stably while reducing the operational load. Writing your own monitoring tool is certainly possible, but keeping such a tool up to date takes effort, so MEM is useful if you want to rely on existing tools as much as possible.

Objects for monitoring Group Replication status

performance_schema.replication_group_member_stats
performance_schema.replication_group_members
performance_schema.replication_connection_status
performance_schema.replication_applier_status

Status monitoring examples

Check the status of the group members:

mysql> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 698f11c8-0397-11e7-aae1-080027d65c57
 MEMBER_HOST: replications
 MEMBER_PORT: 63301
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 713ad572-0397-11e7-aca3-080027d65c57
 MEMBER_HOST: replications
 MEMBER_PORT: 63302
MEMBER_STATE: ONLINE

Check the transactions committed by the group, queue growth, the number of conflicts detected, the number of transactions checked (certified), and so on:

mysql> SELECT * FROM performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14896410386000092:7
                         MEMBER_ID: 78b1d98a-0397-11e7-aef2-080027d65c57
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 2
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 00000000-1111-2222-3333-123456789abc:1-29
    LAST_CONFLICT_FREE_TRANSACTION: 00000000-1111-2222-3333-123456789abc:29
1 row in set (0.00 sec)

Check the channel name and the transactions received from the group and placed in the applier queue (relay log):

mysql> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:1-29
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
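
The fourth table from the list above, replication_applier_status, can be queried in the same way;
a sketch (output omitted):

mysql> SELECT CHANNEL_NAME, SERVICE_STATE, COUNT_TRANSACTIONS_RETRIES
    ->   FROM performance_schema.replication_applier_status
    ->  WHERE CHANNEL_NAME = 'group_replication_applier';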

Monitoring examples with MySQL Enterprise Monitor
The state of Group Replication can be visualized and the details managed centrally from the management node.
When a failure occurs, notifications can also be sent by e-mail or as SNMP traps.

Group topology (replication topology and status under normal conditions)

Group topology (during a failure)

Group Replication status overview

Other replication-related details

Checking the logs when an error occurs (Group Replication error logs)

There are also videos on YouTube that are nice to watch to catch up on Group Replication.

More information:
https://dev.mysql.com/doc/mysql-monitor/3.4/en/mem-replication.html

https://dev.mysql.com/doc/mysql-monitor/3.4/en/mem-replication-dashboard-ui-ref.html#fig-mem-group-replication-topology-single

Download the MySQL Enterprise trial:
https://www.mysql.com/jp/trials/


At the moment it is still a labs release, but Group Replication, a synchronous-style replication plugin for MySQL 5.7 that supports multi-master (active/active) replication, is being prepared. Being a labs release, it still needs more features and bug fixes, but it should stabilize step by step as it moves from labs release through development release and RC to GA, so please try it in a test environment when the next labs release comes out.
It looks useful for master HA, and for spreading the master's replication load in environments with many slaves.

The Group Replication reference blog below explains the basic installation steps, so please refer to it if you want to try this yourself.

http://mysqlhighavailability.com/getting-started-with-mysql-group-replication/

Start Group Replication on NODE1
* If these settings are written in the option file, running the SET commands is not necessary.
* Use a port for XCOM communication that is separate from the normal MySQL port 3306.
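
The recovery credentials referenced below (rpl_user / rpl_pass) are assumed to already exist on every node;
if not, they can be created with something like this sketch:

mysql> SET SQL_LOG_BIN=0;  -- keep the user creation out of the binary log
mysql> CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';
mysql> SET SQL_LOG_BIN=1;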


root@localhost [mysql]> SET GLOBAL group_replication_group_name= "00000000-1111-2222-3333-123456789ABC";
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_bootstrap_group= 1;
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_local_address="192.168.56.101:13001";
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_peer_addresses= "192.168.56.101:13001,192.168.56.102:13001";
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_recovery_user='rpl_user';
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_recovery_password='rpl_pass';
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.59 sec)

root@localhost [mysql]> SET GLOBAL group_replication_bootstrap_group= 0;
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:1-4
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [mysql]> 


Now let's have NODE2 join the group.


root@localhost [mysql]> SET GLOBAL group_replication_group_name= "00000000-1111-2222-3333-123456789ABC";
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_local_address="192.168.56.102:13001";
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_peer_addresses= "192.168.56.101:13001,192.168.56.102:13001";
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_recovery_user='rpl_user';
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> SET GLOBAL group_replication_recovery_password='rpl_pass';
Query OK, 0 rows affected (0.00 sec)

root@localhost [mysql]> START GROUP_REPLICATION;
Query OK, 0 rows affected (3.04 sec)

root@localhost [mysql]> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 29ea17bc-3848-11e6-9900-0800279ca844
 MEMBER_HOST: misc01
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 5b07d5d8-4057-11e6-a315-0800279cea3c
 MEMBER_HOST: misc02
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
2 rows in set (0.01 sec)

root@localhost [mysql]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [mysql]> 



(Screenshot: group_members)

Since MEMBER_STATE is ONLINE on both members, let's run some DDL and DML and check that they are synchronized.
First, create a database and a table on NODE1, then insert one row.
The objects and data created on NODE1 can also be seen on NODE2, and likewise,
data inserted on NODE2 can be seen on NODE1.


root@localhost [mysql]> CREATE DATABASE GR_TEST;
Query OK, 1 row affected (0.03 sec)

root@localhost [mysql]> use GR_TEST;
Database changed
root@localhost [GR_TEST]> CREATE TABLE GR_TEST.T01 (
    -> ID INT NOT NULL PRIMARY KEY,
    -> MEMO varchar(30) COLLATE utf8_bin NOT NULL DEFAULT ''
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Query OK, 0 rows affected (0.07 sec)

root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (1,@@hostname);
Query OK, 1 row affected (0.07 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
+----+--------+
1 row in set (0.01 sec)

root@localhost [GR_TEST]> 

Check the data on NODE2.


root@localhost [mysql]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| GR_TEST            |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

root@localhost [mysql]> use GR_TEST
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
+----+--------+
1 row in set (0.00 sec)

root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (2,@@hostname);
Query OK, 1 row affected (0.04 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
|  2 | misc02 |
+----+--------+
2 rows in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01      |        3306 | ONLINE       |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [GR_TEST]> 

The data inserted on NODE2 can also be seen on NODE1,
confirming that replication works in both directions.


root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
|  2 | misc02 |
+----+--------+
2 rows in set (0.00 sec)


root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01      |        3306 | ONLINE       |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [GR_TEST]> 

Checking the GTID state
Transactions executed locally are not reflected in RECEIVED_TRANSACTION_SET, so check @@GLOBAL.GTID_EXECUTED to see how far the server has actually applied.


root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7:9
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [GR_TEST]> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                    |
+-------------------------------------------+
| 00000000-1111-2222-3333-123456789abc:1-10 |
+-------------------------------------------+
1 row in set (0.00 sec)

root@localhost [GR_TEST]> 
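
To see what has been received from the group but not yet applied locally, the two values can also be
combined with GTID_SUBTRACT; an untested sketch:

mysql> SELECT GTID_SUBTRACT(RECEIVED_TRANSACTION_SET, @@GLOBAL.GTID_EXECUTED) AS not_yet_applied
    ->   FROM performance_schema.replication_connection_status
    ->  WHERE CHANNEL_NAME = 'group_replication_applier';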

When a transaction conflict occurs between NODE1 and NODE2
(that is, both try to update the same data at the same time)

First, open a transaction on NODE1 and run an update. Then, before that transaction commits, update the same row on NODE2.
NODE1, which started first, completes without any problem, while the COMMIT on NODE2 ends with an error.

root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
|  2 | misc02 |
|  3 | misc01 |
|  4 | misc02 |
+----+--------+
4 rows in set (0.00 sec)

root@localhost [GR_TEST]> start transaction;update T01 set MEMO = @@hostname where ID = 4;
Query OK, 0 rows affected (0.00 sec)

Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0

root@localhost [GR_TEST]> commit;
Query OK, 0 rows affected (0.01 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
|  2 | misc02 |
|  3 | misc01 |
|  4 | misc01 |
+----+--------+
4 rows in set (0.00 sec)

root@localhost [GR_TEST]> 

NODE2 fails with: ERROR 1180 (HY000): Got error 149 during COMMIT.

root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
|  2 | misc02 |
|  3 | misc01 |
|  4 | misc02 |
+----+--------+
4 rows in set (0.00 sec)

root@localhost [GR_TEST]> start transaction;update T01 set MEMO = 'MISC02' where ID = 4;
Query OK, 0 rows affected (0.00 sec)

Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

root@localhost [GR_TEST]> commit;
ERROR 1180 (HY000): Got error 149 during COMMIT
root@localhost [GR_TEST]> select * from T01;
+----+--------+
| ID | MEMO   |
+----+--------+
|  1 | misc01 |
|  2 | misc02 |
|  3 | misc01 |
|  4 | misc01 |
+----+--------+
4 rows in set (0.00 sec)

root@localhost [GR_TEST]> 
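
The losing transaction is rolled back when certification fails, so from the application side the usual
handling is simply to retry it; a sketch:

mysql> start transaction;
mysql> update T01 set MEMO = 'MISC02' where ID = 4;
mysql> commit;   -- succeeds this time unless it races with another conflicting write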

(Screenshot: group_replication_tran_conf)

Running an ALTER as online DDL
As it turns out, DDL also executes without problems and is propagated. However, under the current GR specification online schema changes are not recommended, so a safer procedure seems to be:
1) run the DDL with the binary log turned off, 2) run the same DDL on the other nodes, also with the binary log turned off, and 3) finally update the application to pick up the change (see the sketch below).
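
A minimal sketch of steps 1) and 2), run the same way on each node (the ALTER here is only an example):

mysql> SET SQL_LOG_BIN = 0;  -- keep this DDL out of the binary log so it is not replicated
mysql> ALTER TABLE GR_TEST.T01 ADD COLUMN created_time datetime DEFAULT CURRENT_TIMESTAMP;
mysql> SET SQL_LOG_BIN = 1;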

Run the DDL on NODE1 and add a column.

root@localhost [GR_TEST]> ALTER TABLE T01 add column created_time datetime DEFAULT CURRENT_TIMESTAMP;
Query OK, 0 rows affected (0.12 sec)
Records: 0  Duplicates: 0  Warnings: 0

root@localhost [GR_TEST]> desc T01;
+--------------+-------------+------+-----+-------------------+-------+
| Field        | Type        | Null | Key | Default           | Extra |
+--------------+-------------+------+-----+-------------------+-------+
| ID           | int(11)     | NO   | PRI | NULL              |       |
| MEMO         | varchar(30) | NO   |     |                   |       |
| created_time | datetime    | YES  |     | CURRENT_TIMESTAMP |       |
+--------------+-------------+------+-----+-------------------+-------+
3 rows in set (0.03 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO   | created_time        |
+----+--------+---------------------+
|  1 | misc01 | 2016-07-17 15:40:29 |
|  2 | misc02 | 2016-07-17 15:40:29 |
|  3 | misc01 | 2016-07-17 15:40:29 |
|  4 | misc02 | 2016-07-17 15:40:29 |
+----+--------+---------------------+
4 rows in set (0.00 sec)

root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (5,@@hostname);
Query OK, 1 row affected (0.01 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO   | created_time        |
+----+--------+---------------------+
|  1 | misc01 | 2016-07-17 15:40:29 |
|  2 | misc02 | 2016-07-17 15:40:29 |
|  3 | misc01 | 2016-07-17 15:40:29 |
|  4 | misc02 | 2016-07-17 15:40:29 |
|  5 | misc01 | 2016-07-17 15:42:33 |
+----+--------+---------------------+
5 rows in set (0.00 sec)

root@localhost [GR_TEST]> 

On NODE2, check the result of the DDL executed on NODE1, and then insert data from NODE2.

root@localhost [GR_TEST]> desc T01;
+--------------+-------------+------+-----+-------------------+-------+
| Field        | Type        | Null | Key | Default           | Extra |
+--------------+-------------+------+-----+-------------------+-------+
| ID           | int(11)     | NO   | PRI | NULL              |       |
| MEMO         | varchar(30) | NO   |     |                   |       |
| created_time | datetime    | YES  |     | CURRENT_TIMESTAMP |       |
+--------------+-------------+------+-----+-------------------+-------+
3 rows in set (0.01 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO   | created_time        |
+----+--------+---------------------+
|  1 | misc01 | 2016-07-17 15:40:29 |
|  2 | misc02 | 2016-07-17 15:40:29 |
|  3 | misc01 | 2016-07-17 15:40:29 |
|  4 | misc02 | 2016-07-17 15:40:29 |
|  5 | misc01 | 2016-07-17 15:42:33 |
+----+--------+---------------------+
5 rows in set (0.00 sec)

root@localhost [GR_TEST]> INSERT INTO GR_TEST.T01(ID,MEMO) VALUES (6,@@hostname);
Query OK, 1 row affected (0.01 sec)

root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO   | created_time        |
+----+--------+---------------------+
|  1 | misc01 | 2016-07-17 15:40:29 |
|  2 | misc02 | 2016-07-17 15:40:29 |
|  3 | misc01 | 2016-07-17 15:40:29 |
|  4 | misc02 | 2016-07-17 15:40:29 |
|  5 | misc01 | 2016-07-17 15:42:33 |
|  6 | misc02 | 2016-07-17 15:44:03 |
+----+--------+---------------------+
6 rows in set (0.00 sec)


root@localhost [GR_TEST]> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                    |
+-------------------------------------------+
| 00000000-1111-2222-3333-123456789abc:1-15 |
+-------------------------------------------+
1 row in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:4-7:9:11:13-14
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01      |        3306 | ONLINE       |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

root@localhost [GR_TEST]> 

After inserting data on NODE2, check the state on NODE1.
The data has been applied with no particular problems.
Since replication is row-based, there is also no need to worry about time-related functions and the like.


root@localhost [GR_TEST]> select * from T01;
+----+--------+---------------------+
| ID | MEMO   | created_time        |
+----+--------+---------------------+
|  1 | misc01 | 2016-07-17 15:40:29 |
|  2 | misc02 | 2016-07-17 15:40:29 |
|  3 | misc01 | 2016-07-17 15:40:29 |
|  4 | misc02 | 2016-07-17 15:40:29 |
|  5 | misc01 | 2016-07-17 15:42:33 |
|  6 | misc02 | 2016-07-17 15:44:03 |
+----+--------+---------------------+
6 rows in set (0.01 sec)

root@localhost [GR_TEST]> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                    |
+-------------------------------------------+
| 00000000-1111-2222-3333-123456789abc:1-15 |
+-------------------------------------------+
1 row in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 00000000-1111-2222-3333-123456789ABC
              SOURCE_UUID: 00000000-1111-2222-3333-123456789ABC
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 00000000-1111-2222-3333-123456789abc:1-4:8:10:12:15
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

root@localhost [GR_TEST]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 29ea17bc-3848-11e6-9900-0800279ca844 | misc01      |        3306 | ONLINE       |
| group_replication_applier | 5b07d5d8-4057-11e6-a315-0800279cea3c | misc02      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

root@localhost [GR_TEST]> 

Aside: since every member of a Group Replication group is a master, SHOW SLAVE STATUS is not needed.


/*** NODE1 ***/
root@localhost [GR_TEST]> show slave status\G
Empty set (0.00 sec)

root@localhost [GR_TEST]> show master status;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000001 |     4391 |              |                  | 00000000-1111-2222-3333-123456789abc:1-15 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)

root@localhost [GR_TEST]> 


/*** NODE2 ***/
root@localhost [GR_TEST]> show slave status\G
Empty set (0.02 sec)

root@localhost [GR_TEST]> show master status;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000001 |     4391 |              |                  | 00000000-1111-2222-3333-123456789abc:1-15 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)

root@localhost [GR_TEST]> 

Reference: Group Replication parameters


root@localhost [GR_TEST]> show global variables like '%group_repli%';
+---------------------------------------------------+-------------------------------------------+
| Variable_name                                     | Value                                     |
+---------------------------------------------------+-------------------------------------------+
| group_replication_allow_local_lower_version_join  | OFF                                       |
| group_replication_auto_increment_increment        | 7                                         |
| group_replication_bootstrap_group                 | OFF                                       |
| group_replication_components_stop_timeout         | 31536000                                  |
| group_replication_gcs_engine                      | xcom                                      |
| group_replication_group_name                      | 00000000-1111-2222-3333-123456789ABC      |
| group_replication_local_address                   | 192.168.56.101:13001                      |
| group_replication_peer_addresses                  | 192.168.56.101:13001,192.168.56.102:13001 |
| group_replication_pipeline_type_var               | STANDARD                                  |
| group_replication_recovery_complete_at            | TRANSACTIONS_CERTIFIED                    |
| group_replication_recovery_password               |                                           |
| group_replication_recovery_reconnect_interval     | 120                                       |
| group_replication_recovery_retry_count            | 2                                         |
| group_replication_recovery_ssl_ca                 |                                           |
| group_replication_recovery_ssl_capath             |                                           |
| group_replication_recovery_ssl_cert               |                                           |
| group_replication_recovery_ssl_cipher             |                                           |
| group_replication_recovery_ssl_crl                |                                           |
| group_replication_recovery_ssl_crlpath            |                                           |
| group_replication_recovery_ssl_key                |                                           |
| group_replication_recovery_ssl_verify_server_cert | OFF                                       |
| group_replication_recovery_use_ssl                | OFF                                       |
| group_replication_recovery_user                   | rpl_user                                  |
| group_replication_start_on_boot                   | OFF                                       |
+---------------------------------------------------+-------------------------------------------+
24 rows in set (0.01 sec)

root@localhost [GR_TEST]> 

References

MySQL Labs site
http://labs.mysql.com/

Group Replication reference blog
http://mysqlhighavailability.com/getting-started-with-mysql-group-replication/

MySQL seminar slides
http://downloads.mysql.com/presentations/20160510_06_MySQL_57_ReplicationEnhancements.pdf

Auto-increment handling in Group Replication
http://mysqlhighavailability.com/mysql-group-replication-auto-increment-configuration-handling/