

Original title: Using One Case to Fully Understand the MGR Failure Detection Process

About the Author

Chen Chen, Principal MySQL Solutions Engineer at Oracle and author of the WeChat public account MySQL实战 (MySQL In Action). He has experience managing and maintaining large-scale MySQL, Redis, MongoDB, and ES deployments, and specializes in MySQL performance optimization and in dissecting how everyday database operations work under the hood.

Failure detection is a core functional module of Group Replication. It identifies faulty nodes in the cluster in a timely manner and evicts them from the group. If faulty nodes are not removed promptly, cluster performance suffers, and changes to the cluster topology are blocked.

Below, we use a concrete case to walk through Group Replication's failure detection process.

Along the way, this article also examines the following questions:

- When a network partition occurs, how are the minority nodes affected?
- What is the XCom Cache, and how do you estimate the size it needs?
- In production, why should group_replication_member_expel_timeout not be set too large?

I. Case Study

Below is the topology of the test cluster, running in multi-primary mode.

(Figure 1: topology of the test cluster)

The test consists of two steps:

- Simulate a network partition and observe its impact on each node.
- Restore the network connection and observe how each node reacts.

1. Simulating a Network Partition

First, simulate a network partition by running the following on node3.

# iptables -A INPUT -p tcp -s 192.168.244.10 -j DROP

# iptables -A OUTPUT -p tcp -d 192.168.244.10 -j DROP

# iptables -A INPUT -p tcp -s 192.168.244.20 -j DROP

# iptables -A OUTPUT -p tcp -d 192.168.244.20 -j DROP

# date "+%Y-%m-%d %H:%M:%S"

2022-07-31 13:03:01

The iptables commands cut the network connections between node3 and node1/node2; the date command records when they were executed.

Five seconds after the commands complete (this interval is fixed, specified in the source code by DETECTOR_LIVE_TIMEOUT), the nodes begin to react, which can be observed in each node's log.

First, look at node1's log and the cluster state.

2022-07-31T13:03:07.582519-00:00 0 [Warning] [MY-011493] [Repl] Plugin group_replication reported: Member with address 192.168.244.30:3306 has become unreachable.

mysql> select member_id,member_host,member_port,member_state,member_role from performance_schema.replication_group_members;

+--------------------------------------+----------------+-------------+--------------+-------------+
| member_id                            | member_host    | member_port | member_state | member_role |
+--------------------------------------+----------------+-------------+--------------+-------------+
| 207db264-0192-11ed-92c9-02001700754e | 192.168.244.10 |        3306 | ONLINE       | PRIMARY     |
| 2cee229d-0192-11ed-8eff-02001700f110 | 192.168.244.20 |        3306 | ONLINE       | PRIMARY     |
| 4cbfdc79-0192-11ed-8b01-02001701bd0a | 192.168.244.30 |        3306 | UNREACHABLE  | PRIMARY     |
+--------------------------------------+----------------+-------------+--------------+-------------+
3 rows in set (0.00 sec)

From the perspective of node1 and node2, node3 is now in the UNREACHABLE state.

Next, look at node3.

2022-07-31T13:03:07.690416-00:00 0 [Warning] [MY-011493] [Repl] Plugin group_replication reported: Member with address 192.168.244.10:3306 has become unreachable.

2022-07-31T13:03:07.690492-00:00 0 [Warning] [MY-011493] [Repl] Plugin group_replication reported: Member with address 192.168.244.20:3306 has become unreachable.

2022-07-31T13:03:07.690504-00:00 0 [ERROR] [MY-011495] [Repl] Plugin group_replication reported: This server is not able to reach a majority of members in the group. This server will now block all updates. The server will remain blocked until contact with the majority is restored. It is possible to use group_replication_force_members to force a new group membership.

mysql> select member_id,member_host,member_port,member_state,member_role from performance_schema.replication_group_members;

+--------------------------------------+----------------+-------------+--------------+-------------+
| member_id                            | member_host    | member_port | member_state | member_role |
+--------------------------------------+----------------+-------------+--------------+-------------+
| 207db264-0192-11ed-92c9-02001700754e | 192.168.244.10 |        3306 | UNREACHABLE  | PRIMARY     |
| 2cee229d-0192-11ed-8eff-02001700f110 | 192.168.244.20 |        3306 | UNREACHABLE  | PRIMARY     |
| 4cbfdc79-0192-11ed-8b01-02001701bd0a | 192.168.244.30 |        3306 | ONLINE       | PRIMARY     |
+--------------------------------------+----------------+-------------+--------------+-------------+
3 rows in set (0.00 sec)

From node3's perspective, node1 and node2 are now UNREACHABLE.

With only one of the three nodes ONLINE, the group no longer satisfies Group Replication's majority requirement. At this point node3 can only serve queries; writes are blocked.

mysql> select * from slowtech.t1 where id=1;

+----+------+
| id | c1   |
+----+------+
|  1 | a    |
+----+------+
1 row in set (0.00 sec)

mysql> delete from slowtech.t1 where id=1;

(blocked...)

Another 16 seconds later (this 16s is in fact governed by the group_replication_member_expel_timeout parameter), node1 and node2 expel node3 from the cluster. The cluster now consists of only two nodes.

Look at node1's log and the cluster state.

2022-07-31T13:03:23.576960-00:00 0 [Warning] [MY-011499] [Repl] Plugin group_replication reported: Members removed from the group: 192.168.244.30:3306

2022-07-31T13:03:23.577091-00:00 0 [System] [MY-011503] [Repl] Plugin group_replication reported: Group membership changed to 192.168.244.10:3306, 192.168.244.20:3306 on view 16592724636525403:3.

mysql> select member_id,member_host,member_port,member_state,member_role from performance_schema.replication_group_members;

+--------------------------------------+----------------+-------------+--------------+-------------+
| member_id                            | member_host    | member_port | member_state | member_role |
+--------------------------------------+----------------+-------------+--------------+-------------+
| 207db264-0192-11ed-92c9-02001700754e | 192.168.244.10 |        3306 | ONLINE       | PRIMARY     |
| 2cee229d-0192-11ed-8eff-02001700f110 | 192.168.244.20 |        3306 | ONLINE       | PRIMARY     |
+--------------------------------------+----------------+-------------+--------------+-------------+
2 rows in set (0.00 sec)

Now look at node3: its log shows no new output, and the node states are unchanged.

mysql> select member_id,member_host,member_port,member_state,member_role from performance_schema.replication_group_members;

+--------------------------------------+----------------+-------------+--------------+-------------+
| member_id                            | member_host    | member_port | member_state | member_role |
+--------------------------------------+----------------+-------------+--------------+-------------+
| 207db264-0192-11ed-92c9-02001700754e | 192.168.244.10 |        3306 | UNREACHABLE  | PRIMARY     |
| 2cee229d-0192-11ed-8eff-02001700f110 | 192.168.244.20 |        3306 | UNREACHABLE  | PRIMARY     |
| 4cbfdc79-0192-11ed-8b01-02001701bd0a | 192.168.244.30 |        3306 | ONLINE       | PRIMARY     |
+--------------------------------------+----------------+-------------+--------------+-------------+
3 rows in set (0.00 sec)

2. Restoring the Network Connection

Next, restore the network connections between node3 and node1/node2.

# iptables -F

# date "+%Y-%m-%d %H:%M:%S"

2022-07-31 13:07:30

First, look at node3's log.

2022-07-31T13:07:30.464179-00:00 0 [Warning] [MY-011494] [Repl] Plugin group_replication reported: Member with address 192.168.244.10:3306 is reachable again.

2022-07-31T13:07:30.464226-00:00 0 [Warning] [MY-011494] [Repl] Plugin group_replication reported: Member with address 192.168.244.20:3306 is reachable again.

2022-07-31T13:07:30.464239-00:00 0 [Warning] [MY-011498] [Repl] Plugin group_replication reported: The member has resumed contact with a majority of the members in the group. Regular operation is restored and transactions are unblocked.

2022-07-31T13:07:37.458761-00:00 0 [ERROR] [MY-011505] [Repl] Plugin group_replication reported: Member was expelled from the group due to network failures, changing member status to ERROR.

2022-07-31T13:07:37.459011-00:00 0 [Warning] [MY-011630] [Repl] Plugin group_replication reported: Due to a plugin error, some transactions were unable to be certified and will now rollback.

2022-07-31T13:07:37.459037-00:00 0 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: The server was automatically set into read only mode after an error was detected.

2022-07-31T13:07:37.459431-00:00 31 [ERROR] [MY-011615] [Repl] Plugin group_replication reported: Error while waiting for conflict detection procedure to finish on session 31

2022-07-31T13:07:37.459478-00:00 31 [ERROR] [MY-010207] [Repl] Run function before_commit in plugin group_replication failed

2022-07-31T13:07:37.459811-00:00 33 [System] [MY-011565] [Repl] Plugin group_replication reported: Setting super_read_only=ON.

2022-07-31T13:07:37.465738-00:00 34 [System] [MY-013373] [Repl] Plugin group_replication reported: Started auto-rejoin procedure attempt 1 of 3

2022-07-31T13:07:37.496466-00:00 0 [System] [MY-011504] [Repl] Plugin group_replication reported: Group membership changed: This member has left the group.

2022-07-31T13:07:37.498813-00:00 36 [System] [MY-010597] [Repl] CHANGE MASTER TO FOR CHANNEL group_replication_applier executed. Previous state master_host=<NULL>, master_port= 0, master_log_file=, master_log_pos= 351, master_bind=. New state master_host=<NULL>, master_port= 0, master_log_file=, master_log_pos= 4, master_bind=.

2022-07-31T13:07:39.653028-00:00 34 [System] [MY-013375] [Repl] Plugin group_replication reported: Auto-rejoin procedure attempt 1 of 3 finished. Member was able to join the group.

2022-07-31T13:07:40.653484-00:00 0 [System] [MY-013471] [Repl] Plugin group_replication reported: Distributed recovery will transfer data using: Incremental recovery from a group donor

2022-07-31T13:07:40.653822-00:00 0 [System] [MY-011503] [Repl] Plugin group_replication reported: Group membership changed to 192.168.244.10:3306, 192.168.244.20:3306, 192.168.244.30:3306 on view 16592724636525403:4.

2022-07-31T13:07:40.670530-00:00 46 [System] [MY-010597] [Repl] CHANGE MASTER TO FOR CHANNEL group_replication_recovery executed. Previous state master_host=<NULL>, master_port= 0, master_log_file=, master_log_pos= 4, master_bind=. New state master_host=192.168.244.20, master_port= 3306, master_log_file=, master_log_pos= 4, master_bind=.

2022-07-31T13:07:40.682990-00:00 47 [Warning] [MY-010897] [Repl] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the START SLAVE Syntax in the MySQL Manual for more information.

2022-07-31T13:07:40.687566-00:00 47 [System] [MY-010562] [Repl] Slave I/O thread for channel group_replication_recovery: connected to master [email protected]:3306,replication started in log FIRST at position 4

2022-07-31T13:07:40.717851-00:00 46 [System] [MY-010597] [Repl] CHANGE MASTER TO FOR CHANNEL group_replication_recovery executed. Previous state master_host=192.168.244.20, master_port= 3306, master_log_file=, master_log_pos= 4, master_bind=. New state master_host=<NULL>, master_port= 0, master_log_file=, master_log_pos= 4, master_bind=.

2022-07-31T13:07:40.732297-00:00 0 [System] [MY-011490] [Repl] Plugin group_replication reported: This server was declared online within the replication group.

2022-07-31T13:07:40.732511-00:00 53 [System] [MY-011566] [Repl] Plugin group_replication reported: Setting super_read_only=OFF.

日志的輸出包括兩部分,以空格為分界線。

When the network recovers, node3 re-establishes contact with node1 and node2, discovers that it has been expelled from the cluster, and transitions to the ERROR state.

mysql> select member_id,member_host,member_port,member_state,member_role from performance_schema.replication_group_members;

+--------------------------------------+----------------+-------------+--------------+-------------+
| member_id                            | member_host    | member_port | member_state | member_role |
+--------------------------------------+----------------+-------------+--------------+-------------+
| 4cbfdc79-0192-11ed-8b01-02001701bd0a | 192.168.244.30 |        3306 | ERROR        |             |
+--------------------------------------+----------------+-------------+--------------+-------------+
1 row in set (0.00 sec)

A node in the ERROR state automatically sets itself read-only, which is the super_read_only=ON seen in the log. Note that setting an ERROR-state node read-only is the default behavior; it has nothing to do with the group_replication_exit_state_action parameter discussed later.

If group_replication_autorejoin_tries is not 0, a node in the ERROR state automatically retries joining the cluster (auto-rejoin). The number of attempts is determined by group_replication_autorejoin_tries, which defaults to 3 as of MySQL 8.0.21. The interval between attempts is 5 minutes. Once an attempt succeeds, the node enters the distributed recovery phase.
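As a back-of-the-envelope illustration of that retry budget (the helper below is hypothetical, not part of MySQL; it simply multiplies out the defaults stated above):

```python
AUTOREJOIN_INTERVAL_SECONDS = 5 * 60  # attempts are spaced 5 minutes apart

def max_autorejoin_window(tries: int = 3) -> int:
    """Rough upper bound, in seconds, on the auto-rejoin phase.

    `tries` mirrors group_replication_autorejoin_tries (default 3 since
    MySQL 8.0.21). This sketch ignores the (short) duration of each
    join attempt itself.
    """
    return tries * AUTOREJOIN_INTERVAL_SECONDS

print(max_autorejoin_window())  # 900 seconds, i.e. up to about 15 minutes
```

In other words, with the defaults a node may sit in ERROR for up to roughly 15 minutes before group_replication_exit_state_action is ever consulted.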

Next, look at node1's log.

2022-07-31T13:07:39.555613-00:00 0 [System] [MY-011503] [Repl] Plugin group_replication reported: Group membership changed to 192.168.244.10:3306, 192.168.244.20:3306, 192.168.244.30:3306 on view 16592724636525403:4.

2022-07-31T13:07:40.732568-00:00 0 [System] [MY-011492] [Repl] Plugin group_replication reported: The member with address 192.168.244.30:3306 was declared online within the replication group.

node3 has rejoined the cluster.

II. The Failure Detection Process

With the case above in mind, let's walk through Group Replication's failure detection process.

1) Every node in the cluster periodically (once per second) sends heartbeats to the other nodes. If a node receives no heartbeat from another node for 5 seconds (a fixed value, with no parameter to adjust it), it marks that node as a suspect and sets its state to UNREACHABLE. If half or more of the cluster's nodes show as UNREACHABLE, the cluster can no longer serve writes.

2) If the suspect node recovers within group_replication_member_expel_timeout (default 5 as of MySQL 8.0.21, in seconds; maximum 3600, i.e. 1 hour), it applies the messages in the XCom Cache directly. The size of the XCom Cache is set by group_replication_message_cache_size, which defaults to 1GB.

3) If the suspect node does not recover within group_replication_member_expel_timeout, it is expelled from the cluster.

4) Minority nodes, by contrast, do not leave the cluster automatically; they hold their current state until either:

- The network recovers; or
- group_replication_unreachable_majority_timeout is reached. Note that this timeout starts counting 5 seconds after the connection is lost, not when the suspect node is expelled. It defaults to 0.

5) Either way, the following is triggered:

- The node's state changes from ONLINE to ERROR.
- Currently blocked writes are rolled back.

mysql> delete from slowtech.t1 where id=1;

ERROR 3100 (HY000): Error on observer while running replication hook before_commit.

6) A node in the ERROR state is automatically set read-only.

7) If group_replication_autorejoin_tries is not 0, an ERROR-state node automatically retries joining the cluster (auto-rejoin).

8) If group_replication_autorejoin_tries is 0, or all retries fail, the action specified by group_replication_exit_state_action is executed. The options are:

- READ_ONLY: read-only mode. super_read_only is set to ON. This is the default.
- OFFLINE_MODE: offline mode. Both offline_mode and super_read_only are set to ON; only users with the CONNECTION_ADMIN (or SUPER) privilege can log in, ordinary users cannot.

# mysql -h 192.168.244.30 -P 3306 -ut1 -p123456

ERROR 3032 (HY000): The server is currently in offline mode

- ABORT_SERVER: shut down the instance.
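The timestamps in the case study line up with this flow. A minimal sketch of the timeline (the helper is illustrative only; the parameter and constant names are real, the function itself is made up):

```python
from datetime import datetime, timedelta

DETECTOR_LIVE_TIMEOUT = 5  # seconds; fixed in the Group Replication source

def detection_timeline(partition_time: str, expel_timeout: int = 5):
    """Sketch of the failure-detection timeline for a partitioned node.

    expel_timeout mirrors group_replication_member_expel_timeout
    (default 5 since MySQL 8.0.21). Returns (suspected_at,
    earliest_expulsion_at); the real expulsion can land several seconds
    later, since it is carried out on a periodic check after the
    timeout expires.
    """
    t0 = datetime.strptime(partition_time, "%Y-%m-%d %H:%M:%S")
    suspected = t0 + timedelta(seconds=DETECTOR_LIVE_TIMEOUT)
    earliest_expulsion = suspected + timedelta(seconds=expel_timeout)
    return suspected, earliest_expulsion

suspected, expelled = detection_timeline("2022-07-31 13:03:01")
print(suspected.time())  # 13:03:06 -- the logs above show 13:03:07
print(expelled.time())   # 13:03:11 -- the logs show 13:03:23, well after this minimum
```

The gap between the theoretical minimum and the observed expulsion time is exactly the slack discussed in the case: expulsion happens on the first check after the timeout, not at the instant it expires.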

III. XCom Cache

The XCom Cache is the message cache used by XCom to store the messages exchanged between cluster nodes. The cached messages are part of the consensus protocol. If the network is unstable, nodes may drop out of contact.

If such a node recovers within a certain window (determined by group_replication_member_expel_timeout), it first applies the messages in the XCom Cache. If the XCom Cache no longer holds all the messages it needs, the node is expelled from the cluster. After expulsion, if group_replication_autorejoin_tries is not 0, it rejoins the cluster (auto-rejoin).

Rejoining the cluster uses Distributed Recovery to catch up on the missing data. Compared with applying messages straight from the XCom Cache, joining via Distributed Recovery takes longer, is more involved, and impacts cluster performance.

So when sizing the XCom Cache, estimate the memory that will be consumed during the group_replication_member_expel_timeout + 5s window. The system tables used to make that estimate are covered below.
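As a back-of-the-envelope example (an illustrative sketch, not an official formula; the helper name and the 20MB/s figure are made up for the example):

```python
def xcom_cache_estimate(write_bytes_per_sec: float,
                        expel_timeout: int = 5,
                        suspicion_window: int = 5) -> float:
    """Rough XCom Cache sizing: the message volume generated while a
    suspect node is unreachable, i.e. the fixed 5s suspicion window
    plus group_replication_member_expel_timeout (both in seconds).
    Illustrative only -- real sizing should be measured via the
    performance_schema memory tables shown below.
    """
    return write_bytes_per_sec * (suspicion_window + expel_timeout)

# e.g. ~20 MB/s of replicated message traffic with the default timeouts
print(xcom_cache_estimate(20 * 1024**2) / 1024**2)  # 200.0 (MB)
```

With a long expel timeout the requirement grows linearly: the same 20MB/s workload with a 3600s timeout would need on the order of 70GB, which is one more reason not to set the timeout too large.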

Now let's simulate a scenario where the XCom Cache is too small.

1) Set group_replication_message_cache_size to its minimum (128MB) and restart Group Replication for it to take effect.

mysql> set global group_replication_message_cache_size=134217728;

Query OK, 0 rows affected (0.00 sec)

mysql> stop group_replication;

Query OK, 0 rows affected (4.15 sec)

mysql> start group_replication;

Query OK, 0 rows affected (3.71 sec)

2) Set group_replication_member_expel_timeout to 3600 so that we have enough time to test.

mysql> set global group_replication_member_expel_timeout=3600;

Query OK, 0 rows affected (0.01 sec)

3) Cut the network connections between node3 and node1/node2.

# iptables -A INPUT -p tcp -s 192.168.244.10 -j DROP

# iptables -A OUTPUT -p tcp -d 192.168.244.10 -j DROP

# iptables -A INPUT -p tcp -s 192.168.244.20 -j DROP

# iptables -A OUTPUT -p tcp -d 192.168.244.20 -j DROP

4) Repeatedly run large transactions.

mysql> insert into slowtech.t1(c1) select c1 from slowtech.t1 limit 1000000;

Query OK, 1000000 rows affected (10.03 sec)

Records: 1000000 Duplicates: 0 Warnings: 0

5) Watch the error log.

If node1's or node2's error log shows the following message, the messages node3 needs have been evicted from the XCom Cache.

[Warning] [MY-011735] [Repl] Plugin group_replication reported: [GCS] Messages that are needed to recover node 192.168.244.30:33061 have been evicted from the message cache. Consider resizing the maximum size of the cache by setting group_replication_message_cache_size.

6) Check the system tables.

Besides the error log, we can also judge XCom Cache usage from the system tables.

mysql> select * from performance_schema.memory_summary_global_by_event_name where event_name like "%GCS_XCom::xcom_cache%"\G

*************************** 1. row ***************************

EVENT_NAME: memory/group_rpl/GCS_XCom::xcom_cache

COUNT_ALLOC: 23678

COUNT_FREE: 22754

SUM_NUMBER_OF_BYTES_ALLOC: 154713397

SUM_NUMBER_OF_BYTES_FREE: 28441492

LOW_COUNT_USED: 0

CURRENT_COUNT_USED: 924

HIGH_COUNT_USED: 20992

LOW_NUMBER_OF_BYTES_USED: 0

CURRENT_NUMBER_OF_BYTES_USED: 126271905

HIGH_NUMBER_OF_BYTES_USED: 146137294

1 row in set (0.00 sec)

Here,

- COUNT_ALLOC: number of messages that have been cached.
- COUNT_FREE: number of messages removed from the cache.
- CURRENT_COUNT_USED: number of messages currently cached, equal to COUNT_ALLOC - COUNT_FREE.
- SUM_NUMBER_OF_BYTES_ALLOC: total memory allocated.
- SUM_NUMBER_OF_BYTES_FREE: total memory freed.
- CURRENT_NUMBER_OF_BYTES_USED: memory currently in use, equal to SUM_NUMBER_OF_BYTES_ALLOC - SUM_NUMBER_OF_BYTES_FREE.
- LOW_COUNT_USED, HIGH_COUNT_USED: historical minimum and maximum of CURRENT_COUNT_USED.
- LOW_NUMBER_OF_BYTES_USED, HIGH_NUMBER_OF_BYTES_USED: historical minimum and maximum of CURRENT_NUMBER_OF_BYTES_USED.

If, after the connections are cut, COUNT_FREE changes while the large transactions are running, that likewise means the messages node3 needs have been evicted from the XCom Cache.
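The derived-column relationships can be sanity-checked against the sample output above (the numbers below are copied verbatim from the query result):

```python
# Sanity-check the derived columns of memory_summary_global_by_event_name
# for memory/group_rpl/GCS_XCom::xcom_cache, using the sample output above.
stats = {
    "COUNT_ALLOC": 23678,
    "COUNT_FREE": 22754,
    "CURRENT_COUNT_USED": 924,
    "SUM_NUMBER_OF_BYTES_ALLOC": 154713397,
    "SUM_NUMBER_OF_BYTES_FREE": 28441492,
    "CURRENT_NUMBER_OF_BYTES_USED": 126271905,
}

# current = allocated - freed, for both the message count and the bytes
assert stats["CURRENT_COUNT_USED"] == stats["COUNT_ALLOC"] - stats["COUNT_FREE"]
assert stats["CURRENT_NUMBER_OF_BYTES_USED"] == (
    stats["SUM_NUMBER_OF_BYTES_ALLOC"] - stats["SUM_NUMBER_OF_BYTES_FREE"]
)
print("derived-column relations hold")
```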

7) Restore the network connections between node3 and node1/node2.

If the network recovers within group_replication_member_expel_timeout but the messages node3 needs are no longer in the XCom Cache, node3 is expelled from the cluster anyway. Here is node3's error log in that scenario.

[ERROR] [MY-011735] [Repl] Plugin group_replication reported: [GCS] Node 0 is unable to get message {4aec99ca 7562 0}, since the group is too far ahead. Node will now exit.

[ERROR] [MY-011505] [Repl] Plugin group_replication reported: Member was expelled from the group due to network failures, changing member status to ERROR.

[ERROR] [MY-011712] [Repl] Plugin group_replication reported: The server was automatically set into read only mode after an error was detected.

[System] [MY-011565] [Repl] Plugin group_replication reported: Setting super_read_only=ON.

[System] [MY-013373] [Repl] Plugin group_replication reported: Started auto-rejoin procedure attempt 1 of 3

IV. Caveats

If the cluster contains an UNREACHABLE node, the following limitations apply:

- The cluster topology cannot be changed: nodes can be neither added nor removed.
- In single-primary mode, if the primary fails, no new primary can be elected.
- If the Group Replication consistency level is AFTER or BEFORE_AND_AFTER, writes wait until the UNREACHABLE node comes back ONLINE and applies the operation.
- Cluster throughput drops. In single-primary mode this can be mitigated by setting group_replication_paxos_single_leader (introduced in MySQL 8.0.27) to ON.

This is why group_replication_member_expel_timeout should not be set too large in production.

References

- Extending replication instrumentation: account for memory used in XCom
  https://dev.mysql.com/blog-archive/extending-replication-instrumentation-account-for-memory-used-in-xcom/
- MySQL Group Replication - Default response to network partitions has changed
  https://dev.mysql.com/blog-archive/mysql-group-replication-default-response-to-network-partitions-has-changed/
- No Ping Will Tear Us Apart - Enabling member auto-rejoin in Group Replication
  https://dev.mysql.com/blog-archive/no-ping-will-tear-us-apart-enabling-member-auto-rejoin-in-group-replication/

Author: Chen Chen

Source: WeChat public account MySQL实战 (ID: MySQLInAction)
