Hands-on with MySQL 6.0: Cluster + Replication -- Introduction and Preparation
1. Introduction
This article tries out cluster + replication on MySQL 6.0.3-beta. No in-depth testing was done; treat it only as an installation and configuration reference.
2. Preparation
For an ordinary cluster you generally need 4 machines, or even just 3 by putting the management node and a SQL node on the same machine, though that is not recommended. For cluster + replication, double the number of machines.
2.1. Ordinary cluster
An ordinary cluster has 4 nodes in total:
1 management node: 192.168.0.1
1 SQL node: 192.168.0.2
2 data nodes: 192.168.0.3 and 192.168.0.4
2.2. Cluster + replication
There are 2 such clusters, one acting as the master and the other as the slave, with replication running between one SQL node in each cluster.
3. Configuration
MySQL is installed in the default path: /usr/local/mysql.
3.1. Management node
First, create the data directory /usr/local/mysql/data, then edit the configuration file:
vi /usr/local/mysql/config.ini
# Default parameters for the data nodes
[ndbd default]
# Number of replicas of each table stored in the cluster.
# With 2 data nodes, each node holds one replica.
NoOfReplicas=2
# Memory used for storing data
DataMemory=1G
# Memory used for storing hash indexes
IndexMemory=1G

# Management node parameters
[ndb_mgmd]
# Node ID
id = 1
# Host of the management node
hostname=192.168.0.1
# Data directory of the management node
datadir=/usr/local/mysql/data

# Data node parameters
[ndbd]
id = 3
hostname=192.168.0.3
datadir=/usr/local/mysql/data

[ndbd]
id = 4
hostname=192.168.0.4
datadir=/usr/local/mysql/data

# SQL node parameters
[mysqld]
id = 2
hostname=192.168.0.2

# One extra API node; add API nodes as needed
[mysqld]
id = 5
3.2. SQL node
On the SQL node, besides the usual parameters it needs to run as a normal mysqld service, joining the cluster only takes a few extra lines:
vi /etc/my.cnf
# mysql cluster
# Tell mysqld to enable the ndbcluster engine
ndbcluster
# Point at the management node
ndb-connectstring=192.168.0.1

[mysql_cluster]
ndb-connectstring=192.168.0.1
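For context, a minimal my.cnf for the SQL node might look like the sketch below. Everything apart from the cluster lines is ordinary mysqld configuration, and values such as datadir and socket are placeholders rather than settings from this setup:

[mysqld]
# ordinary mysqld settings (placeholders)
datadir = /usr/local/mysql/data
socket = /tmp/mysql.sock
# enable the NDB storage engine
ndbcluster
# point mysqld at the management node
ndb-connectstring = 192.168.0.1

[mysql_cluster]
# also used by the NDB tools on this host
ndb-connectstring = 192.168.0.1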
3.3. Data node
Configuring the data nodes is simple; just point them at the management node's IP:
vi /etc/my.cnf
# mysql cluster
# Tell mysqld to enable the ndbcluster engine
ndbcluster
# Point at the management node
ndb-connectstring=192.168.0.1

[mysql_cluster]
ndb-connectstring=192.168.0.1
4. Startup
4.1. Start the management node
This is straightforward; just run:
/usr/local/mysql/bin/ndb_mgmd -f /usr/local/mysql/config.ini
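To confirm the management node came up, you can point the ndb_mgm client at it; the -e option runs a single command non-interactively (this assumes you run it on the management node itself):

/usr/local/mysql/bin/ndb_mgm -e show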
4.2. Start the data nodes
Data nodes are slightly special: the very first start needs the extra --initial option, otherwise start them as usual:
First start:
/usr/local/mysql/bin/ndbd --initial
Subsequent starts:
/usr/local/mysql/bin/ndbd
If your my.cnf is not under /etc/, point ndbd at its actual location, for example:
/usr/local/mysql/bin/ndbd --defaults-file=/usr/local/mysql/my.cnf --initial
or
/usr/local/mysql/bin/ndbd --defaults-file=/usr/local/mysql/my.cnf
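Once both data nodes are up, a quick check from the management node should report each of them as started (again using the non-interactive -e form):

/usr/local/mysql/bin/ndb_mgm -e "all status"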
4.3. Start the SQL node
Start mysqld the normal way; for example, I habitually use /etc/init.d/mysql:
service mysql start
or
/etc/init.d/mysql start
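To verify that the SQL node is actually running with cluster support, you can check that the ndbcluster storage engine is enabled (SHOW ENGINES is standard MySQL; the Support value may read YES or DEFAULT):

mysql> SHOW ENGINES;
-- the ndbcluster row should show Support: YES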
5. Management
On the management node, run the client tool ndb_mgm:
/usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
# Check the cluster status
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @192.168.0.3  (mysql-6.0.3 ndb-6.2.6, Nodegroup: 0, Master)
id=4    @192.168.0.4  (mysql-6.0.3 ndb-6.2.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.1  (mysql-6.0.3 ndb-6.2.6)

[mysqld(API)]   2 node(s)
id=2    @192.168.0.2  (mysql-6.0.3 ndb-6.2.6)
id=5 (not connected, accepting connect from any host)
# Check the status of all data nodes
ndb_mgm> all status
Node 3: started (mysql-6.0.3 ndb-6.2.6)
Node 4: started (mysql-6.0.3 ndb-6.2.6)
# Report a data node's memory usage
ndb_mgm> 3 report memoryusage
Node 3: Data usage is 6%(2154 32K pages of total 32768)
Node 3: Index usage is 2%(3009 8K pages of total 131104)
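Replacing the node ID with all queries every data node in one go; this is standard ndb_mgm syntax:

ndb_mgm> all report memoryusage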
# Restart node 3
ndb_mgm> 3 restart
Node 3: Node shutdown initiated
Node 3: Node shutdown completed, restarting, no start.
Node 3 is being restarted
Node 3: Started (version 6.2.6)
6. Backup and restore
6.1. Backup
Backing up the cluster is simple; just run START BACKUP on the management node:
ndb_mgm> START BACKUP
Waiting for completed, this may take several minutes
Node 3: Backup 4 started from node 1
Node 3: Backup 4 started from node 1 completed
 StartGCP: 7847 StopGCP: 7850
 #Records: 1736024 #LogRecords: 0
 Data: 41649824 bytes Log: 0 bytes
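Each data node writes its portion of the backup into a BACKUP/BACKUP-<id> directory under its own data directory, which is where the restore step below reads from. With the paths used in this article, the backup with ID 5 on node 3 would look like:

ls /usr/local/mysql/data/BACKUP/BACKUP-5
# BACKUP-5.3.ctl  BACKUP-5-0.3.Data  BACKUP-5.3.log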
6.2. Restore
Let's run an actual test. First, generate some data on the SQL node:
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| a              |
+----------------+
1 row in set (0.01 sec)

mysql> show create table a;
+-------+-------------------------------------------------------------------------------------------+
| Table | Create Table                                                                              |
+-------+-------------------------------------------------------------------------------------------+
| a     | CREATE TABLE `a` (  `id` int(11) DEFAULT NULL) ENGINE=ndbcluster DEFAULT CHARSET=latin1  |
+-------+-------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

mysql> select count(*) from a;
+----------+
| count(*) |
+----------+
|  1733965 |
+----------+
1 row in set (0.03 sec)

mysql> checksum table a;
+--------+------------+
| Table  | Checksum   |
+--------+------------+
| test.a | 1641855839 |
+--------+------------+
1 row in set (11.61 sec)
Then take a backup using the method above, and wipe the table's data:
mysql> truncate table a;
Query OK, 0 rows affected (0.68 sec)
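Before restoring, a quick sanity check confirms the table really is empty now:

mysql> select count(*) from a;
-- should return 0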
Next, run the following command on each of the 2 data nodes to restore the data. Note that ndb_restore connects to the cluster as an API node, which is where the spare API slot (id = 5) in config.ini comes in handy:
/usr/local/mysql/bin/ndb_restore -n 3 -b 5 -r data/BACKUP/BACKUP-5
Nodeid = 3
Backup Id = 5
backup path = data/BACKUP/BACKUP-5
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.3.ctl'
File size 8120 bytes
Ndb version in backup files: ndb-6.2.6
Connected to ndb!!
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5-0.3.Data'
File size 20809008 bytes
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(1) fragment 0
_____________________________________________________
Processing data in table: test/def/a(5) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(4) fragment 0
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_2_3(3) fragment 0
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(0) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(2) fragment 0
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.3.log'
File size 44 bytes
Restored 866287 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK
Check on the SQL node:
mysql> select count(*) from a;
+----------+
| count(*) |
+----------+
|   866287 |
+----------+
1 row in set (0.00 sec)
Only about half the rows (866287 of 1733965) are back, because each ndb_restore run restores only that node's fragments of the table. Now run the restore on the other data node:
/usr/local/mysql/bin/ndb_restore -n 4 -b 5 -r data/BACKUP/BACKUP-5
Nodeid = 4
Backup Id = 5
backup path = data/BACKUP/BACKUP-5
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.4.ctl'
File size 8120 bytes
Ndb version in backup files: ndb-6.2.6
Connected to ndb!!
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5-0.4.Data'
File size 20841232 bytes
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(1) fragment 1
_____________________________________________________
Processing data in table: test/def/a(5) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(4) fragment 1
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_2_3(3) fragment 1
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(0) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(2) fragment 1
Opening file 'data/BACKUP/BACKUP-5/BACKUP-5.4.log'
File size 44 bytes
Restored 867678 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK
Query once more:
mysql> checksum table a;
+--------+------------+
| Table  | Checksum   |
+--------+------------+
| test.a | 1641855839 |
+--------+------------+
1 row in set (11.62 sec)
The checksum matches the value recorded before the backup, so the restore is complete.
7. Replication with a cluster
7.1. Overview
Cluster + replication has been supported since MySQL 5.1, which is a pleasant surprise for anyone building a high-availability solution: you get real-time master/slave backup together with cluster-based load balancing. The weak spot, judging from my test results, is that performance under this scheme is still not very high and leaves room for improvement.
Configuring cluster + replication is actually simple: after setting up 2 independent clusters, make one cluster's SQL node a slave of a SQL node in the other cluster. You can even build architectures like these:
3 clusters, 6 SQL nodes, forming circular replication across 3 points.
3 clusters, 6 SQL nodes, forming circular replication across all 6 points, putting the second SQL node of each cluster to use as well.
Because replication under a cluster is row-based, binlog_format must be set to ROW or MIXED.
7.2. Configuration
7.2.1. Configuration on the master
Compared with an ordinary mysqld server configuration, only 2 lines along these lines need to be added:
server-id = 1
binlog_format = "ROW" # or MIXED
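After restarting the master you can confirm the setting took effect and that a binary log is being written; both statements are standard MySQL:

mysql> SHOW VARIABLES LIKE 'binlog_format';
mysql> SHOW MASTER STATUS;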
7.2.2. Configuration on the slave
Newer MySQL versions no longer take the master's details from my.cnf; they are managed with the CHANGE MASTER statement instead. So the slave only needs a few extra lines like these:
server-id = 2
# purge relay logs automatically once they have been applied
relay-log-purge=1
# do not start replication automatically at server startup
skip-slave-start
# do not replicate the mysql system database
replicate-ignore-db=mysql
7.3. Starting the slave
After starting mysqld on both the master and the slave in the normal way, SHOW PROCESSLIST shows that each mysqld has only 2 threads; the "system user" daemon is the thread that waits for events from ndbcluster:
mysql> SHOW PROCESSLIST;
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                             | Info             |
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
|  1 | system user |           |      | Daemon  |    0 | Waiting for event from ndbcluster | NULL             |
|  2 | root        | localhost | NULL | Query   |    0 | NULL                              | show processlist |
+----+-------------+-----------+------+---------+------+-----------------------------------+------------------+
2 rows in set (0.00 sec)
Run SHOW SLAVE STATUS on the slave:
mysql> show slave status\G
Empty set (0.00 sec)
As you can see, nothing replication-related has been configured yet. So first create an account on the master for replication:
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO rep@'192.168.1.2' IDENTIFIED BY 'rep';
Query OK, 0 rows affected (0.00 sec)
Then use CHANGE MASTER to set the master-related parameters:
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.2', MASTER_USER='rep', MASTER_PASSWORD='rep';
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.0.2
                  Master_User: rep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: slave-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB: mysql
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
Both replication threads still show No because the slave has not been started yet. Run START SLAVE to start it:
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
Check again:
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.2
                  Master_User: rep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 256
               Relay_Log_File: slave-relay-bin.000002
                Relay_Log_Pos: 398
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: mysql
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 256
              Relay_Log_Space: 557
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
7.4. A quick test
I will leave this for readers to complete with their usual tests.
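For reference, a minimal sanity check might look like the sketch below; the table name t1 is just an example, not from the original test.

On the master's SQL node:

mysql> CREATE TABLE test.t1 (id INT) ENGINE=ndbcluster;
mysql> INSERT INTO test.t1 VALUES (1);

Then on the slave's SQL node, the row should arrive within a moment:

mysql> SELECT * FROM test.t1;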