I previously posted a blog about setting up a MySQL Cluster and replicating it to a MySQL Server. That first post covered just the installation; now let's test the cluster and get replication started. Both are extremely easy.
Some people might ask, "Why set up replication from a cluster? It is an HA system, so why replicate?"
The ability to move data out of the cluster lets others do reporting, data exports, and simple backups easily against the replicated MySQL server. Yes, a cluster can handle all of those queries itself, but taking advantage of replication to keep some users and queries off your production database is rarely a bad thing.
Testing the cluster install. Of course, the MySQL documentation covers this as well:
mysql> SHOW ENGINES\G
*************************** 1. row ***************************
Engine: ndbcluster
Support: YES
Comment: Clustered, fault-tolerant tables
Transactions: YES
XA: NO
Savepoints: NO
*************************** 3. row ***************************
Engine: ndbinfo
Support: YES
Comment: MySQL Cluster system information storage engine
Transactions: NO
XA: NO
Savepoints: NO
mysql> USE test;
Database changed
mysql> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.28 sec)
mysql> SHOW CREATE TABLE ctest \G
*************************** 1. row ***************************
Table: ctest
Create Table: CREATE TABLE `ctest` (
`i` int(11) DEFAULT NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
That was easy enough... The table is built and using the ndbcluster engine.
So let's set up replication from the cluster to a MySQL InnoDB server.
First, create a user for replication:
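Beyond the SQL check above, you can also confirm that all nodes are connected from the management client. A quick sketch; the connect string here is the management node address from this example setup, so adjust it for your own install:

```shell
# Ask the management server for the cluster status.
# -e runs a single command non-interactively.
ndb_mgm -c 10.132.241.18:1186 -e "SHOW"
```

You should see the data nodes, the management node, and the SQL node(s) all listed as connected.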
GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'10.%' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;
For this example I am just using a Windows VirtualBox instance. The MySQL Windows installer worked great. For a guy who hardly ever runs Windows, it was nice that this was so easy. (http://dev.mysql.com/downloads/installer/)
On the cluster's SQL node, I made sure to have my server_id and bin logs set:
vi /etc/my.cnf
[mysqld]
server-id=1
log-bin = /var/lib/mysql/mysql_demo-bin
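One point worth noting: NDB writes its changes into the binary log as row events, so it can be worth pinning the binlog format explicitly rather than relying on the default. A sketch, reusing the server-id and log path from above:

```ini
[mysqld]
server-id=1
log-bin=/var/lib/mysql/mysql_demo-bin
# NDB changes reach the binlog as row events, so ROW
# (or MIXED) is the format you want for cluster replication.
binlog_format=ROW
```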
Restarted the MySQL server and confirmed binary logging was running:
mysql> show master status\G
*************************** 1. row ***************************
File: mysql_demo-bin.000001
Position: 112
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
So let's create a simple table that we can use for testing later.
PLEASE take note... This is just a table to show replication working. If you create tables for real use in a cluster that is replicated... ALWAYS have a primary key!
mysql> CREATE TABLE slave_test (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.28 sec)
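To underline that warning, here is a sketch of what a "real" table might look like, with an explicit primary key (the table and column names are purely illustrative). Without one, NDB falls back to a hidden generated key, and row-based replication of updates and deletes gets much less efficient:

```sql
-- Hypothetical example: declare the PRIMARY KEY explicitly
-- instead of letting NDB generate a hidden one.
CREATE TABLE slave_test_pk (
  id INT NOT NULL AUTO_INCREMENT,
  i  INT,
  PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;
```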
Now back on my Windows MySQL server. Since it came with Workbench, I used that to set the server_id (server_id=2) and restart the server. I then opened the SQL editor and added the master settings.
via_mysql_workbench> CHANGE MASTER TO
    -> MASTER_HOST='10.132.241.18',
    -> MASTER_PORT=3306,
    -> MASTER_USER='slave_user',
    -> MASTER_PASSWORD='<password>';
via_mysql_workbench> START SLAVE;
via_mysql_workbench> USE test;
via_mysql_workbench> SHOW CREATE TABLE slave_test ;
'slave_test', 'CREATE TABLE `slave_test` (
`i` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1'
mysql> show slave status\G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
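With the slave threads running, a quick way to prove the whole chain works (assuming the setup above) is to insert on the cluster side and read the row back on the Windows slave:

```sql
-- On the cluster SQL node (master):
INSERT INTO test.slave_test VALUES (42);

-- On the Windows slave, a moment later:
SELECT * FROM test.slave_test;
-- The row should appear once the slave applies the event.
```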
So now, via these blog posts, I have a cluster with a geolocated node, both on Oracle Linux 6, being replicated to a MySQL server at another remote location running on Windows in a VirtualBox instance. I can also now use Workbench to connect to the local Windows server as well as the cluster if needed. This is not a production system, but it does show that Cluster can be installed easily, remotely, and takes advantage of commodity systems.
Part 2 -- Testing Cluster more and Replication Setup