
Sean Hull is a renowned author, speaker, and advisor on MySQL and Oracle databases, web performance and scalability, EC2, Linux, and Internet architecture. He is the author of "Oracle & Open Source" (O'Reilly, 2001) and speaks widely at conferences and forums. With 20 years of professional experience, he consults with large enterprises and startups in New York City and the San Francisco Bay Area. Sean is a DZone MVB.

Make MySQL clustering work for you

05.01.2013

We’ve told you all about MySQL multi-master replication’s limitations. If you write to two masters at once, it is bound to fail for myriad reasons.

Now what? Do what the pros do, that’s what.

A. Don’t write to both masters

Multi-master replication works great as long as you run it in active-passive mode. Never write to two masters at the same time. When you promote a new side to being the master, do it carefully, along the lines of the sketch after this list:

o Put application into read-only mode temporarily (disable editing)
o Set current master to read-only mode
o Change all webservers to point to new master
o Change new master to read-write mode
o Turn application editing back on
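
Here’s a rough sketch of those steps as a script, using the pymysql driver; the host names and credentials are placeholders, and the application-level toggle depends on whatever mechanism your app exposes (see section C).

```python
# Minimal promotion sketch. Assumes pymysql is installed and an account
# with the SUPER privilege; host names and credentials are hypothetical.
import pymysql

OLD_MASTER = "db-master-a.example.com"
NEW_MASTER = "db-master-b.example.com"

def set_read_only(host, value):
    """Flip the server-wide read_only flag on one master."""
    conn = pymysql.connect(host=host, user="admin", password="secret")
    try:
        with conn.cursor() as cur:
            cur.execute("SET GLOBAL read_only = %s" % ("ON" if value else "OFF"))
    finally:
        conn.close()

# 1. put the application into read-only mode (app-specific, not shown)
# 2. stop writes on the current master
set_read_only(OLD_MASTER, True)
# 3. repoint the webservers at NEW_MASTER (config push, DNS, or VIP move)
# 4. once the new master has caught up, allow writes there
set_read_only(NEW_MASTER, False)
# 5. turn application editing back on
```

Before flipping the new master to read-write, confirm it has applied everything from the old one; SHOW SLAVE STATUS or pt-heartbeat will tell you.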

B. Use statement-based replication

Statement-based replication has been around forever; it’s proven, and its limitations are well known. It’s been tested against many, many edge cases. You know what you’re getting.

o supports online schema changes

Perform ALTER TABLE operations, add or drop columns, or modify indexes on the inactive master. Once those changes are complete, promote the inactive side to being the primary master and perform the same changes on the newly demoted side. All with zero downtime for your application. Statement-based replication makes this easy, as differences in columns, column order, and so forth won’t break things.
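
For example, here’s a minimal sketch of the first half of that dance, assuming the pymysql driver, a SUPER-privileged account, and a made-up orders table. Setting sql_log_bin=0 for the session keeps the DDL out of the binlog so it doesn’t replicate to the active side:

```python
# Run a schema change on the inactive master only; names are hypothetical.
import pymysql

conn = pymysql.connect(host="db-master-b.example.com",  # the inactive master
                       user="admin", password="secret", database="shop")
try:
    with conn.cursor() as cur:
        cur.execute("SET SESSION sql_log_bin = 0")   # keep the DDL out of the binlog
        cur.execute("ALTER TABLE orders ADD COLUMN notes TEXT")
        cur.execute("SET SESSION sql_log_bin = 1")
finally:
    conn.close()
# After promoting this side to active master, run the same ALTER on the
# other (now inactive) master.
```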

o facilitates point-in-time recovery

With the SQL of every query written directly to the binlogs, the forensic work of reconstructing what happened during point-in-time recovery becomes much easier.
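
As a rough illustration (binlog file names, dates, and credentials are all made up), restore your last full backup and then replay the binlogs up to the moment just before the bad event:

```python
# Replay binlogs for point-in-time recovery. Assumes mysqlbinlog and the
# mysql client are on the PATH; shell=True is used only for the pipe.
import subprocess

cmd = (
    "mysqlbinlog --start-datetime='2013-05-01 00:00:00' "
    "--stop-datetime='2013-05-01 09:59:59' "
    "mysql-bin.000042 mysql-bin.000043 "
    "| mysql -u admin -p"   # prompts for the password
)
subprocess.run(cmd, shell=True, check=True)
```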

o perform regular checksums against the current master

Use the pt-table-checksum tool to verify data across the master and slaves. Regular integrity checking helps you catch data drift early and keep everything tightly in sync.
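
Here’s a sketch of what a scheduled check might look like, with a made-up DSN and user. pt-table-checksum writes its per-chunk results to the percona.checksums table, and those writes replicate to the slaves so drift shows up there:

```python
# Nightly integrity check with pt-table-checksum (Percona Toolkit must be
# installed); host, user, and schema names are hypothetical.
import subprocess

subprocess.run(
    [
        "pt-table-checksum",
        "--replicate=percona.checksums",   # table that stores the checksums
        "--databases=shop",                # limit the check to one schema
        "h=db-master-a.example.com,u=checksum_user,p=secret",
    ],
    check=True,   # a non-zero exit generally means errors or detected diffs
)
```

If drift does show up, pt-table-sync from the same toolkit can repair it.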

C. Degrade gracefully – build for a read-only mode

- facilitates failover
- facilitates maintenance
- insurance plan
- disaster recovery
- levers & dials for the operations team
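
What the lever looks like in application code is up to you; here’s one illustrative sketch, where the flag source and every name are hypothetical rather than taken from the article:

```python
# A simple read-only switch for write paths in the application.
import functools

def site_is_read_only():
    """Return True when operations has flipped the read-only switch.
    In practice this might read a config service, a cache key, or a flag file."""
    return False  # placeholder

def requires_write(handler):
    """Turn write requests into a friendly error while read-only mode is on."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        if site_is_read_only():
            raise RuntimeError("Editing is temporarily disabled for maintenance")
        return handler(*args, **kwargs)
    return wrapper

@requires_write
def save_comment(post_id, text):
    ...  # normal write path against the master database
```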

D. Put Memcache between application and database

- reduces load on database
- reduces latency for remote write master
- key value stores are easier to scale
- continue to get fast application response (see the cache-aside sketch below)
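
Here’s a minimal cache-aside sketch using python-memcached and pymysql; the host names, key format, and users table are all placeholders:

```python
# Serve reads from Memcache when possible, falling back to MySQL on a miss.
import json
import memcache
import pymysql

mc = memcache.Client(["cache1.example.com:11211"])
db = pymysql.connect(host="db-master-a.example.com", user="app",
                     password="secret", database="shop")

def get_user(user_id, ttl=300):
    key = "user:%d" % user_id
    cached = mc.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no database round trip
    with db.cursor() as cur:
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
    if row is None:
        return None
    user = {"id": row[0], "name": row[1], "email": row[2]}
    mc.set(key, json.dumps(user), time=ttl)   # populate the cache for later reads
    return user
```

Because most reads never touch MySQL, the application stays snappy even when the write master sits in another region.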

E. Misc recommendations

o use provisioned IOPS for the database servers
o use Percona Server 5.6
o use multi-threaded slaves
o use semi-synchronous replication
o use the Percona Toolkit checksum tool (pt-table-checksum) to provide data integrity checks
o use the Percona Toolkit heartbeat tool (pt-heartbeat) to check slave lag
o use Percona XtraBackup to do hot backups (a sketch follows this list)
o perform fire drills to restore backups
o perform fire drills to do point-in-time recovery
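
For the backup fire drills, even a one-liner wrapped in cron gets you started. Here’s a sketch using the innobackupex wrapper that ships with Percona XtraBackup; paths and credentials are placeholders:

```python
# Take a hot backup with Percona XtraBackup's innobackupex wrapper.
import subprocess

subprocess.run(
    ["innobackupex", "--user=backup", "--password=secret", "/backups/mysql"],
    check=True,   # innobackupex writes a timestamped directory under /backups/mysql
)
```

The restore half of the drill (innobackupex --apply-log, then --copy-back) is the part teams most often skip, so practice it regularly.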




Published at DZone with permission of Sean Hull, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)