Is your MySQL buffer pool warm? Make it sweat!

04.18.2013

This post comes from Peter Zaitsev at the MySQL Performance Blog.

Today’s blog post diving into the waters of the MySQL buffer pool is a cross-post from Groupon’s engineering blog, and is Part 1 of 2. Thank you to Kyle Oppenheim at Groupon for contributing to this project and post. We’ll be posting Part 2 on Thursday. I’ll be at the Percona Live MySQL Conference and Expo next week in Santa Clara, California, so look for me there – I’d love to connect and talk more about MySQL buffer pools or anything else that’s on your mind!

There are numerous solutions for MySQL high availability. Many rely on MySQL’s asynchronous replication to maintain a warm standby server which is flipped into service if the active master server has an issue. At Groupon, our standard MySQL database configuration follows this active/passive pattern. There is a virtual IP that points to the active server of a pair. The passive server has mysqld running and replicating from the active master. In theory, failing over is the simple matter of moving the virtual IP. In practice, it’s slightly more complicated. To manage this process, we use a tool co-developed with Percona.
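
For illustration only, here is a minimal sketch of moving a virtual IP between two Linux hosts with iproute2. The interface name and addresses are made up, and a real failover tool must also handle fencing, split-brain, and health checks:

# On the old active server: release the virtual IP (hypothetical address and interface)
ip addr del 10.0.0.100/24 dev eth0

# On the new active server: claim the virtual IP and send gratuitous ARP
# so switches and clients update their ARP caches
ip addr add 10.0.0.100/24 dev eth0
arping -c 3 -U -I eth0 10.0.0.100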

“Warm standby server”? Did you catch that? What does that mean? In most of these high-availability solutions it means that mysqld is running on the standby server and that replication is not lagging. Unfortunately, that’s often not sufficient for failover during peak traffic. The standby server does not process query traffic except for replication. The buffer pool and adaptive hash index on the standby server will not have recently accessed pages. When it starts handling queries after failover, the lower cache hit rates can lead to outages. Specifically, at Groupon, our servers would be heavily I/O bound after failover as the buffer pool pages were loaded from disk.
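
One quick way to gauge how cold a standby really is uses InnoDB’s standard status counters: Innodb_buffer_pool_read_requests counts logical reads, and Innodb_buffer_pool_reads counts the subset that had to go to disk.

# Sample the buffer pool hit rate; on a cold standby the ratio of
# Innodb_buffer_pool_reads to Innodb_buffer_pool_read_requests will be high
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"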

Replaying Queries

Working with Groupon, we have developed a solution to keep the standby server’s caches hot. (See my FOSDEM 2013 slides for information about discarded designs and benchmarks.)

[Figure: diagram of the playback architecture]

First, we set long_query_time to 0 in order to log every query. (See part two for handling massive slow log volume.) The slow logs are served, via HTTP, by mysql_slowlogd. This daemon is similar to running tail -f slow.log, except that it knows how to follow the log stream across log rotation events. On the standby server, the logs are replayed with Percona Playback by streaming the slow log from the active server.
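
For reference, enabling full query logging at runtime might look like the sketch below. You would also persist these settings in my.cnf so they survive a restart, and note that long_query_time applies only to sessions started after the change:

# On the active master: log every query to the slow log
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0;"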

wget -q -O - http://master_server:3307/slow | percona-playback \
    --mysql-host 127.0.0.1 --mysql-username playback --mysql-password PaSSwOrd \
    --mysql-schema schema_name --query-log-stdin \
    --dispatcher-plugin thread-pool --thread-pool-threads-count 100 \
    --session-init-query "set innodb_fake_changes=1" \
    > /var/log/playback.log 2>&1 &

Our awesome development team added a few features to Percona Playback to make it work better for this use case; you will need version 0.6 or later to get them. Be aware that playback output is very verbose: in production you will most likely want to redirect it to /dev/null and keep only a log file for debugging purposes.

  1. Streaming logs from stdin: Percona Playback now supports the --query-log-stdin command-line option for accepting a never-ending stream of queries to play back.
  2. Read-only playback: Using the --session-init-query command-line option, we set innodb_fake_changes to prevent INSERTs, UPDATEs, and DELETEs from corrupting the data on the standby server; see the sketch after this list. You will need Percona Server in order to use innodb_fake_changes.
  3. Thread pool: Percona Playback added a connection pool option via --dispatcher-plugin thread-pool, which allows connection reuse. This is necessary when running a large stream of queries.
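
As a rough smoke test of the read-only behavior, you can enable fake changes in a session by hand. The table and column here are hypothetical, and this requires Percona Server:

# With fake changes on, DML does the read and locking work (pulling pages
# into the buffer pool) but is rolled back at commit, so no data is changed
mysql -h 127.0.0.1 -u playback -p'PaSSwOrd' schema_name -e "
  SET SESSION innodb_fake_changes = 1;
  UPDATE some_table SET some_col = some_col WHERE id = 42;"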

Benchmarks

We benchmarked with slow query logs captured from our production systems. We restored a production database backup to our test database so that our test database was consistent before applying the captured query traffic. This is an important step because update statements that match no rows or insert statements that have duplicate key errors may be faster than an actual database write.

The slow logs were split into chunks, each containing roughly 1M queries. We warmed the cold database with the first chunk and replayed the second chunk after the warmup.
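
A rough sketch of that splitting step, keying on the "# User@Host:" header that precedes each slow-log entry; chunk boundaries are approximate and the file names are made up:

# Start a new chunk file every 1,000,000 entries; lines before the first
# header (the log preamble) are skipped
awk '/^# User@Host:/ && (n++ % 1000000 == 0) { file = sprintf("chunk_%03d.log", ++c) }
     { if (file) print > file }' slow.log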

[Figure: disk read I/O for chunk 1 followed by chunk 2]

The y-axis is logarithmic, so the difference in I/O usage is two orders of magnitude. All of the graphs looked like this (we did 39 measurements); the next graph shows chunk 4’s workload warmed up with chunk 3.

[Figure: disk read I/O for chunk 3 followed by chunk 4]

The result is similar for every graph: each chunk warmed up the buffer pool for the next one.

[Figure: disk read I/O for chunk 1 replayed twice]

As an additional experiment, we tried replaying the same chunk again: we expected everything to be cached if we warmed the cache with the exact same data. All the graphs from such self-warming experiments look like this one, where the green part of the graph lines up with the blue part.

Check back Thursday for Part 2!




Published at DZone with permission of Peter Zaitsev, author and DZone MVB.
