Tuesday, September 20, 2016

Lessons from Deploying MySQL GTID at Scale (at Facebook)

Global Transaction ID (GTID) is one of the most compelling new features of MySQL 5.6. It provides major benefits in failover, point-in-time backup recovery, and hierarchical replication, and it's a prerequisite for crash-safe multi-threaded replication. Over the course of the last few months, we enabled GTID on every production MySQL instance at Facebook. In the process, we learned a great deal about deployment and operational use of the feature. We plan to open source many of our server-side fixes via WebScaleSQL, as we believe others in the scale community can learn from this and benefit from the work we've done.

Background

Traditional MySQL replication is based on relative coordinates — each replica keeps track of its position with respect to its current master's binary log files. GTID enhances this setup by assigning a unique identifier to every transaction, and each MySQL server keeps track of which transactions it has already executed. This permits "auto-positioning," the ability for a replica to be pointed at a master instance without needing to specify a binlog filename or position in the CHANGE MASTER statement.
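With auto-positioning enabled, repointing a replica becomes a single statement. A minimal sketch (host name and credentials are placeholders):

   CHANGE MASTER TO
       MASTER_HOST = 'new-master.example.com',
       MASTER_USER = 'repl',
       MASTER_PASSWORD = 'repl_password',
       MASTER_AUTO_POSITION = 1;
   START SLAVE;

The replica sends its executed GTID set to the master, which then streams only the transactions the replica has not yet applied.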

Auto-positioning makes failover simpler, faster, and less error-prone. It becomes trivial to get replicas in sync after a master failure, without requiring an external tool such as Master High Availability (MHA). Planned master promotions also become easier, as it is no longer necessary to stop all replicas at the same position first. Database administrators need not worry about manually specifying incorrect positions; even in the case of human error, the server is now smart enough to ignore transactions it has already executed.

By permitting replicas to be repointed to masters at different levels of the hierarchy, GTID greatly simplifies complex replication topologies, including hierarchical replication (slaves of slaves). Since a GTID-enabled binlog stream can safely be taken from any member of a replica set, as well as replayed without requiring relative positions, the feature also eases binlog backup and recovery.

Additionally, by combining GTID with semi-synchronous replication, we have implemented automation to safely recover crashed masters as replicas. When a master crashes, we can detect this and promote a replica within 30 seconds without losing data. Later, if the original master was able to be recovered and our automation detects its data is consistent, GTID allows us to repoint it to the new master instead of having to kick off a copy operation to replace it.

Deployment

GTID in MySQL 5.6 is extremely difficult to deploy to an existing large-scale environment. The gtid_mode variable is not dynamic, and also must match between master and replicas. The official deployment plan in the MySQL documentation involves making a master read_only, shutting down all MySQL instances in the replica set simultaneously at the same position, enabling the gtid_mode variable in my.cnf, and then starting all of the instances. This process is fundamentally incompatible with the notion of high availability, making it unviable for production use at scale.

We worked around this problem in fb-mysql by relaxing the constraint that gtid_mode must match between master and replicas. We permit a gtid_mode=ON replica to have a gtid_mode=OFF master, and we suppress assignment of GTIDs to transactions on servers with read_only=ON (i.e., replicas). This permits a high-availability deployment strategy as follows, for each replica set:
  1. On each replica, one at a time, restart MySQL to enable gtid_mode. Afterward, we are in a state where every replica has gtid_mode=ON, but the master still has gtid_mode=OFF.
  2. Perform a master promotion as normal, repointing the replicas and original master to a new master. The original master's replication will intentionally break when started, since it still has gtid_mode=OFF.
  3. Restart the original master to enable gtid_mode. It will now be able to replicate from the new master, and the entire replica set now has gtid_mode=ON.
With sufficient safeguards and validation logic, it is safe to execute this rollout process to a large number of replica sets at a time. During the peak of the deployment process, we were running our rollout script on up to hundreds of replica sets simultaneously.

fb-mysql changes

To deploy GTID and make it work in our production environment, we had to make huge changes both in the MySQL server and in the automation around it. Apart from the deployment changes, during initial testing we encountered a number of serious bugs and performance regressions with GTID.

In some cases, the MySQL server ended up scanning all binary logs whenever a replica connected with the new auto-position protocol, or during crash recovery, in order to initialize the global variables GTID_PURGED and GTID_EXECUTED. Opening the binary logs is required to read the previous_gtid_event present at the beginning of each binary log. In fb-mysql, we fixed these issues by changing the format of the binary log index file to include the previous_gtid_event corresponding to each binary log file name. fb-mysql reads the previous_gtid_events from the binary log index file instead of opening the binary logs directly, which greatly improves performance.

We also found that slaves with GTID are not crash-safe with less durable settings (sync_binlog != 1 and innodb_flush_log_at_trx_commit != 1). Using fully durable settings requires syncing both the binary log and the InnoDB transaction log to disk after each transaction in single-threaded replication mode, which hurts slave apply performance. At Facebook scale, it is important for any feature to be crash-safe in order to avoid operational overhead, so in fb-mysql we fixed this issue by adding a new transactional table (mysql.slave_gtid_info) to store the GTID information consistently.

GTID also helps resolve other replication complexities. For example, we identified that the multi-threaded slave does not work with relay_log_recovery=1, yet relay_log_recovery=1 is required for crash safety with less durable replication settings (sync_relay_log != 1). In fb-mysql we allowed the use of relay_log_recovery=1 with the multi-threaded slave when GTID is enabled, since gaps in execution left behind by a multi-threaded slave crash are automatically filled by GTID auto-positioning.
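As a rough sketch, the replica-side settings involved look like this in my.cnf (option names are standard MySQL 5.6; the relay_log_recovery plus multi-threaded slave combination shown here is only permitted by the fb-mysql change described above):

   [mysqld]
   gtid_mode              = ON
   slave_parallel_workers = 4   # multi-threaded slave
   relay_log_recovery     = 1   # rebuild relay logs from the master after a crash
   sync_relay_log         = 0   # less durable relay log writes, compensated by relay_log_recovery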

Preparation

There were several steps we had to take prior to beginning our GTID deployment. One major step involved updating all of our automation to use GTID and auto-positioning. Enabling gtid_mode without use of auto-positioning is detrimental to replica crash-safety, so it is important to roll out both at once. The most substantial change was to our promotion logic, which now had to cover additional permutations for whether GTID was already enabled, or being enabled for the first time.

Another important prerequisite involves prevention of GTID-incompatible statements. MySQL has an option, enforce_gtid_consistency, that causes these statements to generate an error. For safety's sake, this should always be enabled whenever gtid_mode is also enabled. However, before beginning the rollout, it is necessary to audit applications and preemptively fix any uses of these query patterns. To make this possible at our scale, we augmented MySQL to add user stat counters for these statements, as well as an option to write full information on them to the MySQL error log. This allowed us to easily identify around 20 cases of these query patterns being used, among our thousands of special-case workloads.
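As a reference point, a minimal my.cnf sketch of the related settings (stock MySQL 5.6 also requires binary logging and log_slave_updates before gtid_mode can be turned on):

   [mysqld]
   log_bin                  = binlog
   log_slave_updates        = 1
   gtid_mode                = ON
   enforce_gtid_consistency = 1   # error on statements that cannot be logged safely with GTID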

Finally, we wrote a script to aid in skipping statements, in the rare cases where that is necessary. The age-old variable sql_slave_skip_counter does not work with gtid_mode enabled; instead, a DBA must fiddle with the gtid_next variable and insert an empty transaction. This is painful in an emergency, especially while a large DBA team is still ramping up on GTID knowledge, so having a helper script is prudent.
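The manual procedure such a script wraps looks roughly like this, with a placeholder GTID standing in for the transaction to be skipped:

   STOP SLAVE;
   SET gtid_next = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:12345';
   BEGIN; COMMIT;               -- commit an empty transaction under that GTID
   SET gtid_next = 'AUTOMATIC';
   START SLAVE;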

For a more in-depth technical overview of our GTID-related MySQL changes and automation efforts, please see our slides from Percona Live MySQL Conference 2014.

Summary

Facebook's Global Transaction ID deployment was a cross-functional collaboration between our MySQL engineering, database operations, and data performance teams. Deploying GTID to a Facebook-scale environment required substantial effort, including major improvements to the MySQL server, changes to our automation, and a custom rollout script. Early on in the rollout process, we also uncovered, reported, and fixed several bugs and performance issues in MySQL relating to GTID support. We can happily state that it is now extremely stable in our use, with no new problems encountered in recent months. We plan to make these same fixes to the WebScaleSQL branch of MySQL in the next few weeks, so that others can learn and benefit from this.

Despite the effort involved, deploying GTID has proven to be well-worth the time commitment. The feature has provided us immediate benefits, in addition to being a base for further automation improvements in the near future.

List of GTID bugs
  • #69059 GTID lack a reasonable deployment strategy
  • #69097 Mysqld scans all binary logs on crash recovery
  • #68386 Master scans all binlogs when slave reconnects with auto positioning
  • #70659 Make crash safe slave work with gtid + less durable settings
  • #69943 Transactions skipped on slave after "stop/start slave" using GTID replication
  • #71575 Master logs two consecutive GTIDs causing slaves to miss the first GTID
  • #72313 Stop sql_thread, start sql_thread causes a trx to log with a different GTID
  • #72314 Stop io_thread, start io_thread with GTID may cause data inconsistencies
  • #72635 Data inconsistencies when master has truncated binary log with GTID after crash
  • #73032 Setting gtid_purged may break auto_position and thus slaves
  • #70711 Mysqlbinlog prints invalid SQL from relay logs when GTID is enabled
  • #73397 Make MTS work with relay_log_recovery=1 when GTID is enabled

MySQL Replication and GTID-based failover

For years, MySQL replication was based on binary log events - all a slave knew was the exact event and the exact position it had just read from the master. Any single transaction from the master could end up in different binary logs, and at different positions within those logs, on different slaves. It was a simple solution that came with limitations - more complex topology changes could require an admin to stop replication on the hosts involved, and other changes caused issues of their own; e.g., a slave couldn’t be moved down the replication chain without a time-consuming rebuild (we couldn’t easily change replication from A -> B -> C to A -> C -> B without stopping replication on both B and C). We’ve all had to work around these limitations while dreaming about a global transaction identifier.

GTID was introduced in MySQL 5.6, and brought along some major changes in the way MySQL operates. First of all, every transaction has a unique identifier which identifies it in the same way on every server. It’s no longer important in which binary log position a transaction was recorded; all you need to know is the GTID: ‘966073f3-b6a4-11e4-af2c-080027880ca6:4’. A GTID is built from two parts - the unique identifier of the server where the transaction was first executed, and a sequence number. In the above example, we can see that the transaction was executed by the server with server_uuid ‘966073f3-b6a4-11e4-af2c-080027880ca6’ and was the 4th transaction executed there. This information is enough to perform complex topology changes - MySQL knows which transactions have been executed and therefore knows which transactions need to be executed next. Forget about binary logs, it’s all in the GTID.
So, where can you find GTIDs? On a slave, ‘SHOW SLAVE STATUS’ shows you two columns: Retrieved_Gtid_Set and Executed_Gtid_Set. The first covers GTIDs that were retrieved from the master via replication; the second lists all transactions that were executed on the given host, whether via replication or locally.

Setting up a Replication Cluster the easy way

We’ll use the Severalnines Configurator to automatically deploy our replication setup. First, you need to point your browser at:
http://www.severalnines.com/replication-configurator/
The first page doesn’t give you too many options, so you can just click the ‘Next’ button.

The next screen contains some options regarding the operating system, where the infrastructure will be created, and so forth. All of the options are explained in detail; one worth commenting on is the ‘Number of MySQL Slaves’. By default, the deploy scripts create a master-master pair connected by semi-sync replication. This is the smallest possible block, created when you set the number of slaves to 0. Every slave that you want to create will be connected to this master-master pair.

The third screen is related to the MySQL configuration of the database nodes - you can define how ‘large’ the nodes will be in terms of CPU and memory, and you can also set InnoDB buffer pool sizes and the predicted workload pattern.
The last screen lets you fill in the IP addresses of the ClusterControl server and the nodes in the replication setup.
Finally, you need to fill in your email address, to which a file with the deployment scripts will be sent.

Deployment
For this blog post, let’s assume we want to create the infrastructure on premises. We’ll use a couple of Vagrant nodes. Deployment on EC2 may work in a slightly different way than what you see below.
When all nodes are up and running, you need to copy the tarball that you received via email to the ClusterControl node. Next, untar it, go to the install directory and execute the deploy.sh script:
$ tar zxf s9s-mysql-56.tar.gz
$ cd s9s-mysql-56/mysql/scripts/install/
$ ./deploy.sh
At the beginning of the deploy process you’ll be asked the following:
Can you SSH from this host to all other hosts without password?
Choosing 'n' will allow you to setup shared keys. (y/n):
If you have passwordless SSH set up already, you can choose ‘y’ here. Otherwise you’ll be asked to provide root passwords for all nodes so that SSH keys can be generated and distributed.
The deployment will continue, and you should have your replication setup up and running after 15 to 20 minutes. We can now take a look at how GTID works.

Errant transactions - what is the issue?
As we mentioned at the beginning of this post, GTIDs brought a significant change in the way people should think about MySQL replication. It’s all about habits. Let’s say, for some reason, that an application performed a write on one of the slaves. It shouldn’t have happened but, surprisingly, it happens all the time. As a result, replication stops with a duplicate key error. There are a couple of ways to deal with such a problem. One would be to delete the offending row and restart replication. The other would be to skip the binary log event and then restart replication:
mysql> STOP SLAVE SQL_THREAD; SET GLOBAL sql_slave_skip_counter = 1; START SLAVE SQL_THREAD;
Both ways should bring replication back to work, but they may introduce data drift, so remember that slave consistency should be checked after such an event (pt-table-checksum and pt-table-sync work well here).
If a similar problem happens while using GTID, you’ll notice some differences. Deleting the offending row may seem to fix the issue, and replication should be able to resume. The other method, using sql_slave_skip_counter, won’t work at all - it’ll return an error. Remember, it’s no longer about binlog events; it’s all about whether a GTID has been executed or not.
Why does deleting the row only ‘seem’ to fix the issue? One of the most important things to keep in mind regarding GTID is that, when a slave connects to a master, it checks whether it is missing any transactions that were executed on that master, and if so, it retrieves and executes them. A transaction that was executed locally on a slave but not on its master is called an errant transaction, and it will be propagated in exactly this way once that slave becomes a master. Let’s assume we ran the following SQL to clear the offending row:
mysql> DELETE FROM mytable WHERE id=100;
Let’s check show slave status:
                  Master_UUID: 966073f3-b6a4-11e4-af2c-080027880ca6
           Retrieved_Gtid_Set: 966073f3-b6a4-11e4-af2c-080027880ca6:1-29
            Executed_Gtid_Set: 84d15910-b6a4-11e4-af2c-080027880ca6:1,
966073f3-b6a4-11e4-af2c-080027880ca6:1-29,
And see where the 84d15910-b6a4-11e4-af2c-080027880ca6:1 comes from:
mysql> SHOW VARIABLES LIKE 'server_uuid'\G
*************************** 1. row ***************************
Variable_name: server_uuid
        Value: 84d15910-b6a4-11e4-af2c-080027880ca6
1 row in set (0.00 sec)
As you can see, we have 29 transactions that came from the master (UUID 966073f3-b6a4-11e4-af2c-080027880ca6) and one that was executed locally. Let’s say that at some point we fail over and the master (966073f3-b6a4-11e4-af2c-080027880ca6) becomes a slave. It will check its list of executed GTIDs and will not find this one: 84d15910-b6a4-11e4-af2c-080027880ca6:1. As a result, the related SQL will be executed:
mysql> DELETE FROM mytable WHERE id=100;
This is not something we expected… And if, in the meantime, the binlog containing this transaction had been purged on the old slave, then the new slave will complain after failover:
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

How to detect errant transactions?
MySQL provides two functions which come in very handy when you want to compare GTID sets on different hosts.
GTID_SUBSET() takes two GTID sets and checks if the first set is a subset of the second one.
Let’s say we have the following state.
Master:
mysql> show master status\G
*************************** 1. row ***************************
             File: binlog.000002
         Position: 160205927
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 8a6962d2-b907-11e4-bebc-080027880ca6:1-153,
9b09b44a-b907-11e4-bebd-080027880ca6:1,
ab8f5793-b907-11e4-bebd-080027880ca6:1-2
1 row in set (0.00 sec)
Slave:
mysql> show slave status\G
[...]
           Retrieved_Gtid_Set: 8a6962d2-b907-11e4-bebc-080027880ca6:1-153,
9b09b44a-b907-11e4-bebd-080027880ca6:1
            Executed_Gtid_Set: 8a6962d2-b907-11e4-bebc-080027880ca6:1-153,
9b09b44a-b907-11e4-bebd-080027880ca6:1,
ab8f5793-b907-11e4-bebd-080027880ca6:1-4
We can check if the slave has any errant transactions by executing the following SQL:
mysql> SELECT GTID_SUBSET('8a6962d2-b907-11e4-bebc-080027880ca6:1-153,ab8f5793-b907-11e4-bebd-080027880ca6:1-4', '8a6962d2-b907-11e4-bebc-080027880ca6:1-153, 9b09b44a-b907-11e4-bebd-080027880ca6:1, ab8f5793-b907-11e4-bebd-080027880ca6:1-2') as is_subset\G
*************************** 1. row ***************************
is_subset: 0
1 row in set (0.00 sec)
Looks like there are errant transactions. How do we identify them? We can use another function, GTID_SUBTRACT():
mysql> SELECT GTID_SUBTRACT('8a6962d2-b907-11e4-bebc-080027880ca6:1-153,ab8f5793-b907-11e4-bebd-080027880ca6:1-4', '8a6962d2-b907-11e4-bebc-080027880ca6:1-153, 9b09b44a-b907-11e4-bebd-080027880ca6:1, ab8f5793-b907-11e4-bebd-080027880ca6:1-2') as missing\G
*************************** 1. row ***************************
missing: ab8f5793-b907-11e4-bebd-080027880ca6:3-4
1 row in set (0.01 sec)
Our missing GTIDs are ab8f5793-b907-11e4-bebd-080027880ca6:3-4 - those transactions were executed on the slave but not on the master.

How to solve issues caused by errant transactions?
There are two ways - inject empty transactions or exclude transactions from GTID history.
To inject empty transactions we can use the following SQL:
mysql> SET gtid_next='ab8f5793-b907-11e4-bebd-080027880ca6:3';
Query OK, 0 rows affected (0.01 sec)
 
mysql> begin ; commit;
Query OK, 0 rows affected (0.00 sec)
   
Query OK, 0 rows affected (0.01 sec)
 
mysql> SET gtid_next='ab8f5793-b907-11e4-bebd-080027880ca6:4';
Query OK, 0 rows affected (0.00 sec)
 
mysql> begin ; commit;
Query OK, 0 rows affected (0.00 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
mysql> SET gtid_next=automatic;
Query OK, 0 rows affected (0.00 sec)
This has to be executed on every host in the replication topology that has not executed those GTIDs. If the master is available, you can inject those transactions there and let them replicate down the chain. If the master is not available (for example, it crashed), those empty transactions have to be executed on every slave. Oracle developed a tool called mysqlslavetrx which is designed to automate this process.
Another approach is to remove the GTIDs from history:
Stop slave:
mysql> STOP SLAVE;
Print Executed_Gtid_Set on the slave:
mysql> SHOW MASTER STATUS\G
Reset GTID info:
mysql> RESET MASTER;
Set GTID_PURGED to the correct GTID set, based on the data from SHOW MASTER STATUS; you should exclude the errant transactions from the set:
mysql> SET GLOBAL GTID_PURGED='8a6962d2-b907-11e4-bebc-080027880ca6:1-153, 9b09b44a-b907-11e4-bebd-080027880ca6:1, ab8f5793-b907-11e4-bebd-080027880ca6:1-2';
Start slave:
mysql> START SLAVE\G
In every case, you should verify the consistency of your slaves using pt-table-checksum and pt-table-sync (if needed) - errant transactions may result in data drift.
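A typical check, assuming the Percona Toolkit defaults and placeholder host names and credentials, might look like this:

   # run on the master; checksums are written to percona.checksums and replicate to the slaves
   pt-table-checksum --replicate=percona.checksums h=master_host,u=checksum_user,p=checksum_password
   # print (or, with --execute, apply) the statements needed to bring the slaves back in sync
   pt-table-sync --replicate=percona.checksums h=master_host,u=checksum_user,p=checksum_password --print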

TokuMX Fractal Tree(R) indexes, what are they?

With the recent release of TokuMX 1.0, we’ve made some bold claims about how fast TokuMX can run MongoDB workloads. In this post, I want to dig into one of the big areas of improvement: write performance and reduced I/O.
One of the innovations of TokuMX is that it eliminates a long-held rule of databases: to get good write performance, the working set of your indexes should fit in memory. The standard reasoning goes along the lines of: if your indexes’ working set does not fit in memory, then your writes will induce I/O, you will become I/O bound, and performance will suffer. So, either make sure your indexes fit in memory, or make sure your indexes have an insertion pattern that keeps the working set small, like right-most insertions.
With TokuMX, THIS SIMPLY ISN’T TRUE. The innovation of Fractal Tree indexes is that as your working set grows larger than main memory, write performance stays consistent. This innovation is why Fractal Tree indexes perform so well on write-heavy benchmarks (for both MongoDB and MySQL).
So how does TokuMX achieve this write performance where many other databases struggle? By replacing B-Trees, the predominant storage data structure in many databases (MongoDB, MySQL, BerkeleyDB, etc…) with Fractal Tree indexes, a write-optimized data structure.
What do we mean by a write-optimized data structure?
To understand what we mean, we first need to understand why a B-Tree struggles when indexes no longer fit in memory. Below is a picture of a B-tree.

A B-tree is a simple (and elegant) data structure. The internal nodes store many pivots and pointers, and the leaf nodes store all the data. To insert into a B-tree, one must traverse to the leaf node where the data belongs, and place the data into the leaf node. If all of the data fits in memory, this is fast. But if most of the data does not fit in memory (as in the picture above, where only the internal nodes and very few leaf nodes fit), then retrieving that leaf node will require an I/O. In fact, nearly all insertions will incur an I/O. This is where the I/O bottleneck comes from. This is where the struggling write performance comes from. If your hard disk can do on the order of a few hundred I/O’s per second, then your B-tree can handle at most a few hundred insertions per second. This is why MongoDB and MySQL struggle with iiBench, and users are justifiably told to “keep the working set of indexes in memory”.
So why are Fractal Tree indexes so much better? In short, they drastically reduce the I/O. Here is how.
The key difference between Fractal Tree indexes and B-Trees that explains the difference in write performance can be found in the internal nodes:
  • with B-trees, internal nodes store just pivots and pointers for each child
  • with Fractal Tree indexes, internal nodes store pivots, pointers, and buffers for each child

Note in the picture above that in the internal node, for each child, there is a grey buffer.
The buffers batch up write operations, so that a write works as follows:
  • in the root node, find out which child the write SHOULD traverse down
  • serialize the pending operation into the buffer
  • if the buffer associated with that child has space, return. If the node’s buffer has no space, flush the pending operations in the buffer down a level, thereby making space for future writes.
The flush of a buffer in the root node may cause a cascading flush. That is, the flush in the root node may flood the child with enough data such that now the child’s buffers are full, and the child needs to flush. This keeps happening until data eventually flushes all the way down to leaves.
So why does this algorithm result in such better performance? The short answer is reduced I/O (really, it’s ALL about the I/O). With I/O’s being so expensive, if we must do an I/O we want the benefit we receive to be worth it. With B-trees, on a write, we do an I/O to insert one measly document or row, or key/value pair. With Fractal Tree indexes, by assuming the root node is always in memory, we know that when we perform an I/O on a write, we do it to flush a buffer’s worth of data. This may contain many many documents (or rows, etc…). With each I/O servicing many more writes, Fractal Tree indexes reduce the amount of I/O done by a LARGE factor, thereby eliminating the I/O bottleneck that B-Trees have.
Because of this I/O reduction, Fractal Tree indexes don’t require indexes to fit in memory, and TokuMX is able to achieve such high sustained write performance on data that does not fit in memory.
Another interesting thing to note about these algorithmic properties is that if the data resides in memory, then Fractal Tree indexes are not providing any algorithmic advantage over B-Trees for write performance. If everything fits in memory, then algorithmically, both data structures are fast.

Monday, September 19, 2016

Is Synchronous Replication right for your app?

I talk with a lot of people who are really interested in Percona XtraDB Cluster (PXC), and mostly they are interested in PXC as a high-availability solution.  But what they tend not to think too much about is whether moving from asynchronous to synchronous replication is right for their application or not.

Facts about Galera replication

There are a lot of different facts about Galera that come into play here, and it isn’t always obvious how they will affect your database workload.  For example:
  • Transaction commit takes approximately the worst packet round trip time (RTT) between any two nodes in your cluster.
  • Transaction apply on slave nodes is still asynchronous from client commit (except on the original node where the transaction is committed)
  • Galera prevents conflicting writes to these pending transactions while they are in flight by returning deadlock errors.  (This is actually a form of Eventual Consistency where the client is forced to correct the problem before it can commit.  It is NOT the typical form of Eventual Consistency, known as asynchronous repair, that most people think of.)

Callaghan’s Law

But what does that all actually mean?  Well, at the Percona Live conference a few weeks ago I heard a great maxim that really helps encapsulate a lot of this information and puts it into context with your application workload:
[In a Galera cluster] a given row can’t be modified more than once per RTT
This was attributed to Mark Callaghan from Facebook by Alexey Yurchenko from Codership at his conference talk.  Henceforth this will be known as “Callaghan’s law” in Galera circles forever, though Mark didn’t immediately recall saying it.

Applied to a standalone Innodb instance

Let’s break it down a bit.  Our unit of locking in Innodb is a single row (well, the PRIMARY KEY index entry for that row).  This means that on a single Innodb node we can typically have all sorts of modifications floating around, as long as they don’t touch the same row.  Row locks are held for modifications until the transaction commits, and that takes an fsync to the redo log by default, so applying Callaghan’s law to single-server Innodb, we’d get:
[On a single node Innodb server] a given row can’t be modified more than once per fsync
You can obviously relax that by simply not fsyncing every transaction (innodb_flush_log_at_trx_commit != 1), or work around it by fsyncing to memory (battery- or capacitor-backed write cache), etc., but the principle is basically the same.  If we want this transaction to persist after a crash, it has to get to disk.
This has no effect on standard MySQL replication from this instance, since MySQL replication is asynchronous.

What about semi-sync MySQL replication?

It’s actually much worse than Galera.  As I illustrated in a blog post last year, semi-sync must serialize all transactions and wait for them one at a time.  So, Callaghan’s law applied to semi-sync is:
[On a semi-sync replication master] you can’t commit (at all) more than once per RTT. 

Applied to a Galera cluster

In the cluster we’re protecting the data as well, though not by ensuring it goes to disk (though you can do that).  We protect the data by ensuring it gets to every node in the cluster.
But why every node and not just a quorum?  Well, it turns out transaction ordering really, really matters (really!).  By enforcing replication to all nodes, we can (simultaneously) establish global ordering for the transaction, so by the time the original node gets acknowledgement of the transaction back from all the other nodes, a GTID will also (by design) be established.  We’ll never end up with non-deterministic ordering of transactions as a result.
So this brings us back to Callaghan’s law for Galera.  We must have group communication to replicate and establish global ordering for every transaction, and the expense of doing that for Galera is approximately one RTT between the two nodes in the cluster that are furthest apart (regardless of where the commit comes from!).  The least amount of data we can change in Innodb at a time is a single row, so the most any single row can be modified cluster-wide is once per RTT.

What about WAN clusters?

Callaghan’s law applies to WAN clusters as well.  LANs usually have sub-millisecond RTTs.  WANs usually have anywhere from a few ms up to several hundred.  This really will open a large window where rows won’t be able to be updated more than just a few times a second at best.

Some things the rule does not mean on Galera

  • It does NOT mean you can’t modify different rows simultaneously.  You can.
  • It does NOT mean you can’t modify data on multiple cluster nodes simultaneously.  You can.
  • It does NOT set a lower bound on performance, only an upper bound.  The best performance you can expect is modifying a given row once per RTT; it could get slower if apply times start to lag.

So what about my application?

Think about your workload.  How frequently do you update any given row?  We call rows that are updated heavily “hotspots”.

Examples of hotspots

Example 1: Your application is an online game and you keep track of global achievement statistics in a single table with a row for each stat; there are just a few hundred rows.  When a player makes an achievement, your application updates this table with a statement like this:
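For instance (the table and column names here are only illustrative):

   UPDATE achievement_stats
      SET times_completed = times_completed + 1
    WHERE achievement_id = 1234;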
How many players might accomplish this achievement at the same time?
Example 2: You have users and groups in your application.  These are maintained in separate tables and there also exists a users_groups table to define the relationship between them.  When someone joins a group, you run a transaction that adds the relationship row to users_groups, but also updates groups with some metadata:
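For instance (again with illustrative table and column names):

   BEGIN;
   INSERT INTO users_groups (user_id, group_id) VALUES (42, 7);
   UPDATE groups SET member_count = member_count + 1 WHERE group_id = 7;
   COMMIT;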
How often might multiple users join the same group?

Results

In both of the above examples you can imagine plenty of concurrent clients attempting to modify the same record at once.  But what will actually happen to the clients who try to update the same row within the same RTT?  This depends on which node in the cluster the writes are coming from:
From the same node: This will behave just like standard Innodb.  The first transaction will acquire the necessary row locks while it commits (which will take the 1 RTT).  The other transactions will lock wait until the lock(s) they need are available.  The application just waits in those cases.
From other nodes: First to commit wins.  The others that try to commit AFTER the first and while the first is still in the local apply queue on their nodes will get a deadlock error.
So, the best case (which may not be best for your application database throughput) will be more write latency into the cluster.  The worst case is that your transactions won’t even commit and you have to take some action you normally wouldn’t have had to do.

Workarounds

If your hotspots were really bad in standalone Innodb, you might consider relaxing the fsync:  set innodb_flush_log_at_trx_commit to something besides 1 and suddenly you can update much faster.  I see this tuning very frequently for “performance” reasons when data durability isn’t as crucial.  This is fine as long as you weigh both options carefully.
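For example, a relaxed-durability my.cnf for a standalone server might contain something like this (a crash or power loss can lose roughly the last second of transactions):

   innodb_flush_log_at_trx_commit = 2   # write the redo log at commit, fsync about once per second
   sync_binlog = 0                      # let the OS decide when to flush the binary log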
But in Galera you cannot relax synchronous replication.  You can’t change the law, you can only adapt around it. But how might you do that?

Write to one node

If your issue is really the deadlock errors and not so much the waiting, you could simply send all your writes to one node.  This should prevent the deadlock errors, but will not change the lock waiting that your application will need to do for hotspots.

wsrep_retry_autocommit

If your hotspots are all updates with autocommits, you can rely on wsrep_retry_autocommit to auto-retry the transactions for you.  However, each autocommit is retried only the number of times specified by this variable (default is 1 retry).  This means more waiting, and after the limit is exceeded you will still get the deadlock error.
This is not implemented for full BEGIN … COMMIT multi-statement transactions, since it cannot be assumed that they contain no application logic between the statements that would be unsafe to retry after the database state changes.
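For example, to allow a few more retries than the default of 1, you could put something like this in my.cnf:

   wsrep_retry_autocommit = 4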

retry deadlocks

Now we start to get into (*gasp*) territory where your application needs to be modified.  Generally if you use Innodb, you should be able to handle deadlock errors in your application.  Raise your hands if your application has that logic (I usually get less than 5 people who do out of 100).
But, what to do?  Retrying automatically, or giving your end user a chance to retry manually are typical answers.  However, this means more latency waiting for a write to go through, and possibly some poor user experience.

batch writes

Instead of updating global counters one at a time (from Example 1, above), how about maintaining the counter in memcache or redis and only flushing to the database periodically?

change your schema

In Example 2, above, how about moving the ‘joined’ column to the users_groups table so we don’t need to update the parent group row so often?
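A sketch of that change, with hypothetical column definitions (it assumes nothing else still reads groups.joined):

   ALTER TABLE users_groups ADD COLUMN joined DATETIME;
   ALTER TABLE groups DROP COLUMN joined;

The membership metadata then lives on a row that is unique per user and group, so concurrent joins no longer contend on the parent group row.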

Conclusion

Choosing a system to replicate your data to a distributed system requires tradeoffs.  Most of us are used to the tradeoffs we take when deploying conventional stand-alone MySQL Innodb with asynchronous slaves.  We may not think about the tradeoffs, but we’re making them (anyone obsessively testing slave position to ensure it’s caught up with the master?).
Synchronous replication with PXC and Galera is no different in that there are trade-offs, they just aren’t what we commonly expect.

If Callaghan’s law is going to cause you trouble and you are not prepared to adapt to work with it, PXC/Galera synchronous replication is probably not right for you.

Monday, August 15, 2016

Multi-Master Replication with MySQL Cluster



There are a few asynchronous replication capabilities that are unique to MySQL Cluster. One is that changes are replicated even if they are made directly to the data nodes using the NDB API; another is that replication can be performed in both directions, i.e. multi-master. One of the advantages of this is that you can share both read and write operations across both sites so that no capacity is wasted – it also gives you the confidence that either site is sane and ready to take over for the other at any point. You should aim to minimise how often the same rows are modified at the same time on both sites – conflict detection/resolution has been implemented, but it can only roll back the row changes that conflict with other row changes, rather than the full transaction.
It’s important to note that in this article, I’m not talking about the synchronous replication that takes place between data nodes within a single MySQL Cluster site (that happens in parallel and is orthogonal to the asynchronous replication to a remote site).

When performing multi-master asynchronous replication between two Clusters, one (or more) MySQL Server in each Cluster is nominated as a master and one or more as slaves (the same server can take on both roles, and you can have multiple channels set up in case a channel or MySQL Server is lost). Changes made to the data at either site through any of their MySQL Servers (or directly to the data nodes using the NDB API) will be replicated to the other site.
I will focus on setting up the replication; you can refer to Deploying MySQL Cluster over multiple hosts for the steps to configure and run each MySQL Cluster site.

Most of the steps are very similar to those in the first example – the main differences come when introducing asynchronous replication to a MySQL Cluster instance that already contains data and is up and running (processing updates), which will be covered in a subsequent article.

Setting up MySQL Asynchronous Replication for High Availability

Asynchronous Replication for High Availability
MySQL replication is often positioned as a solution to provide extra throughput for your database (especially when dealing with high numbers of read operations). What tends to be overlooked is how it can be used to provide high availability (HA) – no matter how much redundancy you have at a local level, your system remains at risk from a single catastrophic failure – loss of power, earthquake, terrorist attack etc. By using MySQL asynchronous replication to maintain a geographically remote copy of that data, service can be maintained through any single site failure.
As this replication is asynchronous, there are no low latency demands on the network and so the two sites can be thousands of miles apart while utilising low-cost connections.
This article provides a simple example of how to set up asynchronous replication between 2 MySQL databases in order to provide a highly available solution. First it covers 2 databases whose tables use the MyISAM storage engine, and then 2 MySQL Cluster databases, where I also configure multi-master replication so that changes can be made at either site.
Subsequent articles will build on this to show:
  • Collision detection and resolution when using MySQL Cluster multi-master asynchronous replication
  • Introducing asynchronous replication to a running MySQL Cluster database
Throughout this article, two machines are used: ws1 (192.168.0.3) and ws2 (192.168.0.4). ws1 will be set up as the master and ws2 as the slave (in the multi-master configuration, both act as both master and slave).

Setting up replication for non-Cluster databases

Replication is performed from one MySQL Server to another; the master makes the changes available and then one or more other Servers pick up those changes and apply them to their own databases. In this example, both databases will store the table data using the same storage engine (MyISAM) but it’s possible to mix and match (for example, take a look at  MySQL Cluster – flexibility of replication). As this is intended as a simple introduction to replication, I’m keeping life simple by assuming that this is all being set up before the database goes into production – if that isn’t the case for you and you need to cope with existing data then check out the MySQL documentation or subsequent articles on this site.
The my.cnf files can be set up as normal, but the one for the MySQL Server that will act as the master needs to have binary logging enabled. Also, each of the servers needs a unique server-id. Here are the my.cnf files used for this example:
my.cnf (Master)
[mysqld]
datadir=/home/billy/mysql/myisam/data
basedir=/usr/local/mysql
port=3306
server-id=1
log-bin
my.cnf (Slave)
[mysqld]
datadir=/home/billy/mysql/myisam/data
basedir=/usr/local/mysql
port=3306
server-id=2
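With both servers configured and running, the remaining steps are the standard ones; the user name, password, and binlog coordinates below are placeholders, and the real values come from SHOW MASTER STATUS on ws1:

-- on ws1 (master, 192.168.0.3)
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.4' IDENTIFIED BY 'repl_password';
SHOW MASTER STATUS;           -- note the File and Position values

-- on ws2 (slave, 192.168.0.4), using the values reported above
CHANGE MASTER TO
    MASTER_HOST     = '192.168.0.3',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'ws1-bin.000001',
    MASTER_LOG_POS  = 106;
START SLAVE;
SHOW SLAVE STATUS\G           -- Slave_IO_Running and Slave_SQL_Running should both say Yes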

General Overview of Multimaster Replication

Multimaster replication is a utility that allows data in multiple databases to be automatically kept in sync. For example, in a multimaster replication system, if a row gets inserted into one of the databases in the system, that row will be automatically propagated to all of the other databases in that system. Updates and deletes to the data in any of the databases will be propagated in the same way.
A multimaster replication environment is set up by configuring databases to be part of a “replication group”. One of the databases in the group is defined as the “master definition site,” and all of the other databases in the group are classified as “master sites.” The main difference between the two types of sites is that most of the replication administration commands must be invoked from the master definition site.

There are two basic ways that transactions get propagated to remote databases—“synchronously” and “asynchronously”. Synchronous replication occurs by causing each transaction to be applied to all the master sites in a group immediately. This is achieved by using Oracle’s two-phase commit functionality to ensure that all of the databases in question can apply a given transaction. If any of the sites in the group cannot accept the transaction (for example, because the site’s database has crashed, or the network connection to a database is down), then none of the master sites in the replication group will be able to accept it—the transaction will not be able to take place.
The way asynchronous replication works is that all the transactions that occur on a site are temporarily placed in a buffer, called the “deferred transaction queue,” or deftran queue. Periodically, such as once per minute, all of the transactions in a site’s deftran queue get sent to all of the other sites, by “push” jobs. These jobs get created by calling the “schedule_push” procedure. Finally, the transactions in a deftran queue that have already been sent to other sites must be periodically purged, to prevent the deftran queue from growing too large.

The vast majority of customer sites that use multimaster replication use asynchronous replication rather than synchronous. One of the reasons for this is that asynchronous replication has been available for a much longer time; the initial versions of multimaster replication only allowed for asynchronous propagation. The main reason that asynchronous is used, though, is because it has many advantages over synchronous.

First of all, asynchronous replication uses much less network bandwidth and provides higher performance than synchronous replication. The primary reason for this is that it is more efficient to store multiple transactions and then propagate them all as a group, rather than to propagate each transaction separately.

This is particularly important when the sites in question are very far apart geographically (such as having one site in San Francisco and another in New York). Another reason for these bandwidth and performance improvements is that there is much more overhead associated with synchronous replication because each and every transaction requires that separate connections be established to all of the other sites in the replication group. With asynchronous replication, fewer connections need to be established, since transactions are propagated as a group.

The biggest advantage of asynchronous replication, though, is that it provides for high availability of the replication group. With asynchronous replication, if one of the sites in the replication group crashes, all of the other sites will still be able to accept updates—the transactions that are made on the remaining sites will just “stack up” in those sites’ deftran queues until the down site becomes available.

On the other hand, with synchronous replication, if any one of the sites becomes unavailable (such as because of a database crash or a network failure) then none of the sites will be updatable. This is because with synchronous replication, each and every transaction must be able to be immediately applied to all of the sites in the replication group, and of course if a site is unreachable no transactions will be able to be applied to it. This means that not only does synchronous replication not provide any higher database availability, it can actually provide lower availability than using a single database!

Multi-Master Replication

If You *Must* Deploy Multi-Master Replication, Read This First

An increasing number of organizations run applications that depend on MySQL multi-master replication between remote sites.   I have worked on several such implementations recently.  This article summarizes the lessons from those experiences that seem most useful when deploying multi-master on existing as well as new applications.

Let's start by defining terms.  Multi-master replication means that applications update the same tables on different masters, and the changes replicate automatically between those masters.  Remote sites mean that the masters are separated by a wide area network (WAN), which implies high average network latency of 100ms or more.  WAN network latency is also characterized by a long tail, ranging from seconds due to congestion to hours or even days if a ship runs over the wrong undersea cable.

With the definitions in mind we can proceed to the lessons.  The list is not exhaustive but includes a few insights that may not be obvious if you are new to multi-master topologies.  Also, I have omitted issues like monitoring replication, using InnoDB to make slaves crash-safe, or provisioning new nodes.  If you use master/slave replication, you are likely familiar with these topics already.

1. Use the Right Replication Technology and Configure It Properly

The best overall tool for MySQL multi-master replication between sites is Tungsten.  The main reason for this assertion is that Tungsten uses a flexible, asynchronous, point-to-point, master/slave replication model that handles a wide variety of topologies such as star replication or all-to-all.  Even so, you have to configure Tungsten properly.  The following topology is currently my favorite:
  • All-to-all topology.  Each master replicates directly to every other master.  This handles prolonged network outages or replication failures well, because one or more masters can drop out without breaking  replication between the remaining masters or requiring reconfiguration.  When the broken master(s) return, replication just resumes on all sides.  All-to-all does not work well if you have a large number of masters.  
  • Updates are not logged on slaves.  This keeps master binlogs simple, which is helpful for debugging, and eliminates the possibility of loops.  It also requires some extra configuration if the masters have their own slaves, as would be the case in a Tungsten Enterprise cluster.
There are many ways to set up multi-master replication, and the right choice varies according to the number of masters, whether you have local clustering, or other considerations.  Giuseppe Maxia has described many topologies, for example here, and the Tungsten Cookbook has even more details.

One approach you should treat with special caution is MySQL circular replication.  In topologies of three or more nodes, circular replication results in broken systems if one of the masters fails.  Also, you should be wary of any kind of synchronous multi-master replication across sites that are separated by more than 50 kilometers (i.e. 1-2ms latency).  Synchronous replication makes a siren-like promise of consistency, but the price you pay is slow performance under normal conditions and broken replication when WAN links go down.

2. Use Row-Based Replication to Avoid Data Drift

Replication depends on deterministic updates--a transaction that changes 10 rows on the original master should change exactly the same rows when it executes against a replica.  Unfortunately many SQL statements that are deterministic in master/slave replication are non-deterministic in multi-master topologies.  Consider the following example, which gives a 10% raise to employees in department #35.

   UPDATE emp SET salary = salary * 1.1 WHERE dep_id = 35;

If all masters add employees, then the number of employees who actually get the raise will vary depending on whether such additions have replicated to all masters.  Your servers will very likely become inconsistent with statement replication.  The fix is to enable row-based replication using binlog-format=row in my.cnf.  Row replication transfers the exact row updates from each master to the others and eliminates ambiguity.

3. Prevent Key Collisions on INSERTs

For applications that use auto-increment keys, MySQL offers a useful trick to ensure that such keys do not  collide between masters using the auto-increment-increment and auto-increment-offset parameters in my.cnf.  The following example ensures that auto-increment keys start at 1 and increment by 4 to give values like 1, 5, 9, etc. on this server.

server-id=1
auto-increment-offset = 1
auto-increment-increment = 4
This works so long as your applications use auto-increment keys faithfully.  However, any table that either does not have a primary key or where the key is not an auto-increment field is suspect.  You need to hunt them down and ensure the application generates a proper key that does not collide across masters, for example using UUIDs or by putting the server ID into the key.   Here is a query on the MySQL information schema to help locate tables that do not have an auto-increment primary key. 
SELECT t.table_schema, t.table_name 
  FROM information_schema.tables t 
    WHERE NOT EXISTS 
      (SELECT * FROM information_schema.columns c
       WHERE t.table_schema = c.table_schema  
         AND t.table_name = c.table_name
         AND c.column_key = 'PRI'
         AND c.extra = 'auto_increment')

4. Beware of Semantic Conflicts in Applications

Neither Tungsten nor MySQL native replication can resolve conflicts, though we are starting to design this capability for Tungsten.  You need to avoid them in your applications.  Here are a few tips as you go about this.
First, avoid obvious conflicts.  These include inserting data with the same keys on different masters (described above), updating rows in two places at once, or deleting rows that are updated elsewhere.  Any of these can cause errors that will break replication or cause your masters to become out of sync.  The good news is that many of these problems are not hard to detect and eliminate using properly formatted transactions.  The bad news is that these are the easy conflicts.  There are others that are much harder to address.  
For example, accounting systems need to generate unbroken sequences of numbers for invoices.  A common approach is to use a table that holds the next invoice number and increment it in the same transaction that creates a new invoice.  Another accounting example is reports that need to read the value of accounts consistently, for example at monthly close.  Neither example works off-the-shelf in a multi-master system with asynchronous replication, as they both require some form of synchronization to ensure global consistency across masters.  These and other such cases may force substantial application changes.  Some applications simply do not work with multi-master topologies for this reason. 

5. Remove Triggers or Make Them Harmless
Triggers are a bane of replication.  They conflict with row replication if they run by accident on the slave.  They can also create strange conflicts due to weird behavior/bugs (like this) or other problems like needing definer accounts present.  MySQL native replication turns triggers off on slaves when using row replication, which is a very nice feature that prevents a lot of problems.  
Tungsten on the other hand cannot suppress slave-side triggers.  You must instead alter each trigger to add an IF statement that prevents the trigger from running on the slave.  The technique is described in the Tungsten Cookbook.  It is actually quite flexible and has some advantages for cleaning up data because you can also suppress trigger execution on the master.  
You should regard all triggers with suspicion when moving to multi-master.  If you cannot eliminate triggers, at least find them, look at them carefully to ensure they do not generate conflicts, and test them very thoroughly before deployment.  Here's a query to help you hunt them down: 
SELECT trigger_schema, trigger_name 
  FROM information_schema.triggers;
6. Have a Plan for Sorting Out Mixed Up Data

Master/slave replication has its discontents, but at least sorting out messed up replicas is simple: re-provision from another slave or the master.  Not so with multi-master topologies--you can easily get into a situation where all masters have transactions you need to preserve and the only way to sort things out is to track down differences and update masters directly.  Here are some thoughts on how to do this.
  1. Ensure you have tools to detect inconsistencies.  Tungsten has built-in consistency checking with the 'trepctl check' command.  You can also use the Percona Toolkit pt-table-checksum to find differences.  Be forewarned that neither of these works especially well on large tables and may give false results if more than one master is active when you run them.  
  2. Consider relaxing foreign key constraints.  I love foreign keys because they keep data in sync.  However, they can also create problems for fixing messed up data, because the constraints may break replication or make it difficult to go table-by-table when synchronizing across masters.  There is an argument for being a little more relaxed in multi-master settings. 
  3. Switch masters off if possible.  Fixing problems is a lot easier if you can quiesce applications on all but one master.  
  4. Know how to fix data.  Being handy with SQL is very helpful for fixing up problems.  I find SELECT INTO OUTFILE and LOAD DATA INFILE quite handy for moving changes between masters.  Don't forget SET SESSION SQL_LOG_BIN=0 to keep changes from being logged and breaking replication elsewhere.  There are also various synchronization tools like pt-table-sync, but I do not know enough about them to make recommendations.
At this point it's probably worth mentioning commercial support.  Unless you are a replication guru, it is very comforting to have somebody to call when you are dealing with messed up masters.  Even better, expert advice early on can help you avoid problems in the first place.

(Disclaimer:  My company sells support for Tungsten so I'm not unbiased.  That said, commercial outfits really earn their keep on problems like this.)

7. Test Everything

Cutting corners on testing for multi-master can really hurt.  This article has described a lot of things to look for, so put together a test plan and check for them.  Here are a few tips on procedure:
  1. Set up a realistic pre-prod test with production data snapshots.  
  2. Have a way to reset your test environment quickly from a single master, so you can get back to a consistent state to restart testing. 
  3. Run tests on all masters, not just one.  You never know if things are properly configured everywhere until you try. 
  4. Check data consistency after tests.  Quiesce your applications and run a consistency check to compare tables across masters. 
It is tempting to take shortcuts or slack off, so you'll need to find ways to improve your motivation.  If it helps, picture yourself explaining to the people you work for why your DBMS servers have conflicting data with broken replication, and the problem is getting worse because you cannot take applications offline to fix things.  It is a lot easier to ask for more time to test.  An even better approach is to hire great QA people and give them time to do the job right.

Summary

Before moving to a multi-master replication topology you should ask yourself whether the trouble is justified.  You can get many of the benefits of multi-master with system-of-record architectures with a lot less heartburn.  That said, an increasing number of applications do require full multi-master across multiple sites.  If you operate one of them, I hope this article is helpful in getting you deployed or improving what you already have.

Tungsten does a pretty good job of multi-master replication already, but I am optimistic we can make it much better.  There is a wealth of obvious features around conflict resolution, data repair, and up-front detection of problems that will make life better for Tungsten users and reduce our support load.  Plus I believe we can make it easier for developers to write applications that run on multi-master DBMS topologies.  You will see more about how we do this in future articles on this blog.