Wednesday, May 24, 2017

How to set up and maintain Percona backup using XtraBackup

Backups are your insurance against server failures. A well-designed backup helps you quickly get back online even if the whole server is lost. For modern websites running WordPress, Magento, etc., all critical information is stored in databases. So, a sound backup design for such sites should include reliable database backups.
Website owners use various strategies such as hot copies (aka physical backups), database dumps (aka logical backups), etc. to back up databases. But traditional tools like mysqlhotcopy or mysqldump often lead to performance or availability issues. Percona’s XtraBackup tool avoids these issues while ensuring reliable backups.
However, like any other server system, improper configuration or lack of periodic maintenance can cause XtraBackup systems to fail, and result in inconsistent backups.
Today we’ll go through how to set up a reliable Percona backup system and how to maintain the XtraBackup process.

Why use XtraBackup for Percona backup?

Percona is essentially MySQL server on steroids. So, all backup tools used with MySQL can be used on Percona as well. This includes mysqldump, InnoDB hot backup, etc. However, all these tools impose performance or uptime penalties that busy websites cannot afford. Let’s see how.

Mysqldump blocks new site updates – causing downtime

The most popular backup tool for MySQL-compatible databases is “mysqldump”. Even popular backup tools such as phpMyAdmin, AutoMySQLBackup, phpMyBackup, etc. use mysqldump as their core component.
However, mysqldump has a critical flaw. It makes the entire database read-only during the backup process to prevent database corruption. So, when backing up large databases, new data (such as comments, orders, contacts, etc.) cannot be added for a long time, affecting a significant number of users. The same holds true for very busy websites. Even if the backup takes only a few minutes, it can mean several users being unable to use features that need a database update.
Additionally, mysqldump takes up a lot of system memory to build SQL queries to write into the backup file. It results in high server load, and impacts website performance.
For all these reasons, mysqldump (and other such logical backup tools) cannot be used on large or mission critical websites.

LVM snapshot and other “hot backups” require database to be paused

As explained above, mysqldump takes a long time to complete, and degrades server performance. As an alternative, website owners use tools such as mysqlhotcopy, LVM snapshot, or Zmanda, which take a snapshot of the database files at the file system level.
The problem with these tools is that the database must be paused for a short time while the snapshot is taken. If the database is not stopped, the tool could capture an incomplete database transaction, making the entire table unreliable. For busy websites, stopping the database service, for however short a time, is not an option.
So, none of these “physical backup” tools are reliable for websites that need high uptime.

Percona XtraBackup does not block site updates and minimizes resource usage

What websites with large or busy databases needed was a backup tool that minimizes resource usage, and does not block database updates. MySQL produced such a tool called “MySQL Enterprise Backup”, but it is a commercial tool that costs $5000/server. For many web owners it is a costly proposition.
Percona came up with an open source alternative with the same functionality as MySQL Enterprise Backup, called “XtraBackup”.
XtraBackup copies the data files (without pausing the server), and then uses the transaction log (aka redo log) to fill in any incomplete transactions that were running at the time of taking the backups. Since there’s no large data set manipulation, the memory and I/O usage is low for XtraBackup – which means the website visitors won’t be affected by the backup process.

How to set up Percona backup

Percona servers can use either InnoDB or XtraDB engines based on whether the database service is configured as a stand-alone server or as a cluster. XtraBackup supports non-blocking backups for both these engines.

Factors to consider in designing a backup system

The installation and configuration of XtraBackup is straightforward (which we’ll cover in a minute), but the main question is how to set up the whole backup process. The following are the points you need to consider:
  1. “Freshness” of your backups – In case of a server crash, what’s the least recent data you can accept? For example, can you continue business with a 1-day-old backup without serious business impact? Note that the more “fresh” your backup is, the more disk space it’ll take, and the more performance impact it’ll cause.
  2. Acceptable recovery time – What’s the maximum acceptable time before your site comes back from a server crash? For example, can your site afford to be offline for 1 hour? Note that the lower the recovery time, the more frequent full database backups should be – leading to more disk space usage and performance impact.
  3. Retention time of your backups – How long do you need your backups? For statutory or business purposes, some sites should store their data for a certain period of time. Define your retention time. Is it 3 months? 6 months? 1 year? 3 years?
  4. Safety of your backups – How destruction-proof do you want your backups to be? If you store your backups on the same server, they’ll be lost along with it in a crash. But if they are in a remote data center, they’ll survive even a data-center-wide outage. With a remote server, you incur more costs in hardware, bandwidth, etc.
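The retention decision (point 3) can be enforced mechanically. The following is a minimal sketch; the backup root, the `full-*` directory naming scheme, and the 90-day window are illustrative assumptions, not values from this article:

```shell
# Sketch: prune full-backup directories older than the retention window.
# BACKUP_ROOT, the "full-*" naming scheme, and the 90-day default are
# assumptions -- match them to your own backup design.
BACKUP_ROOT=${BACKUP_ROOT:-/backup/db}
RETENTION_DAYS=${RETENTION_DAYS:-90}

prune_old_backups() {
    # Remove top-level backup directories whose mtime exceeds the window
    find "$BACKUP_ROOT" -maxdepth 1 -type d -name 'full-*' \
         -mtime +"$RETENTION_DAYS" -exec rm -rf {} +
}
```

Run from cron after each successful backup so the retention window is applied continuously.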

Designing the backup system for an eCommerce website

To explain how we decide a backup strategy, I’ll use the example of a Percona Backup system we implemented for an eCommerce website. The site updated their products every day, and needed a way to quickly restore the site in case the database crashed.
So, the backups needed to be at most 1 day old. For this, we set up daily “incremental” backups – backups that copy only the changes since the previous day to the backup location. By writing only a small amount of data, we minimized the system memory and disk I/O needed for the operation.
In case of a crash, we wanted the website to be recoverable within 30 minutes. To enable that, we took weekly full backups, so that the incrementals that need to be applied at any point would be fewer than 7 – thereby minimizing the recovery time.
We then synched these backup files to a remote data center, so that in case the whole data center went down for some reason, we’d be able to restore the site in a remote location within 30 mins.
[Image: Weekly and monthly full Percona backups taken to facilitate faster restores.]

Steps to set up the backup system

Once the backup design is finalized, XtraBackup setup can be done as follows:

1. Install XtraBackup

In CentOS/RedHat servers, first install the Percona repository, and then install XtraBackup using the command:
# yum install percona-xtrabackup

2. Configure XtraBackup

XtraBackup configuration variables are defined in my.cnf (usually /etc/my.cnf). The only mandatory setting is the backup directory, which can be specified as shown:
[xtrabackup]
target_dir = /backup/db/
Other settings can be added as per backup design considerations.

3. Configure cron to run backup automatically

The basic command to run an XtraBackup is,
# xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/backup/data
Depending on the periodicity (daily, weekly or monthly), set up incremental or full backups using a Bash script run from cron.
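Such a cron-driven wrapper can be sketched as follows. The weekly-full/daily-incremental split mirrors the design described above; the paths and the Sunday cutover are assumptions for illustration:

```shell
#!/bin/bash
# Sketch of a cron-driven backup wrapper: full backup on Sundays,
# incremental against the latest full backup on other days.
# Paths and the day-of-week policy are illustrative assumptions.
DATADIR=/var/lib/mysql
BACKUP_ROOT=/backup/db

backup_type() {
    # $1 = day of week as printed by `date +%w` (0 = Sunday)
    if [ "$1" -eq 0 ]; then echo full; else echo incremental; fi
}

run_backup() {
    if [ "$(backup_type "$(date +%w)")" = full ]; then
        xtrabackup --backup --datadir="$DATADIR" \
                   --target-dir="$BACKUP_ROOT/full-$(date +%F)"
    else
        # Base the incremental on the most recent full backup
        local last_full
        last_full=$(ls -d "$BACKUP_ROOT"/full-* 2>/dev/null | tail -1)
        xtrabackup --backup --datadir="$DATADIR" \
                   --incremental-basedir="$last_full" \
                   --target-dir="$BACKUP_ROOT/inc-$(date +%F)"
    fi
}

# Only run a backup when invoked with "run", so the helper
# functions can also be sourced or tested on their own.
if [ "${1:-}" = run ]; then run_backup; fi
```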

4. Sync the backups to a remote location

We recommend that our customers (high traffic websites, managed hosting providers, etc.) sync backups to a remote location. For this we use customized Bash scripts that use rsync over SSH.
The remote server is secured so that only the script can transfer data into it. This is done so that the data cannot be modified inadvertently, either by other programs or by attackers.

How to maintain Percona backup

Backups are too critical to set up and forget. A variety of issues like file system errors, disk space shortages, high server load, etc., can cause a backup to fail. So, it is important to monitor the backup process closely, and quickly fix any issues that are found.
In the Percona databases that we maintain, we monitor server parameters like disk I/O, memory usage and disk space usage to make sure the server conditions do not affect the backup process in any way. In growing websites, the database size is monitored, and additional disk space is added as the backup size increases.
Random restore tests are done once a week on test databases to confirm that all backups are working fine. This procedure serves two purposes – (1) We know that the backups are reliable. (2) We get a way to evaluate our disaster recovery plans. On many occasions these weekly tests gave us an opportunity to improve the disaster recovery processes.

XTRABACKUP IN A NUTSHELL

INTRODUCTION


No one can deny that backup and restore operations are among the most important daily tasks for DBAs. We don't perform them only when adding a new replication slave, implementing disaster recovery procedures, or preparing a testing or staging server for a running production system; even before making any changes to the database schema to enhance database performance, it's recommended to have a fresh backup copy before touching the live system. So if backup and restore operations cannot be handled smoothly, we're going to face many troubles in our daily work. And when we talk about backup and restore operations, the Xtrabackup tool strongly comes to mind.
The Xtrabackup tool is a free open source tool developed by Percona to perform physical backup and restore operations, which are much faster than logical backup and restore using the MySQL utilities (mysqldump and mysql), among many other advantages.
The Xtrabackup tool has many useful options and features, but in this article I'll cover only how to use it to perform simple full, incremental and partial backups and restores, the advantages and disadvantages of each method, and some important tips.
For more information about Xtrabackup tool, you can browse the manual document from here.

FULL BACKUP

If you want a full backup of your entire database system with the shortest/fastest backup and restore time, this method will be very useful for you. Compared to a full logical database backup using the mysqldump and mysql utilities (a very long time to back up and more than double that time to restore), taking a full physical backup using the Xtrabackup tool will make your life much easier.
Below are the steps needed to make a full physical database backup using the XtraBackup tool:

CREATE BACKUP

A simple Xtrabackup command to backup the full databases should be something like:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp /backup/dir/path/full-backup
. . .
innobackupex: completed OK!

A timestamped folder (for ex. "2013-11-06_00-00-00") would be created to contain all backup files if we didn't use the option "--no-timestamp" in the above command (I didn't use the timestamped folders here to just simplify the names for the reader, otherwise, it's very useful when using automated backup scripts).
The Xtrabackup tool has now created the backup files under the folder "full-backup", plus some extra files like the "xtrabackup_checkpoints" file, which contains information (useful for incremental backups) like:
  • backup_type = full-backuped : which indicates the backup type "full backup".
  • from_lsn = 0 : which indicates the log sequence number where the backup process started from (0 means from the beginning).
  • to_lsn = 3768762 : which indicates the log sequence number where the backup process ended at.
Another important file is "xtrabackup_binlog_info" which is very useful in replication setups:
[root@ ~]# cat xtrabackup_binlog_info
mysql-bin.000027    191

Where:
  • mysql-bin.000027: the binary log file name of the master when the backup was created.
  • 191: the binary log position of the backup.

PREPARE FULL BACKUP

The backed-up files are not ready to be restored yet; we must prepare the backup files first, as follows:
[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log /backup/dir/path/full-backup
. . .
innobackupex: completed OK!

Now, the full backup is ready to be restored ...

RESTORE FULL BACKUP

To get the full backup restored, the MySQL instance should be stopped first and then one of the following two procedures should be done:
  • Using the copy back option:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --copy-back /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!

    The Xtrabackup tool - in this method - will copy all files under the backup folder (full-backup) to the MySQL datadir, which must be indicated in the my.cnf file; otherwise, it wouldn't know where the datadir is located.
  • Using the operating system copy or move commands: 
    If you don't want to keep the backup files on your local system (you have another copy on an external server), the move command is a very fast way to get your backup restored:
    [root@ ~]# mv /backup/dir/path/full-backup /var/lib/mysql

  • As the user who moves/copies the files into the MySQL datadir is not the "mysql" user, you should make sure that the mysql user has the right permissions on its datadir; also, the path "/var/lib/mysql" should be replaced with your MySQL datadir if it's set to a different path.
[root@ ~]# chown -R mysql:mysql /var/lib/mysql

After moving/copying the backup files into MySQL datadir, you are free to start the MySQL instance again.

PREPARE SLAVE FROM FULL BACKUP


Preparing a slave using Xtrabackup is a pretty easy and straightforward process:
  • Restore the full backup as explained above.
  • Check the binary logs information of the backup: 
    [root@ ~]# cat xtrabackup_binlog_info
    mysql-bin.000027    191

  • Execute the CHANGE MASTER TO command using the above info and start the slave: 
    SQL> CHANGE MASTER TO
      ->   MASTER_HOST='master_ip',
      ->   MASTER_PORT=master_port,
      ->   MASTER_USER='slave_user_name',
      ->   MASTER_PASSWORD='slave_password',
      ->   MASTER_LOG_FILE='mysql-bin.000027',  ## taken from xtrabackup_binlog_info
      ->   MASTER_LOG_POS=191;                  ## taken from xtrabackup_binlog_info
    SQL> START SLAVE;

For more information on how to set up MySQL Replication, check out this nice manual link.
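The log file and position can be lifted from "xtrabackup_binlog_info" by a script rather than by hand. A hypothetical helper (the function name is ours, not part of Xtrabackup; host and credentials still need to be filled in separately):

```shell
# Hypothetical helper: turn xtrabackup_binlog_info into the
# CHANGE MASTER TO clause for the log file and position.
make_change_master() {
    # $1 = path to xtrabackup_binlog_info
    local file pos rest
    read -r file pos rest < "$1"
    printf "CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;\n" \
           "$file" "$pos"
}
```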

PREPARE GTID SLAVE FROM FULL BACKUP


GTID is supported in Xtrabackup starting from version 2.1.0. To restore a GTID slave server, GTID_MODE should be enabled on the master server before creating its backup; otherwise, no GTID values will be included in the "xtrabackup_binlog_info" file under the backup directory. Then, the following steps should be done:
  • Restore the full backup normally as explained above.
  • Check the GTID value of the backup: 
    [root@ ~]# cat xtrabackup_binlog_info
    mysql-bin.000027    191    b9b4712a-df64-11e3-b391-60672090eb04:1-12

  • Set the variable GTID_PURGED with the GTID value of the backup:
    SQL> SET GLOBAL GTID_PURGED="b9b4712a-df64-11e3-b391-60672090eb04:1-12";

  • Execute the auto position CHANGE MASTER TO command and start the slave: 
    SQL> CHANGE MASTER TO
      ->   MASTER_HOST='master_ip',
      ->   MASTER_PORT=master_port,
      ->   MASTER_USER='slave_user_name',
      ->   MASTER_PASSWORD='slave_password',
      ->   MASTER_AUTO_POSITION = 1;
    SQL> START SLAVE;

For more information on how to set up Transaction-based Replication in MySQL, check out this link.
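Since the GTID set appears as a third field in "xtrabackup_binlog_info" when GTID_MODE is on, the GTID_PURGED statement can also be generated. A hypothetical helper along the same lines as before:

```shell
# Hypothetical helper: build the SET GLOBAL GTID_PURGED statement
# from the third field of xtrabackup_binlog_info (present only when
# the master ran with GTID_MODE enabled).
gtid_purged_sql() {
    # $1 = path to xtrabackup_binlog_info
    local file pos gtid
    read -r file pos gtid < "$1"
    [ -n "$gtid" ] || return 1   # no GTID set recorded in the backup
    printf 'SET GLOBAL GTID_PURGED="%s";\n' "$gtid"
}
```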

ADVANTAGES / DISADVANTAGES

  • Advantages:
    • Fast, simple and easy way to get your full database backed up and restored.
    • All Xtrabackup tool features (like streaming: move the backed up files directly to a remote server) are supported in the full backup method.
    • Simple way to introduce a new slave to the master.
  • Disadvantages:
    • We have to replace the entire MySQL datadir with the new one (in other words, the datadir folder should be empty/removed before the restore process).
    • We can't extract a single database or a single table from the whole backup (unless it's a MyISAM table), which means that you have to take it all or leave it all.

IMPORTANT HINTS

  • The message "innobackupex: completed OK!" should be printed at the end of every innobackupex command; otherwise, the command (backup, prepare or restore) has failed.
  • The ib_logfile files should have the same size on both the source and destination servers. If not, either remove them from the backup folder (which will be restored) before starting the MySQL instance and MySQL will create new ones for you, OR change those files' size in the destination server's configuration file to match the size in the backup before starting the MySQL instance.
  • The MySQL user used by the Xtrabackup tool should have at least the following privileges: RELOAD, LOCK TABLES and REPLICATION CLIENT.
  • To prepare a new slave from another slave, just add the two options ("--slave-info" & "--safe-slave-backup") to the backup command and use the information in the file "xtrabackup_slave_info" under the backup folder to issue the "CHANGE MASTER TO" command on the new slave after finishing the restore.
  • To accelerate the preparation process of your backup, just add the option "--use-memory" to the prepare command in order to allocate more memory (Xtrabackup will use the specified memory as an internal innodb_buffer_pool_size for the prepare process), for example:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --use-memory=512M /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!
  • The preparation process consists of two steps, replaying the committed transactions and rolling back the uncommitted transactions, using the --apply-log option only in the preparation command will do both steps for you.
  • The backup folder "/backup/dir/path/full-backup" should NOT be created before executing the backup command; Xtrabackup will create that folder for you, and it will fail to continue if the folder already exists.
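The first hint above lends itself to automation: capture the innobackupex output (it logs to stderr, e.g. `innobackupex ... 2> backup.log`) and check the last line before trusting the backup. A minimal sketch:

```shell
# Sketch: verify a backup/prepare/restore run succeeded by checking
# for the "completed OK!" marker on the last line of the captured log.
backup_ok() {
    # $1 = path to the captured log file
    tail -n 1 "$1" | grep -q 'completed OK!'
}
```

A wrapper script can call this before rotating old backups or syncing off-site.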

INCREMENTAL BACKUP

When you have a very large database system, you need large enough storage to store your database backups in, and if you want to perform a daily backup, your mission becomes even more difficult. In such cases, the incremental database backup method is very useful. It stores only the database changes (the delta) since the last physical full backup, with the minimum storage space required and in a fast way; hence, you can perform daily backup operations on your database system without needing large storage.
The following steps describe a simple way to perform a physical incremental database backup using the XtraBackup tool:

CREATE INCREMENTAL BACKUP

To perform an incremental backup, we should first perform a full backup - the same as we did in the previous section - to serve as the base backup for the upcoming incremental backups.
Creating the full backup (Base Backup):
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp /backup/dir/path/full-backup
. . .
innobackupex: completed OK!

The "xtrabackup_checkpoints" file contents will be something like:
  • backup_type = full-backuped : which indicates the backup type "full backup".
  • from_lsn = 0 : which indicates the log sequence number where the backup process started from (0 means from the beginning).
  • to_lsn = 3768762 : which indicates the log sequence number where the backup process ended at.
Creating the first incremental backup:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/inc1 --incremental-basedir=/backup/dir/path/full-backup
. . .
innobackupex: completed OK!

We told the Xtrabackup tool to perform an incremental backup by adding the "--incremental" option, and by specifying the full-backup path as the basedir, we told it from which backup it should start tracking the database changes.
The "xtrabackup_checkpoints" file contents will be something like:
  • backup_type = incremental : which indicates the backup type "incremental backup".
  • from_lsn = 3768762 : which indicates the log sequence number where the backup process started from (the same LSN as the previous full backup ended at).
  • to_lsn = 4908762 : which indicates the log sequence number where the backup process ended at.
Creating the second incremental backup:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/inc2 --incremental-basedir=/backup/dir/path/inc1
. . .
innobackupex: completed OK!

We told the Xtrabackup tool to perform an incremental backup by adding the "--incremental" option, and by specifying the 1st incremental backup path as the basedir, we told it to track the database changes since the last incremental backup (not the full backup).
The "xtrabackup_checkpoints" file contents will be something like:
  • backup_type = incremental : which indicates the backup type "incremental backup".
  • from_lsn = 4908762 : which indicates the log sequence number where the backup process started from (the same LSN as the 1st incremental backup ended at).
  • to_lsn = 6508762 : which indicates the log sequence number where the backup process ended at.
Note: We can create as many incremental backups as we want by using the same procedure above.
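The from_lsn/to_lsn chaining described above can be verified mechanically before relying on a chain of incrementals. A hypothetical sanity check (the function names are ours, not part of Xtrabackup):

```shell
# Hypothetical sanity check: an incremental backup's from_lsn must
# equal its base backup's to_lsn, per the xtrabackup_checkpoints files.
lsn_of() {
    # $1 = path to an xtrabackup_checkpoints file, $2 = from_lsn or to_lsn
    awk -v key="$2" '$1 == key { print $3 }' "$1"
}

chain_ok() {
    # $1 = base backup dir, $2 = incremental backup dir
    [ "$(lsn_of "$1/xtrabackup_checkpoints" to_lsn)" = \
      "$(lsn_of "$2/xtrabackup_checkpoints" from_lsn)" ]
}
```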

PREPARE INCREMENTAL BACKUP

As mentioned earlier in the article, the preparation process consists of two steps (replaying the committed transactions and rolling back the uncommitted transactions), and using the --apply-log option alone will do both of them (as we did in the full backup). In incremental backups, however, we MUST do them separately, as follows:
  • Replay the committed transactions on the base backup (by adding the option "--redo-only"):
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!
  • Replay the committed transactions on the 1st incremental backup:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup --incremental-dir=/backup/dir/path/inc1
    . . .
    innobackupex: completed OK!

    Note: we specified the full backup folder here because the replay step appends all changes from the incremental backup to the full backup.
  • Replay the committed transactions on the 2nd incremental backup:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup --incremental-dir=/backup/dir/path/inc2
    . . .
    innobackupex: completed OK!

    Note: here, we didn't use the 1st incremental backup folder, because all changes in the 1st incremental were already appended to the full backup in the previous step.
  • Finally, roll back all uncommitted transactions:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!

    Note: as the full backup folder contains all data now (full + 1st & 2nd incremental), there's no need to do this step on the incremental backup folders.
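The four-step sequence above can be generated by a small script so the order is never mixed up. This sketch only echoes the commands so the sequence can be reviewed first (pipe to sh to execute); credentials are omitted here:

```shell
# Sketch: emit the prepare sequence for a base full backup plus an
# ordered list of incremental backup directories. Echoing instead of
# executing lets the sequence be reviewed; add --user/--password as needed.
prepare_cmds() {
    # $1 = full backup dir; remaining args = incremental dirs in order
    local base=$1 inc
    shift
    echo "innobackupex --apply-log --redo-only $base"
    for inc in "$@"; do
        echo "innobackupex --apply-log --redo-only $base --incremental-dir=$inc"
    done
    # Final pass without --redo-only rolls back uncommitted transactions
    echo "innobackupex --apply-log $base"
}
```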
Now, the incremental backup is ready to be restored ...

RESTORE INCREMENTAL BACKUP

The full backup folder is the only folder that needs to be restored (the incremental backup folders are not needed), as it contains all data after the changes from all incremental backups have been appended. We can restore it the same way we did in the full backup restore.

ADVANTAGES / DISADVANTAGES

  • Advantages:
    • Less storage resources needed.
    • Faster than the full backup.
  • Disadvantages:
    In addition to the disadvantages of the full backup, there are others:
    • A more complicated process to implement as compared to the full backup.
    • The incremental backup strategy is based on the Log Sequence Number, which applies only to the XtraDB and InnoDB storage engines; the others (like MyISAM) will be backed up completely (changed + unchanged data) in each incremental backup.
    • If we have many incremental backups, appending all of them might consume time and might be confusing as well.
    • If one of the incremental backups becomes corrupted or unavailable for any reason, we will not be able to add it, or any incremental backup after it, to the full backup.

IMPORTANT HINTS

  • The backup preparation steps above MUST be followed in the same order.
  • If the "--redo-only" option is not used in any of the preparation steps (except the final step), all upcoming incremental backups will be useless, as we won't be able to add them to the base backup anymore.
  • The replay steps bring all incremental data and append it to the full backup, so the rollback of the uncommitted transactions should be executed only on the full backup (as it already contains the whole data).
  • In incremental backups, Xtrabackup generates two files for every table, ".delta" & ".meta" (for ex. test.ibd.delta & test.ibd.meta); the delta file size reflects the changes applied to that table since the last incremental backup.
  • The preparation time of an individual incremental backup depends on how much data changed since the last incremental.
  • The preparation time for the full backup is - in most cases - really small compared to the incremental ones, because full backups apply only the redo logs while incremental backups apply the deltas plus the redo logs. So if the delta files are big, the preparation process will take longer.
  • Full backups are recommended over incremental backups if there are many changes applied to the DB, while incremental backups are recommended when there are few changes applied to the DB.

DIFFERENTIAL BACKUP

We can use the incremental backup strategy in order to perform differential backups, but we should consider the following points:
  • We always specify the full backup folder as the base backup (in incremental backups we specify the previous incremental folder as the base backup).
  • All incremental backups between the differential and the full backup MUST be ignored when preparing the backup files, because the differential backup already contains all changes since the last full backup.
  • In the backup preparation process, we should treat the last differential backup as the first incremental backup; all incremental backups after it can be applied normally.
Note: Having differential backups in the middle of incremental backups is useful for many reasons, such as:
  • Differential backups reduce the backup preparation steps/time needed, because a differential backup replaces all of its previous incremental backups.
  • Differential backups reduce the chance of losing incremental backups: if a corrupted incremental backup sits in the middle, the differential backup acts as a backup of the previous incremental backups.

PARTIAL BACKUP

Unlike MyISAM, a physical backup of a single database or table is not possible if the table engine is InnoDB. But by using the partial database backup method of the XtraBackup tool, it becomes possible to take physical backups of InnoDB tables just like MyISAM ones (though with some restrictions).
The following steps describe how to perform a partial database backup using the XtraBackup tool:

CREATE PARTIAL BACKUP

A simple Xtrabackup command to backup some databases (or tables) should be something like:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --databases="db1 db2 db3.tbl1" /backup/dir/path/partial-backup
. . .
innobackupex: completed OK!

PREPARE PARTIAL BACKUP

As with the other backup methods, the backed-up files are not ready until we prepare them, this time adding the "--export" option as follows:
[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --export /backup/dir/path/partial-backup
. . .
innobackupex: completed OK!

Some errors may appear regarding the InnoDB tables not included in the backup, but that's fine. There will also be a notification about the creation of an ".exp" file for each table, which will be used in the import (restore) process.

Now, the partial backup is ready to be restored ...

RESTORE PARTIAL BACKUP

The restore process of the partial backup is quite different than the full and incremental backups.
To restore a partial backup, the following steps should be made:
  • Unlike the other methods (full and incremental backups), the MySQL instance on the destination server shouldn't be stopped, because we will have to execute some SQL commands.
  • On the destination server, we should create a new table for each table to be restored, with the same structure as in the partial backup, and then discard its tablespace:
    mysql> CREATE TABLE db.tbl1 (...) ENGINE=INNODB;
    mysql> ALTER TABLE db.tbl1 DISCARD TABLESPACE;
  • Copy the ".ibd" and ".exp" files for each table into the corresponding DB directory, then assign the right permissions to the mysql user:
    [root@ ~]# cp /backup/dir/path/partial-backup/db/tbl1.ibd /var/lib/mysql/db
    [root@ ~]# cp /backup/dir/path/partial-backup/db/tbl1.exp /var/lib/mysql/db
    [root@ ~]# chown -R mysql:mysql /var/lib/mysql/db
  • Now we should tell MySQL to use the new table spaces:
    mysql> ALTER TABLE db.tbl1 IMPORT TABLESPACE;
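For many tables, the per-table steps above are worth scripting (the hints below mention exactly that). This sketch only prints the commands for one table so they can be reviewed; the CREATE TABLE statement must still come from your schema, and all paths/names are placeholders:

```shell
# Sketch: print the restore commands for one table from a partial
# backup. All paths/names are placeholders; review before running.
restore_table_cmds() {
    # $1 = partial backup dir, $2 = MySQL datadir, $3 = db, $4 = table
    echo "mysql -e \"ALTER TABLE $3.$4 DISCARD TABLESPACE\""
    echo "cp $1/$3/$4.ibd $1/$3/$4.exp $2/$3/"
    echo "chown -R mysql:mysql $2/$3"
    echo "mysql -e \"ALTER TABLE $3.$4 IMPORT TABLESPACE\""
}
```

Looping this over the table list turns the "very tedious job" into one reviewed script run.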

ADVANTAGES / DISADVANTAGES

  • Advantages:
    • Although it's a complicated process, it allows us to back up and restore individual InnoDB tables just like MyISAM ones.
    • Useful when we have huge InnoDB tables and want to backup/restore only them.
  • Disadvantages:
    • The streaming feature is not available in the partial backup.
    • Restoring/importing individual tables or databases from a partial backup is not applicable unless the destination server is Percona Server.
    • In addition to restoring the files (copy back), three SQL statements should be executed for each table (table creation + two ALTER statements) to get it ready for use, which means a very tedious job (or a special script) to get the partial backup restored if it contains many tables.

IMPORTANT HINTS

  • Although we didn't remove the MySQL datadir contents before the restore process (as we do for full and incremental backups) and kept the MySQL instance running, we can still restore a partial backup that way (remove the datadir contents and copy/move the backup files into the datadir). But take into consideration that we'll then have only the backed-up databases/tables; all other databases/tables (not included in the partial backup) will be missing.
  • “innodb_file_per_table” server option must be enabled (in both source and destination servers).
  • The "innodb_expand_import" option must be enabled on the destination server; it is available only in Percona Server (which explains why we can restore a partial backup on Percona Server only).
  • Besides the "--databases" option, two alternative options can be used for the same purpose, but we must provide each table in the fully qualified naming format:
    • --include='db.tbl'.
    • --tables-file=/path/to/file.txt ==> in that file, we can add multiple tables one per line in the fully qualified naming format.

Now you can use the Xtrabackup tool to perform full, incremental and partial database backups. You can decide which method(s) suit you according to the advantages and disadvantages of each one, and by considering the important hints for each method you can perform your backups efficiently.
I hope you found this article useful and that you're now familiar with this wonderful tool.