Tuesday, October 22, 2013

3 Ways to Make Tab Delimited Files From Your MySQL Table

Method One: SELECT ... INTO OUTFILE

Using MySQL's SELECT ... INTO OUTFILE feature, you can direct a query's results to a file and use additional clauses to format the output. I needed two steps in order to get the column headers at the top of the file.

Command:
mysql --user=root --password='' -e "SELECT GROUP_CONCAT(COLUMN_NAME SEPARATOR '\t') FROM INFORMATION_SCHEMA.COLUMNS WHERE table_schema='phineas_and_ferb' and table_name='characters' INTO OUTFILE '~/tmp/output.txt' FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '' ESCAPED BY '' LINES TERMINATED BY '\n';"

mysql --user=root --password='' phineas_and_ferb -e "SELECT * FROM characters INTO OUTFILE '~/tmp/data.txt' FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n';"

cat ~/tmp/data.txt >> ~/tmp/output.txt

Advantage: Optional quoting of output fields allows integers to be interpreted correctly by applications importing the data.
Disadvantages: Adding column headers requires an extra command and the use of a temp file. The queries are more complicated than other methods.


Method Two: Redirect query results to file

Execute a simple query against the database table and redirect it to an output file.
Command:
mysql --user=root --password='' --column-names=TRUE phineas_and_ferb -e "SELECT * from characters;" > ~/tmp/output.txt

Advantages: Column headers are automatically included in the output. Results are automatically tab-delimited.
Disadvantage: None of the output fields are quoted.


Method Three: mysqldump

Run mysqldump to write the data directly to a file. Again, I included an additional command to get the column headers at the top of the output file.

Command:
mysql --user=root --password='' -e "SELECT GROUP_CONCAT(COLUMN_NAME SEPARATOR '\t') FROM INFORMATION_SCHEMA.COLUMNS WHERE table_schema='phineas_and_ferb' and table_name='characters' INTO OUTFILE '~/tmp/output.txt' FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '' ESCAPED BY '' LINES TERMINATED BY '\n';"

mysqldump --single-transaction --user=root --password='' -T ~/tmp/ phineas_and_ferb --fields-enclosed-by=\"

cat ~/tmp/characters.txt >> ~/tmp/output.txt

Advantage: Simplified method to get quoting around output fields.
Disadvantages: All output fields are quoted. Adding column headers requires an extra command and the use of a temp file.

TokuDB - Compression Test — InnoDB vs. TokuDB

Compression — Highest Compression 

Compression is an always-on feature of TokuDB. We tested InnoDB compression with two values of key_block_size (4k and 8k) and with compression disabled. To find the maximum compression, we loaded some web application performance data (log style data with stored procedure names, database instance names, begin and ending execution timestamps, duration row counts, and parameter values). TokuDB achieved 29x compression, far more than InnoDB.
[Graph: MySQL compression, InnoDB vs. TokuDB]
Platform: Ubuntu 11.04; Intel Core i7/920 @ 3.6GHz; 12GB RAM; 2x 7.2k SATA.
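
For reference, here is a rough sketch of the kind of table definitions being compared; the schema below is invented, not the actual benchmark table. InnoDB compression is opted into per table with ROW_FORMAT=COMPRESSED and a KEY_BLOCK_SIZE, while a TokuDB table needs no extra options because compression is always on:

-- Hypothetical log table (not the actual benchmark schema). On MySQL 5.5-era
-- servers, InnoDB compression also needs innodb_file_per_table=1 and
-- innodb_file_format=Barracuda.
CREATE TABLE perf_log (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  proc_name VARCHAR(128),
  instance_name VARCHAR(64),
  begin_ts DATETIME,
  end_ts DATETIME,
  row_count INT,
  params TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;  -- or KEY_BLOCK_SIZE=4

-- The TokuDB version of the same table takes no compression options at all:
-- ... ) ENGINE=TokuDB;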

TokuDB - Replication — Eliminate Slave Lag

Replication — Eliminate Slave Lag 

MySQL’s single-threaded replication design often leads to slave lag. With TokuDB, slave lag is eliminated. This ensures replication can be used for read scaling, backups, and disaster recovery, without resorting to sharding, expensive hardware, or placing limits on what can be replicated. The graph below shows the slave trying to keep up with new orders in a TPCC-like environment. At 1,000 TPS there is no lag for InnoDB or TokuDB. Beyond that, MySQL with InnoDB begins to fall behind.

Platform: Master – Centos 5.6; 2x Xeon L5520; 72GB RAM; 8x 300GB 10k SAS in RAID10. Slave – Centos 5.7; 2x Xeon E5310; 8GB RAM; 6x 1TB SATA with 2 in RAID1 and 4 in RAID0.

With TokuDB - Schema Changes in Seconds, not Hours

Hot Schema — Schema Changes in Seconds, not Hours

TokuDB v5.0 introduced Hot Column Addition (HCAD). You can add or delete columns from an existing table with minimal downtime — just the time for MySQL itself to close and reopen the table. The total downtime is seconds to minutes; we detailed an experiment showing this in a blog post. TokuDB v5.0 also introduced Hot Indexing. You can add an index to an existing table with minimal downtime. The total downtime is seconds to a few minutes, because when the index is finished being built, MySQL closes and reopens the table. This means that the downtime occurs not when the command is issued, but later on. Still, it is quite minimal, as we showed in another blog post.
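
To make that concrete, a hot column addition or index build is just an ordinary DDL statement; the examples below are purely illustrative (the table and column names are invented):

-- On TokuDB, this returns after the brief close/reopen of the table, and the
-- new column is populated in the background (illustrative names only).
ALTER TABLE characters ADD COLUMN nemesis VARCHAR(64) DEFAULT NULL;

-- A hot index build keeps the table available while the index is built; the
-- brief close/reopen happens only when the build finishes.
CREATE INDEX idx_nemesis ON characters (nemesis);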

Platform: CentOS 5.5; 2x Xeon E5310; 4GB RAM; 4x 1TB 7.2k SATA in RAID0.

Monday, October 21, 2013

Database Backup Strategy



So how do you make backups of huge MySQL databases that are in your production environment without affecting your customers? The answer is with Percona’s Xtrabackup tool. It performs binary backups of heavily loaded MySQL servers amazingly fast. It even supports incremental backups so that you don’t have to back up your entire database every single time. However, even it requires a table lock at the end of its procedure if you want the binary log position of the backup. Luckily, there’s a "--no-lock" option and a little trick you can use to get the binary log position when you use it.
Now that we’re using Xtrabackup to back up our live data and we know how to get the binary log position, we just have to automate the procedure. This is harder than it sounds, because every incremental backup needs information about the previous one so that it knows where to start. If you store your backups as compressed data (which you should, to save space), this information must be stored separately, which means you have to parse it out yourself. Also, in order to restore a backup, you need a list of all the incremental backups so that you can restore them in order.
I spent a long time creating the perfect automation script for all this. For a full backup, the procedure is as follows (a sketch of the script appears after the list):
  1. Run ‘innobackupex’ with --no-lock, --stream, and --compress to create a compressed backup.
  2. Use ‘sed’ to parse the log sequence number (LSN) from the output, which is needed for incremental backups.
  3. Save the LSN in a separate file so you can refer to it later.
  4. Save the filename of the backup in its own file, so that you can easily keep track of all the backups you’ve done in case you need to restore them in order.
  5. Upload the final compressed backup and the files from steps 3 and 4 to Amazon’s S3. To do this, it’s best to split the backup into smaller files and upload them in parallel.
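
Here is a minimal sketch of that full-backup run. It assumes innobackupex from Percona XtraBackup 2.x, that the LSN appears in the tool's output as "The latest check point (for incremental): '<LSN>'" (the exact wording can vary by version), and made-up paths, credentials, and bucket names; the splitting and parallel uploads from step 5 are left out:

#!/bin/bash
# Sketch only: full backup with innobackupex (paths and bucket are illustrative).
set -e

BACKUP_DIR=/data/backups
TS=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE=$BACKUP_DIR/full_$TS.xbstream
LOG_FILE=$BACKUP_DIR/full_$TS.log

# 1. Streamed, compressed full backup without the end-of-backup lock.
innobackupex --user=root --password='' --no-lock --stream=xbstream --compress ./ \
    > "$BACKUP_FILE" 2> "$LOG_FILE"

# 2. Parse the log sequence number out of the innobackupex output.
LSN=$(sed -n "s/.*The latest check point (for incremental): '\([0-9]*\)'.*/\1/p" "$LOG_FILE" | tail -1)

# 3. Save the LSN for the next incremental run.
echo "$LSN" > "$BACKUP_DIR/last_lsn"

# 4. A full backup starts a fresh list of backups to restore in order.
echo "$BACKUP_FILE" > "$BACKUP_DIR/backup_list"

# 5. Upload to S3 (upload tool is illustrative; splitting/parallel upload omitted).
aws s3 cp "$BACKUP_FILE" "s3://my-backup-bucket/$(basename "$BACKUP_FILE")"
aws s3 cp "$BACKUP_DIR/last_lsn" "s3://my-backup-bucket/last_lsn"
aws s3 cp "$BACKUP_DIR/backup_list" "s3://my-backup-bucket/backup_list"
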
For an incremental backup, the procedure is very similar (again, a sketch follows the list):
  1. Grab the LSN from the file that was created during the full backup.
  2. Run ‘innobackupex’ with the same options as before, but add --incremental and --incremental-lsn=<LSN>.
  3. Use ‘sed’ to parse the new log sequence number from the output.
  4. Overwrite the LSN file with the new one.
  5. Append the incremental backup’s filename to the backup list file.
  6. Upload everything to S3.
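
And a matching sketch for the incremental run, under the same assumptions as the full-backup sketch above (log wording, paths, and upload tooling are all illustrative):

#!/bin/bash
# Sketch only: incremental backup starting from the last recorded LSN.
set -e

BACKUP_DIR=/data/backups
TS=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE=$BACKUP_DIR/incr_$TS.xbstream
LOG_FILE=$BACKUP_DIR/incr_$TS.log

# 1. The LSN written by the previous full or incremental run.
LAST_LSN=$(cat "$BACKUP_DIR/last_lsn")

# 2. Same options as the full backup, plus --incremental and --incremental-lsn.
innobackupex --user=root --password='' --no-lock --stream=xbstream --compress \
    --incremental --incremental-lsn="$LAST_LSN" ./ \
    > "$BACKUP_FILE" 2> "$LOG_FILE"

# 3. and 4. Parse the new LSN and overwrite the LSN file.
LSN=$(sed -n "s/.*The latest check point (for incremental): '\([0-9]*\)'.*/\1/p" "$LOG_FILE" | tail -1)
echo "$LSN" > "$BACKUP_DIR/last_lsn"

# 5. Append this backup to the ordered restore list.
echo "$BACKUP_FILE" >> "$BACKUP_DIR/backup_list"

# 6. Upload everything (again, illustrative).
aws s3 cp "$BACKUP_FILE" "s3://my-backup-bucket/$(basename "$BACKUP_FILE")"
aws s3 cp "$BACKUP_DIR/last_lsn" "s3://my-backup-bucket/last_lsn"
aws s3 cp "$BACKUP_DIR/backup_list" "s3://my-backup-bucket/backup_list"
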
Restoring the backups is perhaps the trickiest part (sketched after the list):
  1. Grab the list of all the backups that have happened from the backup list file.
  2. Loop through them, and for each one:
    1. Uncompress the backup
    2. Run ‘innobackupex’ with --redo-only, --apply-log, and --incremental-dir=<uncompressed incremental directory> against the base (full backup) directory. For the full backup itself, leave out the --incremental-dir part.
  3. Now that all the incremental backups have been applied to the full backup (now called the base), finish up the process by running ‘innobackupex’ with --apply-log on the base directory.
  4. chown -R mysql:mysql <base directory>
  5. Start MySQL on the base directory
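
Here is a sketch of that restore loop. It assumes each line of the backup list file names an xbstream archive, the first being the full backup, and that qpress is installed for --decompress; directory names are made up:

#!/bin/bash
# Sketch only: rebuild the base backup and replay each incremental onto it.
set -e

BACKUP_DIR=/data/backups
RESTORE_DIR=/data/restore
BASE_DIR=$RESTORE_DIR/base

mkdir -p "$BASE_DIR"
FIRST=1
while read -r BACKUP_FILE; do
    if [ "$FIRST" -eq 1 ]; then
        # Uncompress the full backup; it becomes the base.
        xbstream -x -C "$BASE_DIR" < "$BACKUP_FILE"
        innobackupex --decompress "$BASE_DIR"
        # Prepare the base, keeping it ready to accept incrementals.
        innobackupex --apply-log --redo-only "$BASE_DIR"
        FIRST=0
    else
        # Uncompress the incremental into its own directory.
        INCR_DIR=$RESTORE_DIR/$(basename "$BACKUP_FILE" .xbstream)
        mkdir -p "$INCR_DIR"
        xbstream -x -C "$INCR_DIR" < "$BACKUP_FILE"
        innobackupex --decompress "$INCR_DIR"
        # Apply the incremental delta onto the base.
        innobackupex --apply-log --redo-only --incremental-dir="$INCR_DIR" "$BASE_DIR"
    fi
done < "$BACKUP_DIR/backup_list"

# Final prepare of the assembled base.
innobackupex --apply-log "$BASE_DIR"

# Hand the data directory to MySQL and start it.
chown -R mysql:mysql "$BASE_DIR"
# point datadir at $BASE_DIR (or copy the files back), then start mysqld
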
We’ve been running this script regularly for weeks now, and it has been working great. We do one full backup per day and an incremental backup each hour. Since the backups contain the binary log position, we also have the ability to do point-in-time recovery by replaying the binlogs. It’s important to note that creating these backups uses a lot of disk IOPS, so it’s wise to run them on a separate drive.
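
To illustrate that last point (the binlog file names and start position below are invented), point-in-time recovery boils down to replaying the binary logs from the position recorded with the backup up to the moment you want to recover to:

mysqlbinlog --start-position=107 --stop-datetime="2013-10-21 04:00:00" \
    mysql-bin.000042 mysql-bin.000043 | mysql --user=root --password=''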

Thursday, October 17, 2013

Testing TokuDB – Faster and smaller for large tables

For the past two months, I have been running tests on TokuDB in my free time. TokuDB is a storage engine put out by Tokutek. TokuDB uses fractal tree indexes instead of B-tree indexes to improve performance, which is dramatically noticeable when dealing with large tables (over 100 million rows).

For those that like the information “above the fold”, here is a table with results from a test comparing InnoDB and TokuDB. All the steps are explained in the post below, if you want more details, but here’s the table:

Action                                    | InnoDB                         | TokuDB
Importing ~40 million rows                | 119 min 20.596 sec             | 69 min 1.982 sec
INSERTing again, ~80 million rows total   | 5 hours 13 min 52.58 sec       | 56 min 44.56 sec
INSERTing again, ~160 million rows total  | 20 hours 10 min 32.35 sec      | 2 hours 2 min 11.95 sec
Size of table on disk                     | 42 Gb                          | 15 Gb
COUNT(*) query with GROUP BY              | 58 min 10.11 sec               | 5 min 3.21 sec
DELETE query                              | 2 hours 46 min 18.13 sec       | 1 hour 14 min 57.75 sec
Size of table on disk                     | 42 Gb                          | 12 Gb
OPTIMIZE TABLE                            | 1 day 2 hours 19 min 21.96 sec | 21 min 4.41 sec
Size of table on disk                     | 41 Gb                          | 12 Gb
TRUNCATE TABLE                            | 1 min 0.13 sec                 | 0.27 sec
Size of table on disk                     | 41 Gb                          | 193 Mb (after waiting 60 seconds before doing an ls -l)
OPTIMIZE TABLE                            | 23.88 sec                      | 0.03 sec
Size of table on disk                     | 176 Kb                         | 193 Mb

Installing TokuDB is not quite as easy as plugging in a storage engine. TokuDB requires a patch to the MySQL source code, so you can either patch the source code yourself or download an already-patched version from Tokutek that contains TokuDB as well. I used the already-patched version of MySQL from Tokutek, and it was no different than setting up a regular MySQL install — install, configure and go.

On disk, a table using the TokuDB storage engine is different from both InnoDB and MyISAM. It has a .frm file, as all MySQL tables do. In addition, there is a directory which contains a main.tokudb file with the data, a status.tokudb file (I believe with an action queue), and a key-KEYNAME.tokudb file for each index:

# ls -1
testtoku.frm
testtoku.tokudb


# ls -1 */*
testtoku.tokudb/key-DATE_COLLECTED.tokudb
testtoku.tokudb/key-HASHCODE.tokudb
testtoku.tokudb/main.tokudb
testtoku.tokudb/status.tokudb

A bit of playing around, and we see that we cannot get much from these files — with MyISAM tables, by contrast, you can see the data in the table by running the “strings” command on the data file:

# cd testtoku.tokudb
# file *
key-DATE_COLLECTED.tokudb data
key-HASHCODE.tokudb data
main.tokudb data
status.tokudb data


# strings *
tokudata
tokuleaf
x^cd
 fdac
tokudata
tokuleaf
x^cd
bN fda

For a basic test I compared bulk insertion, simple querying, and deletes with TokuDB and InnoDB. I did not use any special features of TokuDB. I started with an SQL file produced by mysqldump that was 2.0 Gb in size and contained 19 million rows, and performed some simple tests on it. The table has a signed INT as a primary key, and the goal of this test was to see how easy it would be to delete test data. “Test data” is defined as anything that had a particular field (HASHCODE, defined as VARCHAR(32)) in common with more than 10,000 rows.

0) imported 19,425,235 rows

1) SELECT COUNT(*),HASHCODE FROM test[engine] GROUP BY HASHCODE HAVING COUNT(*)>10000;

2) DELETE FROM PRIMARY_KEY_HASH WHERE HASHCODE IN ([list of ids]); This deleted about 3.3% of the records in the table (647,732 rows).

3) OPTIMIZE TABLE test[engine] – to defragment

Tests were done on an Amazon EC2 instance — AMI ID ami-2547a34c, a Fedora 64-bit machine, using the m1.xlarge size (16 Gb RAM).

Action                          | InnoDB            | TokuDB
Importing over 19 million rows  | 33 min 2.107 sec  | 31 min 24.793 sec
Size of table on disk           | 4.4 Gb            | 2.4 Gb
COUNT(*) query with GROUP BY    | 8.64 sec          | 29.28 sec
DELETE query                    | 26.06 sec         | 2 min 19.51 sec
Size of table on disk           | 4.4 Gb            | 1.9 Gb
OPTIMIZE TABLE                  | 35 min 15.04 sec  | 1 min 20.42 sec
Size of table on disk           | 4.3 Gb            | 1.2 Gb

InnoDB performed exceedingly well because the InnoDB buffer pool was sized larger than the data (12 Gb buffer pool vs. 4.4 Gb table), and the data import caused the buffer pool to have all the data and indexes already cached when the queries were run. Even so, TokuDB only fared slightly worse than InnoDB in overall performance.

The most interesting part of the table, for me, is that there is no need to defragment the table. Even though the size on disk does decrease after the OPTIMIZE TABLE, the Tokutek folks explained that there’s a queue of work to be done (such as defragmentation) that is processed automatically, and OPTIMIZE TABLE simply processes the rest of the queue. This is why the size of the table on disk was already reduced even before the OPTIMIZE TABLE was run; if I had waited a minute or so before performing the OPTIMIZE TABLE, the work would have been done automatically and the OPTIMIZE TABLE would have had nothing left to do.

(Specifically, I was told: “The fractal tree is a dynamic data structure which may rearrange itself when queries run. In addition, since the fractal tree is periodically checkpointed, there may be more than one version of the data changed since the last checkpoint was taken in the underlying file.” I was also pointed to a blog post about quantifying fragmentation effects.)

The table shows that for smaller amounts of data (fewer than 100 million rows), TokuDB is about 9% faster for inserts, but somewhat slower for even simple queries and deletes. There is no need to defragment TokuDB, which saves a lot of time in the long run.

As TokuDB is recommended for tables larger than 100 million rows, let’s see this same test with a larger amount of data. This time we started with an import of 39,334,901 rows from a 4.0 Gb file produced by mysqldump. However, since we want more than 100 million rows, after the import we ran two INSERT ... SELECT statements to produce almost 160 million records:

INSERT INTO test[engine] (HASHCODE, [other non-primary key fields]) SELECT HASHCODE, [other non-primary key fields] FROM test[engine];
# after this there are almost 80 million records (78,669,802)

INSERT INTO test[engine] (HASHCODE, [other non-primary key fields]) SELECT HASHCODE, [other non-primary key fields] FROM test[engine];
# after this there are almost 160 million records (157,339,604)

Action                                    | InnoDB                         | TokuDB
Importing ~40 million rows                | 119 min 20.596 sec             | 69 min 1.982 sec
INSERTing again, ~80 million rows total   | 5 hours 13 min 52.58 sec       | 56 min 44.56 sec
INSERTing again, ~160 million rows total  | 20 hours 10 min 32.35 sec      | 2 hours 2 min 11.95 sec
Size of table on disk                     | 42 Gb                          | 15 Gb
COUNT(*) query with GROUP BY              | 58 min 10.11 sec               | 5 min 3.21 sec
DELETE query                              | 2 hours 46 min 18.13 sec       | 1 hour 14 min 57.75 sec
Size of table on disk                     | 42 Gb                          | 12 Gb
OPTIMIZE TABLE                            | 1 day 2 hours 19 min 21.96 sec | 21 min 4.41 sec
Size of table on disk                     | 41 Gb                          | 12 Gb
TRUNCATE TABLE                            | 1 min 0.13 sec                 | 0.27 sec
Size of table on disk                     | 41 Gb                          | 193 Mb (after waiting 60 seconds before doing an ls -l)
OPTIMIZE TABLE                            | 23.88 sec                      | 0.03 sec
Size of table on disk                     | 176 Kb                         | 193 Mb

Clearly, TokuDB is better than InnoDB on all of these measures. And I did not even use any of the special features of TokuDB — no extra indexes were added!

One great aspect of TokuDB is that it gives you approximate statistics on how many rows have been inserted:

mysql> show processlist\G
*************************** 1. row ***************************
     Id: 3
   User: root
   Host: localhost
     db: test
Command: Query
   Time: 1
  State: Inserted about 4000 rows
   Info: INSERT INTO `testtoku` VALUES (14600817,NULL,'c40325fb0406ccf2ad3e3c91aa95a6f2','000bxi504',

The “State” value is per query, so while doing a bulk insert with many rows I saw this number go up and down. It’s very useful nonetheless.

SHOW TABLE STATUS shows that the statistics are exact (like MyISAM — InnoDB metadata is approximate):

mysql> select count(*) from testtoku;
+----------+
| count(*) |
+----------+
| 18431319 |
+----------+
1 row in set (15.01 sec)


mysql> SHOW TABLE STATUS\G
*************************** 1. row ***************************
           Name: testtoku
         Engine: TokuDB
        Version: 10
     Row_format: Dynamic
           Rows: 18431319
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: 19425236
    Create_time: NULL
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)

TokuDB does have that “action queue” I mentioned before — the statistics are exact, but may not be up-to-date if there are still actions to be performed. However, any statement that touches every record will perform all the actions left in the queue — so after statements like OPTIMIZE TABLE and even SELECT COUNT(*) FROM tbl, the statistics are up-to-date.
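
For example (using the test table from above), a statement that touches every row flushes whatever is left in the queue, after which the reported statistics are current:

-- Scanning every row processes the remaining queued work...
SELECT COUNT(*) FROM testtoku;
-- ...so the Rows value reported here is now up-to-date.
SHOW TABLE STATUS LIKE 'testtoku'\G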

Just in case anyone wants it, here is the my.cnf used for both InnoDB and TokuDB tests:

[mysqld]
datadir = /mnt/mysql/data
port            = 3306
socket          = /tmp/mysql.sock
skip-locking
key_buffer_size = 384M
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
thread_concurrency = 8
server-id       = 1
innodb_data_home_dir = /mnt/mysql/data/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /mnt/mysql/data/
innodb_buffer_pool_size = 12G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
innodb_file_per_table