Thursday, May 17, 2018

10 essential MySQL tools for admins

The following 10 open source tools are valuable resources for anyone using MySQL, from a stand-alone instance to a multiple-node environment. The list has been compiled with variety in mind. You will find tools to help back up MySQL data, increase performance, guard against data drift, and log pertinent troubleshooting data when problems arise.
There are several reasons why you should consider these tools instead of creating your own in-house tools. First, thanks to their wide use, they're mature and field-tested. Second, because they are free and open source, they benefit from the knowledge and experience of the continually expanding MySQL community. Finally, these tools are actively developed, and many are professionally supported (either for free or commercially), so they continue to improve and adapt with the evolving MySQL industry.
Keep in mind that there are many more tools worthy of your attention. I have chosen to emphasize free and open source, and to err on the side of usefulness and usability. Also note that all are Unix command-line programs but one, in large part because MySQL is more widely deployed and developed on Unix systems.
Now, let's meet the first of the 10 essential MySQL tools.

Essential MySQL tool No. 1: mk-query-digest

mk-query-digest can help you find the queries that take the longest to execute relative to other queries. Optimizing these slow queries makes MySQL run faster by removing the greatest delays. The real art of query optimization is more nuanced, but the basic goal is the same: find slow queries, optimize them, and improve query response times.
The tool is easy to use; executing mk-query-digest slow-query.log will print the slowest queries in slow-query.log. The tool includes support for "query reviews," for reporting queries you have not yet seen or approved, making frequent log analyses quick and efficient.

Essential MySQL tool No. 2: mydumper

Essential MySQL tool No. 3: xtrabackup and xtrabackup-manager

Essential MySQL tool No. 4: tcprstat

Essential MySQL tool No. 5: mk-table-checksum

mk-table-checksum works with both stand-alone servers and servers in a replication hierarchy, where the tool's greatest value is easily seen. Verifying table data between a master and a slave must account for replication consistency. Because changes to the master replicate to slaves with some amount of time delay ("lag"), simply reading data from the servers is an unreliable way to verify consistency, given that the data is constantly changing and incomplete until fully replicated. Locking tables and waiting for all data to replicate would allow consistent reads, but doing so would mean effectively halting the servers. mk-table-checksum allows you to perform nonblocking, consistent checksums of master and slave data. (For technical details on how this is accomplished, see the tool's documentation.)
Apart from replication consistency, there are other problems with verifying data. Table size is one of them. The MySQL command CHECKSUM TABLE is sufficient for small tables, but large tables require "chunking" to avoid long locks or overloading CPU or memory resources with checksum calculations.
Chunking solves a second problem: the need for regular data-consistency checks. While data drift can be a one-time occurrence, often it is recurring. mk-table-checksum is designed to continuously check tables, vetting certain chunks one run and other chunks the next run until eventually the whole table has been checked. The ongoing nature of this process helps ensure that recurring drift is corrected.
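The chunking idea can be sketched in a few lines. The following Python sketch (illustrative only, not mk-table-checksum's actual implementation) splits a table's primary-key range into fixed-size chunks that could be checksummed a few at a time across successive runs:

```python
def chunk_ranges(min_id, max_id, chunk_size):
    """Yield (start, end) primary-key ranges covering [min_id, max_id]."""
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        yield (start, end)
        start = end + 1

# Each run checksums some of these ranges and resumes with the rest on
# the next run, until the whole table has been covered.
print(list(chunk_ranges(1, 10, 4)))  # [(1, 4), (5, 8), (9, 10)]
```

Checksumming one bounded range at a time is what keeps locks short and CPU and memory usage predictable on large tables.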

Essential MySQL tool No. 6: stalk and collect

stalk and collect are configurable, so they can be used for almost any problem. The one requirement is a definable condition to establish a trigger for stalk. If multiple conditions signal the problem, then you may also need to consult with a professional for a more extensive review of your MySQL environment because problems can appear in MySQL even though the underlying cause is elsewhere.
stalk and collect can be used proactively, too. For example, if you know that there should never be more than 50 active MySQL connections at a time, you could have stalk monitor for that condition, making these tools helpful both for problems you know about and problems you have not yet seen.
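As a toy illustration of such a trigger condition (this is not stalk's actual code; the threshold and the status value it checks are assumptions for the example):

```python
def should_trigger(threads_connected, threshold=50):
    """Return True when active connections exceed the threshold,
    e.g. the Threads_connected value from SHOW GLOBAL STATUS."""
    return threads_connected > threshold

print(should_trigger(65))  # True: stalk would fire and run collect
print(should_trigger(40))  # False: no data gathered
```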

Essential MySQL tool No. 7: mycheckpoint

[Figure: mycheckpoint sample metric chart]

Essential MySQL tool No. 8: shard-query

SELECT DayOfWeek, COUNT(*) AS c
FROM ontime_fact
JOIN dim_date USING(date_id)
WHERE Year
BETWEEN 2000 AND 2008
GROUP BY DayOfWeek
ORDER BY c DESC;
Maintainer: Justin Swanhart

Essential MySQL tool No. 9: mk-archiver

Essential MySQL tool No. 10: oak-security-audit

-- Looking for anonymous user accounts
-- -----------------------------------
-- Passed
--
-- Looking for accounts accessible from any host
-- ---------------------------------------------
-- Found 1 accounts accessible from any host. Recommended actions:
RENAME USER 'msandbox'@'%' TO 'msandbox'@'<specific host>';

Wednesday, May 16, 2018

What causes MySQL high IOWait

There’s no single reason for I/O bottlenecks. Causes range from hardware issues to unoptimized server configurations. Here are the top issues.

1. Slow disk or degraded RAID array

Since disk access is much slower than memory access, read and write operations slow down MySQL performance, especially if the disk is slow. I/O wait can also increase if the disk is degraded or corrupted.

2. Low system memory

MySQL performs its transactions in RAM, fetching data between RAM and disk as needed. If the RAM size is small, this can lead to many I/O wait cycles when dealing with large databases.

3. High number of transactions and size of data

The I/O overhead increases with the size of the database and the number of transactions. Insert operations on large data sets are especially time-consuming, as write operations are slower than reads.

4. Unoptimized MySQL configuration

No write cache to store frequently accessed data sets, improper flushing mechanism, inadequate temporary table size for doing database operations, etc. are some issues that can lead to high I/O.

5. Complex queries

When the application code involves operations such as complex joins and querying over a large range in large data sets, memory usage increases and so does the I/O overhead involved.

sync_binlog=0 affects MySQL performance? but sync_binlog=1 works well



Performance

As expected, with sync_binlog=0 there are plenty of binary log changes waiting to be flushed to disk; with sync_binlog=1 there are none. On the other hand, the average time to perform 10000 INSERTs jumped from 23 seconds with sync_binlog=0 to 41 seconds with sync_binlog=1 – nearly double.

Reliability

I’ve tested how reliable each setting is by running the script with 10000 INSERTs, forcefully powering off the whole system, starting it up again, and then comparing the records (transactions) written to the database with those written to the binary log. I tried this a few times with both settings, with the same result each time:
  • with sync_binlog=1, the data in the database was always in sync with the binary log. That is, the last record successfully inserted into the database matched the last entry in the binary log.
  • with sync_binlog=0, after the reboot there were thousands of records persisted in the database but missing from the binary log.

Summary

If you care about the binary log (think of master-slave replication, for example), set sync_binlog to 1; if you want to squeeze out a bit more performance, set it to 0.

sync_binlog = 1 in MySQL 5.7

In MySQL 5.7.7 and newer, the default value of sync_binlog is 1.
This of course has an impact on performance, especially if you don’t have fast storage such as a RAID controller with a cache.
Already in O’Reilly’s High Performance MySQL, it was written that the most important setting for binary logging on the master is sync_binlog = 1.
And that recommendation was made for MySQL 5.0! At that time the performance impact could reach 50%. In 5.6, with the addition of binary log group commit, the impact of enabling sync_binlog = 1 on many non-synthetic workloads should be reduced considerably. It is also the safest option, as all transactions are synchronized to the binary log before they are committed. Therefore, even after a crash, any transactions missing from the binary log are only in the prepared state, and those transactions will be rolled back during the recovery process at server restart. This guarantees that no transaction is lost from the binary log. If it is disabled, you could have transactions that have been committed but are not present in the binary log. This can be very dangerous in architectures where more and more people rely on replication: for example, if you provision a new slave using last night’s backup and replay all binary logs since, the master and the slave won’t be in sync.
This is a major change compared to previous versions, and don’t forget that it needs innodb_support_xa to be enabled (this is the default). So yes, since 5.6 the binary logs can be edited during recovery!
So, in conclusion:
  • setting sync_binlog and innodb_support_xa to 1 is required to maintain the ACID properties of your database when using replication (make sure that your storage can handle a large number of syncs to disk, i.e. fdatasync calls).
  • choosing the alternative (sync_binlog != 1) can provide better performance at the cost of difficulties and conflicts during replication, and of potentially losing transactions during recovery.
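As a sketch, the safe settings described above would look like this in my.cnf (option names as in MySQL 5.6/5.7; innodb_support_xa is already ON by default, so listing it is only for clarity):

```ini
[mysqld]
# Sync the binary log to disk at every commit (safest; default in 5.7.7+)
sync_binlog = 1
# Two-phase commit keeping the binary log and InnoDB in sync (default ON)
innodb_support_xa = ON
```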

INNODB VARIABLES AND STATUS EXPLAINED

INNODB BUFFER POOL

The InnoDB Buffer Pool is the memory area where the InnoDB Storage Engine caches its data and index blocks. Each InnoDB data and index block has a size of Innodb_page_size (16384 bytes = 16 kbyte). The InnoDB Buffer Pool is configured in bytes with the innodb_buffer_pool_size variable. On a dedicated system, the InnoDB Buffer Pool can be configured to use up to 80% of the system's physical RAM.
The innodb_buffer_pool_awe_mem_mb variable is relevant only on 32-bit Windows systems with more than 4 Gbyte of RAM using the so-called Address Windowing Extensions (AWE).
The usage of the InnoDB Buffer Pool can be measured with the SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%' command. The sum of the data, misc, and free pages is equal to the total pages. And the number of total pages multiplied by Innodb_page_size corresponds to your innodb_buffer_pool_size.
Innodb_buffer_pool_pages_data       1757
Innodb_buffer_pool_pages_misc     +   10
Innodb_buffer_pool_pages_free     + 2072
Innodb_buffer_pool_pages_total    = 3839

Innodb_buffer_pool_pages_total x Innodb_page_size = innodb_buffer_pool_size
                          3839 x            16384 = 62898176 (= 60 Mbyte)
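The arithmetic above can be verified with a quick sketch, using the values from the status output shown:

```python
page_size = 16384  # Innodb_page_size in bytes

data, misc, free = 1757, 10, 2072   # Innodb_buffer_pool_pages_* counters
total = data + misc + free          # Innodb_buffer_pool_pages_total
pool_size = total * page_size       # innodb_buffer_pool_size in bytes

print(total)      # 3839
print(pool_size)  # 62898176 bytes, roughly 60 Mbyte
```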

Innodb_buffer_pool_pages_data shows the number of dirty and clean data and index pages. Innodb_buffer_pool_pages_misc shows the number of pages that are busy because they have been allocated for administrative overhead such as row locks or the adaptive hash index.
A small number of Innodb_buffer_pool_pages_free pages does not necessarily indicate that your InnoDB Buffer Pool is too small. Conversely, a large number of free pages over a longer period is a strong indicator that your InnoDB Buffer Pool is too big and can easily be decreased.
Innodb_buffer_pool_pages_dirty indicates the number of InnoDB buffer pool data pages that have been changed in memory, but the changes are not yet written (flushed) to the InnoDB data files. The opposite of a dirty page is a clean page.
The InnoDB main thread tries to write pages from the InnoDB Buffer Pool so that the percentage of dirty (not yet written) pages will not exceed the value of innodb_max_dirty_pages_pct.
Innodb_buffer_pool_pages_data * innodb_max_dirty_pages_pct / 100 > Innodb_buffer_pool_pages_dirty
                         1757 *                         90 / 100 > 5
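The same check as a sketch, assuming innodb_max_dirty_pages_pct = 90 and the counter values from the example above:

```python
pages_data = 1757     # Innodb_buffer_pool_pages_data
max_dirty_pct = 90    # innodb_max_dirty_pages_pct
pages_dirty = 5       # Innodb_buffer_pool_pages_dirty

dirty_limit = pages_data * max_dirty_pct / 100
print(dirty_limit > pages_dirty)  # True: flushing is keeping up
```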

Innodb_buffer_pool_pages_flushed indicates the number of requests to flush pages from the InnoDB buffer pool to the data file.
Similar information about the InnoDB Buffer Pool constitution can be retrieved with the command SHOW ENGINE INNODB STATUS\G:
----------------------
BUFFER POOL AND MEMORY
----------------------
Buffer pool size   512
Free buffers       490
Database pages     22
Modified db pages  0

Normally, writes to the InnoDB Buffer Pool happen in the background. However, if it is necessary to read or create a page and no clean pages are available, it is also necessary to wait for pages to be flushed first. The Innodb_buffer_pool_wait_free counter counts how many times this has happened. Innodb_buffer_pool_wait_free greater than 0 is a strong indicator that the InnoDB Buffer Pool is too small.

INNODB BUFFER POOL HIT RATIO

Innodb_buffer_pool_read_requests indicates the number of logical read requests (reads from memory) InnoDB has done.
Innodb_buffer_pool_reads indicates the number of logical reads that InnoDB could not satisfy from the buffer pool, and had to read directly from the disk (physical reads).
The InnoDB Buffer Pool hit ratio is an indicator of how often your pages are retrieved from memory instead of disk:
Innodb_buffer_pool_read_requests / (Innodb_buffer_pool_read_requests + Innodb_buffer_pool_reads) * 100 = InnoDB Buffer Pool hit ratio
                         1600770 / (                        1600770 + 1715) * 100 = 99.9%
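Computed as a sketch with the numbers above:

```python
read_requests = 1600770  # Innodb_buffer_pool_read_requests (logical reads)
disk_reads = 1715        # Innodb_buffer_pool_reads (physical reads)

hit_ratio = read_requests / (read_requests + disk_reads) * 100
print(round(hit_ratio, 1))  # 99.9
```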

The same ratio, calculated over the last n seconds, can be obtained with the SHOW ENGINE INNODB STATUS command:
Per second averages calculated from the last 58 seconds
...
----------------------
BUFFER POOL AND MEMORY
----------------------
...
Buffer pool hit rate 1000 / 1000

An InnoDB Buffer Pool hit ratio below 99.9% is a weak indicator that your InnoDB Buffer Pool could be increased.

INNODB ADDITIONAL MEMORY POOL

The innodb_additional_mem_pool_size variable configures the size in bytes of a memory pool InnoDB uses to store data dictionary information and other internal data structures. The more tables you have in your application, the more memory you need to allocate here. If InnoDB runs out of memory in this pool, it starts to allocate memory from the operating system and writes warning messages to the MySQL error log.
How much memory InnoDB has allocated for this additional memory pool can be found with:
pager grep 'additional pool'
SHOW ENGINE INNODB STATUS\G
Total memory allocated 20618000; in additional pool allocated 676608

QUESTIONS TO ANSWER

  • What happens if InnoDB reaches innodb_max_dirty_pages_pct?
  • What is stored in the InnoDB buffer pool besides data and index pages (misc: row locks, undo?)?
  • How is pages flushed related to write requests?
  • Buffer pool LRU, buffer pool instances
Innodb_buffer_pool_read_ahead_rnd indicates the number of random read-aheads initiated by InnoDB. This happens when a query scans a large portion of a table but in random order. This variable was removed in newer MySQL releases.
Innodb_buffer_pool_read_ahead_seq indicates the number of sequential read-aheads initiated by InnoDB. This happens when InnoDB does a sequential full table scan. This variable was removed in newer MySQL releases.
Innodb_buffer_pool_write_requests indicates the number of writes done to the InnoDB buffer pool.
The ratio of write requests to pages flushed should be an indicator of how many rows are changed in a block before it is flushed to disk:
Innodb_buffer_pool_write_requests / Innodb_buffer_pool_pages_flushed = row changes per flush
                             8367 / 8160 = 1.02 row changes per flush

A value much higher than 1 is an indicator of good locality of data.
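The ratio above can be checked as a sketch with the figures shown:

```python
write_requests = 8367   # Innodb_buffer_pool_write_requests
pages_flushed = 8160    # Innodb_buffer_pool_pages_flushed

changes_per_flush = write_requests / pages_flushed
print(changes_per_flush)  # roughly 1.025 row changes per flush
```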