mytop – http://jeremy.zawodny.com/mysql/mytop/
mytop is a console-based (non-GUI) tool for monitoring MySQL threads and overall server performance.
mysqlsla – http://hackmysql.com/mysqlsla
mysqlsla parses, filters, analyzes, and sorts MySQL slow, general, binary, and microslow-patched logs to create a customizable report.
mysqltuner – http://mysqltuner.pl/
To run the script with the default options, run it without arguments. Allow the MySQL server to run for at least 24-48 hours before trusting its suggestions.
innotop – http://code.google.com/p/innotop/
innotop is a 'top' clone for MySQL with more features and flexibility than similar tools, with special attention paid to InnoDB.
Maatkit – http://www.mysqlperformanceblog.com/tools/
Maatkit is a great set of tools for MySQL performance analysis and maintenance, and a must-know and must-have for any MySQL administrator.
Essential MySQL tool No. 1: mk-query-digest
Nothing frustrates like slow MySQL performance. All too often, faster hardware
is thrown at the problem, a solution that works only if hardware is in
fact to blame. More often than not, poor performance can be attributed
to slowly executing queries that are blocking other queries, creating a
ripple effect of slow response times. Since it's a lot cheaper to
optimize queries than to upgrade hardware, the logical first step in
MySQL optimization is query log analysis.
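If the slow query log isn't already enabled, a rough starting point (assuming MySQL 5.1 or later, where these variables are dynamic; the 0.5-second threshold is only an illustrative value) is:
# Enable the slow query log at runtime; 0.5 seconds is an example
# threshold, not a recommendation.
mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0.5;"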
Database administrators
should analyze query logs frequently, depending on the volatility of
the environment. And if you've never performed query log analysis, it's
time to start, even if you are relying on third-party software, which is
often assumed to be optimized when, in fact, it is not.
Today's best query log analyzer is
mk-query-digest.
Co-written by Baron Schwartz and myself, it is actively developed,
fully documented, and thoroughly tested. MySQL distributions include the
query log analyzer
mysqldumpslow, but the tool is outdated, poorly documented, and untested. Other query log analyzers, like
mysqlsla, which I wrote several years ago, suffer the same problems as
mysqldumpslow.
mk-query-digest
analyzes query logs and generates reports with aggregated, statistical
information about execution times and other metrics. Since query logs
usually contain thousands, if not millions, of queries, query log
analysis requires a tool.
mk-query-digest can help
you find the queries that take the longest time to execute as compared
to other queries. Optimizing these slow queries will make MySQL run
faster by reducing the greatest delays. The real art of query
optimization is more nuanced, but the basic goal is the same: find slow
queries, optimize them, and reduce query response times.
The tool is easy to use; executing
mk-query-digest slow-query.log will print the slowest queries in
slow-query.log.
The tool includes support for "query reviews," which report queries
you have not yet seen or approved, making frequent log analyses quick
and efficient.
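For example, a basic run and a sketch of the review workflow might look like the following; the DSN and review-table name are placeholders, and the exact --review syntax is worth confirming in the tool's documentation:
# Summarize the slowest queries in a slow log
mk-query-digest /var/log/mysql/slow-query.log > slow-report.txt
# Query-review sketch: store query fingerprints in a review table so
# later runs can flag queries you have not yet seen or approved.
# The host, database, and table below are examples only.
mk-query-digest --review h=localhost,D=maatkit,t=query_review \
  --create-review-table /var/log/mysql/slow-query.log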
Download: http://maatkit.org/get/mk-query-digest
Maintainers: Daniel Nichter and Baron Schwartz
More info: http://maatkit.org/ |
http://code.google.com/p/maatkit/
Essential MySQL tool No. 2: mydumper
Being able to generate data dumps quickly is vital for backups and server cloning. Unfortunately,
mysqldump,
which ships with MySQL distributions, is single-threaded and, thus, too
slow for data-intensive jobs. Thankfully, the modern replacement,
mydumper, uses multiple threads, making it 10 times faster than
mysqldump.
Also
known as MySQL Data Dumper, this tool does not manage backup sets,
differentials, or other parts of a complete backup plan. It just dumps
data from MySQL as quickly as possible, enabling you to complete backups
under tight time constraints, such as overnight, while employees are
offline, or to perform backups more frequently than you would with
mysqldump.
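A minimal invocation might look like this; the thread count, connection options, and output directory are examples, not recommendations:
# Dump all databases using 8 parallel threads into a dated directory.
# Credentials and paths are placeholders.
mydumper --host=localhost --user=backup --password=secret \
  --threads=8 --outputdir=/backups/mysql-$(date +%F)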
One technical point to know about
mydumper
is that it locks tables, so it is not the ideal tool for performing
backups during operating hours. Then again, professional data recovery
costs hundreds of dollars per hour, and you always get a bill even if
the data isn't recoverable.
mydumper is free and well worth exploring for even basic backups.
mydumper
also comes in handy when cloning servers. Other tools perform complete
hard drive duplications, but when all you need is MySQL data,
mydumper is the fastest way to get it. Servers provisioned in a cloud are particularly suited to cloning using
mydumper. Just dump your MySQL data from an existing server and copy it to the new instance.
Cloning
is worthwhile for creating slave servers, benchmarking, and profiling,
but nowhere is it more vital than in testing and development. Being able
to spin up a replica for quick testing before going live is essential
for dynamic MySQL environments. With
mydumper,
you can quickly create a server that is nearly identical to your
production server, enabling your test results to better mimic production
results.
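As a sketch, a clone could be produced by dumping on the source and restoring on the new instance with myloader, mydumper's companion restore tool; hostnames and paths here are placeholders:
# On the source server: dump the data in parallel
mydumper --threads=8 --outputdir=/tmp/clone-dump
# Copy the dump to the new instance, then load it in parallel
rsync -a /tmp/clone-dump/ new-server:/tmp/clone-dump/
ssh new-server 'myloader --threads=8 --directory=/tmp/clone-dump --overwrite-tables'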
Download: https://launchpad.net/mydumper/+download
Maintainers: Domas Mituzas, Andrew Hutchings, Mark Leith
More info: http://www.mydumper.org/ |
https://launchpad.net/mydumper/
Essential MySQL tool No. 3: xtrabackup and xtrabackup-manager
If your databases are in use every day, all day, giving you no "overnight" during which tables can be locked for backup,
xtrabackup
is your solution. Also known as Percona XtraBackup, this tool performs
nonblocking backups and is the only free, open source tool that can do
this. By comparison, proprietary nonblocking backup software can cost
more than $5,000 per server.
xtrabackup
also offers incremental backups, allowing you to back up only the data
that has changed since the last full backup. Adding incremental backups
to your backup process is powerful, given the reduced performance hit of
these tremendously smaller backups.
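A rough sketch of a full-plus-incremental cycle, using the innobackupex wrapper that ships with XtraBackup (credentials and paths are placeholders; innobackupex writes each backup into a timestamped subdirectory):
# Full, nonblocking backup of a running server
innobackupex --user=backup --password=secret /backups/full
# Incremental backup containing only the changes made since the full
# backup; TIMESTAMP is the subdirectory innobackupex created above.
innobackupex --user=backup --password=secret --incremental /backups/inc \
  --incremental-basedir=/backups/full/TIMESTAMP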
Furthermore, another project has grown up around
xtrabackup that makes managing a full backup plan even easier:
xtrabackup-manager.
Although this tool is new and still in development, it holds a lot of
potential because it offers advanced features like rotating backups with
groups and backup set expiration. Together,
xtrabackup and
xtrabackup-manager are a formidable and free backup solution.
Download: http://www.percona.com/software/percona-xtrabackup/downloads/
Maintainer: Percona
More info: http://www.percona.com/docs/wiki/percona-xtrabackup:start |
https://launchpad.net/percona-xtrabackup
Download: http://code.google.com/p/xtrabackup-manager/
Maintainer: Lachlan Mulcahy
More info: http://code.google.com/p/xtrabackup-manager/ |
http://mysqlsoapbox.blogspot.com/
Essential MySQL tool No. 4: tcprstat
tcprstat
is probably the most esoteric of the 10 on this list. The tool monitors
TCP requests and prints statistics about low-level response times. When
you become familiar with the response time way of thinking about
performance, the payoff of
tcprstat is significant.
The
principle is elaborated in the book "Optimizing Oracle Performance" by
Cary Millsap and Jeff Holt, and it applies equally well to MySQL. The
basic idea is that a service, in this case MySQL, accepts a request
(query), fulfills that request (execution time), and responds with
results (result set). The service's response time is the time span
between receiving a request and sending a response. The shorter the
response time, the more requests can be served in the same amount of
time.
Parallel processing and other low-level factors play a
significant part here, but the simplified upshot is that there are
28,800 seconds in an 8-hour workday, so reducing response times by just
four-tenths of a second (from 0.5 to 0.1 second) results in 230,400 more
requests served each day.
tcprstat helps you achieve this. I have only enough space in this article to pique your curiosity, so
I'll finish this tool's introduction by telling you the first step
toward getting started with MySQL response time optimization: read
"Optimizing Oracle Performance." Then start using
tcprstat.
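To give a flavor of it, an invocation like the following, run on the database server as root, prints aggregated response-time statistics for MySQL traffic once per second; the flags follow the Percona documentation but are worth verifying for your version:
# Watch response times for traffic on port 3306, printing a summary
# every second, indefinitely (0 iterations = run until interrupted).
tcprstat --port 3306 --timeout 1 --iterations 0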
Download: (source) https://launchpad.net/tcprstat | (binary) http://www.percona.com/docs/wiki/tcprstat:start
Maintainer: Percona
More info: http://www.percona.com/docs/wiki/tcprstat:start |
https://launchpad.net/tcprstat
Essential MySQL tool No. 5: mk-table-checksum
"Data
drift" is a significant problem for dynamic MySQL environments. This
problem, wherein slave data becomes out of sync with the master, is
often caused by writing data to a slave or executing certain
nondeterministic queries on the master. What's worse is that the data
differences may go unnoticed until they become crippling. Enter
mk-table-checksum, a tool that performs the complex, sensitive calculations necessary to verify the data in two or more tables is identical.
mk-table-checksum
works with both stand-alone servers and servers in a replication
hierarchy, where the tool's greatest value is easily seen. Verifying
table data between a master and a slave must account for replication
consistency. Because changes to the master are replicated to slaves
with some amount of time delay ("lag"), simply reading data from the
servers is an unreliable way to verify consistency, given that the data
is constantly changing and incomplete until fully replicated. Locking
tables and waiting for all data to replicate would allow consistent
reads, but to do so would mean effectively halting the servers.
mk-table-checksum
allows you to perform nonblocking, consistent checksums of master and
slave data. (For technical details on how this is accomplished, see the
tool's documentation.)
Apart from replication consistency, there are other problems with verifying data. Table size is one of them. The MySQL command
CHECKSUM TABLE
is sufficient for small tables, but large tables require "chunking" to
avoid long locks or overloading CPU or memory resources with checksum
calculations.
Chunking solves a second problem: the need for
regular data-consistency checks. While data drift can be a one-time
occurrence, often it is recurring.
mk-table-checksum
is designed to continuously check tables, vetting certain chunks one
run and other chunks the next run until eventually the whole table has
been checked. The ongoing nature of this process helps ensure that
recurring drift is detected.
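A sketch of the replication-aware workflow looks something like this; the checksum table, chunk size, and host are examples, and the exact option syntax should be checked against the documentation:
# On the master: checksum tables in chunks, storing results in a table
# that replicates to the slaves.
mk-table-checksum --replicate=test.checksum --chunk-size=500000 h=master-host
# Later: compare the replicated checksums to find slaves whose data
# differs from the master.
mk-table-checksum --replicate=test.checksum --replicate-check=2 h=master-host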
Download: http://maatkit.org/get/mk-table-checksum
Maintainers: Daniel Nichter and Baron Schwartz
More info: http://maatkit.org/ |
http://code.google.com/p/maatkit/
Essential MySQL tool No. 7: mycheckpoint
You
don't always want to wait for something to go wrong before addressing a
problem, and dashboards provide an essential way for you to monitor
your MySQL environment for potential problems before they arise.
There
are many free and commercial monitoring applications for MySQL, some
MySQL-specific and others generic with MySQL plug-ins or templates.
mycheckpoint is notable because it is free, open source, MySQL-specific, and full-featured.
mycheckpoint can be
configured to monitor both MySQL and server metrics, like InnoDB buffer
pool flushes, temporary tables created, operating system load, memory
usage, and more. If you don't like charts,
mycheckpoint can also generate human-readable reports.
As with
stalk, alert conditions can be defined with email notifications, but no secondary tool like
collect will be run to log additional troubleshooting data. Another useful feature is
mycheckpoint's
ability to monitor MySQL variables to detect changes that can lead to
problems, or signal that someone has modified MySQL when they shouldn't
have.
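Deployment is typically just a cron job that takes periodic samples into a dedicated schema; the schedule, connection options, and schema name below are assumptions to illustrate the idea, so consult the mycheckpoint documentation for the actual setup steps:
# Sample MySQL and OS metrics every 5 minutes into the 'mycheckpoint'
# schema (schema name, credentials, and interval are examples only).
*/5 * * * * mycheckpoint --host=localhost --user=monitor --password=secret --database=mycheckpoint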
Monitoring MySQL isn't just for data centers or large
deployments. Even if you have a single MySQL server, monitoring is
essential; as with your vehicle, there's a lot to know about the system
while it's running to help you foresee or avoid malfunctions.
mycheckpoint is one solution among many worth trying.
Download: http://code.google.com/p/mycheckpoint/downloads/list
Maintainer: Shlomi Noach
More info: http://code.openark.org/forge/mycheckpoint
Essential MySQL tool No. 9: mk-archiver
As
tables become larger, queries against them can become slower. Many
factors influence response times, but if you have optimized everything
else and the only remaining suspect is a very large table, then
archiving rows from that table can restore fast query-response times.
Unless
the table is unimportant, you should not brazenly delete rows.
Archiving requires finesse to ensure that data is not lost, that the
table isn't excessively locked, and that the archiving process does not
overload MySQL or the server. The goal is an archiving process that is
reliable and unnoticeable except for the beneficial effect of reducing
query times.
mk-archiver achieves all this.
mk-archiver
has two fundamental requirements, the first of which is that archivable
rows must be identifiable. For example, if the table has a date column
and you know that only the last N years of data are needed, then rows
with dates older than N years ago can be archived. Second, a unique
index must exist to help
mk-archiver
identify archivable rows without scanning the entire table. Scanning a
large table is costly, so an index and specific SELECT statements are
used to avoid table scans.
In practice,
mk-archiver
automatically handles the technical details. All you have to do is tell
it what table to archive, how to identify archivable rows, and where to
archive those rows. These rows can be purged, copied to another table,
or written to a dump file for future restoration if needed. Once you're
comfortable with the tool, there are many options to fine-tune the
archiving process. Also,
mk-archiver is pluggable, so it can be used to solve complex archiving needs without patching the code.
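For instance, a nightly job along these lines (table names, the WHERE clause, and the batch size are placeholders) would trickle old rows into an archive table:
# Copy rows older than two years into an archive table, 1,000 rows at
# a time, to keep each batch's locks and load small.
mk-archiver --source h=localhost,D=app,t=orders \
  --dest h=localhost,D=app_archive,t=orders \
  --where "created_at < NOW() - INTERVAL 2 YEAR" \
  --limit 1000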
Download: http://maatkit.org/get/mk-archiver
Maintainers: Daniel Nichter and Baron Schwartz
More info: http://maatkit.org/ |
http://code.google.com/p/maatkit/
Essential MySQL tool No. 10: oak-security-audit
When
was the last time you audited the security of your MySQL servers?
You're not alone if "never" is the answer. There are many companies that
provide security audits, but unless nothing ever changes after those
audits, the security of your MySQL environment should be checked
regularly.
External threats are one obvious reason to enforce
MySQL security, but internal threats like current or former employees
are often more dangerous because they are (or were) trusted. Security is
also important for enforcing privacy (medical/HIPAA regulations),
preventing accidental access (for example, logging into the production
server instead of the development server), or enabling third-party
programs to interact with your systems.
For those looking to increase the security of their deployments,
oak-security-audit
is a worthwhile, free, open source tool that performs basic MySQL
security audits. It doesn't require any setup; just run it against your
MySQL servers, and it prints a report with risks and recommendations
about accounts, account privileges, passwords, and some general best
practices, like disabling network access. Here's a snippet of a report:
-- Looking for anonymous user accounts
-- -----------------------------------
-- Passed
--
-- Looking for accounts accessible from any host
-- ---------------------------------------------
-- Found 1 accounts accessible from any host. Recommended actions:
RENAME USER 'msandbox'@'%' TO 'msandbox'@'<specific host>';
oak-security-audit
focuses just on MySQL security, so it's not a replacement for a full
system security audit by a human, but it's a great first line of defense
that is easy to use. You could run it weekly with cron and have the
reports emailed to you.
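For example, a weekly cron entry along these lines (the schedule, connection flags, and mail command are illustrative; check the tool's documentation for its actual connection options) would do the job:
# Weekly audit mailed to the DBA team every Monday at 06:00; the
# credentials and recipient address are placeholders.
0 6 * * 1 oak-security-audit --user=audit --password=secret --host=localhost | mail -s "MySQL security audit" dba@example.com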
Download: http://openarkkit.googlecode.com/svn/trunk/openarkkit/src/oak/oak-security-audit.py
Maintainer: Shlomi Noach
More info: http://openarkkit.googlecode.com/svn/trunk/openarkkit/doc/html/oak-security-audit.html