Friday, February 28, 2014

Dumping multiple databases, but to separate files

As we've seen in the second section, the --all-databases option backs up all the available databases. However, they will all be backed up to the same dump file, so sometimes you may prefer a separate dump for each database. This is useful when you need to restore one database while leaving the others alone. First, we need to find out which databases are available, and perhaps exclude some of them from the backup. Say, for instance, that you want to back up all databases except mysql and information_schema. First, we list all the databases:
You should see something similar to this:
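The listing command and its sample output were lost from this copy; here is a hedged reconstruction (the credentials follow the same `-u... -p...` placeholder style used in the restore example, and the database names are examples):

```shell
mysql -u... -p... -e "SHOW DATABASES;"
# On a terminal the client prints a bordered table similar to:
cat <<'EOF'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| wordpress          |
+--------------------+
EOF
```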
We need to manipulate this output a little bit, so that we can extract from it just the database names, filtering out all the other characters that are not needed:
Which produces:
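A sketch of the filtering step; the bordered sample output is simulated inline here so the pipeline can be seen end to end:

```shell
# Simulated table-format output of SHOW DATABASES:
bordered='+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+'
# Delete the pipes and padding, then drop the border rows:
echo "$bordered" | tr -d '| ' | grep -v '^+'
```

This leaves one name per line, still topped by the Database header.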
Almost there. We now want to remove the Database heading, and also exclude the databases mysql and information_schema, as per the example (unless you want to back them up too):
Here we are:
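The header and the excluded databases can be dropped with a single grep; mysql and information_schema are the exclusions from the example (wordpress stands in for your own databases):

```shell
printf 'Database\ninformation_schema\nmysql\nwordpress\n' \
  | grep -Ev '^(Database|mysql|information_schema)$'
# -> wordpress
```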
We now have the clean list of the names of the databases we want to backup to separate files. All we have to do is chain this command with a simple for..do loop that will execute mysqldump for each database:
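Putting it together, a hedged sketch of the loop (credentials are the same placeholders as in the restore example below):

```shell
# Dump each database from the cleaned list to its own .sql file:
for db in $(mysql -u... -p... -e "SHOW DATABASES;" \
      | grep -Ev '^(Database|mysql|information_schema)$'); do
  mysqldump -u... -p... --databases "$db" > "$db.sql"
done
```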
By executing the command above, you will create as many .sql dump files as the number of databases you have backed up.

Restoring from multiple dumps

If you dump each database to a different file, you can restore all of them (should you need to) this way:
for file in *.sql; do
  echo "$file" && mysql -u... -p... "${file%.sql}" < "$file"
done

Faster MySQL dumps and loads with --tab and --use-threads

By default, mysqldump writes a series of SQL DDL statements and INSERTs to standard output, which you can then pipe to another database server to recreate a given database.
The problem is that this is all serial. If you have to do this task regularly (because you're sharing databases between different development environments, for example), it would be nice if it could be sped up, and with the current 5.1.x versions of MySQL it certainly can.
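In its simplest form, that serial pipe might look like this (host and database names are examples):

```shell
# Recreate mydb on devhost from a single serial mysqldump stream:
mysqldump --single-transaction mydb | mysql -h devhost mydb
```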

MySQL recently added two new features that help with this: mysqldump --tab=path and mysqlimport --use-threads=N.

Dumping

Here's how you'd dump multiple databases without --tab (assuming your ~/.my.cnf tells mysqldump how to connect to your database server):
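The original post's example commands were lost in this copy; here is a hedged sketch of both variants (database names, paths, and the exact flag set are assumptions):

```shell
# Without --tab: one serial, compressed SQL stream per database
# (db1/db2 are example names):
for db in db1 db2; do
  mysqldump --single-transaction "$db" | gzip > "/tmp/$db.sql.gz"
done

# With --tab: mysqldump asks the server to write one .sql (DDL) and
# one .txt (data) file per table via SELECT ... INTO OUTFILE, so the
# target directory must exist and be writable by mysqld:
for db in db1 db2; do
  mkdir -p "/tmp/dump/$db"
  chmod 777 "/tmp/dump/$db"   # globally read-write, per the caveats
  mysqldump --tab="/tmp/dump/$db" "$db"
done
```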
There are a bunch of caveats to using the --tab option:
  • The mysqldump command must be run on the database server, because mysqldump will invoke SELECT INTO OUTFILE on the server.
  • The FILE permission must be granted to the mysqldump user
  • The directory needs to be writeable by the mysqld euid
  • If you want to dump multiple databases, you’ll need to create a new directory per database so same-named tables won’t clobber each other
In the interests of simplicity, I used globally-read-write permissions here. If untrusted users had access to these directories, these permissions would be unacceptable, of course.

Loading

Loading from a .sql.gz file is trivial: just pipe it to mysql and call it a day:
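For example (the file and database names are assumptions):

```shell
# Decompress the dump and feed it straight to the client:
zcat mydb.sql.gz | mysql mydb
```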
Loading from tab files is a bit more work. Note that this NUKES AND PAVES your database with the content of the dump, including the mysql users, their passwords, and their permissions! You'll also want to play with --use-threads, depending on the number of processors your machine has.
cd ${DUMP_DIR} # <- where DUMP_DIR is the database dump you want to load, like /tmp/$dir from above
for db in * ; do
  mysql -e "drop database $db; create database $db default charset utf8"
  cat $db/*.sql | mysql $db
  mysqlimport --use-threads=3 --local $db $db/*.txt
done

Using LVM for MySQL Backup and Replication Setup

When asked for MySQL backup advice, my first question is whether LVM is installed, or whether the system has something with a similar feature set for other operating systems: Veritas File System can do it for Solaris, and most SAN systems would work as well. What is really needed is the ability to create an atomic snapshot of a volume, which can later be mounted the same as the original file system.

Why are snapshot-based MySQL backups great?
There are a number of reasons:
  • Almost hot backup: in most cases you can perform this type of backup while your application is running. No need to shut the server down, make it read-only, or anything like it.
  • Support for all local disk-based storage engines: it works with MyISAM, InnoDB, and BDB, and should also work with the Solid, PrimeXT, and Falcon storage engines.
  • Fast backup: you simply copy the files in binary form, which is hard to beat for speed.
  • Low overhead: it is simply a file copy, so the overhead on the server is minimal.
  • Easy to integrate: want to compress the backup, or send it to tape, FTP, or any network backup software? It is easy, as you just need to copy files.
  • Fast recovery: recovery time is as fast as putting the data back plus standard MySQL crash recovery, and it can be reduced even further. More on this later.
  • Free: no extra commercial tools such as InnoDB Hot Backup are required.
Are there any downsides ?
  • Need snapshot capability: this is the obvious one.
  • May need root access: in some organizations the DBA and the system administrator are different people from different departments, who might not like to trade access rights with each other.
  • Hard to predict downtime: I mentioned this solution is often a hot backup, but the bad thing is it is hard to estimate when it is hot and when it is not: FLUSH TABLES WITH READ LOCK may take quite a while to complete on systems with long-running queries.
  • Problems with data on multiple volumes: if you have logs on separate devices, or your database spans multiple volumes, you will be in trouble, as you will not get a consistent snapshot across the whole database. Some systems can do atomic snapshots of many volumes.
Let's speak a bit about how LVM and snapshotting in general work. There are different implementations, but the point of all of them is to provide you with a volume that consistently matches the state of the original volume at the time the snapshot is created. In LVM it is implemented as copy-on-write: a special storage area is allocated on the device where old versions of changed pages are stored. You can think of it as a simplified form of versioning, like in InnoDB, if that is more familiar. In other cases a snapshot may be implemented by triple mirroring, i.e. you have a RAID1 volume with three copies of the data rather than two, so you can move one device out of the mirror and use it as a snapshot while still keeping your data safe and secure.
There are two types of snapshots: some are read-only while others are read-write. Read-only snapshots may sound good enough, as you are only going to read the data anyway, but in reality read-write snapshots have a number of benefits. First, no extra handling is needed for journaling file systems: you can simply do journal recovery on the snapshot. With a read-only snapshot you need to make sure the filesystem synchronizes the device before the snapshot is taken, so no journal replay is needed.
The other benefit of read-write snapshot is you can actually start MySQL Server on it and perform recovery, check tables or do whatever else you might need to do to ensure your backup is consistent. Backing up database which was already corrupted is very nasty problem you want to avoid.
Let’s now see what exactly you need to do to perform backup of MySQL Database (or create slave) using LVM2 on Linux.
1) Connect to MySQL and run FLUSH TABLES WITH READ LOCK
Note: this command may take a while to complete if you have long-running queries. The catch is that FLUSH TABLES WITH READ LOCK actually waits for all statements to complete, even SELECTs. So be careful if you have any long-running queries. If you're using only InnoDB tables and do not need to synchronize the binary log position with the backup, you can skip this step.
2) While holding the connection open, run: lvcreate -L16G -s -n dbbackup /dev/Main/Data. This creates a snapshot named dbbackup for the logical volume Main/Data. You should specify enough undo space to hold the modifications made during the backup process; I've specified 16GB in this case. If the undo size is not large enough, the snapshot will be invalidated and the backup aborted.
Sometimes you might run into errors at this step. The most common one I've recently seen is: snapshot: Required device-mapper target(s) not detected in your kernel. This means the snapshot module is not loaded in your kernel by default, and you need to load it by running modprobe dm-snapshot.
3) Now you have created the snapshot and can unlock the tables, but before doing so you should record the binary log position, which is done by running SHOW MASTER STATUS. This is the binary log position you'll need to point MySQL slaves created from this snapshot at.
4) The snapshot is created; now let the MySQL server continue, which is done by running UNLOCK TABLES or simply closing the connection.
5) Mount backup Filesystem: mount /dev/Main/dbbackup /mnt/backup
6) Copy data to backup. Normally you can skip slow query logs and error log while taking backup. You also can skip most of binary logs – however if some of your slaves are far behind you might want to keep some of last binary logs just in case, or you can assume in case of recovery from the backup you will need to restore slaves as well and skip binary logs in your backup process.
7) Unmount filesystem umount /mnt/backup
8) Remove snapshot: lvremove -f /dev/Main/dbbackup
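Condensed, the whole cycle looks roughly like this. This is a sketch, not the author's script: the volume names, sizes, and paths are the examples from the steps above, the copy destination is an assumption, and it relies on the mysql client's `system` command to run lvcreate while the read lock is still held:

```shell
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
system lvcreate -L16G -s -n dbbackup /dev/Main/Data
UNLOCK TABLES;
EOF
mount /dev/Main/dbbackup /mnt/backup
cp -a /mnt/backup /backups/db-$(date +%Y%m%d)   # destination is an example
umount /mnt/backup
lvremove -f /dev/Main/dbbackup
```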
If you want to create a slave based on such a snapshot, you need to perform a couple more simple steps:
9) Extract/Copy database to the slave database directory.
10) Start MySQL Server. Wait for it to perform recovery.
11) Use CHANGE MASTER TO to point the slave at the saved binary log position:
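The statement might look like this; the host, credentials, and coordinates are examples, and the file/position pair should be the one recorded from SHOW MASTER STATUS in step 3:

```shell
mysql -e "CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='slavepass',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=4567;"
```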
12) Run START SLAVE to restart replication.
With a slightly modified process you can clone slaves from slaves without stopping them; you just need to use SHOW SLAVE STATUS instead of SHOW MASTER STATUS to find the appropriate binary log position. Be careful, however: cloning a slave from a slave also clones any inconsistencies in data that the slave may have accumulated, especially if you use slave_skip_errors or sql_slave_skip_counter. Cloning from the master, you start from a consistent copy.

If you're interested in a ready-made script, you can try mylvmbackup by Lenz Grimmer:
http://www.lenzg.net/mylvmbackup/

mydumper & myloader : faster backup and restore

At PalominoDB we do not normally use mysqldump for regular backups, but only in some circumstances (for example MySQL upgrade).
Lately we gave mydumper a try as an alternative to mysqldump, and the results are quite promising.
We found that mydumper is very fast at exporting both small and large datasets.
We also found that with large datasets, restoring with myloader doesn't perform much better than a simple restore from a mysqldump SQL dump: this depends on the storage engine, not on the client used to restore.
On one box we ran 2 tests:
1) with a dataset that was fitting in the InnoDB buffer pool;
2) with a dataset larger than the InnoDB buffer pool.

TEST #1

We created 128 tables of 1M rows each, for a total dataset of 31GB on disk:
shell$ time ./sysbench --test=tests/db/parallel_prepare.lua --oltp-tables-count=128 --oltp-table-size=1000000 --mysql-table-engine=innodb --mysql-user=root --num-threads=12 run
real 22m0.013s
user 204m22.054s
sys 0m37.430s
Doing the backup with mydumper:
shell$ time ./mydumper -t 8 -B sbtest
real 0m29.807s
user 2m35.111s
sys 0m26.102s
... and with mysqldump:
shell$ time mysqldump --single-transaction sbtest > sbtest.sql
real 6m24.607s
user 5m19.355s
sys 0m46.761s
In this test, mydumper was around 13 times faster than mysqldump.
We also tried compression, but I/O was fast enough to make compression only an unnecessary overhead: in other words, on that hardware and with this dataset, mydumper with compression was slower than mydumper without compression.
To complete the test, we measured recovery time after dropping and recreating an empty database:
shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ./myloader -t 8 -d export-20110720-090954
real 9m12.548s
user 0m55.193s
sys 0m28.316s

shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ( echo "SET SQL_LOG_BIN=0;" ; cat sbtest.sql ) | mysql sbtest
real 46m46.140s
user 9m3.604s
sys 0m48.256s
With this dataset, restore time using myloader was 5 times faster than using the SQL dump from mysqldump.

TEST #2

Test #2 is very similar to test #1, but with some differences in the dataset:
48 tables instead of 128 tables;
10M rows on each table instead of 1M rows;
a total dataset on disk of 114GB instead of 31GB.

First, we created the tables with sysbench:
shell$ time ./sysbench --test=tests/db/parallel_prepare.lua --oltp-tables-count=48 --oltp-table-size=10000000 --mysql-table-engine=innodb --mysql-user=root --num-threads=12 run
real 107m24.657s
user 689m2.852s
sys 2m11.980s

Backup with mydumper:
shell$ time ./mydumper -t 8 -B sbtest
real 7m42.703s
user 15m14.873s
sys 2m20.203s

The size of the backup is quite big because it is not compressed: 91GB.
On average, mydumper was writing to disk at around 200MB/s.
Backup with mysqldump:
shell$ time mysqldump --single-transaction sbtest > sbtest.sql
real 32m53.972s
user 20m29.853s
sys 2m47.674s

mydumper was again faster than mysqldump, but not as much as in the previous test: only 4 times faster.
It was now time to measure recovery time:
shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ./myloader -t 6 -d export-20110720-171706
real 130m58.403s
user 4m5.209s
sys 1m51.801s

shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ( echo "SET SQL_LOG_BIN=0;" ; cat sbtest.sql ) | mysql sbtest
real 204m18.121s
user 34m33.520s
sys 3m43.826s

myloader was just a bit more than 50% faster than importing the SQL dump from mysqldump.
Conclusion from second test:
a) With a larger dataset, mydumper slows down because the system does more I/O as the dataset doesn't fit in memory, but it is still way faster than mysqldump.
b) With a larger dataset, load time with myloader slowed down a lot. However, the root cause of the performance drop isn't myloader, but:
- more I/O (dataset + dump don't fit in RAM);
- the InnoDB insert rate degrading with bigger tables.

On the topic of InnoDB insert rate degradation with big tables, another blog post will probably follow.
Notes on hardware and configuration:
CPU: 2 x 6cores with HT enabled
96 GB of RAM
FusionIO

innodb_buffer_pool_size=64G
innodb_log_file_size=2047M
innodb_io_capacity=4000
innodb_flush_log_at_trx_commit=2
(binlog disabled)

Wednesday, February 26, 2014

How To Install Percona Server 5.5 On Ubuntu 12.10

Percona Server is a drop-in replacement for MySQL. It offers more performance and scalability than the default MySQL server coming with your Linux distribution, while it uses the same init scripts and command line tools which makes it easy to use. This tutorial explains how to install Percona Server 5.5 on Ubuntu 12.10.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

If MySQL is already installed on your system, this is no problem - Percona Server will simply replace it, keeping your data and your MySQL configuration from /etc/mysql/my.cnf.
You can find a Percona Server benchmark here (in German):
 Benchmark: MySQL 5.5 vs. Percona Server 5.5
Because we will run all the steps from this tutorial with root privileges, we can either prepend all commands in this tutorial with the string sudo, or we become root right now by typing
sudo su

2 Configuring apt

Percona provides an apt repository for Debian and Ubuntu, which makes Percona Server very easy to install. First, import Percona's key:
gpg --keyserver hkp://keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | apt-key add -
Next open /etc/apt/sources.list...
vi /etc/apt/sources.list
... and add the following two lines to it:
[...]
deb http://repo.percona.com/apt quantal main
deb-src http://repo.percona.com/apt quantal main
Next we need to pin Percona's packages so that they don't get overwritten by upgrades from the Ubuntu repositories:
vi /etc/apt/preferences.d/00percona.pref
Package: *
Pin: release o=Percona Development Team
Pin-Priority: 1001
Then update the package database:
apt-get update

3 Installing Percona Server

Percona Server can now be installed as follows:
apt-get install percona-server-server-5.5 percona-server-client-5.5
New password for the Percona Server "root" user: <-- yourrootsqlpassword
Repeat password for the Percona Server "root" user: <-- yourrootsqlpassword
That's it already. To check your MySQL version, log into the MySQL shell:
mysql -u root -p
root@Ubuntu-1210-quantal-64-minimal ~ # mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 38

Server version: 5.5.28-29.2 Percona Server (GPL), Release 29.2
Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> <-- quit
Bye
root@Ubuntu-1210-quantal-64-minimal ~ #

As you see, we're using Percona Server now.

Tuesday, February 25, 2014

Recovering an InnoDB table from only an .ibd file

Sometimes you may need to recover a table when all you have is the .ibd file. In this case, if you try to load it into a new instance, you're likely to encounter errors about the table id not matching, and there is not really a way around this.
However, I’ve found two work-arounds for this:
Note: You will need the .ibd file and the CREATE TABLE statement for each table you want to recover using these methods.
  1. Simulate the internal InnoDB table counter. That is, create work tables (with innodb_file_per_table enabled) until the internal table id counter equals (id_of_ibd_table_you_need_to_restore – 1). (See Method #1)
  2. Manually hex edit the .ibd file, changing the table id. (See Method #2)
*Note: There are some internal structures with this meta information, so you’ll need to dump/import that data after you get it loaded, so you avoid unpleasantries that will inevitably show their face.
Method #1 – Create work tables
1. Start up clean/fresh instance of MySQL with innodb_file_per_table enabled.
2. Now, we need to find the table id that MySQL is currently set at, as well as the table id for the table we need to recover.
Note:
Step 2 (2a – 2f) is simply to find the table id that is stored inside of the .ibd file. I’ve written a PHP script to determine this, so using the script can save a bunch of time. See the bottom of this page (under “Associated Files”) for the exact script.
2a. Create a test database:

mysql> CREATE DATABASE test1;
mysql> USE test1;
2b. Issue the create table command for the table:

mysql> CREATE TABLE `product` (
  `PRODUCT_ID` bigint(20) unsigned NOT NULL auto_increment,
  `BRAND_ID` int(10) unsigned default NULL,
  `PRODUCT_TYPE_ID` int(10) unsigned default NULL,
  `GROUP_ID` int(10) unsigned default NULL,
  `PRODUCT_NAME` varchar(500) NOT NULL,
  `DEFAULT_EMAIL_ID` varchar(48) default NULL,
  `PRODUCT_STATUS` tinyint(1) NOT NULL,
  `CLIENT_ID` bigint(20) unsigned default NULL,
  `LAST_MODIFIED_BY` varchar(45) NOT NULL,
  `LAST_MODIFIED_DATE` datetime NOT NULL,
  PRIMARY KEY  (`PRODUCT_ID`)
  ) ENGINE=InnoDB;
2c. Discard the tablespace, which will delete the newly created .ibd file:

mysql> ALTER TABLE product DISCARD TABLESPACE;
2d. Copy the pre-existing .ibd file to the datadir/test1 folder
2e. Import this tablespace:

mysql> ALTER TABLE product IMPORT TABLESPACE;
This should produce the following error (at least it is most likely to). The only way it would not is if MySQL's current table id matched that of the pre-existing .ibd table id, in which case you can dump your table now.

ERROR 1030 (HY000): Got error -1 from storage engine
2f. So, now to check the error log (manually). Look for the following entry:

081010 11:47:40  InnoDB: Error: tablespace id in file
'.\test1\product.ibd' is 1193, but in the InnoDB
InnoDB: data dictionary it is 1.
So, now we know the internal table id is at 1, and that of the ibd table is 1193.
3. Clean up working database:
3a. Manually move the ibd file from the $datadir to a safe location (as you will need this file again).
3b. Drop this table.

mysql> DROP TABLE product;
Note this does not re-set the internal table counter.
4. You’ll need to create the number of tables you need to increase the internal table id value.
In this case, you'd create 1191 test InnoDB tables (the counter is already at 1, and you need to leave 1 for the actual table, so 1193-2=1191). Run something like the below in a loop:

for ($i = 1; $i <= 1191; $i++) {
  CREATE TABLE t$i (id int) ENGINE=InnoDB;
}
I accomplished this via a simple php script. See the bottom of this page (under "Associated Files") for the exact script.
5. After these are created, go ahead and drop this database and all tables (as they are not needed).

DROP DATABASE test1;
6. Now, re-perform steps 2a through 2e.

mysql> CREATE DATABASE test1;
mysql> USE test1;
mysql> CREATE TABLE `product` ( ... ) ENGINE=InnoDB;
mysql> ALTER TABLE product DISCARD TABLESPACE;

<--  Here is where you copy back the original ibd file to /$datadir/test1/ -->

mysql> ALTER TABLE product IMPORT TABLESPACE;
Success!

mysql> show tables;
+-----------------+
| Tables_in_test1 |
+-----------------+
| product         |
+-----------------+
1 row in set (0.00 sec)
7. Now, dump the table using mysqldump; you can then import it into any MySQL instance. Note, you must dump and re-import it, or you'll run into problems.
However, it's possible to encounter crashes and/or reports of corruption in the logs.
If this happens, try to force innodb recovery (which is most likely), and then dump the table.
Start by setting innodb_force_recovery=1 (and try 2,3,4,5,6) until the dump works.
For this example table, I had to set innodb_force_recovery=5 before the dump would succeed.
The # in the output file name is the value I had innodb_force_recovery set to when trying to perform the dump:

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqldump -uroot -P3385 test1 product > product_dump1.txt
mysqldump: Couldn't execute 'show table status like 'product'':
Lost connection to MySQL server during query (2013)

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqldump -uroot -P3385 test1 product > product_dump2.txt
mysqldump: Couldn't execute 'show table status like 'product'':
Lost connection to MySQL server during query (2013)

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqldump -uroot -P3385 test1 product > product_dump3.txt
mysqldump: Couldn't execute 'show table status like 'product'':
Lost connection to MySQL server during query (2013)

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqldump -uroot -P3385 test1 product > product_dump4.txt
mysqldump: Couldn't execute 'SELECT /*!40001 SQL_NO_CACHE */
* FROM `product`': Lost connection to MySQL server during
query (2013)

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqldump -uroot -P3385 test1 product > product_dump5.txt

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqladmin -u root -P 3385 shutdown

C:\Program Files\MySQL\mysql-5.0.68\bin>
mysqldump -uroot -P3385 test1 product > product_dump6.txt
In fact, in this case, I could have simply started with 5. This is because the error log stated this:

InnoDB: Error: trying to access update undo rec field 19
in index PRIMARY of table test1/product
InnoDB: but index has only 12 fields
So, I knew there was a problem trying to look at the undo logs, and from the manual, a setting of 5 says this:
"Do not look at undo logs when starting the database: InnoDB treats even incomplete transactions as committed"
However, it's best to start at 1 and work your way forward so as to prevent as much data loss as possible.
Method #2 - Hex Edit .ibd file
First of all, you'll need to backup everything (ibdata files, ib_logfile(s), data). I'd also perform a mysqldump of everything you currently have, just so you have a mysqldump of it in the event that you need it.
Also, very important, be sure to make a copy of the .ibd file for the specific table in question.
Lastly, get a copy of the CREATE TABLE statement that will recreate this table.
Then, you'll follow the steps #1-5 (but do not perform step #6 yet) outlined on the following page:
http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
Let me post them here for completeness, however:
  1. Use mysqldump to dump all your InnoDB tables.
  2. Stop the server.
  3. Remove all the existing tablespace files, including the ibdata and ib_log files. If you want to keep a backup copy of the information, then copy all the ib* files to another location before the removing the files in your MySQL installation.
  4. Remove any .frm files for InnoDB tables.
  5. Configure a new tablespace.
  6. Restart the server.
  7. Import the dump files.
At this point, MySQL should be running fine with an empty slate (and should have just re-created your new ibdata and log files).
Now, you'll want to recreate the table (just using the CREATE TABLE output from above), and its database to hold it.
Then, you'll basically be following the steps #1-3 outlined on the following page:
http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
1. Issue this ALTER TABLE statement:

ALTER TABLE tbl_name DISCARD TABLESPACE;
Caution: This statement deletes the current .ibd file.
2. Put the backup .ibd file back in the proper database directory (the one that you copied above).
3. Issue this ALTER TABLE statement:

ALTER TABLE tbl_name IMPORT TABLESPACE;
Everything should go smoothly until step #3 (above). More than likely, this will produce an error like the following on your console:

"Got error -1 from storage engine"
Now, if you look in the error log, you'll see something like:

"InnoDB: Error: tablespace id in file '.\test\t2.ibd' is 2,
but in the InnoDB data dictionary it is 1."
It would not produce the above error and would work fine if the existing table already had a tablespace id of 1. However, this is unlikely.
So, assuming you see the above errors, you can modify the tablespace id in the actual .ibd file using a hex editor. I would do this on a different copy of the .ibd file (other than the original, just in case).
Note that I used "Freeware Hex Editor XVI32" for Windows for this step. Start the program, and then open the .ibd file. You'll see each byte in its own cell. You can then click on a cell, click edit, and then edit that byte. (http://www.chmaas.handshake.de/delphi/freeware/xvi32/xvi32.htm)
Now, in this file, there are 2 places where this tablespace id is located.
For me (and I assume it should be the same for you, but check the values to be sure), the tablespace id values are listed at positions 37 and 41 (positions 25 and 29 in hex). In the actual hex column, if your previous tablespace id was 2, then in positions 37 and 41 you'd see 02 and 02.
(Note these positions can change. For instance, I tested on a table with an internal id of 1193. This in hex is 04A9. However, when searching the file, for the first instance of the table id, I found the '04' in position 39 and 'A9' in position 40. Then, for the second instance of the table id, the '04' was at position 43 and the 'A9' was at position 44. So, you'll have to convert the table id to hex, and then search for that value, near the beginning of the file.)
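The decimal-to-hex conversion used above is easy to reproduce from a shell:

```shell
# 1193 decimal is 04A9 hexadecimal, the byte pair to search for:
printf '%04X\n' 1193
# -> 04A9
```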
Note that this value (02) may vary depending on what your actual tablespace id is.
Then, simply modify both of those fields to 01, and save the file.
Then, re-do the following 3 steps:

1. ALTER TABLE tbl_name DISCARD TABLESPACE;
2. Put the newly saved .ibd file back in the proper database directory
3. ALTER TABLE tbl_name IMPORT TABLESPACE;
This time, step #3 works fine.
It is at this point you should dump/import the data. At least, get a good mysqldump of this table now. You may find that this causes corruption in InnoDB, and you may need to start MySQL with innodb_force_recovery set.
http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html Forcing InnoDB Recovery
Associated Files :: PHP Scripts
Simple PHP script - Used to create a number of InnoDB tables (to increase internal table id counter):

$dbhost = "localhost:3385";
$dbname = "test1";
$dbuser = "root";
$dbpwd  = "";

mysql_connect($dbhost,$dbuser,$dbpwd) or die(mysql_error());

for ($i = 1033; $i <= 1190; $i++) {
   $dbquery = "CREATE TABLE test1.t" . $i . " (id int) ENGINE=InnoDB";

   echo $dbquery . "\n";

      $result = mysql_db_query($dbname,$dbquery) or die(mysql_error());

      $j = 0;

      while($row = mysql_fetch_array($result)) {
         $j++;
         echo $row[0];
      }
}

mysql_close();
PHP Internal Table ID Finder - Used to determine the internal Table ID from the binary .ibd file:

/*
Tested with tables from 4.1.23, 5.0.68, 5.1.28, and 6.0.7.
*/

// Set the filename
$filename = "C:\Users\Chris\Desktop\mysql\working\ibds\z1.ibd";

// Read 2 bytes in at a time
$offset = 2;

// Echo filename and path
echo "filename = $filename\n\n";

// Open the filename - need 'rb' for binary file on Windows
$handle = fopen($filename, "rb");

// Define redundant, local variables for possible later functionality and/or checks
$ibd_id_bin = 0;
$ibd_id_hex = 0;
$ibd_id_dec = 0;
$ibd_id_bin2 = 0;
$ibd_id_hex2 = 0;
$ibd_id_dec2 = 0;

// Find the filesize (note: below command messes up script)
//$filesize = filesize($filename));

// Only loop through first 21 bytes - as table id is in $array[18] and $array[20]
for ($z = 0; $z <= 20; $z++) {

 // Set variable $contents equal to 2 ($offset) bytes of binary data
 $contents = fread($handle, $offset);

 // Convert $contents from binary data to hex data
 $contents2 = bin2hex($contents);

 // Convert $contents2 from hex data to decimal data
 $contents3 = hexdec($contents2);

 // Debug Output
 //echo "contents[$z] = " . $contents . "\n";
 //echo "contents2[$z] = " . $contents2 . "\n\n";
 //echo "contents3[$z] = " . $contents3 . "\n\n";

 // If position 19, array position [18], then store the values
 if ($z == 18) {
  $ibd_id_bin = $contents;
  $ibd_id_hex = $contents2;
  $ibd_id_dec = $contents3;
 }

 // If position 21, array position [20], then store the values
 if ($z == 20) {
  $ibd_id_bin2 = $contents;
  $ibd_id_hex2 = $contents2;
  $ibd_id_dec2 = $contents3;
 }
}
fclose($handle);

// More Debug output
//echo "\n\nThe table id is $ibd_id_dec\n\n";
//echo "\n\nThe table id is $ibd_id_dec2\n\n";

// Check to see if both values are equal.  If so, then it's
// most certain this is the correct value.
// If not, then there's a chance the positions are off for
// this table (due to versions, etc.).
if ($ibd_id_dec == $ibd_id_dec2) {
 echo "\n\nThe table id is $ibd_id_dec\n\n";
} else {
 echo "The values from positions [18] and [20] did not match, ";
 echo "so please enable debug output, and check for proper positions.";
}