How do you use mysqldump?

The MySQL development team is seeking feedback on how you use mysqldump!

Here is some of the feedback I have to pass along:

  • With InnoDB now the default (since MySQL 5.5), I would love to see mysqldump change to take advantage of it. For example:
    • mysqldump can already do a hot backup when all tables are InnoDB; all you need to do is include --single-transaction. We need to find a way to make this the default behaviour so that applications are not blocked waiting.
    • Adding indexes should be delayed until after all data has been inserted into the table, to take advantage of InnoDB fast index creation (InnoDB Plugin/5.5+).
    • Option to dump as MyISAM, restore as InnoDB.
  • An option to report progress while running. Even something as simple as “34/60 tables backed up” would be incredibly useful for beginners.
  • Take inspiration from mydumper. The user should be able to dump and restore in parallel 🙂
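For context, the non-blocking InnoDB backup described above already works today if you pass the option explicitly. A minimal sketch (`user` and `db` are placeholders):

```shell
# Consistent hot backup of an all-InnoDB schema: a single consistent
# snapshot is used, so writers are not blocked during the dump.
mysqldump --single-transaction -u user -p db > backup.sql
```

The feedback above is about making this the default rather than an opt-in flag.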

As with some of my other posts, please leave a comment or get in touch!

Published by

I joined MySQL AB in 2006, left, and am now back at Oracle working on the MySQL team. I’ve also worked at Percona and InPowered.

  • SH2

    As with fast index creation, adding foreign keys can be made faster by delaying their definition and using foreign_key_checks = 0 (MySQL 5.6+). I want this feature.

  • I just use mysqldump -u user -ppass db | gzip > dump.sql.gz

    Is that a bad thing?

    • It is not a bad thing, but it will temporarily block your tables from being written to. InnoDB has the ability to block neither readers nor writers.
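For reference, the one-liner above can be made non-blocking for InnoDB tables by adding a single option. A sketch, keeping the same placeholders:

```shell
# Same pipeline, but dumping from a consistent InnoDB snapshot
# instead of taking table locks.
mysqldump --single-transaction -u user -ppass db | gzip > dump.sql.gz
```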

  • Maciej

    It’d be great if it could do something like -T does, but still create SQL with INSERTs rather than CSV files, which can’t easily be imported back in smaller transactions the way multi-row inserts can. The way I see it, there would be a master SQL file that lets you import everything in the right order, and then each table would have a separate table-defs.sql (table definition, triggers, stored procedures) and table-data.sql. With only the small -defs.sql files to deal with, you could also easily edit whatever you need before importing any data.

    • Maciej

      Oh, and a tarred output to go with that to allow easy backup over ssh directly to remote systems 🙂

  • 1. --single-transaction should be called “--hot” to be simpler. Most MySQL users don’t understand transactions and snapshots anyway, so “--single-transaction” is neither clear nor meaningful.

    Also allow “hot” to be an option one can put in the my.cnf file under the [mysqldump] group. That way it can be the default in out-of-the-box MySQL installations, but still easy to revert to legacy behaviour for a site that needs it.

    Hot backups are a little limited anyway, because the system tables are still stored in MyISAM. But that’s a broader issue.
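The my.cnf idea above can be sketched as a config fragment (option names in my.cnf drop the leading dashes):

```ini
# Make the non-blocking snapshot behaviour the default for mysqldump
# on this host; remove to revert to the legacy locking behaviour.
[mysqldump]
single-transaction
```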

    2. Testing a backup to make sure it can be restored is crucial, but many sites don’t do it. I don’t know of any backup tools that offer a way to verify backups.

    Here’s a suggestion: while mysqldump creates its output, it should also output a simulated log file of what the output of the mysql client *should* be during a restore of that dump. Then provide a way to restore a dump file, but change the storage engine of all tables to BLACKHOLE. Test restore in this mode, and save the output. Then simply diff this output against the output saved during the dump. If they don’t match, there’s a problem.

    That would at least verify that the correct *number* of rows are restored. Of course this isn’t the same as verifying the data, but it’s impractical for sites with terabytes of data to test restoration. That’s why they don’t do it. Restoring to blackhole would allow them to test at least partially, and it would be quick and require no space.
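The verification flow proposed above could be sketched roughly as follows (test_restore is a hypothetical scratch schema, and expected.log stands in for the log mysqldump would have saved at dump time):

```shell
# 1. Rewrite engines to BLACKHOLE so the trial restore consumes no space.
sed 's/ENGINE=InnoDB/ENGINE=BLACKHOLE/g' dump.sql > dump-blackhole.sql

# 2. Replay the dump verbosely into a scratch schema, capturing row counts.
mysql -vvv test_restore < dump-blackhole.sql > restore.log

# 3. Any mismatch in the diff signals a problem with the backup.
diff expected.log restore.log
```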

    3. You mention mydumper as an inspiration. This tool actually solves a lot of use cases for mysqldump:
    – parallel backup
    – parallel restore
    – compressed output
    – file-per-table output
    – separate DDL output
    – progress reporting
    – scheduling backups
    – streaming output to a remote server
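For comparison, the parallel workflow mydumper offers today looks roughly like this (the directory path is a placeholder; see mydumper’s own documentation for the full option list):

```shell
# Dump with four worker threads, compressing each per-table file.
mydumper --threads 4 --compress --outputdir /backups/db1

# Restore the same directory in parallel with the companion tool.
myloader --threads 4 --directory /backups/db1
```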

    4. Regarding delayed index creation, this would be great, but first this bug must be fixed: “MySQL uses significantly more memory for ALTER TABLE than expected”

    • Great suggestions, Bill. With you on the --single-transaction naming. My suggestion would be to have it on by default unless --legacy is enabled (which would require locking for MyISAM). I am not sure how practical this is, though.

  • Domas Mituzas

    I use mysqldump with new MDL functionality in 5.5 and 5.6 to completely lock up servers for no reason.

  • ronaldbradford

    One of the biggest downers in MySQL cloud offerings is restricted privileges and backup tools. This leaves mysqldump as almost the only tool you can use. However, to use it properly --single-transaction is never enough; you have to add --master-data (or --dump-slave) for a backup to ACTUALLY BE USABLE.

    However, that requires more privileges, and AWS, for example, does not give you them; instead you get: mysqldump: Couldn’t execute ‘FLUSH TABLES WITH READ LOCK’: Access denied for user.

    So it’s impossible to actually produce a consistent backup with the ability to do point-in-time recovery.

    This is not the fault of mysqldump. However, for MySQL to keep up with the times, interactions such as this are important to make mysqldump usable in the widest possible range of use cases.

    FWIW, --dump-slave is a poorly named option when used in the context of --master-data.

    • As long as there is no DDL running, and all tables are InnoDB, it’s technically possible to fetch binlog coordinates without running FLUSH TABLES WITH READ LOCK. So maybe there is a feature request in your description: “InnoDB-only hot backup with binlog coordinates, without locking”.

      Agree on the usefulness of --master-data and --dump-slave. With GTIDs these should be the same thing; it would be great to be able to include this information by default.
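The idea of reading binlog coordinates without FLUSH TABLES WITH READ LOCK can be sketched as below; note this is only safe under the stated assumptions (all tables InnoDB, no concurrent DDL), and as written it is not race-free, which is why it reads as a feature request rather than a recipe:

```shell
# Open a consistent snapshot and read the coordinates in the same session;
# a real implementation would need the server to pin these atomically.
mysql -e "START TRANSACTION WITH CONSISTENT SNAPSHOT; SHOW MASTER STATUS;"
```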

  • JD

    I would love to see mysqldump be able to limit table columns. I have in the past had issues with a primary key auto-increment ID column when combining databases. I could then use --complete-insert and leave out the primary key column, so I could easily insert the rows back and fix the primary key issues. I use mysqldump all the time, though, and it usually works great.

  • If it’s going to take the form of mydumper, you have to be able to control where each dump file goes as well, i.e. pipe it somewhere else instead of just to an SQL file.

  • “Option to dump as MyISAM, restore as InnoDB”: better make that “--restore-as <engine>” and not limited to InnoDB (restore as NDB is also needed). I would like to have a consistent dump (--master-data/--single-transaction) which is split into a file per database or a file per table. And --single-transaction should emit a warning if it encounters MyISAM tables which are not system tables. (I hope that --restore-as will leave those system tables as MyISAM, unless the target engine is NDB and NDB is a supported engine for that system table.)

    • Great suggestions! Love the --restore-as idea and the --single-transaction warning.

      • Justin Swanhart

        Sorry to wake up an old thread, but --no-engine would actually be better; then you can just use the default storage engine when importing, or an alternate one via default_storage_engine.
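Until something like --restore-as or --no-engine exists, the engine swap discussed in this thread is commonly done by rewriting the dump before loading it. A small, self-contained sketch:

```shell
# A tiny stand-in for real mysqldump output.
printf 'CREATE TABLE t (id INT) ENGINE=MyISAM DEFAULT CHARSET=utf8;\n' > dump.sql

# Rewrite the storage engine clause before restoring.
sed 's/ENGINE=MyISAM/ENGINE=InnoDB/g' dump.sql > dump-innodb.sql

cat dump-innodb.sql
```

(Dropping the ENGINE clause entirely, as --no-engine would, lets the server’s default_storage_engine decide on import instead.)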

  • Marcelo Altmann

    As you said in your post, I would like the dump/restore process to be split across multiple parallel threads

  • Eric Bergen

    Fix some of the silly bugs that have been open for years?

    • shantanuo

      This bug (#48972) seems to be really silly. But Bug #20786, “mysqldump always includes AUTO_INCREMENT”, is really annoying.
      Bugs #39233 and #62069 are feature requests I would like to see implemented.

  • Greg Lane

    Split up --routines into --procedures and --functions, and allow selecting specific ones, as the --tables option does

    • Interesting suggestion. I would like to see --routines included by default as well (currently it is not).

      • Greg Lane

        When I was first learning about MySQL, I thought procedures and functions were dumped automatically, and wondered why none ever showed up when I restored dumps in some testing :). I was always looking for “procedures” and “functions”, not “routines”, in the options.

  • Koa McCullough

    Regarding the ‘Take inspiration from mydumper’ comment: I’m all for seeing a mydumper-like tool as part of the default MySQL utilities, but the nature of mydumper is so different from traditional mysqldump (i.e. it writes files rather than acting as a stream that can be piped to other operations) that it seems to warrant a new utility. One option that would address parallel restore, file-per-table output, separate DDL, etc. while maintaining the ability to do streaming operations would be to allow mysqldump to output tarballs (though getting the tar headers correct while streaming would be tricky). Just some random thoughts on the subject.

  • Günter Grodotzki

    I wish I could use mysqldump the way I did when I started using MySQL. Meaning: simple dumping of databases to a single file, either as a backup, a replication init, or a server migration.

    Nowadays, when working with big data, it is impossible to use mysqldump and I have to use tools like XtraBackup.

    As a newbie: why can’t MySQL just provide a simple tool for backing up single or multiple databases that works well with InnoDB and does not lock up your whole website?

    • You can already use mysqldump without locking, if you use only InnoDB tables and the --single-transaction option.

  • Sébastien Lavoie

    I just wrote an article about how I want newlines before rows when using extended inserts. This makes dumps more readable and is really helpful when comparing outputs using diff.
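Until mysqldump grows such an option, the newline-before-rows formatting can be approximated with a post-processing pass. A self-contained sketch:

```shell
# A stand-in for one extended-insert line from mysqldump.
printf "INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');\n" > dump.sql

# Put each row on its own line so diff can compare dumps row by row.
sed 's/),(/),\n(/g' dump.sql > dump-pretty.sql

cat dump-pretty.sql
```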

    On the other hand, I use this when dumping:

    --single-transaction --disable-keys --routines --triggers --events --extended-insert --create-options --skip-lock-tables --add-drop-table --add-drop-trigger --dump-date --comments --quote-names --quick --set-charset

    Or, more simply (using shorthands and excluding defaults):

    --opt --single-transaction --routines --triggers --events --skip-lock-tables --add-drop-trigger