How important is using the correct datatype in MySQL?

Frequently in performance talks I hear speakers stress the importance of using the correct datatypes for storing values in columns: for example, representing a number with an INT or BIGINT, storing IP addresses as INT UNSIGNED, and using VARCHAR(60) instead of VARCHAR(255).
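For example, the IP address trick relies on MySQL’s built-in INET_ATON() and INET_NTOA() functions. A minimal sketch (the table and column names here are hypothetical):

CREATE TABLE access_log (
 id INT UNSIGNED NOT NULL PRIMARY KEY auto_increment,
 ip INT UNSIGNED NOT NULL
);

INSERT INTO access_log (ip) VALUES (INET_ATON('192.168.1.10'));
SELECT INET_NTOA(ip) FROM access_log; /* returns '192.168.1.10' */

Each IPv4 address is then stored in 4 bytes, rather than up to 15 characters plus a length byte as a VARCHAR(15).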

This advice is correct, but today I thought I would try my best to go into a bit more detail 🙂

The Reasons

I can think of three reasons why this advice is correct:

  1. Storing numeric data in string types incurs some added CPU overhead for character-set and collation work. It’s not free to make 'Montréal' = 'Montreal' = 'MONTREAL', but MySQL behaves this way by default (see the example after this list).

  2. Using correct data types will save space. By ‘space’, memory fit is usually more important than disk fit, since better memory fit improves cache efficiency. Disk fit can also be important with smaller SSDs.

  3. Some wire protocol and client library buffers are not variable in length. This is a little outside my knowledge area, but you should expect more memory consumption with larger VARCHAR values, so putting a little effort into judging expected lengths can help.
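To illustrate reason (1), here is what the default case- and accent-insensitive comparison looks like. This is a quick sketch using the utf8 character set and its default utf8_general_ci collation:

SET NAMES utf8;
SELECT 'Montréal' = 'Montreal' AS accent_insensitive,
       'Montreal' = 'MONTREAL' AS case_insensitive;
/* both comparisons return 1 */

None of this collation work is needed when the same values are stored in a numeric column.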

Memory Fit

I would rate reason (2) above as the most important of the three. In fact, I would say that a large majority of optimizations made by DBAs are attempts to stretch out memory fit for as long as possible. For example, indexes prevent table scans and allow you to focus on just the data you need.

Let’s take a look at memory-fit improvements by concentrating on data types:

  • Short primary keys. There is a lot of space to be saved by keeping your PRIMARY KEY as short as possible. InnoDB uses a clustered index, with the value of the PRIMARY KEY also included in each secondary index as an internal row pointer.

  • Smaller values for indexed columns. Indexed values are essentially duplicated because they appear in both the index and the table row, so any inefficiency is amplified. (The query after this list shows one way to measure how much space your indexes consume.)

    How much of a memory fit improvement you will gain depends on how the index is accessed. For example:

    • SELECT DISTINCT(indexed_col) FROM my_table requires all of the index to be in memory to be efficient.
    • Ranged access such as SELECT * FROM my_table WHERE indexed_col BETWEEN x AND y may also require much of the index to be in memory if the ranges are wide. However, this can also be true for non-ranged access (i.e. SELECT * FROM my_table WHERE indexed_col = x) if indexed_col is not unique and matches a large number of rows.
    • If the index has good cardinality (i.e. many distinct values), and the queries executed focus on a smaller percentage of values, then memory fit may be less important.
  • Smaller values for un-indexed columns. It is also important to make sure that any un-indexed columns are small, but to a lesser degree. The case where I expect it to matter most is if your workload includes table scans, or if ‘hot’ rows are highly scattered amongst a large number of pages.

    Small side-note: MySQL 5.6 allows you to change the InnoDB page size, which may help improve memory fit in this case too.
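As mentioned in the list above, a rough way to see how much space your data and indexes occupy is to query information_schema (replace the schema name with your own):

SELECT table_name,
       ROUND(data_length/1024/1024) AS data_mb,
       ROUND(index_length/1024/1024) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'my_schema'
ORDER BY (data_length + index_length) DESC;

Comparing these numbers against your InnoDB buffer pool size gives a first approximation of memory fit.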

That’s all I can think of. What am I missing?

Explicit Partition Selection in MySQL 5.6

In case you missed it, MySQL 5.6 added support for explicit partition selection. Since its release in MySQL 5.1, I have found partitioning an incredibly useful feature for aging out time-series data, and I plan to put this new syntax to work.
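For context, the aging-out itself is typically just a DROP PARTITION, which is far cheaper than deleting rows. A one-line sketch, with hypothetical table and partition names:

ALTER TABLE metrics DROP PARTITION p2009;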

Today I wanted to show two hypothetical examples of how it can be used.

Consider it an optimizer hint

MySQL is usually able to optimize a query to search only the partitions which will be required. For example here we can see there are 4 partitions but 2 are searched:

CREATE TABLE t1 (
 id INT NOT NULL PRIMARY KEY auto_increment,
 cola char(255)
)
PARTITION BY RANGE(id) (
 PARTITION p0 VALUES LESS THAN (64),
 PARTITION p1 VALUES LESS THAN (128),
 PARTITION p2 VALUES LESS THAN (192),
 PARTITION p3 VALUES LESS THAN MAXVALUE
);

EXPLAIN PARTITIONS SELECT * FROM t1 WHERE id BETWEEN 120 AND 140\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t1
   partitions: p1,p2
         type: range
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 2
        Extra: Using where
1 row in set (0.00 sec)

This feature is called partition pruning.

However, similar to optimizer hints, there may be cases where you are in a better position to decide which partitions should be accessed. A (poor) example to demonstrate this is to alter the previous query and pass the value 120 through a non-deterministic function:

CREATE FUNCTION myint (i INT)
RETURNS INT NOT DETERMINISTIC
RETURN i;

mysql> EXPLAIN PARTITIONS SELECT * FROM t1 
WHERE id BETWEEN myint(120) AND 140\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t1
   partitions: p0,p1,p2,p3 <-- requires all partitions
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 4
        Extra: Using where
1 row in set (0.00 sec)

mysql> EXPLAIN PARTITIONS SELECT * FROM t1 PARTITION (p1, p2) 
WHERE id BETWEEN myint(120) AND 140\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t1
   partitions: p1,p2 <-- searches two partitions mentioned
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2
        Extra: Using where
1 row in set (0.00 sec)

Workaround: partitioning columns required in the primary key

I am not sure I recommend this suggestion, but for some applications retrofitting partitioning is difficult because it may require modifying the primary key. Here is an example:

mysql> CREATE TABLE invoices (
->      id INT NOT NULL PRIMARY KEY auto_increment,
->      created_at datetime NOT NULL,
->      customer_id INT NOT NULL,
->      total_amount DECIMAL(10,2)
->     )
->      PARTITION BY RANGE (YEAR(created_at)) (
->       PARTITION p2009 VALUES LESS THAN (2010),
->       PARTITION p2010 VALUES LESS THAN (2011),
->       PARTITION p2011 VALUES LESS THAN (2012),
->       PARTITION p2012 VALUES LESS THAN (2013),
->       PARTITION pCurrent VALUES LESS THAN MAXVALUE
->     );
ERROR 1503 (HY000): A PRIMARY KEY must include all 
columns in the table's partitioning function

It is possible to use the new explicit selection as a way of bypassing this requirement by partitioning by range on the primary key, and then naming the partitions as something useful. For example:

CREATE TABLE invoices (
 id INT NOT NULL PRIMARY KEY auto_increment,
 created_at datetime NOT NULL,
 customer_id INT NOT NULL,
 total_amount DECIMAL(10,2)
)
 PARTITION BY RANGE (id) ( 
  PARTITION p2009 VALUES LESS THAN (324334),
  PARTITION p2010 VALUES LESS THAN (673898),
  PARTITION p2011 VALUES LESS THAN (1203248),
  PARTITION p2012 VALUES LESS THAN (1703940),
  PARTITION pCurrent VALUES LESS THAN MAXVALUE
);

Application queries can then be written as:

SELECT * FROM invoices PARTITION(p2009) WHERE customer_id=2134;

The reason I said I don’t recommend this is that if a created_at date were to be modified, the row could suddenly find itself in the ‘wrong’ partition. With the typical usage (partitioning by range on created_at), the row would be moved to the correct partition automatically upon modification.

How do you use partitioning?
Do you find explicit partition selection useful?
Leave a comment!

The future of ALTER IGNORE TABLE syntax

As part of the ongoing effort to refactor MySQL and make it better, we are defining the behaviour of certain features and in some cases reimplementing them in a more maintainable fashion.

I have previously blogged on:
* A proposal to simplify SQL_MODE options.
* The Federated Storage Engine.
* The Query Cache.
* The deprecation of EXPLAIN PARTITIONS and EXPLAIN EXTENDED.
* The deprecation of NULL synonym \N.

Today what I want to write about, is the IGNORE option in ALTER TABLE. From the MySQL manual:

“IGNORE is a MySQL extension to standard SQL. It controls how ALTER
TABLE works if there are duplicates on unique keys in the new table
or if warnings occur when strict mode is enabled. If IGNORE is not
specified, the copy is aborted and rolled back if duplicate-key
errors occur. If IGNORE is specified, only the first row is used of
rows with duplicates on a unique key. The other conflicting rows are
deleted. Incorrect values are truncated to the closest matching
acceptable value.”

This creates several issues for the MySQL server team:

  1. IGNORE could remove rows from a parent table when using a foreign key relationship.
  2. IGNORE makes it impossible to use InnoDB Online DDL for several operations, for example adding a PRIMARY KEY or UNIQUE INDEX.
  3. IGNORE has some strange side-effects for replication. For example: DDL is always replicated via statement-based replication, and since SQL does not imply ordering, it’s not clear which rows will be deleted as part of the ignore step. I also see cross-version replication as problematic if future MySQL versions were to introduce more strictness, since a slave might de-duplicate more rows than its master.

The most common case

We believe that the most common use case for IGNORE is to be able to add a UNIQUE INDEX on a table which currently has duplicate values present. i.e.

ALTER IGNORE TABLE users ADD UNIQUE INDEX (emailaddress);

In this scenario, a novice user manages to avoid auditing each entry in the users table, and simply lets MySQL pick a row to be kept, with all duplicates automatically removed.

There are two other ways to achieve this:

Hand removal

Using the same example as above, return a list of email addresses and PRIMARY KEY values for records that conflict:

SELECT GROUP_CONCAT(id), emailaddress, count(*) as count FROM users 
GROUP BY emailaddress HAVING count >= 2;

/* delete or merge duplicate from above query */

ALTER TABLE users ADD UNIQUE INDEX (emailaddress);

Note: This method will be the fastest, since without IGNORE, MySQL is able to use InnoDB’s Online DDL.

New table + INSERT IGNORE

While this method looks very similar, internally its semantics are quite different:

CREATE TABLE users_new LIKE users;
ALTER TABLE users_new ADD UNIQUE INDEX (emailaddress);
INSERT IGNORE INTO users_new SELECT * FROM users;
DROP TABLE users;
RENAME TABLE users_new TO users;

By creating a table first, the MySQL server will not have to manage rows in a foreign key relationship. The rows will also be re-sent to the slave using row-based replication, so issue (3) I mentioned above does not come into play.

Conclusion

We are looking for feedback on how you use this feature.

Is there another common use-case we are not accounting for?

Would it be acceptable to add a feature to MySQL Workbench to assist in preparing a table for adding a unique index (similar to hand removal)?

Help shape the future of MySQL. Please leave a comment or contact me.

A closer look at Online DDL in MySQL 5.6

MySQL 5.6 introduced online DDL for InnoDB tables, which means that a number of ALTER TABLE statements no longer block access to data modification while running.

I’ve written about this feature before: it was enough to make one of my freelancing customers fall in love with MySQL again. Today I wanted to go through what’s covered and what’s not.

The Manual

The MySQL manual actually does all the work for me, since there is a table under 14.2.12.1. Overview of Online DDL.

The particular column you want to pay attention to is “Allows Concurrent DML?”, which means that neither reads nor writes are blocked.

Each operation below is listed with the manual’s four columns (In-Place?, Copies Table?, Allows Concurrent DML?, Allows Concurrent Query?) followed by its notes. An asterisk means the answer depends on conditions described in the notes.

  • CREATE INDEX, ADD INDEX
    In-Place: Yes*. Copies Table: No*. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Some restrictions for FULLTEXT index; see next row. Currently, the operation is not in-place (that is, it copies the table) if the same index being created was also dropped by an earlier clause in the same ALTER TABLE statement.

  • ADD FULLTEXT INDEX
    In-Place: Yes. Copies Table: No*. Allows Concurrent DML: No. Allows Concurrent Query: Yes.
    Creating the first FULLTEXT index for a table involves a table copy, unless there is a user-supplied FTS_DOC_ID column. Subsequent FULLTEXT indexes on the same table can be created in-place.

  • DROP INDEX
    In-Place: Yes. Copies Table: No. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Modifies .frm file only, not the data file.

  • OPTIMIZE TABLE
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Uses ALGORITHM=INPLACE as of MySQL 5.6.17. ALGORITHM=COPY is used if old_alter_table=1 or the mysqld --skip-new option is enabled. OPTIMIZE TABLE using online DDL (ALGORITHM=INPLACE) is not supported for tables with FULLTEXT indexes.

  • Set default value for a column
    In-Place: Yes. Copies Table: No. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Modifies .frm file only, not the data file.

  • Change auto-increment value for a column
    In-Place: Yes. Copies Table: No. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Modifies a value stored in memory, not the data file.

  • Add a foreign key constraint
    In-Place: Yes*. Copies Table: No*. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    To avoid copying the table, disable foreign_key_checks during constraint creation.

  • Drop a foreign key constraint
    In-Place: Yes. Copies Table: No. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    The foreign_key_checks option can be enabled or disabled.

  • Rename a column
    In-Place: Yes*. Copies Table: No*. Allows Concurrent DML: Yes*. Allows Concurrent Query: Yes.
    To allow concurrent DML, keep the same data type and only change the column name.

  • Add a column
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes*. Allows Concurrent Query: Yes.
    Concurrent DML is not allowed when adding an auto-increment column. Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Drop a column
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Reorder columns
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Change ROW_FORMAT property
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Change KEY_BLOCK_SIZE property
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Make column NULL
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Make column NOT NULL
    In-Place: Yes*. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    When SQL_MODE includes strict_trans_tables or strict_all_tables, the operation fails if the column contains any nulls. Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.

  • Change data type of column
    In-Place: No. Copies Table: Yes. Allows Concurrent DML: No. Allows Concurrent Query: Yes.

  • Add primary key
    In-Place: Yes*. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. ALGORITHM=INPLACE is not allowed under certain conditions if columns have to be converted to NOT NULL. See Example 14.9, “Creating and Dropping the Primary Key”.

  • Drop primary key and add another
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    ALGORITHM=INPLACE is only allowed when you add a new primary key in the same ALTER TABLE; the data is reorganized substantially, so it is still an expensive operation.

  • Drop primary key
    In-Place: No. Copies Table: Yes. Allows Concurrent DML: No. Allows Concurrent Query: Yes.
    Restrictions apply when you drop a primary key without adding a new one in the same ALTER TABLE statement.

  • Convert character set
    In-Place: No. Copies Table: Yes. Allows Concurrent DML: No. Allows Concurrent Query: Yes.
    Rebuilds the table if the new character encoding is different.

  • Specify character set
    In-Place: No. Copies Table: Yes. Allows Concurrent DML: No. Allows Concurrent Query: Yes.
    Rebuilds the table if the new character encoding is different.

  • Rebuild with FORCE option
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Uses ALGORITHM=INPLACE as of MySQL 5.6.17. ALGORITHM=COPY is used if old_alter_table=1 or the mysqld --skip-new option is enabled. Table rebuild using online DDL (ALGORITHM=INPLACE) is not supported for tables with FULLTEXT indexes.

  • Rebuild with “null” ALTER TABLE ... ENGINE=INNODB
    In-Place: Yes. Copies Table: Yes. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Uses ALGORITHM=INPLACE as of MySQL 5.6.17. ALGORITHM=COPY is used if old_alter_table=1 or the mysqld --skip-new option is enabled. Table rebuild using online DDL (ALGORITHM=INPLACE) is not supported for tables with FULLTEXT indexes.

  • Set table-level persistent statistics options (STATS_PERSISTENT, STATS_AUTO_RECALC, STATS_SAMPLE_PAGES)
    In-Place: Yes. Copies Table: No. Allows Concurrent DML: Yes. Allows Concurrent Query: Yes.
    Modifies .frm file only, not the data file.

Update: I have updated the table to reflect changes in MySQL 5.6.17.

Find that hard to remember?

What I also like about MySQL 5.6 is that it introduces syntax to force ALTER TABLE statements not to run if they cannot be performed online. Here is an example of using LOCK=NONE (the operation must allow concurrent DML):

mysql> CREATE TABLE a (id INT NOT NULL PRIMARY KEY, a char(100),
 b char(100));
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER TABLE a ADD INDEX idx_a (a), LOCK=NONE;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE a DROP PRIMARY KEY,  LOCK=NONE;
ERROR 1846 (0A000): LOCK=NONE is not supported. 
 Reason: Dropping a primary key is not allowed without 
 also adding a new primary key. Try LOCK=SHARED.

I recommend using this syntax whenever writing migration scripts so you are never surprised by how much impact they will have.
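The ALGORITHM clause can be used in the same defensive way, to assert that no table copy takes place. A quick sketch against the same table (the added column c is hypothetical):

/* Adding a column can be done in-place, so this asserts no copy and no locking: */
ALTER TABLE a ADD COLUMN c INT, ALGORITHM=INPLACE, LOCK=NONE;

/* Changing a column's data type is not an in-place operation (see the table above),
   so this statement is expected to fail rather than silently copy the table: */
ALTER TABLE a MODIFY b VARCHAR(200), ALGORITHM=INPLACE;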

What’s upcoming?

In general I would comment that the number of operations which became online in 5.6 is quite substantial. In MySQL 5.7, two additional improvements to online DDL are available, and are already noted in the MySQL 5.7 manual page:

  1. Extending a VARCHAR column is online.
  2. Renaming an index is online.
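Both of the above can be verified with the same assertion syntax. A sketch, assuming MySQL 5.7 and a hypothetical table t with a VARCHAR(60) column c and an index idx_c:

/* Extending a VARCHAR is in-place in 5.7, provided the required
   number of length bytes does not change: */
ALTER TABLE t MODIFY c VARCHAR(80), ALGORITHM=INPLACE, LOCK=NONE;

/* Renaming an index in 5.7 is an in-place metadata change: */
ALTER TABLE t RENAME INDEX idx_c TO idx_c_new, ALGORITHM=INPLACE, LOCK=NONE;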

Have you used Online DDL in 5.6? Please leave a comment and share your experiences.

Reducing MySQL durability for testing

A while ago, I wrote how to make MySQL perform durably in production. Today I want to do the opposite: configure MySQL (with the InnoDB storage engine) for when you really don’t care about your data.

Continuous integration/automated testing is a good example of when you want to do this, since you usually test for functionality and not performance characteristics. It also helps in situations when the data is very easy to rebuild.

Obligatory Disclaimer: These settings are not safe for production environments!

Configuration Settings

  • Keep the binary log disabled (i.e. comment out any log-bin lines in configuration). For testing, you don’t need to be able to point-in-time restore, or seed replicas with changes.

  • Set sync_frm=0. This makes any DDL (schema changes) unsafe, but removes an fsync that is required to make sure the changes are safely on disk.

  • Set innodb-flush-log-at-trx-commit=0. This reduces redo log file pressure as data is committed. I would rate this as one of the most important settings to change. The particular case when it will help the most is when you are running on hardware without a battery backed write cache.

  • Set innodb-doublewrite=0. This makes recovery unsafe, as InnoDB will not be able to recover from a partially written page. However, as with all these steps, we are working with the assumption that you will just throw out the data when that happens.

  • Set innodb-checksums=0. Page checksums are InnoDB’s mechanism to make sure the underlying hardware isn’t failing and corrupting data. It’s usually only a small hit (although more measurable with SSDs), and is only used on loading/unloading pages from disk.

  • Set innodb_support_xa=0. When enabled, this setting keeps the binary log and InnoDB synchronized via two-phase commit. Since we don’t care about recoverability, we can disable it.

  • Set innodb-log-file-size and innodb-buffer-pool-size appropriately large. I have a guide here. You can typically go much larger on the log files when you don’t care about recoverability – 4G or 8G.

  • In your session set unique_checks=0 and foreign_key_checks=0. These two options are optional, since they are the only recommendations listed here that change behaviour. When set, they remove internal constraint checking, which means that a unique key may not actually be unique.

    You may notice these options from files created by mysqldump.
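Pulling the settings above together, a minimal my.cnf sketch for a throwaway test instance might look like this (the sizes are assumptions to adjust for your hardware):

[mysqld]
# binary log stays disabled: no log-bin line
sync_frm                       = 0
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite             = 0
innodb_checksums               = 0
innodb_support_xa              = 0
innodb_log_file_size           = 4G
innodb_buffer_pool_size        = 4G

The optional session settings mentioned above are then applied per connection:

SET SESSION unique_checks=0, foreign_key_checks=0;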

What is an fsync

Many of these settings improve performance specifically because they reduce the number of fsync operations that MySQL needs to perform. An fsync is required to keep data safe, ensuring it is not sitting only in an in-memory buffer when power is lost.

But in making the buffers safe, fsync operations also reduce the amount of request re-ordering and merging that can be done at the layers below MySQL. This will typically drop performance substantially in all situations except when you have a RAID Controller with a battery backed cache.

A special note on libeatmydata

Stewart Smith also maintains a library to disable all fsync operations, called libeatmydata. I do not expect it to deliver large performance gains over the configuration recommended above, but I can think of one good situation to use it: when you want to keep the MySQL configuration the same in testing as it is in production. libeatmydata provides an external way to change MySQL’s behaviour, since it only requires one small change to how mysqld is started.

Did I miss anything? Please leave a comment.

Proposal to simplify SQL_MODE options

In the MySQL team, we’ve observed that having a large number of possible SQL modes creates confusion amongst users. For example, if you follow my advice on ‘recommended defaults for new applications’, it is:

sql-mode="STRICT_TRANS_TABLES,
  ERROR_FOR_DIVISION_BY_ZERO,
  NO_AUTO_CREATE_USER,
  NO_AUTO_VALUE_ON_ZERO,
  NO_ENGINE_SUBSTITUTION,
  NO_ZERO_DATE,
  NO_ZERO_IN_DATE,
  ONLY_FULL_GROUP_BY"

This is quite specific, and may be too much to expect of a novice user to set in configuration.

One proposal we are putting forward to address this is to simply reduce the number of options that are available. Namely:

  • Remove the options ERROR_FOR_DIVISION_BY_ZERO, NO_ZERO_DATE and NO_ZERO_IN_DATE.
  • Have these behaviours be enabled when turning on the strict mode (STRICT_TRANS_TABLES for transactional tables, STRICT_ALL_TABLES for all tables).

Which would then bring the list down to:

sql-mode="STRICT_TRANS_TABLES,
  NO_AUTO_CREATE_USER,
  NO_AUTO_VALUE_ON_ZERO,
  NO_ENGINE_SUBSTITUTION,
  ONLY_FULL_GROUP_BY"

This has some compatibility consequences of course. To understand the change a little more, let me explain what these options do:


ERROR_FOR_DIVISION_BY_ZERO

Produce an error in strict mode (otherwise a warning) when a division by zero (or MOD(X,0)) occurs during an INSERT or UPDATE. If this mode is not enabled, MySQL instead returns NULL for divisions by zero. For INSERT IGNORE or UPDATE IGNORE, MySQL generates a warning for divisions by zero, but the result of the operation is NULL.

For SELECT, division by zero returns NULL. Enabling this mode causes a warning to be generated as well.

NO_ZERO_DATE

In strict mode, do not permit '0000-00-00' as a valid date. You can still insert zero dates with the IGNORE option. When not in strict mode, the date is accepted but a warning is generated.

NO_ZERO_IN_DATE

In strict mode, do not accept dates where the year part is nonzero but the month or day part is 0 (for example, '0000-00-00' is legal but '2010-00-01' and '2010-01-00' are not). If used with the IGNORE option, MySQL inserts a '0000-00-00' date for any such date. When not in strict mode, the date is accepted but a warning is generated.
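To make the change concrete, here is a quick sketch of the division-by-zero case (the table name is hypothetical):

CREATE TABLE t1 (val INT);

SET SESSION sql_mode = '';
INSERT INTO t1 VALUES (1/0);   /* accepted: NULL is inserted */

SET SESSION sql_mode = 'STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO';
INSERT INTO t1 VALUES (1/0);   /* rejected with a division-by-zero error */

Under this proposal, setting STRICT_TRANS_TABLES alone would be enough to get the second behaviour.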


So in a sense, we are proposing that strict mode will become stricter and that each of these individual behaviours will no longer have their own on/off switches.

What do you think?

  • Do you agree that SQL_MODE usability is an issue?
  • Would this change cause you any compatibility issues? For example, we would be interested to hear a use case where you need to enable these options individually.
  • Do you have any other suggestions on how to improve SQL_MODE usability?

We value your input. Please leave a comment, or get in touch.