Help shape the future of SHOW ENGINE INNODB MUTEX

Before there was PERFORMANCE_SCHEMA, there was SHOW ENGINE INNODB MUTEX. As I wrote yesterday, these commands are useful for exposing internal locking operations that happen inside MySQL. They are mainly useful for MySQL developers, but have some appeal for DBAs as well.
As part of an ongoing effort to clean up the MySQL code-base and increase usability, the MySQL development team is interested in hearing feedback from users who currently use the command SHOW ENGINE INNODB MUTEX.
Specifically:

  • Is there a use case for the command that is not well satisfied by Performance Schema? (A rough equivalent is sketched after this list.)
  • Would you be adversely affected if SHOW ENGINE INNODB MUTEX were to be deprecated?
  • What tools do you use to analyze mutex contention, and what features could we add to benefit you?
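As a reference point for that first question, here is a sketch of a Performance Schema query that covers similar ground to SHOW ENGINE INNODB MUTEX (this assumes the relevant wait/synch instruments and consumers are enabled, which is not the default):

SELECT EVENT_NAME,
 SUM_TIMER_WAIT/1000000000 WAIT_MS,
 COUNT_STAR
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/synch/mutex/innodb/%'
ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;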

Please leave a comment, or get in touch. We would love to hear from you.

What is a mutex anyway?

While I was working the MySQL Workbench booth at MySQL Connect this year, an attendee came up and asked what a mutex was, and how important mutexes are to them in their role as a DBA.
It’s a good question. The short answer is:
Mutexes are mutually exclusive locks, designed to prevent concurrent access to resources where such access could result in unsafe conditions.
They are required as part of virtually all multi-threaded applications (not just MySQL), and are not something that you can specifically prevent from happening. This is a different concept from row locks and table locks.

Some examples

If the InnoDB buffer pool is full, free space needs to be created internally in order to load your desired page into memory. What you don't want is for another thread to grab your free slot in between the freeing and the loading.
When a page becomes too full, it may need to split data into other pages. There could be a thread deadlock if two adjacent pages reach this condition at once and can't decide who should proceed first.

Should you care as a DBA?

Yes, and no.
No, in the sense that you are not expected to fix these problems; they are internal to the operations of the database and require a MySQL developer to fix them.
Yes, in the sense that newer MySQL versions often change the way internal locking works if a mutex becomes hot for a large number of users. Hot is the terminology used to describe a mutex that has other threads blocked waiting on it for a large amount of time. This contention reduces scalability on multi-core machines, because some threads sit idle, essentially waiting for the ability to run.

Hot Mutex History

Here is an incomplete list of some of the mutexes worked on in newer versions of MySQL:
MySQL 5.7:

  • Index lock contention is reduced via introduction of a new SX lock mode. This improves performance of large tables.

  • The LOCK_thread_count mutex protected several independent internal server structures and variables, and was a bottleneck, particularly affecting server performance when many clients were connecting and disconnecting at once.

MySQL 5.6:

  • The kernel mutex is split into smaller, localized locks. This mutex previously covered many different internal tasks within InnoDB.
  • The LOCK_open mutex is split – this mutex used to protect the metadata of a table within the MySQL Server.
  • The default configuration was changed, so that InnoDB has 8 buffer pool instances when the buffer pool is > 1GB.

MySQL 5.5:

  • The buffer pool mutex can be split into multiple instances via the innodb_buffer_pool_instances option, reducing contention on the buffer pool.

How can you see if a mutex is hot?

Various metadata commands can expose mutexes that are hot. For example: SHOW ENGINE INNODB STATUS, SHOW ENGINE INNODB MUTEX and PERFORMANCE_SCHEMA. Operating-system-specific tools can also be used, such as oprofile or Linux perf.
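For example, here is a rough sketch with Linux perf (it assumes the perf package is installed and mysqld is running locally):

perf top -p $(pidof mysqld)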
Here is an example from SHOW ENGINE INNODB MUTEX (via dimitrik):

mysql> show engine innodb mutex;
+--------+----------------------------+-------------------+
| Type   | Name                       | Status            |
+--------+----------------------------+-------------------+
| InnoDB | log/log0log.c:775          | os_waits=26       |
| InnoDB | log/log0log.c:771          | os_waits=1        |
| InnoDB | buf/buf0buf.c:1208         | os_waits=3219     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=6990     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=4619     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=5627     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=7873     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=4466     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=16929    |
| InnoDB | buf/buf0buf.c:1208         | os_waits=19305    |
| InnoDB | buf/buf0buf.c:1208         | os_waits=16301962 |
| InnoDB | buf/buf0buf.c:1208         | os_waits=11649    |
| InnoDB | buf/buf0buf.c:1208         | os_waits=950471   |
| InnoDB | buf/buf0buf.c:1208         | os_waits=6545     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=4262     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=5642     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=7878     |
| InnoDB | buf/buf0buf.c:1208         | os_waits=387166   |
| InnoDB | fil/fil0fil.c:1559         | os_waits=1265     |
| InnoDB | srv/srv0srv.c:987          | os_waits=460452   |
| InnoDB | combined buf/buf0buf.c:900 | os_waits=38503    |
| InnoDB | log/log0log.c:832          | os_waits=184      |
| InnoDB | combined buf/buf0buf.c:901 | os_waits=77       |
+--------+----------------------------+-------------------+
23 rows in set (0.56 sec)

Here is an example of SHOW ENGINE INNODB STATUS (via the MySQL forums):

..
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 334026866, signal count 322155474
--Thread 139787989395216 has waited at ../../../storage/innobase/trx/trx0undo.c line 1684 for 165.00 seconds the semaphore:
Mutex at 0x7f23518408c0 created file ../../../storage/innobase/trx/trx0rseg.c line 147, lock var 1
waiters flag 1
--Thread 139788009264912 has waited at ../../../storage/innobase/btr/btr0cur.c line 463 for 165.00 seconds the semaphore:
S-lock on RW-latch at 0x7f230fbd2b48 created in file ../../../storage/innobase/buf/buf0buf.c line 550
a writer (thread id 139787989395216) has reserved it in mode exclusive
number of readers 0, waiters flag 1
Last time read locked in file ../../../storage/innobase/btr/btr0cur.c line 463
Last time write locked in file ../../../storage/innobase/btr/btr0cur.c line 463
--Thread 139787997423376 has waited at ../../../storage/innobase/trx/trx0roll.c line 921 for 15.00 seconds the semaphore:
Mutex at 0x7f23518408c0 created file ../../../storage/innobase/trx/trx0rseg.c line 147, lock var 1
waiters flag 1
--Thread 139788384470800 has waited at ../../../storage/innobase/btr/btr0cur.c line 463 for 15.00 seconds the semaphore:
X-lock on RW-latch at 0x7f230f558038 created in file ../../../storage/innobase/buf/buf0buf.c line 550
a writer (thread id 139788006655760) has reserved it in mode exclusive
number of readers 0, waiters flag 1
Last time read locked in file ../../../storage/innobase/btr/btr0cur.c line 463
Last time write locked in file ../../../storage/innobase/btr/btr0cur.c line 463
..

And one for PERFORMANCE_SCHEMA (via Mark Leith):

SELECT EVENT_NAME,
 SUM_TIMER_WAIT/1000000000 WAIT_MS,
 COUNT_STAR
FROM performance_schema.EVENTS_WAITS_SUMMARY_GLOBAL_BY_EVENT_NAME
ORDER BY SUM_TIMER_WAIT DESC, COUNT_STAR DESC LIMIT 30;
+---------------------------------------------------------+-------------+------------+
| EVENT_NAME                                              | WAIT_MS     | COUNT_STAR |
+---------------------------------------------------------+-------------+------------+
| wait/io/file/innodb/innodb_data_file                    | 150562.0345 |     182750 |
| wait/io/file/innodb/innodb_log_file                     |  77795.5425 |       8913 |
| wait/synch/rwlock/innodb/checkpoint_lock                |  11636.7894 |       1284 |
| wait/synch/rwlock/innodb/btr_search_latch               |   7429.2204 |   39677465 |
| wait/io/file/myisam/kfile                               |   7353.2737 |       2011 |
| wait/synch/mutex/innodb/kernel_mutex                    |   4115.0534 |   59198510 |
| wait/synch/mutex/innodb/fil_system_mutex                |   3473.8341 |    1263809 |
| wait/io/file/myisam/dfile                               |   2940.5701 |        641 |
| wait/synch/mutex/innodb/buf_pool_mutex                  |   2650.8666 |   33982979 |
| wait/synch/mutex/innodb/rw_lock_mutex                   |   1261.8025 |   18877546 |
| wait/io/file/sql/FRM                                    |    116.6419 |        863 |
| wait/synch/cond/sql/COND_thread_count                   |      1.4073 |          1 |
| wait/io/file/sql/pid                                    |      1.2654 |          3 |
| wait/synch/mutex/innodb/mutex_list_mutex                |      0.7675 |      16727 |
| wait/synch/mutex/innodb/rw_lock_list_mutex              |      0.4023 |       8331 |
| wait/io/file/sql/dbopt                                  |      0.2745 |         12 |
| wait/io/file/sql/casetest                               |      0.2041 |          5 |
| wait/synch/mutex/innodb/thr_local_mutex                 |      0.2009 |       2050 |
| wait/synch/mutex/mysys/THR_LOCK_open                    |      0.1993 |        989 |
| wait/synch/rwlock/innodb/trx_purge_latch                |      0.1436 |        255 |
| wait/io/file/sql/ERRMSG                                 |      0.1432 |          5 |
| wait/synch/rwlock/sql/LOCK_grant                        |      0.1375 |        188 |
| wait/synch/rwlock/sql/MDL_lock::rwlock                  |      0.1013 |        481 |
| wait/synch/mutex/sql/Query_cache::structure_guard_mutex |      0.0923 |        628 |
| wait/synch/mutex/mysys/THR_LOCK_myisam                  |      0.0781 |        283 |
| wait/synch/rwlock/sql/Query_cache_query::lock           |      0.0676 |        198 |
| wait/io/file/mysys/charset                              |      0.0561 |          3 |
| wait/synch/mutex/sql/LOCK_open                          |      0.0529 |        543 |
| wait/synch/mutex/mysys/THR_LOCK_lock                    |      0.0504 |        244 |
| wait/synch/mutex/innodb/srv_innodb_monitor_mutex        |      0.0424 |         75 |
+---------------------------------------------------------+-------------+------------+
30 rows in set (0.00 sec)

Query optimization versus caching

Today I wanted to look at the relative merits of different optimization paths that can be taken on a greenfield project. That is to say, one that has no constraints imposed on it by previous decisions, and has received little to no optimization work to date.
Specifically, the two optimizations I wanted to compare are optimizing MySQL versus caching. I should point out in advance that these optimizations are really orthogonal. The only thing that ties you to doing one versus the other is that they both consume the same resource: developer time.

Optimizing MySQL

This optimization typically starts by taking a look at the queries that are being sent to MySQL and running EXPLAIN over them. With some investigation, it is frequently possible to add an index or make a small tweak to the schema.
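As a hedged illustration (the orders table and customer_id column are hypothetical, not from any example above): if EXPLAIN reports a full table scan (type: ALL) with no possible_keys, an index is often the fix:

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
ALTER TABLE orders ADD INDEX (customer_id);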
Advantages:

  1. An optimized query is usually fast for all users accessing the application. Since indexes cut through data via logarithmic search (aka divide and conquer, like you would search a phone book), they also sustain performance somewhat with data growth. A cache masking an unindexed query can sometimes perform even worse as the data grows. With growth, users who don't hit the cache may have such poor performance that the application is not usable.
  2. Optimizing MySQL does not require you to worry about invalidating cache or serving stale data from a cache.
  3. Optimizing MySQL can keep the technology stack simpler, which makes it slightly easier to replicate and work with in development environments.

Disadvantages:

  1. Some queries cannot be improved via indexes alone and may require schema changes, which can be difficult to retrofit into some applications.
  2. Some schema changes may be for de-normalization (duplication of data). While this is a common technique for DBAs, it needs ownership to make sure that all places are updated by the application, or that triggers are installed to guarantee such changes.
  3. Some optimizations may be MySQL-specific. That is to say, if the underlying software is shipped to work on multiple databases, it is difficult to justify some of the more complex optimizations that go beyond adding indexes.

Adding a cache

This optimization requires one to profile the application, and move expensive processing away from MySQL and into a third-party cache such as memcached or Redis.
Advantages:

  1. This works really well when the application has expensive queries that are difficult to optimize within MySQL. For example: large aggregate/GROUP BY queries.
  2. Caching can be a good retrofit to increase throughput of the system, i.e. when slowness is the result of many people accessing the application at once.
  3. Caching may sit on top of another application more easily. For example: your application may just be a front-end to another software package that stores data in MySQL. It might be very difficult to make any database changes to that other application.
Disadvantages:

  1. If the data has many access patterns (i.e. it appears on many different pages in different formats), then invalidating the cache on update may be difficult and/or require stale data to be served. An alternative is to store the data in a more fine-grained cache, which has its own disadvantages, such as added latency from many cache fetches.
  2. Caching an object that is expensive to generate may create lurking performance cliffs for users that miss the cache (see Optimizing MySQL advantage #1). Good performance practice suggests that you should try to tighten the variance between users, not just the average (as caching tends to do).
  3. Naive caching implementations suffer from subtle bugs, such as cache stampedes. Just last week I helped someone whose database server was brought down by multiple users trying to regenerate the same cache contents at once. The correct solution is to introduce some level of locking to serialize the cache regeneration, as sketched below.
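Here is a minimal sketch of that kind of locking using MySQL's own advisory lock functions (the lock name is hypothetical; the same effect can be achieved in memcached or Redis):

/* only one client proceeds; others block for up to 10 seconds */
SELECT GET_LOCK('cache:regen:homepage', 10);
/* ... regenerate and store the cache entry ... */
SELECT RELEASE_LOCK('cache:regen:homepage');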

Conclusion

I typically recommend that users look at optimizing MySQL first, as I see this as initially the most elegant solution. But long term, most applications have a use case for implementing some level of both approaches.

How do you use mysqldump?

The MySQL development team is seeking feedback on how you use mysqldump!
Here is some of the feedback I have to pass along:

  • With InnoDB now the default (since MySQL 5.5) I would love to see mysqldump change to take advantage of this. For example:
    • mysqldump can do a hot backup with all InnoDB tables; all you need to do is include --single-transaction (see the example after this list). We need to find a way to make this the default behaviour so that applications are not blocked waiting.
    • Adding indexes should be delayed until after all data has been inserted into the table to take advantage of the InnoDB fast-index creation feature (InnoDB Plugin/5.5+).
    • Option to dump as MyISAM, restore as InnoDB.
  • An option to have progress reports while running. Even something as simple as '34/60 tables backed up' would be incredibly useful for beginners.
  • Take inspiration from mydumper. The user should be able to dump and restore in parallel 🙂
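For reference, a sketch of the hot-backup invocation mentioned in the first point (it assumes all tables are InnoDB; the output filename is arbitrary):

mysqldump --single-transaction --all-databases > backup.sql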

As with some of my other posts, please leave a comment or get in touch!

Commentary on MySQL slow query collection sources

The other day it struck me that MySQL applications have no fewer than four sources for collecting potentially slow queries for analysis, and that I actually find myself using three of the four methods available.
Here are the methods listed, and my commentary on where I will use each of them:

Application Logging/Monitoring

I enjoy using Application-level tools such as New Relic or xhprof to find slow experiences rather than just slow queries.
What I mean by this is that there is not a 1:1 mapping between pages and queries, so sometimes heading straight to the database can be the wrong decision. As I wrote on Monday, latency is a factor with N+1 queries, and I typically want to focus on queries in aggregate that exceed page generation goals. On the same note, I also pay less attention to queries that are part of background tasks and do not impact user experience.
I also like watching application monitoring tools like a hawk whenever I am running any operational changes (e.g. removing indexes that I am 99.99% confident are unused, or disabling the query cache). New Relic has about a 3 minute delay over real-time, but this is usually good enough for me, since it increases my confidence on top of whatever DB dashboards I am already looking at.

Performance Schema

In MySQL 5.6, the performance_schema can now instrument statements. In combination with ps_helper, I find this to be the easiest way to identify what has been running on a particular database server.
What I love about this is that it's enabled by default, and can stay enabled full time with a zero-second slow query threshold.
Maybe it's just anecdotal, but with performance_schema I also find that I can react very fast and won't miss anything. There used to be situations where I would need to stalk the PROCESSLIST, or make sure I lowered the long_query_time in time to see a particular problem.
(I should also note, that the MySQL Enterprise Monitor 3.0 uses the PERFORMANCE_SCHEMA to show statement performance. The Enterprise Tools team is one of the driving forces behind PERFORMANCE_SCHEMA – it’s great to see that the interfaces they use are available to all.)
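If you want to skip ps_helper and go straight to the underlying 5.6 table, here is a minimal sketch:

SELECT DIGEST_TEXT,
 COUNT_STAR,
 SUM_TIMER_WAIT/1000000000 WAIT_MS
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;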

Slow query log file

I use this for more of a long-play analysis, by setting long_query_time=0 and then letting it run for between 20 minutes and a few hours. When I have enough data in the log file, I restore long_query_time to a high value and then copy my slow log file across the network to another server for processing.
The best tool to process this file is pt-query-digest – I routinely give it a 20GB+ file, and I love the output format it provides, since it has a histogram with useful statistics.
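For illustration, here is a sketch of the capture toggles (the log file path is an assumption; remember that a SET GLOBAL change to long_query_time only affects new connections, hence the connection-pool bounce in the list below):

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;
/* ... record for ~20 minutes ... */
SET GLOBAL long_query_time = 10;

pt-query-digest /var/lib/mysql/slow-query.log > digest.txt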
Even with the introduction of performance_schema, there will probably always be a use for the slow query log in some form, but in 5.6 I find I am using this method less for ‘routine’ investigations. Here is why:

  • I counted it out – what I do is an eight-step process.
    1. Rotate the slow query log file (to avoid historical data skew)
    2. lower the long_query_time
    3. bounce applications that use connection pooling
    4. record for 20 minutes (watching dashboards for impact)
    5. increase the long_query_time
    6. rotate the log file again
    7. copy file to a different host
    8. run pt-query-digest
  • I am worried that in one of those eight steps, I will screw up and leave the long_query_time at zero seconds. I’ll find out when I have a midnight page saying I’m critically low on disk space.

What I think would make me use this method more is if I were able to specify a maximum size per slow log file (e.g. 2GB) and a maximum number of log files to retain (e.g. 10), and have MySQL manage rotation and deletion of old files for me.

Slow query log table

I don't use this method. Since the queries in the slow log table do not include a digest, it's not possible to aggregate them in any meaningful way. What you usually have to do is write a script to extract the slow query log table into the slow query log file format(!), and then run pt-query-digest.
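You can query the table directly, as in this sketch, but with only raw sql_text and no digest column there is nothing sensible to aggregate on:

SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY query_time DESC LIMIT 10;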
This feature may be helpful for cloud providers who do not want to provide local filesystem access to customers. However, at least in the case of Amazon RDS there is a way that you can access the slow query log files.


Update: A commenter on Google+ let me know that I forgot to mention network traffic inspection as a collection source. An oversight on my part. The best use case I have for this method is to work around step (3) I mentioned in the slow query log analysis – you don't need to bounce connections to start collecting queries.

Thank you to DUG-TO!

Thank you to the Drupal User Group Toronto for hosting me last night. My slides are now available online:

In response to some discussions:

  • There is a new ‘tree-like’ EXPLAIN format in MySQL 5.6 called FORMAT=JSON. You can see an example in this post.
  • innodb_buffer_pool_size is the setting for how much of your data + indexes can be cached in RAM, and it defaults to 128MB (far too small for most production environments). innodb_flush_method=O_DIRECT is the option to disable filesystem caching. Both are mentioned in my what to tune after installation guide, and both appear in the sketch after this list.
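To illustrate both points, here is a sketch (the 1G buffer pool value is an arbitrary example, not a recommendation):

EXPLAIN FORMAT=JSON SELECT * FROM Country WHERE Code = 'CAN';

# my.cnf
[mysqld]
innodb_buffer_pool_size = 1G
innodb_flush_method = O_DIRECT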

Thanks again!

How important is it to merge queries together?

Reading Ernie’s post today inspired me to write about something I’ve been wanting to write about for a while: how much of a performance impact you should expect from network latency.

Hypothetical Numbers

I’m going to throw out some web-application ‘back of the envelope’ numbers which I will then use for examples:

  • The goal is to generate a page within 200-500ms.
  • Network latency between the application server and DB server is 0.5-1ms.

There will be use cases with goals far more aggressive than this; for example, advertising server goals are closer to 50ms. I have also seen network latency well below 1ms. I encourage commenters to correct me where they disagree 😉

Analysis

So if we say that a typical web page requires 5-10 queries to render, you can see that the amount of time spent in network latency could actually be very low:

10 queries @ 1ms each = 10ms, or 5% of our 200ms goal.
10 queries @ 0.5ms each = 5ms, or 2.5% of our 200ms goal.

However, what I want to do is illustrate the effects of the N+1 query pattern. That is, some applications will naively repeat a query inside a looping structure. So using the numbers we have established, let's take a look at the difference between:

  • SELECT * FROM Country – 239 rows in 1 query.
  • SELECT * FROM Country WHERE code = '?' – 239 queries with one row each.

Using performance_schema + ps_helper, I can show the time that it takes to execute on the server:

mysql> select * from statement_analysis\G
*************************** 4. row ***************************
        query: SELECT * FROM `country`
    full_scan: *
   exec_count: 3
    err_count: 0
   warn_count: 0
total_latency: 3.41 ms
  max_latency: 1.17 ms
  avg_latency: 1.14 ms
    rows_sent: 717
rows_sent_avg: 239
 rows_scanned: 717
       digest: 53567ecd08977b34a4532202a10871f4
*************************** 6. row ***************************
        query: SELECT * FROM `country` WHERE CODE = ?
    full_scan:
   exec_count: 5
    err_count: 0
   warn_count: 0
total_latency: 1.19 ms
  max_latency: 249.08 us
  avg_latency: 238.78 us
    rows_sent: 5
rows_sent_avg: 1
 rows_scanned: 5
       digest: cc32c7d6ec9e2803dea1ff95f458520a

Because the SELECT * FROM Country retrieves every row, it is going to be a table scan (which is quite efficient in this case). We can see that the query time is 1.14ms to retrieve all 239 rows versus 0.239ms to retrieve just a single row.
Retrieving all 239 rows from MySQL as primary key lookups takes 0.239ms * 239 = 57ms.
However, once you account for 0.5ms network latency:
* The single step retrieval is 1.14 + 0.5 = 1.64ms.
* Row-at-a-time retrieval is 119.5ms + 57ms = 176.5ms.
And with 1ms network latency:
* The single step retrieval is 1.14 + 1 = 2.14ms.
* Row-at-a-time retrieval is 239ms + 57ms = 296ms.
This example is largely the “worst-case” situation, which I am using to illustrate an example. 2.14ms versus 296ms isn’t even comparable 🙂 To apply context – this difference will basically double my page generation goal of 200-500ms.

Merging Queries

My next example to run through is what happens when we merge queries together – in this case, retrieving multiple rows via primary key at once:

SELECT * FROM Country WHERE code IN ('AUS', 'USA', 'CAN');

instead of:

SELECT * FROM Country WHERE code = 'AUS';
SELECT * FROM Country WHERE code = 'USA';
SELECT * FROM Country WHERE code = 'CAN';

We have the time for the single row query above. Here is the timing information from ps_helper for the IN list query:

*************************** 8. row ***************************
        query: SELECT * FROM `Country` WHERE CODE IN (...)
    full_scan:
   exec_count: 4
    err_count: 0
   warn_count: 0
total_latency: 1.19 ms
  max_latency: 318.66 us
  avg_latency: 296.59 us
    rows_sent: 12
rows_sent_avg: 3
 rows_scanned: 12
       digest: b19ca11697506fac486dd35535c37c32

With 0.5ms network latency total time is:
* (0.29659ms + 1 roundtrip) = 0.8 ms total
* (0.23878ms x 3 + 3 roundtrips) = 2.2ms total
With 1ms network latency:
* (0.29659ms + 1 roundtrip) = 1.3 ms total
* (0.23878ms x 3 + 3 roundtrips) = 3.7ms total
So by merging 3 queries together, I very nearly received a 3x reduction in query time.
Another way to look at this is that for simple queries, the execution cost can be near-free on the MySQL side. Network latency really can matter.

Back of the Envelope Calculations

One of the best DBA questions I was ever asked in an interview was to estimate the required disk space for an application using a back-of-the-envelope calculation.
The idea behind a question like this is that it is a thought exercise. Its aim is to test competency better than simple questions that can be Googled for an answer (i.e. whether or not you know the answer is not important. You will in 2 minutes :)). Unfortunately I have also seen it negatively affect candidates who are not native English speakers, so be aware that your mileage may vary.
Today I wanted to recreate the back-of-the-envelope exercise that I was asked to complete. For simplicity, the question will be restricted to database size, and will not include RAM or IOPS estimations.

Scope of Problem

“We are designing a new system to store application users for a very demanding system. There is only one table, but it is expected to grow to 20 million records within the next 3 years. We need your help with hardware provisioning. Here is the table schema:”

CREATE TABLE users (
 id INT NOT NULL PRIMARY KEY auto_increment,
 first_name VARCHAR(60) NOT NULL,
 last_name VARCHAR(60) NOT NULL,
 user_name VARCHAR(30) NOT NULL,
 password BINARY(20) NOT NULL,
 email_address VARCHAR(100) NOT NULL,
 last_login_date TIMESTAMP NOT NULL,
 UNIQUE INDEX (user_name),
 INDEX (last_login_date),
 INDEX (last_name, first_name),
 INDEX (email_address)
) ENGINE=INNODB;

Assumptions you are allowed to make:
* Database is UTF-8.
* Average first name and last name is 10 characters / mostly single byte characters.
* Usernames average 8 characters in length / all single byte characters.
* Assume password is a SHA1 hash.
* Average email address is 20 characters / all single byte characters.
* Users are almost never deleted, and rarely change first_name, last_name or user_name after being created.

My Answer

The first step is to find out the size of the datatypes in the table and in its indexes. For the VARCHAR columns I will use the average lengths provided + 1 byte; for the other columns I can use the MySQL manual. So:

4 + 10+1 + 10+1 + 8+1 + 20 + 20+1 + 4 = 80 bytes per row.

Each index will have the length requirement of the data types in the index + the primary key:

             username: 8+1 + 4 = 13
      last_login_date: 4 + 4 = 8
last_name, first_name: 10+1 + 10+1 + 4 = 26
        email_address: 20+1 + 4 = 25

InnoDB also attaches approximately 13 bytes of metadata to each row (transaction ID and rollback pointer, plus a deleted flag). Indexes then fit into pages, which have additional overhead themselves. Let's try to account for page overhead with a 7% space premium (based on a 15/16ths fill-factor):

              PRIMARY: (80 + 13) * 1.07 = 99.51
             username: (13 + 13) * 1.07 = 27.82
      last_login_date: (8 + 13) * 1.07 = 22.47
last_name, first_name: (26 + 13) * 1.07 = 41.73
        email_address: (25 + 13) * 1.07 = 40.66

I would consider the 7% space premium optimistic, since this is the rate for newly created pages. Over time, pages will split as insertions arrive out of order and records are deleted. Secondary indexes also retain old values until purged, so they can 'bloat' via gaps created after modifications. I'm not sure of a good way to account for this, but I'm going to allocate another 25% to the indexes that I expect this will happen to:

              PRIMARY: 99.51 * 1 ~= 100 # Not affected
             username: 27.82 * 1.25 ~= 35
      last_login_date: 22.47 * 1.25 ~= 28
last_name, first_name: 41.73 * 1.25 ~= 52
        email_address: 40.66 * 1.25 ~= 51

So the final number I arrived at per row is:

100 + 35 + 28 + 52 + 51 = 266 bytes

Multiplied by 20 million, this equals 4.95GB ~= 5GB.
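Once real data exists, this estimate can be sanity-checked against information_schema (the schema name mydb is hypothetical):

SELECT table_name,
 data_length/1024/1024 AS data_mb,
 index_length/1024/1024 AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb' AND table_name = 'users';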
Now it’s time to factor in some global overhead that is somewhat fixed:

  • Let's assume innodb_log_file_size is 256M * 2 = 512MB. On a bigger system you can now make this much larger in MySQL 5.6.
  • Data dictionary / insert buffer / doublewrite buffer. These are all going to be very small, since I would expect very little need for the insert buffer on a 5GB table. Budget: 12MB.
  • UNDO information is probably going to be small for this table, since I would typically expect transactions to be short and modify 1 user at a time. I am not sure how to estimate this exactly, so I am going to budget 500MB.
  • I also have no way to estimate binary logs, and will budget 5GB for the purpose (a binary log defaults to 1GB before rotation, so this should be plenty of space; the largest concern I have is how frequently the last_login_date column is updated, and how much traffic that generates).

Which equals 6GB of overhead.
The last thing to plan for is free space, so that we will be able to run an ALTER TABLE operation without running out of disk space 😉
Typically I like to provision 3x the total database size, so I will always be able to stage a backup locally if I need to, and still have some juggle room without triggering Nagios free-space alarms. With a database the size of this system, I probably will be able to use the 3x rule. When 3x is not practical to provision, I will normally settle for 2x globally plus 3x of the largest table (and in this particular case, there is only one table in the whole system).
So with that math, my final recommendation would be:

5GB + 6GB = 11GB; 11GB * 3 = 33GB

Did I pass the test?
How would you answer the question?
If someone writes an answer that you like, please give them an upvote in the comments!

Proposal to remove InnoDB Monitor tables

Before there was information_schema and performance_schema, there were InnoDB Monitor tables.
The monitor tables are a way of communicating directly with the InnoDB storage engine by creating a table with a special name. Additional debugging information is then written to MySQL's error log. For example:

CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;
/* debugging output is now written to the error log */
DROP TABLE innodb_monitor;

There are a number of problems with this approach:

  1. By using a magic table name, there is no immediate feedback for typos or spelling errors in monitor table names. The user has to wait to see if the error log is written to, and then debug if not.
  2. The monitors will continually write to the error log as long as the magic table still exists. This can reduce the error log readability.
  3. Parsing and using the error log is non-trivial for automated tooling. It also requires local filesystem access, and the log-file path can be changed by configuration.
  4. It is difficult to apply privileges to the creation of magic tables, other than those required to CREATE or DROP tables. This is not specifically a security risk (local file system access is still required to view the log file), but semantically it makes more sense for these to require either the PROCESS or SUPER privilege.
  5. information_schema is the SQL-standard way to access metadata, and should be used wherever possible.

Deprecation Plan

The deprecation plan proposed for MySQL 5.7 is:

  • innodb_monitor. To be replaced with SET GLOBAL innodb_monitor=ON|OFF.
  • innodb_lock_monitor. To be replaced with SET GLOBAL innodb_lock_monitor=ON|OFF.
  • innodb_tablespace_monitor. To be removed. information_schema will become the recommended alternative.
  • innodb_table_monitor. To be removed. information_schema will become the recommended alternative.
  • innodb_mem_validate. To be removed. This depends on UNIV_MEM_DEBUG, which is not normally enabled even in debug builds.

So in all cases the magic table name (point #1 above) is removed, but in some cases the monitors remain to offer compatibility where the corresponding information_schema functionality may differ dramatically in meta-data available.

Conclusion

As with our other planned changes, we are seeking community feedback on how this change will affect users.
Do you use the InnoDB Monitors?
Will the deprecation plan affect you negatively?
Please leave a comment, or get in touch with me. Thanks!

Upgrading from the earlier MySQL RPM Format to Yum Repos

The packages that are available in the yum repos contain a number of enhancements over the RPM packages that are available from dev.mysql.com.
Norvald blogged about some of these enhancements earlier. Today I wanted to walk through a safe upgrade path, as the two package formats are not quite compatible with each other.

My Existing Installation

To start with, the packages I have installed came from “Red Hat Enterprise Linux 6 / Oracle Linux 6 (x86, 64-bit), RPM Bundle” on dev.mysql.com. You can check which packages you have installed with:

[root@localhost ~]# rpm -qa | grep -i mysql
MySQL-client-5.6.14-1.el6.x86_64
MySQL-embedded-5.6.14-1.el6.x86_64
MySQL-server-5.6.14-1.el6.x86_64
MySQL-shared-5.6.14-1.el6.x86_64
MySQL-devel-5.6.14-1.el6.x86_64
MySQL-test-5.6.14-1.el6.x86_64
MySQL-shared-compat-5.6.14-1.el6.x86_64

Uninstalling and Installing Yum repos

I recommend running yum update before removing anything, and then installing the new repo package:

yum update
yum localinstall http://repo.mysql.com/mysql-community-release-el6-3.noarch.rpm

After this step, stop MySQL (note the missing d in the dev.mysql.com packages):

service mysql stop

Now, with yum shell, it's possible to uninstall the existing packages (listed in 'My Existing Installation' above) and install the replacement packages from the yum repo in one step:

yum shell
 > remove MySQL-shared MySQL-devel MySQL-test MySQL-server MySQL-client MySQL-shared-compat MySQL-embedded
 > install mysql-server mysql-libs mysql-libs-compat
 > run
 > quit

Here was the summary output from my yum session:

=================================================================================================================================================================
 Package                                     Arch                   Version                      Repository                                                 Size
=================================================================================================================================================================
Installing:
 mysql-community-libs                        x86_64                 5.6.14-3.el6                 mysql-community                                           1.8 M
 mysql-community-libs-compat                 x86_64                 5.6.14-3.el6                 mysql-community                                           1.6 M
 mysql-community-server                      x86_64                 5.6.14-3.el6                 mysql-community                                            51 M
Removing:
 MySQL-client                                x86_64                 5.6.14-1.el6                 @/MySQL-client-5.6.14-1.el6.x86_64                         81 M
 MySQL-devel                                 x86_64                 5.6.14-1.el6                 @/MySQL-devel-5.6.14-1.el6.x86_64                          19 M
 MySQL-embedded                              x86_64                 5.6.14-1.el6                 @/MySQL-embedded-5.6.14-1.el6.x86_64                      432 M
 MySQL-server                                x86_64                 5.6.14-1.el6                 @/MySQL-server-5.6.14-1.el6.x86_64                        235 M
 MySQL-shared                                x86_64                 5.6.14-1.el6                 @/MySQL-shared-5.6.14-1.el6.x86_64                        8.4 M
 MySQL-shared-compat                         x86_64                 5.6.14-1.el6                 @/MySQL-shared-compat-5.6.14-1.el6.x86_64                  11 M
 MySQL-test                                  x86_64                 5.6.14-1.el6                 @/MySQL-test-5.6.14-1.el6.x86_64                          318 M
Installing for dependencies:
 mysql-community-client                      x86_64                 5.6.14-3.el6                 mysql-community                                            18 M
 mysql-community-common                      x86_64                 5.6.14-3.el6                 mysql-community                                           296 k
Transaction Summary
=================================================================================================================================================================
Install       5 Package(s)
Remove        7 Package(s)

MySQL should now be installed from the yum packages. You just have two more steps to complete – start it, and configure it to start on boot:

service mysqld start # note the added 'd'
chkconfig mysqld on

Your original /etc/my.cnf is even saved. You can compare it to /etc/my.cnf.rpmnew if you would like to consider switching to the new one.
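For example:

diff /etc/my.cnf /etc/my.cnf.rpmnew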
Still having problems? I recommend heading to the MySQL Forums. There is a section dedicated to Install & Repo help.