Proposal to change additional defaults in MySQL 5.7 (February Edition)

Following on from my two earlier posts, in the MySQL team we are proposing a new set of changes to defaults for MySQL 5.7:

Setting                         Old Default      New Default
log_slow_admin_statements       OFF              ON
log_slow_slave_statements       OFF              ON
long-query-time                 10               2
log-queries-not-using-indexes   OFF              ON
min-examined-row-limit          0                1000
MySQL Command Line Client       (no warnings)    +show_warnings
group_concat_max_len            1024             1M
max_allowed_packet              4M               64M
symbolic-links                  ON               OFF
temp-pool                       1                0
table_open_cache_instances      1                16

To explain these changes in more detail:

  • The slow query log remains disabled by default, but when enabled our hope is that we can make it immediately useful. A query exceeding 2 seconds will now be considered slow, and since queries that do not use indexes have the highest chance of causing future problems, they will also be logged unless they examine fewer than 1000 rows. The threshold of 1000 was chosen because it should eliminate the majority of false positives from queries retrieving a static set of records, such as navigation menu items, which are well suited to table scans.

    In proposing this change, we looked at the thresholds other products use to classify a query as slow. A popular web performance tool uses a default of 0.5 seconds as an acceptable time to render a full page server side, with 2 seconds (4x) slow enough to warrant logging. Our own MySQL Enterprise Monitor considers 100ms acceptable, and 400ms (4x) slow. We also considered that setting min-examined-row-limit to a non-zero value will now require an additional step for those who set their long-query-time to zero seconds. We would like to thank Daniel Black for suggesting we change our slow query log options.

  • The MySQL command line client has the ability to automatically print warnings to the screen as they occur. We feel that ON is the more useful default, as a user will typically not intend to execute statements that cause warnings. This behavior applies to both interactive and batch modes, and can be disabled with show_warnings=0. We would like to thank Ryuta Kamizono for this suggestion.
  • We find the GROUP_CONCAT() function in MySQL to be incredibly useful for summarizing results in an aggregate query. We also believe that the default maximum length for grouping values has not kept pace with the larger amounts of memory available on modern hardware. We would like to thank Shlomi Noach for this suggestion.
  • In MySQL 5.6 we increased the max_allowed_packet from 1M to a conservative 4M. We have received a lot of feedback on this change, and are proposing to increase the default to 64M for MySQL 5.7. We would like to thank Shlomi Noach for this suggestion.
  • Many of our configuration files for MySQL packages (as well as those that ship with Linux distributions) default to disabling symbolic links for security reasons. We feel that changing a de facto default to a compiled default will improve user experience (as well as security). We would like to thank Honza Horak for this suggestion.
  • The temp-pool option was originally created as a means to work around potential filesystem issues on Linux when using internal MyISAM based temp tables. The underlying OS issues have since been resolved, and disabling this option removes mutex contention for all disk based temp table engines (MyISAM and InnoDB).
  • We are proposing to increase the default table_open_cache_instances to 16 in order to reduce contention on internal mutexes when opening tables.
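
Taken together, the proposed changes map onto a my.cnf fragment like the following (a sketch based on the table above; option names use the underscore variable spellings where applicable, and the slow query log itself would still need to be enabled explicitly):

```ini
[mysqld]
# Slow query log tuning (the log itself remains OFF by default)
log_slow_admin_statements     = ON
log_slow_slave_statements     = ON
long_query_time               = 2
log_queries_not_using_indexes = ON
min_examined_row_limit        = 1000

# Larger buffers for modern hardware
group_concat_max_len = 1M
max_allowed_packet   = 64M

# Security and internal contention
skip-symbolic-links
temp-pool                  = 0
table_open_cache_instances = 16

[mysql]
# Print warnings after each statement in the command line client
show-warnings
```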

For those of you who would like to test out these changes (along with previous changes proposed), I have sample configuration files available.

Please let us know what you think of these changes!
You can leave a comment here, or get in touch via email.

Published by


I joined MySQL AB in 2006, left, and am now back at Oracle working on the MySQL team. I’ve also worked at Percona and InPowered.

  • Justin Swanhart

    So if you type in a long comment, THEN click Facebook to log in – BAM! Lost comment 🙁

    Anyway, I’d like to see @session.group_concat_max_len just default to @@max_allowed_packet. You’ve already committed to using up to max_allowed_packet amount of memory per query anyway. Almost every app I’ve seen that uses group_concat() [including mine] sets @session.group_concat_max_len := @@max_allowed_packet anyway.

    • I just had a similar experience replying to a comment 🙁

      This is an interesting question that has not been raised before. If we can eliminate a session variable, we would very likely be in favour of doing so.

      • Justin Swanhart

        You can’t remove the variable because it will break old scripts. I’d like it so that if you set it to zero (the new default) it means max_allowed_packet; otherwise it is the size specified.
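
        The behaviour Justin is asking for can be approximated today from the application side (a sketch; per-session, no server change required):

```sql
-- Mirror the suggestion above: let GROUP_CONCAT() results grow up to
-- the size the server would accept in a single packet anyway.
SET SESSION group_concat_max_len = @@global.max_allowed_packet;
```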

  • Simon J Mudd

    For people who are moving to 5.7, and especially given the number of settings you are planning on changing, it would be good to have a way to see what the effect of these changes might be compared to an existing 5.6 configuration.

    It would therefore be useful to have a simple tool which could:

    (1) explicitly set the default settings in /etc/my.cnf with 5.6 settings which are not set explicitly at the moment, so that current behaviour will be maintained in 5.7, or

    (2) explicitly reference those settings which are set in /etc/my.cnf but are different to the new 5.7 defaults so that the DBA can see which settings need to be _checked_ as part of the migration from 5.6 to 5.7 and may need to be adjusted

    So you might have a modified /etc/my.cnf (or a possible replacement file, say /etc/my.cnf.5.7) with:

    # === Section 1 ===
    # Explicitly configuring MySQL 5.6 settings which differ from 5.7 and were not set explicitly before:
    long-query-time = 10 # MySQL 5.7 default value: 2
    max_allowed_packet = 4M # MySQL 5.7 default value: 64M

    # === End of section 1 ===
    # === Section 2 ===
    # Settings defined in /etc/my.cnf which differ between 5.6 and 5.7
    # sync_binlog = 1 # 5.6 default: 0, 5.7 default: 1
    # innodb_file_format = Barracuda # 5.6 default: Antelope, 5.7 default: Barracuda

    # === End of section 2 ===

    It might be good to do a similar job against a running 5.6 server’s configuration, though probably just checking and modifying / commenting on the /etc/my.cnf file should be enough.

    Doing this should make the upgrade process easier as it will avoid you _needing_ to read through all the documentation on changes in 5.7. Yes, it’s good to do that but it’s a very long read, and such a script would help those who aren’t quite as immersed daily in MySQL and all its internals. Scanning through the 464 global variables to check which have changed and which of those might be configured is a horrendous task so a bit of help here would be good.
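
    As a stopgap until such a tool exists, part of section 2 can be approximated with a query on the running 5.6 server (a sketch; the IN() list would have to be extended to cover every default that changes in 5.7):

```sql
-- Current values of a few settings whose compiled defaults change in 5.7;
-- compare the output against the documented 5.7 defaults before upgrading.
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM information_schema.GLOBAL_VARIABLES
WHERE VARIABLE_NAME IN ('long_query_time', 'max_allowed_packet',
                        'sync_binlog', 'innodb_file_format');
```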

  • I avoid using log_slow_slave_statements=ON, because if an update is slow on both the master and the slave, and then I combine logs to process with pt-query-digest, certain queries are likely to be counted twice.

    log_queries_not_using_indexes=ON tends to create noise, because many queries won’t use indexes by design (e.g. SELECT COUNT(*) …) or they’re against tables that will never get larger (e.g. small lookup tables).

    +1 for enabling warnings in the mysql monitor. I do that in all my environments.

    I’m not clear why we don’t set max_allowed_packet to its maximum of 1G. Why not make that the default? Is it just to prevent apps from sending ridiculously long SQL IN()-lists?

    • A bit of trivia perhaps, but I reported the original bug on the slave not logging slow queries 🙂

      I can still remember filing that bug in 2006. I was working on a case where the slave had a different memory fit, and was executing queries different to the master. My opinion is that if it consumes resources it should default to being included. This will also help correlate other events around the same timing in the slow query log. Advanced users can then set log_slow_slave_statements=OFF if required.

      Re: log_queries_not_using_indexes=ON: Our hope is that we are avoiding slow lookup tables because our proposal also includes min-examined-row-limit=1000. SELECT count(*) without a WHERE clause will be expensive if the table grows to a large size, so I don’t think it’s unreasonable that it be logged. I consider this feature to be an “early warning system” for non-scalable queries.

      RE: max_allowed_packet: I’m not aware of any specific issues with setting it to 1G, but I think it’s good practice to keep buffers to within an order of magnitude of an expected range by default, and allow the special cases to increase if needed. Certainly with schemaless objects “rows” are getting wider, so this is something that may need revision again in the future.
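
      For anyone who wants to trial the slow query log proposal on a running server, all of the variables involved are dynamic on recent releases (a sketch; requires the SUPER privilege, and log_slow_admin_statements only became a system variable in 5.6.11):

```sql
SET GLOBAL slow_query_log                = ON;
SET GLOBAL long_query_time               = 2;
SET GLOBAL log_queries_not_using_indexes = ON;
SET GLOBAL min_examined_row_limit        = 1000;
SET GLOBAL log_slow_admin_statements     = ON;
SET GLOBAL log_slow_slave_statements     = ON;
```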

  • Sean Davidson

    log_queries_not_using_indexes=ON: I also have concerns about this one.
    If SHOW [VARIABLES|STATUS] still counts as a query without an index, many monitoring tools (i.e. Nagios, Monyog, many a custom script, etc.) will fill our logs with noise. To be clear, I’m not arguing such statements should or shouldn’t “count”, as I see value in each case. However, while min-examined-row-limit=1000 may be sufficient to eliminate many of the tables in information_schema and performance_schema, not all are or will remain < 1000 rows – and none of them have indexes. Yes, I do know: "This option does not necessarily mean that no index is used", but I don't know if/how a sufficiently selective WHERE/LIKE clause will exempt a query from being counted here. Regardless, many monitoring tools don't qualify such queries/statements, rather preferring to capture the lot and filter on the client-side.
    If these are the very (imho gratuitous) queries you want to capture, then by all means, set log_queries_not_using_indexes=ON, but I'm not sure it should be the default.

    • Hi Sean,

      I double checked how SHOW VARIABLES/STATUS behaves on my blog (since I’m running with these proposed defaults with MySQL 5.7):

      – They do not appear to be slow logged.
      – Strangely even if I drop min-examined-row-limit to 10 or 100 they are still not logged.

      I would consider this a convenient bug, and long term I think it makes sense to make them ‘slow eligible’. My rationale is that they consume server resources, so you don’t want any false readings. Some queries to information_schema can be very expensive, so I don’t think they should be excluded. But I would agree with you that 1K rows on performance_schema is very low since it is an in memory engine without indexes. This is something we will need to consider carefully.

      On the same topic, this exercise confirmed conclusively that min-examined-row-limit=100 is too low, as an out-of-the-box WordPress installation will generate a lot of log traffic 🙂
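
      The check described above can be reproduced along these lines (a sketch; assumes SUPER and that the slow query log is already enabled):

```sql
SET GLOBAL min_examined_row_limit = 10;   -- temporarily lower the threshold
SHOW STATUS;                              -- examines hundreds of rows, yet
                                          -- does not reach the slow log
SET GLOBAL min_examined_row_limit = 1000; -- restore the proposed default
```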