Testing performance of queries in MySQL

Asked by 南方客 on 2020-12-12 13:55

I am trying to set up a script that would test performance of queries on a development MySQL server. Here are more details:

  • I have root access
  • I am the
6 Answers
  • 2020-12-12 14:32

    Full-text queries (LIKE '%query%' statements) on InnoDB are slow, and there is little you can do to optimize them directly. Solutions range from converting the particular table you are querying to MyISAM so you can create FULLTEXT indexes (which InnoDB does not support), to denormalizing the row into searchable indexes (not recommended); Doctrine ORM provides an easy example of how to achieve the latter: http://www.doctrine-project.org/documentation/manual/1_1/nl/behaviors:core-behaviors:searchable. The "proper" solution to your problem would be to index the information you're running full-text searches on with a dedicated search engine such as Sphinx Search or Apache Solr.
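    For illustration, a minimal sketch of the MyISAM/FULLTEXT route (table and column names are hypothetical; note that MATCH ... AGAINST matches whole words, so it is not a drop-in replacement for arbitrary infix LIKE patterns):

    mysql> -- hypothetical table and column names
    mysql> ALTER TABLE articles ENGINE = MyISAM;
    mysql> ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);
    mysql> SELECT * FROM articles WHERE MATCH(body) AGAINST ('mysql');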

    As said before, you must consider the cache state when comparing results: a primed cache gives extremely fast responses. You should also consider the cache hit percentage of a particular query; even an expensive query with a 99% cache hit ratio will show very high average performance.
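    As a rough way to gauge this (the counters are server-wide rather than per-query, and the query cache was removed entirely in MySQL 8.0), you can watch the query cache status variables:

    mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';

    The overall hit ratio is then roughly Qcache_hits / (Qcache_hits + Com_select).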

    Fine-grained tuning of queries is not a silver bullet; you might be adding complexity to your application for the sake of optimizations that, overall, are negligible in a production environment.

    Consider your workload and troubleshoot frequent, poorly performing queries (use the slow query log in MySQL; don't blindly start optimizing queries).
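    A minimal sketch of enabling the slow query log at runtime (the one-second threshold is illustrative; on older versions the log may need to be configured in my.cnf and the server restarted):

    mysql> SET GLOBAL slow_query_log = 'ON';
    mysql> SET GLOBAL long_query_time = 1;   -- log statements slower than 1 second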

  • 2020-12-12 14:42

    Assuming that you cannot optimize the LIKE operation itself, you should try to optimize the base query around it, minimizing the number of rows that have to be checked.

    Some things that might be useful for that:

    The rows column in the EXPLAIN SELECT ... output, which shows how many rows MySQL estimates it will have to examine.
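    A hypothetical example (mytable and its column are placeholders):

    mysql> EXPLAIN SELECT * FROM mytable WHERE body LIKE '%query%';

    Then, profile the query: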

    mysql> set profiling=1;
    mysql> select sql_no_cache * from mytable;
     ...
    mysql> show profile;
    +--------------------+----------+
    | Status             | Duration |
    +--------------------+----------+
    | starting           | 0.000063 |
    | Opening tables     | 0.000009 |
    | System lock        | 0.000002 |
    | Table lock         | 0.000005 |
    | init               | 0.000012 |
    | optimizing         | 0.000002 |
    | statistics         | 0.000007 |
    | preparing          | 0.000005 |
    | executing          | 0.000001 |
    | Sending data       | 0.001309 |
    | end                | 0.000003 |
    | query end          | 0.000001 |
    | freeing items      | 0.000016 |
    | logging slow query | 0.000001 |
    | cleaning up        | 0.000001 |
    +--------------------+----------+
    15 rows in set (0.00 sec)
    

    Then,

    mysql> FLUSH STATUS;
    mysql> select sql_no_cache * from mytable;
    ...
    mysql> SHOW SESSION STATUS LIKE 'Select%';
    +------------------------+-------+
    | Variable_name          | Value |
    +------------------------+-------+
    | Select_full_join       | 0     |
    | Select_full_range_join | 0     |
    | Select_range           | 0     |
    | Select_range_check     | 0     |
    | Select_scan            | 1     |
    +------------------------+-------+
    5 rows in set (0.00 sec)
    

    Another interesting value is Last_query_cost, which shows how expensive the optimizer estimated the query to be (the value is the number of random page reads):

    mysql> SHOW STATUS LIKE 'last_query_cost';
    +-----------------+-------------+
    | Variable_name   | Value       |
    +-----------------+-------------+
    | Last_query_cost | 2635.399000 |
    +-----------------+-------------+
    1 row in set (0.00 sec)
    

    The MySQL documentation is your friend.

  • 2020-12-12 14:45

    Have you considered using Maatkit? One of its capabilities I'm slightly familiar with is capturing MySQL network traffic with tcpdump and processing the dump with mk-query-digest; that tool shows fine-grained details about each query (see the sketch below). The kit also contains a whole bunch of other tools that should make query analysis easier.
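    A rough sketch of that workflow, following the mk-query-digest documentation (the interface and packet count are illustrative; run this on the database host with sufficient privileges):

    $ tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
    $ mk-query-digest --type tcpdump mysql.tcp.txt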

  • 2020-12-12 14:45

    You could try MySQL Workbench; I thought it had a SQL statement monitor, so you can see how fast a statement is and why.

  • 2020-12-12 14:48

    As the linked article suggests, use FLUSH TABLES between test runs to reset as much as you can (notably the query cache).
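    For example (RESET QUERY CACHE explicitly empties the cache as well; both statements need privileges you have as root):

    mysql> FLUSH TABLES;        -- close tables and invalidate query cache entries
    mysql> RESET QUERY CACHE;   -- empty the query cache entirely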

    Shouldn't your testing take into account that InnoDB will itself pass through different states in real use, so that you become interested in aggregate performance over multiple trials? How "real" is your performance testing going to be if you reset InnoDB for every trial? The query you reject because it performs poorly immediately after a restart might be far and away the best query once InnoDB has warmed up a little.

    If I were you, I'd focus on what the query optimizer is doing separately from InnoDB's performance. There's much written about how to tune InnoDB, but it helps to have good queries to start.

    You could also try measuring performance with equivalent MyISAM tables, where FLUSH TABLES really will reset you to a mostly-identical starting point.

    Have you tried turning query caching off altogether? Even with SQL_NO_CACHE, there's about a 3% penalty just for having the query cache enabled.
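    A minimal sketch of doing that at runtime (these variables exist up to MySQL 5.7; the query cache was removed in 8.0):

    mysql> SET GLOBAL query_cache_type = OFF;   -- stop using the cache
    mysql> SET GLOBAL query_cache_size = 0;     -- release its memory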

  • 2020-12-12 14:49

    Cited from this page: the SQL_NO_CACHE option only affects caching of query results in the query cache. If your table is quite small, it is possible that the table itself is already cached. Since you only avoid caching of the results, not of the tables, you sometimes get the behavior described. So, as stated in the other answers, you should flush your tables between the queries.
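    Putting the pieces together, a minimal benchmarking sequence might look like this (mytable and the predicates are placeholders):

    mysql> FLUSH TABLES;   -- reset table and query caches before each timed run
    mysql> SELECT SQL_NO_CACHE COUNT(*) FROM mytable WHERE body LIKE '%query%';
    mysql> FLUSH TABLES;
    mysql> SELECT SQL_NO_CACHE COUNT(*) FROM mytable WHERE body LIKE '%other%';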
