query-optimization

PostgreSQL query runs faster with index scan, but engine chooses hash join

Submitted by 微笑、不失礼 on 2019-11-28 05:02:33
The query:

    SELECT "replays_game".*
    FROM "replays_game"
    INNER JOIN "replays_playeringame"
        ON "replays_game"."id" = "replays_playeringame"."game_id"
    WHERE "replays_playeringame"."player_id" = 50027

If I set SET enable_seqscan = off, then it does the fast thing, which is:

    QUERY PLAN
    ------------------------------------------------------------------------------------------------------
     Nested Loop  (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
       ->  Index Scan using replays_playeringame_player_id on
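As a minimal sketch of the same idea in SQLite (not PostgreSQL, so the planner and plan format differ), with an index on the filter column the engine can satisfy the join with an index search instead of a full scan. Table, column, and index names are borrowed from the question; the data is omitted for brevity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE replays_game (id INTEGER PRIMARY KEY);
    CREATE TABLE replays_playeringame (
        id INTEGER PRIMARY KEY,
        game_id INTEGER,
        player_id INTEGER
    );
    CREATE INDEX replays_playeringame_player_id
        ON replays_playeringame (player_id);
""")

# Ask the planner how it would run the question's query.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT replays_game.*
    FROM replays_game
    INNER JOIN replays_playeringame
        ON replays_game.id = replays_playeringame.game_id
    WHERE replays_playeringame.player_id = 50027
""").fetchall()

for row in plan:
    # Detail text, e.g. "SEARCH replays_playeringame USING INDEX ..."
    print(row[3])
```

In PostgreSQL the analogous knobs are the planner cost settings (e.g. `random_page_cost`), which influence whether the index-driven nested loop is estimated as cheaper than the hash join.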

MySQL EXPLAIN query understanding

Submitted by 社会主义新天地 on 2019-11-28 03:33:23
I've read on some blogs and in some articles about optimization how to optimize queries: I read that I need to use indexes and make sure all my primary and foreign keys are set correctly in a good relational database schema. Now I have a query I need to optimize, and I get this from EXPLAIN:

    Using where; Using temporary; Using filesort

I am using MySQL 5.5. I know I am using WHERE, but I am not using a temporary table or a filesort myself, so what does this mean? Using temporary means that MySQL needs to use a temporary table to store intermediate data calculated while executing your query. Using
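The MySQL-specific plan annotations have a close analogue in SQLite, which can illustrate the idea: when an ORDER BY cannot be satisfied by an index, the engine materializes and sorts the rows (SQLite reports "USE TEMP B-TREE FOR ORDER BY", roughly MySQL's filesort). A sketch with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, a INTEGER, b INTEGER);
    CREATE INDEX idx_a ON t (a);
""")

# Sorting on the indexed column: the index delivers rows in order.
sorted_by_index = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t ORDER BY a").fetchall()
# Sorting on the unindexed column: an explicit sort step is required.
sorted_by_temp = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t ORDER BY b").fetchall()

print([r[3] for r in sorted_by_index])
print([r[3] for r in sorted_by_temp])  # includes "USE TEMP B-TREE FOR ORDER BY"
```

The usual fix in MySQL is the same as here: add (or extend) an index so the GROUP BY/ORDER BY columns come back in the order the query needs.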

MySQL optimization for REGEXP

Submitted by ﹥>﹥吖頭↗ on 2019-11-28 01:58:04
This query (with different names instead of "jack") appears many times in my slow query log. Why? The Users table has many fields (more than the three I've selected) and about 40,000 rows.

    select name, username, id
    from Users
    where (name REGEXP '[[:<:]]jack[[:>:]]')
       or (username REGEXP '[[:<:]]jack[[:>:]]')
    order by name
    limit 0, 5;

id is the primary key and auto-increments. name has an index. username has a unique index. Sometimes it takes 3 seconds! If I EXPLAIN the SELECT in MySQL, I get:

    select type: SIMPLE
    table: Users
    type: index
    possible keys: NULL
    key: name
    key_len: 452
    ref: NULL
    rows:
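A regular expression cannot be driven by a B-tree index, so the engine must evaluate the pattern against every row, which is why this query is slow. A sketch of the same matching in SQLite, where REGEXP has to be supplied from Python (this mirrors the row-by-row evaluation; `\b` is Python's analogue of MySQL's `[[:<:]]`/`[[:>:]]` word boundaries, and the sample rows are invented):

```python
import re
import sqlite3

def regexp(pattern, value):
    # SQLite calls this for every candidate row: no index can help.
    return value is not None and re.search(pattern, value) is not None

conn = sqlite3.connect(":memory:")
conn.create_function("REGEXP", 2, regexp)
conn.executescript("""
    CREATE TABLE Users (id INTEGER PRIMARY KEY, name TEXT, username TEXT);
    INSERT INTO Users (name, username) VALUES
        ('jack sparrow', 'jsparrow'),
        ('jackson', 'jackson'),
        ('mary', 'jack');
""")

rows = conn.execute(
    "SELECT name, username, id FROM Users "
    "WHERE (name REGEXP ?) OR (username REGEXP ?) "
    "ORDER BY name LIMIT 0, 5",
    (r'\bjack\b', r'\bjack\b')).fetchall()
print(rows)  # whole-word matches only: 'jackson' is excluded
```

The common remedies in MySQL are a FULLTEXT index with `MATCH ... AGAINST`, or an external search engine, rather than per-row REGEXP.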

Why is the Django ORM so much slower than raw SQL?

Submitted by 笑着哭i on 2019-11-28 01:45:23
Question: I have the following two pieces of code. First, raw SQL:

    self.cursor.execute('SELECT apple_id FROM main_catalog WHERE apple_id = %s', (apple_id,))
    if self.cursor.fetchone():
        print '##'

Next, with the Django ORM:

    if Catalog.objects.filter(apple_id=apple_id).exists():
        print '>>>'

The first way is about 4x faster than the second over a loop of 100k entries. What accounts for Django being so much slower?

Answer 1: Typically ORMs go to the trouble of instantiating a complete object for each row and
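For an existence check specifically, the cheap pattern is to fetch a constant and stop at the first hit; Django's `.exists()` emits a query of roughly this shape, so the per-call gap the question measures comes mostly from queryset construction and Python-level overhead rather than the SQL itself. A sketch with SQLite and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_catalog (id INTEGER PRIMARY KEY, apple_id INTEGER);
    CREATE INDEX idx_apple ON main_catalog (apple_id);
    INSERT INTO main_catalog (apple_id) VALUES (1), (2), (3);
""")

def exists(apple_id):
    # SELECT 1 ... LIMIT 1: no row data is materialized, the index
    # lookup stops at the first match.
    cur = conn.execute(
        "SELECT 1 FROM main_catalog WHERE apple_id = ? LIMIT 1",
        (apple_id,))
    return cur.fetchone() is not None

print(exists(2))   # True
print(exists(99))  # False
```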

SQL Query Previous Row Optimisation

Submitted by 主宰稳场 on 2019-11-28 01:36:13
Here is my table structure:

    MyTable
    -----------
    ObjectID int (Identity), -- Primary Key
    FileName varchar(10),
    CreatedDate datetime
    ...........

I need to get the time taken to create a record in a file, i.e. the time elapsed between the previous record in the same file and the current record of the same file. For example, if the records are:

    ObjectID  FileName  CreatedDate (just showing the time part here)
    --------  --------  -----------
    1         ABC       10:23
    2         ABC       10:25
    3         DEF       10:26
    4         ABC       10:30
    5         DEF       10:31
    6         DEF       10:35

The required output is:

    ObjectID  FileName  CreatedDate  PrevRowCreatedDate
    -----
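The standard tool for "previous row within a group" is the LAG window function, partitioned by FileName and ordered by CreatedDate. A sketch reproducing the question's sample data in SQLite (requires SQLite 3.25+ for window functions; on SQL Server the same LAG expression applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MyTable (
        ObjectID INTEGER PRIMARY KEY,
        FileName TEXT,
        CreatedDate TEXT
    );
    INSERT INTO MyTable VALUES
        (1, 'ABC', '10:23'), (2, 'ABC', '10:25'), (3, 'DEF', '10:26'),
        (4, 'ABC', '10:30'), (5, 'DEF', '10:31'), (6, 'DEF', '10:35');
""")

rows = conn.execute("""
    SELECT ObjectID, FileName, CreatedDate,
           LAG(CreatedDate) OVER (
               PARTITION BY FileName ORDER BY CreatedDate
           ) AS PrevRowCreatedDate
    FROM MyTable
    ORDER BY ObjectID
""").fetchall()

for row in rows:
    print(row)  # first row of each file has PrevRowCreatedDate = NULL
```

Subtracting PrevRowCreatedDate from CreatedDate (with the database's datetime arithmetic) then gives the elapsed time the question asks for.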

Why are UNION queries so slow in MySQL?

Submitted by 走远了吗. on 2019-11-27 23:57:09
Question: When I optimize my two single queries to run in less than 0.02 seconds and then UNION them, the resulting query takes over 1 second to run. Also, a UNION ALL takes longer than a UNION DISTINCT. I would assume allowing duplicates would make the query run faster, not slower. Am I really just better off running the 2 queries separately
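The semantic difference matters here: UNION (i.e. UNION DISTINCT) must deduplicate the combined result, which forces the engine to sort or hash every row, while UNION ALL just concatenates. That is why UNION ALL is normally expected to be the cheaper of the two, making the question's observation surprising. A sketch of the difference in SQLite with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
    INSERT INTO a VALUES (1), (2), (2);
    INSERT INTO b VALUES (2), (3);
""")

# UNION pays for a distinct step over all combined rows.
union = conn.execute(
    "SELECT x FROM a UNION SELECT x FROM b").fetchall()
# UNION ALL streams both result sets through unchanged.
union_all = conn.execute(
    "SELECT x FROM a UNION ALL SELECT x FROM b").fetchall()

print(union)      # duplicates removed
print(union_all)  # all five rows kept
```

A common cause of the slowdown the question describes is that the optimizer handles each branch well in isolation but materializes the combined result before applying outer ORDER BY/LIMIT clauses.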

For autoincrement fields: MAX(ID) vs TOP 1 ID ORDER BY ID DESC

Submitted by 房东的猫 on 2019-11-27 23:34:11
I want to find the highest auto-incremented value of a field. (It's not being fetched right after an insert, where I could use SCOPE_IDENTITY(), etc.) Which of these two queries runs faster, or gives better performance? Id is the primary key and auto-increment field for Table1, and this is SQL Server 2005.

    SELECT MAX(Id) FROM Table1
    SELECT TOP 1 Id FROM Table1 ORDER BY Id DESC

[Edit] Yes, in this case Id is the field on which I have defined the clustered index. What if the index is Id DESC? And yes, it would be nice to know how the performance would be affected if 1. Id is a clustered index +
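Both forms typically resolve to the same thing: one seek to the end of the primary-key index. A sketch in SQLite (TOP 1 is SQL Server syntax; the equivalent elsewhere is LIMIT 1), with a few invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (Id INTEGER PRIMARY KEY AUTOINCREMENT);
    INSERT INTO Table1 DEFAULT VALUES;
    INSERT INTO Table1 DEFAULT VALUES;
    INSERT INTO Table1 DEFAULT VALUES;
""")

# MAX on an indexed column is optimized to a single index probe.
max_id = conn.execute("SELECT MAX(Id) FROM Table1").fetchone()[0]
# ORDER BY Id DESC LIMIT 1 walks the same index from its far end.
top_id = conn.execute(
    "SELECT Id FROM Table1 ORDER BY Id DESC LIMIT 1").fetchone()[0]

print(max_id, top_id)  # identical results via the primary-key index
```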

MySQL COUNT performance

Submitted by 谁都会走 on 2019-11-27 23:20:25
    select count(*) from mytable;
    select count(table_id) from mytable; -- table_id is the primary key

Both queries were running slowly on a table with 10 million rows. I am wondering why, since wouldn't it be easy for MySQL to keep a counter that gets updated on every insert, update and delete? And is there a way to improve this query? I used EXPLAIN, but it didn't help much. As cherouvim pointed out in the comments, it depends on the storage engine. MyISAM does keep a count of the table rows, and can keep it accurate since the only lock MyISAM supports is a table lock. InnoDB however supports transactions, and
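One detail worth separating from the performance question: COUNT(*) and COUNT(primary_key) always agree, because a primary key cannot be NULL, but COUNT over a nullable column skips NULLs. A sketch in SQLite with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (table_id INTEGER PRIMARY KEY, note TEXT);
    INSERT INTO mytable (note) VALUES ('a'), (NULL), ('c');
""")

count_star = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
count_pk = conn.execute("SELECT COUNT(table_id) FROM mytable").fetchone()[0]
# A nullable column counts only non-NULL values.
count_note = conn.execute("SELECT COUNT(note) FROM mytable").fetchone()[0]

print(count_star, count_pk, count_note)  # 3 3 2
```

On InnoDB the slowness comes from MVCC: each transaction may see a different row count, so the engine has to examine index entries rather than read a stored counter.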

Optimizing MySQL LIKE '%string%' queries in InnoDB

Submitted by 房东的猫 on 2019-11-27 22:06:23
Having this table:

    CREATE TABLE `example` (
      `id` int(11) unsigned NOT NULL auto_increment,
      `keywords` varchar(200) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB;

We would like to optimize the following query:

    SELECT id FROM example WHERE keywords LIKE '%whatever%'

The table is InnoDB (so no FULLTEXT for now); which would be the best index to use in order to optimize such a query? We've tried a simple:

    ALTER TABLE `example` ADD INDEX `idxSearch` (`keywords`);

But an EXPLAIN shows that it needs to scan the whole table. If our queries were LIKE 'whatever%' instead, this index performs well, but
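The underlying limitation is the same in any B-tree index: a leading `%` means there is no prefix to seek on, so the index can at best be scanned in full. A sketch in SQLite (not MySQL; the `PRAGMA` is SQLite-specific and needed so case-sensitive LIKE can use the index), reusing the question's table and index names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE example (
        id INTEGER PRIMARY KEY,
        keywords TEXT NOT NULL
    );
    CREATE INDEX idxSearch ON example (keywords);
    PRAGMA case_sensitive_like = ON;
""")

def plan(sql):
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Leading wildcard: every index entry must be examined (a scan).
infix = plan("SELECT id FROM example WHERE keywords LIKE '%whatever%'")
# Prefix match: the planner converts it to a B-tree range search.
prefix = plan("SELECT id FROM example WHERE keywords LIKE 'whatever%'")

print(infix)
print(prefix)
```

For genuine substring search on InnoDB the usual options are FULLTEXT indexes (supported on InnoDB since MySQL 5.6) or an external search engine.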

Optimizing Execution Plans for Parameterized T-SQL Queries Containing Window Functions

Submitted by 吃可爱长大的小学妹 on 2019-11-27 21:15:51
EDIT: I've updated the example code and provided complete table and view implementations for reference, but the essential question remains unchanged. I have a fairly complex view in a database that I am attempting to query. When I retrieve a set of rows from the view by hard-coding the WHERE clause to specific foreign key values, the view executes very quickly with an optimal execution plan (indexes are used properly, etc.):

    SELECT * FROM dbo.ViewOnBaseTable WHERE ForeignKeyCol = 20

However, when I attempt to add parameters to the query, all of a sudden my execution plan falls apart.