query-optimization

Can MySQL use index in a RANGE QUERY with ORDER BY?

我的梦境 submitted on 2019-11-30 03:26:16
Question: I have a MySQL table: CREATE TABLE mytable ( id INT NOT NULL AUTO_INCREMENT, other_id INT NOT NULL, expiration_datetime DATETIME, score INT, PRIMARY KEY (id) ). I need to run a query of the form: SELECT * FROM mytable WHERE other_id=1 AND expiration_datetime > NOW() ORDER BY score LIMIT 10. If I add this index to mytable: CREATE INDEX order_by_index ON mytable (other_id, expiration_datetime, score); would MySQL be able to use the entire order_by_index in the query above? It seems like it
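For MySQL B-tree composite indexes the usual rule is: the index can serve the equality on other_id and then the range on expiration_datetime, but index-provided ordering stops at the first range column, so ORDER BY score still requires a sort. A minimal sketch of the query shape, using Python's sqlite3 as a stand-in for MySQL (optimizer behavior differs between engines; table and index names are taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mytable (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    other_id INTEGER NOT NULL,
    expiration_datetime TEXT,
    score INTEGER)""")
# Composite index as in the question: equality column, range column, sort column.
conn.execute(
    "CREATE INDEX order_by_index ON mytable (other_id, expiration_datetime, score)")
rows = [(1, "2030-01-01 00:00:00", s) for s in (50, 10, 30)]
conn.executemany(
    "INSERT INTO mytable (other_id, expiration_datetime, score) VALUES (?,?,?)",
    rows)
# The index can be used for other_id=? and the expiration_datetime range,
# but the ORDER BY score still needs a separate sort step in MySQL.
result = conn.execute("""SELECT score FROM mytable
    WHERE other_id = 1 AND expiration_datetime > datetime('now')
    ORDER BY score LIMIT 10""").fetchall()
print(result)  # [(10,), (30,), (50,)]
```

A common workaround, if the range filter is selective enough, is an index of the form (other_id, score) plus a post-filter on expiration_datetime, trading an index-ordered scan for extra row lookups.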

T-SQL query performance puzzle: Why does using a variable make a difference?

倾然丶 夕夏残阳落幕 submitted on 2019-11-30 02:42:24
Question: I'm trying to optimize a complex SQL query and am getting wildly different results when I make seemingly inconsequential changes. For example, this takes 336 ms to run: Declare @InstanceID int set @InstanceID=1; With myResults as ( Select Row = Row_Number() Over (Order by sv.LastFirst), ContactID From DirectoryContactsByContact(1) sv Join ContainsTable(_s_Contacts, SearchText, 'john') fulltext on (fulltext.[Key]=ContactID) Where IsNull(sv.InstanceID,1) = @InstanceID and len(sv.LastFirst)>1 )

What is the optimal way to compare dates in Microsoft SQL server?

巧了我就是萌 submitted on 2019-11-30 01:31:38
I have a SQL datetime field in a very large table. It's indexed and needs to be queried. The problem is that SQL Server always stores the time component (even though it's always midnight), but the searches are by day rather than by time. declare @dateVar datetime = '2013-03-11'; select t.[DateColumn] from MyTable t where t.[DateColumn] = @dateVar; won't return anything, as t.[DateColumn] always includes a time component. My question is: what is the best way around this? There seem to be two main groups of options: create a second variable using dateadd and use a between ... and or >= ... and ... <=
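The usual sargable answer is the half-open range: [DateColumn] >= @day AND [DateColumn] < @day + 1 day, which matches every time-of-day on the target date while still letting the index seek. A sketch of that pattern using sqlite3 (dates stored as ISO-8601 text here; in T-SQL you would build the upper bound with DATEADD):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (DateColumn TEXT)")
conn.execute("CREATE INDEX ix_date ON MyTable (DateColumn)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [
    ("2013-03-11 00:00:00",),
    ("2013-03-11 14:30:00",),
    ("2013-03-12 00:00:00",),
])
# Half-open range: >= start of day, < start of next day.
# Catches every time component on 2013-03-11 and stays index-friendly,
# unlike wrapping DateColumn in a function.
day = "2013-03-11"
rows = conn.execute(
    "SELECT DateColumn FROM MyTable "
    "WHERE DateColumn >= ? AND DateColumn < date(?, '+1 day')",
    (day, day)).fetchall()
print(len(rows))  # 2
```

The key design point is that the column itself is left untouched; applying CAST or CONVERT to [DateColumn] in the WHERE clause would make the predicate non-sargable and force a scan.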

Use timestamp(or datetime) as part of primary key (or part of clustered index)

蓝咒 submitted on 2019-11-29 22:28:15
Question: I use the following query frequently: SELECT * FROM table WHERE Timestamp > [SomeTime] AND Timestamp < [SomeOtherTime] and publish = 1 and type = 2 order by Timestamp. I would like to optimize this query, and I am thinking about making timestamp part of the primary key for the clustered index. I think that if timestamp is part of the primary key, data inserted into the table is written to disk sequentially by the timestamp field. I also think this would improve my query a lot, but I'm not sure whether it would help. The table has 3-4
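Independent of the clustered-key question, this query shape is well served by a composite secondary index with the equality columns first and the range/order column last, so one index satisfies both the WHERE filter and the ORDER BY. A sketch with sqlite3 (column names from the question; the index name is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (Timestamp TEXT, publish INTEGER, type INTEGER, payload TEXT)")
# Equality columns first, then the range/order column: the index entries for
# (publish=1, type=2) are already sorted by Timestamp, so no extra sort is needed.
conn.execute("CREATE INDEX ix_pub_type_ts ON t (publish, type, Timestamp)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    ("2019-01-03", 1, 2, "c"),
    ("2019-01-01", 1, 2, "a"),
    ("2019-01-02", 0, 2, "b"),
])
rows = conn.execute("""SELECT Timestamp, payload FROM t
    WHERE publish = 1 AND type = 2
      AND Timestamp > '2018-12-31' AND Timestamp < '2019-02-01'
    ORDER BY Timestamp""").fetchall()
print([r[0] for r in rows])  # ['2019-01-01', '2019-01-03']
```

Putting Timestamp into the clustered primary key does keep inserts roughly append-order on disk, but it also widens every secondary index; the composite index above often gets most of the benefit without that cost.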

How to find out why the status of a spid is suspended? What resources the spid is waiting for?

笑着哭i submitted on 2019-11-29 21:03:32
I run EXEC sp_who2 78 and I get the following results: How can I find out why its status is suspended? This process is a heavy INSERT based on an expensive query: a big SELECT that gets data from several tables and writes some 3-4 million rows to a different table. There are no locks/blocks. The wait type it is linked to is CXPACKET, which I can understand because there are nine entries for SPID 78, as you can see in the picture below. What concerns me, and what I really would like to know, is why thread number 1 of SPID 78 is suspended. I understand that when the status of a SPID is suspended it means the process

Rails 3 Database Indexes and other Optimization

↘锁芯ラ submitted on 2019-11-29 20:29:24
I have been building Rails apps for a while now, but unfortunately for me, none of my apps have had a large amount of data or traffic. But now I have one that is gaining steam, so I am diving head first into scaling and optimizing my app. It seems the first and easiest step is database indexes. I've got a good, huge list of indexes that should cover pretty much all of my queries, but when I added them to my database via migrations, it only took a few seconds to add them. For some reason I thought they would have to go through all of my entries (of which there are thousands)

Why is MAX() 100 times slower than ORDER BY … LIMIT 1?

匆匆过客 submitted on 2019-11-29 19:56:42
Question: I have a table foo with (among 20 others) columns bar, baz and quux, with indexes on baz and quux. The table has ~500k rows. Why do the following two queries differ so much in speed? Query A takes 0.3s, while query B takes 28s. Query A: select baz from foo where bar = :bar and quux = (select quux from foo where bar = :bar order by quux desc limit 1) Explain: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY foo ref quuxIdx quuxIdx 9 const 2 "Using where" 2 SUBQUERY
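The two forms are logically equivalent: ORDER BY quux DESC LIMIT 1 and MAX(quux) return the same value, and with a suitable index either can be answered by a single seek to the end of the index. The usual cause of a 100x gap in MySQL is that one formulation lets the optimizer use the index and the other forces it to scan every matching row. A small equivalence sketch with sqlite3 (the composite index on (bar, quux) is an assumption; the question only mentions single-column indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar INTEGER, baz INTEGER, quux INTEGER)")
# A composite index lets "greatest quux for this bar" be a single index seek
# for both query forms; without it, MAX() may have to touch every row.
conn.execute("CREATE INDEX quuxIdx ON foo (bar, quux)")
conn.executemany("INSERT INTO foo VALUES (?,?,?)",
                 [(1, i, i * 10) for i in range(100)])
a = conn.execute("SELECT quux FROM foo WHERE bar = 1 "
                 "ORDER BY quux DESC LIMIT 1").fetchone()[0]
b = conn.execute("SELECT MAX(quux) FROM foo WHERE bar = 1").fetchone()[0]
print(a, b)  # 990 990
```

When the timings diverge this sharply, comparing the EXPLAIN output of both forms (index used, rows examined) is the quickest way to see which one degenerated into a scan.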

Mysql improve SELECT speed

蹲街弑〆低调 submitted on 2019-11-29 18:28:42
Question: I'm currently trying to improve the speed of SELECTs for a MySQL table and would appreciate any suggestions on ways to improve it. We have over 300 million records in the table, and the table has the structure tag, date, value. The primary key is a combined key of tag and date. The table contains information for about 600 unique tags, most containing an average of about 400,000 rows, but ranging from 2,000 to over 11 million rows. The queries run against the table are: SELECT date, value FROM
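With a composite primary key of (tag, date), a query that fixes the tag and ranges over dates is already the best case: it reads one contiguous slice of the clustered key. A sketch of that access pattern with sqlite3, where a WITHOUT ROWID table loosely mimics an InnoDB table clustered on its composite key (table and tag names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# WITHOUT ROWID clusters rows by the (tag, date) primary key, so all rows
# for one tag are stored together and a date range is one contiguous scan.
conn.execute("""CREATE TABLE readings (
    tag TEXT, date TEXT, value REAL,
    PRIMARY KEY (tag, date)) WITHOUT ROWID""")
conn.executemany("INSERT INTO readings VALUES (?,?,?)", [
    ("pump1", f"2019-01-{d:02d}", float(d)) for d in range(1, 31)])
rows = conn.execute("""SELECT date, value FROM readings
    WHERE tag = ? AND date BETWEEN ? AND ?""",
    ("pump1", "2019-01-10", "2019-01-19")).fetchall()
print(len(rows))  # 10
```

If queries instead range over tags for a fixed date, the key order works against them; a secondary index on (date, tag) or partitioning by date would be the corresponding fix.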

MySQL MyISAM table performance problem revisited

戏子无情 submitted on 2019-11-29 17:41:18
This question is related to this one. I have a page table with the following structure: CREATE TABLE mydatabase.page ( pageid int(10) unsigned NOT NULL auto_increment, sourceid int(10) unsigned default NULL, number int(10) unsigned default NULL, data mediumtext, processed int(10) unsigned default NULL, PRIMARY KEY (pageid), KEY sourceid (sourceid) ) ENGINE=MyISAM AUTO_INCREMENT=9768 DEFAULT CHARSET=latin1; The data column contains text of around 80KB-200KB per record. The total size of the data stored in the data column is around 1.5GB. Executing this query takes 0.08 seconds:
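A common remedy when a wide mediumtext column dominates a table is vertical partitioning: move the blob into a side table keyed by pageid, so scans and updates of the small metadata columns never drag 80KB-200KB of text through the cache. A hypothetical sketch of that split with sqlite3 (the page_data table name and the 1KB filler are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hot, narrow metadata stays in page; the large text moves to a side table.
conn.execute("""CREATE TABLE page (
    pageid INTEGER PRIMARY KEY, sourceid INTEGER,
    number INTEGER, processed INTEGER)""")
conn.execute("CREATE TABLE page_data (pageid INTEGER PRIMARY KEY, data TEXT)")
for i in range(1, 101):
    conn.execute("INSERT INTO page VALUES (?,?,?,?)", (i, i % 5, i, 0))
    conn.execute("INSERT INTO page_data VALUES (?,?)", (i, "x" * 1000))
# Filtering by sourceid now scans only narrow rows; the text is fetched
# by primary key only for the pages that actually need it.
ids = [r[0] for r in conn.execute("SELECT pageid FROM page WHERE sourceid = 2")]
one = conn.execute("SELECT data FROM page_data WHERE pageid = ?",
                   (ids[0],)).fetchone()[0]
print(len(ids), len(one))  # 20 1000
```

For MyISAM specifically, variable-length text also fragments the data file over time, so the split tends to help both scan speed and cache hit rates.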

Can I optimize a SELECT DISTINCT x FROM hugeTable query by creating an index on column x?

怎甘沉沦 submitted on 2019-11-29 16:24:26
Question: I have a huge table with a much smaller number (by orders of magnitude) of distinct values in some column x. I need to do a query like SELECT DISTINCT x FROM hugeTable, and I want it to be relatively fast. I did something like CREATE INDEX hugeTable_by_x ON hugeTable(x), but for some reason, even though the output is small, the query execution is not as fast. The query plan shows that 97% of the time is spent on an Index Scan of hugeTable_by_x, with an estimated number of rows equal to
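The index does help, but not as much as hoped: SQL Server has no index skip-scan, so SELECT DISTINCT x still reads every index entry, and the scan time grows with the table, not with the number of distinct values. The index at least makes it a narrow covering scan instead of a full table scan. A sketch of that covering behavior with sqlite3 (which has the same limitation for this query shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hugeTable (x INTEGER, payload TEXT)")
conn.executemany("INSERT INTO hugeTable VALUES (?,?)",
                 [(i % 4, "row") for i in range(10_000)])
conn.execute("CREATE INDEX hugeTable_by_x ON hugeTable (x)")
# DISTINCT over the indexed column: a narrow covering-index scan of 10,000
# entries rather than a scan of the wide base rows.
distinct = [r[0] for r in
            conn.execute("SELECT DISTINCT x FROM hugeTable ORDER BY x")]
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT DISTINCT x FROM hugeTable").fetchall()
print(distinct)        # [0, 1, 2, 3]
print(plan[0][-1])     # plan detail; the index lets this be a covering scan
```

When the distinct values are few and known to be clustered, the usual workaround is a seek-per-value loop (a recursive CTE in SQL Server) that jumps from one distinct value to the next, doing work proportional to the number of distinct values rather than the number of rows.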