query-optimization

Optimizing MySQL LIKE '%string%' queries in InnoDB

Posted by 你说的曾经没有我的故事 on 2019-11-26 20:53:50
Question: Given this table:

CREATE TABLE `example` (
  `id` int(11) unsigned NOT NULL auto_increment,
  `keywords` varchar(200) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

we would like to optimize the following query:

SELECT id FROM example WHERE keywords LIKE '%whatever%'

The table is InnoDB (so no FULLTEXT for now). Which would be the best index to use in order to optimize such a query? We've tried a simple:

ALTER TABLE `example` ADD INDEX `idxSearch` (`keywords`);

but an EXPLAIN shows that …
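A B-tree index on `keywords` cannot seek into a leading-wildcard pattern like '%whatever%'; at best the server scans the whole index. One sketch of an alternative, assuming MySQL 5.6 or later (where InnoDB gained FULLTEXT support, which the question's "no FULLTEXT for now" predates):

```sql
-- Sketch, assuming MySQL 5.6+: a FULLTEXT index turns the pattern scan
-- into an inverted-index lookup. Index name `ftKeywords` is hypothetical.
ALTER TABLE `example` ADD FULLTEXT INDEX `ftKeywords` (`keywords`);

SELECT id
FROM example
WHERE MATCH(keywords) AGAINST('whatever' IN BOOLEAN MODE);
```

Note the semantics differ: MATCH ... AGAINST matches whole words (subject to the minimum token length setting), whereas LIKE '%whatever%' matches arbitrary substrings, so this is only a drop-in replacement when the search terms are words.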

Optimizing Execution Plans for Parameterized T-SQL Queries Containing Window Functions

Posted by 一世执手 on 2019-11-26 20:35:30
Question: EDIT: I've updated the example code and provided complete table and view implementations for reference, but the essential question remains unchanged.

I have a fairly complex view in a database that I am attempting to query. When I retrieve a set of rows from the view by hard-coding the WHERE clause to specific foreign-key values, the view executes very quickly with an optimal execution plan (indexes are used properly, etc.):

SELECT * FROM dbo.ViewOnBaseTable WHERE ForeignKeyCol = 20 …
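The symptom described (fast with a hard-coded literal, slow when the value arrives as a parameter) is the classic parameter-sniffing / plan-reuse problem: the cached plan is built for one parameter value and reused for all. A common mitigation, sketched here against the question's view and assuming a parameter named @ForeignKey:

```sql
-- Sketch: OPTION (RECOMPILE) makes SQL Server build a fresh plan per
-- execution, so the optimizer sees the actual parameter value - at the
-- cost of a compilation on every call.
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = @ForeignKey
OPTION (RECOMPILE);
```

Whether this is appropriate depends on call frequency; for hot paths, alternatives such as OPTIMIZE FOR or splitting the query are worth weighing against the recompile cost.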

60 million entries, selecting entries from a certain month. How to optimize the database?

Posted by 若如初见. on 2019-11-26 19:08:33
I have a database with 60 million entries. Every entry contains: ID, DataSourceID, some data, and a DateTime. I need to select entries from a certain month; each month contains approximately 2 million entries.

select * from Entries where time between "2010-04-01 00:00:00" and "2010-05-01 00:00:00"

(the query takes approximately 1.5 minutes)

I'd also like to select data for a certain month from a given DataSourceID (takes approximately 20 seconds). There are about 50-100 different DataSourceIDs. Is there a way to make this faster? What are my options? How do I optimize this database/query?

EDIT: There's approx. …
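Both query shapes in the question filter on `time`, and one additionally filters on `DataSourceID`, so a pair of indexes matching those shapes is the usual first step. A minimal sketch, using the column names as given in the question (adjust to the real schema):

```sql
-- Serves "month for a given DataSourceID": equality column first,
-- then the range column, so the index can seek directly to the slice.
CREATE INDEX idx_source_time ON Entries (DataSourceID, time);

-- Serves "all entries in a month" regardless of source.
CREATE INDEX idx_time ON Entries (time);

-- A half-open range avoids including the first instant of the next
-- month twice, which BETWEEN's inclusive upper bound would do:
SELECT * FROM Entries
WHERE time >= '2010-04-01 00:00:00'
  AND time <  '2010-05-01 00:00:00';
```

With ~2 million rows per month, `SELECT *` still moves a lot of data; selecting only needed columns (or partitioning by month, if the engine supports it) is the next lever.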

Does MySQL eliminate common subexpressions between the SELECT and HAVING/GROUP BY clauses?

Posted by 大兔子大兔子 on 2019-11-26 18:20:41
Question: I often see people answer MySQL questions with queries like this:

SELECT DAY(date), other columns FROM table GROUP BY DAY(date);
SELECT somecolumn, COUNT(*) FROM table HAVING COUNT(*) > 1;

I always like to give the column an alias and refer to that in the GROUP BY or HAVING clause, e.g.:

SELECT DAY(date) AS day, other columns FROM table GROUP BY day;
SELECT somecolumn, COUNT(*) AS c FROM table HAVING c > 1;

Is MySQL smart enough to notice that the expressions in the latter clauses are the same …
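For context, referring to a SELECT-list alias in GROUP BY or HAVING is a MySQL extension; standard SQL only guarantees aliases are visible in ORDER BY. A minimal illustration of the aliased form (table and column names hypothetical):

```sql
-- MySQL extension: the aliases `day` and `c` defined in the SELECT list
-- are legal in GROUP BY and HAVING, which standard SQL does not allow.
SELECT DAY(order_date) AS day, COUNT(*) AS c
FROM orders
GROUP BY day
HAVING c > 1;
```

Whether the server actually evaluates `DAY(order_date)` once or twice is a separate question from whether the alias is accepted; inspecting EXPLAIN FORMAT=JSON output or profiling is the way to check on a given version.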

Does the query plan optimizer work well with joined/filtered table-valued functions?

Posted by 谁都会走 on 2019-11-26 17:51:44
Question: In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregation on subsets of data from a large table (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering whether the query plan optimizer works well with them in every condition, or whether I'm better off unnesting such computations in my larger queries. Does the query plan optimizer unnest table-valued functions when it makes sense? If it doesn't, what do you …
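The answer hinges on which kind of TVF is used: SQL Server expands an *inline* TVF (a single RETURN (SELECT ...)) into the calling query like a view, so joins and filters optimize across the boundary, while a *multi-statement* TVF is planned as an opaque table. A sketch of the inline form, with hypothetical names:

```sql
-- Inline table-valued function: the optimizer inlines this into the
-- outer query, so outer predicates and join order can push into it.
CREATE FUNCTION dbo.SalesInRange (@from DATETIME, @to DATETIME)
RETURNS TABLE
AS
RETURN (
    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.Sales
    WHERE SaleDate >= @from AND SaleDate < @to
    GROUP BY CustomerID
);
```

If the function instead declares RETURNS @result TABLE (...) with a BEGIN...END body, the 2005-era optimizer cannot see inside it and assumes a tiny row count, which is the usual cause of bad plans when such functions are joined.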

Why would using a temp table be faster than a nested query?

Posted by 回眸只為那壹抹淺笑 on 2019-11-26 16:56:21
Question: We are trying to optimize some of our queries. One query does the following:

SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
INTO [#Gadget]
FROM task t

SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
FROM [#Gadget]
ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

DROP TABLE [#Gadget]

(I have removed the complex subquery. I don't think it's relevant, other than to explain why this query has been …

Oracle <>, !=, ^= operators

Posted by a 夏天 on 2019-11-26 16:37:00
Question: I want to know the difference between these operators, mainly their performance difference. I have had a look at Difference between <> and != in SQL, which has no performance-related information. Then I found a claim on dba-oracle.com suggesting that from 10.2 onwards the performance can be quite different. I wonder why? Does != always perform better than <>? NOTE: Our tests, and performance on the live system, show that changing from <> to != has a big impact on the time the queries return in. I am here …

Subqueries with EXISTS vs IN - MySQL

Posted by 浪子不回头ぞ on 2019-11-26 16:22:50
The two queries below use subqueries. Both return the same results and both work fine for me, but the problem is that the Method 1 query takes about 10 seconds to execute while the Method 2 query takes under 1 second. I was able to convert the Method 1 query to Method 2, but I don't understand what's happening in the query, and I have been trying to figure it out myself. I would really like to learn what the difference between the two queries is and how the performance gain happens. What's the logic behind it? I'm new to these advanced techniques, so I hope someone will help me out here. Given that I read the docs, which does not …
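The excerpt cuts off before the two queries themselves, but the usual shape of this IN-versus-EXISTS speedup can be illustrated generically (table and column names below are hypothetical, not from the question):

```sql
-- Method 1 shape: IN with a subquery. Older MySQL versions often
-- rewrote this internally as a dependent subquery re-run per outer row.
SELECT o.id FROM orders o
WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.active = 1);

-- Method 2 shape: correlated EXISTS. This executes as a cheap indexed
-- probe per outer row and can stop at the first match.
SELECT o.id FROM orders o
WHERE EXISTS (SELECT 1 FROM customers c
              WHERE c.id = o.customer_id AND c.active = 1);
```

On MySQL 5.6+ subquery materialization and semi-join optimizations largely close this gap, so the tenfold difference described is most characteristic of 5.5 and earlier.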

MySQL “IN” queries terribly slow with subquery but fast with explicit values

Posted by 吃可爱长大的小学妹 on 2019-11-26 16:15:40
Question: I have a MySQL query (Ubuntu 10.04, InnoDB, Core i7, 16 GB RAM, SSD drives, MySQL parameters optimized):

SELECT COUNT(DISTINCT subscriberid) FROM em_link_data
WHERE linkid in (SELECT l.id FROM em_link l WHERE l.campaignid = '2900' AND l.link != 'open')

The table em_link_data has about 7 million rows; em_link has a few thousand. This query takes about 18 seconds to complete. However, if I substitute the results of the subquery and do this:

SELECT COUNT(DISTINCT subscriberid) FROM em_link_data WHERE …
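On MySQL of that era (pre-5.6), an uncorrelated IN subquery was often executed as a dependent subquery re-evaluated for every row of the 7-million-row outer table, which explains the gap versus explicit values. The standard rewrite, sketched with the question's own table names, is to express it as a join:

```sql
-- Rewrite sketch: join instead of IN, so em_link is read once and
-- em_link_data is probed via an index on linkid (assumed to exist).
-- COUNT(DISTINCT ...) keeps the result correct even if the join
-- produces duplicate subscriber rows.
SELECT COUNT(DISTINCT d.subscriberid)
FROM em_link_data d
JOIN em_link l ON l.id = d.linkid
WHERE l.campaignid = '2900'
  AND l.link != 'open';
```

MySQL 5.6's semi-join and subquery-materialization optimizations perform this kind of transformation automatically, so the manual rewrite matters most on older servers like the one described.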

How to optimise MySQL queries based on EXPLAIN plan

Posted by 不想你离开。 on 2019-11-26 16:14:46
Question: Looking at a query's EXPLAIN plan, how does one determine where optimisations can best be made? I appreciate that one of the first things to check is whether good indexes are being used, but beyond that I'm a little stumped. Through trial and error in the past, I have sometimes found that the order in which joins are conducted can be a good source of improvement, but how can one determine that from looking at the execution plan? Whilst I would very much like to gain a good general …
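As a starting point for reading MySQL's EXPLAIN output, the columns worth scanning first can be illustrated on a hypothetical join (table names below are made up for the example):

```sql
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= '2019-01-01';

-- Things to look for in the resulting rows:
--   type = ALL           -> full table scan: no usable index for this table
--   key  = NULL          -> no index was chosen at all
--   rows                 -> estimated rows examined per table; the product
--                           across the plan's rows approximates total work,
--                           which is why join order matters
--   Extra: Using filesort / Using temporary
--                        -> expensive sorting or intermediate tables
```

EXPLAIN lists tables in the order the optimizer joins them (top row is the driving table), so a plan whose first row has a huge `rows` estimate is a hint that a different driving table, enabled by a better index, could shrink the work.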