query-optimization

SQL Query Previous Row Optimisation

China☆狼群 posted on 2019-11-27 04:48:23
Question: Here is my table structure:

```
MyTable
-----------
ObjectID int (Identity),  -- Primary Key
FileName varchar(10),
CreatedDate datetime
...........
...........
...........
```

I need to get the time taken to create a record in a file, i.e. the time elapsed between the previous record in the same file and the current record of the same file. For example, if the records are:

```
ObjectID  FileName  CreatedDate (just showing the time part here)
--------  --------  -----------
1         ABC       10:23
2         ABC       10:25
3         DEF       10:26
4         ABC       10:30
5
```
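A sketch of one common approach, assuming SQL Server 2012 or later (the Identity/varchar/datetime schema above suggests T-SQL): the LAG() window function reads the previous row within the same file, ordered by creation time.

```sql
SELECT ObjectID,
       FileName,
       CreatedDate,
       DATEDIFF(SECOND,
                LAG(CreatedDate) OVER (PARTITION BY FileName
                                       ORDER BY CreatedDate),
                CreatedDate) AS SecondsElapsed  -- NULL for the first record of each file
FROM MyTable;
```

On SQL Server 2008 and earlier, the same result is usually produced with ROW_NUMBER() in a CTE and a self-join of row n onto row n - 1.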

Subquery vs inner join in SQL Server

你。 posted on 2019-11-27 04:28:53
I have the following queries. The first one uses an inner join:

```sql
SELECT item_ID, item_Code, item_Name
FROM [Pharmacy].[tblitemHdr] I
INNER JOIN EMR.tblFavourites F ON I.item_ID = F.itemID
WHERE F.doctorID = @doctorId AND F.favType = 'I'
```

The second one uses a subquery:

```sql
SELECT item_ID, item_Code, item_Name
FROM [Pharmacy].[tblitemHdr]
WHERE item_ID IN (SELECT itemID
                  FROM EMR.tblFavourites
                  WHERE doctorID = @doctorId AND favType = 'I')
```

Here the item table [Pharmacy].[tblitemHdr] contains 15 columns and 2000 records, and EMR.tblFavourites contains 5 columns and around 100 records. In this scenario, which query
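For completeness, a third equivalent form is EXISTS; a minimal sketch using the same tables and parameter as above:

```sql
SELECT I.item_ID, I.item_Code, I.item_Name
FROM [Pharmacy].[tblitemHdr] I
WHERE EXISTS (SELECT 1
              FROM EMR.tblFavourites F
              WHERE F.itemID = I.item_ID      -- correlated on the item key
                AND F.doctorID = @doctorId
                AND F.favType = 'I');
```

At 2000 and 100 rows, SQL Server's optimizer will typically compile the join, IN, and EXISTS forms down to the same execution plan; comparing the actual execution plans (or SET STATISTICS IO ON output) for the specific case is the reliable way to decide.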

MySQL: fix "Using where"

别说谁变了你拦得住时间么 posted on 2019-11-27 03:13:44
Question: My SQL query:

```sql
SELECT * FROM updates_cats WHERE uid = 118697835834 ORDER BY created_date ASC
```

Current indexes: index1(uid, created_date)

EXPLAIN EXTENDED result:

```
id  select_type  table         type  possible_keys  key     key_len  ref    rows  filtered  Extra
1   SIMPLE       updates_cats  ref   index1         index1  8        const  2     100.00    Using where
```

How can I fix the Extra field where it has "Using where", so it can use the indexes instead?

EDIT: SHOW CREATE TABLE:

```sql
CREATE TABLE `updates_cats` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `u_cat_id` bigint(20) NOT NULL DEFAULT '0',
  `uid` bigint(20) NOT NULL,
  `u_cat
```
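Note that the plan already uses the index (key = index1); "Using where" only means the server applies the WHERE filter to rows the storage engine returns, and is usually harmless. A sketch of one way to tighten it further, assuming only the indexed columns are actually needed (an assumption, since the original query selects *):

```sql
-- Reading only columns contained in index1 lets MySQL answer the
-- query from the index alone, reported as "Using index" in Extra,
-- with no row lookups:
EXPLAIN SELECT uid, created_date
FROM updates_cats
WHERE uid = 118697835834
ORDER BY created_date ASC;
```

Whether "Using where" also disappears varies by MySQL version; the presence of "Using index" is what signals the covering read.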

Fetching RAND() rows without ORDER BY RAND() in just one query

戏子无情 posted on 2019-11-27 03:01:08
Question: Using RAND() in MySQL to get a single random row out of a huge table is very slow:

```sql
SELECT quote FROM quotes ORDER BY RAND() LIMIT 1
```

Here is an article about this issue and why this is the case. Their solution is to use two queries:

```sql
SELECT COUNT(*) AS cnt FROM quotes
-- use the result to generate a number between 0 and COUNT(*)
SELECT quote FROM quotes LIMIT $generated_number, 1
```

I was wondering whether this would be possible in just one query. So my approach was:

```sql
SELECT * FROM quotes LIMIT ( ROUND
```
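MySQL does not accept an expression or subquery as a LIMIT argument in an ordinary statement, which is why that approach fails. A well-known single-statement workaround randomizes on the id instead; a sketch, assuming quotes has an AUTO_INCREMENT id column without large gaps (gaps bias the selection, which the two-query COUNT(*) method avoids):

```sql
SELECT q.quote
FROM quotes q
JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM quotes)) AS rand_id) r
  ON q.id >= r.rand_id            -- jump to a random point in the id range
ORDER BY q.id
LIMIT 1;
```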

PL/SQL Performance Tuning for LIKE '%…%' Wildcard Queries

依然范特西╮ posted on 2019-11-27 02:45:56
Question: We're using an Oracle 11g database. As you may or may not know, if you use a wildcard query with "%" in front of the string, the column index is not used and a full table scan happens. It looks like there isn't a definitive suggestion on how to improve this kind of query, but perhaps you could share some valuable information from your experience on how to optimize the following query:

```sql
SELECT *
FROM myTable
WHERE UPPER(CustomerName) LIKE '%ABC%'
   OR UPPER(IndemnifierOneName) LIKE '%ABC%'
```
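The option most often suggested for infix matching on 11g is an Oracle Text index, which avoids the full table scan at the cost of maintaining a separate text index; a sketch (the index name is hypothetical, and one index per searched column is needed):

```sql
CREATE INDEX customername_txt_ix ON myTable (CustomerName)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- CONTAINS is case-insensitive with the default lexer, so the UPPER()
-- wrapper (which itself defeats a plain B-tree index) is not needed:
SELECT *
FROM myTable
WHERE CONTAINS(CustomerName, '%ABC%') > 0;
```

One caveat: CONTEXT indexes are not maintained transactionally; they must be synchronized (e.g. via CTX_DDL.SYNC_INDEX) before recent DML becomes visible to CONTAINS.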

Why does direction of index matter in MongoDB?

血红的双手。 posted on 2019-11-27 02:39:20
To quote the docs:

"When creating an index, the number associated with a key specifies the direction of the index, so it should always be 1 (ascending) or -1 (descending). Direction doesn't matter for single key indexes or for random access retrieval but is important if you are doing sorts or range queries on compound indexes."

However, I see no reason why the direction of the index should matter on compound indexes. Can someone please provide a further explanation (or an example)?

Jared Kells: MongoDB concatenates the compound key in some way and uses it as the key in a BTree. When finding single

Performance difference: condition placed at INNER JOIN vs WHERE clause

為{幸葍}努か posted on 2019-11-27 02:38:38
Question: Say I have a table `order` as:

```
id | clientid | type | amount | itemid | date
---|----------|------|--------|--------|-----------
23 | 258      | B    | 150    | 14     | 2012-04-03
24 | 258      | S    | 69     | 14     | 2012-04-03
25 | 301      | S    | 10     | 20     | 2012-04-03
26 | 327      | B    | 54     | 156    | 2012-04-04
```

clientid is a foreign key back to the client table, itemid is a foreign key back to an item table, type is only B or S, and amount is an integer. And a table `processed` as:

```
id | orderid | processed | date
---|---------|-----------|----
```
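To make the comparison concrete, a sketch of the two placements (assuming a boolean-style processed flag, which the excerpt does not show, and MySQL-style backticks since order is a reserved word):

```sql
-- For an INNER JOIN the two forms are semantically identical, and
-- optimizers normally produce the same plan for both:
SELECT o.id
FROM `order` o
INNER JOIN processed p ON p.orderid = o.id AND p.processed = 1;

SELECT o.id
FROM `order` o
INNER JOIN processed p ON p.orderid = o.id
WHERE p.processed = 1;

-- With an OUTER join, placement changes the result: in ON, unmatched
-- orders are kept with NULLs for p.*; moving the condition to WHERE
-- filters those rows out, silently turning the LEFT JOIN back into
-- an INNER JOIN:
SELECT o.id, p.id AS processed_id
FROM `order` o
LEFT JOIN processed p ON p.orderid = o.id AND p.processed = 1;
```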

Postgres query optimization (forcing an index scan)

落爺英雄遲暮 posted on 2019-11-27 02:38:04
Question: Below is my query. I am trying to get it to use an index scan, but it will only seq scan. By the way, the metric_data table has 130 million rows, and the metrics table has about 2000 rows.

metric_data table columns:

```sql
metric_id integer,
t timestamp,
d double precision,
PRIMARY KEY (metric_id, t)
```

How can I get this query to use my PRIMARY KEY index?

```sql
SELECT S.metric, D.t, D.d
FROM metric_data D
INNER JOIN metrics S ON S.id = D.metric_id
WHERE S.NAME = ANY (ARRAY ['cpu', 'mem'])
  AND D.t BETWEEN
```
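A quick way to test whether the planner can use the primary-key index at all, before touching cost settings, is to disable sequential scans for a single transaction (a diagnostic sketch; the BETWEEN bounds are hypothetical, since the original values are cut off above):

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- planner avoids seq scans in this transaction only
EXPLAIN ANALYZE
SELECT S.metric, D.t, D.d
FROM metric_data D
INNER JOIN metrics S ON S.id = D.metric_id
WHERE S.name = ANY (ARRAY['cpu', 'mem'])
  AND D.t BETWEEN '2019-01-01' AND '2019-01-02';  -- hypothetical range
ROLLBACK;
```

If the forced index plan turns out genuinely cheaper, lowering random_page_cost (e.g. toward 1.1 on SSD-backed storage) is the usual permanent way to tilt the planner, rather than leaving scans disabled.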

Meaning of “Select tables optimized away” in MySQL Explain plan

北城以北 posted on 2019-11-27 02:29:41
Question: What is the meaning of "Select tables optimized away" in a MySQL EXPLAIN plan?

```sql
explain select count(comment_count) from wp_posts;
```

```
+----+-------------+---------------------------+-----------------------------+
| id | select_type | table,type,possible_keys, | Extra                       |
|    |             | key,key_len,ref,rows      |                             |
+----+-------------+---------------------------+-----------------------------+
| 1  | SIMPLE      | all NULLs                 | Select tables optimized away|
+----+-------------+---------------------------+--------------------
```
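This Extra value appears when the optimizer can answer the query from index or table metadata alone, so no tables need to be read at execution time. A sketch of the classic cases, assuming wp_posts is MyISAM (which stores an exact row count) and has an index on id:

```sql
-- MyISAM keeps the exact row count in table metadata:
EXPLAIN SELECT COUNT(*) FROM wp_posts;

-- MIN()/MAX() on an indexed column read one entry from the end of
-- the index during optimization, so no table access remains:
EXPLAIN SELECT MAX(id) FROM wp_posts;
```

Both typically report "Select tables optimized away". COUNT(comment_count) qualifies the same way when comment_count is declared NOT NULL, since it is then equivalent to COUNT(*).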

What's the fastest way to look up big tables for points within a radius in MySQL (latitude, longitude)?

跟風遠走 posted on 2019-11-27 02:13:10
Currently I have a few tables with 100k+ rows. I am trying to look up the data as follows:

```sql
SELECT *,
       SQRT(POW(69.1 * (latitude - '49.1044302'), 2) +
            POW(69.1 * ('-122.801094' - longitude) * COS(latitude / 57.3), 2)) AS distance
FROM stops
HAVING distance < 5
ORDER BY distance
LIMIT 100
```

But this method slows down under high load; some queries take 20+ seconds to complete. If anyone knows a better way to optimize this, that would be great.

Well, first of all, if you have a lot of geospatial data, you should be using MySQL's geospatial extensions rather than calculations like this. You can
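Short of moving to spatial types, the standard fix for exactly this query shape is to pre-filter with an indexed bounding box, so the distance formula only runs on nearby candidates; a sketch, assuming an index on (latitude, longitude) and the same 69.1-miles-per-degree approximation used above:

```sql
-- 5 miles in degrees: 1 degree of latitude is about 69.1 miles, and
-- longitude degrees shrink by cos(latitude). The WHERE clause is
-- sargable, so the index prunes most rows before the exact distance
-- is computed and re-checked in HAVING.
SELECT *,
       SQRT(POW(69.1 * (latitude - 49.1044302), 2) +
            POW(69.1 * (-122.801094 - longitude) * COS(latitude / 57.3), 2)) AS distance
FROM stops
WHERE latitude  BETWEEN 49.1044302 - (5 / 69.1)
                    AND 49.1044302 + (5 / 69.1)
  AND longitude BETWEEN -122.801094 - (5 / (69.1 * COS(RADIANS(49.1044302))))
                    AND -122.801094 + (5 / (69.1 * COS(RADIANS(49.1044302))))
HAVING distance < 5
ORDER BY distance
LIMIT 100;
```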