query-optimization

Removing join operations from a grouping query

Submitted by 送分小仙女 on 2019-12-24 04:18:28
Question: I have a table that looks like:

    usr_id    query_ts
    12345     2019/05/13 02:06
    123444    2019/05/15 04:06
    123444    2019/05/16 05:06
    12345     2019/05/16 02:06
    12345     2019/05/15 02:06

It contains a user ID and the time at which that user ran a query; each row represents that ID running one query at the given timestamp. I am trying to produce this:

    usr_id    day_1   day_2   …   day_30
    12345     31      13      …   15
    123444    23      41      …   14

I would like to show the number of queries run each day for the last 30 days for each ID, and if no query was
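A sketch of one common way to produce that shape with conditional aggregation, assuming MySQL syntax and that day_1 means one day ago, day_2 two days ago, and so on; the source table name is not given in the excerpt, so user_queries below is hypothetical:

    -- Each day_N column counts the rows whose query_ts falls exactly N days back.
    SELECT usr_id,
           SUM(DATEDIFF(CURDATE(), DATE(query_ts)) = 1)  AS day_1,
           SUM(DATEDIFF(CURDATE(), DATE(query_ts)) = 2)  AS day_2,
           -- ... repeat (or generate the column list in application code) up to ...
           SUM(DATEDIFF(CURDATE(), DATE(query_ts)) = 30) AS day_30
    FROM user_queries
    WHERE query_ts >= CURDATE() - INTERVAL 30 DAY
    GROUP BY usr_id;

Days with no queries come out as 0 automatically, but a user with no rows at all in the window disappears entirely; listing such users with zeros needs a join against a user list, which is presumably the join the title hopes to remove.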

Why does WordPress have separate 'usersmeta' and 'users' SQL tables? Why not combine them?

Submitted by 馋奶兔 on 2019-12-24 02:42:35
Question: Alongside the users table, WordPress has a usersmeta table with the following columns: meta_id, user_id, meta_key (e.g. first_name), meta_value (e.g. Tom). Each user has 20 rows in the usersmeta table, regardless of whether or not the rows have a filled-in meta_value. That said, would it not be more efficient to add the always-present meta rows to the users table? I'm guessing that the information in the users table is more frequently queried (e.g. user_id, username, pass), so it is more efficient
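For context, a sketch of what a meta lookup looks like against WordPress's key-value layout (the live tables conventionally carry the wp_ prefix, i.e. wp_users and wp_usermeta; the query below is illustrative, not taken from the question):

    -- Fetching a single meta field requires a join against the key-value table.
    SELECT u.ID, u.user_login, m.meta_value AS first_name
    FROM wp_users u
    JOIN wp_usermeta m
      ON m.user_id  = u.ID
     AND m.meta_key = 'first_name';
    -- If first_name were a column on wp_users, the same data would come from a
    -- single-table read; that trade-off (flexibility of the key-value schema vs.
    -- extra joins) is what the question is really asking about.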

Is it better to SELECT before JOINING?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-24 01:24:57
Question: I need to join 3 tables a, b, c, and I know that only one row from the leftmost table has to appear in the end result.

    SELECT *
    FROM a
    LEFT JOIN b ON a.id = b.id
    LEFT JOIN c ON c.id2 = b.id2
    WHERE a.id = 12;

I have come up with the following query because it seems more efficient, but both queries take the same time to execute. Is this because the first query is optimized? Should I bother to choose the more efficient (second) query or stick to the first one because it's more readable?
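The second query is not included in the excerpt; a "filter first, then join" variant of the same lookup might look like the sketch below, assuming that is what the asker meant by selecting before joining:

    -- Restrict table a to the single matching row before the joins run.
    SELECT *
    FROM (SELECT * FROM a WHERE id = 12) a
    LEFT JOIN b ON a.id  = b.id
    LEFT JOIN c ON c.id2 = b.id2;
    -- Most optimizers push the a.id = 12 predicate down into the scan of a on
    -- their own, which would explain why both forms take the same time.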

Why doesn't MySQL automatically optimise the BETWEEN query?

Submitted by 半世苍凉 on 2019-12-24 00:37:52
Question: I have two queries for the same output.

Slow query:

    SELECT *
    FROM account_range
    WHERE is_active = 1
      AND '8033576667466317' BETWEEN range_start AND range_end;

Execution time: ~800 ms.

Explain output (truncated in this excerpt; only the column header row is visible): id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra
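A sketch of the usual workaround for "find the stored range that contains a point" lookups, which a plain BETWEEN range_start AND range_end cannot resolve efficiently from a B-tree index; it assumes an index on range_start and that the stored ranges do not overlap:

    -- Pick the candidate row with the greatest range_start not exceeding the value,
    -- then confirm the value actually falls inside that candidate's range.
    SELECT r.*
    FROM (
        SELECT *
        FROM account_range
        WHERE range_start <= '8033576667466317'
        ORDER BY range_start DESC
        LIMIT 1
    ) r
    WHERE r.is_active = 1
      AND r.range_end >= '8033576667466317';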

How can I get a COUNT(col) … GROUP BY to use an index?

Submitted by 喜欢而已 on 2019-12-24 00:08:54
Question: I've got a table (col1, col2, ...) with an index on (col1, col2, ...). The table has millions of rows in it, and I want to run a query:

    SELECT col1, COUNT(col2)
    WHERE col1 NOT IN (<couple of exclusions>)
    GROUP BY col1

Unfortunately, this results in a full table scan, which takes upwards of a minute. Is there any way of getting Oracle to use the index on the columns to return the results much faster?

EDIT: more specifically, I'm running the following query: SELECT owner,
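A sketch of two common ways to steer Oracle toward a fast full scan of the index rather than the table; the table and index names below are placeholders, since the real ones are not shown in the excerpt:

    -- Assuming an index my_index on (col1, col2):
    SELECT /*+ INDEX_FFS(t my_index) */   -- optional hint requesting an index fast full scan
           col1, COUNT(col2)
    FROM   my_table t
    WHERE  col1 NOT IN ('excluded_a', 'excluded_b')
      AND  col1 IS NOT NULL               -- tells the optimizer every needed row is present in the index
    GROUP BY col1;
    -- Declaring col1 NOT NULL on the table has the same effect without the extra predicate,
    -- because the optimizer then knows the index contains every row it needs.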

Optimize join performance with a Hive partitioned table

Submitted by a 夏天 on 2019-12-23 23:23:42
Question: I have a Hive ORC table test_dev_db.TransactionUpdateTable with some sample data, which holds incremental data that needs to be merged into the main table (test_dev_db.TransactionMainHistoryTable), which is partitioned on the columns Country and Tran_date.

Hive incremental-load table schema (it holds 19 rows which need to be merged):

    CREATE TABLE IF NOT EXISTS test_dev_db.TransactionUpdateTable
    (
        Transaction_date timestamp,
        Product string,
        Price int,
        Payment_Type string,
        Name string,
        City string,
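Since the title is about join performance against the partitioned main table, a sketch of the usual first step: constrain the partition columns with literals so Hive can prune partitions instead of scanning the whole table. The join keys and filter values below are assumptions, because the full schema is truncated in the excerpt:

    -- Partition pruning kicks in when the partition columns (Country, Tran_date) are
    -- constrained by constants in the WHERE clause; values arriving only through the
    -- join condition generally do not prune partitions.
    SELECT m.*
    FROM test_dev_db.TransactionMainHistoryTable m
    JOIN test_dev_db.TransactionUpdateTable u
      ON  m.Country   = u.Country
      AND m.Tran_date = u.Tran_date
    WHERE m.Country   IN ('US', 'India')      -- hypothetical literal partition filter
      AND m.Tran_date >= '2019-01-01';        -- likewise hypothetical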

Query optimization — takes too long and stops the server

Submitted by 隐身守侯 on 2019-12-23 23:16:17
Question: My query generates some reports about speeding, last time, and average speed. This is my query:

    SELECT r1.*, r2.name, r2.notes, r2.serial
    FROM (
        SELECT k.idgps_unit,
               MIN(k.dt) AS DT_Start,
               MIN(CASE WHEN k.RowNumber = 1 THEN k.Lat END) AS Latitude_Start,
               MIN(CASE WHEN k.RowNumber = 1 THEN k.Long END) AS Longitude_Start,
               MIN(CASE WHEN k.RowNumber = 1 THEN k.Speed_kmh END) AS Speed_Start,
               MAX(k.dt) AS dt_end,
               MIN(CASE WHEN k.RowNumber = MaxRowNo THEN k.Lat END) AS Latitude_End,
               MIN(CASE WHEN k
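The table that the derived query aggregates over is not visible in the excerpt, so only as a sketch: this per-unit MIN/MAX pattern usually benefits most from a composite index that covers the grouping column and the timestamp. The table and index names below are hypothetical:

    -- Lets the per-unit grouping and the MIN(dt)/MAX(dt) lookups be resolved from the index
    -- instead of a full scan of the tracking data.
    ALTER TABLE gps_track_points
        ADD INDEX idx_unit_dt (idgps_unit, dt);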

MySQL Erratic Query Times

Submitted by 末鹿安然 on 2019-12-23 22:30:22
Question: I am using MySQL version 5.5.14 to run the following query against a table of 5 million rows:

    SELECT P.ID, P.Type, P.Name, P.cty,
           X(P.latlng) AS 'lat', Y(P.latlng) AS 'lng',
           P.cur, P.ak, P.tn, P.St, P.Tm, P.flA, P.ldA, P.flN,
           P.lv, P.bd, P.bt, P.nb,
           P.ak * E.usD AS 'usP'
    FROM PIG P
    INNER JOIN EEL E ON E.cur = P.cur
    WHERE act = '1'
      AND flA >= '1615'
      AND ldA >= '0'
      AND yr >= (YEAR(NOW()) - 100)
      AND lv >= '0'
      AND bd >= '3'
      AND bt >= '2'
      AND nb <= '5'
      AND cDate >= NOW()
      AND MBRContains(LineString(
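One thing commonly checked first when an MBRContains query shows erratic timings is whether the point column has a SPATIAL index at all. A sketch, assuming PIG is a MyISAM table (MySQL 5.5 only supports SPATIAL indexes on MyISAM, and the storage engine is not stated in the excerpt) and latlng is a NOT NULL geometry column:

    -- With an R-tree spatial index, MBRContains(...) can prune by bounding box instead of
    -- evaluating the geometry test against every row that survives the other predicates.
    ALTER TABLE PIG ADD SPATIAL INDEX idx_latlng (latlng);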

Which SQL pattern is faster to avoid inserting duplicate rows?

Submitted by 前提是你 on 2019-12-23 18:24:11
Question: I know of two ways to insert without duplication. The first is using a WHERE NOT EXISTS clause:

    INSERT INTO table_name (col1, col2, col3)
    SELECT %s, %s, %s
    WHERE NOT EXISTS (
        SELECT * FROM table_name AS T
        WHERE T.col1 = %s AND T.col2 = %s)

The other is doing a LEFT JOIN:

    INSERT INTO table_name (col1, col2, col3)
    SELECT %s, %s, %s
    FROM (
        SELECT %s, %s, %s
    ) A
    LEFT JOIN table_name B ON B.COL1 = %s AND B.COL2 = %s
    WHERE B.id IS NULL
    LIMIT 1

Is there a general rule as to one being faster than
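A sketch of a third option that is usually at least as fast and, unlike both patterns above, safe under concurrent inserts: let a unique constraint enforce the rule and skip conflicting rows. PostgreSQL syntax is shown as an assumption; MySQL's INSERT IGNORE / ON DUPLICATE KEY UPDATE is the analogue.

    -- One-time setup: the database itself enforces the no-duplicates rule.
    ALTER TABLE table_name ADD CONSTRAINT uq_table_name_col1_col2 UNIQUE (col1, col2);

    -- The insert then needs no pre-check; rows that would violate the constraint are skipped.
    INSERT INTO table_name (col1, col2, col3)
    VALUES (%s, %s, %s)
    ON CONFLICT (col1, col2) DO NOTHING;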

Please help me with this query (SQL Server 2008)

Submitted by 痴心易碎 on 2019-12-23 12:45:05
Question:

    ALTER PROCEDURE ReadNews
        @CategoryID INT,
        @Culture TINYINT = NULL,
        @StartDate DATETIME = NULL,
        @EndDate DATETIME = NULL,
        @Start BIGINT,  -- for paging
        @Count BIGINT   -- for paging
    AS
    BEGIN
        SET NOCOUNT ON;
        -- ItemType for news is 0
        ;WITH Paging AS
        (
            SELECT news.ID, news.Title, news.Description, news.Date, news.Url,
                   news.Vote, news.ResourceTitle, news.UserID,
                   ROW_NUMBER() OVER(ORDER BY news.rank DESC) AS RowNumber,
                   TotalCount = COUNT(*) OVER()
            FROM dbo.News news
            JOIN ItemCategory itemCat ON
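The procedure body is cut off above, so only as a sketch: the ROW_NUMBER()/COUNT(*) OVER() pattern it starts is normally finished with a final SELECT against the CTE that slices out the requested page, roughly like this (how the asker actually completes it is not shown):

    -- Return one page of rows; @Start and @Count are the procedure's paging parameters.
    SELECT *
    FROM Paging
    WHERE RowNumber BETWEEN @Start AND @Start + @Count - 1
    ORDER BY RowNumber;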