query-optimization

EXPLAIN plan in MySQL performance: Using temporary; Using filesort; Using index condition

Submitted by 雨燕双飞 on 2019-12-02 11:21:18
Question: I read various blogs and documents online, but I just want to know how I can optimize the query. I am unable to decide whether we have to rewrite the query or add indexes in order to optimize it. Adding the CREATE TABLE structure as well: CREATE TABLE `dsr_table` ( `DSR_VIA` CHAR(3) DEFAULT NULL, `DSR_PULLDATA_FLAG` CHAR(1) DEFAULT 'O', `DSR_BILLING_FLAG` CHAR(1) DEFAULT 'O', `WH_FLAG` CHAR(1) DEFAULT 'O', `ARCHIVE_FLAG` CHAR(1) NOT NULL DEFAULT 'O', `DSR_BOOKING_TYPE` INT(2) DEFAULT NULL, `DSR_BRANCH_CODE` CHAR(3) NOT NULL, `DSR_CNNO` CHAR(12) NOT NULL, `DSR_BOOKED_BY` CHAR(1) NOT NULL, `DSR_CUST_CODE`
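As a general illustration (not the asker's dsr_table, whose definition is cut off above), "Using filesort" appears when no index delivers rows in the requested order. SQLite's EXPLAIN QUERY PLAN reports the analogous sort pass as "USE TEMP B-TREE FOR ORDER BY", which makes the effect easy to see. A minimal sketch with hypothetical table and column names:

```python
import sqlite3

# Hypothetical two-column table standing in for the truncated dsr_table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (branch TEXT, booked_on TEXT)")

def plan(sql):
    # Concatenate the "detail" column of EXPLAIN QUERY PLAN output.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM t WHERE branch = 'BLR' ORDER BY booked_on"

# Without a supporting index, a separate sort pass is needed.
before = plan(query)

# A composite index matching the filter and the sort order removes it.
conn.execute("CREATE INDEX idx_branch_date ON t (branch, booked_on)")
after = plan(query)
```

The same idea applies in MySQL: an index whose leading columns match the WHERE equality filters and whose trailing column matches the ORDER BY can make "Using filesort" disappear from EXPLAIN.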

MySQL Enhancing Performance without Cache

Submitted by 喜你入骨 on 2019-12-02 10:56:10
I am using MySQL version 5.5.14 to run the following query from a table of 5 million rows: SELECT P.ID, P.Type, P.Name, P.cty , X(P.latlng) as 'lat', Y(P.latlng) as 'lng' , P.cur, P.ak, P.tn, P.St, P.Tm, P.flA, P.ldA, P.flN , P.lv, P.bd, P.bt, P.nb , P.ak * E.usD as 'usP' FROM PIG P INNER JOIN EEL E ON E.cur = P.cur WHERE act='1' AND flA >= '1615' AND ldA >= '0' AND yr >= (YEAR(NOW()) - 100) AND lv >= '0' AND bd >= '3' AND bt >= '2' AND nb <= '5' AND cDate >= NOW() AND MBRContains(LineString( Point(-65.6583, -87.8906) , Point(65.6583, 87.8906) ), latlng) AND Type = 'g' AND tn = 'l' AND St + Tm

MongoDB: Performance impact of $HINT

Submitted by 青春壹個敷衍的年華 on 2019-12-02 09:07:54
I have a query that uses a compound index with a sort on "_id". The compound index has "_id" at the end of the index, and it works fine until I add a $gt clause to my query. Initial query: db.collection.find({"field1": "blabla", "field2": "blabla"}).sort({_id: 1}) Subsequent queries: db.collection.find({"field1": "blabla", "field2": "blabla", _id: {$gt: ObjectId('...')}}).sort({_id: 1}) What I am noticing is that there are times when my compound index is not used. Instead, Mongo uses the default "BtreeCursor _id_". To avoid this, I have added a HINT to the cursor. I'd like to know if there is going to be

Optimizing a query returning a lot of records, a way to avoid hundreds of joins. Is it a smart solution?

Submitted by 久未见 on 2019-12-02 08:32:26
I am not so into SQL and I have the following doubt about how to optimize a query. I am using MySQL. I have this DB schema: And this is the query that returns the last price (the latest date in the Market_Commodity_Price_Series table) of a specific commodity in a specific market. It contains a lot of joins to retrieve all the related information: SELECT MCPS.id AS series_id, MD_CD.market_details_id AS market_id, MD_CD.commodity_details_id AS commodity_id, MD.market_name AS market_name, MCPS.price_date AS price_date, MCPS.avg_price AS avg_price, CU.ISO_4217_cod AS currency, MU.unit_name AS
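Independent of the number of lookup joins, the "last price" part is a classic latest-row-per-group problem: restrict the series table to the row whose date equals the per-group MAX before joining anything else. A hedged sketch in SQLite with simplified, assumed table and column names (the real schema is only partially visible above):

```python
import sqlite3

# Hypothetical, simplified stand-in for the question's price-series table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE market_commodity_price_series (
    market_id INTEGER, commodity_id INTEGER,
    price_date TEXT, avg_price REAL
);
INSERT INTO market_commodity_price_series VALUES
    (1, 10, '2019-11-01', 4.0),
    (1, 10, '2019-12-01', 5.0),
    (2, 10, '2019-12-01', 6.0);
""")

# Pick the latest row for one market/commodity via a correlated MAX();
# the joins to market/commodity/currency lookup tables then only need to
# run for this single surviving row.
row = conn.execute("""
    SELECT price_date, avg_price
    FROM market_commodity_price_series s
    WHERE s.market_id = 1 AND s.commodity_id = 10
      AND s.price_date = (SELECT MAX(price_date)
                          FROM market_commodity_price_series
                          WHERE market_id = s.market_id
                            AND commodity_id = s.commodity_id)
""").fetchone()
```

An index on (market_id, commodity_id, price_date) would let both the outer filter and the inner MAX() be answered from the index.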

MySQL Query Optimisation - JOIN?

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-02 08:23:47
Question: One for all you MySQL experts :-) I have the following query: SELECT o.*, p.name, p.amount, p.quantity FROM orders o, products p WHERE o.id = p.order_id AND o.total != '0.00' AND DATE(o.timestamp) BETWEEN '2012-01-01' AND '2012-01-31' ORDER BY o.timestamp ASC orders table = 80,900 rows products table = 125,389 rows o.id and p.order_id are indexed The query takes about 6 seconds to complete - which is way too long. I am looking for a way to optimize it, possibly with temporary tables or a different type of join. I'm afraid my understanding of both of these concepts is pretty limited. Can anyone
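One likely culprit here is DATE(o.timestamp): wrapping the indexed column in a function makes the predicate non-sargable, so an index on the timestamp cannot be range-scanned. An equivalent half-open range on the raw column is index-friendly. A sketch in SQLite (the orders schema below is a minimal stand-in, not the asker's full table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, total TEXT, ts TEXT);
CREATE INDEX idx_orders_ts ON orders (ts);
INSERT INTO orders VALUES
    (1, '9.99', '2012-01-15 10:00:00'),
    (2, '0.00', '2012-01-20 11:00:00'),
    (3, '5.00', '2012-02-01 00:00:00');
""")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function on the column: the index on ts cannot be used.
slow = "SELECT id FROM orders WHERE DATE(ts) BETWEEN '2012-01-01' AND '2012-01-31'"
# Equivalent half-open range on the raw column: sargable.
fast = ("SELECT id FROM orders "
        "WHERE ts >= '2012-01-01' AND ts < '2012-02-01'")

slow_ids = [r[0] for r in conn.execute(slow)]
fast_ids = [r[0] for r in conn.execute(fast)]
```

Both forms return the same January rows, but only the second can seek into idx_orders_ts; in MySQL the same rewrite lets the BETWEEN become an index range scan on o.timestamp.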

How to optimize a query if a table contains 10,000 entries using MySQL?

Submitted by 最后都变了- on 2019-12-02 08:13:54
When I execute this query, it takes so much execution time because the user_fans table contains 10,000 user entries. How can I optimize it? Query: SELECT uf.`user_name`,uf.`user_id`, @post := (SELECT COUNT(*) FROM post WHERE user_id = uf.`user_id`) AS post, @post_comment_likes := (SELECT COUNT(*) FROM post_comment_likes WHERE user_id = uf.`user_id`) AS post_comment_likes, @post_comments := (SELECT COUNT(*) FROM post_comments WHERE user_id = uf.`user_id`) AS post_comments, @post_likes := (SELECT COUNT(*) FROM post_likes WHERE user_id = uf.`user_id`) AS post_likes, (@post+@post_comments) AS
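Each correlated (SELECT COUNT(*) ...) above runs once per user_fans row, so four subqueries over 10,000 users means 40,000 lookups. Joining pre-grouped counts instead computes each table's totals in a single pass. A sketch in SQLite with a reduced version of the assumed schema (only two of the four count tables shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_fans (user_id INTEGER PRIMARY KEY, user_name TEXT);
CREATE TABLE post (user_id INTEGER);
CREATE TABLE post_likes (user_id INTEGER);
INSERT INTO user_fans VALUES (1, 'alice'), (2, 'bob');
INSERT INTO post VALUES (1), (1), (2);
INSERT INTO post_likes VALUES (1);
""")

# One GROUP BY per counted table, joined back to user_fans; COALESCE turns
# "no rows for this user" into a zero count.
rows = conn.execute("""
    SELECT uf.user_name,
           COALESCE(p.cnt, 0)  AS post,
           COALESCE(pl.cnt, 0) AS post_likes
    FROM user_fans uf
    LEFT JOIN (SELECT user_id, COUNT(*) cnt FROM post GROUP BY user_id) p
           ON p.user_id = uf.user_id
    LEFT JOIN (SELECT user_id, COUNT(*) cnt FROM post_likes GROUP BY user_id) pl
           ON pl.user_id = uf.user_id
    ORDER BY uf.user_id
""").fetchall()
```

The same shape works in MySQL, and indexes on each table's user_id column make the grouped subqueries index-only scans.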

MySQL query optimization and EXPLAIN for a noob

Submitted by ╄→гoц情女王★ on 2019-12-02 04:28:25
I've been working with databases for a long time but I'm new to query optimization. I have the following query (some of it code-generated): SELECT DISTINCT COALESCE(gi.start_time, '') start_time, COALESCE(b.name, '') bank, COALESCE(a.id, '') account_id, COALESCE(a.account_number, '') account_number, COALESCE(at.code, '') account_type, COALESCE(a.open_date, '') open_date, COALESCE(a.interest_rate, '') interest_rate, COALESCE(a.maturity_date, '') maturity_date, COALESCE(a.opening_balance, '') opening_balance, COALESCE(a.has_e_statement, '') has_e_statement, COALESCE(a.has_bill_pay, '') has_bill

Which column to put first in index? Higher or lower cardinality?

Submitted by 让人想犯罪 __ on 2019-12-02 03:48:29
For example, if I have a table with a city and a state column, what is the best way to use the index? Obviously city will have the highest cardinality, so should I put that column first in the index, should I put state first, or doesn't it matter much? MySQL composite index lookups must take place in the order in which the columns are defined within the index. Since you want MySQL to be able to discriminate between records by performing as few comparisons as possible, with all other things being equal you will benefit most from a composite index in which the columns are ordered from highest- to
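The ordering advice goes together with the leftmost-prefix rule: whichever column comes first, only queries that constrain that column can seek into the index. Cardinality-based cost differences do not show up in plan text, but prefix usability does. A small SQLite illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (city TEXT, state TEXT)")
conn.execute("CREATE INDEX idx_city_state ON places (city, state)")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Leading column constrained: the index can be seeked (SEARCH).
by_city = plan("SELECT * FROM places WHERE city = 'Austin'")
# Only the trailing column constrained: no seek, just a scan.
by_state = plan("SELECT * FROM places WHERE state = 'TX'")
# Both constrained: the seek narrows on both columns.
by_both = plan("SELECT * FROM places WHERE city = 'Austin' AND state = 'TX'")
```

So if many of your queries filter on state alone, putting state first (or adding a separate state index) matters more than raw cardinality; MySQL applies the same leftmost-prefix restriction to its composite indexes.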