query-optimization

Multithreading in MySQL?

狂风中的少年 submitted on 2019-12-05 09:35:27
Are MySQL operations multithreaded? Specifically, when running a SELECT, does the select (or join) algorithm spawn multiple threads that run together? Would being multi-threaded prevent it from supporting a lot of concurrent users?

Several background threads run in a MySQL server, and each database connection is served by a single thread. Parallel queries (a single SELECT executed by multiple threads) are not implemented in MySQL. MySQL as-is can support "a lot of concurrent users"; for example, Facebook started successfully with MySQL. Besides the one thread each connection has, there are several management
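A quick way to observe this one-thread-per-connection model (a minimal sketch, assuming performance_schema is enabled, as it is by default in recent MySQL versions):

    -- Each client connection appears as one FOREGROUND thread;
    -- purge, I/O and other background threads are listed alongside.
    SELECT thread_id, name, type, processlist_id
    FROM performance_schema.threads
    ORDER BY type, thread_id;

    -- The classic per-connection view:
    SHOW PROCESSLIST;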

How to optimize a SQL query with window functions

╄→尐↘猪︶ㄣ submitted on 2019-12-05 08:17:54
This question is related to this one. I have a table that contains power values for devices, and I need to calculate the power consumption for a given time span and return the 10 most power-consuming devices. I have generated 192 devices and 7742208 measurement records (40324 for each), which is roughly how many records the devices would produce in one month. For this amount of data my current query takes over 40 s to execute, which is too much, because the time span and the number of devices and measurements could be much higher. Should I try to solve this with a different approach than lag() OVER PARTITION, and what other
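For reference, the lag()-based approach usually looks something like the sketch below: energy per device is the sum of power times the elapsed time between consecutive samples. The table and column names (measurement, device_id, measured_at, power_w) are assumptions for illustration, not the asker's actual schema:

    SELECT device_id,
           SUM(power_w * EXTRACT(EPOCH FROM measured_at - prev_at)) / 3600
               AS watt_hours
    FROM (
        SELECT device_id, power_w, measured_at,
               LAG(measured_at) OVER (PARTITION BY device_id
                                      ORDER BY measured_at) AS prev_at
        FROM measurement
        WHERE measured_at >= '2019-11-01' AND measured_at < '2019-12-01'
    ) AS s
    WHERE prev_at IS NOT NULL
    GROUP BY device_id
    ORDER BY watt_hours DESC
    LIMIT 10;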

How are Bitmap Heap Scan and Index Scan decided?

五迷三道 submitted on 2019-12-05 07:55:40
I'm testing different queries and I'm curious how the database decides between a Bitmap Heap Scan and an Index Scan.

    create index customers_email_idx on customers(email varchar_pattern_ops);

As you can see, there is a customers table (from the dellstore example) and I added an index on the email column. The first query is this one, which uses an Index Scan:

    select * from customers where email like 'ITQ%@dell.com';

Its EXPLAIN ANALYZE output is:

    QUERY PLAN
    ---------------------------------------------------------------
    Index Scan using customers_email_idx on
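The choice comes down to the planner's row-count estimate for the predicate: few expected rows favor a plain Index Scan, many expected rows favor a Bitmap Index Scan feeding a Bitmap Heap Scan. A sketch against the same table (the exact cutover point depends on the table's statistics and cost settings):

    -- Selective prefix: few estimated rows, typically an Index Scan.
    EXPLAIN ANALYZE
    SELECT * FROM customers WHERE email LIKE 'ITQ%@dell.com';

    -- Much less selective prefix: many estimated rows, so the planner
    -- usually switches to Bitmap Index Scan + Bitmap Heap Scan.
    EXPLAIN ANALYZE
    SELECT * FROM customers WHERE email LIKE 'I%@dell.com';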

Which update is faster: using a join, or sequential updates?

£可爱£侵袭症+ submitted on 2019-12-05 07:49:50
This question follows my previous question, which required updating the same table when deleting a row. I could write two solutions using a stored procedure instead of a trigger or nested query. Both use a helper function my_signal(msg), and both are stored procedures that delete an employee from the Employee table.

First solution: UPDATE rows in the table without a join operation:

    CREATE PROCEDURE delete_employee(IN dssn varchar(64))
    BEGIN
    DECLARE empDesignation varchar(128);
    DECLARE empSsn varchar(64);
    DECLARE empMssn
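For comparison, the join-based variant usually collapses into a single multi-table UPDATE. A sketch only, assuming an Employee(ssn, mssn, ...) schema where mssn is the manager's ssn (these names are guessed from the excerpt): it reassigns the deleted employee's subordinates to that employee's own manager:

    UPDATE Employee AS e
    JOIN Employee AS d ON e.mssn = d.ssn
    SET e.mssn = d.mssn
    WHERE d.ssn = 'deleted-employee-ssn';

In MySQL, one set-based statement like this is generally faster than looping over rows procedurally, because the work stays inside the storage engine instead of round-tripping through procedure logic.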

GROUP BY and GROUP_CONCAT: optimizing a MySQL query without using the main PK

空扰寡人 submitted on 2019-12-05 07:18:59
My example is on MySQL version 5.6.34-log.

Problem summary: the query below takes 40 seconds. The ORDER_ITEM table has 758423 records, the PAYMENT table has 177272 records, and the submission_entry table has 2165698 records (whole-table counts).

Details: I have this query, shown at [1]. I added SQL_NO_CACHE so that repeated test runs are not served from the query cache. I have optimized the indexes (see [2]), but with no significant improvement. The table structures are at [3] and the explain plan used is at [4].

    [1] SELECT SQL_NO_CACHE `payment`.`id` AS id,
        `order_item`.`order_id` AS order_id,
        GROUP_CONCAT(DISTINCT
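A common fix for a slow GROUP BY + GROUP_CONCAT over a large join is to aggregate the big table in a derived table first and join the result, so the grouping never runs over the full joined row set. A sketch only; the join keys and concatenated column (payment_id, order_item_id, answer) are assumptions, since the full query is cut off above:

    SELECT p.id, oi.order_id, se.answers
    FROM payment AS p
    JOIN order_item AS oi ON oi.payment_id = p.id
    JOIN (
        SELECT order_item_id,
               GROUP_CONCAT(DISTINCT answer) AS answers
        FROM submission_entry
        GROUP BY order_item_id
    ) AS se ON se.order_item_id = oi.id;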

MySQL NOT IN query optimization

烈酒焚心 submitted on 2019-12-05 07:14:07
I have two tables, table_product and table_user_ownned_auction:

    table_product
    specific_product_id (primary_key, autoinc)   astatus   ...
    -----------------------------------------------------------
    1                                             APAST     ...
    2                                             ALIVE     ...
    3                                             ALIVE     ...
    4                                             APAST     ...
    5                                             APAST     ...

    table_user_ownned_auction
    own_id   specific_product_id   details
    ----------------------------------------
    1        1                     XXXX
    2        5                     XXXX

I need to select the rows where astatus = APAST that are not in table 2. In the structure above, table 1 has three APAST rows (1, 4, 5), but table 2 only stores specific_product_id 1 and 5, so I need to select specific_product_id = 4. I used this query: SELECT *
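NOT IN subqueries often optimize poorly in older MySQL versions (and behave surprisingly when the subquery can return NULL); the two standard rewrites are an anti-join and NOT EXISTS. A sketch using the table and column names above:

    -- Anti-join: keep APAST products with no match in the owned table.
    SELECT p.*
    FROM table_product AS p
    LEFT JOIN table_user_ownned_auction AS u
           ON u.specific_product_id = p.specific_product_id
    WHERE p.astatus = 'APAST'
      AND u.specific_product_id IS NULL;

    -- Equivalent NOT EXISTS form:
    SELECT p.*
    FROM table_product AS p
    WHERE p.astatus = 'APAST'
      AND NOT EXISTS (
          SELECT 1 FROM table_user_ownned_auction AS u
          WHERE u.specific_product_id = p.specific_product_id);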

Very big data in a MySQL table: even SELECT statements take a long time

两盒软妹~` submitted on 2019-12-05 05:56:10
I am working on a database, and it's a pretty big one with 1.3 billion rows and around 35 columns. Here is what I get after checking the status of the table:

    Name: Table Name
    Engine: InnoDB
    Version: 10
    Row_format: Compact
    Rows: 12853961
    Avg_row_length: 572
    Data_length: 7353663488
    Max_data_length: 0
    Index_length: 5877268480
    Data_free: 0
    Auto_increment: 12933138
    Create_time: 41271.0312615741
    Update_time: NULL
    Check_time: NULL
    Collation: utf8_general_ci
    Checksum: NULL
    Create_options:
    Comment: InnoDB free: 11489280
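Output like the above comes from SHOW TABLE STATUS. One caveat when reading it: for InnoDB, the Rows value is only an estimate, which may be why it disagrees with the 1.3 billion figure. A sketch (substitute the real table name):

    -- \G prints one column per line, as in the status dump above.
    SHOW TABLE STATUS LIKE 'your_table' \G

    -- An exact count requires a full scan, which is itself expensive
    -- on a table this size:
    SELECT COUNT(*) FROM your_table;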

How can I force SQL Server to execute the subquery first and then apply the WHERE filter?

橙三吉。 submitted on 2019-12-05 02:48:18
I have a query like this:

    select * from
    (
        select * from TableX where col1 % 2 = 0
    ) subquery
    where col1 % 4 = 0

The actual subquery is more complicated. When I execute the subquery alone, it quickly returns maybe 200 rows, but when I execute the whole query it takes too long. I know SQL Server applies some optimization here and merges the WHERE clause into the subquery, producing a new execution plan that is not as efficient. I could dive into the execution plan and analyze why (a missing index, stale statistics), but I know for certain that my subquery, which serves as a
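Two common ways to keep SQL Server from merging the outer predicate into the subquery, sketched under the assumption that materializing the small intermediate result is the desired behavior:

    -- 1. Materialize the subquery into a temp table first:
    SELECT * INTO #pre FROM TableX WHERE col1 % 2 = 0;
    SELECT * FROM #pre WHERE col1 % 4 = 0;

    -- 2. TOP acts as an optimization fence in practice: the outer
    --    predicate cannot be pushed below it without changing the
    --    query's semantics, so the subquery is evaluated as written.
    SELECT * FROM
    (
        SELECT TOP (9223372036854775807) *
        FROM TableX WHERE col1 % 2 = 0
    ) subquery
    WHERE col1 % 4 = 0;

The temp-table route is the more predictable of the two.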

What is an automatic covering index?

强颜欢笑 submitted on 2019-12-05 02:32:18
When using EXPLAIN QUERY PLAN in SQLite 3, it sometimes gives me output such as:

    SEARCH TABLE staff AS s USING AUTOMATIC COVERING INDEX (is_freelancer=? AND sap=?) (~6 rows)

Where does this index come from, and what does it do? The table has no manually created indexes on it.

"Automatic" means that SQLite creates a temporary index that is used only for this query and deleted afterwards. This happens when the cost of creating the index is estimated to be smaller than the cost of looking up records in the table without it. (A covering index is an index that contains all the columns to be
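If such a query runs repeatedly, creating the index yourself avoids paying the build cost on every execution. A sketch, with the index name invented and the columns taken from the plan output above:

    CREATE INDEX staff_freelancer_sap_idx ON staff(is_freelancer, sap);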

RethinkDB - Find documents with missing field

萝らか妹 submitted on 2019-12-05 02:08:30
I'm trying to write the optimal query to find all of the documents that do not have a specific field. Is there a better way to do this than the examples listed below?

    // Get the ids of all documents missing "location"
    r.db("mydb").table("mytable").filter({location: null},{default: true}).pluck("id")

    // Get a count of all documents missing "location"
    r.db("mydb").table("mytable").filter({location: null},{default: true}).count()

Right now, these queries take about 300-400 ms on a table with ~40k documents, which seems rather slow. Furthermore, in this specific case, the "location"
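A common ReQL alternative worth benchmarking (a sketch in the same JavaScript driver syntax as the queries above): hasFields expresses "field is absent" directly, without the null/default handling:

    // Documents where "location" is missing entirely
    r.db("mydb").table("mytable")
      .filter(r.row.hasFields("location").not())
      .pluck("id")

Note that any filter-based form scans the whole table, so its runtime grows linearly with table size regardless of the predicate used.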