query-optimization

Query optimization - using conditions on JOIN instead of in the WHERE clause

Submitted by 微笑、不失礼 on 2019-12-24 14:07:58
Question: Inside a stored procedure I need to find the Ids of those clients of the first account whose Code matches any of the second account's clients. I wrote the following query, which works:

    SELECT DISTINCT cil.Id
    FROM ClientIdList AS cil
    INNER JOIN Client AS c1 ON cil.Id = c1.Id
    INNER JOIN Client AS c2 ON c1.Code = c2.Code
    WHERE c2.AccountId = 2
    ORDER BY cil.Id

Here ClientIdList is a single-column table-type variable which holds the Ids of the selected clients from the first account (and I need to use this
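
Per the question's title, the alternative being asked about is moving the account filter from the WHERE clause into a join condition. A minimal sketch of that rewrite, reusing the tables from the excerpt:

    -- Filter applied in the ON clause instead of WHERE. For INNER JOINs the
    -- optimizer generally produces the same plan either way; the placement
    -- only changes the result for OUTER joins.
    SELECT DISTINCT cil.Id
    FROM ClientIdList AS cil
    INNER JOIN Client AS c1 ON cil.Id = c1.Id
    INNER JOIN Client AS c2 ON c1.Code = c2.Code
                           AND c2.AccountId = 2  -- moved from WHERE
    ORDER BY cil.Id;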

Improving this MySQL Query - Select as sub-query

Submitted by 两盒软妹~` on 2019-12-24 11:44:08
Question: I have this query:

    SELECT shot.hole AS hole,
           shot.id AS id,
           (SELECT s.id
            FROM shot AS s
            WHERE s.hole = shot.hole
              AND s.shot_number > shot.shot_number
              AND shot.round_id = s.round_id
            ORDER BY s.shot_number ASC
            LIMIT 1) AS next_shot_id,
           shot.distance AS distance_remaining,
           shot.type AS hit_type,
           shot.area AS onto
    FROM shot
    JOIN course ON shot.course_id = course.id
    JOIN round ON shot.round_id = round.id
    WHERE round.uID = 78

This returns ~900 rows in around 0.7 seconds. This is OK-ish, but there
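
The correlated subquery runs once per output row; on MySQL 8.0+ the same "next shot" lookup can be expressed with a window function that makes a single pass. A sketch, assuming the column names from the excerpt:

    -- LEAD() picks the id of the following shot within the same round and
    -- hole, replacing the per-row correlated subquery (MySQL 8.0+ only).
    SELECT shot.hole,
           shot.id,
           LEAD(shot.id) OVER (
               PARTITION BY shot.round_id, shot.hole
               ORDER BY shot.shot_number
           ) AS next_shot_id,
           shot.distance AS distance_remaining,
           shot.type AS hit_type,
           shot.area AS onto
    FROM shot
    JOIN round ON shot.round_id = round.id
    WHERE round.uID = 78;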

Tips for working with a large quantity of .txt files (and overall large size) in Python?

Submitted by 穿精又带淫゛_ on 2019-12-24 11:22:39
Question: I'm working on a script to parse txt files and store them in a pandas DataFrame that I can export to a CSV. My script worked fine when I was using <100 of my files, but now, trying to run it on the full sample, I'm running into a lot of issues. I'm dealing with ~8000 .txt files with an average size of 300 KB, so about 2.5 GB in total. I was wondering if I could get tips on how to make my code more efficient. For opening and reading files, I use:

    filenames = os.listdir('.')
    dict

Query runs too slowly when there are no results. How can I improve it?

Submitted by 主宰稳场 on 2019-12-24 10:33:19
Question: I have these tables:

    filters (id, name)
    items (item_id, name)
    items_filters (item_id, filter_id, value_id)
    values (id, filter_id, filter_value)

There are about 20000 entries in items and about 80000 entries in items_filters.

    SELECT i.*
    FROM items_filters itf
    INNER JOIN items i ON i.item_id = itf.item_id
    WHERE (itf.filter_id = 1 AND itf.value_id = '1')
       OR (itf.filter_id = 2 AND itf.value_id = '7')
    GROUP BY itf.item_id WITH ROLLUP
    HAVING COUNT(*) = 2
    LIMIT 0, 10;

It takes 0.008 s when there are entries that match
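
When no item satisfies both filter conditions, the query has to scan every candidate row before discovering that the result is empty. A composite index covering the WHERE and GROUP BY columns lets that be answered from the index alone; a minimal sketch using the columns from the question:

    -- Each (filter_id, value_id) pair becomes an index range scan, and
    -- item_id is available from the index for the GROUP BY, so an empty
    -- match is detected without touching the table rows.
    CREATE INDEX ix_items_filters_filter_value
        ON items_filters (filter_id, value_id, item_id);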

Which PDO SQL query is faster in the long run and with heavy data?

Submitted by 情到浓时终转凉″ on 2019-12-24 09:28:36
Question: From a table with over a million records, when I pull data from it I want to check whether the requested data exists or not. So which path is more efficient and faster than the other?

    $Query = '
        SELECT n.id
        FROM names n
        INNER JOIN ages a ON n.id = a.aid
        INNER JOIN regions r ON n.id = r.rid
        WHERE id = :id
    ';
    $stmt = $pdo->prepare($Query);
    $stmt->execute(['id' => $id]);
    if ($stmt->rowCount() == 1) {
        $row = $stmt->fetch();
        ......................
    } else {
        exit();
    }

or

    $EXISTS = 'SELECT EXISTS ( SELECT n
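
The second variant is cut off in the excerpt; a hedged sketch of the usual shape of such a SELECT EXISTS probe, reusing the tables above, follows. EXISTS can stop at the first matching row, so the server never builds a result set just to count it:

    -- Hedged reconstruction of the truncated EXISTS variant; returns 1 or 0.
    SELECT EXISTS (
        SELECT 1
        FROM names n
        INNER JOIN ages a ON n.id = a.aid
        INNER JOIN regions r ON n.id = r.rid
        WHERE n.id = :id
    ) AS found;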

SQLite trigger optimization

Submitted by 懵懂的女人 on 2019-12-24 08:38:15
Question: This is a follow-up to this question about query optimization. In order to make selects fast, as suggested, I tried to pre-compute some data at insertion time using a trigger. Basically, I want to keep the number of occurrences of a given column's value in a given table. The following schema is used to store the occurrences of each value:

    CREATE TABLE valuecount (value text, count int);
    CREATE INDEX countidx ON valuecount (count DESC);
    CREATE UNIQUE INDEX valueidx ON valuecount (value
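
A minimal sketch of such a counting trigger, assuming the monitored table is named items with a text column value (both names are hypothetical; the excerpt does not show them). The INSERT OR IGNORE / UPDATE pair relies on the unique index on valuecount(value) and works on any SQLite version:

    -- Create the counter row the first time a value is seen, then bump it.
    CREATE TRIGGER items_count_insert AFTER INSERT ON items
    BEGIN
        INSERT OR IGNORE INTO valuecount (value, count) VALUES (NEW.value, 0);
        UPDATE valuecount SET count = count + 1 WHERE value = NEW.value;
    END;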

MySQL query optimisation for browse tracker

Submitted by 我与影子孤独终老i on 2019-12-24 08:11:48
Question: I have been reading lots of great answers to different problems on this site, but this is the first time I am posting, so thanks in advance for your help. Here is my question: I have a MySQL table that tracks visits to the different websites we have. This is the table structure:

    create table navigation_base (
        uid int(11) NOT NULL,
        date datetime not null,
        dia date not null,
        ip int(4) unsigned not null default 0,
        session_id int unsigned not null,
        cliente smallint unsigned not null
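
The excerpt cuts off before the query itself, but for a visit-tracking table like this the typical workload is per-site, per-day aggregation. A hedged sketch of an index serving that pattern (the assumption that cliente identifies the site and dia the visit date comes from the column names, not from the question):

    -- Hypothetical covering index for per-site daily rollups, e.g.
    -- counting distinct sessions per cliente and dia.
    CREATE INDEX ix_nav_cliente_dia ON navigation_base (cliente, dia, session_id);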

Is using an IN over a huge data set a good idea?

Submitted by 帅比萌擦擦* on 2019-12-24 07:15:45
Question: Let's say I have a query of the form:

    SELECT a, b, c, d
    FROM table1
    WHERE a IN (SELECT x FROM table2 WHERE some_condition);

Now the subquery behind the IN can return a huge number of records. Assuming that a is the primary key, so an index is used, is this the best way to write such a query? Or is it more optimal to loop over each of the records returned by the subquery? To me it is clear that when I do WHERE a = X it is just an index (tree) traversal. But I am not sure how an IN
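
Most engines rewrite such an IN as a semi-join, and the same intent can be stated explicitly. Two common equivalents, reusing the question's schema (some_condition stands for whatever predicate the subquery applies):

    -- Semi-join via EXISTS: probes table2 per table1 row instead of
    -- materializing the whole subquery result.
    SELECT t1.a, t1.b, t1.c, t1.d
    FROM table1 t1
    WHERE EXISTS (
        SELECT 1 FROM table2 t2
        WHERE t2.x = t1.a AND some_condition
    );

    -- Join against the de-duplicated subquery result.
    SELECT t1.a, t1.b, t1.c, t1.d
    FROM table1 t1
    JOIN (SELECT DISTINCT x FROM table2 WHERE some_condition) t2
      ON t2.x = t1.a;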

MySQL not picking correct row count from index

Submitted by 寵の児 on 2019-12-24 07:09:07
Question: I have the following table:

    CREATE TABLE `test_series_analysis_data` (
      `email` varchar(255) NOT NULL,
      `mappingId` int(11) NOT NULL,
      `packageId` varchar(255) NOT NULL,
      `sectionName` varchar(255) NOT NULL,
      `createdAt` datetime(3) DEFAULT NULL,
      `marksObtained` float NOT NULL,
      `updatedAt` datetime DEFAULT NULL,
      `testMetaData` longtext,
      PRIMARY KEY (`email`,`mappingId`,`packageId`,`sectionName`),
      KEY `rank_index` (`mappingId`,`packageId`,`sectionName`,`marksObtained`),
      KEY `mapping_package` (
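
When MySQL's estimated row count for an index diverges from reality, refreshing the table's statistics is the usual first step, with an index hint as the blunt fallback. A sketch (the WHERE values are hypothetical, since the excerpt does not show the actual query):

    -- Rebuild index statistics so the optimizer's row estimates are current.
    ANALYZE TABLE test_series_analysis_data;

    -- If the estimate is still wrong, pin the index from the schema above.
    SELECT email, marksObtained
    FROM test_series_analysis_data FORCE INDEX (rank_index)
    WHERE mappingId = 1
      AND packageId = 'pkg'
      AND sectionName = 'overall'
    ORDER BY marksObtained DESC;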

Optimize MySQL self-join query

Submitted by 陌路散爱 on 2019-12-24 05:19:33
Question: I have a c_regs table that contains duplicate rows. I've created an index on the form_number and property_name columns. Unfortunately this query is still taking far too long to complete, especially with the addition of the t10 and t11 joins. Is there a way to optimize it? Thanks.

    select ifnull(x.form_datetime,'') reg_date,
           ifnull(x.property_value,'') amg_id,
           x.form_number,
           x.form_name,
           x.form_version,
           ifnull(t1.property_value,'') first_name,
           ifnull(t2.property_value,'') last_name,
           ifnull(t3.property_value,'')
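
The visible pattern is one self-join per property, the classic entity-attribute-value pivot; the usual fix is a single pass with conditional aggregation. A sketch in which the property_name literals are guesses inferred from the aliases visible in the excerpt:

    -- One scan of c_regs instead of ~11 self-joins; each MAX(CASE ...)
    -- extracts a single property per form_number.
    SELECT form_number,
           MAX(CASE WHEN property_name = 'amg_id'     THEN property_value END) AS amg_id,
           MAX(CASE WHEN property_name = 'first_name' THEN property_value END) AS first_name,
           MAX(CASE WHEN property_name = 'last_name'  THEN property_value END) AS last_name
    FROM c_regs
    GROUP BY form_number;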