query-optimization

MySql.Data.MySqlClient.MySqlException: Timeout expired

会有一股神秘感。 submitted on 2019-11-29 13:12:51
In recent times, a particular page in my web app throws the Exception Details: MySql.Data.MySqlClient.MySqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Though I use iBATIS as the persistence layer, this error occurs. I have restarted the MySQL service instance but still get the same error. It didn't happen earlier but happens frequently in recent times. All the web applications deployed on the server use iBATIS, and the DB server remains on the same machine where IIS is installed. There are about 8000 records in which
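One common mitigation (independent of iBATIS) is to put an explicit client-side timeout around slow statements, so the slow query can be identified and tuned rather than restarting the service. As a runnable stand-in for MySqlCommand.CommandTimeout, here is a sketch using SQLite's progress handler; the helper name `run_with_timeout` is invented for illustration:

```python
import sqlite3
import time

def run_with_timeout(conn, sql, seconds):
    # Hypothetical helper: abort any statement that outlives its deadline,
    # mirroring what a client-side command timeout does.
    deadline = time.monotonic() + seconds
    # The handler fires every N SQLite VM instructions; a non-zero return
    # value aborts the running statement with OperationalError.
    conn.set_progress_handler(
        lambda: 1 if time.monotonic() > deadline else 0, 1000)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 0)

conn = sqlite3.connect(":memory:")
rows = run_with_timeout(conn, "SELECT 1", seconds=5.0)
```

A statement that exceeds the deadline raises sqlite3.OperationalError instead of hanging the page, which is roughly what the MySqlException above reports.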

optimize mysql count query

守給你的承諾、 submitted on 2019-11-29 12:12:22
Question: Is there a way to optimize this further, or should I just be satisfied that it takes 9 seconds to count 11M rows?

devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "desc record_updates"
+--------------+----------+------+-----+---------+-------+
| Field        | Type     | Null | Key | Default | Extra |
+--------------+----------+------+-----+---------+-------+
| record_id    | int(11)  | YES  | MUL | NULL    |       |
| date_updated | datetime | YES  | MUL | NULL    |       |
+--------------+------
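A count over millions of rows is usually fast only when the engine can answer it from an index alone (a covering index) instead of touching the table. This sketch uses SQLite as a stand-in for MySQL/InnoDB; the table and column names mirror the question, but the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record_updates (record_id INTEGER, date_updated TEXT)")
conn.execute("CREATE INDEX idx_date ON record_updates (date_updated)")
conn.executemany("INSERT INTO record_updates VALUES (?, ?)",
                 [(i, f"2019-01-{i % 28 + 1:02d}") for i in range(980)])

# The plan shows the count is answered from the index alone ("COVERING INDEX"),
# so the table rows are never read.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM record_updates "
    "WHERE date_updated >= '2019-01-15'").fetchall()
n = conn.execute(
    "SELECT COUNT(*) FROM record_updates "
    "WHERE date_updated >= '2019-01-15'").fetchone()[0]
print(plan, n)
```

On real MySQL the equivalent check is EXPLAIN showing "Using index" for the count.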

Generic SQL that both Access and ODBC/Oracle can understand

删除回忆录丶 submitted on 2019-11-29 12:07:29
I have an MS Access query that is based on a linked ODBC table (Oracle). I'm troubleshooting the poor performance of the query here: Access not properly translating TOP predicate to ODBC/Oracle SQL.

SELECT ri.*
FROM user1_road_insp AS ri
WHERE ri.insp_id = (
    select top 1 ri2.insp_id
    from user1_road_insp ri2
    where ri2.road_id = ri.road_id
      and year(insp_date) between [Enter a START year:] and [Enter a END year:]
    order by ri2.insp_date desc, ri2.length desc, ri2.insp_id
);

The documentation says: When you spot a problem, you can try to resolve it by changing the local query. This is often

Extremely slow PostgreSQL query with ORDER and LIMIT clauses

梦想与她 submitted on 2019-11-29 10:39:55
Question: I have a table, let's call it "foos", with almost 6 million records in it. I am running the following query:

SELECT "foos".* FROM "foos"
INNER JOIN "bars" ON "foos".bar_id = "bars".id
WHERE (("bars".baz_id = 13266))
ORDER BY "foos"."id" DESC LIMIT 5 OFFSET 0;

This query takes a very long time to run (Rails times out while running it). There is an index on all IDs in question. The curious part is that if I remove either the ORDER BY clause or the LIMIT clause, it runs almost instantaneously. I'm
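A typical fix for this ORDER BY ... LIMIT cliff is a composite index covering both the join column and the sort column, so the planner can read matching rows already in order instead of walking the id index backwards hoping to hit matches early. A sketch in SQLite with invented data (`idx_foos_bar_id` is a hypothetical index name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bars (id INTEGER PRIMARY KEY, baz_id INTEGER);
CREATE TABLE foos (id INTEGER PRIMARY KEY, bar_id INTEGER);
CREATE INDEX idx_bars_baz ON bars (baz_id);
-- hypothetical composite index: filter column first, sort column second
CREATE INDEX idx_foos_bar_id ON foos (bar_id, id);
INSERT INTO bars VALUES (1, 13266), (2, 999);
INSERT INTO foos VALUES
  (1,1),(2,1),(3,1),(4,1),(5,1),(6,1),(7,1),(8,1),(9,2),(10,2);
""")
rows = conn.execute("""
  SELECT foos.id FROM foos
  JOIN bars ON foos.bar_id = bars.id
  WHERE bars.baz_id = 13266
  ORDER BY foos.id DESC LIMIT 5
""").fetchall()
print(rows)
```

On Postgres the equivalent would be CREATE INDEX ON foos (bar_id, id); whether the planner uses it still depends on its row estimates for baz_id = 13266.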

Bad optimization/planning on Postgres window-based queries (partition by(, group by?)) - 1000x speedup

假装没事ソ submitted on 2019-11-29 10:17:38
Question: We are running Postgres 9.3.5 (07/2014). We have quite a complex data warehouse/reporting setup in place (ETL, materialized views, indexing, aggregations, analytical functions, ...). What I discovered may be difficult to implement in the optimizer (?), but it makes a huge difference in performance (only sample code, closely resembling our query, to reduce unnecessary complexity):

create view foo as
select sum(s.plan) over w_pyl as pyl_plan, -- money planned to spend in this pot
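The usual workaround when the planner will not push a predicate into a windowed view is to filter before the window is computed, so each partition stays small. A minimal sketch of that rewrite (SQLite 3.25+ for window-function support; toy data and the names `s`, `pot`, `yr`, `planned` are invented):

```python
import sqlite3  # bundled SQLite must be 3.25+ for window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE s (pot TEXT, yr INTEGER, planned REAL);
INSERT INTO s VALUES
  ('a', 2013, 10), ('a', 2014, 20),
  ('b', 2013, 5),  ('b', 2014, 7);
""")
# Filtering *inside* the subquery means the window only ever sees the 2014
# rows; a view that windows first and filters later computes the window over
# every row and discards most of that work afterwards.
rows = conn.execute("""
  SELECT pot, SUM(planned) OVER (PARTITION BY pot) AS pot_plan
  FROM (SELECT * FROM s WHERE yr = 2014)
""").fetchall()
print(rows)
```

The two forms are only equivalent when the filter column is functionally independent of the window frame, which is exactly the property an optimizer has to prove before pushing the predicate down itself.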

Why is django ORM so much slower than raw SQL

我与影子孤独终老i submitted on 2019-11-29 08:05:48
I have the following two pieces of code. First, in raw SQL:

self.cursor.execute('SELECT apple_id FROM main_catalog WHERE apple_id=%s', apple_id)
if self.cursor.fetchone():
    print '##'

Next, in Django:

if Catalog.objects.filter(apple_id=apple_id).exists():
    print '>>>'

Doing it the first way is about 4x faster than the second way in a loop of 100k entries. What accounts for Django being so much slower? Typically ORMs go to the trouble of instantiating a complete object for each row and returning it. Your raw SQL doesn't do that, so it won't take the penalty that incurs. For large result sets where
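The per-row object construction the answer describes can be measured directly. This is not Django itself, only a sketch of the same overhead using sqlite3 and a hand-rolled `Row` class standing in for an ORM model:

```python
import sqlite3
import time

class Row:
    """Stand-in for a full ORM model instance."""
    def __init__(self, apple_id, name):
        self.apple_id = apple_id
        self.name = name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_catalog (apple_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO main_catalog VALUES (?, ?)",
                 [(i, f"item{i}") for i in range(10000)])

# Raw access: tuples straight from the driver.
t0 = time.perf_counter()
raw = conn.execute("SELECT apple_id, name FROM main_catalog").fetchall()
raw_s = time.perf_counter() - t0

# ORM-style access: one Python object built per row.
t0 = time.perf_counter()
objs = [Row(*r) for r in conn.execute("SELECT apple_id, name FROM main_catalog")]
orm_s = time.perf_counter() - t0

print(f"tuples: {raw_s:.4f}s  objects: {orm_s:.4f}s")
```

Django also pays for queryset construction and SQL generation on every loop iteration, which is why .exists() inside a 100k loop compounds the cost; values_list() or a single bulk query narrows the gap.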

How to know which count query is the fastest?

本秂侑毒 submitted on 2019-11-29 07:30:26
I've been exploring query optimizations in the recent releases of Spark SQL 2.3.0-SNAPSHOT and noticed different physical plans for semantically identical queries. Let's assume I've got to count the number of rows in the following dataset:

val q = spark.range(1)

I could count the number of rows as follows:

q.count
q.collect.size
q.rdd.count
q.queryExecution.toRdd.count

My initial thought was that it's almost a constant operation (surely due to a local dataset) that would somehow have been optimized by Spark SQL and would give a result immediately, especially the first one where Spark SQL is in full
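The four variants differ mainly in how much each one materializes before counting: q.count lets the optimizer collapse the query to a count-only plan, while q.collect.size ships every row and counts on the driver. As a plain-Python analogy (not Spark), counting a stream is cheaper than collecting it and taking its size:

```python
# Analogy only: "collect then size" materializes every row, "count" streams.
def rows():
    for i in range(1_000_000):
        yield i

collected = list(rows())           # like q.collect.size: keep all rows
n_collect = len(collected)

n_stream = sum(1 for _ in rows())  # like q.count: never keep the rows
print(n_collect, n_stream)
```

The same answer, but one path holds a million objects in memory and the other holds one counter; Spark's differing physical plans are this distinction plus whole-stage codegen.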

Does size of a VARCHAR column matter when used in queries [duplicate]

旧巷老猫 submitted on 2019-11-29 07:19:23
Possible Duplicate: is there an advantage to varchar(500) over varchar(8000)?

I understand that a VARCHAR(200) column containing 10 characters takes the same amount of space as a VARCHAR(20) column containing the same data. I want to know if changing a dozen VARCHAR(200) columns of a specific table to VARCHAR(20) would make the queries run faster, especially when:

- These columns will never contain more than 20 characters
- These columns are often used in the ORDER BY clause
- These columns are often used in the WHERE clause
- Some of these columns are indexed so that they can be used in the WHERE clause

PS: I am using
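One reason the declared width can matter even when on-disk storage is identical: some engines (older MySQL filesort in particular) size each in-memory sort key by the declared column width, not the actual data. A back-of-the-envelope sketch under that assumption (the model is hypothetical and not a measurement of any specific engine):

```python
# Hypothetical model: fixed-width sort keys padded to the declared width,
# one per row, as older filesort implementations allocate them.
declared_widths = {"VARCHAR(200)": 200, "VARCHAR(20)": 20}
n_rows = 100_000  # every value is far shorter than either declared width

def sort_buffer_bytes(declared):
    return declared_widths[declared] * n_rows

wide = sort_buffer_bytes("VARCHAR(200)")
narrow = sort_buffer_bytes("VARCHAR(20)")
print(wide // narrow)  # → 10
```

Under that model a VARCHAR(200) ORDER BY uses 10x the sort memory of VARCHAR(20) for identical data, which can push a sort from memory to disk; engines with packed sort keys largely remove this penalty.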

Left join or select from multiple table using comma (,) [duplicate]

一曲冷凌霜 submitted on 2019-11-29 05:52:43
This question already has an answer here: SQL left join vs multiple tables on FROM line? I'm curious as to why we need to use LEFT JOIN, since we can use commas to select from multiple tables. What are the differences between LEFT JOIN and using commas to select multiple tables? Which one is faster? Here is my code:

SELECT mw.*, nvs.*
FROM mst_words mw
LEFT JOIN (SELECT no as nonvs, owner, owner_no, vocab_no, correct
           FROM vocab_stats
           WHERE owner = 1111) AS nvs ON mw.no = nvs.vocab_no
WHERE (nvs.correct > 0) AND mw.level = 1

...and:

SELECT *
FROM vocab_stats vs, mst_words mw
WHERE mw.no =
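The two forms can return the same rows here precisely because the WHERE clause filters on a column from the right-hand table (nvs.correct > 0), which discards the NULL rows the LEFT JOIN would otherwise preserve, turning it into an inner join in effect. A runnable sketch in SQLite with simplified columns and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mst_words (no INTEGER, level INTEGER);
CREATE TABLE vocab_stats (vocab_no INTEGER, owner INTEGER, correct INTEGER);
INSERT INTO mst_words VALUES (1, 1), (2, 1), (3, 1);
INSERT INTO vocab_stats VALUES (1, 1111, 5), (2, 1111, 0);
""")
# LEFT JOIN keeps word 3 (no stats row) as NULLs, but nvs.correct > 0 in the
# WHERE clause throws that NULL row away again.
left = conn.execute("""
  SELECT mw.no FROM mst_words mw
  LEFT JOIN vocab_stats nvs ON mw.no = nvs.vocab_no AND nvs.owner = 1111
  WHERE nvs.correct > 0 AND mw.level = 1
""").fetchall()
# Comma-style FROM list is an inner join; same filters, same result here.
comma = conn.execute("""
  SELECT mw.no FROM vocab_stats vs, mst_words mw
  WHERE mw.no = vs.vocab_no AND vs.owner = 1111
    AND vs.correct > 0 AND mw.level = 1
""").fetchall()
print(left, comma)
```

They diverge as soon as the WHERE filter moves into the join condition (or is dropped): then the LEFT JOIN keeps unmatched words and the comma join does not. Performance-wise, most planners produce the same plan for both once the filters make them equivalent.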