sql-execution-plan

Are SQL Execution Plans based on Schema or Data or both?

China☆狼群 submitted on 2019-12-10 17:23:32
Question: I hope this question is not too obvious... I have already found lots of good information on interpreting execution plans, but there is one question I haven't found the answer to. Is the plan (and more specifically the relative CPU cost) based on the schema only, or also on the actual data currently in the database? I am trying to do some analysis of where indexes are needed in my product's database, but am working with my own test system, which does not have close to the amount of data a product in
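A small sketch of why the usual answer is "both": cost-based planners keep statistics gathered from the actual rows, not just from the DDL. The table, index, and data below are invented; SQLite is used only because its ANALYZE command makes the stored statistics easy to inspect.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, grp INTEGER)")
con.execute("CREATE INDEX t_grp ON t (grp)")
con.executemany("INSERT INTO t (grp) VALUES (?)",
                [(i % 5,) for i in range(10_000)])

con.execute("ANALYZE")  # samples the actual rows, not the schema

# ANALYZE writes per-index statistics gathered from the data
# (total rows, average rows per distinct key); the planner costs
# candidate plans with these numbers.
stats = con.execute(
    "SELECT idx, stat FROM sqlite_stat1 WHERE tbl = 't'").fetchall()
print(stats)  # e.g. [('t_grp', '10000 2000')]
```

With an empty test database those statistics would be missing or tiny, which is why index choices observed on a small test system may not match production.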

The Same SQL Query takes longer to run in one DB than another DB under the same server

两盒软妹~` submitted on 2019-12-10 17:22:46
Question: I have a SQL database server and 2 databases under it with the same structure and data. I run the same SQL query in the 2 databases; one of them takes longer, while the other completes in less than 50% of the time. They both have different execution plans. The query for the view is as below:
SELECT DISTINCT i.SmtIssuer, i.SecID, ra.AssetNameCurrency AS AssetIdCurrency, i.IssuerCurrency, seg.ProxyCurrency, shifts.ScenarioDate, ten.TenorID, ten.Tenor, shifts.Shift, shifts.BusinessDate, shifts

Why is Postgres scanning a huge table instead of using my index?

你离开我真会死。 submitted on 2019-12-10 16:14:31
Question: I noticed one of my SQL queries is much slower than I expected it to be, and it turns out that the query planner is coming up with a plan that seems really bad to me. My query looks like this:
select A.style, count(B.x is null) as missing, count(*) as total
from A left join B using (id, type)
where A.country_code in ('US', 'DE', 'ES')
group by A.country_code, A.style
order by A.country_code, total
B has a (type, id) index, and A has a (country_code, style) index. A is much smaller than B:
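Separately from the plan, one detail in this query is worth flagging: count(expr) counts rows where the expression is non-NULL, and `B.x IS NULL` evaluates to true or false, never NULL, so `count(B.x is null)` counts every joined row rather than the missing ones. A minimal sketch (SQLite, a single-column join key instead of the original `(id, type)`, table contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (id INTEGER, style TEXT);
    CREATE TABLE B (id INTEGER, x TEXT);
    INSERT INTO A VALUES (1, 's'), (2, 's'), (3, 's');
    INSERT INTO B VALUES (1, 'hit');   -- ids 2 and 3 have no match
""")
row = con.execute("""
    SELECT count(B.x IS NULL),      -- boolean is never NULL: counts all 3
           count(*) - count(B.x),   -- unmatched rows: the intended "missing"
           count(*)                 -- total joined rows
    FROM A LEFT JOIN B USING (id)
""").fetchone()
print(row)  # (3, 2, 3)
```

`count(*) - count(B.x)` (or a `sum(CASE WHEN B.x IS NULL THEN 1 ELSE 0 END)`) gives the missing-row count the query seems to intend.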

MySQL slow query with join even though EXPLAIN shows good plan

夙愿已清 submitted on 2019-12-10 13:53:31
Question: I have the following scenario: in a MySQL database, I have 2 MyISAM tables, one with 4.2 million rows and another with 320 million rows. The following is the schema for the tables:
Table1 (4.2M rows)
F1 INTEGER UNSIGNED NOT NULL PRIMARY KEY
f2 varchar(40)
f3 varchar(40)
f4 varchar(40)
f5 varchar(40)
f6 smallint(6)
f7 smallint(6)
f8 varchar(40)
f9 varchar(40)
f10 smallint(6)
f11 varchar(10)
f12 tinyint(4)
f13 smallint(6)
f14 text
Table2 (320M rows)
F1 INTEGER UNSIGNED NOT NULL PRIMARY KEY
f2

sqlite3 select min, max together is much slower than select them separately

自作多情 submitted on 2019-12-10 13:36:06
Question:
sqlite> explain query plan select max(utc_time) from RequestLog;
0|0|0|SEARCH TABLE RequestLog USING COVERING INDEX key (~1 rows)  # very fast
sqlite> explain query plan select min(utc_time) from RequestLog;
0|0|0|SEARCH TABLE RequestLog USING COVERING INDEX key (~1 rows)  # very fast
sqlite> explain query plan select min(utc_time), max(utc_time) from RequestLog;
0|0|0|SCAN TABLE RequestLog (~8768261 rows)  # will be very very slow
While I use min and max separately, it works perfectly. However,
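SQLite's min()/max() index optimization only applies when the query contains a single aggregate, so the combined form falls back to a table scan. The usual workaround is two scalar sub-selects, each of which is optimized on its own. A sketch with an invented table of the same shape (the index name is changed from `key`, which would need quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE RequestLog (utc_time INTEGER)")
con.execute("CREATE INDEX idx_utc ON RequestLog (utc_time)")
con.executemany("INSERT INTO RequestLog VALUES (?)",
                [(i,) for i in range(1000)])

def plan(sql):
    # Join the detail column of each EXPLAIN QUERY PLAN step.
    return " | ".join(r[3] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Combined aggregates: typically reported as a full scan.
print(plan("SELECT min(utc_time), max(utc_time) FROM RequestLog"))

# Workaround: each sub-select can use the index on its own.
workaround = ("SELECT (SELECT min(utc_time) FROM RequestLog), "
              "       (SELECT max(utc_time) FROM RequestLog)")
print(plan(workaround))
lo, hi = con.execute(workaround).fetchone()
print(lo, hi)  # 0 999
```

The rewritten query returns the same values while touching only the two ends of the index.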

Why does SQL Server query optimizer sometimes overlook obvious clustered primary key?

两盒软妹~` submitted on 2019-12-10 11:23:49
Question: I have been scratching my head on this one. I run a simple select count(id) on a table with id as the clustered integer primary key, and the SQL optimizer totally ignores the primary key in its query execution plan, in favor of an index on a date field... ??? Actual table:
CREATE TABLE [dbo].[msgr](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [dt] [datetime2](3) NOT NULL CONSTRAINT [DF_msgr_dt] DEFAULT (sysdatetime()),
    [uid] [int] NOT NULL,
    [msg] [varchar](7000) NOT NULL CONSTRAINT [DF_msgr_msg] DEFAULT
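A common explanation (hedged, since the actual plan isn't shown): every nonclustered index contains an entry for each row, so for a bare count the optimizer is free to pick the narrowest index rather than the wide clustered one, and the date index is simply fewer pages to read. The same effect can be observed in SQLite with an invented miniature of the table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msgr (id INTEGER PRIMARY KEY, dt TEXT, "
            "uid INTEGER, msg TEXT)")
con.execute("CREATE INDEX ix_msgr_dt ON msgr (dt)")
con.executemany("INSERT INTO msgr (dt, uid, msg) VALUES (?, ?, ?)",
                [(f"2019-12-{i % 30 + 1:02d}", i, "x" * 100)
                 for i in range(500)])

# The narrow dt index already holds one entry per row (plus the rowid),
# so the plan typically reports a covering scan of the index rather
# than a scan of the wide base table.
detail = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(id) FROM msgr").fetchone()[3]
print(detail)

(n,) = con.execute("SELECT count(id) FROM msgr").fetchone()
print(n)  # 500
```

The count is the same either way; the optimizer is just minimizing I/O, not "overlooking" the key.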

understanding explain plan in oracle

戏子无情 submitted on 2019-12-10 09:25:01
Question: I was trying to understand the explain plan in Oracle and wanted to know what conditions Oracle considers while forming the explain plan. I was testing a simple query against the HR schema present in Oracle 11g:
select * from countries where region_id in (select region_id from regions where region_name = 'Europe');
When I ran the following queries:
explain plan for select * from countries where region_id in (select region_id from regions where region_name = 'Europe');
SELECT * FROM table(dbms_xplan
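For comparison (not Oracle, and the HR data below is a two-row stand-in), SQLite exposes the same kind of decision: the planner rewrites the IN subquery into a join-like lookup, and the plan output shows one line per step, including which table is visited first:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE regions (region_id INTEGER PRIMARY KEY, region_name TEXT);
    CREATE TABLE countries (country_id TEXT, country_name TEXT,
                            region_id INTEGER);
    INSERT INTO regions VALUES (1, 'Europe'), (2, 'Americas');
    INSERT INTO countries VALUES ('DE', 'Germany', 1),
                                 ('US', 'United States', 2);
""")
query = """
    SELECT * FROM countries
    WHERE region_id IN (SELECT region_id FROM regions
                        WHERE region_name = 'Europe')
"""
for step in con.execute("EXPLAIN QUERY PLAN " + query):
    print(step[3])           # one detail line per plan step

names = [r[1] for r in con.execute(query)]
print(names)  # ['Germany']
```

The inputs a planner weighs are broadly the same across engines: available indexes, statistics about row counts and selectivity, and whether the subquery can be flattened into a semi-join.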

make the optimizer use all columns of an index

时光怂恿深爱的人放手 submitted on 2019-12-09 23:41:09
Question: We have a few tables storing temporal data that have a natural primary key consisting of 3 columns. Example: maximum temperature for this day. This is the composite primary key index (in this order):
id number(10): the id of the timeseries.
day date: the day for which this data was reported.
kill_at timestamp: the last timestamp before this data was deleted or updated.
Simplified logic: when we make a forecast at 10:00am, then the last entry found for this id/day combination has its create_at
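One rule of thumb behind this kind of tuning: a composite index is usable from its leading columns forward, so the predicates must pin id and day with equality before kill_at can be used for range filtering or ordering. A sketch of the "last entry for this id/day" lookup (SQLite, invented data, simplified types):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE max_temp (
        id      INTEGER,   -- the id of the timeseries
        day     TEXT,      -- the day the value was reported for
        kill_at TEXT,      -- when this version was superseded
        temp    REAL,
        PRIMARY KEY (id, day, kill_at)
    )
""")
con.executemany("INSERT INTO max_temp VALUES (?, ?, ?, ?)", [
    (1, '2019-12-09', '10:00', 4.2),   # superseded at 10:00
    (1, '2019-12-09', '23:59', 5.0),   # current version
])

# Equality on the two leading columns lets the third column of the
# same index satisfy the ORDER BY without a sort.
sql = """
    SELECT temp FROM max_temp
    WHERE id = ? AND day = ?
    ORDER BY kill_at DESC LIMIT 1
"""
args = (1, '2019-12-09')
print(con.execute("EXPLAIN QUERY PLAN " + sql, args).fetchone()[3])

(temp,) = con.execute(sql, args).fetchone()
print(temp)  # 5.0
```

A predicate on kill_at alone, skipping id and day, could not use this index and would force a scan; that is typically what "make the optimizer use all columns" comes down to.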

mysql explain different results on different servers, same query, same db

坚强是说给别人听的谎言 submitted on 2019-12-09 18:01:11
Question: After much work I finally got a rather complicated query to work very smoothly and return results very quickly. It was running well on both dev and testing, but now testing has slowed considerably. The EXPLAIN query, which takes 0.06 seconds on dev and was about the same in testing, is now 7 seconds in testing. The explains are slightly different, and I'm not sure why this would be. The explain from dev:
-+---------+------------------------------+------+---------------------------------+ | id |

How can LIKE '%…' seek on an index?

巧了我就是萌 submitted on 2019-12-09 11:13:37
Question: I would expect these two SELECTs to have the same execution plan and performance. Since there is a leading wildcard on the LIKE, I expect an index scan. When I run this and look at the plans, the first SELECT behaves as expected (with a scan). But the second SELECT plan shows an index seek, and runs 20 times faster. Code:
-- Uses index scan, as expected:
SELECT 1 FROM AccountAction WHERE AccountNumber LIKE '%441025586401'
-- Uses index seek somehow, and runs much faster:
declare @empty
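A frequently cited explanation for the second plan: when the pattern sits in a variable, SQL Server cannot see the leading wildcard at compile time, so it compiles a dynamic seek whose range bounds (often shown in plans as LikeRangeStart/LikeRangeEnd) are computed at run time, with the LIKE kept as a residual filter. The run-time range-seek mechanism can be sketched directly (SQLite, invented account data; a known prefix is used here, since that is the case where the computed range actually narrows the seek):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE AccountAction (AccountNumber TEXT PRIMARY KEY)")
con.executemany("INSERT INTO AccountAction VALUES (?)",
                [("4410255864%02d" % i,) for i in range(100)])

def seek_bounds(prefix):
    # Run-time computed half-open range [prefix, prefix-with-last-char-
    # bumped): the same idea as the LikeRangeStart/LikeRangeEnd pair.
    hi = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, hi

lo, hi = seek_bounds("44102558640")
hits = [r[0] for r in con.execute(
    "SELECT AccountNumber FROM AccountAction "
    "WHERE AccountNumber >= ? AND AccountNumber < ? "
    "AND AccountNumber LIKE ?",        # residual filter, as in the plan
    (lo, hi, "44102558640_"))]
print(len(hits))  # 10
```

For a pattern that genuinely starts with '%', the computed range covers the whole index, so the "seek" can end up reading every row; the shape of the plan alone does not explain a 20x speedup, which is part of what makes the question interesting.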