Question
In Oracle, is there any way to determine how long a SQL query will take to fetch all the records, and how large the result will be, without actually executing it and waiting for the entire result?
I repeatedly need to download and provide data to users using a plain Oracle SQL SELECT (not Data Pump, import, etc.). Sometimes the results run into millions of rows.
Answer 1:
The actual run time will not be known unless you run the query, but you can try to estimate it:
- First, do an EXPLAIN PLAN (explain only); this will NOT run the query. Based on your current statistics, it shows roughly how the query would be executed.
- It will not reflect the actual time and effort of reading the data from the data blocks.
- Do you have a large block size?
- Is the schema normalized or de-normalized for query/reporting?
- How large is a row? Does it fit in a single block, so that only one fetch is needed?
- Number of rows you are expecting.
- Estimate transfer time from the amount of data multiplied by your network latency.
Based on these factors you can try to estimate the time.
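The last two points above (optimizer's byte estimate combined with network throughput) can be sketched as a back-of-envelope query against PLAN_TABLE. This is an assumption-laden sketch: the 10 MB/s throughput figure is a placeholder you would replace with a measured value for your own network, and the statement_id is arbitrary.

```sql
-- Sketch: rough transfer-time estimate from the optimizer's byte estimate.
-- The 10 MB/s network throughput is an ASSUMED placeholder figure;
-- substitute a value measured on your own network.
explain plan set statement_id = 'EST_XFER' for
select * from dba_tables;

select bytes,
       round(bytes / (10 * 1024 * 1024), 1) as est_transfer_seconds
from   plan_table
where  statement_id = 'EST_XFER'
and    id = 0;  -- row 0 sums up the estimates for the whole statement
```

The result is only as good as the optimizer's cardinality and byte estimates, so treat it as an order-of-magnitude figure rather than a promise.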
Answer 2:
This requires good statistics, explain plan for ..., adjusting sys.aux_stats$, and then adjusting your expectations.
Good statistics

The explain plan estimates are based on optimizer statistics. Make sure that tables and indexes have up-to-date statistics. On 11g this usually means sticking with the default settings and tasks, and only manually gathering statistics after large data loads.
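Manually gathering statistics after a large load can be sketched with the standard dbms_stats package; the owner and table names here ('APP', 'BIG_TABLE') are hypothetical placeholders.

```sql
-- Sketch: refresh optimizer statistics after a large data load.
-- 'APP' and 'BIG_TABLE' are placeholder owner/table names.
begin
  dbms_stats.gather_table_stats(
    ownname => 'APP',
    tabname => 'BIG_TABLE',
    cascade => true);  -- also gather statistics for the table's indexes
end;
/
```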
Explain plan for ...

Use a statement like this to create and store the explain plan for any SQL statement. This even works for creating indexes and tables.

explain plan set statement_id = 'SOME_UNIQUE_STRING' for
select * from dba_tables cross join dba_tables;

This is usually the best way to visualize an explain plan:

select * from table(dbms_xplan.display);

Plan hash value: 2788227900

-------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |   12M | 5452M | 00:00:19 |
|*  1 | HASH JOIN RIGHT OUTER |      |   12M | 5452M | 00:00:19 |
|   2 | TABLE ACCESS FULL     | SEG$ |  7116 |  319K | 00:00:01 |
...

The raw data is stored in PLAN_TABLE. The first row of the plan usually sums up the estimates for the other steps:

select cardinality, bytes, time
from plan_table
where statement_id = 'SOME_UNIQUE_STRING'
and id = 0;

CARDINALITY      BYTES  TIME
   12934699 5717136958    19

Adjust sys.aux_stats$

The time estimate is based on system statistics stored in sys.aux_stats$. These are numbers for metrics like CPU speed, single-block I/O read time, etc. For example, on my system:

select * from sys.aux_stats$ order by sname;

SNAME          PNAME       PVAL1             PVAL2
SYSSTATS_INFO  DSTART                        09-11-2014 11:18
SYSSTATS_INFO  DSTOP                         09-11-2014 11:18
SYSSTATS_INFO  FLAGS       1
SYSSTATS_INFO  STATUS                        COMPLETED
SYSSTATS_MAIN  CPUSPEED
SYSSTATS_MAIN  CPUSPEEDNW  3201.10192837466
SYSSTATS_MAIN  IOSEEKTIM   10
SYSSTATS_MAIN  IOTFRSPEED  4096
SYSSTATS_MAIN  MAXTHR
SYSSTATS_MAIN  MBRC
SYSSTATS_MAIN  MREADTIM
SYSSTATS_MAIN  SLAVETHR
SYSSTATS_MAIN  SREADTIM

The numbers can be automatically gathered by dbms_stats.gather_system_stats. They can also be manually modified. It's a SYS table, but relatively safe to modify. Create some sample queries, compare the estimated time with the actual time, and adjust the numbers until they match.

Discover you probably wasted a lot of time
Predicting run time is theoretically impossible to get right in all cases, and in practice it is horribly difficult to forecast for non-trivial queries. Jonathan Lewis wrote a whole book about those predictions, and that book only covers the "basics".
Explain plan estimates for complex queries are typically considered "good enough" even when they are off by one or two orders of magnitude. But that kind of difference is usually not good enough to show to a user, or to use for making any important decisions.
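The calibration step suggested above (create sample queries, compare estimated time with actual time) can be sketched as follows. The statement_id and the target table are hypothetical placeholders, and timing the real run still requires executing the query once on a representative sample.

```sql
-- Sketch: pull the optimizer's time estimate for a sample query,
-- then time a real run and compare. 'SAMPLE_Q' and app.big_table
-- are placeholder names.
explain plan set statement_id = 'SAMPLE_Q' for
select * from app.big_table;

select time as estimated_seconds
from   plan_table
where  statement_id = 'SAMPLE_Q'
and    id = 0;

-- In SQL*Plus: enable timing, run the actual query once, and compare
-- the elapsed time against estimated_seconds. If they diverge badly,
-- revisit sys.aux_stats$ as described above.
set timing on
```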
Source: https://stackoverflow.com/questions/29940412/determine-oracle-query-execution-time-and-proposed-datasize-without-actually-exe