Why is Oracle so slow when I pass a java.sql.Timestamp for a DATE column?


I have a table with a DATE column with time (as usual in Oracle, since there isn't a TIME type). When I query that column from JDBC, I have two options…

4 Answers
  • 2020-12-15 01:58

    I had this problem on a project a while ago and setting the connection property oracle.jdbc.V8Compatible=true fixed the problem.

    Dougman's link tells you how to set it:

    You set the connection property by adding it to the java.util.Properties object passed to DriverManager.getConnection or to OracleDataSource.setConnectionProperties. You set the system property by including a -D option in your java command line.

    java -Doracle.jdbc.V8Compatible="true" MyApp
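
    For reference, here is a minimal sketch of the java.util.Properties approach (the connection URL and credentials are placeholders, not from the question):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class V8CompatibleExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "scott");          // placeholder credentials
            props.setProperty("password", "tiger");
            // Ask the pre-11g thin driver to bind java.sql.Timestamp as SQL DATE
            props.setProperty("oracle.jdbc.V8Compatible", "true");

            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XE", props);
            // ... use the connection as usual ...
            conn.close();
        }
    }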

Note: as of 11g this property is apparently no longer used.

    From http://forums.oracle.com/forums/thread.jspa?messageID=1659839 :

One additional note for those who are using the 11gR1 (and later) JDBC thin driver: the V8Compatible connection property no longer exists, so you can't rely on it to send your java.sql.Timestamp as a SQLDATE. What you can do, however, is call:

    setObject(i, aTimestamp, java.sql.Types.DATE) sends data as SQLDATE
    setObject(i, aDate) sends data as SQLDATE
    setDate(i, aDate) sends data as SQLDATE
    setDATE(i, aDATE) (non standard) sends data as SQLDATE
    
    setObject(i, aTimestamp) sends data as SQLTIMESTAMP
    setTimestamp(i, aTimestamp) sends data as SQLTIMESTAMP
    setTIMESTAMP(i, aTIMESTAMP) (non standard) sends data as SQLTIMESTAMP
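
    For example, with an 11gR1+ thin driver you can keep passing a java.sql.Timestamp and still have it sent as SQLDATE by using the target-type overload of setObject. A sketch (table and column names are made up for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.sql.Types;

    public class BindTimestampAsDate {
        // Hypothetical DATE column CREATED_AT in table ORDERS
        static ResultSet findCreatedBefore(Connection conn, Timestamp cutoff) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM orders WHERE created_at < ?");
            // Types.DATE makes the driver send the value as SQLDATE,
            // so an index on CREATED_AT can still be range-scanned.
            ps.setObject(1, cutoff, Types.DATE);
            return ps.executeQuery();
        }
    }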
    
  • 2020-12-15 02:03

This is because the TIMESTAMP datatype is more precise than DATE, so when you supply a TIMESTAMP parameter value in a condition on a DATE column, Oracle has to convert every DATE value to a TIMESTAMP to make the comparison (this is the INTERNAL_FUNCTION usage above), and therefore the index has to be scanned in full.
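
    If you cannot change how the value is bound, one possible workaround (a sketch, not part of this answer; connection, start and end are assumed to be an open java.sql.Connection and two java.sql.Timestamp values) is to move the conversion onto the bind variable instead of the column, by casting the placeholder to DATE in the SQL text. The DATE column is then compared as-is, which should keep the index usable:

    // Sketch: cast the bound TIMESTAMP to DATE in SQL so the DATE column
    // (hypothetically named CREATED_AT) is not wrapped in a conversion.
    PreparedStatement stmt = connection.prepareStatement(
        "SELECT * FROM orders " +
        "WHERE created_at > CAST(? AS DATE) AND created_at < CAST(? AS DATE)");
    stmt.setTimestamp(1, start);
    stmt.setTimestamp(2, end);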

  • 2020-12-15 02:09

    I have a similar problem here:

    Non-negligible execution plan difference with Oracle when using jdbc Timestamp or Date

    In my example it essentially comes down to the fact that when using JDBC Timestamp, an INTERNAL_FUNCTION is applied to the filter column, not the bind variable. Thus, the index cannot be used for RANGE SCANS or UNIQUE SCANS anymore:

    // execute_at is of type DATE.
    PreparedStatement stmt = connection.prepareStatement(
        "SELECT /*+ index(my_table my_index) */ * " + 
        "FROM my_table " +
        "WHERE execute_at > ? AND execute_at < ?");
    

    These two bindings result in entirely different behaviour (to exclude bind variable peeking issues, I actually enforced two hard-parses):

    // 1. with timestamps
    stmt.setTimestamp(1, start);
    stmt.setTimestamp(2, end);
    
    // 2. with dates
    stmt.setDate(1, start);
    stmt.setDate(2, end);
    

    1) With timestamps, I get an INDEX FULL SCAN and thus a filter predicate

    --------------------------------------------------------------
    | Id  | Operation                    | Name                  |
    --------------------------------------------------------------
    |   0 | SELECT STATEMENT             |                       |
    |*  1 |  FILTER                      |                       |
    |   2 |   TABLE ACCESS BY INDEX ROWID| my_table              |
    |*  3 |    INDEX FULL SCAN           | my_index              |
    --------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter(:1<:2)
       3 - filter((INTERNAL_FUNCTION("EXECUTE_AT")>:1 AND 
                   INTERNAL_FUNCTION("EXECUTE_AT")<:2))
    

    2) With dates, I get the much better INDEX RANGE SCAN and an access predicate

    --------------------------------------------------------------
    | Id  | Operation                    | Name                  |
    --------------------------------------------------------------
    |   0 | SELECT STATEMENT             |                       |
    |*  1 |  FILTER                      |                       |
    |   2 |   TABLE ACCESS BY INDEX ROWID| my_table              |
    |*  3 |    INDEX RANGE SCAN          | my_index              |
    --------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter(:1<:2)
       3 - access("EXECUTE_AT">:1 AND "EXECUTE_AT"<:2)
    

    Solving this problem inside third-party APIs

For the record, this problem can also be solved within third-party APIs, for instance in Hibernate:

    • http://blog.jooq.org/2014/12/29/leaky-abstractions-or-how-to-bind-oracle-date-correctly-with-hibernate/

    Or in jOOQ:

    • http://blog.jooq.org/2014/12/22/are-you-binding-your-oracle-dates-correctly-i-bet-you-arent/
  • 2020-12-15 02:16

I don't understand what {ts '2009-12-08 00:00:00.000'} actually means, since this isn't Oracle SQL as far as I know. Can you show exactly what query you're running?

One possible problem is that you're specifying your range with milliseconds. Oracle's DATE type only goes down to seconds (use the TIMESTAMP type if you need to store fractions of a second). What might be happening is that in the first query, Oracle converts each DATE value to a TIMESTAMP in order to compare it with your specified TIMESTAMP. In the second case, it knows TRUNC() will effectively round your value to something that can be expressed as a DATE, so no conversion is needed.

If you want to avoid such implicit conversions, make sure you're always comparing like with like, e.g.:

    select * 
    from my_table t
    where t.ts between to_date('2009-12-08','YYYY-MM-DD') and to_date('2009-12-09','YYYY-MM-DD')
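
    The same like-with-like comparison can be kept from JDBC by binding java.sql.Date values instead of timestamps (a sketch, reusing the table and column names above; connection is assumed to be an open java.sql.Connection):

    // Sketch: java.sql.Date is sent as SQLDATE, so no implicit conversion
    // of the DATE column is needed.
    PreparedStatement stmt = connection.prepareStatement(
        "SELECT * FROM my_table t WHERE t.ts BETWEEN ? AND ?");
    stmt.setDate(1, java.sql.Date.valueOf("2009-12-08"));
    stmt.setDate(2, java.sql.Date.valueOf("2009-12-09"));
    ResultSet rs = stmt.executeQuery();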
    