Oracle JDBC prefetch: how to avoid running out of RAM


Basically, Oracle's default strategy in later ojdbc jars is to "pre-allocate" an array per "prefetch" row that accommodates the largest value that could conceivably come back from that query. In my case I had some VARCHAR2(4000) columns in there, so 50 threads * 3 VARCHAR2 columns * 4000 chars each was adding up to multiple gigabytes of RAM [yikes]. There does not appear to be an option to say "don't pre-allocate that array, just use the size actually needed." Ojdbc even keeps these pre-allocated buffers around between PreparedStatements so it can reuse them. Definitely a memory hog.
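As a rough back-of-the-envelope check (the prefetch size of 3000 and the 2-bytes-per-char figure are my assumptions for the estimate, not measured values), the buffers alone work out to several gigabytes:

    public class PrefetchSizing {
        public static void main(String[] args) {
            int threads = 50;          // concurrent statements, from the scenario above
            int fetchSize = 3000;      // assumed prefetch rows per statement
            int varcharCols = 3;       // VARCHAR2(4000) columns in the query
            int declaredChars = 4000;  // buffers sized to the declared width, not the actual data
            int bytesPerChar = 2;      // a Java char is 2 bytes (assumption for the estimate)

            long bytes = (long) threads * fetchSize * varcharCols * declaredChars * bytesPerChar;
            System.out.printf("~%.1f GB of prefetch buffers%n", bytes / 1e9); // ~7.2 GB
        }
    }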

The fix was to determine the maximum actual column size, then rewrite the query to truncate at that size, e.g. select substr(column_name, 0, 50) assuming 50 is the max, and also to profile and only use as high a setFetchSize as actually made a significant speed improvement.
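A minimal sketch of that workaround (the connection URL, credentials, and my_table/column_name are placeholders; note Oracle treats a substr start position of 0 the same as 1, so 1 is used here):

    import java.sql.*;

    public class TruncatedScan {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/service", "user", "pass")) {
                // Truncating the declared VARCHAR2(4000) to the real max of 50
                // lets ojdbc size its per-row buffers at 50 chars instead of 4000.
                String sql = "select substr(column_name, 1, 50) as column_name from my_table";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setFetchSize(400); // only as high as profiling justifies
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            String value = rs.getString("column_name");
                            // ... process value ...
                        }
                    }
                }
            }
        }
    }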

Other things you can do: decrease the number of prefetch rows, increase the -Xmx parameter, and select only the columns you actually need.
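For the first of those, the prefetch default can also be lowered for the whole connection via Oracle's documented defaultRowPrefetch connection property; a sketch (URL and credentials are placeholders):

    import java.sql.*;
    import java.util.Properties;

    public class LowPrefetchConnection {
        public static void main(String[] args) throws SQLException {
            Properties props = new Properties();
            props.setProperty("user", "user");
            props.setProperty("password", "pass");
            // Oracle JDBC property: rows prefetched per round trip (driver default is 10).
            // Statement.setFetchSize can still override this per statement.
            props.setProperty("defaultRowPrefetch", "10");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/service", props)) {
                // ... run queries; pair this with a bigger heap (java -Xmx4g ...)
                // and a select list that names only the columns you need.
            }
        }
    }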

Once we were able to use a prefetch of at least 400 on all queries [make sure to profile to see what numbers are good for you; with high latency we saw improvements up to a prefetch size of 3-4K], performance improved dramatically.
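A crude way to find that number is to time the same scan at a few candidate fetch sizes; a sketch (the candidate sizes are illustrative, not recommendations):

    import java.sql.*;

    public class FetchSizeProfiler {
        // Time a full scan of the same query at several candidate fetch sizes.
        static void profile(Connection conn, String sql) throws SQLException {
            for (int fs : new int[] {10, 100, 400, 1000, 3000}) {
                long start = System.nanoTime();
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setFetchSize(fs);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rs.getString(1); // touch the data so it is actually read
                        }
                    }
                }
                System.out.printf("fetchSize=%d: %d ms%n",
                        fs, (System.nanoTime() - start) / 1_000_000);
            }
        }
    }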

I suppose if you wanted to be really aggressive about sparse "really long" rows, you could re-query for the full value when you run into one of these [rare] rows, as sketched below.
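A sketch of that idea, assuming the table has a primary key id and 50 is the cutoff (all table and column names are placeholders):

    import java.sql.*;

    public class LongRowRequery {
        // Scan with truncated values plus the true length; re-query the full
        // value only for the rare rows that exceed the cutoff.
        static void scan(Connection conn) throws SQLException {
            String scanSql = "select id, substr(column_name, 1, 50) val,"
                    + " length(column_name) len from my_table";
            String fullSql = "select column_name from my_table where id = ?";
            try (PreparedStatement ps = conn.prepareStatement(scanSql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String value = rs.getString("val");
                    if (rs.getInt("len") > 50) { // rare long row: fetch it in full
                        try (PreparedStatement full = conn.prepareStatement(fullSql)) {
                            full.setLong(1, rs.getLong("id"));
                            try (ResultSet rs2 = full.executeQuery()) {
                                if (rs2.next()) value = rs2.getString(1);
                            }
                        }
                    }
                    // ... process value ...
                }
            }
        }
    }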

Details ad nauseam here
