Java HeapDumps indicate that used Heap Size is 30% smaller than actual heap definition after OutOfMemory exceptions


Question


I have a handful of heap dumps that I am analyzing after the JVM has thrown OutOfMemory exceptions. I'm using Hotspot JDK 1.7 (64bit) on a Windows 2008R2 platform. The application server is a JBoss 4.2.1GA, launched via the Tanuki Java Service Wrapper.

It is launched with the following arguments:

wrapper.java.additional.2=-XX:MaxPermSize=256m
wrapper.java.initmemory=1498
wrapper.java.maxmemory=3000
wrapper.java.additional.19=-XX:+HeapDumpOnOutOfMemoryError

which translate to:

-Xms1498m -Xmx3000m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError

There are some other GC & JMX configuration parameters as well.
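To double-check that the wrapper actually hands those values to the JVM, the effective heap limits can be read in-process with the standard MemoryMXBean; here is a minimal sketch (class name and output format are illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSettingsCheck {
    public static void main(String[] args) {
        // Heap limits as the JVM sees them: init roughly corresponds to -Xms,
        // max to -Xmx, and committed is what is currently reserved from the OS.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("init=%d MB, committed=%d MB, max=%d MB%n",
                heap.getInit() / (1024 * 1024),
                heap.getCommitted() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}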

My issue is that when I analyze a heap dump created due to an OutOfMemoryError using the Eclipse Memory Analyzer (MAT), it invariably shows me heap sizes of 2.3G or 2.4G. I have already enabled the option in MAT to Keep Unreachable Objects, so I don't believe that MAT is trimming the heap.

java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded

or

java.lang.OutOfMemoryError: Java heap space

Summary in MAT:

Size: 2.3 GB Classes: 21.7k Objects: 47.6m Class Loader: 5.2k

My actual heap dump files are roughly 3300MB, so they are in line with my 3000m max heap size setting.

So where is the missing 500-600M of memory in MAT? Why does MAT only show my heap size as 2.4G?

Other posts on SO tend to indicate that it is the JVM doing some GC prior to dumping the heap, but if the missing 500M is due to a GC, why is it even throwing the OOM in the first place? If a GC could actually clear up 500M (or nearly 25% of my heap), is the JVM really out of memory?

Are there ways to tune the heap dumps so I can get a full/complete picture of the heap (including the missing 500M)?

If not, I'm really struggling to figure out how/why I'm encountering these OOMs in the first place.

As requested by someone, I am attaching the output of a jstat -gc <PID> 1000 from a live node: http://pastebin.com/07KMG1tr.
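The per-generation numbers that jstat shows can also be sampled from inside the process via the MemoryPoolMXBean API, which makes it easy to log eden/survivor/old-gen usage around the time of the failure; a minimal sketch (class name is illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class HeapPoolReport {
    public static void main(String[] args) {
        // Prints one line per memory pool (e.g. eden, survivor, old gen, perm gen).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            long max = u.getMax(); // -1 when the pool has no defined maximum
            System.out.printf("%-30s used=%d MB, committed=%d MB, max=%s%n",
                    pool.getName(),
                    u.getUsed() / (1024 * 1024),
                    u.getCommitted() / (1024 * 1024),
                    max < 0 ? "undefined" : (max / (1024 * 1024)) + " MB");
        }
    }
}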


Answer 1:


Which GC are you using? You are probably missing Eden; try jstat, the Java Virtual Machine Statistics Monitoring Tool.




Answer 2:


java.lang.OutOfMemoryError: GC overhead limit exceeded

This does not necessarily mean your heap is full; see this Q&A.

java.lang.OutOfMemoryError: Java heap space

And this does not mean that your heap has 0 bytes left; it means that an allocation request could not be satisfied. If something tries to allocate 600MB and only 500MB are left, that will throw an OOME.
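For example, a single oversized request fails even when most of the heap is still free; a minimal sketch (class name is illustrative; run with a small heap such as -Xmx64m to reproduce quickly):

public class OversizedAllocation {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("max=%d MB, used=%d MB before allocating%n",
                rt.maxMemory() / (1024 * 1024),
                (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
        try {
            // Ask for one contiguous block larger than the whole heap.
            byte[] block = new byte[256 * 1024 * 1024];
            System.out.println("allocated " + block.length + " bytes");
        } catch (OutOfMemoryError e) {
            // "Java heap space" means the request could not be satisfied,
            // not that zero bytes remained.
            System.out.println("OOME while most of the heap was unused: " + e.getMessage());
        }
    }
}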

If not, I'm really struggling to figure out how/why I'm encountering these OOMs in the first place.

Obtaining a stack trace to see whether the call site doing the allocation in question does anything suspicious would be a start. Or you could just try bumping the heap size and see if the problem goes away.
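If the error escapes a thread rather than being caught and wrapped (as in the RuntimeException shown above), a default uncaught-exception handler installed at startup will record the allocating call site; a minimal sketch that runs on JDK 1.7 (class and method names are illustrative):

public final class OomeLogger {
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread thread, Throwable error) {
                // Log every uncaught throwable; flag OutOfMemoryError explicitly
                // so the allocating stack trace is easy to find in the logs.
                if (error instanceof OutOfMemoryError) {
                    System.err.println("OutOfMemoryError on thread " + thread.getName());
                }
                error.printStackTrace(System.err);
            }
        });
    }
}

Errors that are caught and rethrown wrapped, as in the first message above, still have to be logged at the catch site.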



Source: https://stackoverflow.com/questions/37660322/java-heapdumps-indicate-that-used-heap-size-is-30-smaller-than-actual-heap-defi
