hprof

How do I extract a timestamp from the heap dump?

荒凉一梦 submitted on 2021-01-28 01:08:21
Question: Unfortunately, I forgot to record the time that I took the heap dump. I hope that somewhere in the heap, the standard library caches something like System.currentTimeMillis(). Unfortunately, I do not have any business objects that cache it. One difficult option I have is to browse all the threads and see if their local variables stored a timestamp somewhere. However, this is not a technique I can apply to all heap dumps. I looked at java.lang.System in OpenJDK and it doesn't look like we…
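For binary .hprof files there is a more direct route than spelunking through thread locals: the HPROF file header itself records the dump time. Right after the NUL-terminated format string ("JAVA PROFILE 1.0.x") and a four-byte identifier size come two four-byte words holding milliseconds since the epoch. A minimal sketch that reads them (the class name HprofTimestamp is mine; this assumes a binary dump such as one written by jmap, not the ASCII java.hprof.txt):

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.util.Date;

    public class HprofTimestamp {
        public static void main(String[] args) throws Exception {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                while (in.readByte() != 0) { }  // skip "JAVA PROFILE 1.0.x" plus its NUL terminator
                in.readInt();                   // identifier size (4 or 8), not needed here
                long millis = in.readLong();    // high word then low word of ms since the epoch
                System.out.println(new Date(millis));
            }
        }
    }

The header timestamp should correspond to when the dump was written, which is exactly what the question is after.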

react-native: can't push to git because of hprof file

耗尽温柔 submitted on 2020-05-27 05:11:53
Question: I would like to push my project to GitHub; however, I just noticed there is a file called java_pid14920.hprof inside the android folder, about 300 MB, which trips GitHub's size check: remote: error: File android/java_pid14920.hprof is 301.75 MB; this exceeds GitHub's file size limit of 100.00 MB. I wonder, is it safe to delete this file?
Answer 1: This sounds like a heap profiling output file, which you probably don't want in your repository at all. You'll want to delete it from the entire history and probably add an…
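A hedged sketch of the cleanup the answer describes, using plain git (BFG Repo-Cleaner is a popular, faster alternative); note that rewriting history affects everyone who has already cloned the repository:

    rm android/java_pid14920.hprof
    echo "*.hprof" >> .gitignore

    # purge the blob from every commit already recorded (rewrites history)
    git filter-branch --index-filter \
        "git rm --cached --ignore-unmatch android/java_pid14920.hprof" HEAD
    git push --force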

jvisualvm: Stuck on “Loading Heap Dump” screen

≡放荡痞女 submitted on 2020-01-29 21:24:25
Question: I created a heap dump file with hprof using this command:

    java -agentlib:hprof -cp "..\..\jars\trove.jar;.\bin" com.mysite.MyApp

This successfully created the file "java.hprof.txt", which was about 5 MB. I then opened jvisualvm to view this file and loaded it in, but VisualVM appears to be stuck on the loading screen; it has been up for about 10 minutes now. Did I miss a step? Should I have used different options on the command line with hprof? How can I read this heap dump…
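One likely explanation, offered as an assumption rather than a confirmed diagnosis: without options, -agentlib:hprof writes ASCII text (hence java.hprof.txt), while VisualVM's heap-dump loader expects the binary format. Requesting a binary heap dump should produce a file VisualVM can open:

    java -agentlib:hprof=heap=dump,format=b,file=java.hprof -cp "..\..\jars\trove.jar;.\bin" com.mysite.MyApp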

Hadoop HPROF profiling: no CPU SAMPLES written

自古美人都是妖i submitted on 2020-01-04 03:49:26
Question: I want to use HPROF to profile my Hadoop job. The problem is that I get TRACES, but there are no CPU SAMPLES in the profile.out file. The code that I am using inside my run method is:

    /** Get configuration */
    Configuration conf = getConf();
    conf.set("textinputformat.record.delimiter", "\n\n");
    conf.setStrings("args", args);

    /** JVM PROFILING */
    conf.setBoolean("mapreduce.task.profile", true);
    conf.set("mapreduce.task.profile.params", "-agentlib:hprof=cpu=samples,"
        + "heap=sites,depth=6,force=n…
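For comparison, here is a complete profiling setup of the same shape, sketched with illustrative values (the file=%s placeholder is filled in by Hadoop; the task ranges are mine). One thing worth knowing: hprof only writes the CPU SAMPLES section when the profiled JVM exits normally, so task JVMs that are killed or reused can leave files containing TRACES but no samples.

    conf.setBoolean("mapreduce.task.profile", true);
    conf.set("mapreduce.task.profile.params",
        "-agentlib:hprof=cpu=samples,heap=sites,depth=6,"
        + "force=n,thread=y,verbose=n,file=%s");
    conf.set("mapreduce.task.profile.maps", "0-2");     // which map tasks to profile (illustrative)
    conf.set("mapreduce.task.profile.reduces", "0-2");  // which reduce tasks to profile (illustrative)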

Java Mission Control says “few profiling samples”, why, and what are my other options?

对着背影说爱祢 submitted on 2019-12-24 07:35:10
Question: I'm profiling a Java application using Java Mission Control, and the main page of the flight recording says that "This recording contains few profiling samples even though CPU load is high. The profiling data is thus likely not relevant." It seems to be telling the truth: I asked it to sample every 10 ms for 3 minutes, which should be 18,000 samples, but I only see 996 samples. It goes on to explain, "The profiling data is thus likely not relevant. This might be because the application…
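One plausible cause, stated as a hypothesis: JFR's method sampler only records threads executing Java code, so time spent blocked, in native code, or in GC yields no samples even while CPU load is high. For reference, a recording with the higher-resolution "profile" settings can be started from the command line (JDK 7u40/8-era flags shown; myapp.jar is a placeholder):

    java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
         -XX:StartFlightRecording=duration=180s,settings=profile,filename=myapp.jfr \
         -jar myapp.jar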

Android hprof-conv giving me Error: expecting 1.0.3

北城以北 submitted on 2019-12-24 00:54:29
Question: I've used the Dump HPROF File option in Eclipse's DDMS and made my hprof file, called in.hprof, but when I try to run hprof-conv in.hprof out.hprof from the command line, it gives me the error "Error: expecting 1.0.3". Any ideas?
Answer 1: Never found out why it was giving me the error, but instead of trying to convert it and open it in the external MAT, I ended up using the built-in tool in Eclipse, which worked perfectly and is much simpler. One click instead of exporting, converting, and opening…
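For reference, the conversion step the question is attempting looks like this; hprof-conv ships in the Android SDK's platform-tools directory (the SDK path below is illustrative):

    # convert Android's Dalvik hprof format to the standard Java hprof format
    ~/Android/Sdk/platform-tools/hprof-conv in.hprof out.hprof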

HPjmeter-like graphical tool to view -agentlib:hprof profiling output

只愿长相守 submitted on 2019-12-17 23:14:38
Question: What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:

    -agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt

This generates output in the hprof ("JAVA PROFILE 1.0.1") format. I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason, the files generated by the current version of the Sun JVM fail to load in the current version of HPjmeter:…
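Until a viewer cooperates, the ASCII output can be inspected directly: with cpu=times, the ranked per-method table sits between CPU TIME markers in the text file. A small shell sketch (file name taken from the flag above):

    # print the top of the ranked CPU TIME table from the ASCII hprof output
    awk '/CPU TIME \(ms\) BEGIN/,/CPU TIME \(ms\) END/' someFile.hprof.txt | head -25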

jvisualvm: Stuck on “Loading Heap Dump…” screen

前提是你 submitted on 2019-12-10 17:36:30
Question: I am using jdk64 and my Java version is "1.6.0_24". My Tomcat is running with -Xmx7196m, and jvisualvm is running with -J-Xms2048m -J-Xmx3072m. I took a heap dump of my Tomcat java process, and the size of my .hprof file is around 5.5 GB. When I try to open this heap dump, it just gets stuck on the Loading Heap Dump... screen. I also looked at the heap consumption of VisualVM while it is trying to open the heap dump, but that only goes to around 500 MB. NOTE: I did look at jvisualvm: Stuck on "Loading Heap…
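As a rule of thumb (an assumption, not something stated in the thread), VisualVM needs a heap roughly comparable to the dump size in order to index a large .hprof, so 3 GB for a 5.5 GB dump is likely too little. The limit can be raised per launch, or permanently via default_options in etc/visualvm.conf:

    # give VisualVM enough heap to index a ~5.5 GB dump (the 7g figure is a guess)
    jvisualvm -J-Xms2048m -J-Xmx7g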

Why doesn't the -baseline option of jhat work?

℡╲_俬逩灬. submitted on 2019-12-10 16:01:34
Question: How come every object appears to be marked new, instead of just the objects that are in the second snapshot but not in my baseline snapshot? Looking around online, I see some suggestions that I need to use hprof instead of jmap to make my memory dumps, but it appears that hprof generates dumps in exactly the same format. This is JDK 1.6.0_14; I have tried on both Windows and UNIX.
Answer 1: jhat -baseline indeed won't work with dumps produced by jmap. I'm not certain, but I believe this is because…
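The workflow the answer implies, sketched with illustrative file names: take both snapshots with the hprof agent rather than jmap, then diff them (my reading, in line with the answer's own uncertainty, is that jhat's diff depends on metadata the agent records but jmap does not):

    # first snapshot, taken with the hprof agent (the second is produced the same way)
    java -agentlib:hprof=heap=dump,format=b,file=snap1.hprof MyApp

    # objects present in snap2.hprof but absent from snap1.hprof should be marked new
    jhat -baseline snap1.hprof snap2.hprof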

heapdump size vs hprof size

纵饮孤独 submitted on 2019-12-10 10:56:56
Question: I recently made a heap dump in the hprof format while my JBoss server was running with an -Xms of 4096m, an -Xmx of 4096m, and a PermSize of 512m. The generated hprof file is over 5 GB. When I load the heap dump in VisualVM, MAT, or YourKit, I only see approximately 1 GB of total bytes. I've tried changing the reachability scope in YourKit, but it does not show more than 1 GB. Any idea what causes this big difference between the file size and the displayed heap-dump size? P.S.: I'm using jdk1.6.0_23
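One common explanation, offered as a hypothesis: the on-disk file can contain unreachable (not yet collected) objects plus class metadata, stack traces, and per-record overhead, while analyzers show only reachable objects by default. Whether garbage is included can be controlled when the dump is taken:

    # only objects reachable from GC roots (typically much smaller)
    jmap -dump:live,format=b,file=heap-live.hprof <pid>

    # everything in the heap, including unreachable objects
    jmap -dump:format=b,file=heap-all.hprof <pid>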