heap-memory

Why do -Xmx and Runtime.maxMemory not agree

痞子三分冷 submitted on 2019-11-27 20:30:45
Question: When you add -Xmx????m to the command line, the JVM gives you a heap which is close to this value but can be out by up to 14%. The JVM can give you a figure much closer to what you want, but only through trial and error. System.out.println(Runtime.getRuntime().maxMemory()); prints:
-Xmx1000m -> 932184064
-Xmx1024m, -Xmx1g -> 954728448
-Xmx1072m -> 999292928
-Xmx1073m -> 1001390080
I am running HotSpot Java 8 update 5. Clearly, the heap can be something just above 1000000000, but why is this
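
A common explanation, not included in the truncated excerpt, is that Runtime.maxMemory() reports the maximum heap minus one survivor space (only one survivor space is usable at a time). A minimal sketch, with the class name HeapPools chosen only for illustration, that lists the per-pool maximums so they can be compared against the -Xmx value:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapPools {
    public static void main(String[] args) {
        // Runtime.maxMemory() is typically the maximum heap minus one survivor
        // space, which is why it comes out smaller than the -Xmx value.
        System.out.println("Runtime.maxMemory(): " + Runtime.getRuntime().maxMemory());

        // List the individual heap pools (eden, survivor, old gen) to see
        // where the "missing" bytes go.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                System.out.println(pool.getName() + " max: " + pool.getUsage().getMax());
            }
        }
    }
}

Running this under different -Xmx settings shows how the survivor-space sizing accounts for the gap between the requested and reported maximum.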

Why is the application dying randomly?

房东的猫 submitted on 2019-11-27 20:25:06
Question: I am developing a music player app. All works fine except that the app dies suddenly. Sometimes this happens when the app starts, sometimes after it has been running for a long time, and sometimes all goes well and the app does not die at all. I checked the log to find out what is causing the app to die and found this:
11-02 16:39:39.293: A/libc(3556): @@@ ABORTING: INVALID HEAP ADDRESS IN dlfree
11-02 16:39:39.293: A/libc(3556): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1)
The full log is given below,

How to detect and remove (during a session) unused @ViewScoped beans that can't be garbage collected

一笑奈何 submitted on 2019-11-27 20:17:13
EDIT: The problem raised by this question is very well explained and confirmed in this article by codebulb.ch, including a comparison between the JSF @ViewScoped, CDI @ViewScoped, and OmniFaces @ViewScoped, and a clear statement that JSF @ViewScoped is 'leaky by design': May 24, 2015, Java EE 7 Bean scopes compared, part 2 of 2. EDIT 2017-12-05: The test case used for this question is still extremely useful; however, the conclusions concerning garbage collection in the original post (and images) were based on JVisualVM, and I have since found they are not valid. Use the NetBeans Profiler
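
For illustration only, and not the question's or the article's actual test case: a JSF @ViewScoped managed bean (the name LeakProbeBean is made up) whose @PreDestroy method logs when the container really releases it. If abandoned views are never destroyed, the log line never appears and the payload stays strongly reachable from the session.

import java.io.Serializable;
import javax.annotation.PreDestroy;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean
@ViewScoped
public class LeakProbeBean implements Serializable {

    // Hold a noticeable amount of memory so leaked instances show up in a profiler.
    private final byte[] payload = new byte[1024 * 1024];

    @PreDestroy
    public void onDestroy() {
        // If old views are never destroyed, this is never logged and the
        // payload remains reachable via the session's stored view state.
        System.out.println("LeakProbeBean destroyed: " + this);
    }
}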

Aggressive garbage collector strategy

时光毁灭记忆、已成空白 submitted on 2019-11-27 20:04:54
Question: I am running an application that creates and forgets large numbers of objects; the number of long-lived objects does grow slowly, but it is very small compared to the short-lived ones. This is a desktop application with high availability requirements; it needs to be running 24 hours per day. Most of the work is done on a single thread, and this thread will use all the CPU it can get its hands on. In the past we have seen the following under heavy load: the used heap space slowly goes up as
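
Not part of the question itself, but a way to quantify the problem before tuning: a minimal sketch (the class name GcMonitor is illustrative) that polls the standard GarbageCollectorMXBeans and reports how many collections have run and how much time they have taken, which helps when comparing collector strategies under this kind of load.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcMonitor {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            // Each bean covers one collector (e.g. the young- and old-generation collectors).
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}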

Programmatically setting max Java heap size

生来就可爱ヽ(ⅴ<●) submitted on 2019-11-27 19:13:36
Question: Is there a way to set the max Java heap size programmatically instead of as a VM argument? Something like: System.getProperties().put("<heap variable>", "1000m"); Answer 1: Not with any HotSpot JVM. The JVM heap parameters can only be specified on the command line, and are then fixed for the lifetime of the JVM. With HotSpot Java implementations, the only way to "change" the heap size of an application is to relaunch it in a new JVM with different command-line parameters. (I vaguely recall that
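
The relaunch approach described in the answer could look roughly like the sketch below; the class name Relauncher and the heap values are illustrative assumptions, not a tested recipe. The program checks whether it already has (approximately) the desired maximum heap and, if not, starts a copy of itself with an explicit -Xmx and lets the original JVM exit.

public class Relauncher {
    // Runtime.maxMemory() reports a bit less than -Xmx, so the threshold is set
    // below the requested 1 GB to avoid relaunching forever.
    private static final long MIN_ACCEPTABLE_HEAP = 900L * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        if (Runtime.getRuntime().maxMemory() < MIN_ACCEPTABLE_HEAP) {
            String javaBin = System.getProperty("java.home") + "/bin/java";
            // Start a new JVM with the desired heap, same classpath, same main class.
            new ProcessBuilder(javaBin, "-Xmx1g",
                    "-cp", System.getProperty("java.class.path"),
                    Relauncher.class.getName())
                    .inheritIO()
                    .start();
            return; // the small-heap JVM exits; the relaunched one does the work
        }
        // ... real application work runs here, in the relaunched JVM ...
        System.out.println("Running with max heap: " + Runtime.getRuntime().maxMemory());
    }
}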

How much memory do golang maps reserve?

天涯浪子 submitted on 2019-11-27 18:44:16
Question: Given a map allocation where the initial space is not specified, for example: foo := make(map[string]int) The documentation suggests that the memory allocation here is implementation-dependent. So (how) can I tell how much memory my implementation is allocating to this map? Answer 1: You may use the Go testing tool to measure the size of arbitrarily complex data structures. This is detailed in this answer: How to get variable memory size of variable in golang? To measure the size of a map created by

java.lang.OutOfMemoryError: requested 1958536 bytes for Chunk::new. Out of swap space

送分小仙女□ submitted on 2019-11-27 18:12:37
Question: We are facing the problem below in our production environment in an unpredictable manner; sometimes the server goes down within a day, sometimes within a week. The exact error dump and the server settings are given below.
JDK: jdk1.6.0_21
Server: Tomcat 7.0.2
OS: Red Hat Enterprise Linux Server release 5.5
In catalina.sh the following setting has been made: JAVA_OPTS="-Xms1024M -Xmx1536M -XX:+HeapDumpOnOutOfMemoryError -XX:+AggressiveOpts -XX:-DisableExplicitGC -XX:AdaptiveSizeThroughPutPolicy=0
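
For context, not stated in the truncated excerpt: "requested ... bytes for Chunk::new. Out of swap space" is a native memory allocation failure inside the JVM (often triggered by the HotSpot JIT compiler), not a Java heap exhaustion, so raising -Xmx does not by itself address it. One adjustment sometimes tried, shown here only as an illustrative sketch with assumed values rather than a recommendation, is to leave more native memory available to the process while keeping a dump for diagnosis:

# Illustrative catalina.sh change only; heap sizes are assumptions, not a fix.
# A smaller -Xmx leaves more native memory for the JIT compiler and thread stacks.
JAVA_OPTS="-Xms512M -Xmx1024M -XX:+HeapDumpOnOutOfMemoryError -XX:-DisableExplicitGC"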

How to avoid MATLAB crash when opening too many figures?

半城伤御伤魂 submitted on 2019-11-27 17:34:54
Question: Sometimes I start a MATLAB script and realize too late that it is going to output way too many figures. Eventually I get an Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space, which can easily be reproduced on my machine using:
for i=1:inf
    figure;
end
I get to around 90 figures before it crashes with the standard setting (Preferences / Java Heap Memory) of 128 MB Java heap, while doubling the heap to 256 MB gives me around 200 figures. Do you see any way to

Jersey service file upload causes OutOfMemoryError

流过昼夜 submitted on 2019-11-27 16:01:57
Question: I'm developing a form submission service with Jersey 2.0. The form includes several text fields and one file field. I need to extract the file, file name, file media type, and file content type and save them in an object store.
@Path("upload")
@Consumes({MediaType.MULTIPART_FORM_DATA})
@Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
public class UploadService {
    @POST
    public BlobDo uploadFile(FormDataMultiPart uploadedBody) {
        String accountSid = uploadedBody.getField("account-sid")
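
A commonly suggested way to avoid buffering the whole upload in the heap, sketched here under the assumption that the jersey-media-multipart module is on the classpath (the objectStore call is a made-up placeholder, not part of the question), is to bind the file part to an InputStream with @FormDataParam and stream it to storage:

import java.io.InputStream;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.glassfish.jersey.media.multipart.FormDataContentDisposition;
import org.glassfish.jersey.media.multipart.FormDataParam;

@Path("upload")
public class StreamingUploadService {

    @POST
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public Response uploadFile(
            @FormDataParam("account-sid") String accountSid,
            @FormDataParam("file") InputStream fileStream,                  // streamed, not buffered
            @FormDataParam("file") FormDataContentDisposition fileDetail) { // carries the file name
        // Hypothetical object-store call: read from the stream in chunks so the
        // whole upload never has to fit in the Java heap at once.
        // objectStore.save(accountSid, fileDetail.getFileName(), fileStream);
        return Response.ok().build();
    }
}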

Why is my Java heap dump size much smaller than used memory?

北城余情 submitted on 2019-11-27 15:09:56
Question: Problem: We are trying to find the culprit of a big memory leak in our web application. We have pretty limited experience with finding a memory leak, but we found out how to make a Java heap dump using jmap and analyze it in Eclipse MAT. However, with our application using 56 of 60 GB of memory, the heap dump is only 16 GB in size and is even smaller in Eclipse MAT. Context: Our server uses WildFly 8.2.0 on Ubuntu 14.04 for our Java application, whose process uses 95% of the available memory. When making
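
For context, not stated in the truncated excerpt: a heap dump contains only Java heap objects (and, with jmap's live option, only reachable ones), while the process footprint also includes permgen/metaspace, code cache, thread stacks, and native allocations. A small sketch (the class name is illustrative) that prints heap versus non-heap usage from inside the JVM, for comparison with the dump size:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsProcess {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        // Only heap objects end up in a heap dump; non-heap areas and native
        // allocations never appear there, which is one reason the dump can be
        // far smaller than the process's memory footprint.
        System.out.println("Heap used:      " + heap.getUsed());
        System.out.println("Heap committed: " + heap.getCommitted());
        System.out.println("Non-heap used:  " + nonHeap.getUsed());
    }
}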