out-of-memory

Tools for OutOfMemoryError: Java heap space analysis

Submitted by 自作多情 on 2019-12-05 22:30:55
I am getting an OutOfMemoryError: Java heap space. Are there any tools I can use to find the root cause?

Add these JVM arguments, which log the garbage collection details to a file:

-Xloggc:gc_memory_logs.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

The logs look like this:

1.703: [GC [PSYoungGen: 132096K->16897K(153600K)] 132096K->16905K(503296K), 0.0171210 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
3.162: [GC [PSYoungGen: 148993K->21488K(153600K)] 149001K->22069K(503296K), 0.0203860 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
4.545: [GC [PSYoungGen: 153584K-
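Beyond the GC log, a heap dump usually pins down which objects are filling the heap. As a general sketch that goes beyond the quoted answer (the dump path and the <pid> placeholder are illustrative), the JVM can be told to write a dump automatically when the OutOfMemoryError occurs, or one can be taken on demand with jmap, and the resulting .hprof file opened in a heap analyzer such as VisualVM or Eclipse MAT:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/dumps

jmap -dump:live,format=b,file=/tmp/dumps/heap.hprof <pid>

Reading the GC log alongside the dump shows whether the old generation keeps growing between collections (a likely leak) or the live set is simply larger than the configured heap.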

Crashlytics isn't reporting any foreground OOMs

Submitted by 你。 on 2019-12-05 22:05:07
I've created OOM crashes by growing an infinitely large NSArray of NSStrings, and I've even tried calling exit(0) just to make it look like an OOM. While these do seem to terminate the app unexpectedly, I don't see any OOMs reported in Crashlytics, and it doesn't call the delegate callback crashlyticsDidDetectReportForLastExecution: on the next run of the app. I'm running the app on a real device (not the simulator), and it reports any other kind of crash or error fine. Does anyone have any idea what the issue might be?

Mike from Fabric here. We chatted over

Memory Efficient Agglomerative Clustering with Linkage in Python

Submitted by 陌路散爱 on 2019-12-05 21:48:28
I want to cluster 2D points (latitude/longitude) on a map. The number of points is 400K, so the input matrix would be 400k x 2. When I run scikit-learn's Agglomerative Clustering I run out of memory, even though my machine has about 500 GB of RAM.

class sklearn.cluster.AgglomerativeClustering(n_clusters=2, affinity='euclidean', memory=Memory(cachedir=None), connectivity=None, n_components=None, compute_full_tree='auto', linkage='ward', pooling_func=<function mean at 0x2b8085912398>)

I also tried the memory=Memory(cachedir) option with no success. Does anybody have a suggestion (another library or change
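The excerpt cuts off before any answer. One common way to cut the memory cost is to pass a sparse connectivity graph, so the algorithm only considers merges between nearby points instead of working from a dense pairwise structure. A minimal sketch, assuming plain Euclidean distance is acceptable for the latitude/longitude range involved; the n_neighbors and n_clusters values are arbitrary and the random array just stands in for the real 400k x 2 matrix:

import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

# Stand-in for the real 400k x 2 array of (latitude, longitude) points.
points = np.random.rand(400_000, 2)

# Restrict merges to each point's nearest neighbours so clustering works on a
# sparse graph rather than on dense pairwise distances over all 400k points.
connectivity = kneighbors_graph(points, n_neighbors=10, include_self=False)

model = AgglomerativeClustering(n_clusters=50, linkage='ward',
                                connectivity=connectivity)
labels = model.fit_predict(points)

With the connectivity constraint the hierarchical merge is computed over the neighbourhood graph, which is what keeps both memory and runtime manageable at this scale.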

Large HPROF file

Submitted by 我是研究僧i on 2019-12-05 21:29:28
I have a very large heap dump (.hprof) file (16 GB). When I try to open it in VisualVM, the VM just hangs. I tried to open it in JProfiler, and JProfiler gave me an Out Of Memory error. Below is what my jprofiler.vmoptions looks like. What would be the ideal configuration for opening the HPROF without issues? I am running on an 8 GB Linux box.

-Xmx1536m
-XX:MaxPermSize=128m
-Xss2m

JProfiler 8.1 will be able to open much larger HPROF files without tuning the -Xmx VM parameter. To get a pre-release, please contact support@ej-technologies.

Source: https://stackoverflow.com/questions
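In the meantime, the only setting in that file that matters for loading a dump is -Xmx, and heap-dump analysis generally needs an analysis heap in the same ballpark as the dump itself, so a 16 GB HPROF is unlikely to open on an 8 GB box whatever the options say. Purely as an illustration (these values are not from the answer), on a machine with enough RAM the vmoptions would be raised along these lines:

-Xmx24g
-Xss2m

Another route under the same constraint is an analyzer that builds on-disk indexes while parsing, such as Eclipse Memory Analyzer (MAT), which can work through dumps larger than its own heap.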

Volley gives me an Out of memory exception after I make a lot of requests with a big amount of data

Submitted by 白昼怎懂夜的黑 on 2019-12-05 18:36:29
I have a ViewPager, and inside every page I have a ListView. Each list shows 10 records fetched from a web service, so the pager makes three web service calls to populate three pages (the current page, the left page and the right page). But after I make a lot of swipes I get this exception:

java.lang.OutOfMemoryError: pthread_create (stack size 16384 bytes) failed: Try again
    at java.lang.VMThread.create(Native Method)
    at java.lang.Thread.start(Thread.java:1029)
    at com.android.volley.RequestQueue.start(RequestQueue.java:142)
    at com.android.volley.toolbox.Volley.newRequestQueue
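The stack trace shows the error coming out of Volley.newRequestQueue, which means a new RequestQueue (and its dispatcher threads) is being started for every call until the process can no longer create threads. The excerpt ends before any answer; a common remedy is to create the queue once and reuse it everywhere. A minimal sketch (the class name is illustrative):

import android.content.Context;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

public class VolleySingleton {
    private static VolleySingleton instance;
    private final RequestQueue requestQueue;

    private VolleySingleton(Context context) {
        // Use the application context so the queue outlives any single Activity or page.
        requestQueue = Volley.newRequestQueue(context.getApplicationContext());
    }

    public static synchronized VolleySingleton getInstance(Context context) {
        if (instance == null) {
            instance = new VolleySingleton(context);
        }
        return instance;
    }

    public <T> void add(Request<T> request) {
        requestQueue.add(request);
    }
}

Each page then calls VolleySingleton.getInstance(context).add(request) instead of Volley.newRequestQueue(...), so only one set of Volley threads ever exists no matter how many swipes happen.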

cudaMalloc always gives out of memory

Submitted by China☆狼群 on 2019-12-05 18:25:41
I'm facing a simple problem: all my calls to cudaMalloc fail with an out of memory error, even if I'm allocating just a single byte. The CUDA device is available and there is also a lot of memory available (both checked with the corresponding calls). Any idea what the problem could be?

Please try to call cudaSetDevice(), then cudaDeviceSynchronize() and then cudaThreadSynchronize() at the beginning of the code itself. Use cudaSetDevice(0) if there is only one device; by default the CUDA runtime will initialize device 0.

cudaSetDevice(0);
cudaDeviceSynchronize();
cudaThreadSynchronize();
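If the failures persist, a next diagnostic step beyond the quoted advice is to check the return code of every runtime call and ask the runtime how much memory it actually sees, since an earlier asynchronous error can surface later as a misleading "out of memory" from cudaMalloc. A standalone sketch (not the asker's code):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Select the device explicitly so the context is created before any allocation.
    cudaError_t err = cudaSetDevice(0);
    if (err != cudaSuccess) {
        printf("cudaSetDevice failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Ask the runtime how much memory it believes is free on this device.
    size_t freeBytes = 0, totalBytes = 0;
    err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("free: %zu MB, total: %zu MB\n", freeBytes >> 20, totalBytes >> 20);

    // The single-byte allocation from the question, with its error checked.
    void *devPtr = NULL;
    err = cudaMalloc(&devPtr, 1);
    if (err != cudaSuccess) {
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaFree(devPtr);
    return 0;
}

If even this minimal program reports almost no free memory, the memory is likely being held by another process rather than by the failing application.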

Garbage collection runs too late - causes OutOfMemory exceptions

Submitted by 旧时模样 on 2019-12-05 18:05:34
I was wondering if anyone could shed some light on this. I have an application which has a large memory footprint (and a lot of memory churn). There aren't any memory leaks, and GCs tend to do a good job of freeing up resources. Occasionally, however, a GC does not happen 'on time', causing an out of memory exception. I've used the RedGate profiler, which is very good: the application shows a typical 'sawtooth' pattern, and the OOMs happen at the top of the sawtooth. Unfortunately the profiler can't be used (AFAIK) to identify sources of memory churn. Is

SplashScreen using PNG image leads to Android.Views.InflateException followed by OutOfMemory

Submitted by 可紊 on 2019-12-05 17:55:05
I watched the Google I/O 2011 conference talk and read almost every post about OutOfMemoryError and InflateException, but no luck; I cannot find any answer that solves my problem. How can I properly clear the memory used by a layout containing a background image? I feel like the InflateException followed by OutOfMemory are related, because that background image is not cleared properly. So I'm getting:

Android.Views.InflateException: Binary XML file line #24: Error inflating class

followed by:

Java.Lang.OutOfMemoryError

which I'm pretty sure is caused by my background image. I simplified my code to narrow the
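The excerpt is cut off before any answer. A common way to keep a large PNG background from exhausting the heap, sketched here in plain Android Java even though the exception names suggest a Xamarin.Android project (the resource and view identifiers are made up), is to decode the image at a reduced sample size and release it explicitly when the splash screen is done:

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.widget.ImageView;

public class SplashActivity extends Activity {
    private Bitmap background;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The layout deliberately has no android:background in XML; the image is set in code.
        setContentView(R.layout.splash);

        // Decode the PNG at a quarter of its width and height instead of full resolution.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 4;
        background = BitmapFactory.decodeResource(getResources(), R.drawable.splash_bg, opts);

        ImageView view = (ImageView) findViewById(R.id.splash_image);
        view.setImageBitmap(background);
    }

    @Override
    protected void onDestroy() {
        // Drop every reference to the pixels so the memory can actually be reclaimed.
        ImageView view = (ImageView) findViewById(R.id.splash_image);
        view.setImageDrawable(null);
        if (background != null) {
            background.recycle();
            background = null;
        }
        super.onDestroy();
    }
}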

Spark clean up shuffle spilled to disk

Submitted by 时光毁灭记忆、已成空白 on 2019-12-05 17:30:55
I have a looping operation which generates some RDDs, does a repartition, then an aggregateByKey operation. After the loop runs once, it computes a final RDD, which is cached and checkpointed and also used as the initial RDD for the next loop. These RDDs are quite large and generate lots of intermediate shuffle blocks before arriving at the final RDD for every iteration. I am compressing my shuffles and allowing shuffles to spill to disk. I notice on my worker machines that the working directory where the shuffle files are stored is not being cleaned up, so eventually I run out of disk space.
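The excerpt ends before any answer. For context, Spark's ContextCleaner removes shuffle files only after the driver garbage-collects the RDDs and shuffle dependencies that reference them, so in an iterative job it helps to checkpoint the result (truncating the lineage) and to explicitly unpersist and drop the previous iteration's RDD. A PySpark sketch of that loop shape (the data, sizes and checkpoint path are illustrative, not the asker's code):

from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="iterative-shuffle-cleanup")
sc.setCheckpointDir("hdfs:///tmp/checkpoints")  # illustrative path

current = sc.parallelize(range(1000000)).map(lambda x: (x % 1000, x))
current.persist(StorageLevel.MEMORY_AND_DISK)

for i in range(10):
    result = (current.repartition(200)
                     .aggregateByKey(0, lambda acc, v: acc + v, lambda a, b: a + b))
    result.persist(StorageLevel.MEMORY_AND_DISK)
    result.checkpoint()   # lineage is truncated once the checkpoint is materialised
    result.count()        # force materialisation so the checkpoint actually happens

    current.unpersist()   # release the previous iteration's blocks
    current = result      # dropping the old reference lets the driver GC it, which is
                          # what allows ContextCleaner to delete its shuffle files

sc.stop()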

Tensorflow: ran out of memory trying to allocate 3.90GiB. The caller indicates that this is not a failure

Submitted by 江枫思渺然 on 2019-12-05 17:18:33
Question: There is a message that I don't understand:

Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.90GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.

What does this sentence mean? I have read the source code, but I can't understand it because of my limited ability. The memory size of the GPU is 6 GB, and the memory use reported by tfprof analysis is about 14 GB, which is beyond the memory size of the GPU. The
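The excerpt cuts off before any answer. For what it is worth, this message is a warning from TensorFlow's BFC allocator rather than a hard error: the request could not be served from the reserved GPU pool, and the caller (often a cuDNN algorithm chooser or a scratch-space allocation) can fall back to a less memory-hungry alternative, which is why the text only hints at performance gains if more memory were available. When allocation pressure does become a real problem, the usual responses are a smaller batch size or letting the allocator grow on demand; a TF 1.x-style sketch matching the era of the question (the memory fraction is illustrative):

import tensorflow as tf

# Let the GPU allocator grow on demand instead of reserving nearly all of the
# 6 GB card up front, and optionally cap the fraction it may claim.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.9

with tf.Session(config=config) as sess:
    pass  # build and run the graph here as usual

Depending on how it is aggregated, a tfprof total can also exceed what is ever resident at one time, since memory is reused between ops; it is the peak resident usage that has to fit in the 6 GB card.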