out-of-memory

SBT runs out of memory

喜你入骨 submitted on 2019-12-18 07:45:15
Question: I am using SBT 0.12.3 to test some code, and I often get this error message while testing interactively with the ~test command:

    8. Waiting for source changes... (press enter to interrupt)
    [info] Compiling 1 Scala source to C:\Users\t\scala-projects\scala test\target\scala-2.10\classes...
    sbt appears to be exiting abnormally.
      The log file for this session is at C:\Users\t\AppData\Local\Temp\sbt5663259053150896045.log
    java.lang.OutOfMemoryError: PermGen space
      at java.util.concurrent.FutureTask
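The trace points at the permanent generation rather than the heap, so the usual remedy is to raise -XX:MaxPermSize for the JVM that runs SBT (for example via the SBT_OPTS environment variable, depending on how sbt is launched). If it is unclear how much PermGen headroom the running JVM actually has, a small diagnostic along these lines can report it; this sketch is not from the question, and the pool-name match is an assumption that holds for the HotSpot collectors shipped with Java 7:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    // Diagnostic sketch: print usage and limit of the permanent-generation pool
    // so you can see how close the JVM is to the PermGen ceiling.
    public class PermGenUsage {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Perm Gen")) {   // e.g. "PS Perm Gen" on HotSpot
                    System.out.printf("%s: used=%d KB, max=%d KB%n",
                            pool.getName(),
                            pool.getUsage().getUsed() / 1024,
                            pool.getUsage().getMax() / 1024);
                }
            }
        }
    }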

Memory Leaks & OutOfMemoryError

大城市里の小女人 submitted on 2019-12-18 07:09:24
Question: I am trying to find out why my app is crashing with this fatal exception:

    Fatal Exception: java.lang.OutOfMemoryError: Failed to allocate a 128887990 byte allocation with 16777216 free bytes and 76MB until OOM
      at java.lang.AbstractStringBuilder.enlargeBuffer(AbstractStringBuilder.java:95)
      at java.lang.AbstractStringBuilder.append0(AbstractStringBuilder.java:146)
      at java.lang.StringBuilder.append(StringBuilder.java:216)

I have read several posts, but they all point towards a bitmap image, and I am not using any bitmaps
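The trace shows the failure inside StringBuilder.append(), which suggests something is accumulating roughly 120 MB of text in a single buffer rather than anything bitmap-related. A common way to avoid that is to process the data as a stream instead of building one giant String; the sketch below is illustrative (the StreamingReader name and handle() callback are invented for the example), not code from the question:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    // Sketch: consume a large response line by line so no single StringBuilder
    // ever has to hold the whole payload in memory.
    public class StreamingReader {

        static void process(InputStream in) throws IOException {
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    handle(line); // deal with each chunk immediately; nothing grows unbounded
                }
            }
        }

        static void handle(String line) {
            // placeholder for the app's per-line work (parse, write to a file/DB, etc.)
        }
    }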

Are there any memory restrictions on an ASP.Net application?

[亡魂溺海] submitted on 2019-12-18 07:04:50
Question: I have an ASP.Net MVC application that allows users to upload images. When I try to upload a really large file (400MB) I get an error. I assumed that my home-brewed image processing code was very inefficient, so I decided to try using a third-party library to handle the image processing parts. Because I'm using TDD, I wanted to first write a test that fails. But when I test the controller action with the same large file, it is able to do all the image processing without any trouble. The

Prevent OutOfMemory when using java.nio.MappedByteBuffer

自古美人都是妖i submitted on 2019-12-18 06:39:10
Question: Consider an application that creates 5-6 threads; each thread, in a loop, allocates a MappedByteBuffer with a 5 MB page size:

    MappedByteBuffer b = ch.map(FileChannel.MapMode.READ_ONLY, r, 1024*1024*5);

Sooner or later, when the application works with big files, an OOM is thrown:

    java.io.IOException: Map failed
      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
    Caused by: java.lang.OutOfMemoryError: Map failed
      at sun.nio.ch.FileChannelImpl.map0(Native Method)
      at sun.nio.ch.FileChannelImpl.map
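"Map failed" usually means the process has run out of virtual address space (or of memory-map entries) because old mappings are only released when their MappedByteBuffer objects are garbage collected. One commonly suggested mitigation is to drop references promptly and, if map() still fails, force a collection and retry once. The helper below is a sketch of that idea, not code from the question; MappingHelper and mapWithRetry are invented names:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Sketch: retry a failed map() once after encouraging the GC to reclaim
    // unreferenced MappedByteBuffers (their native mappings are freed on collection).
    final class MappingHelper {

        static MappedByteBuffer mapWithRetry(FileChannel ch, long pos, long size) throws IOException {
            try {
                return ch.map(FileChannel.MapMode.READ_ONLY, pos, size);
            } catch (IOException | OutOfMemoryError e) {
                System.gc();                   // give unreachable mappings a chance to be released
                try {
                    Thread.sleep(100);         // small pause so the cleaner can run
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
                return ch.map(FileChannel.MapMode.READ_ONLY, pos, size); // second and last attempt
            }
        }
    }

Mapping fewer or smaller regions at a time, or reusing one mapping per thread, reduces how often this situation arises in the first place.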

MemoryError using json.dumps()

馋奶兔 submitted on 2019-12-18 05:10:25
Question: I would like to know which of json.dump() or json.dumps() is more efficient when it comes to encoding a large array to JSON format. Can you please show me an example of using json.dump()? I am making a Python CGI script that gets a large amount of data from a MySQL database using the ORM SQLAlchemy, and after some user-triggered processing, I store the final output in an array that I finally convert to JSON. But when converting to JSON with:

    print json.dumps({'success': True, 'data

Java OutOfMemoryError with StringBuilder

▼魔方 西西 submitted on 2019-12-18 02:49:48
Question: I'm getting a Java OutOfMemoryError when I call this method. I'm using it in a loop to parse many large files in sequence. My guess is that result.toString() is not getting garbage collected properly during the loop; if so, how should I fix it?

    private String matchHelper(String buffer, String regex, String method){
        Pattern abbrev_p = Pattern.compile(regex); // norms U.S.A., B.S., PH.D, PH.D.
        Matcher abbrev_matcher = abbrev_p.matcher(buffer);
        StringBuffer result = new StringBuffer();
        while
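The loop in the excerpt is cut off, but the Pattern/Matcher/StringBuffer setup strongly suggests the standard appendReplacement()/appendTail() idiom. Below is a hypothetical reconstruction of that idiom (the class name and the plain replacement parameter are invented), shown mainly to make the memory behaviour visible: while it runs, the input String and the StringBuffer are both live, so peak usage is roughly twice the file size, and the toString() copy only becomes collectable once the caller stops referencing it.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical sketch of the appendReplacement/appendTail idiom the truncated
    // method appears to use. Nothing here leaks by itself: result.toString() is
    // eligible for GC as soon as the caller drops the returned String.
    final class MatchHelperSketch {

        static String replaceAll(String buffer, String regex, String replacement) {
            Pattern abbrevPattern = Pattern.compile(regex);
            Matcher abbrevMatcher = abbrevPattern.matcher(buffer);
            StringBuffer result = new StringBuffer();
            while (abbrevMatcher.find()) {
                abbrevMatcher.appendReplacement(result, replacement); // copy input up to and including this match
            }
            abbrevMatcher.appendTail(result);                         // copy the remainder of the input
            return result.toString();
        }
    }

If the files are large, reading and matching them as streams (for example line by line) rather than as whole Strings keeps only one chunk in memory at a time.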

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

China☆狼群 submitted on 2019-12-17 23:15:22
Question: I have written some code and have run it many times, but suddenly I got an OutOfMemoryError:

    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
      at javax.media.j3d.BoundingBox.<init>(BoundingBox.java:86)
      at javax.media.j3d.NodeRetained.<init>(NodeRetained.java:198)
      at javax.media.j3d.LeafRetained.<init>(LeafRetained.java:40)
      at javax.media.j3d.LightRetained.<init>(LightRetained.java:44)
      at javax.media.j3d.DirectionalLightRetained.<init>(DirectionalLightRetained.java:50)
      at javax.media
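"Java heap space" means the heap limit was reached, either because the limit is simply too small for the Java 3D scene graph being built or because objects are being retained longer than intended. A first step is to check what ceiling the JVM is actually running with (it is raised with the -Xmx flag, e.g. java -Xmx1024m ...); the snippet below is a quick diagnostic for that, not code from the question:

    // Quick diagnostic: report the configured heap ceiling and current usage so it is
    // clear whether a larger -Xmx setting actually took effect.
    public class HeapInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            System.out.printf("max heap: %d MB, allocated: %d MB, free within allocation: %d MB%n",
                    rt.maxMemory() / mb, rt.totalMemory() / mb, rt.freeMemory() / mb);
        }
    }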

TensorFlow runs out of memory while computing: how to find memory leaks?

人走茶凉 submitted on 2019-12-17 20:33:13
Question: I'm iteratively DeepDreaming images in a directory using Google's TensorFlow DeepDream implementation (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb). My code is as follows:

    model_fn = 'tensorflow_inception_graph.pb'

    # creating TensorFlow session and loading the model
    graph = tf.Graph()
    sess = tf.InteractiveSession(graph=graph)
    with tf.gfile.FastGFile(model_fn, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f

Out of memory error in Symfony

一个人想着一个人 submitted on 2019-12-17 19:34:16
Question: I'm currently working on a Symfony project (const VERSION = '2.5.10') and I am using XAMPP; the PHP version is 5.5.19. My problem is that every time I run my dev environment I get an error:

    OutOfMemoryException: Error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 3358976 bytes) in C:\xampp\htdocs\Editracker\vendor\symfony\symfony\src\Symfony\Component\HttpKernel\Profiler\FileProfilerStorage.php line 153

and every time I refresh the page it reports a different memory size. I also think

Solr/Lucene fieldCache OutOfMemory error sorting on dynamic field

▼魔方 西西 submitted on 2019-12-17 19:29:39
Question: We have a Solr core with about 250 TrieIntFields (declared as dynamicField). There are about 14M docs in our Solr index, and many documents have a value in many of these fields. We need to sort on all 250 of these fields over a period of time. The issue we are facing is that the underlying Lucene fieldCache fills up very quickly. We have a 4 GB box and the index size is 18 GB. After sorting on 40 or 45 of these dynamic fields, memory consumption is about 90% and we