out-of-memory

java.lang.OutOfMemoryError: GC overhead limit exceeded when loading an xlsx file

天大地大妈咪最大 Submitted on 2019-12-23 02:32:54
Question: I understand what the error means: my program is consuming too much memory and, for a long period of time, it is not recovering. My program is just reading a 6.2 MB xlsx file when the memory issue occurs. When I monitor the program, it very quickly reaches 1.2 GB of memory consumption and then crashes. How can it reach 1.2 GB when reading a 6.2 MB file? Is there a way to open the file in chunks, so that it doesn't have to be loaded into memory all at once? Or any other solution? Exactly this…
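A 6.2 MB xlsx is a zip archive of XML, which can easily inflate to hundreds of megabytes once a DOM is built for every cell. Assuming the program uses Apache POI's usermodel (the excerpt doesn't say which library it uses), a minimal sketch of POI's streaming SAX API, which reads the first sheet one cell at a time in near-constant memory:

    import java.io.InputStream;
    import javax.xml.parsers.SAXParserFactory;
    import org.apache.poi.openxml4j.opc.OPCPackage;
    import org.apache.poi.ss.usermodel.DataFormatter;
    import org.apache.poi.xssf.eventusermodel.ReadOnlySharedStringsTable;
    import org.apache.poi.xssf.eventusermodel.XSSFReader;
    import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler;
    import org.apache.poi.xssf.usermodel.XSSFComment;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;

    public class StreamingXlsxRead {
        public static void main(String[] args) throws Exception {
            try (OPCPackage pkg = OPCPackage.open(args[0])) {
                XSSFReader reader = new XSSFReader(pkg);
                ReadOnlySharedStringsTable strings = new ReadOnlySharedStringsTable(pkg);
                XSSFSheetXMLHandler.SheetContentsHandler rows =
                        new XSSFSheetXMLHandler.SheetContentsHandler() {
                    public void startRow(int rowNum) {}
                    public void endRow(int rowNum) {}
                    public void cell(String ref, String value, XSSFComment comment) {
                        System.out.println(ref + " = " + value);  // one cell at a time
                    }
                    public void headerFooter(String text, boolean isHeader, String tag) {}
                };
                XMLReader parser = SAXParserFactory.newInstance()
                        .newSAXParser().getXMLReader();
                parser.setContentHandler(new XSSFSheetXMLHandler(
                        reader.getStylesTable(), strings, rows, new DataFormatter(), false));
                try (InputStream sheet = reader.getSheetsData().next()) {
                    parser.parse(new InputSource(sheet));  // SAX pass over the sheet XML
                }
            }
        }
    }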

Out of memory using svmtrain in Matlab

别来无恙 Submitted on 2019-12-22 21:14:40
Question: I have a set of data that I am trying to learn using an SVM. For context, the data has a dimensionality of 35 and contains approximately 30,000 data points. I have previously trained decision trees in Matlab with this dataset, and that took approximately 20 seconds. Not being totally satisfied with the error rate, I decided to try an SVM. I first tried svmtrain(X,Y). After about 5 seconds, I get the following message:

    ??? Error using ==> svmtrain at 453
    Error calculating the kernel function: Out of…
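A back-of-the-envelope check suggests why the kernel calculation fails (this assumes svmtrain materializes the full pairwise kernel matrix, which "Error calculating the kernel function" points at): an N-by-N matrix of doubles for 30,000 points needs more memory than most workstations have.

    % Rough estimate, assuming the full N-by-N kernel matrix is built in memory
    N = 30000;
    bytes = N^2 * 8;                      % N^2 entries, 8 bytes per double
    fprintf('%.1f GiB\n', bytes / 2^30);  % prints 6.7 GiB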

Causes for ENOMEM from ::popen()

僤鯓⒐⒋嵵緔 Submitted on 2019-12-22 17:40:07
Question: I have an application that mostly works, but I am hitting one condition where the call to ::popen() fails with errno set to ENOMEM. The man page for ::popen() refers you to the page for ::fork(), which itself lists ENOMEM with this brief comment on Linux:

    The fork() function may fail if:
    ENOMEM Insufficient storage space is available.

I am wondering if I am really running out of memory, or perhaps out of some other resource, like file descriptors. Can fork() give ENOMEM for something other than…
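One thing worth checking before blaming other resources (running out of file descriptors gives EMFILE/ENFILE, not ENOMEM): fork() must clone the parent's page tables, so a parent with a large virtual address space can get ENOMEM from fork()/popen() under strict overcommit even when plenty of RAM is free. A diagnostic sketch, assuming Linux (the echo command is illustrative, not the application's real child):

    #include <cerrno>
    #include <cstring>
    #include <stdio.h>   // popen/pclose are POSIX, declared here on Linux

    int main() {
        // Print the parent's virtual size -- the figure fork() is charged for.
        if (FILE* status = fopen("/proc/self/status", "r")) {
            char line[256];
            while (fgets(line, sizeof line, status))
                if (strncmp(line, "VmSize:", 7) == 0) fputs(line, stderr);
            fclose(status);
        }
        FILE* pipe = popen("echo hello", "r");
        if (!pipe) {
            fprintf(stderr, "popen failed: %s\n", strerror(errno));
            return 1;
        }
        char buf[128];
        while (fgets(buf, sizeof buf, pipe)) fputs(buf, stdout);
        pclose(pipe);
        return 0;
    }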

OutOfMemoryException when creating huge string in ASP.NET

这一生的挚爱 Submitted on 2019-12-22 13:46:45
Question: When exporting a lot of data to a string (CSV format), I get an OutOfMemoryException. What's the best way to tackle this? The string is returned to a Flex application. What I'd do is export the CSV to the server disk and give back a URL to Flex. That way, I can flush the stream while writing to disk. Update: the string is built with a StringBuilder:

    StringBuilder stringbuilder = new StringBuilder();
    string delimiter = ";";
    bool showUserData = true;
    // Get the data from the sessionwarehouse
    List…
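The asker's own plan (write to disk, return a URL) does keep memory flat, because no row is ever held longer than one WriteLine. A minimal sketch of it, assuming ASP.NET Web Forms; GetRows() and the export folder are hypothetical stand-ins for the sessionwarehouse lookup:

    using System.Collections.Generic;
    using System.IO;

    // Inside the page or handler class:
    private void ExportCsvToDisk()
    {
        string path = Server.MapPath("~/exports/export.csv");  // hypothetical folder
        using (StreamWriter writer = new StreamWriter(path))
        {
            foreach (string[] row in GetRows())          // hypothetical data source
                writer.WriteLine(string.Join(";", row)); // stream one row at a time
        }
        // Hand the matching URL ("/exports/export.csv") back to Flex
        // instead of returning the whole CSV as one string.
    }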

memory error by using rbf with scipy

瘦欲@ Submitted on 2019-12-22 12:54:47
Question: I want to plot some points with the rbf function, like here, to get the density distribution of the points. If I run the following code, it works fine:

    from scipy.interpolate.rbf import Rbf  # radial basis functions
    import cv2
    import matplotlib.pyplot as plt
    import numpy as np

    # import data
    x = [1, 1, 2, 3, 2, 7, 8, 6, 6, 7, 6.5, 7.5, 9, 8, 9, 8.5]
    y = [0, 2, 5, 6, 1, 2, 9, 2, 3, 3, 2.5, 2, 8, 8, 9, 8.5]
    d = np.ones(len(x))
    print(d)
    ti = np.linspace(-1, 10)
    xx, yy = np.meshgrid(ti, ti)
    rbf = Rbf…
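Rbf builds the full pairwise distance matrix over all data points, so memory grows quadratically with the number of points; 16 points are trivial, but thousands are not. A lower-memory sketch, assuming SciPy >= 1.7 is available, where RBFInterpolator can restrict each evaluation to the k nearest data points:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    x = [1, 1, 2, 3, 2, 7, 8, 6, 6, 7, 6.5, 7.5, 9, 8, 9, 8.5]
    y = [0, 2, 5, 6, 1, 2, 9, 2, 3, 3, 2.5, 2, 8, 8, 9, 8.5]
    d = np.ones(len(x))

    pts = np.column_stack([x, y])                   # (N, 2) data points
    interp = RBFInterpolator(pts, d, neighbors=10)  # only 10 nearest per query point
    ti = np.linspace(-1, 10)
    xx, yy = np.meshgrid(ti, ti)
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    zz = interp(grid).reshape(xx.shape)             # same grid evaluation, bounded memory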

Cassandra Cluster - Specific Node - specific table high Dropped Mutations

大憨熊 Submitted on 2019-12-22 12:06:08
Question: My compression strategy in production was LZ4 compression, but I modified it to Deflate. For the compression change, we had to use nodetool upgradesstables to forcibly rewrite all sstables with the new compression strategy. But once the upgradesstables command completed on all 5 nodes in the cluster, my requests started to fail, both reads and writes. The issue was traced to a specific node out of the 5-node cluster, and to a specific table on that node. My whole cluster has roughly the same amount of data and…
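For reference, the change itself is a table-level setting followed by a per-node sstable rewrite; a sketch of the two steps (keyspace and table names are placeholders, and the exact compression option keys vary by Cassandra version; the map below is the Cassandra 3.x style):

    -- CQL: switch the table's compression
    ALTER TABLE my_keyspace.my_table
      WITH compression = {'class': 'DeflateCompressor', 'chunk_length_in_kb': 64};

    # Shell, on each node: rewrite existing sstables so they pick up the change
    # (-a also rewrites sstables that are already on the current format version)
    nodetool upgradesstables -a my_keyspace my_table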

Android app crash out of memory on relaunch

痞子三分冷 Submitted on 2019-12-22 10:56:09
Question: So I'm having the infamous OOM error caused by large bitmaps, but I've managed to fix most of the issues. The only remaining issue happens when I press back to close the app and then relaunch it right away: the app crashes with an OOM (out of memory) error. This won't happen if I press home. Why is this happening? My guess is that the GC hasn't finished cleaning up and I start the app again while the old data is still lying around. And of course it isn't a new app, so the old launch…
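That guess fits how back vs. home behaves: back destroys the Activity, but the process (and anything still referenced) lives on, so a quick relaunch in the same process can briefly hold two sets of bitmaps. One common mitigation is to release the big bitmaps explicitly when the Activity is torn down; a sketch with hypothetical field names:

    import android.app.Activity;
    import android.graphics.Bitmap;
    import android.widget.ImageView;

    public class MainActivity extends Activity {
        private ImageView imageView;   // hypothetical view showing the large bitmap
        private Bitmap largeBitmap;    // hypothetical field holding the decoded bitmap

        @Override
        protected void onDestroy() {
            super.onDestroy();
            if (largeBitmap != null) {
                imageView.setImageDrawable(null);  // drop the view's reference first
                largeBitmap.recycle();             // free the pixel memory immediately
                largeBitmap = null;                // let the GC collect the wrapper
            }
        }
    }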

Volley give me Out of memory exception after I make a lot of request with big amount of data

为君一笑 Submitted on 2019-12-22 10:45:00
Question: I have a page viewer (ViewPager), and inside every page I have a ListView. Each list view shows 10 records fetched from a web service, so the pager makes three web-service calls to populate three pages (the current page, the left one and the right one). But after I make a lot of swipes, I get this exception:

    java.lang.OutOfMemoryError: pthread_create (stack size 16384 bytes) failed: Try again
        at java.lang.VMThread.create(Native Method)
        at java.lang.Thread.start(Thread.java:1029)
        at com.android…
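That particular OutOfMemoryError is about thread creation, not heap: it typically appears when every request spins up a fresh Volley RequestQueue, each with its own thread pool, until the process can't create another thread. A minimal singleton sketch, assuming that is the cause here (the class name is illustrative):

    import android.content.Context;
    import com.android.volley.RequestQueue;
    import com.android.volley.toolbox.Volley;

    public final class VolleyHolder {
        private static RequestQueue queue;

        private VolleyHolder() {}

        // Reuse one RequestQueue (and its fixed thread pool) for the whole app.
        public static synchronized RequestQueue get(Context context) {
            if (queue == null) {
                // The application context avoids leaking an Activity between pages.
                queue = Volley.newRequestQueue(context.getApplicationContext());
            }
            return queue;
        }
    }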

Insufficient memory when opening TClientDataSet with a REPLACE in the underlying query

牧云@^-^@ Submitted on 2019-12-22 10:44:16
Question: My Delphi code opens a TFDQuery (FireDAC), then opens the TClientDataSet connected to it via a TDataSetProvider:

    ClientDataSetData.Close;
    with QueryData do
    begin
      Close;
      SQL.Clear;
      SQL.Add(ASelectSQL);
      Open;
    end;
    ClientDataSetData.Open;

ASelectSQL contains this SQL:

    SELECT TT_NIV_ID, TT_NIV, REPLACE(TT_NIV_NAME, '|', '!') as TT_NIV_NAME2
    FROM TT_SYS_PRJ_NIV

The ClientDataSetData.Open gives an insufficient memory error on a dataset with 42,200 records. If I inspect the result data (in the…
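A likely culprit (an assumption, since the excerpt cuts off): REPLACE() often returns a very wide or unbounded string type, and TClientDataSet allocates the declared maximum field width for every one of the 42,200 in-memory records. Casting the expression back to a bounded type keeps each record small; the VARCHAR(100) below is an illustrative size to be matched to TT_NIV_NAME's real definition:

    SELECT TT_NIV_ID, TT_NIV,
           CAST(REPLACE(TT_NIV_NAME, '|', '!') AS VARCHAR(100)) AS TT_NIV_NAME2
    FROM TT_SYS_PRJ_NIV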