memory-management

Memory Continues to Increase when Loading and Releasing NSImage

Submitted by 别说谁变了你拦得住时间么 on 2019-12-30 11:22:16

Question: I have a problem where my application aggressively consumes memory to a "thrashing point" with successive image file loads. For example, consider the following code, which repeatedly loads and releases a 15 MB JPEG file (a large file chosen for test purposes):

```objc
NSURL *inputUrl = [NSURL URLWithString:@"file:///Users/me/Desktop/15MBjpeg.jpg"];
for (int i = 0; i < 1000; i++) {
    NSImage *image = [[NSImage alloc] initWithContentsOfURL:inputUrl];
    [image release];
}
```

It performs quickly for the first several …

Actionscript memory management, garbage collection

Submitted by 我与影子孤独终老i on 2019-12-30 11:18:10

Question: This blog (and others) state that you should set object references to null inside your dispose() methods when cleaning up objects. However, ActionScript 3 (with Flash Player 9) uses mark-and-sweep, which clears out circular references for you. So I am wondering: is there really any reason to null out your object references?

Answer 1: I never do, as long as you do the obvious: break all references to the object (remove it from arrays, set variables storing the object to null, remove it from the display list) …

Memory error only in Spyder IDE

Submitted by 十年热恋 on 2019-12-30 11:05:21

Question: Doing the following causes a MemoryError in my Spyder Python IDE:

```python
>>> from numpy import *
>>> a_flt = ones((7000,7000), dtype=float64)+4
>>> b_flt = ones((7000,7000), dtype=float64)+1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
>>>
```

This is weird, since the memory usage shown in Spyder's status bar indicates that only approx. 25% of my memory is used. Furthermore, when generating an even higher number of these large 7000×7000 arrays in the standard Python IDE …

How to read blocks of data from a file and then read from that block into a vector?

Submitted by 对着背影说爱祢 on 2019-12-30 10:56:33

Question: Suppose I have a file which has x records. One 'block' holds m records, so the total number of blocks in the file is n = x/m. If I know the size of one record, say b bytes (so the size of one block is b*m), I can read a complete block at once using the system call read() (is there any other method?). Now, how do I read each record from this block and put each record as a separate element into a vector? The reason I want to do this in the first place is to reduce disk I/O operations. As the disk I/O …

Properly Overloading new/delete new[]/delete[]

Submitted by ∥☆過路亽.° on 2019-12-30 09:42:37

Question: This is a follow-up to my previous question, Initializing a class using malloc. The accepted answer on that question works and gives me new/delete on avr-gcc. Here is the problem: my overloaded new/delete wreaks havoc on regular gcc. What is the proper way to overload new/delete? All my classes derive from a common base class, so ideally I would like to override new/delete just for my objects, so it does not mess with the STL, stdlib, etc.

Answer 1: 'new' and 'delete' can be overloaded inside the common …

constant app memory increase ( IOAccelResource )

Submitted by 人走茶凉 on 2019-12-30 09:37:28

Question: I am trying to wrap my mind around an issue (alluded to in this question). The context: a turn-based game, developed with cocos2d version 2.0, Objective-C, no ARC, currently prepping an App Store update to account for some iOS 7 issues (mine, not iOS 7's). My own instrumentation, as well as Instruments, shows no leaks, no abandoned memory, nothing: flat. This also used to be the case under iOS 4, 5, and 6.1. However, in my test rundown prior to submission, when profiling on device, I see a 1 MB increase per …

memory management & std::allocator

Submitted by 十年热恋 on 2019-12-30 09:34:14

Question: In reviewing my code I see an "ugly" structure I use: in a class (called "map") I have a vector which contains a "data" class:

```cpp
std::vector<PointerToHUGEClass> vector;
```

where PointerToHUGEClass is just what the name describes (the object pointed to is also owned by the map class, and created with "new" in the constructor). This all works (at the moment). However, I still feel it is more of a workaround. The only reason I am using a PointerToHUGEClass instead of …

Is using realloc() on a dynamically allocated 2D array a good idea?

Submitted by 浪子不回头ぞ on 2019-12-30 09:31:35

Question: I am mainly interested in the viability of shrinking such an array. I'm working on a project where I have used single malloc() calls to create individual, moderately large 2D arrays (each only a few tens of MiB at the largest). The thing is that over the life of one of the arrays, its contents dramatically shrink in size (by more than half). Obviously, I could just leave the array size alone for the life of the program (it's only x MiB on a system with GiB of RAM available). But, we …