memory-management

Why is stack memory size so limited?

我的梦境 posted on 2020-01-08 17:05:17
Question: When you allocate memory on the heap, the only limit is free RAM (or virtual memory), which amounts to gigabytes of memory. So why is the stack size so limited (around 1 MB)? What technical reason prevents you from creating really big objects on the stack? Update: my intent might not be clear; I do not want to allocate huge objects on the stack and I do not need a bigger stack. This question is pure curiosity.

Answer 1: My intuition is the following. The stack is not as easy to manage as the heap. The stack need…
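For concreteness, a minimal sketch of the contrast the question describes, written in C# (the language of several later entries here); the class name, the 8 MB figure, and the ~1 MB default stack size are illustrative assumptions and vary by platform and thread configuration:

    using System;

    class StackVsHeap
    {
        static void Main()
        {
            // A few MB in a single stack frame is usually enough to exceed the
            // default stack and terminate the process with a StackOverflowException:
            // Span<byte> onStack = stackalloc byte[8 * 1024 * 1024];

            // The same amount on the heap is limited only by available RAM
            // and virtual memory:
            byte[] onHeap = new byte[8 * 1024 * 1024];
            Console.WriteLine(onHeap.Length);
        }
    }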

Memory Allocation “Error: cannot allocate vector of size 75.1 Mb” [duplicate]

℡╲_俬逩灬. posted on 2020-01-08 12:27:15
Question: This question already has answers here: R memory management / cannot allocate vector of size n Mb (8 answers). Closed last year. In the course of vectorizing some simulation code, I've run into a memory issue. I'm using 32-bit R version 2.15.0 (via RStudio version 0.96.122) under Windows XP. My machine has 3.46 GB of RAM.

    > sessionInfo()
    R version 2.15.0 (2012-03-30)
    Platform: i386-pc-mingw32/i386 (32-bit)
    locale:
    [1] LC_COLLATE=English_United Kingdom.1252  LC_CTYPE=English_United Kingdom…

How to know which zone a process allocates memory to?

亡梦爱人 posted on 2020-01-07 06:59:08
Question: I want to know the zone that a process allocates memory to. I know that I can use

    cat /proc/pid/status

and also

    cat /proc/zoneinfo

but neither of these commands answers where the process's memory is allocated to. Is there any other command? And if there is, how can I implement it in the kernel?

Source: https://stackoverflow.com/questions/43674673/how-to-know-that-which-is-process-allocate-memory-to

Better Memory (Heap) management on Solaris 10

假如想象 posted on 2020-01-07 06:55:11
Question: I have C code with embedded SQL for Oracle through Pro*C. Whenever I do an insert or update (an update example is given below):

    update TBL1 set COL1 = :v, . . . where rowid = :v

To manage bulk insertions and updates, I have allocated several memory chunks so that I can insert in bulk and commit once. There are other memory allocations going on as and when necessary. How do I better manage the memory (heap) for dynamic memory allocations? One option is to have the heap size configurable during the GNU…

Does reusing a list slice to get length cost additional memory?

非 Y 不嫁゛ posted on 2020-01-07 06:50:49
Question: I proposed something in a comment on this answer. Martijn Pieters said that my suggestion would be memory intensive, and he's usually right, but I like to see things for myself, so I tried to profile it. Here's what I got:

    #!/usr/bin/env python
    """ interpolate.py """
    from memory_profiler import profile

    @profile
    def interpolate1(alist):
        length = (1 + len(alist)) // 2
        alist[::2] = [0] * length

    @profile
    def interpolate2(alist):
        length = len(alist[::2])
        alist[::2] = [0] * length

    a = []
    b = []
    …

C# memory usage for creating objects in a for loop

余生颓废 posted on 2020-01-07 06:38:16
Question: I have a complex database conversion console app that reads from an old database, does a bunch of things, and writes into the new database. I'm having an escalating memory problem where my memory usage (as monitored in Task Manager) constantly climbs and eventually slows the process to a halt. I've boiled it down to the simplest possible test POC to try to understand what's going on:

    for (int i = 0; i < 100000; i++)
    {
        TestObj testc = new TestObj { myTest = "testing asdf" };
    }

    public class…
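One way to check whether these short-lived objects are actually retained on the managed heap is to measure heap usage around the loop. A sketch, assuming a minimal TestObj definition since the class in the excerpt is truncated:

    using System;

    public class TestObj { public string myTest; }

    class Program
    {
        static void Main()
        {
            long before = GC.GetTotalMemory(forceFullCollection: true);

            for (int i = 0; i < 100000; i++)
            {
                TestObj testc = new TestObj { myTest = "testing asdf" };
            }

            // Forcing a full collection shows how much memory survives the loop.
            long after = GC.GetTotalMemory(forceFullCollection: true);
            Console.WriteLine($"Retained bytes: {after - before}");
        }
    }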

How to correctly notify a delegate that the instance is no longer needed?

心已入冬 posted on 2020-01-07 06:15:54
Question: This is my pattern:

1) SpecialView creates a MessageView and holds a strong reference to it.
2) The user taps a button in MessageView, which causes it to fade out. MessageView then tells its delegate, SpecialView, that it faded out completely.
3) SpecialView releases MessageView.

The problem is this:

    - (void)fadedOut:(NSString*)animationID finished:(NSNumber*)finished context:(void*)context {
        [self.delegate messageViewFadedOut:self];
        // delegate releases us...
        // self maybe got deallocated...
    …

Destructing a linked list

走远了吗. posted on 2020-01-07 03:42:11
Question: I was trying to implement a linked list to solve an algorithm problem. It basically worked; however, it turned out that I was using too much memory. I would appreciate it if someone could point out defects in the following destructor design.

    template<typename T>
    struct Node {
        Node(): item(0), next(0) {}
        Node(T x): item(x), next(0) {}
        T item;
        Node* next;
    };

    template <typename T>
    struct List {
        List() : head(0), tail(0) {}
        Node<T>* head;
        Node<T>* tail;
        void insert(T x) {
            Node<T>* newNode = new Node<T>(x);
            if…

MemberwiseClone equivalent to an existing object?

蹲街弑〆低调 posted on 2020-01-07 03:42:04
Question: There are quite a few questions on here about MemberwiseClone, but I can't find anything covering this exactly. As I understand it, MemberwiseClone basically just copies an area of memory for an object, dumps it somewhere else, and calls it a new instance of that object. Obviously very quick, and for large objects it is the quickest way to make a copy. For small objects with simple constructors, it's quicker to create an instance and copy the individual properties. Now, I have a tight loop in…
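For reference, MemberwiseClone is a protected method on System.Object that produces a shallow copy: value-type fields are copied, while reference-type fields end up shared between the original and the clone. A minimal sketch (the Sample type and its fields are made up for illustration):

    using System;

    public class Sample
    {
        public int Id;
        public int[] Data = new int[4];

        // MemberwiseClone is protected, so it is usually exposed via a method.
        public Sample ShallowCopy() => (Sample)MemberwiseClone();
    }

    class Demo
    {
        static void Main()
        {
            var a = new Sample { Id = 1 };
            var b = a.ShallowCopy();
            b.Data[0] = 42;                 // visible through a.Data as well:
            Console.WriteLine(a.Data[0]);   // the array reference is shared -> 42
        }
    }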

C# - Converting Bitmap to byte[] using Marshal.Copy not working consistently?

家住魔仙堡 posted on 2020-01-07 03:06:25
Question: I have been trying to implement the image comparison algorithm seen here: http://www.dotnetexamples.com/2012/07/fast-bitmap-comparison-c.html The problem I have been having is that when I try to compare a large number of images one after another using the method pasted below (a slightly modified version of the one from the link above), my results seem to be inaccurate. In particular, if I try to compare too many different images, even the ones that are the same will occasionally be detected as different.
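The poster's modified method is not included in the excerpt, but the usual LockBits / Marshal.Copy pattern for getting a bitmap's raw bytes looks roughly like this (a generic System.Drawing sketch, not the linked article's exact code):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    static class BitmapBytes
    {
        public static byte[] ToBytes(Bitmap bmp)
        {
            var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
            BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, bmp.PixelFormat);
            try
            {
                // Stride is the padded row width in bytes, so the buffer
                // covers Stride * Height bytes, padding included.
                int byteCount = Math.Abs(data.Stride) * bmp.Height;
                var bytes = new byte[byteCount];
                Marshal.Copy(data.Scan0, bytes, 0, byteCount);
                return bytes;
            }
            finally
            {
                bmp.UnlockBits(data);   // always release the locked bits
            }
        }
    }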