Dynamic memory allocation

Shrinking with realloc

自古美人都是妖i submitted on 2019-12-01 17:52:16
I encountered this small piece of code in this question, and wanted to know: can the realloc() function ever move a memory block to another location when the memory space pointed to is shrunk?

int * a = malloc( 10*sizeof(int) );
int * b = realloc( a, 5*sizeof(int) );

If it is possible, under what conditions can I expect b to have an address different from that in a?

It's possible for realloc to move memory on any call. True, in many implementations a shrink would just result in a change of the reserved size in the heap and wouldn't move memory. However, in a heap which is optimized for low …
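
For reference, a minimal self-contained way to observe which outcome a particular implementation chooses on a shrinking call (either result is conforming; the old address is saved as an integer because the old pointer value is no longer valid once realloc has moved the block):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main() {
    int *a = static_cast<int *>(std::malloc(10 * sizeof(int)));
    if (!a) return 1;

    // Remember the old address before realloc can invalidate `a`.
    std::uintptr_t old_addr = reinterpret_cast<std::uintptr_t>(a);

    int *b = static_cast<int *>(std::realloc(a, 5 * sizeof(int)));
    if (!b) {                 // shrink failed: the original block is still valid
        std::free(a);
        return 1;
    }

    std::printf(reinterpret_cast<std::uintptr_t>(b) == old_addr
                    ? "block stayed in place\n"
                    : "block moved\n");
    std::free(b);             // always free through the pointer realloc returned
}
```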

How to schedule collection cycles for custom mark-sweep collector?

我们两清 submitted on 2019-12-01 06:15:28
I've written a simple garbage collector for a PostScript virtual machine, and I'm having difficulty designing a decent set of rules for when to do a collection (when the free list is too short?) and when to allocate new space (when there's a lot of space to use?). I've written bottom-up so far, but this question involves top-level design, so I feel I'm on shaky ground. All objects are managed and access is only through operator functions, so this is a collector in C, not for C. The primary allocator function is called gballoc:

unsigned gballoc(mfile *mem, unsigned sz) {
    unsigned z = adrent …
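
One common scheduling policy, sketched below purely for illustration (this is not the asker's gballoc; Heap, alloc_from_free_list, collect and grow_heap are hypothetical names), is to satisfy requests from the free list first, run a mark-sweep only when that fails, and grow the heap only when a collection does not free enough:

```cpp
#include <cstddef>

// Hypothetical interfaces; the real VM's mfile/gballoc API will differ.
struct Heap;
void *alloc_from_free_list(Heap *h, std::size_t sz); // nullptr if no block fits
std::size_t collect(Heap *h);                        // mark-sweep, returns bytes freed
bool grow_heap(Heap *h, std::size_t at_least);       // request more space from the OS

void *gc_alloc(Heap *h, std::size_t sz) {
    // 1. Fast path: reuse a block from the free list if one fits.
    if (void *p = alloc_from_free_list(h, sz)) return p;

    // 2. The free list can't satisfy the request: collect, then retry.
    collect(h);
    if (void *p = alloc_from_free_list(h, sz)) return p;

    // 3. Collection didn't help enough: grow the heap and retry once more.
    if (grow_heap(h, sz))
        return alloc_from_free_list(h, sz);

    return nullptr;  // genuinely out of memory
}
```

This defers collection until it is actually needed and only asks for more space when the collector cannot recover enough, which keeps pause frequency proportional to allocation pressure.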

Sized Deallocation Feature In Memory Management in C++1y

余生长醉 submitted on 2019-11-30 19:16:02
The Sized Deallocation feature has been proposed for inclusion in C++1y. However, I wanted to understand how it would affect/improve current C++ low-level memory management. The proposal is N3778, which states the following about its intent:

With C++11, programmers may define a static member function operator delete that takes a size parameter indicating the size of the object to be deleted. The equivalent global operator delete is not available. This omission has unfortunate performance consequences. Modern memory allocators often allocate in size categories, and, for space efficiency …
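
For concreteness, a minimal sketch of the global sized deallocation function the proposal added (it became part of C++14): with it, a size-class allocator can pick the right bucket straight from the size argument instead of recomputing it from the pointer. The printf is only there to make the call visible:

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Replaceable global allocation functions (compile with -std=c++14 or later).
void *operator new(std::size_t sz) {
    if (sz == 0) sz = 1;
    if (void *p = std::malloc(sz)) return p;
    throw std::bad_alloc{};
}

// Unsized form: the allocator has to deduce the block size from the pointer.
void operator delete(void *p) noexcept {
    std::free(p);
}

// Sized form (new in C++14): the compiler passes the object's size,
// so a size-class allocator can locate the right bucket without a lookup.
void operator delete(void *p, std::size_t sz) noexcept {
    std::printf("deleting %zu bytes\n", sz);
    std::free(p);
}

int main() {
    int *p = new int(42);
    delete p;   // typically dispatches to the sized overload under C++14
}
```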

nothrow or exception?

时间秒杀一切 submitted on 2019-11-30 18:54:13
I am a student and I have limited knowledge of C++, which I am trying to expand. This is more of a philosophical question; I am not trying to implement something. Since

#include <new>
//...
T * t = new (std::nothrow) T();
if(t) {
    //...
}
//...

will suppress the exception, and since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered the worse practice, considering we will have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)? What are the benefits (if any) of the normal new allocation …
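
A small side-by-side sketch of the two failure-handling styles the question contrasts (the type Widget is invented for illustration):

```cpp
#include <iostream>
#include <new>

struct Widget { int data[1000]; };

int main() {
    // Style 1: nothrow new, failure is reported through a null pointer.
    Widget *a = new (std::nothrow) Widget();
    if (!a) {
        std::cerr << "allocation failed\n";
    } else {
        delete a;
    }

    // Style 2: plain new, failure is reported as a std::bad_alloc exception.
    try {
        Widget *b = new Widget();
        delete b;
    } catch (const std::bad_alloc &e) {
        std::cerr << "allocation failed: " << e.what() << '\n';
    }
}
```

The usual argument for the plain form is that the success path stays free of checks and the failure can propagate to whatever level is actually able to deal with it.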

Why shouldn't we have dynamic allocated memory with different size in embedded system

半世苍凉 submitted on 2019-11-30 18:53:52
Question: I have heard that in embedded systems we should use preallocated fixed-size memory chunks (like a buddy memory system?). Could somebody give me a detailed explanation why? Thanks.

Answer 1: In embedded systems you have very limited memory. Therefore, if you occasionally lose even one byte of memory (because you allocate it but don't free it), this will eat up the system memory pretty quickly (1 GByte of RAM with a leak rate of 1/hour will take its time; if you have 4 kB of RAM, not as long) …
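
A minimal sketch of the kind of fixed-size block pool the question is asking about: all blocks come from one statically allocated buffer, so allocation and deallocation are O(1) and there is no fragmentation (the block size, block count and function names are illustrative):

```cpp
#include <cstddef>

constexpr std::size_t BLOCK_SIZE = 32;   // payload bytes per block
constexpr std::size_t BLOCKS     = 16;   // total blocks in the pool

union Block {
    Block        *next;                  // link used while the block is free
    unsigned char data[BLOCK_SIZE];      // payload used while the block is live
};

static Block  pool[BLOCKS];
static Block *free_head = nullptr;

void pool_init() {
    free_head = nullptr;
    for (std::size_t i = 0; i < BLOCKS; ++i) {   // thread every block onto the free list
        pool[i].next = free_head;
        free_head = &pool[i];
    }
}

void *pool_alloc() {
    if (!free_head) return nullptr;      // pool exhausted: fail predictably, no fallback
    Block *b = free_head;
    free_head = b->next;
    return b->data;
}

void pool_free(void *p) {
    Block *b = reinterpret_cast<Block *>(p);
    b->next = free_head;                 // push the block back onto the free list
    free_head = b;
}

int main() {
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    pool_free(a);
    pool_free(b);
}
```

Because every block has the same size, a leak is bounded to whole blocks and exhaustion shows up as an immediate, testable nullptr instead of gradual heap fragmentation.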

How much memory should you be able to allocate?

China☆狼群 submitted on 2019-11-30 14:04:39
Question: Background: I am writing a C++ program working with large amounts of geodata, and wish to load large chunks to process in a single go. I am constrained to working with an app compiled for 32-bit machines. The machine I am testing on is running a 64-bit OS (Windows 7) and has 6 GB of RAM. Using MS VS 2008. I have the following code:

byte* pTempBuffer2[3];
try
{
    //size_t nBufSize = nBandBytes*m_nBandCount;
    pTempBuffer2[0] = new byte[nBandBytes];
    pTempBuffer2[1] = new byte[nBandBytes];
    …
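
A small probe, sketched here as an illustration rather than part of the asker's code, shows how much contiguous memory a 32-bit process can actually obtain: keep halving the request until a single new[] succeeds (the ceiling depends on address-space fragmentation and, on Windows, on whether the image is linked with /LARGEADDRESSAWARE):

```cpp
#include <cstddef>
#include <iostream>
#include <new>

int main() {
    // Start at 2 GB and halve until one contiguous allocation succeeds.
    for (std::size_t request = std::size_t(1) << 31; request >= 1024 * 1024; request /= 2) {
        try {
            char *p = new char[request];
            std::cout << "largest contiguous block: " << request << " bytes\n";
            delete[] p;
            break;
        } catch (const std::bad_alloc &) {
            // Too large for the current address space; try half the size.
        }
    }
}
```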

Does ::operator new(size_t) use malloc()?

允我心安 submitted on 2019-11-30 12:56:42
Does ::operator new(size_t) call malloc() internally, or does it use system calls / OS-specific library calls directly? What does the C++ standard say? In this answer it says that:

malloc() is guaranteed to return an address aligned for any standard type. ::operator new(n) is only guaranteed to return an address aligned for any standard type no larger than n, and if T isn't a character type then new T[n] is only required to return an address aligned for T.

And that suggests that operator new cannot be required to call malloc(). Note: There's an SO question about everything operator new does other …
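
Whether the default ::operator new forwards to malloc() is left to the implementation (many library implementations do); the standard only pins down its semantics. A sketch of a malloc-based replacement, which a program is explicitly allowed to provide, illustrating the required behaviour (new-handler loop, bad_alloc on failure):

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// A replacement global operator new layered on malloc. The library's default
// version may or may not be implemented this way.
void *operator new(std::size_t sz) {
    if (sz == 0) sz = 1;                       // every successful call must return a distinct pointer
    while (true) {
        if (void *p = std::malloc(sz)) return p;
        if (std::new_handler h = std::get_new_handler())
            h();                               // let the new-handler try to free up memory
        else
            throw std::bad_alloc{};
    }
}

void operator delete(void *p) noexcept {
    std::free(p);
}

int main() {
    int *p = new int(7);
    std::printf("%d\n", *p);
    delete p;
}
```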

Heap/dynamic vs. static memory allocation for C++ singleton class instance

人盡茶涼 submitted on 2019-11-30 11:17:56
My specific question is: when implementing a singleton class in C++, are there any substantial differences between the two pieces of code below regarding performance, side issues or anything else?

class singleton {
    // ...
    static singleton& getInstance() {
        // allocating on heap
        static singleton* pInstance = new singleton();
        return *pInstance;
    }
    // ...
};

and this:

class singleton {
    // ...
    static singleton& getInstance() {
        // using static variable
        static singleton instance;
        return instance;
    }
    // ...
};

(Note that dereferencing in the heap-based implementation should not affect performance, as AFAIK there …
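
One observable difference is what happens at program exit: the function-local static is destroyed then, while the heap-allocated instance never is. A small sketch (the question's second variant with a logging destructor added) makes this visible:

```cpp
#include <iostream>

class singleton {
public:
    static singleton& getInstance() {
        static singleton instance;   // constructed on first use, thread-safe since C++11
        return instance;
    }
    ~singleton() { std::cout << "singleton destroyed\n"; }   // runs during static destruction
private:
    singleton() { std::cout << "singleton constructed\n"; }
    singleton(const singleton&) = delete;
    singleton& operator=(const singleton&) = delete;
};

int main() {
    singleton::getInstance();
    // Prints "singleton constructed", then "singleton destroyed" at exit.
    // With the heap-based variant, the destructor would never run.
}
```

Whether that end-of-program destruction is desirable (clean shutdown) or a hazard (static destruction order) is exactly the trade-off usually discussed for these two variants.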