Question
For example, suppose deallocations of dynamic memory are always done in the reverse order of allocations. In that case, is it guaranteed that the heap will not become fragmented?
And from a theoretical point of view: does there exist any realistic way for a nontrivial application to manage memory so that heap fragmentation is completely avoided? (That is, after each atomic change to the heap, is the heap still unfragmented?)
Answer 1:
Does there exist some realistic way for a nontrivial application to manage memory so as to completely avoid heap fragmentation?
Yes. Allocate everything on the stack. This may seem like a counsel of perfection but I've done it in non-trivial programs.
EDIT For absolute clarity and avoidance of doubt, I said 'the stack'. By that I clearly mean automatic storage, i.e. the local variable stack, not any stack class. Some of the comments about this are basically obtuse.
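Below is a minimal sketch of what "allocate everything on the stack" can look like in practice, assuming the worst-case data size is known at compile time; the bound MAX_SAMPLES and the tiny computation are invented for illustration, not taken from the answer.

    #include <array>
    #include <cstddef>

    // Invented compile-time upper bound; a real program would derive it
    // from its worst-case requirements.
    constexpr std::size_t MAX_SAMPLES = 4096;

    double average(const double* samples, std::size_t count)
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < count; ++i)
            sum += samples[i];
        return count != 0 ? sum / count : 0.0;
    }

    int main()
    {
        // Automatic storage: the buffer lives on the stack, so no heap
        // allocation (and therefore no heap fragmentation) ever happens.
        std::array<double, MAX_SAMPLES> samples{};
        std::size_t count = 0;

        samples[count++] = 1.0;   // ... fill up to MAX_SAMPLES entries ...
        samples[count++] = 2.0;

        return average(samples.data(), count) > 0.0 ? 0 : 1;
    }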
Answer 2:
The only way to really avoid heap fragmentation is to do memory management yourself: keep track of all the pointers you use and defragment your heap from time to time. But be aware that there is a reason this is not done: it does not pay off. The effort in code and the performance cost of defragmenting your heap are far too large.
Just an anecdotal bit of history:
The old MacOS (pre-MacOS-X era!) used so-called handles for memory objects: pointers into a table of pointers to the actual memory regions. That had the advantage that the OS could move a memory object by modifying its pointer in the table; any references through the handle would remain intact. But you had to lock a handle every time you wanted to access its data across a system call. It should be mentioned that this was before multicore became mainstream, so there was no real concurrency going on. With a multithreaded application, one would have to lock handles every time, at least if the other threads were allowed to call into the system. I repeat: there is a reason handles did not survive into MacOS-X...
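The double-indirection idea can be sketched like this (a toy illustration only, not the real classic MacOS Memory Manager API; MAX_BLOCKS and the function names are made up):

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // A handle is a pointer into a master pointer table. Clients always
    // dereference twice, so the allocator may move the underlying block and
    // only has to update the table entry.
    using Handle = void**;

    constexpr std::size_t MAX_BLOCKS = 64;
    static void* g_master[MAX_BLOCKS] = {};     // master pointer table

    Handle toy_new_handle(std::size_t size)
    {
        for (std::size_t i = 0; i < MAX_BLOCKS; ++i) {
            if (g_master[i] == nullptr) {
                g_master[i] = std::malloc(size);
                return &g_master[i];            // handle = address of the slot
            }
        }
        return nullptr;
    }

    void toy_relocate(Handle h, std::size_t size)
    {
        // The "OS" moves the block during compaction; the handle stays valid
        // because only the master pointer changes.
        void* moved = std::malloc(size);
        std::memcpy(moved, *h, size);
        std::free(*h);
        *h = moved;
    }

    int main()
    {
        Handle h = toy_new_handle(16);
        std::strcpy(static_cast<char*>(*h), "hello");   // access = double deref
        toy_relocate(h, 16);                            // data moved, handle intact
        return static_cast<char*>(*h)[0] == 'h' ? 0 : 1;
    }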
Answer 3:
I think it's theoretically possible to achieve this (assuming the heap implementation is also doing "the right thing" [e.g. merging blocks immediately as they are freed])
But in any practical application that is solving a real problem, it's unlikely. Certainly any usage of std::string or std::vector is almost certain to allocate/free memory in "unordered ways".
If you have a scenario where heap fragmentation is a potential problem, it's almost certainly better to use a solution that reduces or eliminates it (e.g. fixed-size bucket allocation, or separate heaps for different types of allocations; these are just two of MANY different solutions).
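As a rough sketch of the first of those ideas, a fixed-size bucket (pool) allocator might look like the following; the block size and count are arbitrary illustration values, not a recommendation:

    #include <cstddef>

    // All blocks in the pool have the same size, so any sequence of
    // allocations and frees leaves the region unfragmented.
    template <std::size_t BlockSize, std::size_t BlockCount>
    class FixedPool {
        union Block {
            Block* next;                                   // used while free
            alignas(std::max_align_t) unsigned char bytes[BlockSize];
        };
        Block  storage_[BlockCount];
        Block* free_list_ = nullptr;

    public:
        FixedPool() {
            for (std::size_t i = 0; i < BlockCount; ++i) { // thread the free list
                storage_[i].next = free_list_;
                free_list_ = &storage_[i];
            }
        }
        void* allocate() {
            if (free_list_ == nullptr) return nullptr;     // pool exhausted
            Block* b = free_list_;
            free_list_ = b->next;
            return b;
        }
        void deallocate(void* p) {                         // O(1), any order
            Block* b = static_cast<Block*>(p);
            b->next = free_list_;
            free_list_ = b;
        }
    };

    int main()
    {
        FixedPool<64, 1024> pool;       // 1024 blocks of 64 bytes each
        void* a = pool.allocate();
        void* b = pool.allocate();
        pool.deallocate(a);             // free order does not matter
        pool.deallocate(b);
        return (a && b) ? 0 : 1;
    }

Because every block has the same size, a freed block can always satisfy the next request of that class, which is exactly what defeats fragmentation.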
Answer 4:
It is possible to partition the heap into regions in which only allocations of a particular size are allowed. That way your heap does not get fragmented, but you pay a memory-consumption overhead and risk running out of free chunks; sometimes this is well justified (say, when software has to stay online 24/7/365). The sizes can be powers of 2, or the sizes most typically used by the application, or new memory regions can simply be allocated on demand upon the first allocation of size N, up to some sensible N.
It will look similar to this ([*] marks an allocated block, [ ] a free block):
size=1: [*][ ][*][*][*][ ][ ][ ][ ][ ]
size=2: [ ][ ][ ][**][**][ ][ ][ ]
size=4: [ ][****][ ][ ][ ]
size=8: [ ][ ][ ]
....
A visualizer I wrote a while ago when experimenting with the subject: http://bobah.net/d4d/tools/cpp-heapmap
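To illustrate the power-of-2 variant from the diagram above, a request could be routed to its region roughly like this (a sketch only; the region lookup itself is left out):

    #include <cstddef>

    // Round a requested size up to the next power of two and return the
    // index of the corresponding size class (0 -> 1 byte, 1 -> 2 bytes,
    // 2 -> 4 bytes, ...). Each class is then served from its own region.
    std::size_t size_class(std::size_t n)
    {
        std::size_t cls = 0, block = 1;
        while (block < n) { block <<= 1; ++cls; }
        return cls;        // e.g. n = 3 -> class 2 (blocks of 4 bytes)
    }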
Answer 5:
In managed languages such as Java, heap defragmentation happens all the time. This is possible because Java "pointers" are actually references: Java is aware of them and is able to update them when it moves objects in the heap.
In C and C++ pointers can be anything, they can be calculated, combined, updated, etc. This means that once something is allocated it cannot be moved as you never know what might actually be a pointer to it - and that immediately makes defragmentation impossible in the general case.
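A small, contrived example of why: both of the "pointers" below are legal C++, yet no allocator could find and rewrite them if it moved the block.

    #include <cstdint>
    #include <cstdlib>

    int main()
    {
        int* p = static_cast<int*>(std::malloc(16 * sizeof(int)));

        // An interior pointer computed by arithmetic, and a pointer hidden
        // inside an integer: a compacting allocator could not locate either
        // one to update it, so the block must stay where it is.
        int*           interior  = p + 7;
        std::uintptr_t disguised = reinterpret_cast<std::uintptr_t>(p);

        *interior = 42;
        std::free(reinterpret_cast<int*>(disguised));
        return 0;
    }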
The only way to reduce fragmentation is to only store elements of the same size in the same area of memory, but this is so inflexible it's not practicable for many real life programming situations.
Answer 6:
As mentioned by delnan above, another issue is virtual-memory page fragmentation, which can occur when there are a lot of allocations and frees. A Windows .NET application relies on .NET's garbage-collection scheme to repack allocated memory, but this results in pauses in the running program. I'm not sure how long each pause is.
Answer 7:
I did much of my software design and programming on hard real-time systems. These are very critical real-time systems that control oil refineries, power plants, steel mills, copper smelters, petrochemical plants, and oil and natural gas pipeline storage and transport facilities. We could not allow any memory fragmentation that would force loss of functionality or a reboot of any control server, as the result would be at minimum financial loss and at worst catastrophic damage and loss of life.
We simply created three fixed buffer sizes: small, medium and large. At start-up we preallocated all of that memory in those three exact sizes, and we managed the memory ourselves via a simple linked list. No garbage collection was ever required, but we did, of course, have to explicitly allocate and deallocate the buffers.
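A rough sketch of that small/medium/large scheme is below; the sizes and counts are invented, since the answer does not give the original system's numbers.

    #include <cstddef>
    #include <vector>

    // Every buffer comes out of memory preallocated once at start-up; freed
    // buffers go back onto a simple free list, so the pools never fragment.
    struct Pool {
        std::size_t        block_size;
        std::vector<char>  storage;      // allocated once, never grows
        std::vector<void*> free_list;

        Pool(std::size_t size, std::size_t count)
            : block_size(size), storage(size * count)
        {
            free_list.reserve(count);
            for (std::size_t i = 0; i < count; ++i)
                free_list.push_back(storage.data() + i * size);
        }
        void* get()
        {
            if (free_list.empty()) return nullptr;   // pool exhausted
            void* p = free_list.back();
            free_list.pop_back();
            return p;
        }
        void put(void* p) { free_list.push_back(p); }
    };

    // Created before the system goes on-line (illustrative sizes/counts).
    static Pool small_pool (  64, 10000);
    static Pool medium_pool( 512,  2000);
    static Pool large_pool (4096,   500);

    void* get_buffer(std::size_t n)
    {
        if (n <=   64) return small_pool.get();
        if (n <=  512) return medium_pool.get();
        if (n <= 4096) return large_pool.get();
        return nullptr;    // oversized requests are treated as a design error
    }

    void put_buffer(void* p, std::size_t n)
    {
        if      (n <=   64) small_pool.put(p);
        else if (n <=  512) medium_pool.put(p);
        else                large_pool.put(p);
    }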
Source: https://stackoverflow.com/questions/21946870/is-it-possible-to-completely-avoid-heap-fragmentation