Heap fragmentation in 64 bit land

Submitted by 折月煮酒 on 2019-12-18 19:17:39

Question


In the past, when I've worked on long-running C++ daemons I've had to deal with heap fragmentation issues. Tricks like keeping a pool of my large allocations were necessary to keep from running out of contiguous heap space.

Is this still an issue with a 64 bit address space? Perf is not a concern for me, so I would prefer to simplify my code and not deal with things like buffer pools anymore. Does anyone have any experience or stories about this issue? I'm using Linux, but I imagine many of the same issues apply to Windows.
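(The sort of pooling trick described above might look roughly like the following minimal C++ sketch; the block size and pool depth are made-up values for illustration, not anything from the original post.)

    // Minimal sketch of a "pool of large allocations": grab the big blocks
    // up front, while the heap is still unfragmented, and recycle them for
    // the life of the daemon. Block size and count are illustrative only.
    #include <cstddef>
    #include <vector>

    class LargeBlockPool {
    public:
        LargeBlockPool(std::size_t block_size, std::size_t count) {
            for (std::size_t i = 0; i < count; ++i)
                free_.push_back(new char[block_size]);
        }
        ~LargeBlockPool() {
            for (char* p : free_) delete[] p;
        }
        char* acquire() {
            if (free_.empty()) return nullptr;  // pool exhausted; caller decides what to do
            char* p = free_.back();
            free_.pop_back();
            return p;
        }
        void release(char* p) { free_.push_back(p); }
    private:
        std::vector<char*> free_;
    };

A daemon would build one such pool per large block size at startup and route its big allocations through acquire()/release() instead of new/delete.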


Answer 1:


Is this still an issue with a 64 bit address space?

No, it is not still an issue.

You are correct that it was an issue on 32-bit systems, but it no longer is an issue on 64-bit systems.

The virtual address space on 64-bit systems is so large (2^48 bytes on today's x86_64 processors, and set to grow gradually toward 2^64 as new x86_64 processors come out) that running out of contiguous virtual address space due to fragmentation is practically impossible, for all but some highly contrived corner cases.

(A common error of intuition, caused by the fact that 64 is "only" double 32, is to think that a 64-bit address space is somehow roughly double a 32-bit one. In fact, a full 64-bit address space is 4 billion times as big as a 32-bit address space.)

Put another way: if it took your 32-bit daemon one week to fragment to the point where it couldn't allocate an x-byte block, then it would take at minimum a thousand years to fragment the 48-bit address space of today's x86_64 processors, and about 80 million years to fragment the planned full 64-bit address space.
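The arithmetic behind those figures is easy to check; the tiny program below just reproduces the ratios, taking the one-week baseline as the purely illustrative assumption it is:

    // Back-of-the-envelope check of the address-space ratios quoted above.
    #include <cstdio>

    int main() {
        const double vas32 = 4294967296.0;            // 2^32 bytes
        const double vas48 = 281474976710656.0;       // 2^48 bytes
        const double vas64 = 18446744073709551616.0;  // 2^64 bytes

        std::printf("48-bit / 32-bit: %.0f x\n", vas48 / vas32);  // 65536
        std::printf("64-bit / 32-bit: %.0f x\n", vas64 / vas32);  // ~4.29 billion

        // If fragmenting 2^32 bytes of address space takes one week:
        std::printf("48-bit: ~%.0f years\n", vas48 / vas32 / 52.0);               // ~1260 years
        std::printf("64-bit: ~%.0f million years\n", vas64 / vas32 / 52.0 / 1e6); // ~83 million years
        return 0;
    }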




Answer 2:


Heap fragmentation is just as much of an issue under 64-bit as under 32-bit. If you make lots of requests with varying lifetimes, then you are going to get a fragmented heap. Unfortunately, 64-bit operating systems don't really help with this, as they still can't shuffle the small bits of free memory around to form larger contiguous blocks.
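(As a rough illustration of that varying-lifetimes pattern, the toy program below interleaves short-lived and long-lived allocations. The sizes and counts are arbitrary, and whether the final large request actually fails depends on the allocator and on how big the address space is, which is exactly the point of the other answer.)

    // Toy illustration only: short-lived blocks are freed, but the
    // long-lived ones left between them can keep the allocator from
    // merging the holes into one large contiguous run.
    #include <cstdlib>
    #include <vector>

    int main() {
        std::vector<void*> long_lived;

        for (int i = 0; i < 100000; ++i) {
            void* short_lived = std::malloc(4096);  // freed almost immediately
            long_lived.push_back(std::malloc(64));  // kept for the process lifetime
            std::free(short_lived);                 // leaves a hole next to a live block
        }

        // On a 32-bit heap this large request may find no contiguous home;
        // on 64-bit the vast virtual address space usually absorbs it.
        void* big = std::malloc(512u * 1024u * 1024u);
        std::free(big);

        for (void* p : long_lived) std::free(p);
        return 0;
    }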

If you want to deal with heap fragmentation, you still have to use the same old tricks.

The only way a 64-bit OS could help here is if there were some amount of memory 'large enough' that you would never fragment it.




Answer 3:


If your process genuinely needs gigabytes of virtual address space, then upgrading to 64-bit really does instantly remove the need for workarounds.

But it's worth working out how much memory you expect your process to be using. If it's only in the region of a gigabyte or less, there's no way even crazy fragmentation would make you run out of 32-bit address space; if you do run out, a memory leak is the more likely culprit.
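Since the question mentions Linux, one quick way to see how much memory a process is really consuming is to read /proc/self/status; a minimal sketch (VmSize is the virtual size, VmRSS the resident set, both in kB):

    // Minimal Linux-only sketch: print the process's virtual size (VmSize)
    // and resident set (VmRSS) from /proc/self/status.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            if (line.rfind("VmSize:", 0) == 0 || line.rfind("VmRSS:", 0) == 0)
                std::cout << line << '\n';
        }
        return 0;
    }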

(Windows is more restrictive, by the way, since it reserves an impolite amount of address space in each process for the OS).



Source: https://stackoverflow.com/questions/314909/heap-fragmentation-in-64-bit-land
