Memory Fragmentation Profiler


Question


Are there any good memory fragmentation profilers? (A Linux/gcc version would be nice.) Valgrind cannot analyze this because the program uses custom malloc/free functions.

Thanks, Andrew


Answer 1:


I would start with mtrace. Once you have a trace, glibc comes with a perl script, mtrace(1), which finds leaks. However, the trace format is easy to understand, so it should be straightforward to process it into a fragmentation analysis.
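For reference, a minimal sketch of switching the trace on (this is standard glibc usage, nothing specific to the question): call mtrace() early, point the MALLOC_TRACE environment variable at a log file, and every allocation and free gets appended to that file.

/* build: gcc -g demo.c && MALLOC_TRACE=trace.log ./a.out
   then:  mtrace ./a.out trace.log   (leak report), or parse trace.log yourself */
#include <mcheck.h>    /* mtrace(), muntrace() -- glibc-specific */
#include <stdlib.h>

int main(void)
{
    mtrace();                  /* start appending every malloc/realloc/free to $MALLOC_TRACE */

    void *a = malloc(100);     /* allocations are logged with their address and size */
    void *b = malloc(200);
    free(a);                   /* frees are logged with just the address */
    b = realloc(b, 400);
    free(b);

    muntrace();                /* stop tracing */
    return 0;
}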




Answer 2:


I'm afraid the answer is Valgrind.

You can tell Valgrind which functions are used to make allocations, and how they do it, by adding Valgrind client requests to your code (so you need to modify and recompile your application, but the changes compile to no-ops when you're not debugging). The details are in the Valgrind manual, "Memory pools: working with custom allocators".
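For illustration, here is a compilable sketch of those client requests wrapped around a toy bump allocator; the pool_* names and the static arena are mine, not from the question. A real allocator would place the same macros wherever it hands blocks out and takes them back.

#include <stddef.h>
#include <valgrind/valgrind.h>   /* client requests; cheap no-ops outside Valgrind,
                                    and they compile away entirely if NVALGRIND is defined */

static char arena[1 << 20];      /* toy arena standing in for your allocator's memory */
static size_t arena_used;

void pool_init(void)
{
    /* args: pool handle (any distinguishing address), red-zone bytes, is_zeroed */
    VALGRIND_CREATE_MEMPOOL(arena, 0, 0);
}

void *pool_alloc(size_t n)
{
    void *p = arena + arena_used;            /* bump allocation, no overflow check (sketch only) */
    arena_used += (n + 15) & ~(size_t)15;
    VALGRIND_MEMPOOL_ALLOC(arena, p, n);     /* tell Valgrind this block is now in use */
    return p;
}

void pool_free(void *p)
{
    VALGRIND_MEMPOOL_FREE(arena, p);         /* block is now freed and inaccessible */
    /* a bump allocator never reuses memory; a real allocator would recycle it here */
}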

Once you've done this, you have two tools that allow you to diagnose your heap usage: Massif and DHAT (quick reminder: Valgrind is a suite of tools, it just runs the one we all know and love, Memcheck, by default).

Massif "is a heap profiler. It measures how much heap memory your program uses. This includes both the useful space, and the extra bytes allocated for book-keeping and alignment purposes. It can also measure the size of your program's stack(s), although it does not do so by default."

It can create "graphs", so it is kind of graphical:

19.63^                                               ###                      
     |                                               #                        
     |                                               #  ::                    
     |                                               #  : :::                 
     |                                      :::::::::#  : :  ::               
     |                                      :        #  : :  : ::             
     |                                      :        #  : :  : : :::          
     |                                      :        #  : :  : : :  ::        
     |                            :::::::::::        #  : :  : : :  : :::     
     |                            :         :        #  : :  : : :  : :  ::   
     |                        :::::         :        #  : :  : : :  : :  : :: 
     |                     @@@:   :         :        #  : :  : : :  : :  : : @
     |                   ::@  :   :         :        #  : :  : : :  : :  : : @
     |                :::: @  :   :         :        #  : :  : : :  : :  : : @
     |              :::  : @  :   :         :        #  : :  : : :  : :  : : @
     |            ::: :  : @  :   :         :        #  : :  : : :  : :  : : @
     |         :::: : :  : @  :   :         :        #  : :  : : :  : :  : : @
     |       :::  : : :  : @  :   :         :        #  : :  : : :  : :  : : @
     |    :::: :  : : :  : @  :   :         :        #  : :  : : :  : :  : : @
     |  :::  : :  : : :  : @  :   :         :        #  : :  : : :  : :  : : @
   0 +----------------------------------------------------------------------->KB     0                                                                   29.48

Number of snapshots: 25
 Detailed snapshots: [9, 14 (peak), 24]

What's more, there's a Massif Visualizer that produces really pretty graphs.

DHAT allows you to diagnose exactly how the application uses its heap: which parts are short-lived, which are kept for the whole program's life but used only at the beginning, and so on. Unfortunately it doesn't have any nice graphs or graphical tools to go with it; the output is plain text. Thankfully you can tell it how much data you want and how to sort it, so it's not as bad as it sounds.
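For reference, in the Valgrind releases current when this was written DHAT is the experimental tool invoked as valgrind --tool=exp-dhat ./your_program (later releases promote it to --tool=dhat and add an HTML viewer, dh_view.html); if I remember the options correctly, --show-top-n and --sort-by are what control how much output you get and how it is ordered.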




Answer 3:


I have trouble understanding how any tool you might find would understand the segment data structures of your custom memory management. You might be able to get the busy distribution (by hooking into malloc/free), but the free distribution (which essentially is the fragmentation) seems up in the air.

So why not add busy/free statistics/histograms to your custom memory manager? If bins are indexed by something proportional to log2(size), it's O(1) to keep these statistics: when you split and coalesce you already know the sizes, and you can find the bin by direct lookup using an index proportional to log2(size). (A sketch follows the bin examples below.)

e.g. histogram bin intervals:

[2^n, 2^(n+1) ) ...

(e.g. if you want finer bins, use log base sqrt(2) of size, which can be computed with 4 integer instructions on x86 [bit scan, compare, set, add])

Another set of reasonable bin sizes is the following half-open intervals:

[2^n, 2^n+2^(n-1) ), [2^n+2^(n-1), 2^(n+1) ) ...

(again easily calculable [bit scan, shift, and, add])
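Here is a small sketch of what that bookkeeping might look like (my own illustration, assuming the GCC/Clang bit-scan builtin); hook the track_* calls into your allocator's allocate/free/split/coalesce paths.

#include <stddef.h>
#include <stdio.h>

#define NBINS 64                  /* bin b covers block sizes in [2^b, 2^(b+1)) */

static size_t busy_blocks[NBINS], busy_bytes[NBINS];
static size_t free_blocks[NBINS], free_bytes[NBINS];

/* floor(log2(size)) -- a single bit-scan instruction on x86 via the builtin */
static unsigned bin_index(size_t size)
{
    return size ? (unsigned)(63 - __builtin_clzll(size)) : 0;
}

/* Call these wherever a block changes state; each update is O(1). */
void track_busy_add(size_t sz)    { unsigned b = bin_index(sz); busy_blocks[b]++; busy_bytes[b] += sz; }
void track_busy_remove(size_t sz) { unsigned b = bin_index(sz); busy_blocks[b]--; busy_bytes[b] -= sz; }
void track_free_add(size_t sz)    { unsigned b = bin_index(sz); free_blocks[b]++; free_bytes[b] += sz; }
void track_free_remove(size_t sz) { unsigned b = bin_index(sz); free_blocks[b]--; free_bytes[b] -= sz; }

/* Dump the histogram: lots of bytes spread over many small free blocks, with few
 * large free blocks left, is the signature of fragmentation. */
void dump_histogram(void)
{
    for (unsigned b = 0; b < NBINS; b++)
        if (busy_blocks[b] || free_blocks[b])
            printf("[2^%u, 2^%u): busy %zu blocks / %zu bytes | free %zu blocks / %zu bytes\n",
                   b, b + 1, busy_blocks[b], busy_bytes[b], free_blocks[b], free_bytes[b]);
}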




Answer 4:


nedmalloc is a very good custom allocator: it comes with source and is optimized to avoid fragmentation.

I would plug that in, and start looking at its internal logging for fragmentation statistics.



Source: https://stackoverflow.com/questions/1386776/memory-fragmentation-profiler
