std::unordered_map very high memory usage

Submitted by 我们两清 on 2019-11-27 07:46:36

Question


Yesterday I tried to use std::unordered_map, and this code left me confused about how much memory it used.

#include <list>
#include <string>
#include <unordered_map>
using namespace std;

typedef list<string> entityId_list;
struct tile_content {
   char cost;
   entityId_list entities;
};
unordered_map<int, tile_content> hash_map;

int main() {
   for (size_t i = 0; i < 19200; i++) {
      tile_content t;
      t.cost = 1;
      hash_map[i] = t;  // was `map[i] = t;`, which doesn't name the map declared above
   }
}

All of this code was compiled with MS VS2010 in debug mode. What I saw in Task Manager was about 1200 KB for the "clean" process, but after filling hash_map it used 8124 KB of memory. Is this normal behavior for unordered_map? Why is so much memory used?


Answer 1:


The unordered_map structure is designed to hold large numbers of objects in a way that makes adds, deletes, lookups, and orderless traverses efficient. It's not meant to be memory-efficient for small data structures. To avoid the penalties associated with resizing, it allocates many hash chain heads when it's first created.




Answer 2:


That's roughly 6 MB for ~20k objects, i.e. about 300 bytes per object. Consider what that buys: the hash table may well be sized to have several times more buckets than current entries; each bucket may itself be a pointer to a list or vector of colliding objects; every heap allocation involved in all of that has probably been rounded up to the nearest power of two; and you've got debug mode on, which may generate some extra bloat. It all sounds about right to me.

Anyway, you're not going to get sympathy for the memory or CPU efficiency of anything in a debug build ;-P. Microsoft can inject any slop they like in there, and the user has no grounds for performance expectations. If you find it's bad in an optimised build, then you've got something to talk about.

More generally, how it scales with size() is very important, but it's entirely legitimate to wonder how a program would fare with a huge number of relatively small unordered maps. It's worth noting that below a certain size(), even a brute-force search in a vector, a binary search in a sorted vector, or a binary tree may outperform an unordered map, as well as being more memory-efficient.




Answer 3:


This doesn't necessarily mean that the hash map uses so much memory, but that the process has requested that much memory from the OS.

This memory is then used to satisfy malloc/new requests by the program. Some (or perhaps most; I'm not sure) memory allocators request more memory from the OS than is needed at that point in time, for efficiency.

To know how much memory is used by the unordered_map I would use a memory profiler like perftools.



Source: https://stackoverflow.com/questions/9375450/stdunordered-map-very-high-memory-usage
