Is there a contiguously stored hash map data structure?

Submitted on 2019-12-08 02:54:18

Question


Think of collections of different types like Position, Color, Name. Instances of these types are linked by using the same key across collections. Keys are globally unique identifiers, 64 bits long. Currently I use hash maps, but that is not ideal.

// custom types
struct Position { float x, y, z; bool is_static; }; // "static" is a keyword, so renamed
enum Color { RED, BLUE, GREEN };

// collections
std::unordered_map<uint64_t, Position> positions;
std::unordered_map<uint64_t, Color> colors;
std::unordered_map<uint64_t, std::string> names;

// add some data
// ...

// only access positions collection,
// so the key is not needed here
for (auto i = positions.begin(); i != positions.end(); ++i) {
    if (i->second.is_static) continue;
    i->second.x = (rand() % 1000) / 1000.f;
    i->second.y = (rand() % 1000) / 1000.f;
    i->second.z = (rand() % 1000) / 1000.f;
}

// access to all three collections,
// so we need the key here
for (auto i = positions.begin(); i != positions.end(); ++i) {
    uint64_t id          = i->first;
    const auto& position = i->second;
    auto color           = colors.find(id);
    auto name            = names.find(id);
    if (color == colors.end() || name == names.end()) continue;
    draw(name->second, position, color->second);
}

I keep the collections separate, but as you can see, instances from multiple collections are sometimes needed at the same time. Of course, I also need to add and delete single elements from time to time, but those cases are not performance-critical.

Now I'd like to optimize iteration over the individual collections. To that end, I want the collections stored contiguously, which is part of the idea of data-oriented design. However, I still need fast access to individual instances. A plain array indexed by key doesn't work: with 64-bit keys it would waste far too much memory, and not every instance of one type has a counterpart of another type. Hash maps, on the other hand, are not optimal for iteration.

I think the data structure has to use an array internally. Which data structure should I use here? And is there such a data structure implemented in the C++ standard library?


Answer 1:


Create a std::unordered_map from your keys to offsets into a std::vector.

Store your elements in the std::vector. When you want to remove an element, swap it with the last element of the std::vector and pop the last element. Then go through the unordered_map that stores indexes into the std::vector and repair the entry that pointed to the last element so that it points to where that element now lives.

Removing an element is now O(n), because finding the map entry that pointed at the last element requires a scan over the map. If you remove a bunch of elements at once, you can handle all of them in a single O(n) pass with a little creativity.

Adding an element remains O(1).

Iterating over all of the elements means iterating over the std::vector. If you need the key (or its hash) while iterating, store it redundantly alongside each element, or calculate it on the fly.

Wrap the std::vector and std::unordered_map behind a type that exposes a map-like interface and enforces the above rules; otherwise the invariants will not be maintained.
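
A minimal sketch of such a wrapper (the name dense_map and its interface are my own illustration, not a standard type; erase uses the O(n) repair scan described above):

#include <cstdint>
#include <unordered_map>
#include <vector>

template <typename T>
class dense_map {
    std::unordered_map<uint64_t, size_t> index_; // key -> offset into data_
    std::vector<T> data_;                        // contiguous element storage

public:
    void insert(uint64_t key, T value) {         // amortized O(1)
        auto it = index_.find(key);
        if (it != index_.end()) { data_[it->second] = std::move(value); return; }
        index_.emplace(key, data_.size());
        data_.push_back(std::move(value));
    }

    T* find(uint64_t key) {                      // O(1) average
        auto it = index_.find(key);
        return it == index_.end() ? nullptr : &data_[it->second];
    }

    void erase(uint64_t key) {                   // O(n): repair scan below
        auto it = index_.find(key);
        if (it == index_.end()) return;
        size_t hole = it->second;
        size_t last = data_.size() - 1;
        if (hole != last) {
            data_[hole] = std::move(data_[last]);  // swap-and-pop
            for (auto& kv : index_)                // repair the moved entry
                if (kv.second == last) { kv.second = hole; break; }
        }
        data_.pop_back();
        index_.erase(it);
    }

    // Iteration is a linear scan over contiguous memory.
    auto begin() { return data_.begin(); }
    auto end()   { return data_.end(); }
};

With this, the position-update loop from the question becomes a plain scan: for (auto& p : positions) { if (p.is_static) continue; ... }.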

As an alternative to O(n) removal, keep a parallel std::vector storing std::unordered_map<???>::iterators. Keep careful track of when a rehash occurs in the std::unordered_map (rehashing is deterministic, but you have to work out when it happens yourself), and rebuild the parallel vector when it does, since a rehash invalidates all iterators. Rehashes occur rarely, so this has an amortized constant cost¹. When you erase, you can now update the index stored in the unordered_map in O(1) time (remember to update the parallel std::vector as well, swapping the last entry into the deleted slot). You could store this bookkeeping in the same std::vector as the raw data, but that would bulk it up (which will slow down iteration).
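
If tracking rehashes sounds fragile, a simpler route to O(1) average erase (my own variant, not part of the answer above) is the "bulk it up" option just mentioned: store each element's key next to it in the vector, so the repair step is a single hash lookup instead of a scan or an iterator cache. A sketch under that assumption:

#include <cstdint>
#include <unordered_map>
#include <vector>

struct Position { float x, y, z; bool is_static; };

// Each slot carries its own key, so the map entry of the element that
// gets swapped into the hole can be repaired in O(1) average time.
struct Entry { uint64_t key; Position value; };

std::unordered_map<uint64_t, size_t> index;
std::vector<Entry> data;

void erase(uint64_t key) {
    auto it = index.find(key);
    if (it == index.end()) return;
    size_t hole = it->second;
    size_t last = data.size() - 1;
    if (hole != last) {
        data[hole] = data[last];        // swap-and-pop
        index[data[hole].key] = hole;   // O(1) average repair, no scan
    }
    data.pop_back();
    index.erase(it);
}

The cost is one extra uint64_t per element in the hot array, which is exactly the iteration-speed trade-off the answer warns about.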


¹ Rehashes happen when the container's size reaches max_load_factor() * bucket_count(). The bucket count then grows exponentially, and the elements are moved to new buckets. Much like std::vector's growth strategy, exponential growth guarantees that the total number of element moves is linear in the number of insertions (with a growth factor of 2, n insertions cause at most about 2n moves in total), so insertion stays amortized constant. Rebuilding your reverse table by hand is no more expensive than moving all of the existing elements, so it too adds only an amortized constant factor to insertion.



Source: https://stackoverflow.com/questions/22284364/is-there-a-contiguously-stored-hash-map-data-structure
