Efficient implementation of binary heaps

Submitted by 不问归期 on 2019-12-02 15:42:59

An interesting paper on this topic considers the effect of caching and paging on the overall layout of the heap; the idea is that a cache miss or page fault costs vastly more than almost any other operation in a data structure's implementation. The paper discusses a heap layout that addresses this:

You're Doing It Wrong by Poul-Henning Kamp
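As an illustrative sketch (not taken from the paper), here is the index math of the classic 1-based array layout for a binary heap. Moving one level down the tree doubles the index, so after the first few levels each step lands on a different cache line or page, which is exactly the access pattern the paper argues dominates the cost of heap operations:

```python
# Classic 1-based binary heap index arithmetic.
def parent(i):
    return i // 2

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# A sift-down from the root of a million-element heap visits indices
# 1, then one of 2..3, then one of 4..7, and so on: roughly 20 widely
# spaced array slots, most on distinct cache lines or pages.
```

The B-heap layout discussed in the paper rearranges these indices so that a node and its children usually share a VM page.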

As an elaboration on @TokenMacGuy's post, you might want to look into cache-oblivious data structures. The idea is to build data structures that, for arbitrary caching systems, minimize the number of cache misses. They're tricky, but they actually might be useful from your perspective since they perform well even when dealing with multi-layer cache systems (for example, registers / L1 / L2 / VM).

There's actually a paper detailing an optimal cache-oblivious priority queue that might be of interest. This data structure would have all sorts of advantages in terms of speed, since it would try to minimize the number of cache misses at every level.

I don't know if you missed this link on the wiki page for the binary heap, or if you decided it wasn't worth it, but either way: http://en.wikipedia.org/wiki/B-heap

On the first point: even having a "spare" slot in your array-based implementation isn't a waste. Many operations need a temporary element anyway, and rather than initializing a new one each time, a dedicated element at index [0] is handy.
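A minimal sketch of that idea (my own illustration, not code from the post): a 1-based array min-heap that reserves index 0 as the scratch slot. During a push or pop, the element being sifted is parked at index 0 while parents or children are shifted, instead of allocating a separate temporary:

```python
class MinHeap:
    """1-based array min-heap; index 0 is a dedicated scratch slot."""

    def __init__(self):
        self.a = [None]  # slot 0 reserved as the temporary element

    def push(self, x):
        self.a.append(x)
        i = len(self.a) - 1
        self.a[0] = x                       # park the new element
        while i > 1 and self.a[i // 2] > self.a[0]:
            self.a[i] = self.a[i // 2]      # shift the parent down
            i //= 2
        self.a[i] = self.a[0]               # drop it into its final slot

    def pop(self):
        top = self.a[1]
        last = self.a.pop()
        if len(self.a) > 1:
            self.a[0] = last                # park the element to sift down
            i, n = 1, len(self.a) - 1
            while 2 * i <= n:
                c = 2 * i
                if c < n and self.a[c + 1] < self.a[c]:
                    c += 1                  # pick the smaller child
                if self.a[c] >= self.a[0]:
                    break
                self.a[i] = self.a[c]       # shift the child up
                i = c
            self.a[i] = self.a[0]
        return top
```

Because the moving element sits at a fixed index, each loop iteration does one comparison and one move rather than a three-way swap.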
