Linked lists, arrays, and hardware memory caches


Arrays perform better not only because of caching, but also because of prefetching.

Caching has two main benefits. First, sequential elements may reside in the same cache line, so you fetch the line once and then use it for several elements (whereas in a linked list the next element usually lives somewhere else, so you don't enjoy that benefit). This advantage shrinks as elements get larger, and disappears once a single element exceeds the line size.
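As a rough illustration, assuming a 64-byte cache line (typical on x86-64, but not guaranteed everywhere), compare how many array elements versus list nodes fit in a single line:

```cpp
// Minimal sketch, assuming a 64-byte cache line. It compares how many
// payload values one line can hold for a plain array versus a singly
// linked list node carrying the same payload.
#include <cstddef>
#include <iostream>

struct Node {
    int value;   // 4-byte payload
    Node* next;  // 8-byte pointer on a 64-bit platform
};               // typically padded to 16 bytes

int main() {
    constexpr std::size_t kLineSize = 64;   // assumed cache-line size
    std::cout << "array ints per line: " << kLineSize / sizeof(int)  << '\n'   // 16
              << "list nodes per line: " << kLineSize / sizeof(Node) << '\n';  // 4
    // And even those 4 nodes only share a line if the allocator happens to
    // place them contiguously -- with separate heap allocations each node
    // usually lands on a different line.
}
```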

The second benefit is more subtle: you get better utilization of the cache, because its set-associative organization favors sequentially allocated data. An array up to the cache size can fit in it entirely, while the scattered heap addresses of a linked list's nodes may map to the same cache sets and cause conflict misses (thrashing) even when the list's total size is smaller than the cache.
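Here is a sketch of what those "collisions" mean in practice. It assumes a 32 KiB, 8-way set-associative L1 cache with 64-byte lines (so 64 sets), which is a common but by no means universal configuration; the `cache_set` helper and the constants are purely illustrative:

```cpp
// Minimal sketch of why "fits in the cache" is not the whole story.
// Assumes a 32 KiB, 8-way L1 with 64-byte lines -> 64 sets.
#include <cstdint>
#include <iostream>
#include <vector>

constexpr std::uintptr_t kLineSize = 64;
constexpr std::uintptr_t kNumSets  = 64;   // 32 KiB / (64 B * 8 ways)

std::uintptr_t cache_set(const void* p) {
    auto addr = reinterpret_cast<std::uintptr_t>(p);
    return (addr / kLineSize) % kNumSets;  // index bits select the set
}

int main() {
    // Contiguous array elements walk through the sets evenly.
    std::vector<int> arr(1024);
    std::cout << "array: sets " << cache_set(&arr[0]) << ", "
              << cache_set(&arr[16]) << ", " << cache_set(&arr[32]) << '\n';

    // Separately allocated nodes land wherever the heap puts them; if many
    // of them share a set index, they compete for that set's 8 ways even
    // though the total data would fit in the cache.
    int* a = new int; int* b = new int; int* c = new int;
    std::cout << "nodes: sets " << cache_set(a) << ", "
              << cache_set(b) << ", " << cache_set(c) << '\n';
    delete a; delete b; delete c;
}
```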

However, aside from caching, the bigger benefit of spatially contiguous structures is prefetching. Most CPUs automatically prefetch the next cache lines when they detect a streaming access pattern such as an array traversal, which can hide essentially all misses during sequential access. A linked list gives the hardware nothing to predict: the address of the next node is unknown until the current node has been loaded.
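A rough micro-benchmark can make this visible. The sketch below sums the same values twice: once by streaming through a contiguous `std::vector`, once by chasing pointers through nodes linked in shuffled order. The sizes, seed, and timing harness are arbitrary choices, and the actual numbers will depend heavily on the machine:

```cpp
// Rough micro-benchmark sketch: prefetch-friendly streaming sum vs.
// prefetch-hostile pointer chasing over the same data.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

struct Node { long value; Node* next; };

int main() {
    constexpr std::size_t n = 1 << 22;   // ~4M elements

    std::vector<long> arr(n, 1);

    // Build a linked list over the same values, but link the nodes in a
    // random order so each ->next hop jumps to an unpredictable address.
    std::vector<Node> pool(n);
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    for (std::size_t i = 0; i + 1 < n; ++i) {
        pool[order[i]].value = 1;
        pool[order[i]].next  = &pool[order[i + 1]];
    }
    pool[order[n - 1]].value = 1;
    pool[order[n - 1]].next  = nullptr;

    auto time = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        long s = f();
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "sum=" << s << " took "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms\n";
    };

    time([&] { return std::accumulate(arr.begin(), arr.end(), 0L); });
    time([&] {
        long s = 0;
        for (Node* p = &pool[order[0]]; p; p = p->next) s += p->value;
        return s;
    });
}
```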

On the other hand, all of these benefits are just optimizations: they speed things up by a constant factor, but can never make up for an asymptotic difference such as the O(1) mid-sequence insertion a list provides (versus O(n) for an array that must shift elements). Ultimately you need to benchmark your code to see whether those operations actually occur and become a bottleneck; if so, a hybrid approach may be worthwhile, such as the unrolled-list sketch below.
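One possible hybrid, assuming a mostly append-and-iterate workload, is an unrolled list whose nodes each hold a small contiguous block of elements. The `UnrolledList` below is only a minimal sketch, not a drop-in container:

```cpp
// Unrolled-list sketch: each node stores a cache-line-sized block of
// elements, so iteration streams through contiguous memory inside a
// block, while growing the structure never reallocates or shifts
// existing elements.
#include <array>
#include <cstddef>
#include <memory>

template <typename T, std::size_t BlockSize = 64 / sizeof(T)>
class UnrolledList {
    struct Block {
        std::array<T, BlockSize> items;   // contiguous storage
        std::size_t count = 0;
        std::unique_ptr<Block> next;
    };
    std::unique_ptr<Block> head_;
    Block* tail_ = nullptr;

public:
    void push_back(const T& v) {
        if (!tail_ || tail_->count == BlockSize) {   // start a new block
            auto blk = std::make_unique<Block>();
            Block* raw = blk.get();
            if (tail_) tail_->next = std::move(blk);
            else       head_       = std::move(blk);
            tail_ = raw;
        }
        tail_->items[tail_->count++] = v;
    }

    template <typename Fn>
    void for_each(Fn fn) const {
        for (const Block* b = head_.get(); b; b = b->next.get())
            for (std::size_t i = 0; i < b->count; ++i)
                fn(b->items[i]);
    }
};
```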
