Overhead and implementation of using shared_ptr

綄美尐妖づ submitted on 2019-12-04 00:15:28
Matthieu M.

First question: using operator->

All the implementations I have seen cache a raw T* directly in the shared_ptr<T> object itself, so the pointer sits on the stack alongside the handle. operator-> is therefore a plain pointer read, with a cost comparable to using a stack-local T*: no overhead at all.
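A minimal sketch of that layout (toy_shared_ptr, control_block, and Widget are illustrative names, not the real libstdc++ types): the raw pointer lives in the handle, while the reference count lives in a separately allocated control block that operator-> never touches.

```cpp
#include <cassert>

// Hypothetical control block: in a real implementation this also holds
// the weak count and the deleter; only the use count matters here.
struct control_block {
    long use_count;
};

template <typename T>
class toy_shared_ptr {
    T* ptr_;               // cached raw pointer, read directly by operator->
    control_block* ctrl_;  // refcount lives elsewhere; untouched by operator->
public:
    explicit toy_shared_ptr(T* p) : ptr_(p), ctrl_(new control_block{1}) {}
    ~toy_shared_ptr() {
        if (--ctrl_->use_count == 0) { delete ptr_; delete ctrl_; }
    }
    toy_shared_ptr(const toy_shared_ptr&) = delete;  // copying omitted in this sketch
    T* operator->() const { return ptr_; }  // same cost as dereferencing a local T*
    T& operator*() const { return *ptr_; }
};

struct Widget { int value = 42; };

int get_value() {
    toy_shared_ptr<Widget> w(new Widget);
    return w->value;  // a single pointer dereference, no control-block access
}
```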

Second question: mutex/atomics

I expect libstdc++ to use atomics on x86 platforms, whether through the standard facilities or g++-specific intrinsics (in older versions). I believe the Boost implementation already did so.

I cannot, however, comment on ARM.

Note: since C++11 introduced move semantics, many copies are naturally avoided when using shared_ptr.
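The point about move semantics can be seen directly in the use count: copying a shared_ptr bumps the reference count, while moving one transfers ownership without touching it. The helper names below are illustrative.

```cpp
#include <cassert>
#include <memory>
#include <utility>

std::shared_ptr<int> make_value() {
    return std::make_shared<int>(7);   // moved out of the function, no extra ref
}

long count_after_copy() {
    auto p = make_value();
    auto q = p;                        // copy: refcount goes 1 -> 2
    return p.use_count();
}

long count_after_move() {
    auto p = make_value();
    auto q = std::move(p);             // move: refcount stays 1, p becomes null
    return q.use_count();
}
```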

Note: read about the correct usage of shared_ptr here; you can pass shared_ptr by reference (const or not) to avoid most copies and destructions, so the performance of those operations is rarely critical.
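To illustrate passing by reference versus by value (the function names here are illustrative): a by-value parameter is itself an extra owner for the duration of the call, while a const reference adds no owner and skips the refcount traffic entirely.

```cpp
#include <cassert>
#include <memory>

long observe_by_ref(const std::shared_ptr<int>& p) {
    return p.use_count();   // no new owner was created for this call
}

long observe_by_value(std::shared_ptr<int> p) {
    return p.use_count();   // the parameter itself is an extra owner
}

long count_via_ref() {
    auto p = std::make_shared<int>(1);
    return observe_by_ref(p);    // count stays at 1
}

long count_via_value() {
    auto p = std::make_shared<int>(1);
    return observe_by_value(p);  // count is 2 while the call is in flight
}
```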

GCC's shared_ptr will use no locking or atomics in single-threaded code. In multi-threaded code it will use atomic operations if an atomic compare-and-swap instruction is supported by the CPU; otherwise the reference counts are protected by a mutex. On i486 and later it uses atomics; the i386 lacks cmpxchg, so a mutex-based implementation is used there. I believe ARM uses atomics for the ARMv7 architecture and later.

(The same applies to both std::shared_ptr and std::tr1::shared_ptr.)
