memory-model

C++11 memory model and accessing different members of the same struct in different threads

Submitted by 安稳与你 on 2019-12-04 16:10:53
Question: Assume you've got the following definitions: struct X { char a, b; }; X x; Now assume you have two threads, one of which reads and writes x.a but never accesses x.b, while the other reads and writes x.b but never accesses x.a. Neither thread uses any locks or other synchronization primitives. Is this guaranteed to work in C++11? Or does it count as accessing the same object, and therefore need a lock? Answer 1: It's safe. Quoting C++11 [intro.memory]p3: "A memory location is either an object of scalar type or a maximal sequence of adjacent bit-fields all having non-zero width." …
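A minimal sketch of the scenario being asked about, assuming nothing beyond the struct from the question (the loop bodies are mine): each thread touches only its own member, and because x.a and x.b are distinct memory locations this needs no locks.

#include <thread>

struct X { char a, b; };
X x;

int main() {
    // Thread 1 only ever touches x.a; thread 2 only ever touches x.b.
    std::thread t1([] { for (int i = 0; i < 1000; ++i) x.a = static_cast<char>(i); });
    std::thread t2([] { for (int i = 0; i < 1000; ++i) x.b = static_cast<char>(i); });
    t1.join();
    t2.join();
}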

Cache coherence literature generally only refers to store buffers, not read buffers. Yet one somehow needs both?

Submitted by 為{幸葍}努か on 2019-12-04 13:41:07
When reading about consistency models (namely x86-TSO), authors generally resort to models with a bunch of CPUs, their associated store buffers, and their private caches. If my understanding is correct, a store buffer can be described as a queue into which a CPU puts any store it wants to commit to memory, so, as the name states, it buffers stores. But when I read those papers, they tend to talk about the interaction of loads and stores, with statements such as "a later load can pass an earlier store", which is slightly confusing, as they almost seem to be talking as…
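The phrase "a later load can pass an earlier store" is usually illustrated with the store-buffer litmus test; here is a hedged C++ rendering of it (the variable names are mine, not from any paper). Each thread's store can sit in its private store buffer while the subsequent load of the other location completes, so on x86-TSO both loads may return 0.

#include <atomic>
#include <thread>

std::atomic<int> X{0}, Y{0};
int r1, r2;

int main() {
    std::thread t1([] {
        X.store(1, std::memory_order_relaxed);   // may linger in t1's store buffer
        r1 = Y.load(std::memory_order_relaxed);  // the later load can complete first
    });
    std::thread t2([] {
        Y.store(1, std::memory_order_relaxed);
        r2 = X.load(std::memory_order_relaxed);
    });
    t1.join(); t2.join();
    // Permitted outcome on TSO hardware: r1 == 0 && r2 == 0.
}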

C++ memory_order_consume, kill_dependency, dependency-ordered-before, synchronizes-with

Submitted by 冷暖自知 on 2019-12-04 12:50:12
I am reading C++ Concurrency in Action by Anthony Williams. I am currently at the point where he describes memory_order_consume. After that block there is: "Now that I've covered the basics of the memory orderings, it's time to look at the more complex parts." It scares me a little bit, because I don't fully understand several things: How does dependency-ordered-before differ from synchronizes-with? They both create a happens-before relationship; what is the exact difference? I am also confused about the following example: int global_data[]={ … }; std::atomic<int> index; void f() { int i=index.load(std::memory_order…
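Not the book's exact listing, but a sketch of the idea behind that example (the values and function bodies are my own guesses): a consume load creates dependency-ordered-before only for expressions that carry a dependency on the loaded value, such as the array index below, whereas an acquire load would synchronize-with the release store and order everything after it.

#include <atomic>
#include <cassert>
#include <thread>

int global_data[] = {1, 2, 3, 4, 5};
std::atomic<int> index{-1};

void writer() {
    global_data[2] = 42;                        // plain write to the payload
    index.store(2, std::memory_order_release);  // publish which slot is ready
}

void reader() {
    int i = index.load(std::memory_order_consume);  // most compilers strengthen consume to acquire
    if (i >= 0)
        assert(global_data[i] == 42);  // this load carries a dependency on i
}

int main() {
    std::thread w(writer), r(reader);
    w.join(); r.join();
}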

C++ memory model: do seq_cst loads synchronize with seq_cst stores?

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-12-04 09:47:11
In the C++ memory model, there is a total order on all loads and stores of all sequentially consistent operations. I'm wondering how this interacts with operations that have other memory orderings and are sequenced before/after sequentially consistent loads. For example, consider two threads: std::atomic<int> a(0); std::atomic<int> b(0); std::atomic<int> c(0); ////////////// // Thread T1 ////////////// // Signal that we've started running. a.store(1, std::memory_order_relaxed); // If T2's store to b occurs before our load below in the total // order on sequentially consistent operations, set…
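A sketch of the specific property the question is about (my own minimal example, not the code from the post): a seq_cst store is also a release store and a seq_cst load is also an acquire load, so when the load reads the stored value the two synchronize with each other and earlier relaxed writes become visible.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> a{0}, b{0};

void t1() {
    a.store(1, std::memory_order_relaxed);  // payload
    b.store(1, std::memory_order_seq_cst);  // acts as a release store too
}

void t2() {
    if (b.load(std::memory_order_seq_cst) == 1)          // acts as an acquire load too
        assert(a.load(std::memory_order_relaxed) == 1);  // guaranteed to see the payload
}

int main() {
    std::thread x(t1), y(t2);
    x.join(); y.join();
}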

How can memory_order_relaxed work for incrementing atomic reference counts in smart pointers?

Submitted by 我是研究僧i on 2019-12-04 08:30:49
Question: Consider the following code snippet taken from Herb Sutter's talk on atomics. The smart_ptr class contains a pimpl object called control_block_ptr containing the reference count refs. // Thread A: // smart_ptr copy ctor smart_ptr(const smart_ptr& other) { ... control_block_ptr = other.control_block_ptr; control_block_ptr->refs.fetch_add(1, memory_order_relaxed); ... } // Thread D: // smart_ptr destructor ~smart_ptr() { if (control_block_ptr->refs.fetch_sub(1, memory_order_acq_rel) == 0) {…
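A minimal sketch of the counting scheme the snippet relies on, with my own names rather than Sutter's: the increment can be relaxed because taking another reference publishes nothing new, while the decrement needs acq_rel so the thread that drops the last reference observes every prior use of the object before deleting it. Note that fetch_sub returns the previous value, so "we were the last owner" corresponds to a result of 1.

#include <atomic>

struct control_block {
    std::atomic<int> refs{1};
};

struct smart_ptr_sketch {
    control_block* cb;

    explicit smart_ptr_sketch(control_block* c) : cb(c) {}

    smart_ptr_sketch(const smart_ptr_sketch& other) : cb(other.cb) {
        cb->refs.fetch_add(1, std::memory_order_relaxed);  // no ordering needed here
    }

    ~smart_ptr_sketch() {
        if (cb->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete cb;  // last owner: acquire makes all prior uses visible before the delete
    }
};

int main() {
    smart_ptr_sketch p(new control_block());
    smart_ptr_sketch q(p);  // refs: 1 -> 2, relaxed increment is sufficient
}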

What does “store-buffer forwarding” mean in the Intel developer's manual?

Submitted by 痞子三分冷 on 2019-12-04 02:13:22
The Intel 64 and IA-32 Architectures Software Developer's Manual says the following about re-ordering of actions by a single processor (Section 8.2.2, "Memory Ordering in P6 and More Recent Processor Families"): "Reads may be reordered with older writes to different locations but not with older writes to the same location." Then, below, when discussing points where this is relaxed compared to earlier processors, it says: "Store-buffer forwarding, when a read passes a write to the same memory location." As far as I can tell, "store-buffer forwarding" isn't precisely defined anywhere (and neither is…
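A hedged C++ rendering of the litmus test behind that sentence (the names are mine, and relaxed atomics stand in for the plain MOVs the manual discusses): each core may satisfy a load of a location it has just stored directly from its own store buffer, before that store is globally visible, so r1 == 1, r3 == 1, r2 == 0, r4 == 0 is a permitted outcome.

#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2, r3, r4;

int main() {
    std::thread p0([] {
        x.store(1, std::memory_order_relaxed);
        r1 = x.load(std::memory_order_relaxed);  // forwarded from p0's store buffer
        r2 = y.load(std::memory_order_relaxed);
    });
    std::thread p1([] {
        y.store(1, std::memory_order_relaxed);
        r3 = y.load(std::memory_order_relaxed);  // forwarded from p1's store buffer
        r4 = x.load(std::memory_order_relaxed);
    });
    p0.join(); p1.join();
}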

Memory Model: preventing store-release and load-acquire reordering

Submitted by 本小妞迷上赌 on 2019-12-03 19:48:07
Question: It is known that, unlike Java's volatiles, .NET's allow a volatile write to be reordered with a following volatile read from another location. When this is a problem, placing MemoryBarrier between them is recommended, or Interlocked.Exchange can be used instead of the volatile write. That works, but MemoryBarrier can be a performance killer when used in highly optimized lock-free code. I thought about it a bit and came up with an idea. I want somebody to tell me if I took the right way. So, the…
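The question is about .NET volatiles, but the same write-then-read reordering exists for C++ atomics, so here is the pattern translated into C++ as an illustration (my translation, not the poster's code): a release store followed by an acquire load of a different location may be reordered, and a full fence between them, roughly what Thread.MemoryBarrier() provides in .NET, restores the ordering.

#include <atomic>

std::atomic<int> flag_a{0}, flag_b{0};

int without_fence() {
    flag_a.store(1, std::memory_order_release);     // analogue of the volatile write
    return flag_b.load(std::memory_order_acquire);  // may effectively move above the store
}

int with_fence() {
    flag_a.store(1, std::memory_order_release);
    std::atomic_thread_fence(std::memory_order_seq_cst);  // analogue of MemoryBarrier
    return flag_b.load(std::memory_order_acquire);        // can no longer pass the store
}

int main() {
    without_fence();
    with_fence();
}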

Why is (or isn't) setting fields in a constructor thread-safe?

Submitted by 爱⌒轻易说出口 on 2019-12-03 16:29:34
Question: Let's say you have a simple class like this: class MyClass { private readonly int a; private int b; public MyClass(int a, int b) { this.a = a; this.b = b; } public int A { get { return a; } } public int B { get { return b; } } } I could use this class in a multi-threaded manner: MyClass value = null; Task.Run(() => { while (true) { value = new MyClass(1, 1); Thread.Sleep(10); } }); while (true) { MyClass result = value; if (result != null && (result.A != 1 || result.B != 1)) { throw new…
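The question is C#, but the underlying issue is unsynchronized publication, so here is the same shape expressed with C++ atomics as an illustration (my translation, not the poster's code): publishing the object through a release store and reading it with an acquire load guarantees that a reader which sees the pointer also sees the fully constructed fields; with a plain non-atomic pointer the program would simply contain a data race.

#include <atomic>
#include <cassert>
#include <thread>

struct MyClass {
    int a, b;
    MyClass(int a_, int b_) : a(a_), b(b_) {}
};

std::atomic<MyClass*> value{nullptr};

void writer() {
    value.store(new MyClass(1, 1), std::memory_order_release);  // publish after construction
}

void reader() {
    MyClass* result = value.load(std::memory_order_acquire);
    if (result != nullptr)
        assert(result->a == 1 && result->b == 1);  // fields are guaranteed to be visible
}

int main() {
    std::thread w(writer), r(reader);
    w.join(); r.join();
    delete value.load();
}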

Concurrency and memory models

Submitted by 萝らか妹 on 2019-12-03 12:44:49
I'm watching this video by Herb Sutter on GPGPU and the new C++ AMP library. He talks about memory models and mentions weak memory models and strong memory models; I think he's referring to read/write ordering and so on, but I'm not sure. Google turns up some interesting results (mostly scientific papers) on memory models, but can someone explain what a weak memory model and a strong memory model are, and how they relate to concurrency? In terms of concurrency, a memory model specifies the constraints on data accesses, and the conditions under which data written by one thread…
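A small message-passing example to make the distinction concrete (my own illustration, not from the talk): on a strong model such as x86-TSO the consumer can never observe the flag set while the data is still 0, whereas a weak model (ARM, POWER, or the C++ abstract machine with relaxed atomics) permits exactly that unless release/acquire ordering is requested.

#include <atomic>
#include <thread>

std::atomic<int> data{0};
std::atomic<int> flag{0};

void producer() {
    data.store(42, std::memory_order_relaxed);
    flag.store(1, std::memory_order_relaxed);  // on a weak model this may become visible first
}

int consumer() {
    if (flag.load(std::memory_order_relaxed) == 1)
        return data.load(std::memory_order_relaxed);  // may still read 0 on a weak model
    return -1;
}

int main() {
    std::thread p(producer), c(consumer);
    p.join(); c.join();
}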

C++11 memory model and accessing different members of the same struct in different threads

Submitted by 岁酱吖の on 2019-12-03 10:07:27
Assume you've got the following definitions: struct X { char a, b; }; X x; Now assume you have two threads, one of which reads and writes x.a but never accesses x.b, while the other reads and writes x.b but never accesses x.a. Neither thread uses any locks or other synchronization primitives. Is this guaranteed to work in C++11? Or does it count as accessing the same object, and therefore need a lock? It's safe. Quoting C++11 [intro.memory]p3: "A memory location is either an object of scalar type or a maximal sequence of adjacent bit-fields all having non-zero width." [Note: Various…
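To see why the quoted wording matters, contrast the struct from the question with one that uses adjacent non-zero-width bit-fields (my illustration, not from the answer): a and b are two distinct memory locations and may be written from different threads without synchronization, while c and d share a single memory location, so concurrent unsynchronized writes to them are a data race.

struct X { char a, b; };         // two memory locations: concurrent writes are fine
struct Y { int c : 4, d : 4; };  // one memory location: concurrent writes are a race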