memory-model

C++11: What prevents stores from lifting past the start of a lock's critical section?

可紊 submitted 2019-12-07 15:57:24
Question: My understanding is that a spinlock can be implemented using C++11 atomics with an acquire-CAS on lock and a release-store on unlock, something like this: class SpinLock { public: void Lock() { while (l_.test_and_set(std::memory_order_acquire)); } void Unlock() { l_.clear(std::memory_order_release); } private: std::atomic_flag l_ = ATOMIC_FLAG_INIT; }; Consider its use in a function that acquires a lock and then does a blind write to some shared location: int g_some_int_; void BlindWrite(int
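A runnable sketch of the spinlock from the question, with the truncated `BlindWrite` completed. The body of `BlindWrite` after `int` and the `AddFromTwoThreads` driver are assumptions added for illustration; only the `SpinLock` class and the names `g_some_int_`/`BlindWrite` come from the snippet.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

class SpinLock {
 public:
  void Lock() {
    // acquire: reads/writes inside the critical section cannot be
    // hoisted above a successful test_and_set
    while (l_.test_and_set(std::memory_order_acquire)) {}
  }
  void Unlock() {
    // release: reads/writes inside the critical section cannot sink
    // below the clear()
    l_.clear(std::memory_order_release);
  }
 private:
  std::atomic_flag l_ = ATOMIC_FLAG_INIT;
};

SpinLock g_lock;
int g_some_int_ = 0;

void BlindWrite(int v) {
  g_lock.Lock();
  g_some_int_ = v;  // plain store, protected by the lock
  g_lock.Unlock();
}

// Hypothetical driver: two threads race for the lock; the final value
// is whichever write happened last, never a torn or raced value.
int AddFromTwoThreads() {
  std::thread t1([] { BlindWrite(1); });
  std::thread t2([] { BlindWrite(2); });
  t1.join();
  t2.join();
  return g_some_int_;
}
```

The question's concern, whether the store to `g_some_int_` can be lifted above the acquire, is exactly what the acquire ordering on `test_and_set` forbids.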

Atomic pointers in c++ and passing objects between threads

馋奶兔 submitted 2019-12-07 03:06:07
Question: My question involves std::atomic and the data that this pointer points to. If in thread 1 I have Object A; std::atomic<Object*> ptr; int bar = 2; A.foo = 4; //foo is an int; ptr.store(&A); and if in thread 2 I observe that ptr points to A, can I be guaranteed that ptr->foo is 4 and bar is 2? Does the default memory model for the atomic pointer (sequentially consistent) guarantee that assignments to non-atomic data (in this case A.foo) that happen before an atomic store will be seen by other threads
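A minimal sketch of the publication pattern being asked about. `Object`, `foo`, `bar`, and `ptr` are taken from the question; the `producer`/`consumer` split is an assumed arrangement. The default seq_cst store is also a release store and the seq_cst load also an acquire load, so the load that observes the pointer synchronizes-with the store, making both earlier plain writes visible.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

struct Object { int foo = 0; };

Object A;
std::atomic<Object*> ptr{nullptr};
int bar = 0;

void producer() {
  bar = 2;
  A.foo = 4;      // non-atomic writes...
  ptr.store(&A);  // ...published by the (seq_cst) atomic store
}

int consumer() {
  Object* p;
  while ((p = ptr.load()) == nullptr) {}  // seq_cst load: also an acquire
  // the store synchronizes-with this load, so A.foo and bar are visible
  return p->foo + bar;
}
```

For publication alone, `memory_order_release` on the store and `memory_order_acquire` on the load would be sufficient; seq_cst is simply the default and is at least as strong.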

std::atomic<int> memory_order_relaxed VS volatile sig_atomic_t in a multithreaded program

↘锁芯ラ submitted 2019-12-07 02:21:57
Question: Does volatile sig_atomic_t give any memory-order guarantees? E.g. if I need to just load/store an integer, is it OK to use? E.g. here: volatile sig_atomic_t x = 0; ... void f() { std::thread t([&] {x = 1;}); while(x != 1) {/*waiting...*/} //done! } Is this correct code? Are there conditions under which it may not work? Note: this is an over-simplified example, i.e. I am not looking for a better solution for the given piece of code. I just want to understand what kind of behaviour I could expect from volatile
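A portable rewrite of the question's example, as a sketch. In C++, `volatile sig_atomic_t` gives no cross-thread guarantees (it is meant for communication with signal handlers within one thread), so the original loop is a data race. A `std::atomic<int>` fixes that; even `memory_order_relaxed` suffices here, because only the flag itself is being communicated and nothing else depends on its ordering.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> x{0};

int f() {
  std::thread t([] { x.store(1, std::memory_order_relaxed); });
  // relaxed is enough for a lone flag: the load is atomic (no tearing,
  // no UB) and the store becomes visible in finite time
  while (x.load(std::memory_order_relaxed) != 1) { /* waiting... */ }
  t.join();
  return x.load(std::memory_order_relaxed);
}
```

If other data were written before the flag and read after it, the store would need `memory_order_release` and the loads `memory_order_acquire`.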

What's “sequentially consistent executions are free of data races”?

一曲冷凌霜 submitted 2019-12-07 00:04:12
Question: In JLS, §17.4.5. Happens-before Order, it says that a program is correctly synchronized if and only if all sequentially consistent executions are free of data races. It only gives us a definition of "sequentially consistent"; it does not define "sequentially consistent executions". Only after knowing what "sequentially consistent executions" are can we discuss the topic further. So what's "sequentially consistent executions" and what's "sequentially
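The JLS definition mirrors the data-race-free guarantee that C++ also provides, so a C++ analog (an assumption of this digest, not from the question) may help: a "sequentially consistent execution" is one where all actions form a single interleaving consistent with each thread's program order. If no such interleaving contains two conflicting unsynchronized accesses, the program is correctly synchronized and the implementation must behave as if sequentially consistent.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex m;

// In every sequentially consistent interleaving of this program the two
// increments are separated by the mutex, so no SC execution contains a
// data race; hence the program is "correctly synchronized" and every
// real execution behaves like one of those interleavings.
int increment_twice() {
  auto inc = [] { std::lock_guard<std::mutex> g(m); ++counter; };
  std::thread t1(inc), t2(inc);
  t1.join();
  t2.join();
  return counter;
}
```

The practical force of the definition: you only need to audit the sequentially consistent interleavings for races; you never need to reason about weakly ordered executions of a correctly synchronized program.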

C++ memory model - does this example contain a data race?

China☆狼群 submitted 2019-12-06 18:54:04
Question: I was reading Bjarne Stroustrup's C++11 FAQ and I'm having trouble understanding an example in the memory model section. He gives the following code snippet: // start with x==0 and y==0 if (x) y = 1; // thread 1 if (y) x = 1; // thread 2 The FAQ says there is no data race here. I don't understand. The memory location x is read by thread 1 and written to by thread 2 without any synchronization (and the same goes for y). That's two accesses, one of which is a write. Isn't that the
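A runnable version of the FAQ snippet illustrating the resolution. Starting from x==0 and y==0, each thread's write is guarded by a read that can only ever observe 0, so no execution actually performs a write. A data race requires a conflicting access that occurs in some execution, not one that merely appears in the program text; since every execution consists only of reads, the program is race-free.

```cpp
#include <cassert>
#include <thread>

int x = 0, y = 0;

bool run_once() {
  std::thread t1([] { if (x) y = 1; });  // thread 1
  std::thread t2([] { if (y) x = 1; });  // thread 2
  t1.join();
  t2.join();
  // No execution writes either variable, so both stay 0 and there is
  // never a read racing with a write.
  return x == 0 && y == 0;
}
```

Contrast this with starting from x==1: then thread 1 really would write y while thread 2 reads it, and the program would have a race.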

Can atomic loads be merged in the C++ memory model?

≡放荡痞女 submitted 2019-12-06 17:04:03
Question: Consider the C++11 snippet below. For GCC and clang this compiles to two (sequentially consistent) loads of foo. (Editor's note: compilers do not optimize atomics; see this Q&A for more details, especially http://wg21.link/n4455, the standards discussion about the problems this could create, which the standard doesn't give programmers tools to work around. This language-lawyer Q&A is about the current standard, not what compilers do.) Does the C++ memory model allow the compiler to merge these two
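The snippet itself is not shown, so the following reconstruction is an assumption about its shape: two back-to-back seq_cst loads of the same atomic. Merging them would mean the compiler could force both reads to return the same value in every execution, as if a concurrent store can never land between them.

```cpp
#include <atomic>
#include <utility>

std::atomic<int> foo{0};

// GCC and clang emit two separate loads here. The language-lawyer
// question is whether the standard would also permit folding them into
// one load, which would make x == y in every execution even when
// another thread stores to foo between the two.
std::pair<int, int> load_twice() {
  int x = foo.load(std::memory_order_seq_cst);
  int y = foo.load(std::memory_order_seq_cst);
  return {x, y};
}
```

With no concurrent writer the two loads trivially agree; the interesting case is a writer thread, where merging would erase an otherwise observable intermediate value.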

Possible to use C11 fences to reason about writes from other threads?

天涯浪子 submitted 2019-12-06 14:22:58
Adve and Gharachorloo's report, in Figure 4b, provides the following example of a program that exhibits unexpected behavior in the absence of sequential consistency: My question is whether it is possible, using only C11 fences and memory_order_relaxed loads and stores, to ensure that register1, if written, will be written with the value 1. The reason this might be hard to guarantee in the abstract is that P1, P2, and P3 could be at different points in a pathological NUMA network with the property that P2 sees P1's write before P3 does, yet somehow P3 sees P2's write very quickly. The reason
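A sketch of the Figure 4b shape (P1 writes A; P2 waits for A, then writes B; P3 waits for B, then reads A into register1) using only relaxed accesses plus C11/C++11-style fences, written here in C++. The release fence in P2 before its store to B synchronizes with the acquire fence in P3 after its load of B, so P2's read of A (which saw 1) happens-before P3's read of A; read-read coherence then forces P3 to read 1 as well. The spin loops are an assumed way of expressing "if written".

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> A{0}, B{0};
int register1 = -1;

int run() {
  std::thread p1([] { A.store(1, std::memory_order_relaxed); });
  std::thread p2([] {
    while (A.load(std::memory_order_relaxed) != 1) {}
    // release fence + subsequent relaxed store: synchronizes with P3's
    // acquire fence once P3's load of B reads this store
    std::atomic_thread_fence(std::memory_order_release);
    B.store(1, std::memory_order_relaxed);
  });
  std::thread p3([] {
    while (B.load(std::memory_order_relaxed) != 1) {}
    std::atomic_thread_fence(std::memory_order_acquire);
    // P2's read of A==1 now happens-before this read, so coherence
    // rules out reading the older value 0
    register1 = A.load(std::memory_order_relaxed);
  });
  p1.join();
  p2.join();
  p3.join();
  return register1;
}
```

Note the argument does not need a fence in P1: it relies on coherence through P2's observation of A, not on a synchronizes-with edge from P1 itself.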

Memory ordering restrictions on x86 architecture

耗尽温柔 submitted 2019-12-06 12:38:13
Question: In his great book 'C++ Concurrency in Action', Anthony Williams writes the following (page 309): For example, on x86 and x86-64 architectures, atomic load operations are always the same, whether tagged memory_order_relaxed or memory_order_seq_cst (see section 5.3.3). This means that code written using relaxed memory ordering may work on systems with an x86 architecture, where it would fail on a system with a finer-grained set of memory-ordering instructions such as SPARC. Do I get this right
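A sketch of the trap Williams describes, using a message-passing pair not taken from the book. On x86 even relaxed atomic loads and stores compile to plain MOVs, and the hardware's strong ordering means replacing the release/acquire pair below with relaxed would usually appear to work there, while remaining incorrect C++ that can genuinely fail on more weakly ordered hardware.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

int message_pass() {
  std::thread producer([] {
    payload = 42;
    // release: portable ordering. Demoting this to relaxed would often
    // still "work" on x86 (same instruction), but not elsewhere.
    ready.store(true, std::memory_order_release);
  });
  while (!ready.load(std::memory_order_acquire)) {}  // acquire: portable
  int got = payload;  // guaranteed 42 under release/acquire
  producer.join();
  return got;
}
```

So the answer to "do I get this right" is essentially yes: x86 masks under-specified orderings; the orderings still matter for the source-level contract and for weaker targets.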

C++ memory model: do seq_cst loads synchronize with seq_cst stores?

断了今生、忘了曾经 submitted 2019-12-06 04:44:45
Question: In the C++ memory model, there is a total order on all loads and stores of all sequentially consistent operations. I'm wondering how this interacts with operations that have other memory orderings that are sequenced before/after sequentially consistent loads. For example, consider two threads: std::atomic<int> a(0); std::atomic<int> b(0); std::atomic<int> c(0); ////////////// // Thread T1 ////////////// // Signal that we've started running. a.store(1, std::memory_order_relaxed); // If T2's
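The excerpt is truncated, so the following completes the T1/T2 shape with assumed bodies. The key fact it illustrates: seq_cst loads do synchronize with the seq_cst stores they read from, because a seq_cst store is also a release operation and a seq_cst load is also an acquire operation. That synchronizes-with edge then orders the relaxed store sequenced before it.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> a{0};
std::atomic<int> b{0};

int run_t1_t2() {
  // Thread T1
  std::thread t1([] {
    a.store(1, std::memory_order_relaxed);  // signal: we've started
    b.store(1, std::memory_order_seq_cst);  // also a release store
  });
  // Thread T2 (here: the calling thread)
  while (b.load(std::memory_order_seq_cst) != 1) {}  // also an acquire load
  // The seq_cst load read T1's seq_cst store, so they synchronize-with
  // each other and the earlier relaxed store to a is visible.
  int r = a.load(std::memory_order_relaxed);
  t1.join();
  return r;
}
```

What seq_cst adds beyond this acquire/release behavior is membership in the single total order, which matters for patterns like Dekker-style mutual exclusion, not for simple flag passing.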

C++ memory_order_consume, kill_dependency, dependency-ordered-before, synchronizes-with

╄→гoц情女王★ submitted 2019-12-06 04:43:15
Question: I am reading C++ Concurrency in Action by Anthony Williams. Currently I am at the point where he describes memory_order_consume. After that block there is: Now that I've covered the basics of the memory orderings, it's time to look at the more complex parts. It scares me a little bit, because I don't fully understand several things: How does dependency-ordered-before differ from synchronizes-with? They both create a happens-before relationship. What is the exact difference? I am confused about the following example
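A sketch of the distinction (the example code is assumed, not Williams's). Synchronizes-with (release/acquire) orders everything after the acquire load; dependency-ordered-before (release/consume) orders only operations that carry a dependency from the consume load, such as dereferencing the loaded pointer. In practice, current compilers implement memory_order_consume as memory_order_acquire.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

struct Payload { int v; };
std::atomic<Payload*> pub{nullptr};

int consume_demo() {
  static Payload data;
  std::thread writer([] {
    data.v = 7;
    pub.store(&data, std::memory_order_release);
  });
  Payload* p;
  while ((p = pub.load(std::memory_order_consume)) == nullptr) {}
  // p->v carries a dependency from the consume load, so this read is
  // dependency-ordered after the release store and must see 7.
  // An unrelated non-dependent read would NOT be ordered by consume
  // (it would be ordered by acquire).
  int r = p->v;
  writer.join();
  return r;
}
```

kill_dependency exists to end such a dependency chain explicitly, telling the compiler it need not preserve ordering past that point.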