memory-model

Java memory model: compiler rearranging code lines

。_饼干妹妹 submitted on 2019-12-03 07:27:27
It is well known that the Java language allows compilers to rearrange lines of compiled code as long as the reordering makes no difference to the code's semantics. However, the compiler is only required to preserve the semantics as seen from the current thread. If this reordering affects semantics in a multithreaded situation, it usually causes concurrency issues (memory visibility). My question(s): What is achieved by allowing this freedom to the compiler? Is it really possible for the compiler to produce more efficient code by rearranging it? I am yet to see a practical case for
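
To make the benefit of this freedom concrete, here is a minimal C++ sketch of my own (not from the question; the same reasoning applies to the Java JIT): when a field is not marked volatile/atomic, the compiler may hoist its read out of a loop, trading a memory access per iteration for a single register read.

    #include <atomic>

    int plain_flag = 0;             // ordinary variable: reads may be hoisted or reordered
    std::atomic<int> sync_flag{0};  // atomic: every read must be able to observe other threads' writes

    long spin_plain() {
        long iterations = 0;
        // The compiler may read plain_flag once, keep it in a register, and turn this
        // into an infinite (or zero-iteration) loop -- single-threaded semantics are
        // unchanged and the generated code is faster, but another thread's write to
        // plain_flag may never be observed.
        while (plain_flag == 0) {
            ++iterations;
        }
        return iterations;
    }

    long spin_sync() {
        long iterations = 0;
        // With an atomic (roughly the C++ analogue of a Java volatile), each iteration
        // performs a real load, so a write from another thread is eventually seen.
        while (sync_flag.load(std::memory_order_seq_cst) == 0) {
            ++iterations;
        }
        return iterations;
    }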

Why is (or isn't) setting fields in a constructor thread-safe?

我只是一个虾纸丫 submitted on 2019-12-03 05:45:36
Let's say you have a simple class like this: class MyClass { private readonly int a; private int b; public MyClass(int a, int b) { this.a = a; this.b = b; } public int A { get { return a; } } public int B { get { return b; } } } I could use this class in a multi-threaded manner: MyClass value = null; Task.Run(() => { while (true) { value = new MyClass(1, 1); Thread.Sleep(10); } }); while (true) { MyClass result = value; if (result != null && (result.A != 1 || result.B != 1)) { throw new Exception(); } Thread.Sleep(10); } My question is: will I ever see this (or other similar multi-threaded
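
The underlying issue is safe publication of the newly constructed object. Below is a C++ sketch of my own (names hypothetical, not from the question; C# gives somewhat different guarantees, e.g. for readonly fields) showing the race-free way to publish: a release store paired with an acquire load guarantees that a reader which sees the pointer also sees the fully initialized fields.

    #include <atomic>
    #include <stdexcept>

    struct MyClass {
        const int a;
        int b;
        MyClass(int a_, int b_) : a(a_), b(b_) {}
    };

    std::atomic<MyClass*> value{nullptr};

    void writer() {
        MyClass* p = new MyClass(1, 1);
        // Release: all constructor writes happen-before this store.
        value.store(p, std::memory_order_release);
    }

    void reader() {
        // Acquire: pairs with the release store above.
        MyClass* p = value.load(std::memory_order_acquire);
        if (p != nullptr && (p->a != 1 || p->b != 1)) {
            throw std::runtime_error("saw a partially constructed object");
        }
    }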

Could the JIT collapse two volatile reads as one in certain expressions?

◇◆丶佛笑我妖孽 submitted on 2019-12-03 03:39:43
Question: Suppose we have a volatile int a. One thread does while (true) { a = 1; a = 0; } and another thread does while (true) { System.out.println(a+a); } Now, would it be illegal for a JIT compiler to emit assembly corresponding to 2*a instead of a+a? On one hand, the very purpose of a volatile read is that it should always be fresh from memory. On the other hand, there's no synchronization point between the two reads, so I can't see that it would be illegal to treat a+a atomically, in which case I
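
The closest C++ rendering of the same question (my own sketch; a Java volatile access corresponds roughly to a sequentially consistent atomic access in C++) is whether the compiler may fuse the two loads below into one:

    #include <atomic>
    #include <cstdio>

    std::atomic<int> a{0};

    void writer() {
        for (;;) {
            a.store(1);   // store() defaults to memory_order_seq_cst
            a.store(0);
        }
    }

    void reader() {
        for (;;) {
            // Two separate loads. If the compiler emitted a single load and computed 2*x,
            // the printed sum could only ever be 0 or 2, never 1.
            int sum = a.load() + a.load();   // load() defaults to memory_order_seq_cst
            std::printf("%d\n", sum);
        }
    }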

What are the similarities between the Java memory model and the C++11 memory model?

不羁的心 submitted on 2019-12-03 02:12:57
Question: The new C++ standard introduces the notion of a memory model. There were already questions on SO about what it means, how it changes the way we write code in C++, and so on. I'm interested in knowing how the C++ memory model relates to the older, well-known Java memory model (1.5). Is it the same? Is it similar? Do they have any significant differences? If so, why? The Java memory model has been around for a long time and many people know it quite well, so I guess it might be helpful, not only for me, to learn the C++ memory model by comparing it with the Java one
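
One commonly cited point of contact, shown as a C++ sketch of my own (not from the question): a Java volatile field behaves much like a C++ std::atomic accessed with the default sequentially consistent ordering, while C++ additionally exposes weaker orderings that Java 1.5 does not.

    #include <atomic>

    // Rough C++ counterpart of a Java (post-JSR-133) `volatile int x;`
    std::atomic<int> x{0};

    int read_like_java_volatile() {
        return x.load(std::memory_order_seq_cst);   // the default for load()
    }

    void write_like_java_volatile(int v) {
        x.store(v, std::memory_order_seq_cst);      // the default for store()
    }

    int read_relaxed() {
        // C++ also permits weaker orderings with no direct Java-1.5 equivalent.
        return x.load(std::memory_order_relaxed);
    }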

How do “acquire” and “consume” memory orders differ, and when is “consume” preferable?

蓝咒 submitted on 2019-12-03 00:10:50
Question: The C++11 standard defines a memory model (1.7, 1.10) which contains memory orderings, which are, roughly, "sequentially-consistent", "acquire", "consume", "release", and "relaxed". Equally roughly, a program is correct only if it is race-free, which happens if all actions can be put in some order in which one action happens-before another one. The way that an action X happens-before an action Y is that either X is sequenced before Y (within one thread), or X inter-thread-happens-before Y. The latter condition is given, among others, when X synchronizes with Y, or X is dependency-ordered
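
The textbook illustration of the difference, as a minimal sketch of my own (a producer publishes a pointer and a consumer dereferences it; names hypothetical): acquire orders all of the producer's earlier writes, while consume orders only reads that carry a data dependency on the loaded pointer, which can be cheaper on weakly ordered hardware.

    #include <atomic>

    struct Payload { int value; };

    std::atomic<Payload*> ptr{nullptr};

    void producer() {
        Payload* p = new Payload{42};
        ptr.store(p, std::memory_order_release);   // publish
    }

    int consumer_acquire() {
        Payload* p;
        while ((p = ptr.load(std::memory_order_acquire)) == nullptr) {}
        return p->value;   // visible because acquire orders ALL earlier producer writes
    }

    int consumer_consume() {
        Payload* p;
        while ((p = ptr.load(std::memory_order_consume)) == nullptr) {}
        return p->value;   // visible because this read carries a dependency on p
    }

In practice, current compilers implement memory_order_consume by strengthening it to memory_order_acquire.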

How can memory_order_relaxed work for incrementing atomic reference counts in smart pointers?

一世执手 submitted on 2019-12-02 23:29:43
Consider the following code snippet taken from Herb Sutter's talk on atomics: The smart_ptr class contains a pimpl object called control_block_ptr containing the reference count refs. // Thread A: // smart_ptr copy ctor smart_ptr(const smart_ptr& other) { ... control_block_ptr = other->control_block_ptr; control_block_ptr->refs.fetch_add(1, memory_order_relaxed); ... } // Thread D: // smart_ptr destructor ~smart_ptr() { if (control_block_ptr->refs.fetch_sub(1, memory_order_acq_rel) == 0) { delete control_block_ptr; } } Herb Sutter says the increment of refs in Thread A can use memory_order
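
For reference, here is a self-contained sketch of the same pattern (my own reconstruction with simplified names, not Herb Sutter's exact code; note that fetch_sub returns the pre-decrement value, so the last owner sees 1):

    #include <atomic>

    struct control_block {
        std::atomic<int> refs{1};
        // ... pointer to the managed object, deleter, etc.
    };

    struct smart_ptr_sketch {
        control_block* cb;

        explicit smart_ptr_sketch(control_block* p) : cb(p) {}

        // Copy: the increment needs atomicity but no ordering, because the copying
        // thread already owns a reference, so the object cannot be destroyed
        // concurrently. Hence memory_order_relaxed suffices.
        smart_ptr_sketch(const smart_ptr_sketch& other) : cb(other.cb) {
            cb->refs.fetch_add(1, std::memory_order_relaxed);
        }

        // Destroy: the decrement uses acquire-release so that all writes made
        // through other copies happen-before the delete in the last owner.
        ~smart_ptr_sketch() {
            if (cb->refs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
                delete cb;
            }
        }
    };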

Is it possible to create an instance of a class on the stack?

时光毁灭记忆、已成空白 submitted on 2019-12-02 10:29:52
I know that in C++ you can create an instance of a class on the stack, like MyClass mc = MyClass(8.2); or on the heap, like MyClass * mc = new MyClass(8.2); Can you do the same thing in C#? The only way I ever create a class instance in C# is by new-ing it. No, it's not possible. All instances of all classes are always allocated on the heap. It is value types, including user-defined struct types, that hold values directly rather than references to values elsewhere; a value-type variable stores its value in whatever location the variable itself lives in, which may not be the heap. Source: https://stackoverflow.com
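
To make the distinction the answer draws concrete, here is a C++ sketch of my own (the question itself uses C++ for comparison): the by-value form corresponds to what a C# struct (value type) gives you, and the pointer form to what a C# class (reference type) always does.

    #include <memory>

    struct MyClass {
        explicit MyClass(double v) : value(v) {}
        double value;
    };

    void example() {
        // By value: the object lives in the variable itself (here, on the stack).
        // C# value types (structs) behave like this.
        MyClass on_stack(8.2);

        // By reference: the variable holds a pointer and the object lives on the heap.
        // C# classes always behave like this.
        std::unique_ptr<MyClass> on_heap = std::make_unique<MyClass>(8.2);

        (void)on_stack;
        (void)on_heap;
    }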

Regarding instruction ordering in executions of cache-miss loads before cache-hit stores on x86

ぐ巨炮叔叔 submitted on 2019-12-02 02:03:00
Question: Given the small program shown below (handcrafted to look the same from a sequential consistency / TSO perspective), and assuming it's being run by a superscalar out-of-order x86 CPU: Load A <-- A is in main memory Load B <-- B is in L2 Store C, 123 <-- C is in L1 I have a few questions: Assuming a big enough instruction window, will the three instructions be fetched, decoded, and executed at the same time? I assume not, as that would break execution in program order. The 2nd load is going to take
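
The same sequence rendered as a C++ fragment (my own rewrite of the pseudo-assembly above, with relaxed atomics so the compiler emits plain x86 loads and stores; the cache-residency assumptions are only comments):

    #include <atomic>

    std::atomic<int> A{0};   // assume: misses all caches, served from main memory
    std::atomic<int> B{0};   // assume: hits in L2
    std::atomic<int> C{0};   // assume: hits in L1

    int run_once() {
        int a = A.load(std::memory_order_relaxed);  // long-latency load
        int b = B.load(std::memory_order_relaxed);  // may execute before the load of A completes
        C.store(123, std::memory_order_relaxed);    // may execute early, but becomes globally visible
                                                    // only when it drains from the store buffer, after
                                                    // both loads have retired (preserving x86-TSO)
        return a + b;
    }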