memory-model

Is it possible to observe a partially-constructed object from another thread?

拈花ヽ惹草 submitted on 2019-11-27 19:21:37
I've often heard that in the .NET 2.0 memory model, writes always use release fences. Is this true? Does this mean that even without explicit memory barriers or locks, it is impossible to observe a partially-constructed object (considering reference types only) on a thread different from the one on which it is created? I'm obviously excluding cases where the constructor leaks the this reference. For example, let's say we had the immutable reference type: public class Person { public string Name { get; private set; } public int Age { get; private set; } public Person(string name, int age) {
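The same publication question can be sketched in C++ terms. This is my own illustration, not the poster's C# code: if the fully constructed object is published through a release store and the reader uses an acquire load, the reader cannot observe a half-initialized Person.

```cpp
#include <atomic>
#include <string>
#include <thread>

// Illustrative C++ analogue of the question (not the poster's C# class):
// publish a pointer to a fully constructed object with release semantics,
// read it with acquire semantics, and a partially-constructed object
// cannot be observed.
struct Person {
    std::string name;
    int age;
    Person(std::string n, int a) : name(std::move(n)), age(a) {}
};

std::atomic<Person*> shared{nullptr};

void writer() {
    Person* p = new Person("Alice", 30);         // fully construct first
    shared.store(p, std::memory_order_release);  // then publish
}

void reader() {
    if (Person* p = shared.load(std::memory_order_acquire)) {
        // With the release/acquire pairing, name and age are fully visible here.
        int observed_age = p->age;
        (void)observed_age;
    }
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join(); t2.join();
    delete shared.load();
}
```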

Thread.VolatileRead Implementation

一个人想着一个人 submitted on 2019-11-27 19:03:50
I'm looking at the implementation of the VolatileRead/VolatileWrite methods (using Reflector), and I'm puzzled by something. This is the implementation for VolatileRead: [MethodImpl(MethodImplOptions.NoInlining)] public static int VolatileRead(ref int address) { int num = address; MemoryBarrier(); return num; } Why is the memory barrier placed after reading the value of "address"? Isn't it supposed to be the opposite (placed before reading the value, so any pending writes to "address" will be completed by the time we make the actual read)? The same thing goes for VolatileWrite, where the
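A C++ sketch of the same pattern (the function names here are illustrative; this is not the BCL source): an acquire-style read puts the fence after the load, which is exactly the ordering the question is asking about, and a release-style write mirrors it with the fence before the store.

```cpp
#include <atomic>

// Sketch of the read-then-fence pattern the question describes. The acquire
// fence after the load keeps later memory operations from being reordered
// before the read.
int volatile_read(std::atomic<int>& address) {
    int num = address.load(std::memory_order_relaxed);   // read first
    std::atomic_thread_fence(std::memory_order_acquire); // then fence
    return num;
}

// The mirror image for a write: fence first, then store (release semantics),
// so earlier memory operations cannot be reordered after the write.
void volatile_write(std::atomic<int>& address, int value) {
    std::atomic_thread_fence(std::memory_order_release);
    address.store(value, std::memory_order_relaxed);
}
```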

Is writing a reference atomic on 64-bit VMs?

帅比萌擦擦* submitted on 2019-11-27 18:57:36
The Java memory model mandates that writing an int is atomic: that is, if you write a value to it (consisting of 4 bytes) in one thread and read it in another, you will get all bytes or none, but never 2 new bytes and 2 old bytes or such. This is not guaranteed for long. Here, writing 0x1122334455667788 to a variable holding 0 before could result in another thread reading 0x1122334400000000 or 0x0000000055667788. Now the specification does not mandate object references to be either int- or long-sized. For type-safety reasons I suspect they are guaranteed to be written atomically, but on a 64-bit
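For comparison, a tiny C++ sketch (an analogue, not a statement about the JVM): on typical 64-bit platforms, pointer-sized and 64-bit atomics report themselves as lock-free, i.e. the hardware writes them in one indivisible operation, which is exactly the no-torn-read property the question is after.

```cpp
#include <atomic>
#include <cstdint>
#include <iostream>

// Check whether pointer-sized and 64-bit atomics are lock-free on this
// platform; a lock-free atomic store can never be observed half-written.
int main() {
    std::atomic<void*>   ref{nullptr};
    std::atomic<int64_t> wide{0};
    std::cout << "atomic<void*>   lock-free: " << ref.is_lock_free()  << '\n';
    std::cout << "atomic<int64_t> lock-free: " << wide.is_lock_free() << '\n';
}
```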

Does Delphi have any equivalent to C's volatile variable?

Deadly submitted on 2019-11-27 17:42:05
Question: In C and C++ a variable can be marked volatile, which means the compiler will not optimize away accesses to it, because it may be modified outside of the declaring code. Is there an equivalent in Delphi programming? If not a keyword, maybe a workaround? My thought was to use Absolute, but I wasn't sure, and that may introduce other side effects. Answer 1: Short answer: no. However, I am not aware of any situation in which the conservative approach of the compiler will change the number of reads or writes if

In C/C++, are volatile variables guaranteed to have eventually consistent semantics between threads?

自闭症网瘾萝莉.ら submitted on 2019-11-27 16:20:21
Question: Is there any guarantee by any commonly followed standard (ISO C or C++, or any of the POSIX/SUS specifications) that a variable (perhaps marked volatile), not guarded by a mutex, that is being accessed by multiple threads will become eventually consistent if it is assigned to? To provide a specific example, consider two threads sharing a variable v, with initial value zero.
Thread 1: v = 1
Thread 2: while(v == 0) yield();
Is thread 2 guaranteed to terminate eventually? Or can it conceivably
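The scenario rewritten with std::atomic (a sketch of the portable alternative, not a claim about plain volatile): atomic stores are expected to become visible to other threads within a finite time (the standard phrases this as a recommendation to implementations), so the spinning thread terminates in practice.

```cpp
#include <atomic>
#include <thread>

// The question's two-thread scenario, expressed with an atomic flag instead
// of a bare (possibly volatile) variable. Even relaxed ordering is enough
// here, because only visibility of the store is needed, not ordering of
// other memory operations.
std::atomic<int> v{0};

void thread1() {
    v.store(1, std::memory_order_relaxed);
}

void thread2() {
    while (v.load(std::memory_order_relaxed) == 0)
        std::this_thread::yield();
}

int main() {
    std::thread t2(thread2);
    std::thread t1(thread1);
    t1.join();
    t2.join();
}
```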

Release/Acquire semantics wrt std::mutex

本小妞迷上赌 submitted on 2019-11-27 14:16:44
Question: I am reading the C++ memory model defined in n3485 and it talks about release/acquire semantics, which, from what I understand and also from the definitions given in this blog: Acquire semantics is a property which can only apply to operations which read from shared memory, whether they are read-modify-write operations or plain loads. The operation is then considered a read-acquire. Acquire semantics prevent memory reordering of the read-acquire with any read or write operation which follows
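The quoted definitions map directly onto the classic message-passing example; a minimal sketch (my own example, with made-up variable names):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// The release store keeps the earlier plain write from moving after it, and
// the acquire load keeps the later read from moving before it, so a consumer
// that sees ready == true must also see data == 42.
int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // ordinary write
    ready.store(true, std::memory_order_release);  // write-release
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) // read-acquire
        ;
    assert(data == 42); // guaranteed by the release/acquire pairing
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
}
```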

Does an empty synchronized(this){} block have any effect on memory visibility between threads?

☆樱花仙子☆ submitted on 2019-11-27 13:33:26
Question: I read this in an upvoted comment on StackOverflow: "But if you want to be safe, you can add a simple synchronized(this) {} at the end of your @PostConstruct [method]" [note that the variables were NOT volatile]. I was thinking that a happens-before relationship is established only if both the write and the read are executed in synchronized blocks, or at least the read is volatile. Is the quoted sentence correct? Does an empty synchronized(this) {} block flush all variables changed in the current method to "generally visible" memory? Please
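A C++ analogue of the pairing issue (a sketch, not an answer about the JMM; the extra flag is added so the example stays race-free): a lock release only guarantees visibility of earlier plain writes to a reader that later acquires the same lock, which is the crux of whether an unpaired empty block helps.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Unlocking a mutex in one thread happens-before a later lock of the *same*
// mutex in another thread. The reader therefore has to take the lock too for
// the writer's plain store to be guaranteed visible.
std::mutex m;
int  config    = 0;     // plain, non-atomic field
bool published = false; // only touched while holding m

void writer() {
    config = 42;                          // plain write before the critical section
    std::lock_guard<std::mutex> g(m);
    published = true;                     // the unlock of m orders the write above
}

void reader() {
    bool ok;
    {
        std::lock_guard<std::mutex> g(m); // acquisition of the same lock
        ok = published;
    }
    if (ok)
        assert(config == 42);             // writer's unlock happens-before this read
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join(); t2.join();
}
```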

Is synchronizing with `std::mutex` slower than with `std::atomic(memory_order_seq_cst)`?

走远了吗. submitted on 2019-11-27 10:20:04
Question: The main reason for using atomics over mutexes is that mutexes are expensive, but with the default memory model for atomics being memory_order_seq_cst, isn't this just as expensive? Question: Can a concurrent program using locks be as fast as a concurrent lock-free program? If so, it may not be worth the effort unless I want to use memory_order_acq_rel for atomics. Edit: I may be missing something, but lock-based can't be faster than lock-free because each lock will have to be a full memory
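A rough micro-benchmark sketch for the comparison (thread count, iteration count, and the resulting numbers are my own assumptions and are platform- and contention-dependent, not a definitive answer): increment a shared counter under a std::mutex versus with a seq_cst std::atomic.

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

constexpr int kThreads = 4;
constexpr int kIters   = 1'000'000;

// Run f on kThreads threads and return the elapsed wall-clock time in ms.
template <typename F>
long long time_ms(F f) {
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> ts;
    for (int i = 0; i < kThreads; ++i) ts.emplace_back(f);
    for (auto& t : ts) t.join();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

int main() {
    long long locked = 0;
    std::mutex m;
    std::atomic<long long> lockfree{0};

    auto t_mutex = time_ms([&] {
        for (int i = 0; i < kIters; ++i) { std::lock_guard<std::mutex> g(m); ++locked; }
    });
    auto t_atomic = time_ms([&] {
        for (int i = 0; i < kIters; ++i) lockfree.fetch_add(1, std::memory_order_seq_cst);
    });

    std::cout << "mutex:  " << t_mutex  << " ms\n"
              << "atomic: " << t_atomic << " ms\n";
}
```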

What does each memory_order mean?

久未见 submitted on 2019-11-27 10:19:57
I read a chapter and I didn't like it much. I'm still unclear on what the difference is between each memory order. This is my current speculation, which I understood after reading the much simpler http://en.cppreference.com/w/cpp/atomic/memory_order The below is wrong, so don't try to learn from it:
memory_order_relaxed: Does not sync, but is not ignored when ordering is done from another mode in a different atomic var.
memory_order_consume: Syncs reading this atomic variable; however, it doesn't sync relaxed vars written before this. However if the thread uses var X when modifying Y (and releases it).
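To anchor the terminology, here is a small sketch (my own example, not from the chapter the poster read): relaxed gives atomicity without any ordering, which suffices for a bare counter, while a release store paired with an acquire load is what publishes ordinary data from one thread to another.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> hits{0};            // relaxed is enough: only the total matters
int payload = 0;
std::atomic<bool> flag{false};

void worker() {
    hits.fetch_add(1, std::memory_order_relaxed); // atomic, but no ordering implied

    payload = 123;                                // publish payload...
    flag.store(true, std::memory_order_release);  // ...with a release store
}

int main() {
    std::thread t(worker);
    while (!flag.load(std::memory_order_acquire)) // acquire pairs with the release
        ;
    assert(payload == 123);                       // visible thanks to the pairing
    t.join();
    assert(hits.load(std::memory_order_relaxed) == 1);
}
```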

What does `std::kill_dependency` do, and why would I want to use it?

不羁的心 submitted on 2019-11-27 09:45:31
Question: I've been reading about the new C++11 memory model and I've come upon the std::kill_dependency function (§29.3/14-15). I'm struggling to understand why I would ever want to use it. I found an example in the N2664 proposal but it didn't help much. It starts by showing code without std::kill_dependency. Here, the first line carries a dependency into the second, which carries a dependency into the indexing operation, and then carries a dependency into the do_something_with function. r1 = x.load
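A sketch of the kind of code the N2664 example is about (the names Node, a, and do_something_with are placeholders, not the proposal's exact listing): the consume load starts a dependency chain through r1 and r2, and std::kill_dependency marks the point at which that chain may stop.

```cpp
#include <atomic>
#include <cstddef>

struct Node { std::size_t index; };

std::atomic<Node*> x{nullptr};
int a[16] = {};

void do_something_with(int) { /* placeholder */ }

void reader() {
    Node* r1 = x.load(std::memory_order_consume); // dependency starts at the load
    if (r1) {
        std::size_t r2 = r1->index;               // dependency carried into r2
        // kill_dependency says the chain may end at r2, so consume ordering
        // need not be extended into the a[...] access.
        do_something_with(a[std::kill_dependency(r2)]);
    }
}

int main() {}
```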