memory-barriers

What are examples of memory barriers in C++?

I see that C++11's mutex lock is declared as void lock(), not void lock() volatile. How does the compiler know which functions are memory barriers and which are not? Are all functions barriers even if they are not volatile? What are some lesser-known memory barriers, and which memory barriers should everyone know?

The runtime library has to implement a mutex in such a way that the compiler knows! The language standard doesn't say anything about how to do this. Likely it involves a call to some operating-system service that works as a memory barrier, or the compiler can have an extension, like void _ReadWriteBarrier(); The actual
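As a side note, the barrier effect of a mutex does not come from volatile at all. Here is a minimal sketch, assuming a simple spinlock built on std::atomic<bool> (not the actual standard-library implementation), of how acquire/release orderings make lock()/unlock() behave as barriers:

#include <atomic>

class spin_mutex {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // Acquire: memory operations after lock() cannot be moved before it.
        while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ }
    }
    void unlock() {
        // Release: memory operations before unlock() cannot be moved after it.
        locked.store(false, std::memory_order_release);
    }
};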

Force order of execution of C statements?

I have a problem with the MS C compiler reordering certain statements, critical in a multithreading context, at high levels of optimization. I want to know how to force ordering in specific places while still using high levels of optimization. (At low levels of optimization, this compiler does not reorder statements.) The following code:

ChunkT* plog2sizeChunk = ...
SET_BUSY(plog2sizeChunk->pPoolAndBusyFlag); // set "busy" bit on this chunk of storage
x = plog2sizeChunk->pNext;

produces this:

0040130F 8B 5A 08    mov ebx,dword ptr [edx+8]
00401312 83 22 FE    and dword ptr [edx],0FFFFFFFEh

in which the
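Beyond the MSVC-specific compiler barrier _ReadWriteBarrier() mentioned above, one portable way to keep both the compiler and the CPU from hoisting the load of pNext above the busy-flag store is to express the ordering with C++11 atomics. A hedged sketch (the ChunkT definition and the function are my reconstruction for illustration, not the original post's code):

#include <atomic>

// Simplified stand-in for the excerpt's ChunkT; the real structure and the
// SET_BUSY macro are not shown in full in the question.
struct ChunkT {
    std::atomic<unsigned> pPoolAndBusyFlag{0};
    ChunkT* pNext = nullptr;
};

ChunkT* read_next_after_marking_busy(ChunkT* chunk) {
    chunk->pPoolAndBusyFlag.fetch_or(1u, std::memory_order_seq_cst); // set "busy" bit
    // Full fence: neither the compiler nor the CPU may move the following
    // load of pNext above the store of the busy flag.
    std::atomic_thread_fence(std::memory_order_seq_cst);
    return chunk->pNext;
}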

C++11 Atomic memory order with non-atomic variables

I am unsure about how the memory-ordering guarantees of atomic variables in C++11 affect operations on other, non-atomic memory. Let's say I have one thread which periodically calls the write function to update a value, and another thread which calls read to get the current value. Is it guaranteed that the effects of d = value; will not be seen before the effects of a = version;, and will be seen before the effects of b = version;?

atomic<int> a {0};
atomic<int> b {0};
double d;

void write(int version, double value) {
    a = version;
    d = value;
    b = version;
}

double read() {
    int x, y;
    double ret;
    do {
        x = b;
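The full pattern with both a and b is essentially a sequence lock and has extra subtleties; a reduced sketch of just the half of the guarantee the question asks about, using an explicit release store and acquire load (the function names and the single-writer assumption are mine):

#include <atomic>

std::atomic<int> b{0};
double d;          // non-atomic payload

void write(int version, double value) {
    d = value;                                    // A: plain store
    b.store(version, std::memory_order_release);  // B: release; A cannot move below B
}

bool try_read(int expected_version, double& out) {
    if (b.load(std::memory_order_acquire) != expected_version)  // C: acquire, pairs with B
        return false;
    out = d;   // D: sees the value stored in A, provided no later write overwrote d
    return true;
}

If the acquire load in try_read observes the value written by the release store in write, the two operations synchronize, so the read of the non-atomic d is guaranteed to see the write that preceded that store.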

how is a memory barrier in linux kernel is used

There is an illustration in the kernel source Documentation/memory-barriers.txt, like this:

    CPU 1                       CPU 2
    =======================     =======================
        { B = 7; X = 9; Y = 8; C = &Y }
    STORE A = 1
    STORE B = 2
    <write barrier>
    STORE C = &B                LOAD X
    STORE D = 4                 LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some effectively random order, despite the write barrier issued by CPU 1:

    (ASCII diagram from the document: "sequence of update of perception on CPU 2"; truncated in this excerpt)
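The point of that passage is that CPU 1's write barrier alone is not enough: CPU 2 must pair it with a read (or data dependency) barrier of its own. A hedged C++ sketch of the same pairing, using std::atomic_thread_fence in place of the kernel's smp_wmb()/smp_rmb() (the variable set is simplified from the document's example):

#include <atomic>

int B = 7;
std::atomic<int*> C{nullptr};

void cpu1() {
    B = 2;
    std::atomic_thread_fence(std::memory_order_release);  // analogue of <write barrier>
    C.store(&B, std::memory_order_relaxed);               // STORE C = &B
}

void cpu2() {
    int* p = C.load(std::memory_order_relaxed);           // LOAD C
    if (p) {
        std::atomic_thread_fence(std::memory_order_acquire); // paired read barrier
        int b = *p;   // LOAD *C: guaranteed to read B = 2 once the fences pair up
        (void)b;
    }
}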

Does mutex_unlock function as a memory fence?

The situation I'll describe is occurring on an iPad 4 (ARMv7s), using POSIX libraries to lock/unlock mutexes. I've seen similar things on other ARMv7 devices, though (see below), so I suppose any solution will require a more general look at the behaviour of mutexes and memory fences on ARMv7. Pseudo-code for the scenario:

Thread 1 – Producing Data:

void ProduceFunction() {
    MutexLock();
    int TempProducerIndex = mSharedProducerIndex; // take a copy of the int member variable for the producer's index
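Mutex unlock does act as a release on most implementations, but a release only orders anything if the reading side performs a matching acquire. A hedged sketch of a producer/consumer arrangement where the shared index itself carries the acquire/release pairing (the member names mirror the excerpt; everything else, including the consumer and the fixed-size buffer without wrap-around, is my assumption, not the original post's code):

#include <atomic>
#include <mutex>

std::mutex gMutex;
int mSharedBuffer[256];
std::atomic<int> mSharedProducerIndex{0};  // assumption: atomic so the reader gets an acquire

void ProduceFunction(int value) {
    std::lock_guard<std::mutex> lock(gMutex);
    int i = mSharedProducerIndex.load(std::memory_order_relaxed);
    mSharedBuffer[i] = value;                                      // write the data first
    mSharedProducerIndex.store(i + 1, std::memory_order_release);  // then publish the new index
}

bool TryConsume(int& out, int lastSeen) {
    int i = mSharedProducerIndex.load(std::memory_order_acquire);  // pairs with the release above
    if (i == lastSeen) return false;
    out = mSharedBuffer[lastSeen];  // sees the data written before the index was published
    return true;
}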

Is this a correct use of Thread.MemoryBarrier()?

Assume I have a field that controls execution of some loop:

private static bool shouldRun = true;

And I have a thread running with code like:

while (shouldRun)
{
    // Do some work ....
    Thread.MemoryBarrier();
}

Now, another thread might set shouldRun to false, without using any synchronization mechanism. As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of shouldRun, effectively preventing an infinite loop from happening. Is my understanding of Thread.MemoryBarrier correct? Given I
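For comparison only (the question is about C#), the same stop-flag pattern expressed with C++ atomics, where an acquire/release pair on the flag plays the role the full barrier plays in the excerpt:

#include <atomic>

std::atomic<bool> shouldRun{true};  // analogue of the C# field, made atomic

void worker() {
    while (shouldRun.load(std::memory_order_acquire)) {
        // Do some work ...
    }
}

void stop() {
    shouldRun.store(false, std::memory_order_release);  // no explicit barrier call needed
}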

Is memory barrier or atomic operation required in a busy-wait loop?

Consider the following spin_lock() implementation, originally from this answer:

void spin_lock(volatile bool* lock) {
    for (;;) {
        // inserts an acquire memory barrier and a compiler barrier
        if (!__atomic_test_and_set(lock, __ATOMIC_ACQUIRE))
            return;
        while (*lock) // no barriers; is it OK?
            cpu_relax();
    }
}

What I already know:

- volatile prevents the compiler from optimizing out the *lock re-read on each iteration of the while loop;
- volatile inserts neither memory nor compiler barriers;
- such an implementation actually works in GCC for x86 (e.g. in the Linux kernel) and some other architectures;
- at least one
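A hedged sketch of the same spin lock written with C++11 atomics instead of volatile, where the read-only inner loop uses a relaxed load and the acquire ordering sits only on the operation that actually takes the lock (cpu_relax is assumed, as in the excerpt):

#include <atomic>

inline void cpu_relax() { /* e.g. a PAUSE instruction on x86; no-op in this sketch */ }

void spin_lock(std::atomic<bool>* lock) {
    for (;;) {
        // Acquire ordering only on the operation that can actually take the lock.
        if (!lock->exchange(true, std::memory_order_acquire))
            return;
        // Read-only spin: a relaxed load suffices; the acquire above provides
        // the ordering once the lock is finally taken.
        while (lock->load(std::memory_order_relaxed))
            cpu_relax();
    }
}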

How do I write a memory barrier for a TMS320F2812 DSP?

I've looked through the TI C/C++ compiler v6.1 user's guide (spru514e) but didn't find anything. The asm statement doesn't seem to provide anything in this regard; the manual even warns against changing values of variables (p. 132). The GNU extension for declaring effects on variables is not implemented (p. 115). I also didn't find any intrinsic for memory barriers (like __memory_changed() in Keil's armcc). Searching the web and the TI forums also turned up nothing. Any other hints on how to proceed?

Memory barriers are about the ordering of memory accesses, but you also have to ensure that values
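One common workaround when a compiler offers neither a barrier intrinsic nor asm with a memory clobber is to route the critical point through an externally defined function the optimizer cannot see into; the compiler must then assume the call may touch any object whose address has escaped. This is only a compiler barrier, not a hardware one, and it breaks down under whole-program or link-time optimization. A hedged sketch (the function name is made up for illustration):

// barrier.h
// Declared here, defined in a separate translation unit, so the optimizer
// cannot cache escaped variables across the call.
extern "C" void compiler_barrier(void);

// barrier.cpp
extern "C" void compiler_barrier(void) { /* intentionally empty */ }

// usage in code that must not be reordered:
//   shared_flag = 1;
//   compiler_barrier();   // compiler may not move memory accesses across this call
//   value = shared_data;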

Volatile variables and memory barriers in Java

I've got a data structure which consists of linked nodes. You can think of it as a simple LinkedList. Each node of the list consists of some value and a next field pointing to the next node, or null if it is the last node. The first node works as a root: it has no value, it only points to the next node. All the other nodes are practically immutable, that is, once they are created neither their value nor their next field changes during their lifetime, unless the structure is being disposed, which relates
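The question is about Java, but the underlying issue (safely publishing an immutable node so another thread sees its fields fully initialized) has a direct C++ analogue. A hedged sketch using a release store on the link and an acquire load on traversal (the Node layout and helper functions are my illustration, not the original post's code):

#include <atomic>

struct Node {
    int value;
    std::atomic<Node*> next{nullptr};
};

// Publisher: construct the node completely, then publish it with a release store.
void append(Node* tail, int v) {
    Node* n = new Node;
    n->value = v;                                    // fully initialize first
    tail->next.store(n, std::memory_order_release);  // then publish the link
}

// Reader: an acquire load guarantees the node's fields are visible once the link is.
int read_next_value(Node* node) {
    Node* n = node->next.load(std::memory_order_acquire);
    return n ? n->value : -1;
}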

Thread Synchronisation 101

Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive, and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I
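A minimal sketch of the scoped-guard idea the excerpt describes, using the standard library's equivalent rather than the MFC-style CCriticalSection wrapper (the shared counter is just a placeholder for whatever state the guard protects):

#include <mutex>

std::mutex gSharedStateMutex;
int gSharedCounter = 0;

void increment_shared_counter() {
    // The guard locks on construction and unlocks on destruction, exactly the
    // enter-on-construction / leave-on-destruction pattern the question describes.
    std::lock_guard<std::mutex> guard(gSharedStateMutex);
    ++gSharedCounter;
}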