memory-barriers

How is a memory barrier used in the Linux kernel?

Submitted by 和自甴很熟 on 2020-01-01 05:17:07

Question: There is an illustration in the kernel source Documentation/memory-barriers.txt, like this:

    CPU 1                   CPU 2
    ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
    STORE A = 1
    STORE B = 2
    <write barrier>
    STORE C = &B            LOAD X
    STORE D = 4             LOAD C (gets &B)
                            LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some effectively random order, despite the write barrier issued by CPU 1. (The question continues with the document's ASCII diagram of the sequence of updates as perceived by CPU 2; truncated here.)

Is this a correct use of Thread.MemoryBarrier()?

Submitted by 本小妞迷上赌 on 2020-01-01 04:14:05

Question: Assume I have a field that controls execution of some loop:

    private static bool shouldRun = true;

And I have a thread running that has code like:

    while (shouldRun)
    {
        // Do some work ....
        Thread.MemoryBarrier();
    }

Now, another thread might set shouldRun to false, without using any synchronization mechanism. As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of shouldRun, and effectively …

Does a memory barrier act both as a marker and as an instruction?

Submitted by 佐手、 on 2019-12-29 08:15:20

Question: I have read different things about how a memory barrier works. For example, user Johan's answer to this question says that a memory barrier is an instruction that the CPU executes, while user Peter Cordes's comment on this question says the following about how the CPU reorders instructions: "It reads faster than it can execute, so it can see a window of upcoming instructions. For details, see some of the links in the x86 tag wiki, like Agner Fog's microarch pdf, and also David Kanter …"

C++ Memory Barriers for Atomics

Submitted by 依然范特西╮ on 2019-12-28 05:07:09

Question: I'm a newbie when it comes to this. Could anyone provide a simplified explanation of the differences between the following memory barriers?

    - The Windows MemoryBarrier();
    - The fence _mm_mfence();
    - The inline assembly asm volatile ("" : : : "memory");
    - The intrinsic _ReadWriteBarrier();

If there isn't a simple explanation, some links to good articles or books would probably help me get it straight. Until now I was fine with just using objects written by others wrapping these calls, but I'd like to …

Could the side effects of an atomic operation be seen immediately by other threads?

Submitted by …衆ロ難τιáo~ on 2019-12-24 22:29:20

Question: In this question one replier says: "Atomicity means that an operation either executes fully, with all its side effects visible, or it does not execute at all." However, below is an example given in C++ Concurrency in Action, Listing 5.5:

    #include <thread>
    #include <atomic>
    #include <iostream>

    std::atomic<int> x(0), y(0), z(0);
    std::atomic<bool> go(false);

    unsigned const loop_count = 10;

    struct read_values
    {
        int x, y, z;
    };

    read_values values1[loop_count];
    read_values values2[loop_count];
    read_values …

Managing cache with memory mapped I/O

Submitted by 非 Y 不嫁゛ on 2019-12-24 01:14:27

Question: I have a question regarding memory-mapped I/O. Suppose there is a memory-mapped I/O peripheral whose value is being read by the CPU. Once read, the value is stored in the cache. But the value in memory has since been updated by the external I/O peripheral. In such cases, how will the CPU determine that the cache has been invalidated, and what is the workaround for such a case?

Answer 1: That's strongly platform dependent. And actually, there are two different cases. Case #1: Memory-mapped peripheral. This means that access to …

What's the purpose of a compiler barrier?

Submitted by 独自空忆成欢 on 2019-12-23 23:07:43

Question: The following is excerpted from Concurrent Programming on Windows, Chapter 10, pages 528–529: a C++ template double-checked locking implementation.

    T getValue() {
        if (!m_pValue) {
            EnterCriticalSection(&m_crst);
            if (!m_pValue) {
                T pValue = m_pFactory();
                _WriteBarrier();
                m_pValue = pValue;
            }
            LeaveCriticalSection(&m_crst);
        }
        _ReadBarrier();
        return m_pValue;
    }

As the author states: "A _WriteBarrier is found after instantiating the object, but before writing a pointer to it in the m_pValue field. That's required …"

Deep understanding of volatile in Java

Submitted by 浪子不回头ぞ on 2019-12-22 10:49:15

Question: Does Java allow the output 1, 0? I've tested it very intensively and I cannot get that output. I get only 1, 1 or 0, 0 or 0, 1.

    public class Main {
        private int x;
        private volatile int g;

        // Executed by thread #1
        public void actor1() {
            x = 1;
            g = 1;
        }

        // Executed by thread #2
        public void actor2() {
            put_on_screen_without_sync(g);
            put_on_screen_without_sync(x);
        }
    }

Why? To my eye it is possible to get 1, 0. My reasoning: g is volatile, so memory ordering will be ensured. So, it looks …

Are memory barriers necessary for atomic reference counting shared immutable data?

Submitted by ╄→гoц情女王★ on 2019-12-22 03:15:50

Question: I have some immutable data structures that I would like to manage using reference counts, sharing them across threads on an SMP system. Here's what the release code looks like:

    void avocado_release(struct avocado *p)
    {
        if (atomic_dec(p->refcount) == 0) {
            free(p->pit);
            free(p->juicy_innards);
            free(p);
        }
    }

Does atomic_dec need a memory barrier in it? If so, what kind of memory barrier? Additional notes: the application must run on PowerPC and x86, so any processor-specific information is …

Why is the standard C# event invocation pattern thread-safe without a memory barrier or cache invalidation? What about similar code?

Submitted by 限于喜欢 on 2019-12-21 07:27:38

Question: In C#, this is the standard code for invoking an event in a thread-safe way:

    var handler = SomethingHappened;
    if (handler != null)
        handler(this, e);

Where, potentially on another thread, the compiler-generated add method uses Delegate.Combine to create a new multicast delegate instance, which it then sets on the compiler-generated field (using an interlocked compare-exchange). (Note: for the purposes of this question, we don't care about code that runs in the event subscribers. Assume that it's …