What is the difference between atomic and critical in OpenMP?

Submitted by 眉间皱痕 on 2019-11-26 06:25:05

Question


What is the difference between atomic and critical in OpenMP?

I can do this

#pragma omp atomic
g_qCount++;

but isn't this the same as

#pragma omp critical
g_qCount++;

?


Answer 1:


The effect on g_qCount is the same, but what's done is different.

An OpenMP critical section is completely general - it can surround any arbitrary block of code. You pay for that generality, however, by incurring significant overhead every time a thread enters and exits the critical section (on top of the inherent cost of serialization).

(In addition, in OpenMP all unnamed critical sections are considered identical (if you prefer, there's only one lock for all unnamed critical sections), so that if one thread is in one [unnamed] critical section as above, no thread can enter any [unnamed] critical section. As you might guess, you can get around this by using named critical sections.)

An atomic operation has much lower overhead. Where available, it takes advantage of hardware support for (say) an atomic increment operation; in that case there's no lock/unlock needed on entering/exiting the line of code, it just performs the atomic increment, which the hardware guarantees can't be interfered with.

The upsides are that the overhead is much lower, and one thread being in an atomic operation doesn't block any (different) atomic operations about to happen. The downside is the restricted set of operations that atomic supports.

Of course, in either case, you incur the cost of serialization.




Answer 2:


In OpenMP, all the unnamed critical sections are mutually exclusive.

The most important difference between critical and atomic is that atomic can protect only a single assignment, and you can use it only with specific operators.




Answer 3:


Critical section:

  • Ensures serialisation of blocks of code.
  • Can be extended to serialise groups of blocks with proper use of the "name" tag.

  • Slower!

Atomic operation:

  • Is much faster!

  • Only ensures the serialisation of a particular operation.




Answer 4:


The fastest way is neither critical nor atomic. Roughly, an addition protected by a critical section is about 200 times more expensive than a simple addition, and an atomic addition is about 25 times more expensive than a simple addition.

The fastest option (not always applicable) is to give each thread its own counter and perform a reduction when you need the total sum.




Answer 5:


The limitations of atomic are important; they are detailed in the OpenMP specs. MSDN offers a quick cheat sheet, and I wouldn't be surprised if it doesn't change. (Visual Studio 2012 ships an OpenMP implementation dating from March 2002.) To quote MSDN:

The expression statement must have one of the following forms:

x binop= expr

x++

++x

x--

--x

In the preceding expressions: x is an lvalue expression with scalar type; expr is an expression with scalar type that does not reference the object designated by x; binop is not an overloaded operator and is one of +, *, -, /, &, ^, |, <<, or >>.

I recommend using atomic when you can, and named critical sections otherwise. Naming them is important; you'll avoid debugging headaches this way.




Answer 6:


There are already great explanations here, but we can dive a bit deeper. To understand the core difference between the atomic and critical section concepts in OpenMP, we first have to understand the concept of a lock. Let's review why locks are needed.

A parallel program is executed by multiple threads. Deterministic results require synchronization between these threads whenever they share state. Of course, synchronization is not always required; here we are concerned with the cases where it is.

To synchronize the threads in a multi-threaded program, we use locks. A lock comes into play when access must be restricted to one thread at a time. The lock implementation may vary from processor to processor, so let's look at how a simple lock works from an algorithmic point of view.

1. Define a variable called lock.
2. For each thread:
   2.1. Read the lock.
   2.2. If lock == 0, lock = 1 and goto 3    // Try to grab the lock
       Else goto 2.1    // Wait until the lock is released
3. Do something...
4. lock = 0    // Release the lock

The given algorithm can be implemented in assembly as follows. We'll assume a single-processor system and analyze the behavior of locks on it. For this exercise, assume one of the following processors: MIPS, Alpha, ARM, or Power.

try:    LW R1, lock
        BNEZ R1, try
        ADDI R1, R1, #1
        SW R1, lock

This program seems to be OK, but it is not: it suffers from the very problem we are trying to solve, lack of synchronization. Assume the initial value of lock is zero. If two threads run this code, one might reach SW R1, lock before the other has even read the lock variable; thus both of them think the lock is free. To solve this, processors provide a read-modify-write instruction instead of a simple LW and SW: a complex instruction (consisting of sub-instructions) which assures that lock acquisition is performed by only a single thread at a time. A read-modify-write differs from plain reads and writes in the way it loads and stores: it uses LL (Load Linked) to load the lock variable and SC (Store Conditional) to write to it, and an additional link register assures that the acquisition succeeds for only a single thread. The algorithm is given below.

1. Define a variable called lock.
2. For each thread:
   2.1. Read the lock and put the address of lock variable inside the Link Register.
   2.2. If (lock == 0) and (&lock == Link Register), lock = 1 and reset the Link Register then goto 3    // Try to grab the lock
       Else goto 2.1    // Wait until the lock is released
3. Do something...
4. lock = 0    // Release the lock

When the link register is reset, any other thread that assumed the lock was free will fail its conditional store and won't write the incremented value to the lock. Thus, exclusive access to the lock variable is guaranteed.

The core difference between critical and atomic comes from the idea that:

Why use a separate lock variable when we could use the actual variable (the one we are operating on) as the lock?

Using a separate variable for the lock leads to the critical section, while using the actual variable as the lock leads to the atomic concept. A critical section is useful when we are performing many computations (more than one line) on the actual variable, because if the result of those computations failed to be written back to the actual variable, the whole procedure would have to be repeated to recompute the results. That can lead to poor performance compared to simply waiting for the lock to be released before entering a computation-heavy region. Thus, it is recommended to use the atomic directive whenever you perform a single computation (x++, x--, ++x, --x, etc.) and the critical directive when a more computationally complex region is involved.




Answer 7:


atomic is relatively efficient when you need mutual exclusion for only a single instruction; the same is not true of omp critical.




Answer 8:


atomic is a single-statement critical section; i.e., you lock the execution of one statement.

critical is a lock on a block of code.

A good compiler will translate your second snippet the same way it translates the first.



Source: https://stackoverflow.com/questions/7798010/what-is-the-difference-between-atomic-and-critical-in-openmp
