Influence on the static scheduling overhead in OpenMP


You need to separate the overhead of OpenMP creating a team/pool of threads from the overhead of each thread working on its own set of iterations of a for loop.
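
A rough way to see the two costs separately (this is only a sketch of my own, assuming GCC or Clang with -fopenmp; the loop count and the trivial loop body are arbitrary choices to make the scheduling cost visible) is to time an empty parallel region first, which on most runtimes pays for creating the thread pool, and then time loops whose body is so cheap that the scheduling cost dominates:

#include <omp.h>
#include <stdio.h>

int main() {
    const int N = 10000000;
    long long sum = 0;

    // First parallel region: most runtimes create/wake the thread pool here,
    // so this timing is dominated by team creation.
    double t0 = omp_get_wtime();
    #pragma omp parallel
    { }
    printf("team creation      : %g s\n", omp_get_wtime() - t0);

    // Later regions typically reuse the pool, so the timings below are
    // dominated by the per-iteration scheduling cost.
    t0 = omp_get_wtime();
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for(int i=0; i<N; i++) sum += i;   // trivial body
    printf("schedule(static)   : %g s\n", omp_get_wtime() - t0);

    t0 = omp_get_wtime();
    #pragma omp parallel for schedule(dynamic,1) reduction(+:sum)
    for(int i=0; i<N; i++) sum += i;   // trivial body
    printf("schedule(dynamic,1): %g s\n", omp_get_wtime() - t0);

    return (int)(sum & 1);             // keep the loops from being optimized away
}

On a typical machine schedule(dynamic,1) on such a cheap body is noticeably slower than schedule(static) even though both do the same work, and that difference is exactly the scheduling overhead in question.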

Static scheduling is easy to implement by hand (which is sometimes very useful). Let's look at what I consider the two most important static schedules, schedule(static) and schedule(static,1), and then compare them to schedule(dynamic,chunk).

#pragma omp parallel for schedule(static)
for(int i=0; i<N; i++) foo(i);

is equivalent to (but not necessarily equal to)

#pragma omp parallel
{
    int start = omp_get_thread_num()*N/omp_get_num_threads();
    int finish = (omp_get_thread_num()+1)*N/omp_get_num_threads();
    for(int i=start; i<finish; i++) foo(i);
}

and

#pragma omp parallel for schedule(static,1)
for(int i=0; i<N; i++) foo(i);

is equivalent to

#pragma omp parallel 
{
    int ithread = omp_get_thread_num();
    int nthreads = omp_get_num_threads();
    for(int i=ithread; i<N; i+=nthreads) foo(i);
}

From this you can see that it's quite trivial to implement static scheduling and so the overhead is negligible.
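
As a quick sanity check (my own sketch, using an array fill as a stand-in for foo(i) and timing with omp_get_wtime), you can compare the pragma with the hand-rolled schedule(static,1) version and confirm they cover the same iterations with comparable runtime:

#include <omp.h>
#include <stdio.h>
#include <vector>

int main() {
    const int N = 10000000;
    std::vector<double> a(N), b(N);

    double t0 = omp_get_wtime();
    #pragma omp parallel for schedule(static,1)
    for(int i=0; i<N; i++) a[i] = 2.0*i;                   // stand-in for foo(i)
    double t_pragma = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
    #pragma omp parallel
    {
        int ithread  = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        for(int i=ithread; i<N; i+=nthreads) b[i] = 2.0*i; // hand-rolled static,1
    }
    double t_manual = omp_get_wtime() - t0;

    bool same = true;
    for(int i=0; i<N; i++) if(a[i] != b[i]) { same = false; break; }
    printf("pragma: %g s, by hand: %g s, results match: %s\n",
           t_pragma, t_manual, same ? "yes" : "no");
}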

On the other hand, if you want to implement schedule(dynamic) (which is the same as schedule(dynamic,1)) by hand, it's more complicated:

int cnt = 0;
#pragma omp parallel
for(int i=0;;) {
    // atomically grab the next iteration index from the shared counter
    #pragma omp atomic capture
    i = cnt++;
    if(i>=N) break;
    foo(i);
}

This requires OpenMP >= 3.1 for the atomic capture clause. If you are stuck with OpenMP 2.0 (e.g. with MSVC), you would need to use critical instead, like this:

int cnt = 0;
#pragma omp parallel
for(int i=0;;) {
    // only one thread at a time increments the shared counter
    #pragma omp critical
    i = cnt++;
    if(i>=N) break;
    foo(i);
}

Here is an equivalent of schedule(dynamic,chunk) (I have not optimized this by using atomic accesses):

int cnt = 0;
int chunk = 5;
#pragma omp parallel
{
    int start, finish;
    do {
        // grab the next chunk of iterations from the shared counter
        #pragma omp critical
        {
            start = cnt;
            finish = cnt+chunk < N ? cnt+chunk : N;
            cnt += chunk;
        }
        for(int i=start; i<finish; i++) foo(i);
    } while(finish<N);
}

Clearly, the atomic (or critical) access to a shared counter causes more overhead than static scheduling does. It also shows why larger chunks with schedule(dynamic,chunk) reduce the overhead: the shared counter is updated once per chunk instead of once per iteration.
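
To make that concrete, here is a rough sketch of my own that times schedule(dynamic,chunk) for a few chunk sizes on a deliberately cheap loop body; as the chunk grows, the shared counter is touched less often and the measured overhead should shrink:

#include <omp.h>
#include <stdio.h>

int main() {
    const int N = 10000000;
    const int chunks[] = {1, 8, 64, 512};

    for(int c : chunks) {
        long long sum = 0;
        double t0 = omp_get_wtime();
        // the chunk size may be a runtime expression in schedule(dynamic, ...)
        #pragma omp parallel for schedule(dynamic,c) reduction(+:sum)
        for(int i=0; i<N; i++) sum += i;   // cheap body: scheduling cost dominates
        printf("chunk %4d: %g s (sum=%lld)\n", c, omp_get_wtime() - t0, sum);
    }
}

The exact numbers depend on the machine and the OpenMP runtime, but the trend of decreasing overhead with growing chunk size should be visible.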
