Nested loops parallelized with OpenMP run slowly

Submitted by 北慕城南 on 2019-12-20 04:54:17

Question


I've got a part of a Fortran program consisting of some nested loops which I want to parallelize with OpenMP.

integer :: nstates , N, i, dima, dimb, dimc, a_row, b_row, b_col, c_row, row, col
double complex, dimension(4,4):: mat
double complex, dimension(:), allocatable :: vecin,vecout 

nstates = 2
N = 24

allocate(vecin(nstates**N), vecout(nstates**N))
vecin = ...some data
vecout = 0

mat = reshape([...some data...],[4,4])

dimb=nstates**2

!$OMP PARALLEL DO PRIVATE(dima,dimc,row,col,a_row,b_row,c_row,b_col) 
do i=1,N-1
    dima=nstates**(i-1)
    dimc=nstates**(N-i-1)

    do a_row = 1, dima
        do b_row = 1,dimb
            do c_row = 1,dimc
                row = ((a_row-1)*dimb + b_row - 1)*dimc + c_row
                do b_col = 1,dimb
                    col = ((a_row-1)*dimb + b_col - 1)*dimc + c_row
                    ! different i iterations can update the same row, hence the ATOMIC
                    !$OMP ATOMIC
                    vecout(row) = vecout(row) + vecin(col)*mat(b_row,b_col)
                end do
            end do
        end do
    end do
end do
!$OMP END PARALLEL DO 

The program runs and the result it produces is correct, but it is incredibly slow, much slower than without OpenMP. I don't know much about OpenMP. Have I done something wrong with the use of PRIVATE or OMP ATOMIC? I would be grateful for any advice on how to improve the performance of my code.


Answer 1:


If your arrays are too large and an automatic REDUCTION clause gives you stack overflows, you can implement the reduction yourself with allocatable temporary arrays: each thread accumulates into its own copy of the output, and the copies are merged once at the end, which also removes the ATOMIC from the innermost loop.
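For reference, the automatic reduction mentioned above is a one-line change to the original directive (a sketch, assuming your compiler supports REDUCTION on allocatable arrays; each thread then carries a private copy of vecout, which is exactly what overflows the stack when nstates**N is large):

!$OMP PARALLEL DO PRIVATE(dima,dimc,row,col,a_row,b_row,c_row,b_col) &
!$OMP& REDUCTION(+:vecout)
do i=1,N-1
    ! ... same loop nest as in the question, without the OMP ATOMIC ...
end do
!$OMP END PARALLEL DO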

As Francois Jacq pointed out, dima and dimc are assigned inside the parallelized loop, so they must be private; otherwise they cause a race condition. (dimb, by contrast, is set once before the parallel region and must stay shared.)

double complex, dimension(:), allocatable :: tmp

!$OMP PARALLEL PRIVATE(dima,dimc,row,col,a_row,b_row,c_row,b_col,tmp)

allocate(tmp(size(vecout)))   ! per-thread accumulator for partial sums
tmp = 0

!$OMP DO
do i=1,N-1
    dima=nstates**(i-1)
    dimc=nstates**(N-i-1)

    do a_row = 1, dima
        do b_row = 1,dimb
            do c_row = 1,dimc
                row = ((a_row-1)*dimb + b_row - 1)*dimc + c_row
                do b_col = 1,dimb
                    col = ((a_row-1)*dimb + b_col - 1)*dimc + c_row
                    tmp(row) = tmp(row) + vecin(col)*mat(b_row,b_col)
                end do
            end do
        end do
    end do
end do
!$OMP END DO

!$OMP CRITICAL
vecout = vecout + tmp   ! merge this thread's partial result; executed once per thread
!$OMP END CRITICAL
!$OMP END PARALLEL



Answer 2:


Could you try something like:

do b_col=1,dimb
   do i=1,N-1
      dima=nstates**(i-1)
      dimc=nstates**(N-i-1)
      !$OMP PARALLEL DO COLLAPSE(3) PRIVATE(row,col,a_row,b_row,c_row)
      do a_row = 1, dima
         do b_row = 1,dimb
            do c_row = 1,dimc
                row = ((a_row-1)*dimb + b_row - 1)*dimc + c_row
                col = ((a_row-1)*dimb + b_col - 1)*dimc + c_row
                vecout(row) = vecout(row) + vecin(col)*mat(b_row,b_col)
            enddo
         enddo
      enddo
   enddo
enddo

The advantage is that the parallel loop no longer causes collisions: because row does not depend on b_col, all the row indexes inside the parallel region are different, so no ATOMIC is needed.
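One caveat (an observation on this sketch, not part of the original answer): the parallel region above is re-created for every (b_col, i) pair. If that startup overhead shows up, a single enclosing PARALLEL region with an inner OMP DO avoids it; a minimal sketch:

!$OMP PARALLEL PRIVATE(dima,dimc,row,col,a_row,b_row,c_row)
do b_col=1,dimb   ! sequential loop indexes are automatically private here
   do i=1,N-1
      dima=nstates**(i-1)
      dimc=nstates**(N-i-1)
      !$OMP DO COLLAPSE(3)
      do a_row = 1, dima
         do b_row = 1,dimb
            do c_row = 1,dimc
               row = ((a_row-1)*dimb + b_row - 1)*dimc + c_row
               col = ((a_row-1)*dimb + b_col - 1)*dimc + c_row
               vecout(row) = vecout(row) + vecin(col)*mat(b_row,b_col)
            enddo
         enddo
      enddo
      !$OMP END DO   ! the implicit barrier keeps successive (b_col,i) steps ordered
   enddo
enddo
!$OMP END PARALLEL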



Source: https://stackoverflow.com/questions/29385643/with-openmp-parallelized-nested-loops-run-slow
