segmentation fault openmp error

Grizzly

I'm not sure if it is the only problem with your code, but standard containers (such as std::map) are not threadsafe, at least if you write to them. So if you have any write access to your maps, such as dist[(*datIt).first][(*datIt2).first]=ecl;, you need to wrap every access to the maps in some sort of synchronization structure, either using #pragma omp critical or mutexes (OpenMP locks, or, if you use Boost or C++11, boost::mutex and std::mutex are options too):

//before the parallel (requires #include <omp.h>):
omp_lock_t lock;
omp_init_lock(&lock);
...

//inside the parallel region, around every write to dist:
omp_set_lock(&lock);
dist[(*datIt).first][(*datIt2).first]=ecl;
dist[(*datIt2).first][(*datIt).first]=ecl;
omp_unset_lock(&lock);
...

//after the parallel:
omp_destroy_lock(&lock);
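
Equivalently, a minimal sketch of the same guarded writes using C++11's std::mutex (the name distMutex is my own; like the lock above, it must be shared between the threads, i.e. declared before the parallel region):

#include <mutex>

std::mutex distMutex; //shared by all threads

//inside the parallel region:
{
   std::lock_guard<std::mutex> guard(distMutex); //unlocked automatically at the end of the scope
   dist[(*datIt).first][(*datIt2).first]=ecl;
   dist[(*datIt2).first][(*datIt).first]=ecl;
}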

Since you only read from dat, it should be fine without synchronization (in C++11 at least; C++03 has no guarantees about threadsafety whatsoever, since it has no concept of threads). It should typically be fine to use it without synchronization anyway, but technically it's implementation-dependent behaviour.

Furthermore, since you didn't specify the data sharing, all variables declared outside the parallel region are shared by default. Therefore your write accesses to datIt and datIt2 also present race conditions. For datIt2 this can be avoided by either specifying it as private, or, even better, declaring it at the point of first use:

map< int,string >::iterator datIt2=dat.find((*datIt).first);
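
If you prefer the clause-based form instead, a sketch (note that a private datIt2 is uninitialized on entry to the region, so it still has to be assigned inside before use):

map< int,string >::iterator datIt2; //declared outside, so shared by default
#pragma omp parallel private(datIt2) //each thread gets its own, uninitialized copy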

Solving this for datIt is a bit more problematic, since it seems you want to iterate over the whole length of the map. The easiest way (which avoids a costly O(n) advance in each iteration) seems to be to operate on a private copy of datIt, which is advanced accordingly (not guaranteeing 100% correctness, just a quick outline):

#pragma omp parallel //construct the distance matrix
{
   map< int,string >::iterator datItLocal=datIt; //private copy of the shared iterator
   int lastIdx = 0;
   #pragma omp for
   for(int i=0;i<size;i++)
   {
      std::advance(datItLocal, i - lastIdx); //only advance by the gap since this thread's last iteration
      lastIdx = i;
      //use datItLocal instead of datIt every time you reference datIt in the parallel region
      //remove ++datIt
   }
}

This way the map is iterated omp_get_num_threads() times in total, but it should work. If that is an unacceptable overhead for you, look at this answer of mine for alternative solutions for looping over a bidirectional iterator in OpenMP.
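
Put together as a minimal, self-contained sketch of the pattern (the map contents and the keys output are made up for illustration; each iteration writes a distinct element, so no locking is needed here):

#include <map>
#include <string>
#include <iterator>
#include <vector>

int main()
{
   std::map<int,std::string> dat = {{1,"a"},{2,"b"},{3,"c"},{4,"d"}};
   const int size = static_cast<int>(dat.size());
   std::vector<int> keys(size);

   std::map<int,std::string>::iterator datIt = dat.begin();
   #pragma omp parallel
   {
      std::map<int,std::string>::iterator datItLocal = datIt;
      int lastIdx = 0;
      #pragma omp for
      for(int i = 0; i < size; i++)
      {
         std::advance(datItLocal, i - lastIdx); //advance only by the gap since this thread's last iteration
         lastIdx = i;
         keys[i] = datItLocal->first; //each i touches a distinct element of keys
      }
   }
   return 0;
}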

As a sidenote: maybe I missed something, but to me it seems that, considering datIt is an iterator into dat, dat.find(datIt->first) is a bit redundant. There should be only one element in the map with the given key, and datIt points to it, so this seems like an expensive way of saying datIt2=datIt (correct me if I'm wrong).

In addition to Grizzly's answer: you do not specify anything as private or shared, which usually means you do not control the access to your memory. You have to declare datIt firstprivate and datIt2 private when you enter the parallel region, otherwise every thread overwrites the shared value, which leads to segfaults.

Instead of using locks I would use a critical section, which is more OpenMP-ish.
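
For example, a sketch of the guarded writes from above as a named critical section (the name dist_update is arbitrary; all threads serialize on it):

#pragma omp critical(dist_update)
{
   dist[(*datIt).first][(*datIt2).first]=ecl;
   dist[(*datIt2).first][(*datIt).first]=ecl;
}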
