MPI struct datatype with an array


Question


I would like to easily send a someObject in one MPI_SEND/RECV call in MPI.

   type someObject
     integer :: foo
     real :: bar,baz
     double precision :: a,b,c
     double precision, dimension(someParam) :: x, y
   end type someObject

I started using an MPI_TYPE_STRUCT, but then realized that the sizes of the arrays x and y depend on someParam. I initially thought of nesting an MPI_TYPE_CONTIGUOUS in the struct to represent the arrays, but cannot seem to get this to work, if it is even possible.

  ! Setup description of the 1 MPI_INTEGER field
  offsets(0) = 0
  oldtypes(0) = MPI_INTEGER
  blockcounts(0) = 1
  ! Setup description of the 2 MPI_REAL fields
  call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
  offsets(1) = blockcounts(0) * extent
  oldtypes(1) = MPI_REAL
  blockcounts(1) = 2
  ! Setup description of the 3 MPI_DOUBLE_PRECISION fields
  call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
  offsets(2) = offsets(1) + blockcounts(1) * extent
  oldtypes(2) = MPI_DOUBLE_PRECISION
  blockcounts(2) = 3
  ! Setup x and y MPI_DOUBLE_PRECISION array fields
  call MPI_TYPE_CONTIGUOUS(someParam, MPI_DOUBLE_PRECISION, sOarraytype, ierr)
  call MPI_TYPE_COMMIT(sOarraytype, ierr)
  call MPI_TYPE_EXTENT(sOarraytype, extent, ierr)
  offsets(3) = offsets(2) + blockcounts(2) * extent
  oldtypes(3) = sOarraytype
  blockcounts(3) = 2 ! x and y

  ! Now Define structured type and commit it
  call MPI_TYPE_STRUCT(4, blockcounts, offsets, oldtypes, sOtype, ierr)
  call MPI_TYPE_COMMIT(sOtype, ierr)

What I would like to do:

...
type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
do i=1,someParam
  x(i) = i
  y(i) = i
end do
newObject = someObject(1,0.0,1.0,2.0,3.0,4.0,x,y)
call MPI_SEND(newObject, 1, sOtype, 1, 1, MPI_COMM_WORLD, ierr) ! master
...
! slave would:
call MPI_RECV(rcvObject, 1, sOtype, master, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
WRITE(*,*) rcvObject%foo
do i=1,someParam
  WRITE(*,*) rcvObject%x(i), rcvObject%y(i)
end do
...

So far I am just getting segmentation faults, without much indication of what I'm doing wrong or whether this is even possible. The documentation never said I couldn't use a contiguous datatype inside a struct datatype.


Answer 1:


From what I can tell, you can't nest those kinds of datatypes, and my attempt was a completely wrong solution.

Thanks to: http://static.msi.umn.edu/tutorial/scicomp/general/MPI/mpi_data.html and http://www.osc.edu/supercomputing/training/mpi/Feb_05_2008/mpi_0802_mod_datatypes.pdf for guidance.

The right way to define the MPI_TYPE_STRUCT is as follows:

type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
data x/someParam * 0/, y/someParam * 0/
integer sOtype, oldtypes(0:7), blocklengths(0:7), offsets(0:7), iextent, rextent, dpextent
! Define MPI datatype for someObject object
! set up extents
call MPI_TYPE_EXTENT(MPI_INTEGER, iextent, ierr)
call MPI_TYPE_EXTENT(MPI_REAL, rextent, ierr)
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, dpextent, ierr)
! setup blocklengths /foo,bar,baz,a,b,c,x,y/
data blocklengths/1,1,1,1,1,1,someParam,someParam/
! setup oldtypes
oldtypes(0) = MPI_INTEGER
oldtypes(1) = MPI_REAL
oldtypes(2) = MPI_REAL
oldtypes(3) = MPI_DOUBLE_PRECISION
oldtypes(4) = MPI_DOUBLE_PRECISION
oldtypes(5) = MPI_DOUBLE_PRECISION
oldtypes(6) = MPI_DOUBLE_PRECISION
oldtypes(7) = MPI_DOUBLE_PRECISION
! setup offsets
offsets(0) = 0
offsets(1) = iextent * blocklengths(0)
offsets(2) = offsets(1) + rextent*blocklengths(1)
offsets(3) = offsets(2) + rextent*blocklengths(2)
offsets(4) = offsets(3) + dpextent*blocklengths(3)
offsets(5) = offsets(4) + dpextent*blocklengths(4)
offsets(6) = offsets(5) + dpextent*blocklengths(5)
offsets(7) = offsets(6) + dpextent*blocklengths(6)
! Now Define structured type and commit it
call MPI_TYPE_STRUCT(8, blocklengths, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)

That allows me to send and receive the object the way I originally wanted!
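
For completeness, here is a minimal sketch (not part of the original answer) of the same type map built with MPI_GET_ADDRESS and MPI_TYPE_CREATE_STRUCT, the MPI-2 replacements for the deprecated MPI_TYPE_EXTENT and MPI_TYPE_STRUCT. Displacements are measured from an actual variable, so any padding the compiler inserts into the derived type is accounted for automatically. It assumes the someObject definition from the question and the usual mpif.h / use mpi environment:

  ! Sketch: build the struct type from real member addresses.
  type(someObject) :: sample
  integer :: sOtype, oldtypes(0:7), blocklengths(0:7), ierr
  integer(kind=MPI_ADDRESS_KIND) :: offsets(0:7), base

  ! take the address of each member of a sample object
  call MPI_GET_ADDRESS(sample%foo, offsets(0), ierr)
  call MPI_GET_ADDRESS(sample%bar, offsets(1), ierr)
  call MPI_GET_ADDRESS(sample%baz, offsets(2), ierr)
  call MPI_GET_ADDRESS(sample%a,   offsets(3), ierr)
  call MPI_GET_ADDRESS(sample%b,   offsets(4), ierr)
  call MPI_GET_ADDRESS(sample%c,   offsets(5), ierr)
  call MPI_GET_ADDRESS(sample%x,   offsets(6), ierr)
  call MPI_GET_ADDRESS(sample%y,   offsets(7), ierr)
  ! convert absolute addresses to displacements from the first member
  base = offsets(0)
  offsets = offsets - base

  blocklengths(0:5) = 1
  blocklengths(6:7) = someParam
  oldtypes(0)   = MPI_INTEGER
  oldtypes(1:2) = MPI_REAL
  oldtypes(3:7) = MPI_DOUBLE_PRECISION

  call MPI_TYPE_CREATE_STRUCT(8, blocklengths, offsets, oldtypes, sOtype, ierr)
  call MPI_TYPE_COMMIT(sOtype, ierr)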




Answer 2:


The MPI struct type is a big headache. If this code is not in a performance-critical part of your program, look into MPI_PACKED. The packing call is relatively slow (basically one function call per element you're sending!), so don't use it for very large messages, but it is fairly easy to use and very flexible in what you can send. A minimal sketch of that approach follows.
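
A sketch of the MPI_PACK / MPI_UNPACK route (my own illustration, not from the answer): each field is packed into a byte buffer, the used portion is sent as MPI_PACKED, and the receiver unpacks in the same order. The bufsize bound here is an assumption; real code would size it with MPI_PACK_SIZE. It assumes the question's someObject, newObject/rcvObject variables, and rank 0 as master:

  integer, parameter :: bufsize = 65536   ! assumed bound; compute with MPI_PACK_SIZE in real code
  character, dimension(bufsize) :: buffer
  integer :: position, ierr, status(MPI_STATUS_SIZE)

  ! sender: pack each field in order, then ship only the bytes used
  position = 0
  call MPI_PACK(newObject%foo, 1, MPI_INTEGER, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%bar, 1, MPI_REAL, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%baz, 1, MPI_REAL, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%a, 1, MPI_DOUBLE_PRECISION, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%b, 1, MPI_DOUBLE_PRECISION, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%c, 1, MPI_DOUBLE_PRECISION, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%x, someParam, MPI_DOUBLE_PRECISION, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_PACK(newObject%y, someParam, MPI_DOUBLE_PRECISION, buffer, bufsize, position, MPI_COMM_WORLD, ierr)
  call MPI_SEND(buffer, position, MPI_PACKED, 1, 1, MPI_COMM_WORLD, ierr)

  ! receiver: unpack in exactly the same order the sender packed
  call MPI_RECV(buffer, bufsize, MPI_PACKED, 0, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
  position = 0
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%foo, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%bar, 1, MPI_REAL, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%baz, 1, MPI_REAL, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%a, 1, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%b, 1, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%c, 1, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%x, someParam, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
  call MPI_UNPACK(buffer, bufsize, position, rcvObject%y, someParam, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)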



Source: https://stackoverflow.com/questions/8231937/mpi-struct-datatype-with-an-array
