openmpi

How to build boost::mpi library with Open MPI on Windows with Visual Studio 2010

久未见 submitted on 2019-12-02 05:22:34
I installed Open MPI 1.5.4 (64-bit) and am trying to rebuild the Boost libraries (1.48) with bjam. I changed the user-config.jam file by adding a using mpi line with an explicit compiler path (although mpic++ is already in the PATH environment variable):

    using mpi : "C:/Program Files (x86)/OpenMPI_v1.5.4-x64/bin/mpic++.exe" ;

Then I ran the following command from the command prompt:

    bjam toolset=msvc --build-type=complete --with-mpi --address-model=64 stage

Unfortunately, the build process still needs more hints. Part of the error output looks like: MPI auto-detection failed: unknown wrapper
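When bjam reports "MPI auto-detection failed: unknown wrapper", Boost.Build's mpi module also accepts the compile and link options spelled out explicitly in place of the wrapper. The following user-config.jam sketch is a hedged guess: the paths mirror the install location from the question, but the library name (libmpi) is an assumption and should be checked against the import libraries actually shipped with that Open MPI build.

    using mpi : :
        <include>"C:/Program Files (x86)/OpenMPI_v1.5.4-x64/include"
        <library-path>"C:/Program Files (x86)/OpenMPI_v1.5.4-x64/lib"
        <find-shared-library>libmpi ;  # library name is an assumption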

MPI_ERR_TRUNCATE: On Broadcast

喜你入骨 submitted on 2019-12-01 18:30:49
I have an int that I intend to broadcast from the root ( rank == (FIELD = 0) ):

    int winner;
    if (rank == FIELD) {
        winner = something;
    }
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank != FIELD) {
        cout << rank << " informed that winner is " << winner << endl;
    }

But it appears I get:

    [JM:6892] *** An error occurred in MPI_Bcast
    [JM:6892] *** on communicator MPI_COMM_WORLD
    [JM:6892] *** MPI_ERR_TRUNCATE: message truncated
    [JM:6892] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort

Found that I can increase the buffer size in Bcast MPI
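MPI_ERR_TRUNCATE on a broadcast usually means the count/datatype pair does not match across ranks, often because an earlier message with a different size is still matching on the same communicator. For contrast, a minimal self-contained sketch of a correctly matched broadcast is shown below; FIELD and the value 42 are illustrative stand-ins, not taken from the question's full code.

    #include <mpi.h>
    #include <iostream>

    static const int FIELD = 0;  // root rank, as in the question

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int winner = 0;      // declared on every rank
        if (rank == FIELD) {
            winner = 42;     // illustrative value
        }
        // Every rank must call MPI_Bcast with the same count and datatype;
        // the surrounding barriers in the question are not required.
        MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD);

        if (rank != FIELD) {
            std::cout << rank << " informed that winner is " << winner << std::endl;
        }
        MPI_Finalize();
        return 0;
    }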

MPI not running in parallel in a FORTRAN code

空扰寡人 submitted on 2019-12-01 12:54:13
I am trying to install Open MPI on my Ubuntu (14.04) machine, and I thought I had succeeded, because I can run codes with mpirun, but I have recently noticed that it is not truly running in parallel. I installed Open MPI with the following options:

    ./configure CXX=g++ CC=gcc F77=gfortran \
                F90=gfortran \
                FC=gfortran \
                --enable-mpi-f77 \
                --enable-mpi-f90 \
                --prefix=/opt/openmpi-1.6.5
    make all
    sudo make install

As I said, I have run a code (not written by myself) and it seemed to work in parallel, because I checked with top and it was running on several nodes. But now I have written a
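A common cause of this symptom is that the program was compiled with a plain (non-MPI) compiler or against a different MPI than the mpirun being used, so mpirun simply launches N independent serial copies. Regardless of the language of the real code, a quick sanity check is to build a tiny program with the MPI compiler wrappers and confirm that the ranks and the communicator size come out right. A minimal C++ sketch (compile with mpic++, run with mpirun -np 4):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        // With a correct build, "mpirun -np 4" prints size == 4 once per rank.
        // If every line reports "rank 0 of 1", the code was not built against
        // the same MPI installation that mpirun belongs to.
        std::printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }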

MPI-3 Shared Memory for Array Struct

余生颓废 submitted on 2019-12-01 12:33:52
I have a simple C++ struct that basically wraps a standard C array:

    struct MyArray {
        T* data;
        int length;
        // ...
    };

where T is a numeric type such as float or double, and length is the number of elements in the array. Typically my arrays are very large (tens of thousands up to tens of millions of elements). I have an MPI program where I would like to expose two instances of MyArray, say a_old and a_new, as shared-memory objects via MPI-3 shared memory. The context is that each MPI rank reads from a_old. Then, each MPI rank writes to certain indices of a_new (each rank only writes to its own set of
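For reference, the usual MPI-3 recipe is to split the world communicator by node, let one rank per node allocate the shared window, and have every other rank query the base address. The sketch below is written under the assumption T = double, with a hypothetical element count n; error checking and the a_new counterpart are omitted.

    #include <mpi.h>

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);

        // Communicator containing only the ranks on this node.
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        const int n = 1000000;  // hypothetical element count
        MPI_Aint bytes = (node_rank == 0) ? n * sizeof(double) : 0;

        // Rank 0 on the node allocates the whole segment; others allocate 0 bytes.
        double* base = nullptr;
        MPI_Win win;
        MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                                node_comm, &base, &win);

        // Every rank obtains a pointer to rank 0's segment.
        MPI_Aint seg_bytes;
        int disp_unit;
        double* a_old_data = nullptr;
        MPI_Win_shared_query(win, 0, &seg_bytes, &disp_unit, &a_old_data);

        // a_old_data can now back MyArray{a_old_data, n} on every node rank;
        // accesses must be synchronized, e.g. with MPI_Win_fence or a barrier.

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }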

Unable to run MPI when transferring large data

不羁的心 submitted on 2019-12-01 12:27:10
I used MPI_Isend to transfer an array of chars to the slave node. When the size of the array is small it works, but when I enlarge the array it hangs. Code running on the master node (rank 0):

    MPI_Send(&text_length, 1, MPI_INT, dest, MSG_TEXT_LENGTH, MPI_COMM_WORLD);
    MPI_Isend(text->chars, 360358, MPI_CHAR, dest, MSG_SEND_STRING, MPI_COMM_WORLD, &request);
    MPI_Wait(&request, &status);

Code running on the slave node (rank 1):

    MPI_Recv(&count, 1, MPI_INT, 0, MSG_TEXT_LENGTH, MPI_COMM_WORLD, &status);
    MPI_Irecv(host_read_string, count, MPI_CHAR, 0, MSG_SEND_STRING, MPI_COMM_WORLD, &request);
    MPI_Wait(
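One thing that stands out in the excerpt is that the sender announces text_length but then hard-codes 360358 as the MPI_Isend count, while the receiver posts MPI_Irecv with the count it just received; if those two numbers disagree, the transfer can truncate or stall. A hedged sketch of the symmetric pattern, using the announced length on both sides (variable and tag names follow the question; the tag values and helper functions are illustrative):

    #include <mpi.h>
    #include <cstdlib>

    enum { MSG_TEXT_LENGTH = 1, MSG_SEND_STRING = 2 };  // illustrative tags

    void send_text(const char* chars, int text_length, int dest) {
        MPI_Request request;
        MPI_Status status;
        MPI_Send(&text_length, 1, MPI_INT, dest, MSG_TEXT_LENGTH, MPI_COMM_WORLD);
        // Send the same length that was announced, not a hard-coded constant.
        MPI_Isend(const_cast<char*>(chars), text_length, MPI_CHAR, dest,
                  MSG_SEND_STRING, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }

    void recv_text(int src) {
        MPI_Request request;
        MPI_Status status;
        int count = 0;
        MPI_Recv(&count, 1, MPI_INT, src, MSG_TEXT_LENGTH, MPI_COMM_WORLD, &status);
        // Allocate exactly what was announced before posting the receive.
        char* host_read_string = static_cast<char*>(std::malloc(count));
        MPI_Irecv(host_read_string, count, MPI_CHAR, src, MSG_SEND_STRING,
                  MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
        std::free(host_read_string);
    }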

a custom interrupt handler for mpirun

浪子不回头ぞ submitted on 2019-12-01 10:51:02
Apparently, mpirun uses a SIGINT handler which "forwards" the SIGINT signal to each of the processes it spawned. This means you can write an interrupt handler for your MPI-enabled code, execute mpirun -np 3 my-mpi-enabled-executable, and then SIGINT will be raised for each of the three processes. Shortly after that, mpirun exits. This works fine when you have a small custom handler which only prints an error message and then exits. However, when your custom interrupt handler is doing a non-trivial job (e.g. doing serious computations or persisting data), the handler does not run to completion.
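The usual way around this race is to keep the handler itself trivial and move the expensive cleanup into normal control flow, since mpirun typically gives the job only a short grace period after a SIGINT before escalating. A minimal sketch of the flag-based pattern (the work loop and checkpoint call are placeholders):

    #include <mpi.h>
    #include <csignal>

    // The handler only sets a flag: async-signal-safe, and it returns fast.
    volatile std::sig_atomic_t stop_requested = 0;

    extern "C" void on_sigint(int) { stop_requested = 1; }

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        std::signal(SIGINT, on_sigint);

        while (!stop_requested) {
            // ... one bounded chunk of real work per iteration ...
        }

        // Persist data here, in ordinary code, not inside the handler.
        // save_checkpoint();  // hypothetical function

        MPI_Finalize();
        return 0;
    }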

How to build boost with mpi support on homebrew?

有些话、适合烂在心里 submitted on 2019-12-01 06:10:19
According to this post (https://github.com/mxcl/homebrew/pull/2953), the flag "--with-mpi" should enable the boost_mpi build for the related homebrew formula, so I am trying to install boost via homebrew like this:

    brew install boost --with-mpi

However, the actual boost mpi library is not being built and cannot be found. There is currently some work being done around this, according to: https://github.com/mxcl/homebrew/pull/15689 In summary, I can currently build boost, but it seems the "--with-mpi" flag is being ignored. Could someone please check if I should be able to build
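One direct way to tell whether libboost_mpi was actually produced is to compile and link a minimal Boost.MPI program; if the flag was ignored, the link step fails because the library is absent. A sketch, built with something like mpic++ test.cpp -lboost_mpi -lboost_serialization (the library names are the standard Boost ones; the exact include and library paths depend on the homebrew prefix):

    #include <boost/mpi/environment.hpp>
    #include <boost/mpi/communicator.hpp>
    #include <iostream>

    int main(int argc, char* argv[]) {
        boost::mpi::environment env(argc, argv);  // wraps MPI_Init/MPI_Finalize
        boost::mpi::communicator world;           // defaults to MPI_COMM_WORLD
        std::cout << "rank " << world.rank()
                  << " of " << world.size() << std::endl;
        return 0;
    }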