## Synopsis

```c
#include "petscsys.h"
PetscErrorCode PetscCommBuildTwoSided(MPI_Comm comm, PetscMPIInt count, MPI_Datatype dtype, PetscMPIInt nto, const PetscMPIInt *toranks, const void *todata, PetscMPIInt *nfrom, PetscMPIInt **fromranks, void *fromdata)
```

Collective
## Input Parameters

| comm | - communicator |
| count | - number of entries to send/receive (must match on all ranks) |
| dtype | - datatype to send/receive from each rank (must match on all ranks) |
| nto | - number of ranks to send data to |
| toranks | - ranks to send to (array of length nto) |
| todata | - data to send to each rank (packed) |
## Output Parameters

| nfrom | - number of ranks from which messages will be received |
| fromranks | - ranks from which messages will be received (array of length nfrom; the caller should free it with PetscFree()) |
| fromdata | - packed data from each rank, each with count entries of type dtype (array of length nfrom; the caller is responsible for freeing it with PetscFree()) |
## Options Database Key

| -build_twosided <allreduce|ibarrier|redscatter> | - algorithm used to set up the two-sided communication. The default is allreduce for communicators with <= 1024 ranks, otherwise ibarrier. |
## Notes

Basic data types as well as contiguous types are supported, but non-contiguous (e.g., strided) types are not.
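A minimal usage sketch (not part of this manual page) may help: in this hypothetical ring exchange, each rank sends its own rank number to its right neighbor, so every rank receives exactly one message from its left neighbor. The error-checking macros (PetscCall, PetscCallMPI) assume a recent PETSc release.

```c
#include <petscsys.h>

int main(int argc, char **argv)
{
  MPI_Comm     comm;
  PetscMPIInt  rank, size, toranks[1], todata[1];
  PetscMPIInt  nfrom, *fromranks, *fromdata;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  comm = PETSC_COMM_WORLD;
  PetscCallMPI(MPI_Comm_rank(comm, &rank));
  PetscCallMPI(MPI_Comm_size(comm, &size));

  toranks[0] = (rank + 1) % size; /* each rank sends to its right neighbor */
  todata[0]  = rank;              /* payload: the sender's rank */

  /* count = 1 and dtype = MPI_INT match on all ranks, as required */
  PetscCall(PetscCommBuildTwoSided(comm, 1, MPI_INT, 1, toranks, todata,
                                   &nfrom, &fromranks, &fromdata));

  /* In this ring pattern nfrom is 1: the message from the left neighbor */
  PetscCall(PetscPrintf(PETSC_COMM_SELF, "[%d] got %d from rank %d\n",
                        rank, fromdata[0], fromranks[0]));

  PetscCall(PetscFree(fromranks)); /* caller frees both output arrays */
  PetscCall(PetscFree(fromdata));
  PetscCall(PetscFinalize());
  return 0;
}
```

Note that fromranks and fromdata are allocated by the routine; the caller only declares the pointers and frees them afterwards with PetscFree().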
## References

| 1. | - Hoefler, Siebert and Lumsdaine, Scalable communication protocols for dynamic sparse data exchange, 2010. The MPI_Ibarrier implementation uses the algorithm from this paper. |