Provided by: python-mpi4py-doc_3.1.3-1build2_all 

NAME
mpi4py - MPI for Python
Author Lisandro Dalcin
Contact
dalcinl@gmail.com
Date March 16, 2022
Abstract
This document describes the MPI for Python package. MPI for Python provides Python bindings for the
Message Passing Interface (MPI) standard, allowing Python applications to exploit multiple processors on
workstations, clusters and supercomputers.
This package builds on the MPI specification and provides an object oriented interface resembling the
MPI-2 C++ bindings. It supports point-to-point (sends, receives) and collective (broadcasts, scatters,
gathers) communication of any picklable Python object, as well as efficient communication of Python
objects exposing the Python buffer interface (e.g. NumPy arrays and builtin bytes/array/memoryview
objects).
INTRODUCTION
In recent years, high performance computing has become an affordable resource to many more
researchers in the scientific community than ever before. The conjunction of quality open source software
and commodity hardware strongly influenced the now widespread popularity of Beowulf class clusters and
clusters of workstations.
Among many parallel computational models, message-passing has proven to be an effective one. This
paradigm is especially suited for (but not limited to) distributed memory architectures and is used in
today’s most demanding scientific and engineering applications related to modeling, simulation, design,
and signal processing. However, portable message-passing parallel programming used to be a nightmare in
the past because of the many incompatible options developers were faced with. Fortunately, this situation
changed for good after the MPI Forum released its standard specification.
High performance computing is traditionally associated with software development using compiled
languages. However, in typical application programs, only a small part of the code is time-critical
enough to require the efficiency of compiled languages. The rest of the code is generally related to
memory management, error handling, input/output, and user interaction, and those are usually the most
error prone and time-consuming lines of code to write and debug in the whole development process.
Interpreted high-level languages can be really advantageous for these kinds of tasks.
For implementing general-purpose numerical computations, MATLAB [1] is the dominant interpreted
programming language. On the open source side, Octave and Scilab are well known, freely distributed
software packages providing compatibility with the MATLAB language. In this work, we present MPI for
Python, a new package enabling applications to exploit multiple processors using standard MPI “look and
feel” in Python scripts.
[1] MATLAB is a registered trademark of The MathWorks, Inc.
What is MPI?
MPI, [mpi-using] [mpi-ref] the Message Passing Interface, is a standardized and portable message-passing
system designed to function on a wide variety of parallel computers. The standard defines the syntax and
semantics of library routines and allows users to write portable programs in the main scientific
programming languages (Fortran, C, or C++).
Since its release, the MPI specification [mpi-std1] [mpi-std2] has become the leading standard for
message-passing libraries for parallel computers. Implementations are available from vendors of
high-performance computers and from well known open source projects like MPICH [mpi-mpich] and Open MPI
[mpi-openmpi].
What is Python?
Python is a modern, easy to learn, powerful programming language. It has efficient high-level data
structures and a simple but effective approach to object-oriented programming with dynamic typing and
dynamic binding. It supports modules and packages, which encourages program modularity and code reuse.
Python’s elegant syntax, together with its interpreted nature, makes it an ideal language for scripting
and rapid application development in many areas on most platforms.
The Python interpreter and the extensive standard library are available in source or binary form without
charge for all major platforms, and can be freely distributed. It is easily extended with new functions
and data types implemented in C or C++. Python is also suitable as an extension language for customizable
applications.
Python is an ideal candidate for writing the higher-level parts of large-scale scientific applications
[Hinsen97] and driving simulations in parallel architectures [Beazley97] like clusters of PC’s or SMP’s.
Python codes are quickly developed, easily maintained, and can achieve a high degree of integration with
other libraries written in compiled languages.
Related Projects
As this work started and evolved, some ideas were borrowed from well known MPI and Python related open
source projects from the Internet.
• OOMPI
• It has no relation to Python, but is an excellent object oriented approach to MPI.
• It is a C++ class library specification layered on top of the C bindings that encapsulates MPI into a
functional class hierarchy.
• It provides a flexible and intuitive interface by adding some abstractions, like Ports and Messages,
which enrich and simplify the syntax.
• Pypar
• Its interface is rather minimal. There is no support for communicators or process topologies.
• It does not require the Python interpreter to be modified or recompiled, but does not permit
interactive parallel runs.
• General (picklable) Python objects of any type can be communicated. There is good support for numeric
arrays; practically full MPI bandwidth can be achieved.
• pyMPI
• It rebuilds the Python interpreter providing a built-in module for message passing. It does permit
interactive parallel runs, which are useful for learning and debugging.
• It provides an interface suitable for basic parallel programming. Support for defining new
communicators or process topologies is incomplete.
• General (picklable) Python objects can be messaged between processors. There is no support for
numeric arrays.
• Scientific Python
• It provides a collection of Python modules that are useful for scientific computing.
• There is an interface to MPI and BSP (Bulk Synchronous Parallel programming).
• The interface is simple but incomplete and does not resemble the MPI specification. There is support
for numeric arrays.
Additionally, we would like to mention some available tools for scientific computing and software
development with Python.
• NumPy is a package that provides array manipulation and computational capabilities similar to those
found in IDL, MATLAB, or Octave. Using NumPy, it is possible to write many efficient numerical data
processing applications directly in Python without using any C, C++ or Fortran code.
• SciPy is an open source library of scientific tools for Python, gathering a variety of high level
science and engineering modules together as a single package. It includes modules for graphics and
plotting, optimization, integration, special functions, signal and image processing, genetic
algorithms, ODE solvers, and others.
• Cython is a language that makes writing C extensions for the Python language as easy as Python itself.
The Cython language is very close to the Python language, but Cython additionally supports calling C
functions and declaring C types on variables and class attributes. This allows the compiler to generate
very efficient C code from Cython code. This makes Cython the ideal language for wrapping external
C libraries, and for fast C modules that speed up the execution of Python code.
• SWIG is a software development tool that connects programs written in C and C++ with a variety of
high-level programming languages like Perl, Tcl/Tk, Ruby and Python. Issuing header files to SWIG is
the simplest approach to interfacing C/C++ libraries from a Python module.
[mpi-std1]
MPI Forum. MPI: A Message Passing Interface Standard. International Journal of Supercomputer
Applications, volume 8, number 3-4, pages 159-416, 1994.
[mpi-std2]
MPI Forum. MPI: A Message Passing Interface Standard. High Performance Computing Applications,
volume 12, number 1-2, pages 1-299, 1998.
[mpi-using]
William Gropp, Ewing Lusk, and Anthony Skjellum. Using MPI: portable parallel programming with the
message-passing interface. MIT Press, 1994.
[mpi-ref]
Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra. MPI - The Complete
Reference, volume 1, The MPI Core. MIT Press, 2nd. edition, 1998.
[mpi-mpich]
W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A high-performance, portable implementation of the MPI
message passing interface standard. Parallel Computing, 22(6):789-828, September 1996.
[mpi-openmpi]
Edgar Gabriel, Graham E. Fagg, George Bosilca, Thara Angskun, Jack J. Dongarra, Jeffrey M. Squyres,
Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, Ralph H. Castain, David J.
Daniel, Richard L. Graham, and Timothy S. Woodall. Open MPI: Goals, Concept, and Design of a Next
Generation MPI Implementation. In Proceedings, 11th European PVM/MPI Users’ Group Meeting, Budapest,
Hungary, September 2004.
[Hinsen97]
Konrad Hinsen. The Molecular Modelling Toolkit: a case study of a large scientific application in
Python. In Proceedings of the 6th International Python Conference, pages 29-35, San Jose, Ca.,
October 1997.
[Beazley97]
David M. Beazley and Peter S. Lomdahl. Feeding a large-scale physics application to Python. In
Proceedings of the 6th International Python Conference, pages 21-29, San Jose, Ca., October 1997.
OVERVIEW
MPI for Python provides an object oriented approach to message passing which is grounded in the standard
MPI-2 C++ bindings. The interface was designed with a focus on translating the syntax and semantics of
the standard MPI-2 C++ bindings to Python. Any user of the standard C/C++ MPI bindings should be able to
use this module without needing to learn a new interface.
Communicating Python Objects and Array Data
The Python standard library supports different mechanisms for data persistence. Many of them rely on disk
storage, but pickling and marshaling can also work with memory buffers.
The pickle module provides user-extensible facilities to serialize general Python objects using ASCII or
binary formats. The marshal module provides facilities to serialize built-in Python objects using a
binary format specific to Python, but independent of machine architecture issues.
MPI for Python can communicate any built-in or user-defined Python object taking advantage of the
features provided by the pickle module. These facilities are routinely used to build binary
representations of objects to communicate (at sending processes), and to restore them (at receiving
processes).
Although simple and general, the serialization approach (i.e., pickling and unpickling) previously
discussed imposes important overheads in memory as well as processor usage, especially in the scenario of
objects with large memory footprints being communicated. Pickling general Python objects, ranging from
primitive or container built-in types to user-defined classes, necessarily requires computer resources.
Processing is also needed for dispatching the appropriate serialization method (that depends on the type
of the object) and doing the actual packing. Additional memory is always needed, and if its total amount
is not known a priori, many reallocations can occur. Indeed, in the case of large numeric arrays, this
is certainly unacceptable and precludes communication of objects occupying half or more of the available
memory resources.
MPI for Python supports direct communication of any object exporting the single-segment buffer interface.
This interface is a standard Python mechanism provided by some types (e.g., strings and numeric arrays),
allowing access in the C side to a contiguous memory buffer (i.e., address and length) containing the
relevant data. This feature, in conjunction with the capability of constructing user-defined MPI
datatypes describing complicated memory layouts, enables the implementation of many algorithms involving
multidimensional numeric arrays (e.g., image processing, fast Fourier transforms, finite difference
schemes on structured Cartesian grids) directly in Python, with negligible overhead, and almost as fast
as compiled Fortran, C, or C++ codes.
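As an illustration of the two communication styles just described, the following minimal sketch (an
illustrative example assuming two processes and NumPy available, expanded upon in the tutorial below)
sends a generic Python object with the pickle-based interface and a NumPy array with the buffer-based
interface:
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({'any': 'object'}, dest=1)    # pickle-based, general objects
    data = numpy.arange(10, dtype='d')
    comm.Send([data, MPI.DOUBLE], dest=1)   # buffer-based, near C speed
elif rank == 1:
    obj = comm.recv(source=0)
    data = numpy.empty(10, dtype='d')
    comm.Recv([data, MPI.DOUBLE], source=0)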
Communicators
In MPI for Python, Comm is the base class of communicators. The Intracomm and Intercomm classes are
subclasses of the Comm class. The Comm.Is_inter method (and Comm.Is_intra, provided for convenience but
not part of the MPI specification) is defined for communicator objects and can be used to determine the
particular communicator class.
Two predefined intracommunicator instances are available: COMM_SELF and COMM_WORLD. From them, new
communicators can be created as needed.
The number of processes in a communicator and the calling process rank can be respectively obtained with
methods Comm.Get_size and Comm.Get_rank. The associated process group can be retrieved from a
communicator by calling the Comm.Get_group method, which returns an instance of the Group class. Set
operations with Group objects like Group.Union, Group.Intersection and Group.Difference are fully
supported, as well as the creation of new communicators from these groups using Comm.Create and
Comm.Create_group.
New communicator instances can be obtained with the Comm.Clone, Comm.Dup and Comm.Split methods, as well
as with the Intracomm.Create_intercomm and Intercomm.Merge methods.
Virtual topologies (Cartcomm, Graphcomm and Distgraphcomm classes, which are specializations of the
Intracomm class) are fully supported. New instances can be obtained from intracommunicator instances with
factory methods Intracomm.Create_cart and Intracomm.Create_graph.
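The following small sketch exercises the communicator queries and constructors just described; it is an
illustrative example (not part of the original text) that splits COMM_WORLD into subcommunicators of
even and odd ranks:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()      # number of processes in the communicator
rank = comm.Get_rank()      # rank of the calling process
group = comm.Get_group()    # associated Group instance

newcomm = comm.Split(color=rank % 2, key=rank)  # even/odd subcommunicators
print(rank, 'has rank', newcomm.Get_rank(), 'in its subcommunicator')

newcomm.Free()
group.Free()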
Point-to-Point Communications
Point to point communication is a fundamental capability of message passing systems. This mechanism
enables the transmission of data between a pair of processes, one side sending, the other receiving.
MPI provides a set of send and receive functions allowing the communication of typed data with an
associated tag. The type information enables the conversion of data representation from one architecture
to another in the case of heterogeneous computing environments; additionally, it allows the
representation of non-contiguous data layouts and user-defined datatypes, thus avoiding the overhead of
(otherwise unavoidable) packing/unpacking operations. The tag information allows selectivity of messages
at the receiving end.
Blocking Communications
MPI provides basic send and receive functions that are blocking. These functions block the caller until
the data buffers involved in the communication can be safely reused by the application program.
In MPI for Python, the Comm.Send, Comm.Recv and Comm.Sendrecv methods of communicator objects provide
support for blocking point-to-point communications within Intracomm and Intercomm instances. These
methods can communicate memory buffers. The variants Comm.send, Comm.recv and Comm.sendrecv can
communicate general Python objects.
Nonblocking Communications
On many systems, performance can be significantly increased by overlapping communication and computation.
This is particularly true on systems where communication can be executed autonomously by an intelligent,
dedicated communication controller.
MPI provides nonblocking send and receive functions. They allow the possible overlap of communication and
computation. Nonblocking communication always comes in two parts: posting functions, which begin the
requested operation; and test-for-completion functions, which allow the user to discover whether the
requested operation has completed.
In MPI for Python, the Comm.Isend and Comm.Irecv methods initiate send and receive operations,
respectively. These methods return a Request instance, uniquely identifying the started operation. Its
completion can be managed using the Request.Test, Request.Wait and Request.Cancel methods. The management
of Request objects and associated memory buffers involved in communication requires a careful, rather
low-level coordination. Users must ensure that objects exposing their memory buffers are not accessed at
the Python level while they are involved in nonblocking message-passing operations.
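A minimal sketch of this request workflow, assuming two processes and NumPy available; note that the
buffers must not be touched between posting and completion:
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = numpy.zeros(5, dtype='d')

if rank == 0:
    buf[:] = 3.14
    req = comm.Isend([buf, MPI.DOUBLE], dest=1, tag=7)    # post the send
    req.Wait()                                            # complete it
elif rank == 1:
    req = comm.Irecv([buf, MPI.DOUBLE], source=0, tag=7)  # post the receive
    req.Wait()                                            # buf is now valid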
Persistent Communications
Often a communication with the same argument list is repeatedly executed within an inner loop. In such
cases, communication can be further optimized by using persistent communication, a particular case of
nonblocking communication allowing the reduction of the overhead between processes and communication
controllers. Furthermore, this kind of optimization can also alleviate the extra call overheads
associated with interpreted, dynamic languages like Python.
In MPI for Python, the Comm.Send_init and Comm.Recv_init methods create persistent requests for a send
and receive operation, respectively. These methods return an instance of the Prequest class, a subclass
of the Request class. The actual communication can be effectively started using the Prequest.Start
method, and its completion can be managed as previously described.
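A hedged sketch of the persistent request workflow, assuming two processes, NumPy available, and a
fixed-size buffer reused across iterations:
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = numpy.zeros(10, dtype='d')

if rank == 0:
    req = comm.Send_init([buf, MPI.DOUBLE], dest=1, tag=0)
    for step in range(5):
        buf[:] = step          # refresh the outgoing data
        req.Start()            # start the persistent send
        req.Wait()             # buffer reusable after completion
    req.Free()
elif rank == 1:
    req = comm.Recv_init([buf, MPI.DOUBLE], source=0, tag=0)
    for step in range(5):
        req.Start()            # start the persistent receive
        req.Wait()             # buf now holds the received data
    req.Free()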
Collective Communications
Collective communications allow the transmittal of data between multiple processes of a group
simultaneously. The syntax and semantics of collective functions is consistent with point-to-point
communication. Collective functions communicate typed data, but messages are not paired with an
associated tag; selectivity of messages is implied in the calling order. Additionally, collective
functions come in blocking versions only.
The most commonly used collective communication operations are the following.
• Barrier synchronization across all group members.
• Global communication functions
• Broadcast data from one member to all members of a group.
• Gather data from all members to one member of a group.
• Scatter data from one member to all members of a group.
• Global reduction operations such as sum, maximum, minimum, etc.
In MPI for Python, the Comm.Bcast, Comm.Scatter, Comm.Gather, Comm.Allgather, Comm.Alltoall methods
provide support for collective communications of memory buffers. The lower-case variants Comm.bcast,
Comm.scatter, Comm.gather, Comm.allgather and Comm.alltoall can communicate general Python objects. The
vector variants (which can communicate different amounts of data to each process) Comm.Scatterv,
Comm.Gatherv, Comm.Allgatherv, Comm.Alltoallv and Comm.Alltoallw are also supported; they can only
communicate objects exposing memory buffers.
Global reduction operations on memory buffers are accessible through the Comm.Reduce, Comm.Reduce_scatter,
Comm.Allreduce, Intracomm.Scan and Intracomm.Exscan methods. The lower-case variants Comm.reduce,
Comm.allreduce, Intracomm.scan and Intracomm.exscan can communicate general Python objects; however, the
actual required reduction computations are performed sequentially at some process. All the predefined
(i.e., SUM, PROD, MAX, etc.) reduction operations can be applied.
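For instance, a global sum over general Python objects can be written as follows (a minimal sketch,
runnable on any number of processes):
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

total = comm.allreduce(rank, op=MPI.SUM)   # every process gets the sum
assert total == sum(range(comm.Get_size()))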
Support for GPU-aware MPI
Several MPI implementations, including Open MPI and MVAPICH, support passing GPU pointers to MPI calls to
avoid explicit data movement between the host and the device. On the Python side, GPU arrays have been
implemented by many libraries that need GPU computation, such as CuPy, Numba, PyTorch, and PyArrow. In
order to increase library interoperability, two kinds of zero-copy data exchange protocols are defined
and agreed upon: DLPack and CUDA Array Interface. For example, a CuPy array can be passed to a Numba
CUDA-jit kernel.
MPI for Python provides experimental support for GPU-aware MPI. This feature requires:
1. mpi4py to be built against a GPU-aware MPI library.
2. The Python GPU arrays to be compliant with either of the protocols.
See the tutorial section for further information. We note that
• Whether or not an MPI call can work for GPU arrays depends on the underlying MPI implementation, not on
mpi4py.
• This support is currently experimental and subject to change in the future.
Dynamic Process Management
In the context of the MPI-1 specification, a parallel application is static; that is, no processes can be
added to or deleted from a running application after it has been started. Fortunately, this limitation
was addressed in MPI-2. The new specification added a process management model providing a basic
interface between an application and external resources and process managers.
This MPI-2 extension can be really useful, especially for sequential applications built on top of
parallel modules, or parallel applications with a client/server model. The MPI-2 process model provides a
mechanism to create new processes and establish communication between them and the existing MPI
application. It also provides mechanisms to establish communication between two existing MPI
applications, even when one did not start the other.
In MPI for Python, new independent process groups can be created by calling the Intracomm.Spawn method
within an intracommunicator. This call returns a new intercommunicator (i.e., an Intercomm instance) at
the parent process group. The child process group can retrieve the matching intercommunicator by calling
the Comm.Get_parent class method. At each side, the new intercommunicator can be used to perform point to
point and collective communications between the parent and child groups of processes.
Alternatively, disjoint groups of processes can establish communication using a client/server approach.
Any server application must first call the Open_port function to open a port and the Publish_name
function to publish a provided service, and next call the Intracomm.Accept method. Any client
application can first find a published service by calling the Lookup_name function, which returns the
port where a server can be contacted; and next call the Intracomm.Connect method. Both Intracomm.Accept
and Intracomm.Connect methods return an Intercomm instance. When connection between client/server
processes is no longer needed, all of them must cooperatively call the Comm.Disconnect method.
Additionally, server applications should release resources by calling the Unpublish_name and Close_port
functions.
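The following is a hedged sketch of this client/server workflow; the service name 'compute-service' is
hypothetical, the server and client are assumed to be separate programs launched independently, and the
default info and root arguments of the calls are assumed:
# server side
from mpi4py import MPI
port = MPI.Open_port()
MPI.Publish_name('compute-service', port)
icomm = MPI.COMM_WORLD.Accept(port)        # returns an Intercomm
# ... point-to-point/collective calls over icomm ...
icomm.Disconnect()
MPI.Unpublish_name('compute-service', port)
MPI.Close_port(port)

# client side
from mpi4py import MPI
port = MPI.Lookup_name('compute-service')
icomm = MPI.COMM_WORLD.Connect(port)       # returns an Intercomm
# ... matching communication calls over icomm ...
icomm.Disconnect()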
One-Sided Communications
One-sided communication (also called Remote Memory Access, RMA) supplements the traditional two-sided,
send/receive based MPI communication model with a one-sided, put/get based interface. One-sided
communication can take advantage of the capabilities of highly specialized network hardware.
Additionally, this extension lowers latency and software overhead in applications written using a
shared-memory-like paradigm.
The MPI specification revolves around the use of objects called windows; they intuitively specify regions
of a process’s memory that have been made available for remote read and write operations. The published
memory blocks can be accessed through three functions for put (remote send), get (remote receive), and
accumulate (remote update or reduction) data items. A much larger number of functions support different
synchronization styles; the semantics of these synchronization operations are fairly complex.
In MPI for Python, one-sided operations are available by using instances of the Win class. New window
objects are created by calling the Win.Create method at all processes within a communicator and
specifying a memory buffer. When a window instance is no longer needed, the Win.Free method should be
called.
The three one-sided MPI operations for remote write, read and reduction are available through calling the
methods Win.Put, Win.Get, and Win.Accumulate respectively within a Win instance. These methods need an
integer rank identifying the target process and an integer offset relative to the base address of the remote
memory block being accessed.
The one-sided operations for remote read, write, and reduction are implicitly nonblocking, and must be
synchronized by using one of two primary modes. Active target synchronization requires the origin process
to call the Win.Start and Win.Complete methods, and the target process to cooperate by calling the
Win.Post and Win.Wait methods. There is also a collective variant provided by the Win.Fence method, as
sketched below.
Passive target synchronization is more lenient: only the origin process calls the Win.Lock and Win.Unlock
methods. Locks are used to protect remote accesses to the locked remote window and to protect local
load/store accesses to a locked local window.
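As referenced above, here is a minimal Win.Fence sketch (an illustrative example assuming NumPy
available): every nonzero rank accumulates its rank into a counter exposed by process 0:
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

mem = numpy.zeros(1, dtype='i')
win = MPI.Win.Create(mem, comm=comm)   # expose local memory in a window

win.Fence()                            # open an RMA access epoch
if rank != 0:
    value = numpy.array([rank], dtype='i')
    win.Accumulate([value, MPI.INT], target_rank=0, op=MPI.SUM)
win.Fence()                            # close the epoch; updates visible

if rank == 0:
    print('sum of nonzero ranks:', mem[0])
win.Free()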
Parallel Input/Output
The POSIX standard provides a model of a widely portable file system. However, the optimization needed
for parallel input/output cannot be achieved with this generic interface. In order to ensure efficiency
and scalability, the underlying parallel input/output system must provide a high-level interface
supporting partitioning of file data among processes and a collective interface supporting complete
transfers of global data structures between process memories and files. Additionally, further
efficiencies can be gained via support for asynchronous input/output, strided accesses to data, and
control over physical file layout on storage devices. This scenario motivated the inclusion in the MPI-2
standard of a custom interface in order to support more elaborate parallel input/output operations.
The MPI specification for parallel input/output revolves around the use of objects called files. As defined
by MPI, files are not just contiguous byte streams. Instead, they are regarded as ordered collections of
typed data items. MPI supports sequential or random access to any integral set of these items.
Furthermore, files are opened collectively by a group of processes.
The common patterns for accessing a shared file (broadcast, scatter, gather, reduction) are expressed by
using user-defined datatypes. Compared to the communication patterns of point-to-point and collective
communications, this approach has the advantage of added flexibility and expressiveness. Data access
operations (read and write) are defined for different kinds of positioning (using explicit offsets,
individual file pointers, and shared file pointers), coordination (non-collective and collective), and
synchronism (blocking, nonblocking, and split collective with begin/end phases).
In MPI for Python, all MPI input/output operations are performed through instances of the File class.
File handles are obtained by calling the File.Open method at all processes within a communicator and
providing a file name and the intended access mode. After use, they must be closed by calling the
File.Close method. Files can even be deleted by calling the File.Delete method.
After creation, files are typically associated with a per-process view. The view defines the current set
of data visible and accessible from an open file as an ordered set of elementary datatypes. This data
layout can be set and queried with the File.Set_view and File.Get_view methods respectively.
Actual input/output operations are achieved by many methods combining read and write calls with different
behavior regarding positioning, coordination, and synchronism. Summing up, MPI for Python provides the
thirty (30) methods defined in MPI-2 for reading from or writing to files using explicit offsets or file
pointers (individual or shared), in blocking or nonblocking and collective or noncollective versions.
Environmental Management
Initialization and Exit
Module functions Init or Init_thread and Finalize provide MPI initialization and finalization
respectively. Module functions Is_initialized and Is_finalized provide the respective tests for
initialization and finalization.
NOTE:
MPI_Init() or MPI_Init_thread() is actually called when you import the MPI module from the mpi4py
package, but only if MPI is not already initialized. In such a case, calling Init or Init_thread from
Python is expected to generate an MPI error, and in turn an exception will be raised.
NOTE:
MPI_Finalize() is registered (by using Python C/API function Py_AtExit()) for being automatically
called when Python processes exit, but only if mpi4py actually initialized MPI. Therefore, there is no
need to call Finalize from Python to ensure MPI finalization.
Implementation Information
• The MPI version number can be retrieved from module function Get_version. It returns a two-integer
tuple (version, subversion).
• The Get_processor_name function can be used to access the processor name.
• The values of predefined attributes attached to the world communicator can be obtained by calling the
Comm.Get_attr method within the COMM_WORLD instance.
Timers
MPI timer functionalities are available through the Wtime and Wtick functions.
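A small sketch combining the environmental queries and timers just described:
from mpi4py import MPI

version, subversion = MPI.Get_version()
print('MPI %d.%d on %s' % (version, subversion, MPI.Get_processor_name()))

t0 = MPI.Wtime()
# ... code to be timed ...
elapsed = MPI.Wtime() - t0
print('elapsed:', elapsed, 'resolution:', MPI.Wtick())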
Error Handling
In order to facilitate handle sharing with other Python modules interfacing MPI-based parallel libraries,
the predefined MPI error handlers ERRORS_RETURN and ERRORS_ARE_FATAL can be assigned to and retrieved
from communicators using methods Comm.Set_errhandler and Comm.Get_errhandler, and similarly for windows
and files.
When the predefined error handler ERRORS_RETURN is set, errors returned from MPI calls within Python code
will raise an instance of the exception class Exception, which is a subclass of the standard Python
exception RuntimeError.
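A minimal sketch of catching an MPI error as a Python exception; the invalid destination rank is just an
illustrative way to force an error:
from mpi4py import MPI

comm = MPI.COMM_WORLD
try:
    comm.send(None, dest=comm.Get_size())   # invalid rank: raises
except MPI.Exception as err:
    print('MPI error:', err.Get_error_string())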
NOTE:
After import, mpi4py overrides the default MPI rules governing inheritance of error handlers. The
ERRORS_RETURN error handler is set in the predefined COMM_SELF and COMM_WORLD communicators, as well
as any new Comm, Win, or File instance created through mpi4py. If you ever pass such handles to
C/C++/Fortran library code, it is recommended to set the ERRORS_ARE_FATAL error handler on them to
ensure MPI errors do not pass silently.
WARNING:
Importing with from mpi4py.MPI import * will cause a name clash with the standard Python
Exception base class.
TUTORIAL
WARNING:
Under construction. Contributions very welcome!
TIP:
Rolf Rabenseifner at HLRS developed a comprehensive MPI-3.1/4.0 course with slides and a large set of
exercises including solutions. This material is available online for self-study. The slides and
exercises show the C, Fortran, and Python (mpi4py) interfaces. For performance reasons, most Python
exercises use NumPy arrays and communication routines involving buffer-like objects.
TIP:
Victor Eijkhout at TACC authored the book Parallel Programming for Science and Engineering. This book
is available online in PDF and HTML formats. The book covers parallel programming with MPI and OpenMP
in C/C++ and Fortran, and MPI in Python using mpi4py.
MPI for Python supports convenient, pickle-based communication of generic Python objects as well as fast,
near C-speed, direct array data communication of buffer-provider objects (e.g., NumPy arrays).
• Communication of generic Python objects
You have to use methods with all-lowercase names, like Comm.send, Comm.recv, Comm.bcast, Comm.scatter,
Comm.gather. An object to be sent is passed as a parameter to the communication call, and the received
object is simply the return value.
The Comm.isend and Comm.irecv methods return Request instances; completion of these methods can be
managed using the Request.test and Request.wait methods.
The Comm.recv and Comm.irecv methods may be passed a buffer object that can be repeatedly used to
receive messages avoiding internal memory allocation. This buffer must be sufficiently large to
accommodate the transmitted messages; hence, any buffer passed to Comm.recv or Comm.irecv must be at
least as long as the pickled data transmitted to the receiver.
Collective calls like Comm.scatter, Comm.gather, Comm.allgather, Comm.alltoall expect a single value or
a sequence of Comm.size elements at the root or all processes. They return a single value, a list of
Comm.size elements, or None.
NOTE:
MPI for Python uses the highest protocol version available in the Python runtime (see the
HIGHEST_PROTOCOL constant in the pickle module). The default protocol can be changed at import time
by setting the MPI4PY_PICKLE_PROTOCOL environment variable, or at runtime by assigning a different
value to the PROTOCOL attribute of the pickle object within the MPI module.
• Communication of buffer-like objects
You have to use method names starting with an upper-case letter, like Comm.Send, Comm.Recv, Comm.Bcast,
Comm.Scatter, Comm.Gather.
In general, buffer arguments to these calls must be explicitly specified by using a 2/3-list/tuple like
[data, MPI.DOUBLE], or [data, count, MPI.DOUBLE] (the former one uses the byte-size of data and the
extent of the MPI datatype to define count).
For vector collective communication operations like Comm.Scatterv and Comm.Gatherv, buffer arguments
are specified as [data, count, displ, datatype], where count and displ are sequences of integral
values.
Automatic MPI datatype discovery for NumPy/GPU arrays and PEP-3118 buffers is supported, but limited to
basic C types (all C/C99-native signed/unsigned integral types and single/double precision real/complex
floating types) and availability of matching datatypes in the underlying MPI implementation. In this
case, the buffer-provider object can be passed directly as a buffer argument, the count and MPI
datatype will be inferred.
If mpi4py is built against a GPU-aware MPI implementation, GPU arrays can be passed to upper-case
methods as long as they have either the __dlpack__ and __dlpack_device__ methods or the
__cuda_array_interface__ attribute, compliant with the respective standard specifications.
Moreover, only C-contiguous or Fortran-contiguous GPU arrays are supported. It is important to note
that GPU buffers must be fully ready before any MPI routines operate on them to avoid race conditions.
This can be ensured by using the synchronization API of your array library. mpi4py does not have access
to any GPU-specific functionality and thus cannot perform this operation automatically for users.
Running Python scripts with MPI
Most MPI programs can be run with the command mpiexec. In practice, running Python programs looks like:
$ mpiexec -n 4 python script.py
to run the program with 4 processors.
Point-to-Point Communication
• Python objects (pickle under the hood):
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
• Python objects with non-blocking communication:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    req = comm.isend(data, dest=1, tag=11)
    req.wait()
elif rank == 1:
    req = comm.irecv(source=0, tag=11)
    data = req.wait()
• NumPy arrays (the fast way!):
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# passing MPI datatypes explicitly
if rank == 0:
    data = numpy.arange(1000, dtype='i')
    comm.Send([data, MPI.INT], dest=1, tag=77)
elif rank == 1:
    data = numpy.empty(1000, dtype='i')
    comm.Recv([data, MPI.INT], source=0, tag=77)

# automatic MPI datatype discovery
if rank == 0:
    data = numpy.arange(100, dtype=numpy.float64)
    comm.Send(data, dest=1, tag=13)
elif rank == 1:
    data = numpy.empty(100, dtype=numpy.float64)
    comm.Recv(data, source=0, tag=13)
Collective Communication
• Broadcasting a Python dictionary:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'key1' : [7, 2.72, 2+3j],
            'key2' : ('abc', 'xyz')}
else:
    data = None
data = comm.bcast(data, root=0)
• Scattering Python objects:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    data = [(i+1)**2 for i in range(size)]
else:
    data = None
data = comm.scatter(data, root=0)
assert data == (rank+1)**2
• Gathering Python objects:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

data = (rank+1)**2
data = comm.gather(data, root=0)
if rank == 0:
    for i in range(size):
        assert data[i] == (i+1)**2
else:
    assert data is None
• Broadcasting a NumPy array:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = np.arange(100, dtype='i')
else:
    data = np.empty(100, dtype='i')
comm.Bcast(data, root=0)
for i in range(100):
    assert data[i] == i
• Scattering NumPy arrays:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = None
if rank == 0:
    sendbuf = np.empty([size, 100], dtype='i')
    sendbuf.T[:,:] = range(size)
recvbuf = np.empty(100, dtype='i')
comm.Scatter(sendbuf, recvbuf, root=0)
assert np.allclose(recvbuf, rank)
• Gathering NumPy arrays:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = np.zeros(100, dtype='i') + rank
recvbuf = None
if rank == 0:
    recvbuf = np.empty([size, 100], dtype='i')
comm.Gather(sendbuf, recvbuf, root=0)
if rank == 0:
    for i in range(size):
        assert np.allclose(recvbuf[i,:], i)
• Parallel matrix-vector product:
from mpi4py import MPI
import numpy

def matvec(comm, A, x):
    m = A.shape[0]  # local rows
    p = comm.Get_size()
    xg = numpy.zeros(m*p, dtype='d')
    comm.Allgather([x,  MPI.DOUBLE],
                   [xg, MPI.DOUBLE])
    y = numpy.dot(A, xg)
    return y
MPI-IO
• Collective I/O with NumPy arrays:
from mpi4py import MPI
import numpy as np

amode = MPI.MODE_WRONLY|MPI.MODE_CREATE
comm = MPI.COMM_WORLD
fh = MPI.File.Open(comm, "./datafile.contig", amode)

buffer = np.empty(10, dtype=np.intc)
buffer[:] = comm.Get_rank()

offset = comm.Get_rank()*buffer.nbytes
fh.Write_at_all(offset, buffer)

fh.Close()
• Non-contiguous Collective I/O with NumPy arrays and datatypes:
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
amode = MPI.MODE_WRONLY|MPI.MODE_CREATE
fh = MPI.File.Open(comm, "./datafile.noncontig", amode)
item_count = 10
buffer = np.empty(item_count, dtype='i')
buffer[:] = rank
filetype = MPI.INT.Create_vector(item_count, 1, size)
filetype.Commit()
displacement = MPI.INT.Get_size()*rank
fh.Set_view(displacement, filetype=filetype)
fh.Write_all(buffer)
filetype.Free()
fh.Close()
Dynamic Process Management
• Compute Pi - Master (or parent, or client) side:
#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys

comm = MPI.COMM_SELF.Spawn(sys.executable,
                           args=['cpi.py'],
                           maxprocs=5)

N = numpy.array(100, 'i')
comm.Bcast([N, MPI.INT], root=MPI.ROOT)
PI = numpy.array(0.0, 'd')
comm.Reduce(None, [PI, MPI.DOUBLE],
            op=MPI.SUM, root=MPI.ROOT)
print(PI)

comm.Disconnect()
• Compute Pi - Worker (or child, or server) side:
#!/usr/bin/env python
from mpi4py import MPI
import numpy

comm = MPI.Comm.Get_parent()
size = comm.Get_size()
rank = comm.Get_rank()

N = numpy.array(0, dtype='i')
comm.Bcast([N, MPI.INT], root=0)
h = 1.0 / N; s = 0.0
for i in range(rank, N, size):
    x = h * (i + 0.5)
    s += 4.0 / (1.0 + x**2)
PI = numpy.array(s * h, dtype='d')
comm.Reduce([PI, MPI.DOUBLE], None,
            op=MPI.SUM, root=0)

comm.Disconnect()
CUDA-aware MPI + Python GPU arrays
• Reduce-to-all CuPy arrays:
from mpi4py import MPI
import cupy as cp
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
sendbuf = cp.arange(10, dtype='i')
recvbuf = cp.empty_like(sendbuf)
assert hasattr(sendbuf, '__cuda_array_interface__')
assert hasattr(recvbuf, '__cuda_array_interface__')
cp.cuda.get_current_stream().synchronize()
comm.Allreduce(sendbuf, recvbuf)
assert cp.allclose(recvbuf, sendbuf*size)
One-Sided Communications
• Read from (write to) the entire RMA window:
import numpy as np
from mpi4py import MPI
from mpi4py.util import dtlib

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

datatype = MPI.FLOAT
np_dtype = dtlib.to_numpy_dtype(datatype)
itemsize = datatype.Get_size()

N = 10
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(win_size, comm=comm)

buf = np.empty(N, dtype=np_dtype)
if rank == 0:
    buf.fill(42)
    win.Lock(rank=0)
    win.Put(buf, target_rank=0)
    win.Unlock(rank=0)
    comm.Barrier()
else:
    comm.Barrier()
    win.Lock(rank=0)
    win.Get(buf, target_rank=0)
    win.Unlock(rank=0)
    assert np.all(buf == 42)
• Accessing a part of the RMA window using the target argument, which is defined as (offset, count,
datatype):
import numpy as np
from mpi4py import MPI
from mpi4py.util import dtlib

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

datatype = MPI.FLOAT
np_dtype = dtlib.to_numpy_dtype(datatype)
itemsize = datatype.Get_size()

N = comm.Get_size() + 1
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(
    size=win_size,
    disp_unit=itemsize,
    comm=comm,
)
if rank == 0:
    mem = np.frombuffer(win, dtype=np_dtype)
    mem[:] = np.arange(len(mem), dtype=np_dtype)
comm.Barrier()

buf = np.zeros(3, dtype=np_dtype)
target = (rank, 2, datatype)
win.Lock(rank=0)
win.Get(buf, target_rank=0, target=target)
win.Unlock(rank=0)
assert np.all(buf == [rank, rank+1, 0])
Wrapping with SWIG
• C source:
/* file: helloworld.c */
#include <stdio.h>
#include <mpi.h>

void sayhello(MPI_Comm comm)
{
  int size, rank;
  MPI_Comm_size(comm, &size);
  MPI_Comm_rank(comm, &rank);
  printf("Hello, World! "
         "I am process %d of %d.\n",
         rank, size);
}
• SWIG interface file:
// file: helloworld.i
%module helloworld
%{
#include <mpi.h>
#include "helloworld.c"
%}

%include mpi4py/mpi4py.i
%mpi4py_typemap(Comm, MPI_Comm);

void sayhello(MPI_Comm comm);
• Try it in the Python prompt:
>>> from mpi4py import MPI
>>> import helloworld
>>> helloworld.sayhello(MPI.COMM_WORLD)
Hello, World! I am process 0 of 1.
Wrapping with F2Py
• Fortran 90 source:
! file: helloworld.f90
subroutine sayhello(comm)
  use mpi
  implicit none
  integer :: comm, rank, size, ierr
  call MPI_Comm_size(comm, size, ierr)
  call MPI_Comm_rank(comm, rank, ierr)
  print *, 'Hello, World! I am process ', rank, ' of ', size, '.'
end subroutine sayhello
• Compiling example using f2py
$ f2py -c --f90exec=mpif90 helloworld.f90 -m helloworld
• Try it in the Python prompt:
>>> from mpi4py import MPI
>>> import helloworld
>>> fcomm = MPI.COMM_WORLD.py2f()
>>> helloworld.sayhello(fcomm)
Hello, World! I am process 0 of 1.
MPI4PY
This is the MPI for Python package.
The Message Passing Interface (MPI) is a standardized and portable message-passing system designed to
function on a wide variety of parallel computers. The MPI standard defines the syntax and semantics of
library routines and allows users to write portable programs in the main scientific programming languages
(Fortran, C, or C++). Since its release, the MPI specification has become the leading standard for
message-passing libraries for parallel computers.
MPI for Python provides MPI bindings for the Python programming language, allowing any Python program to
exploit multiple processors. This package builds on the MPI specification and provides an object oriented
interface which closely follows the MPI-2 C++ bindings.
Runtime configuration options
mpi4py.rc
This object has attributes exposing runtime configuration options that become effective at import
time of the MPI module.
Attributes Summary
┌──────────────┬───────────────────────────────────────┐
│ initialize │ Automatic MPI initialization at │
│ │ import │
├──────────────┼───────────────────────────────────────┤
│ threads │ Request initialization with thread │
│ │ support │
├──────────────┼───────────────────────────────────────┤
│ thread_level │ Level of thread support to request │
├──────────────┼───────────────────────────────────────┤
│ finalize │ Automatic MPI finalization at exit │
├──────────────┼───────────────────────────────────────┤
│ fast_reduce │ Use tree-based reductions for objects │
├──────────────┼───────────────────────────────────────┤
│ recv_mprobe │ Use matched probes to receive objects │
├──────────────┼───────────────────────────────────────┤
│ errors │ Error handling policy │
└──────────────┴───────────────────────────────────────┘
Attributes Documentation
mpi4py.rc.initialize
Automatic MPI initialization at import.
Type bool
Default
True
SEE ALSO:
MPI4PY_RC_INITIALIZE
mpi4py.rc.threads
Request initialization with thread support.
Type bool
Default
True
SEE ALSO:
MPI4PY_RC_THREADS
mpi4py.rc.thread_level
Level of thread support to request.
Type str
Default
"multiple"
Choices
"multiple", "serialized", "funneled", "single"
SEE ALSO:
MPI4PY_RC_THREAD_LEVEL
mpi4py.rc.finalize
Automatic MPI finalization at exit.
Type None or bool
Default
None
SEE ALSO:
MPI4PY_RC_FINALIZE
mpi4py.rc.fast_reduce
Use tree-based reductions for objects.
Type bool
Default
True
SEE ALSO:
MPI4PY_RC_FAST_REDUCE
mpi4py.rc.recv_mprobe
Use matched probes to receive objects.
Type bool
Default
True
SEE ALSO:
MPI4PY_RC_RECV_MPROBE
mpi4py.rc.errors
Error handling policy.
Type str
Default
"exception"
Choices
"exception", "default", "fatal"
SEE ALSO:
MPI4PY_RC_ERRORS
Example
MPI for Python features automatic initialization and finalization of the MPI execution environment. By
using the mpi4py.rc object, MPI initialization and finalization can be handled programmatically:
import mpi4py
mpi4py.rc.initialize = False # do not initialize MPI automatically
mpi4py.rc.finalize = False # do not finalize MPI automatically
from mpi4py import MPI # import the 'MPI' module
MPI.Init() # manual initialization of the MPI environment
... # your finest code here ...
MPI.Finalize() # manual finalization of the MPI environment
Environment variables
The following environment variables override the corresponding attributes of the mpi4py.rc and MPI.pickle
objects at import time of the MPI module.
NOTE:
For variables of boolean type, accepted values are 0 and 1 (interpreted as False and True,
respectively), and strings specifying a YAML boolean value (case-insensitive).
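For instance, an option can be overridden for a single run by setting the corresponding variable on the
command line (an illustrative invocation):
$ MPI4PY_RC_THREAD_LEVEL=single mpiexec -n 4 python script.py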
MPI4PY_RC_INITIALIZE
Type bool
Default
True
Whether to automatically initialize MPI at import time of the mpi4py.MPI module.
SEE ALSO:
mpi4py.rc.initialize
New in version 3.1.0.
MPI4PY_RC_FINALIZE
Type None | bool
Default
None
Choices
None, True, False
Whether to automatically finalize MPI at exit time of the Python process.
SEE ALSO:
mpi4py.rc.finalize
New in version 3.1.0.
MPI4PY_RC_THREADS
Type bool
Default
True
Whether to initialize MPI with thread support.
SEE ALSO:
mpi4py.rc.threads
New in version 3.1.0.
MPI4PY_RC_THREAD_LEVEL
Default
"multiple"
Choices
"single", "funneled", "serialized", "multiple"
The level of required thread support.
SEE ALSO:
mpi4py.rc.thread_level
New in version 3.1.0.
MPI4PY_RC_FAST_REDUCE
Type bool
Default
True
Whether to use tree-based reductions for objects.
SEE ALSO:
mpi4py.rc.fast_reduce
New in version 3.1.0.
MPI4PY_RC_RECV_MPROBE
Type bool
Default
True
Whether to use matched probes to receive objects.
SEE ALSO:
mpi4py.rc.recv_mprobe
MPI4PY_RC_ERRORS
Default
"exception"
Choices
"exception", "default", "fatal"
Controls default MPI error handling policy.
SEE ALSO:
mpi4py.rc.errors
New in version 3.1.0.
MPI4PY_PICKLE_PROTOCOL
Type int
Default
pickle.HIGHEST_PROTOCOL
Controls the default pickle protocol to use when communicating Python objects.
SEE ALSO:
PROTOCOL attribute of the MPI.pickle object within the MPI module.
New in version 3.1.0.
MPI4PY_PICKLE_THRESHOLD
Type int
Default
262144
Controls the default buffer size threshold for switching from in-band to out-of-band buffer
handling when using pickle protocol version 5 or higher.
SEE ALSO:
Module mpi4py.util.pkl5.
New in version 3.1.2.
Miscellaneous functions
mpi4py.profile(name, *, path=None, logfile=None)
Support for the MPI profiling interface.
Parameters
• name (str) – Name of the profiler library to load.
• path (sequence of str, optional) – Additional paths to search for the profiler.
• logfile (str, optional) – Filename prefix for dumping profiler output.
Return type
None
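A hedged usage sketch; the profiler name 'vt' and the logfile prefix are illustrative and depend on the
profiling libraries installed on your system:
import mpi4py
mpi4py.profile('vt', logfile='cpilog')   # must run before importing MPI
from mpi4py import MPI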
mpi4py.get_config()
Return a dictionary with information about MPI.
Return type
Dict[str, str]
mpi4py.get_include()
Return the directory in the package that contains header files.
Extension modules that need to compile against mpi4py should use this function to locate the
appropriate include directory. Using Python distutils (or perhaps NumPy distutils):
import mpi4py
Extension('extension_name', ...
          include_dirs=[..., mpi4py.get_include()])
Return type
str
MPI4PY.MPI
Classes
Ancillary
┌──────────────────────┬────────────────────────────┐
│ Datatype([datatype]) │ Datatype object │
├──────────────────────┼────────────────────────────┤
│ Status([status]) │ Status object │
├──────────────────────┼────────────────────────────┤
│ Request([request]) │ Request handle │
├──────────────────────┼────────────────────────────┤
│ Prequest([request]) │ Persistent request handle │
├──────────────────────┼────────────────────────────┤
│ Grequest([request]) │ Generalized request handle │
├──────────────────────┼────────────────────────────┤
│ Op([op]) │ Operation object │
├──────────────────────┼────────────────────────────┤
│ Group([group]) │ Group of processes │
├──────────────────────┼────────────────────────────┤
│ Info([info]) │ Info object │
└──────────────────────┴────────────────────────────┘
Communication
┌───────────────────────┬───────────────────────────────────────┐
│ Comm([comm])          │ Communicator                          │
├───────────────────────┼───────────────────────────────────────┤
│ Intracomm([comm])     │ Intracommunicator                     │
├───────────────────────┼───────────────────────────────────────┤
│ Topocomm([comm])      │ Topology intracommunicator            │
├───────────────────────┼───────────────────────────────────────┤
│ Cartcomm([comm])      │ Cartesian topology intracommunicator  │
├───────────────────────┼───────────────────────────────────────┤
│ Graphcomm([comm])     │ General graph topology                │
│                       │ intracommunicator                     │
├───────────────────────┼───────────────────────────────────────┤
│ Distgraphcomm([comm]) │ Distributed graph topology            │
│                       │ intracommunicator                     │
├───────────────────────┼───────────────────────────────────────┤
│ Intercomm([comm])     │ Intercommunicator                     │
├───────────────────────┼───────────────────────────────────────┤
│ Message([message])    │ Matched message handle                │
└───────────────────────┴───────────────────────────────────────┘
MPI4PY.FUTURES
MPI4PY.UTIL
New in version 3.1.0.
The mpi4py.util package collects miscellaneous utilities within the intersection of Python and MPI.
mpi4py.util.pkl5
New in version 3.1.0.
Pickle protocol 5 (see PEP 574) introduced support for out-of-band buffers, allowing for more efficient
handling of certain object types with large memory footprints.
MPI for Python uses the traditional in-band handling of buffers. This approach is appropriate for
communicating non-buffer Python objects, or buffer-like objects with small memory footprints. For
point-to-point communication, in-band buffer handling allows for the communication of a pickled stream
with a single MPI message, at the expense of additional CPU and memory overhead in the pickling and
unpickling steps.
The mpi4py.util.pkl5 module provides communicator wrapper classes reimplementing pickle-based
point-to-point communication methods using pickle protocol 5. Handling out-of-band buffers necessarily
involves multiple MPI messages, thus increasing latency and hurting performance in the case of small
data. However, in the case of large data, the zero-copy savings of out-of-band buffer handling more than
offset the extra latency costs. Additionally, these wrapper methods overcome the infamous 2 GiB message
count limit (MPI-1 to MPI-3).
NOTE:
Support for pickle protocol 5 is available in the pickle module within the Python standard library
since Python 3.8. Previous Python 3 releases can use the pickle5 backport, which is available on PyPI
and can be installed with:
python -m pip install pickle5
class mpi4py.util.pkl5.Request(request=None)
Request.
Custom request class for nonblocking communications.
NOTE:
Request is not a subclass of mpi4py.MPI.Request
Parameters
request (Iterable[MPI.Request]) –
Return type
Request
Free()
Free a communication request.
Return type
None
cancel()
Cancel a communication request.
Return type
None
get_status(status=None)
Non-destructive test for the completion of a request.
Parameters
status (Optional[Status]) –
Return type
bool
test(status=None)
Test for the completion of a request.
Parameters
status (Optional[Status]) –
Return type
Tuple[bool, Optional[Any]]
wait(status=None)
Wait for a request to complete.
Parameters
status (Optional[Status]) –
Return type
Any
classmethod testall(requests, statuses=None)
Test for the completion of all requests.
Classmethod
classmethod waitall(requests, statuses=None)
Wait for all requests to complete.
Classmethod
class mpi4py.util.pkl5.Message(message=None)
Message.
Custom message class for matching probes.
NOTE:
Message is not a subclass of mpi4py.MPI.Message
Parameters
message (Iterable[MPI.Message]) –
Return type
Message
recv(status=None)
Blocking receive of matched message.
Parameters
status (Optional[Status]) –
Return type
Any
irecv()
Nonblocking receive of matched message.
Return type
Request
classmethod probe(comm, source=ANY_SOURCE, tag=ANY_TAG, status=None)
Blocking test for a matched message.
Classmethod
classmethod iprobe(comm, source=ANY_SOURCE, tag=ANY_TAG, status=None)
Nonblocking test for a matched message.
Classmethod
class mpi4py.util.pkl5.Comm
Communicator.
Base communicator wrapper class.
send(obj, dest, tag=0)
Blocking send in standard mode.
Parameters
• obj (Any) –
• dest (int) –
• tag (int) –
Return type
None
bsend(obj, dest, tag=0)
Blocking send in buffered mode.
Parameters
• obj (Any) –
• dest (int) –
• tag (int) –
Return type
None
ssend(obj, dest, tag=0)
Blocking send in synchronous mode.
Parameters
• obj (Any) –
• dest (int) –
• tag (int) –
Return type
None
isend(obj, dest, tag=0)
Nonblocking send in standard mode.
Parameters
• obj (Any) –
• dest (int) –
• tag (int) –
Return type
Request
ibsend(obj, dest, tag=0)
Nonblocking send in buffered mode.
Parameters
• obj (Any) –
• dest (int) –
• tag (int) –
Return type
Request
issend(obj, dest, tag=0)
Nonblocking send in synchronous mode.
Parameters
• obj (Any) –
• dest (int) –
• tag (int) –
Return type
Request
recv(buf=None, source=ANY_SOURCE, tag=ANY_TAG, status=None)
Blocking receive.
Parameters
• buf (Optional[Buffer]) –
• source (int) –
• tag (int) –
• status (Optional[Status]) –
Return type
Any
irecv(buf=None, source=ANY_SOURCE, tag=ANY_TAG)
Nonblocking receive.
WARNING:
This method cannot be supported reliably and raises RuntimeError.
Parameters
• buf (Optional[Buffer]) –
• source (int) –
• tag (int) –
Return type
Request
sendrecv(sendobj, dest, sendtag=0, recvbuf=None, source=ANY_SOURCE, recvtag=ANY_TAG, status=None)
Send and receive.
Parameters
• sendobj (Any) –
• dest (int) –
• sendtag (int) –
• recvbuf (Optional[Buffer]) –
• source (int) –
• recvtag (int) –
• status (Optional[Status]) –
Return type
Any
mprobe(source=ANY_SOURCE, tag=ANY_TAG, status=None)
Blocking test for a matched message.
Parameters
• source (int) –
• tag (int) –
• status (Optional[Status]) –
Return type
Message
improbe(source=ANY_SOURCE, tag=ANY_TAG, status=None)
Nonblocking test for a matched message.
Parameters
• source (int) –
• tag (int) –
• status (Optional[Status]) –
Return type
Optional[Message]
bcast(obj, root=0)
Broadcast.
Parameters
• obj (Any) –
• root (int) –
Return type
Any
class mpi4py.util.pkl5.Intracomm
Intracommunicator.
Intracommunicator wrapper class.
class mpi4py.util.pkl5.Intercomm
Intercommunicator.
Intercommunicator wrapper class.
Examples
test-pkl5-1.py
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5
comm = pkl5.Intracomm(MPI.COMM_WORLD) # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size
sobj = np.full(1024**3, rank, dtype='i4') # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)
robj = comm.recv(None, src, tag=42)
sreq.Free()
assert np.min(robj) == src
assert np.max(robj) == src
test-pkl5-2.py
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5
comm = pkl5.Intracomm(MPI.COMM_WORLD) # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size
sobj = np.full(1024**3, rank, dtype='i4') # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)
status = MPI.Status()
rmsg = comm.mprobe(status=status)
assert status.Get_source() == src
assert status.Get_tag() == 42
rreq = rmsg.irecv()
robj = rreq.wait()
sreq.Free()
assert np.max(robj) == src
assert np.min(robj) == src
mpi4py.util.dtlib
New in version 3.1.0.
The mpi4py.util.dtlib module provides converter routines between NumPy and MPI datatypes.
mpi4py.util.dtlib.from_numpy_dtype(dtype)
Convert NumPy datatype to MPI datatype.
Parameters
dtype (numpy.typing.DTypeLike) – NumPy dtype-like object.
Return type
Datatype
mpi4py.util.dtlib.to_numpy_dtype(datatype)
Convert MPI datatype to NumPy datatype.
Parameters
datatype (Datatype) – MPI datatype.
Return type
numpy.dtype
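A quick round-trip sketch of these converters (an illustrative example assuming NumPy is available):
import numpy as np
from mpi4py.util import dtlib

datatype = dtlib.from_numpy_dtype('i4')    # NumPy dtype -> MPI datatype
assert datatype.Get_size() == 4
np_dtype = dtlib.to_numpy_dtype(datatype)  # MPI datatype -> NumPy dtype
assert np_dtype == np.dtype('i4')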
MPI4PY.RUN
New in version 3.0.0.
At import time, mpi4py initializes the MPI execution environment by calling MPI_Init_thread() and installs
an exit hook to automatically call MPI_Finalize() just before the Python process terminates.
Additionally, mpi4py overrides the default ERRORS_ARE_FATAL error handler in favor of ERRORS_RETURN,
which allows translating MPI errors into Python exceptions. These departures from standard MPI behavior may
be controversial, but are quite convenient within the highly dynamic Python programming environment.
Third-party code using mpi4py can just from mpi4py import MPI and perform MPI calls without the tedious
initialization/finalization handling. MPI errors, once translated automatically into Python exceptions,
can be handled with the common try…except…finally clauses; unhandled MPI exceptions will print a traceback
which helps in locating problems in source code.
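For instance, an out-of-range destination rank makes an MPI call fail; with ERRORS_RETURN in effect, the
failure surfaces as a catchable MPI.Exception. A minimal sketch (the failing call is only illustrative):
from mpi4py import MPI
comm = MPI.COMM_WORLD
try:
    # dest equal to the communicator size is an invalid rank
    comm.send(None, dest=comm.Get_size(), tag=0)
except MPI.Exception as err:
    print('MPI error:', err.Get_error_string())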
Unfortunately, the interplay of automatic MPI finalization and unhandled exceptions may lead to
deadlocks. In unattended runs, these deadlocks will drain the battery of your laptop, or burn precious
allocation hours in your supercomputing facility.
Consider the following snippet of Python code. Assume this code is stored in a standard Python script
file and run with mpiexec in two or more processes.
from mpi4py import MPI
assert MPI.COMM_WORLD.Get_size() > 1
rank = MPI.COMM_WORLD.Get_rank()
if rank == 0:
    1/0
    MPI.COMM_WORLD.send(None, dest=1, tag=42)
elif rank == 1:
    MPI.COMM_WORLD.recv(source=0, tag=42)
Process 0 raises a ZeroDivisionError exception before performing a send call to process 1. As the exception
is not handled, the Python interpreter running in process 0 will proceed to exit with non-zero status.
However, as mpi4py installed a finalizer hook to call MPI_Finalize() before exit, process 0 will block
waiting for other processes to also enter the MPI_Finalize() call. Meanwhile, process 1 will block
waiting for a message to arrive from process 0, thus never reaching MPI_Finalize(). The whole MPI
execution environment is irremediably in a deadlock state.
To alleviate this issue, mpi4py offers a simple, alternative command line execution mechanism based on
using the -m flag and implemented with the runpy module. To use this feature, Python code should be run
passing -m mpi4py in the command line invoking the Python interpreter. In case of unhandled exceptions,
the finalizer hook will call MPI_Abort() on the MPI_COMM_WORLD communicator, thus effectively aborting
the MPI execution environment.
WARNING:
When a process is forced to abort, resources (e.g. open files) are not cleaned up and any registered
finalizers (either with the atexit module, the Python C/API function Py_AtExit(), or even the C
standard library function atexit()) will not be executed. Thus, aborting execution is an extremely
impolite way of ensuring process termination. However, MPI provides no other mechanism to recover from
a deadlock state.
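For example, assuming the deadlock-prone snippet above is saved as a script named deadlock.py (a
hypothetical filename), it can be launched through the runner as:
$ mpiexec -n 2 python -m mpi4py deadlock.py
The unhandled ZeroDivisionError in process 0 now triggers MPI_Abort() on MPI_COMM_WORLD, terminating the
whole run instead of leaving it hung in MPI_Finalize().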
Interface options
The use of -m mpi4py to execute Python code on the command line resembles that of the Python interpreter.
• mpiexec -n numprocs python -m mpi4py pyfile [arg] ...
• mpiexec -n numprocs python -m mpi4py -m mod [arg] ...
• mpiexec -n numprocs python -m mpi4py -c cmd [arg] ...
• mpiexec -n numprocs python -m mpi4py - [arg] ...
<pyfile>
Execute the Python code contained in pyfile, which must be a filesystem path referring to either a
Python file, a directory containing a __main__.py file, or a zipfile containing a __main__.py
file.
-m <mod>
Search sys.path for the named module mod and execute its contents.
-c <cmd>
Execute the Python code in the cmd string.
-
Read commands from standard input (sys.stdin).
SEE ALSO:
using-on-cmdline
Documentation on the Python command line interface.
REFERENCE
┌────────────┬────────────────────────────┐
│ mpi4py.MPI │ Message Passing Interface. │
└────────────┴────────────────────────────┘
mpi4py.MPI
Message Passing Interface.
Classes
────────────────────────────────────────────────────────────────────────────
Cartcomm([comm])                  Cartesian topology intracommunicator
Comm([comm])                      Communicator
Datatype([datatype])              Datatype object
Distgraphcomm([comm])             Distributed graph topology intracommunicator
Errhandler([errhandler])          Error handler
File([file])                      File handle
Graphcomm([comm])                 General graph topology intracommunicator
Grequest([request])               Generalized request handle
Group([group])                    Group of processes
Info([info])                      Info object
Intercomm([comm])                 Intercommunicator
Intracomm([comm])                 Intracommunicator
Message([message])                Matched message handle
Op([op])                          Operation object
Pickle([dumps, loads, protocol])  Pickle/unpickle Python objects
Prequest([request])               Persistent request handle
Request([request])                Request handle
Status([status])                  Status object
Topocomm([comm])                  Topology intracommunicator
Win([win])                        Window handle
memory(buf)                       Memory buffer
────────────────────────────────────────────────────────────────────────────
--
CITATION
If MPI for Python has been significant to a project that leads to an academic publication, please acknowledge
that fact by citing the project.
• L. Dalcin and Y.-L. L. Fang, mpi4py: Status Update After 12 Years of Development, Computing in Science
& Engineering, 23(4):47-54, 2021. https://doi.org/10.1109/MCSE.2021.3083216
• L. Dalcin, P. Kler, R. Paz, and A. Cosimo, Parallel Distributed Computing using Python, Advances in
Water Resources, 34(9):1124-1139, 2011. https://doi.org/10.1016/j.advwatres.2011.04.013
• L. Dalcin, R. Paz, M. Storti, and J. D’Elia, MPI for Python: performance improvements and MPI-2
extensions, Journal of Parallel and Distributed Computing, 68(5):655-662, 2008.
https://doi.org/10.1016/j.jpdc.2007.09.005
• L. Dalcin, R. Paz, and M. Storti, MPI for Python, Journal of Parallel and Distributed Computing,
65(9):1108-1115, 2005. https://doi.org/10.1016/j.jpdc.2005.03.010
INSTALLATION
Requirements
You need to have the following software properly installed in order to build MPI for Python:
• A working MPI implementation, preferably supporting MPI-3 and built with shared/dynamic libraries.
NOTE:
If you want to build some MPI implementation from sources, check the instructions at building-mpi in
the appendix.
• Python 2.7, 3.5 or above.
NOTE:
Some MPI-1 implementations do require the actual command line arguments to be passed in MPI_Init().
In this case, you will need to use a rebuilt, MPI-enabled, Python interpreter executable. MPI for
Python has some support for relieving you of this task. Check the instructions at python-mpi in
the appendix.
Using pip
If you already have a working MPI (whether you installed it from sources or used a pre-built
package from your favourite GNU/Linux distribution) and the mpicc compiler wrapper is on your search
path, you can use pip:
$ python -m pip install mpi4py
NOTE:
If the mpicc compiler wrapper is not on your search path (or if it has a different name) you can use
env to pass the environment variable MPICC providing the full path to the MPI compiler wrapper
executable:
$ env MPICC=/path/to/mpicc python -m pip install mpi4py
WARNING:
pip keeps previously built wheel files in its cache for future reuse. If you want to reinstall the
mpi4py package using a different or updated MPI implementation, you have to either first remove the
cached wheel file with:
$ python -m pip cache remove mpi4py
or ask pip to disable the cache:
$ python -m pip install --no-cache-dir mpi4py
Using distutils
The MPI for Python package is available for download at the project website generously hosted by GitHub.
You can use curl or wget to get a release tarball.
• Using curl:
$ curl -O https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz
• Using wget:
$ wget https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz
After unpacking the release tarball:
$ tar -zxf mpi4py-X.Y.Z.tar.gz
$ cd mpi4py-X.Y.Z
the package is ready for building.
MPI for Python uses a standard distutils-based build system. However, some distutils commands (like
build) have additional options:
--mpicc=
Lets you specify a special location or name for the mpicc compiler wrapper.
--mpi=
Lets you pass a section with MPI configuration within a special configuration file.
--configure
Runs exhaustive tests checking for missing MPI types, constants, and functions. This option
should be passed in order to build MPI for Python against old MPI-1 or MPI-2 implementations,
possibly providing a subset of MPI-3.
If you use an MPI implementation providing an mpicc compiler wrapper (e.g., MPICH, Open MPI), it will be
used for compilation and linking. This is the preferred and easiest way of building MPI for Python.
If mpicc is located somewhere in your search path, simply run the build command:
$ python setup.py build
If mpicc is not in your search path or the compiler wrapper has a different name, you can run the build
command specifying its location:
$ python setup.py build --mpicc=/where/you/have/mpicc
Alternatively, you can provide all the relevant information about your MPI implementation by editing the
file called mpi.cfg. You can use the default section [mpi] or add a new, custom section, for example
[other_mpi] (see the examples provided in the mpi.cfg file as a starting point to write your own
section):
[mpi]
include_dirs = /usr/local/mpi/include
libraries = mpi
library_dirs = /usr/local/mpi/lib
runtime_library_dirs = /usr/local/mpi/lib
[other_mpi]
include_dirs = /opt/mpi/include ...
libraries = mpi ...
library_dirs = /opt/mpi/lib ...
runtime_library_dirs = /opt/mpi/lib ...
...
and then run the build command, perhaps specifying your custom configuration section:
$ python setup.py build --mpi=other_mpi
After building, the package is ready for install.
If you have root privileges (either by logging in as the root user or by using sudo) and you want to install
MPI for Python in your system for all users, just do:
$ python setup.py install
The previous steps will install the mpi4py package at the standard location
prefix/lib/pythonX.X/site-packages.
If you do not have root privileges or you want to install MPI for Python for your private use, just do:
$ python setup.py install --user
Testing
To quickly test the installation:
$ mpiexec -n 5 python -m mpi4py.bench helloworld
Hello, World! I am process 0 of 5 on localhost.
Hello, World! I am process 1 of 5 on localhost.
Hello, World! I am process 2 of 5 on localhost.
Hello, World! I am process 3 of 5 on localhost.
Hello, World! I am process 4 of 5 on localhost.
If you installed from source, issuing at the command line:
$ mpiexec -n 5 python demo/helloworld.py
or (in the case of ancient MPI-1 implementations):
$ mpirun -np 5 python `pwd`/demo/helloworld.py
will launch a five-process run of the Python interpreter and run the test script demo/helloworld.py from
the source distribution.
You can also run all the unittest scripts:
$ mpiexec -n 5 python test/runtests.py
or, if you have the nose unit testing framework installed:
$ mpiexec -n 5 nosetests -w test
or, if you have the py.test unit testing framework installed:
$ mpiexec -n 5 py.test test/
APPENDIX
MPI-enabled Python interpreter
WARNING:
These days it is no longer required to use the MPI-enabled Python interpreter in most cases, and,
therefore, it is not built by default anymore because it is too difficult to reliably build a
Python interpreter across different distributions. If you know that you still really need it, see
below on how to use the build_exe and install_exe commands.
Some MPI-1 implementations (notably, MPICH 1) do require the actual command line arguments to be passed
at the time MPI_Init() is called. In this case, you will need to use a re-built, MPI-enabled, Python
interpreter binary executable. A basic implementation (targeting Python 2.X) of what is required is shown
below:
#include <Python.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
  int status, flag;
  MPI_Init(&argc, &argv);        /* initialize MPI with the real command line */
  status = Py_Main(argc, argv);  /* run the standard Python interpreter */
  MPI_Finalized(&flag);
  if (!flag) MPI_Finalize();     /* finalize MPI unless Python code already did */
  return status;
}
The source code above is straightforward; compiling it should also be. However, the linking step is
trickier: special flags have to be passed to the linker depending on your platform. In order to relieve
you of such low-level details, MPI for Python provides some pure-distutils based support to build and
install an MPI-enabled Python interpreter executable:
$ cd mpi4py-X.X.X
$ python setup.py build_exe [--mpi=<name>|--mpicc=/path/to/mpicc]
$ [sudo] python setup.py install_exe [--install-dir=$HOME/bin]
After the above steps you should have the MPI-enabled interpreter installed as prefix/bin/pythonX.X-mpi
(or $HOME/bin/pythonX.X-mpi). Assuming that prefix/bin (or $HOME/bin) is listed on your PATH, you should
be able to enter your MPI-enabled Python interactively, for example:
$ python2.7-mpi
Python 2.7.8 (default, Nov 10 2014, 08:19:18)
[GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/usr/bin/python2.7-mpi'
>>>
Building MPI from sources
The list below gives concise instructions for building some of the open source MPI
implementations with support for shared/dynamic libraries on POSIX environments.
• MPICH
$ tar -zxf mpich-X.X.X.tar.gz
$ cd mpich-X.X.X
$ ./configure --enable-shared --prefix=/usr/local/mpich
$ make
$ make install
• Open MPI
$ tar -zxf openmpi-X.X.X.tar.gz
$ cd openmpi-X.X.X
$ ./configure --prefix=/usr/local/openmpi
$ make all
$ make install
• MPICH 1
$ tar -zxf mpich-X.X.X.tar.gz
$ cd mpich-X.X.X
$ ./configure --enable-sharedlib --prefix=/usr/local/mpich1
$ make
$ make install
You may need to set the LD_LIBRARY_PATH environment variable (using export, setenv, or whatever
applies to your system) to point to the directory containing the MPI libraries. If you get
runtime linking errors when running MPI programs, the following lines can be added to the user login
shell script (.profile, .bashrc, etc.).
• MPICH
MPI_DIR=/usr/local/mpich
export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH
• Open MPI
MPI_DIR=/usr/local/openmpi
export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH
• MPICH 1
MPI_DIR=/usr/local/mpich1
export LD_LIBRARY_PATH=$MPI_DIR/lib/shared:$LD_LIBRARY_PATH
export MPICH_USE_SHLIB=yes
WARNING:
MPICH 1 support for dynamic libraries is not completely transparent. Users should set the
environment variable MPICH_USE_SHLIB to yes in order to avoid link problems when using the mpicc
compiler wrapper.
AUTHOR
Lisandro Dalcin
COPYRIGHT
2022, Lisandro Dalcin
3.1 March 16, 2022 MPI4PY(1)