Provided by: python-mpi4py-doc_3.1.5-5ubuntu2_all

NAME

       mpi4py - MPI for Python

       Author Lisandro Dalcin

       Contact
              dalcinl@gmail.com

       Date   April 01, 2024

   Abstract
       This  document  describes  the  MPI  for  Python  package.  MPI for Python provides Python
       bindings for the Message Passing Interface (MPI) standard, allowing Python applications to
       exploit multiple processors on workstations, clusters and supercomputers.

       This  package  builds  on  the MPI specification and provides an object oriented interface
       resembling the MPI-2 C++  bindings.  It  supports  point-to-point  (sends,  receives)  and
       collective  (broadcasts,  scatters, gathers) communication of any picklable Python object,
       as well as efficient communication of Python objects exposing the Python buffer  interface
       (e.g. NumPy arrays and builtin bytes/array/memoryview objects).

INTRODUCTION

       Over the last few years, high performance computing has become an affordable resource to
       many more researchers in the scientific community than ever before. The conjunction of
       quality open source software and commodity hardware strongly influenced the now widespread
       popularity of Beowulf class clusters and clusters of workstations.

       Among many parallel computational models, message-passing has proven to be an effective
       one.  This paradigm is especially suited for (but not limited to) distributed memory
       architectures and is used in today’s most demanding scientific and engineering applications
       related to modeling, simulation, design, and signal processing.  However, portable
       message-passing parallel programming used to be a nightmare in the past because of the
       many incompatible options developers were faced with.  Fortunately, this situation
       definitely changed after the MPI Forum released its standard specification.

       High performance computing is traditionally associated  with  software  development  using
       compiled languages. However, in typical application programs, only a small part of the
       code is time-critical enough to require the efficiency of compiled languages. The rest  of
       the code is generally related to memory management, error handling, input/output, and user
       interaction, and those are usually the most error prone and time-consuming lines  of  code
       to write and debug in the whole development process.  Interpreted high-level languages can
       be really advantageous for this kind of task.

       For implementing general-purpose  numerical  computations,  MATLAB  [1]  is  the  dominant
       interpreted programming language. On the open source side, Octave and Scilab are well
       known, freely distributed  software  packages  providing  compatibility  with  the  MATLAB
       language.  In this work, we present MPI for Python, a new package enabling applications to
       exploit multiple processors using standard MPI “look and feel” in Python scripts.

       [1]  MATLAB is a registered trademark of The MathWorks, Inc.

   What is MPI?
       MPI, [mpi-using] [mpi-ref] the Message Passing Interface, is a standardized  and  portable
       message-passing  system  designed to function on a wide variety of parallel computers. The
       standard defines the syntax and semantics of library routines and allows  users  to  write
       portable programs in the main scientific programming languages (Fortran, C, or C++).

       Since  its  release,  the  MPI  specification [mpi-std1] [mpi-std2] has become the leading
       standard for  message-passing  libraries  for  parallel  computers.   Implementations  are
       available  from  vendors  of  high-performance  computers  and from well known open source
       projects like MPICH [mpi-mpich] and Open MPI [mpi-openmpi].

   What is Python?
       Python is a modern, easy  to  learn,  powerful  programming  language.  It  has  efficient
       high-level  data  structures  and  a  simple  but  effective  approach  to object-oriented
       programming with dynamic typing and dynamic binding. It  supports  modules  and  packages,
       which encourages program modularity and code reuse. Python’s elegant syntax, together with
       its interpreted nature, makes it an ideal language for scripting and rapid application
       development in many areas on most platforms.

       The  Python  interpreter  and  the  extensive  standard library are available in source or
       binary form without charge for all major platforms, and can be freely distributed.  It  is
       easily  extended with new functions and data types implemented in C or C++. Python is also
       suitable as an extension language for customizable applications.

       Python is an ideal candidate for writing the higher-level parts of large-scale  scientific
       applications [Hinsen97] and driving simulations in parallel architectures [Beazley97] like
       clusters of PCs or SMPs. Python codes are quickly developed, easily maintained, and can
       achieve a high degree of integration with other libraries written in compiled languages.

   Related Projects
       As  this work started and evolved, some ideas were borrowed from well known MPI and Python
       related open source projects from the Internet.

       • OOMPI

         • It has no relation to Python, but it is an excellent object oriented approach to MPI.

         • It is a C++ class library  specification  layered  on  top  of  the  C  bindings  that
           encapsulates MPI into a functional class hierarchy.

         • It provides a flexible and intuitive interface by adding some abstractions, like Ports
           and Messages, which enrich and simplify the syntax.

       • Pypar

         • Its interface is rather minimal. There is no  support  for  communicators  or  process
           topologies.

         • It  does not require the Python interpreter to be modified or recompiled, but does not
           permit interactive parallel runs.

         • General (picklable) Python objects of any type can be communicated. There is good
           support for numeric arrays; practically full MPI bandwidth can be achieved.

       • pyMPI

         • It rebuilds the Python interpreter providing a built-in module for message passing. It
           does permit interactive parallel runs, which are useful for learning and debugging.

         • It provides an interface suitable for basic parallel programming. There is no full
           support for defining new communicators or process topologies.

         • General (picklable) Python objects can be messaged between processors. There is no
           support for numeric arrays.

       • Scientific Python

         • It provides a collection of Python modules that are useful for scientific computing.

         • There is an interface to MPI and BSP (Bulk Synchronous Parallel programming).

         • The interface is simple but incomplete and does not resemble  the  MPI  specification.
           There is support for numeric arrays.

       Additionally,  we  would like to mention some available tools for scientific computing and
       software development with Python.

       • NumPy is a package that  provides  array  manipulation  and  computational  capabilities
         similar  to  those found in IDL, MATLAB, or Octave. Using NumPy, it is possible to write
         many efficient numerical data processing applications directly in Python  without  using
         any C, C++ or Fortran code.

       • SciPy  is  an open source library of scientific tools for Python, gathering a variety of
         high level science and engineering modules together as a  single  package.  It  includes
         modules  for graphics and plotting, optimization, integration, special functions, signal
         and image processing, genetic algorithms, ODE solvers, and others.

       • Cython is a language that makes writing C extensions for the Python language as easy  as
         Python  itself.  The  Cython  language  is very close to the Python language, but Cython
         additionally supports calling C functions and declaring C types on variables  and  class
         attributes. This allows the compiler to generate very efficient C code from Cython code.
         This makes Cython the ideal language for wrapping external C libraries, and for writing
         fast C modules that speed up the execution of Python code.

       • SWIG  is  a software development tool that connects programs written in C and C++ with a
         variety of high-level programming languages like Perl, Tcl/Tk, Ruby and Python. Feeding
         header files to SWIG is the simplest approach to interfacing C/C++ libraries from a
         Python module.

       [mpi-std1]
            MPI Forum. MPI: A Message  Passing  Interface  Standard.   International  Journal  of
            Supercomputer Applications, volume 8, number 3-4, pages 159-416, 1994.

       [mpi-std2]
            MPI  Forum.  MPI:  A  Message Passing Interface Standard.  High Performance Computing
            Applications, volume 12, number 1-2, pages 1-299, 1998.

       [mpi-using]
            William Gropp, Ewing Lusk,  and  Anthony  Skjellum.   Using  MPI:  portable  parallel
            programming with the message-passing interface.  MIT Press, 1994.

       [mpi-ref]
            Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra.  MPI -
            The Complete Reference, volume 1, The MPI Core.  MIT Press, 2nd. edition, 1998.

       [mpi-mpich]
            W.  Gropp,  E.  Lusk,  N.  Doss,  and  A.  Skjellum.   A  high-performance,  portable
            implementation  of  the  MPI message passing interface standard.  Parallel Computing,
            22(6):789-828, September 1996.

       [mpi-openmpi]
            Edgar Gabriel, Graham E. Fagg, George  Bosilca,  Thara  Angskun,  Jack  J.  Dongarra,
            Jeffrey  M.  Squyres,  Vishal  Sahay,  Prabhanjan  Kambadur,  Brian  Barrett,  Andrew
            Lumsdaine, Ralph H. Castain, David J. Daniel,  Richard  L.  Graham,  and  Timothy  S.
            Woodall.   Open   MPI:   Goals,   Concept,  and  Design  of  a  Next  Generation  MPI
            Implementation. In Proceedings, 11th European PVM/MPI Users’ Group Meeting, Budapest,
            Hungary, September 2004.

       [Hinsen97]
            Konrad  Hinsen.   The Molecular Modelling Toolkit: a case study of a large scientific
            application in Python.  In Proceedings of the 6th  International  Python  Conference,
            pages 29-35, San Jose, Ca., October 1997.

       [Beazley97]
            David  M. Beazley and Peter S. Lomdahl.  Feeding a large-scale physics application to
            Python.  In Proceedings of the 6th International Python Conference, pages 21-29,  San
            Jose, Ca., October 1997.

OVERVIEW

       MPI for Python provides an object oriented approach to message passing which is based on
       the standard MPI-2 C++ bindings. The interface was designed with a focus on translating the
       syntax and semantics of the standard MPI-2 C++ bindings to Python. Any user of the standard
       C/C++ MPI bindings should be able to use this module without needing to learn a new
       interface.

   Communicating Python Objects and Array Data
       The  Python  standard  library supports different mechanisms for data persistence. Many of
       them rely on disk storage, but pickling and marshaling can also work with memory buffers.

       The pickle module provides user-extensible facilities to serialize general Python objects
       using  ASCII  or  binary  formats.  The  marshal  module  provides facilities to serialize
       built-in Python objects using a binary format  specific  to  Python,  but  independent  of
       machine architecture issues.

       MPI for Python can communicate any built-in or user-defined Python object taking advantage
       of the features provided by the pickle module. These facilities will be routinely used  to
       build binary representations of the objects to communicate (at sending processes), and to
       restore them (at receiving processes).

       Although simple and general, the serialization approach (i.e.,  pickling  and  unpickling)
       previously  discussed  imposes  important  overheads in memory as well as processor usage,
       especially in the scenario of objects with large  memory  footprints  being  communicated.
       Pickling  general  Python  objects,  ranging from primitive or container built-in types to
       user-defined classes, necessarily requires computer resources.  Processing is also  needed
       for  dispatching  the  appropriate  serialization  method (that depends on the type of the
       object) and doing the actual packing. Additional memory is always needed, and if its total
       amount  is not known a priori, many reallocations can occur.  Indeed, in the case of large
       numeric arrays, this is certainly unacceptable  and  precludes  communication  of  objects
       occupying half or more of the available memory resources.

       MPI  for  Python  supports direct communication of any object exporting the single-segment
       buffer interface. This interface is a standard Python mechanism  provided  by  some  types
       (e.g.,  strings  and numeric arrays), allowing access in the C side to a contiguous memory
       buffer (i.e.,  address  and  length)  containing  the  relevant  data.  This  feature,  in
       conjunction  with  the  capability  of  constructing user-defined MPI datatypes describing
       complicated memory layouts,  enables  the  implementation  of  many  algorithms  involving
       multidimensional  numeric  arrays (e.g., image processing, fast Fourier transforms, finite
       difference schemes on structured Cartesian grids)  directly  in  Python,  with  negligible
       overhead, and almost as fast as compiled Fortran, C, or C++ codes.

   Communicators
       In  MPI  for  Python, Comm is the base class of communicators. The Intracomm and Intercomm
       classes are subclasses of the Comm class.  The Comm.Is_inter method (and Comm.Is_intra,
       provided  for  convenience  but  not  part  of  the  MPI  specification)  is  defined  for
       communicator objects and can be used to determine the particular communicator class.

       Two predefined intracommunicator instances are available: COMM_SELF and COMM_WORLD.
       From them, new communicators can be created as needed.

       The number of processes in a communicator and the calling process rank can be respectively
       obtained with methods Comm.Get_size and Comm.Get_rank. The associated process group can be
       retrieved  from  a  communicator  by  calling  the Comm.Get_group method, which returns an
       instance of the Group class. Set operations with Group objects like Group.Union,
       Group.Intersection  and  Group.Difference  are fully supported, as well as the creation of
       new communicators from these groups using Comm.Create and Comm.Create_group.

       New communicator instances can be obtained with the Comm.Clone,  Comm.Dup  and  Comm.Split
       methods, as well as with the Intracomm.Create_intercomm and Intercomm.Merge methods.

       Virtual   topologies   (Cartcomm,   Graphcomm   and   Distgraphcomm   classes,  which  are
       specializations of the Intracomm class) are fully supported. New instances can be obtained
       from   intracommunicator   instances   with   factory  methods  Intracomm.Create_cart  and
       Intracomm.Create_graph.
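
       For illustration, a minimal sketch of querying and splitting a communicator (the color and
       key values below are arbitrary, chosen only for this example):

          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          size = comm.Get_size()     # number of processes in the communicator
          rank = comm.Get_rank()     # rank of the calling process
          group = comm.Get_group()   # associated Group instance

          # split COMM_WORLD in two halves; 'color' selects the new
          # communicator, 'key' determines the rank ordering within it
          newcomm = comm.Split(color=rank % 2, key=rank)
          newcomm.Free()
          group.Free()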

   Point-to-Point Communications
       Point to point communication is a fundamental capability of message passing systems.  This
       mechanism  enables the transmission of data between a pair of processes, one side sending,
       the other receiving.

       MPI provides a set of send and receive functions allowing the communication of typed  data
       with   an   associated   tag.   The  type  information  enables  the  conversion  of  data
       representation from one architecture to another in the  case  of  heterogeneous  computing
       environments;  additionally,  it  allows the representation of non-contiguous data layouts
       and  user-defined  datatypes,  thus  avoiding  the  overhead  of  (otherwise  unavoidable)
       packing/unpacking  operations.  The  tag information allows selectivity of messages at the
       receiving end.

   Blocking Communications
       MPI provides basic send and receive functions that are blocking.   These  functions  block
       the  caller  until  the data buffers involved in the communication can be safely reused by
       the application program.

       In MPI for Python, the Comm.Send, Comm.Recv  and  Comm.Sendrecv  methods  of  communicator
       objects  provide  support  for blocking point-to-point communications within Intracomm and
       Intercomm instances. These methods can communicate memory buffers. The variants Comm.send,
       Comm.recv and Comm.sendrecv can communicate general Python objects.
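
       As an illustrative sketch (the ring pattern below is not part of the text above), a blocking
       exchange with both neighbors can be written with Comm.sendrecv, which combines the send and
       the receive in a single call and thus avoids the deadlocks a naive send/recv ordering could
       cause:

          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          size = comm.Get_size()
          rank = comm.Get_rank()

          # exchange a small Python object with both neighbors in a ring
          right = (rank + 1) % size
          left = (rank - 1) % size
          data = comm.sendrecv({'rank': rank}, dest=right, source=left)
          assert data['rank'] == left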

   Nonblocking Communications
       On  many  systems, performance can be significantly increased by overlapping communication
       and computation. This is particularly true on systems where communication can be  executed
       autonomously by an intelligent, dedicated communication controller.

       MPI  provides  nonblocking  send and receive functions. They allow the possible overlap of
       communication and computation.  Nonblocking communication always comes in two parts:
       posting functions, which begin the requested operation; and test-for-completion functions,
       which allow one to discover whether the requested operation has completed.

       In MPI for Python, the  Comm.Isend  and  Comm.Irecv  methods  initiate  send  and  receive
       operations,  respectively.  These  methods return a Request instance, uniquely identifying
       the started operation.  Its completion can be managed using the Request.Test, Request.Wait
       and  Request.Cancel  methods.  The  management  of  Request  objects and associated memory
       buffers involved in communication requires a careful, rather low-level coordination. Users
       must  ensure  that  objects  exposing  their memory buffers are not accessed at the Python
       level while they are involved in nonblocking message-passing operations.
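
       A minimal sketch of the buffer-based nonblocking calls described above (array size and tag
       values are arbitrary; at least two processes are assumed):

          from mpi4py import MPI
          import numpy

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()

          if rank == 0:
              data = numpy.arange(10, dtype='i')
              req = comm.Isend([data, MPI.INT], dest=1, tag=0)
              req.Wait()   # 'data' must not be modified before completion
          elif rank == 1:
              data = numpy.empty(10, dtype='i')
              req = comm.Irecv([data, MPI.INT], source=0, tag=0)
              req.Wait()   # 'data' must not be read before completion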

   Persistent Communications
       Often a communication with the same argument list is repeatedly executed within  an  inner
       loop.  In  such  cases,  communication  can  be  further  optimized  by  using  persistent
       communication, a particular case of nonblocking communication allowing  the  reduction  of
       the overhead between processes and communication controllers. Furthermore, this kind of
       optimization can also alleviate the extra call overheads associated with interpreted,
       dynamic languages like Python.

       In  MPI  for  Python,  the  Comm.Send_init  and  Comm.Recv_init  methods create persistent
       requests for a send and receive operation, respectively.  These methods return an instance
       of  the  Prequest  class, a subclass of the Request class. The actual communication can be
       effectively started using the Prequest.Start method, and its completion can be managed  as
       previously described.
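
       A minimal sketch of a persistent send/receive pair (at least two processes are assumed; the
       message size and the number of iterations are arbitrary):

          from mpi4py import MPI
          import numpy

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()

          buf = numpy.empty(10, dtype='d')
          if rank == 0:
              req = comm.Send_init([buf, MPI.DOUBLE], dest=1, tag=7)
          elif rank == 1:
              req = comm.Recv_init([buf, MPI.DOUBLE], source=0, tag=7)

          if rank in (0, 1):
              for step in range(5):
                  if rank == 0:
                      buf[:] = step
                  req.Start()   # start the persistent operation
                  req.Wait()    # complete it; 'buf' can be reused afterwards
              req.Free()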

   Collective Communications
       Collective  communications  allow  the transmittal of data between multiple processes of a
       group simultaneously. The syntax and semantics of collective functions is consistent  with
       point-to-point  communication.  Collective  functions communicate typed data, but messages
       are not paired with an associated tag; selectivity of messages is implied in  the  calling
       order. Additionally, collective functions come in blocking versions only.

       The most commonly used collective communication operations are the following.

       • Barrier synchronization across all group members.

       • Global communication functions

         • Broadcast data from one member to all members of a group.

         • Gather data from all members to one member of a group.

         • Scatter data from one member to all members of a group.

       • Global reduction operations such as sum, maximum, minimum, etc.

       In   MPI   for   Python,   the   Comm.Bcast,  Comm.Scatter,  Comm.Gather,  Comm.Allgather,
       Comm.Alltoall methods provide support for collective communications of memory buffers. The
       lower-case    variants   Comm.bcast,   Comm.scatter,   Comm.gather,   Comm.allgather   and
       Comm.alltoall can communicate general Python objects.   The  vector  variants  (which  can
       communicate  different  amounts  of  data  to  each  process) Comm.Scatterv, Comm.Gatherv,
       Comm.Allgatherv, Comm.Alltoallv and Comm.Alltoallw are also supported; they can only
       communicate objects exposing memory buffers.

       Global reduction operations on memory buffers are accessible through the Comm.Reduce,
       Comm.Reduce_scatter, Comm.Allreduce,  Intracomm.Scan  and  Intracomm.Exscan  methods.  The
       lower-case  variants  Comm.reduce, Comm.allreduce, Intracomm.scan and Intracomm.exscan can
       communicate general Python objects; however, the actual  required  reduction  computations
       are  performed  sequentially  at  some  process. All the predefined (i.e., SUM, PROD, MAX,
       etc.)  reduction operations can be applied.
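
       For illustration, a buffer-based all-reduce and its pickle-based counterpart might look as
       follows (a minimal sketch):

          from mpi4py import MPI
          import numpy

          comm = MPI.COMM_WORLD
          size = comm.Get_size()
          rank = comm.Get_rank()

          # buffer-based reduction on NumPy arrays
          sendbuf = numpy.full(5, rank, dtype='i')
          recvbuf = numpy.empty(5, dtype='i')
          comm.Allreduce([sendbuf, MPI.INT], [recvbuf, MPI.INT], op=MPI.SUM)
          assert (recvbuf == size * (size - 1) // 2).all()

          # pickle-based reduction on general Python objects
          total = comm.allreduce(rank, op=MPI.SUM)
          assert total == size * (size - 1) // 2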

   Support for GPU-aware MPI
       Several MPI implementations, including Open MPI and MVAPICH, support passing GPU  pointers
       to MPI calls to avoid explicit data movement between the host and the device. On the Python
       side, GPU arrays have been implemented by many libraries that need GPU  computation,  such
       as  CuPy,  Numba, PyTorch, and PyArrow. In order to increase library interoperability, two
       kinds of zero-copy data exchange protocols are defined and agreed upon:  DLPack  and  CUDA
       Array Interface. For example, a CuPy array can be passed to a Numba CUDA-jit kernel.

       MPI for Python provides experimental support for GPU-aware MPI.  This feature requires:

       1. mpi4py is built against a GPU-aware MPI library.

       2. The Python GPU arrays are compliant with either of the protocols.

       See the Tutorial section for further information. We note that

       • Whether or not an MPI call can work for GPU arrays depends on the underlying MPI
         implementation, not on mpi4py.

       • This support is currently experimental and subject to change in the future.

   Dynamic Process Management
       In the context of the MPI-1 specification, a parallel application is static; that  is,  no
       processes can be added to or deleted from a running application after it has been started.
       Fortunately, this limitation was addressed in MPI-2. The new specification added a process
       management model providing a basic interface between an application and external resources
       and process managers.

       This MPI-2 extension can be really useful, especially for sequential applications built on
       top  of  parallel  modules, or parallel applications with a client/server model. The MPI-2
       process model provides a mechanism to create new  processes  and  establish  communication
       between  them  and  the existing MPI application. It also provides mechanisms to establish
       communication between two existing MPI applications, even  when  one  did  not  start  the
       other.

       In  MPI  for  Python,  new  independent  process  groups  can  be  created  by calling the
       Intracomm.Spawn  method  within  an  intracommunicator.    This   call   returns   a   new
       intercommunicator  (i.e.,  an  Intercomm  instance) at the parent process group. The child
       process group can retrieve the matching intercommunicator by calling  the  Comm.Get_parent
       class  method.  At  each  side,  the new intercommunicator can be used to perform point to
       point and collective communications between the parent and child groups of processes.

       Alternatively,  disjoint  groups  of  processes  can  establish  communication   using   a
       client/server  approach.  Any server application must first call the Open_port function to
       open a port and the Publish_name function to publish a provided service, and next call the
       Intracomm.Accept method.  Any client application can first find a published service by
       calling the Lookup_name function, which returns the port where a server can be  contacted;
       and  next  call  the Intracomm.Connect method. Both Intracomm.Accept and Intracomm.Connect
       methods return an Intercomm instance. When connection between client/server  processes  is
       no  longer  needed,  all  of  them  must  cooperatively  call  the Comm.Disconnect method.
       Additionally, server applications should release resources by calling  the  Unpublish_name
       and Close_port functions.
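
       The following sketch outlines both sides of this client/server workflow (the service name
       "compute" is only illustrative; in practice the server and the client run as separate MPI
       programs):

          from mpi4py import MPI

          comm = MPI.COMM_WORLD

          def server():
              port = MPI.Open_port()
              MPI.Publish_name("compute", port)
              intercomm = comm.Accept(port)       # wait for a client to connect
              # ... communicate with the client through 'intercomm' ...
              intercomm.Disconnect()
              MPI.Unpublish_name("compute", port)
              MPI.Close_port(port)

          def client():
              port = MPI.Lookup_name("compute")   # find the published service
              intercomm = comm.Connect(port)
              # ... communicate with the server through 'intercomm' ...
              intercomm.Disconnect()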

   One-Sided Communications
       One-sided communications (also called Remote Memory Access, RMA) supplement the
       traditional two-sided, send/receive based MPI communication model with a one-sided,
       put/get based interface. One-sided communication can take advantage of the capabilities
       of highly specialized network hardware. Additionally, this extension lowers latency and
       software overhead in applications written using a shared-memory-like paradigm.

       The  MPI specification revolves around the use of objects called windows; they intuitively
       specify regions of a process’s memory that have been made available for  remote  read  and
       write operations.  The published memory blocks can be accessed through three functions for
       put (remote write), get (remote read), and accumulate (remote update or reduction) data
       items.  A  much  larger  number of functions support different synchronization styles; the
       semantics of these synchronization operations are fairly complex.

       In MPI for Python, one-sided operations are available by using instances of the Win class.
       New  window objects are created by calling the Win.Create method at all processes within a
       communicator and specifying a memory buffer.  When a window instance is no longer needed,
       the Win.Free method should be called.

       The  three  one-sided  MPI  operations  for remote write, read and reduction are available
       through calling the methods Win.Put, Win.Get, and Win.Accumulate respectively within a Win
       instance.   These  methods  need  an  integer  rank  identifying the target process and an
       integer offset relative to the base address of the remote memory block being accessed.

       The one-sided operations read, write, and reduction are implicitly nonblocking, and must
       be synchronized by using one of two primary modes.  Active target synchronization requires
       the origin process to call the Win.Start and Win.Complete methods, while the target process
       cooperates by calling the Win.Post and Win.Wait methods. There is also a collective variant
       provided by the Win.Fence method.  Passive target synchronization is more lenient: only the
       origin process calls the Win.Lock and Win.Unlock methods. Locks are used to protect remote
       accesses to the locked remote window and to protect local load/store accesses to a locked
       local window.
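
       To complement the description above, a minimal sketch using the collective Win.Fence
       synchronization (at least two processes are assumed; array contents are arbitrary):

          from mpi4py import MPI
          import numpy

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()

          # every process exposes a small buffer through a window
          mem = numpy.zeros(4, dtype='i')
          win = MPI.Win.Create(mem, comm=comm)

          win.Fence()                                  # open an access epoch
          if rank == 0:
              data = numpy.arange(4, dtype='i')
              win.Put([data, MPI.INT], target_rank=1)  # write into rank 1's window
          win.Fence()                                  # close the epoch; transfers complete

          if rank == 1:
              assert (mem == numpy.arange(4)).all()
          win.Free()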

   Parallel Input/Output
       The  POSIX  standard  provides  a  model  of  a  widely portable file system. However, the
       optimization needed for  parallel  input/output  cannot  be  achieved  with  this  generic
       interface.  In  order  to  ensure  efficiency  and  scalability,  the  underlying parallel
       input/output system must provide a high-level interface supporting  partitioning  of  file
       data  among  processes  and a collective interface supporting complete transfers of global
       data structures between process memories and files. Additionally, further efficiencies can
       be gained via support for asynchronous input/output, strided accesses to data, and control
       over physical file layout on storage devices. This scenario motivated the inclusion in the
       MPI-2  standard  of  a  custom  interface  in  order  to  support more elaborated parallel
       input/output operations.

       The MPI specification for parallel input/output revolves around the use of objects called
       files.  As  defined  by MPI, files are not just contiguous byte streams. Instead, they are
       regarded as ordered collections of typed data items. MPI  supports  sequential  or  random
       access to any integral set of these items. Furthermore, files are opened collectively by a
       group of processes.

       The common patterns for accessing a shared file (broadcast, scatter, gather, reduction)
       are expressed by using user-defined datatypes.  Compared to the communication patterns of
       point-to-point and collective communications, this approach has  the  advantage  of  added
       flexibility  and  expressiveness.  Data access operations (read and write) are defined for
       different kinds of positioning (using explicit  offsets,  individual  file  pointers,  and
       shared  file  pointers),  coordination  (non-collective  and  collective), and synchronism
       (blocking, nonblocking, and split collective with begin/end phases).

       In MPI for Python, all MPI input/output operations are performed through instances of  the
       File  class.  File  handles  are obtained by calling the File.Open method at all processes
       within a communicator and providing a file name and the intended access mode.  After  use,
       they must be closed by calling the File.Close method.  Files can even be deleted by
       calling the File.Delete method.

       After creation, files are typically associated with a per-process view. The  view  defines
       the  current  set  of  data  visible and accessible from an open file as an ordered set of
       elementary datatypes. This data layout can be set and queried with the  File.Set_view  and
       File.Get_view methods respectively.

       Actual input/output operations are achieved by many methods combining read and write calls
       with different behavior regarding positioning, coordination, and synchronism. Summing  up,
       MPI  for  Python  provides  the  thirty  (30) methods defined in MPI-2 for reading from or
       writing to files using explicit offsets  or  file  pointers  (individual  or  shared),  in
       blocking or nonblocking and collective or noncollective versions.
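
       For example, reading back data previously written with explicit offsets might look like the
       following sketch (the file name and block size are illustrative):

          from mpi4py import MPI
          import numpy as np

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()

          # open the file collectively and read this process's block
          fh = MPI.File.Open(comm, "./datafile.contig", MPI.MODE_RDONLY)
          buf = np.empty(10, dtype=np.intc)
          offset = rank * buf.nbytes
          fh.Read_at_all(offset, buf)   # collective read at an explicit offset
          fh.Close()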

   Environmental Management
   Initialization and Exit
       Module  functions  Init  or  Init_thread  and  Finalize  provide  MPI  initialization  and
       finalization respectively. Module functions Is_initialized and  Is_finalized  provide  the
       respective tests for initialization and finalization.

       NOTE:
          MPI_Init()  or MPI_Init_thread() is actually called when you import the MPI module from
          the mpi4py package, but only if MPI is not already initialized. In such a case, calling
          Init  or  Init_thread  from Python is expected to generate an MPI error, and in turn an
          exception will be raised.

       NOTE:
          MPI_Finalize() is registered (by using the Python C/API function Py_AtExit()) to be
          automatically called when Python processes exit, but only if mpi4py actually
          initialized MPI. Therefore, there is no need to call Finalize from Python to ensure MPI
          finalization.
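
       For example, the current state of the MPI environment can be queried as follows
       (Query_thread is not mentioned above, but it is part of the standard interface):

          from mpi4py import MPI

          # importing the MPI module already initialized MPI (see the notes above)
          assert MPI.Is_initialized()
          assert not MPI.Is_finalized()

          # level of thread support actually provided by the MPI library
          provided = MPI.Query_thread()
          assert provided in (MPI.THREAD_SINGLE, MPI.THREAD_FUNNELED,
                              MPI.THREAD_SERIALIZED, MPI.THREAD_MULTIPLE)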

   Implementation Information
       • The  MPI  version number can be retrieved from module function Get_version. It returns a
         two-integer tuple (version, subversion).

       • The Get_processor_name function can be used to access the processor name.

       • The values of predefined attributes attached to the world communicator can  be  obtained
         by calling the Comm.Get_attr method within the COMM_WORLD instance.
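
       A short sketch combining these queries (MPI.TAG_UB is one of the predefined attribute keys):

          from mpi4py import MPI

          version, subversion = MPI.Get_version()   # MPI standard version, e.g. (3, 1)
          name = MPI.Get_processor_name()

          # predefined attribute: upper bound for message tags
          tag_ub = MPI.COMM_WORLD.Get_attr(MPI.TAG_UB)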

   Timers
       MPI timer functionalities are available through the Wtime and Wtick functions.
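
       A minimal usage sketch:

          from mpi4py import MPI

          t0 = MPI.Wtime()            # wall-clock time, in seconds
          # ... some work ...
          elapsed = MPI.Wtime() - t0
          resolution = MPI.Wtick()    # timer resolution, in seconds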

   Error Handling
       In order to facilitate handle sharing with other Python modules interfacing MPI-based
       parallel libraries, the predefined MPI error handlers ERRORS_RETURN  and  ERRORS_ARE_FATAL
       can  be assigned to and retrieved from communicators using methods Comm.Set_errhandler and
       Comm.Get_errhandler, and similarly for windows and files.

       When the predefined error handler ERRORS_RETURN is set, errors  returned  from  MPI  calls
       within  Python  code  will  raise an instance of the exception class Exception, which is a
       subclass of the standard Python exception RuntimeError.
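
       A minimal sketch of setting the error handler and catching the resulting exception (the
       out-of-range destination rank is used only to trigger an error):

          from mpi4py import MPI
          import numpy

          comm = MPI.COMM_WORLD
          comm.Set_errhandler(MPI.ERRORS_RETURN)   # already the mpi4py default, see below

          buf = numpy.zeros(1, dtype='i')
          try:
              # ranks range from 0 to size-1, so 'size' is an invalid destination;
              # with ERRORS_RETURN set, the failure is raised as an MPI exception
              comm.Send([buf, MPI.INT], dest=comm.Get_size(), tag=0)
          except MPI.Exception as exc:
              print(exc.Get_error_class(), exc.Get_error_string())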

       NOTE:
          After import, mpi4py overrides the default MPI rules  governing  inheritance  of  error
          handlers.  The  ERRORS_RETURN  error  handler  is  set  in the predefined COMM_SELF and
          COMM_WORLD communicators, as well as any  new  Comm,  Win,  or  File  instance  created
          through  mpi4py.  If  you  ever  pass such handles to C/C++/Fortran library code, it is
          recommended to set the ERRORS_ARE_FATAL error handler on them to ensure MPI  errors  do
          not pass silently.

       WARNING:
          Importing with from mpi4py.MPI import * will cause a name clash with the standard
          Python Exception base class.

TUTORIAL

       WARNING:
          Under construction. Contributions very welcome!

       TIP:
          Rolf Rabenseifner at HLRS developed a comprehensive MPI-3.1/4.0 course with slides  and
          a  large  set  of  exercises including solutions. This material is available online for
          self-study. The  slides  and  exercises  show  the  C,  Fortran,  and  Python  (mpi4py)
          interfaces.  For  performance  reasons,  most  Python  exercises  use  NumPy arrays and
          communication routines involving buffer-like objects.

       TIP:
          Victor Eijkhout at  TACC  authored  the  book  Parallel  Programming  for  Science  and
          Engineering.   This  book is available online in PDF and HTML formats.  The book covers
          parallel programming with MPI and OpenMP in C/C++ and Fortran, and MPI in Python  using
          mpi4py.

       MPI for Python supports convenient, pickle-based communication of generic Python objects as
       well as fast, near C-speed, direct array data  communication  of  buffer-provider  objects
       (e.g., NumPy arrays).

       • Communication of generic Python objects

         You have to use methods with all-lowercase names, like Comm.send, Comm.recv, Comm.bcast,
         Comm.scatter, Comm.gather.  An object to be sent is passed as a parameter to the
         communication call, and the received object is simply the return value.

         The  Comm.isend  and  Comm.irecv  methods  return Request instances; completion of these
         methods can be managed using the Request.test and Request.wait methods.

         The Comm.recv and Comm.irecv  methods  may  be  passed  a  buffer  object  that  can  be
         repeatedly  used  to  receive  messages avoiding internal memory allocation. This buffer
         must be sufficiently large to accommodate the transmitted messages;  hence,  any  buffer
         passed  to  Comm.recv  or  Comm.irecv  must  be  at  least  as  long as the pickled data
         transmitted to the receiver.

         Collective calls like Comm.scatter, Comm.gather, Comm.allgather, Comm.alltoall expect a
         single value or a sequence of Comm.size elements at the root or at all processes,
         depending on the call. They return a single value, a list of Comm.size elements, or None.

         NOTE:
            MPI for Python uses the highest protocol version available in the Python runtime (see
            the  HIGHEST_PROTOCOL  constant  in  the pickle module).  The default protocol can be
            changed at import time by setting the MPI4PY_PICKLE_PROTOCOL environment variable, or
            at  runtime  by  assigning  a different value to the PROTOCOL attribute of the pickle
            object within the MPI module.

       • Communication of buffer-like objects

         You have to use method  names  starting  with  an  upper-case  letter,  like  Comm.Send,
         Comm.Recv, Comm.Bcast, Comm.Scatter, Comm.Gather.

         In  general,  buffer  arguments  to  these calls must be explicitly specified by using a
         2/3-list/tuple like [data, MPI.DOUBLE], or [data, count,  MPI.DOUBLE]  (the  former  one
         uses the byte-size of data and the extent of the MPI datatype to define count).

         For vector collective communication operations like Comm.Scatterv and Comm.Gatherv,
         buffer arguments are specified as [data, count, displ, datatype], where count and  displ
         are sequences of integral values.

         Automatic MPI datatype discovery for NumPy/GPU arrays and PEP-3118 buffers is supported,
         but limited to basic C  types  (all  C/C99-native  signed/unsigned  integral  types  and
         single/double  precision  real/complex  floating  types)  and  availability  of matching
         datatypes in the underlying MPI implementation. In this case, the buffer-provider object
         can be passed directly as a buffer argument; the count and MPI datatype will be
         inferred.

         If mpi4py is built against a GPU-aware MPI implementation, GPU arrays can be  passed  to
         upper-case  methods  as  long  as  they have either the __dlpack__ and __dlpack_device__
         methods or the __cuda_array_interface__ attribute that are compliant with the respective
         standard  specifications.  Moreover,  only C-contiguous or Fortran-contiguous GPU arrays
         are supported. It is important to note that GPU buffers must be fully ready  before  any
         MPI  routines operate on them to avoid race conditions. This can be ensured by using the
         synchronization API  of  your  array  library.  mpi4py  does  not  have  access  to  any
         GPU-specific  functionality  and  thus  cannot  perform this operation automatically for
         users.

   Running Python scripts with MPI
       Most MPI programs can be run  with  the  command  mpiexec.  In  practice,  running  Python
       programs looks like:

          $ mpiexec -n 4 python script.py

       to run the program with 4 processes.

   Point-to-Point Communication
       • Python objects (pickle under the hood):

            from mpi4py import MPI

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            if rank == 0:
                data = {'a': 7, 'b': 3.14}
                comm.send(data, dest=1, tag=11)
            elif rank == 1:
                data = comm.recv(source=0, tag=11)

       • Python objects with non-blocking communication:

            from mpi4py import MPI

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            if rank == 0:
                data = {'a': 7, 'b': 3.14}
                req = comm.isend(data, dest=1, tag=11)
                req.wait()
            elif rank == 1:
                req = comm.irecv(source=0, tag=11)
                data = req.wait()

       • NumPy arrays (the fast way!):

            from mpi4py import MPI
            import numpy

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            # passing MPI datatypes explicitly
            if rank == 0:
                data = numpy.arange(1000, dtype='i')
                comm.Send([data, MPI.INT], dest=1, tag=77)
            elif rank == 1:
                data = numpy.empty(1000, dtype='i')
                comm.Recv([data, MPI.INT], source=0, tag=77)

            # automatic MPI datatype discovery
            if rank == 0:
                data = numpy.arange(100, dtype=numpy.float64)
                comm.Send(data, dest=1, tag=13)
            elif rank == 1:
                data = numpy.empty(100, dtype=numpy.float64)
                comm.Recv(data, source=0, tag=13)

   Collective Communication
       • Broadcasting a Python dictionary:

            from mpi4py import MPI

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            if rank == 0:
                data = {'key1' : [7, 2.72, 2+3j],
                        'key2' : ( 'abc', 'xyz')}
            else:
                data = None
            data = comm.bcast(data, root=0)

       • Scattering Python objects:

            from mpi4py import MPI

            comm = MPI.COMM_WORLD
            size = comm.Get_size()
            rank = comm.Get_rank()

            if rank == 0:
                data = [(i+1)**2 for i in range(size)]
            else:
                data = None
            data = comm.scatter(data, root=0)
            assert data == (rank+1)**2

       • Gathering Python objects:

            from mpi4py import MPI

            comm = MPI.COMM_WORLD
            size = comm.Get_size()
            rank = comm.Get_rank()

            data = (rank+1)**2
            data = comm.gather(data, root=0)
            if rank == 0:
                for i in range(size):
                    assert data[i] == (i+1)**2
            else:
                assert data is None

       • Broadcasting a NumPy array:

            from mpi4py import MPI
            import numpy as np

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            if rank == 0:
                data = np.arange(100, dtype='i')
            else:
                data = np.empty(100, dtype='i')
            comm.Bcast(data, root=0)
            for i in range(100):
                assert data[i] == i

       • Scattering NumPy arrays:

            from mpi4py import MPI
            import numpy as np

            comm = MPI.COMM_WORLD
            size = comm.Get_size()
            rank = comm.Get_rank()

            sendbuf = None
            if rank == 0:
                sendbuf = np.empty([size, 100], dtype='i')
                sendbuf.T[:,:] = range(size)
            recvbuf = np.empty(100, dtype='i')
            comm.Scatter(sendbuf, recvbuf, root=0)
            assert np.allclose(recvbuf, rank)

       • Gathering NumPy arrays:

            from mpi4py import MPI
            import numpy as np

            comm = MPI.COMM_WORLD
            size = comm.Get_size()
            rank = comm.Get_rank()

            sendbuf = np.zeros(100, dtype='i') + rank
            recvbuf = None
            if rank == 0:
                recvbuf = np.empty([size, 100], dtype='i')
            comm.Gather(sendbuf, recvbuf, root=0)
            if rank == 0:
                for i in range(size):
                    assert np.allclose(recvbuf[i,:], i)

       • Parallel matrix-vector product:

            from mpi4py import MPI
            import numpy

            def matvec(comm, A, x):
                m = A.shape[0] # local rows
                p = comm.Get_size()
                xg = numpy.zeros(m*p, dtype='d')
                comm.Allgather([x,  MPI.DOUBLE],
                               [xg, MPI.DOUBLE])
                y = numpy.dot(A, xg)
                return y

   MPI-IO
       • Collective I/O with NumPy arrays:

            from mpi4py import MPI
            import numpy as np

            amode = MPI.MODE_WRONLY|MPI.MODE_CREATE
            comm = MPI.COMM_WORLD
            fh = MPI.File.Open(comm, "./datafile.contig", amode)

            buffer = np.empty(10, dtype=np.intc)
            buffer[:] = comm.Get_rank()

            offset = comm.Get_rank()*buffer.nbytes
            fh.Write_at_all(offset, buffer)

            fh.Close()

       • Non-contiguous Collective I/O with NumPy arrays and datatypes:

            from mpi4py import MPI
            import numpy as np

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()
            size = comm.Get_size()

            amode = MPI.MODE_WRONLY|MPI.MODE_CREATE
            fh = MPI.File.Open(comm, "./datafile.noncontig", amode)

            item_count = 10

            buffer = np.empty(item_count, dtype='i')
            buffer[:] = rank

            filetype = MPI.INT.Create_vector(item_count, 1, size)
            filetype.Commit()

            displacement = MPI.INT.Get_size()*rank
            fh.Set_view(displacement, filetype=filetype)

            fh.Write_all(buffer)
            filetype.Free()
            fh.Close()

   Dynamic Process Management
       • Compute Pi - Master (or parent, or client) side:

            #!/usr/bin/env python
            from mpi4py import MPI
            import numpy
            import sys

            comm = MPI.COMM_SELF.Spawn(sys.executable,
                                       args=['cpi.py'],
                                       maxprocs=5)

            N = numpy.array(100, 'i')
            comm.Bcast([N, MPI.INT], root=MPI.ROOT)
            PI = numpy.array(0.0, 'd')
            comm.Reduce(None, [PI, MPI.DOUBLE],
                        op=MPI.SUM, root=MPI.ROOT)
            print(PI)

            comm.Disconnect()

       • Compute Pi - Worker (or child, or server) side:

            #!/usr/bin/env python
            from mpi4py import MPI
            import numpy

            comm = MPI.Comm.Get_parent()
            size = comm.Get_size()
            rank = comm.Get_rank()

            N = numpy.array(0, dtype='i')
            comm.Bcast([N, MPI.INT], root=0)
            h = 1.0 / N; s = 0.0
            for i in range(rank, N, size):
                x = h * (i + 0.5)
                s += 4.0 / (1.0 + x**2)
            PI = numpy.array(s * h, dtype='d')
            comm.Reduce([PI, MPI.DOUBLE], None,
                        op=MPI.SUM, root=0)

            comm.Disconnect()

   CUDA-aware MPI + Python GPU arrays
       • Reduce-to-all CuPy arrays:

            from mpi4py import MPI
            import cupy as cp

            comm = MPI.COMM_WORLD
            size = comm.Get_size()
            rank = comm.Get_rank()

            sendbuf = cp.arange(10, dtype='i')
            recvbuf = cp.empty_like(sendbuf)
            assert hasattr(sendbuf, '__cuda_array_interface__')
            assert hasattr(recvbuf, '__cuda_array_interface__')
            cp.cuda.get_current_stream().synchronize()
            comm.Allreduce(sendbuf, recvbuf)

            assert cp.allclose(recvbuf, sendbuf*size)

   One-Sided Communications
       • Read from (write to) the entire RMA window:

            import numpy as np
            from mpi4py import MPI
            from mpi4py.util import dtlib

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            datatype = MPI.FLOAT
            np_dtype = dtlib.to_numpy_dtype(datatype)
            itemsize = datatype.Get_size()

            N = 10
            win_size = N * itemsize if rank == 0 else 0
            win = MPI.Win.Allocate(win_size, comm=comm)

            buf = np.empty(N, dtype=np_dtype)
            if rank == 0:
                buf.fill(42)
                win.Lock(rank=0)
                win.Put(buf, target_rank=0)
                win.Unlock(rank=0)
                comm.Barrier()
            else:
                comm.Barrier()
                win.Lock(rank=0)
                win.Get(buf, target_rank=0)
                win.Unlock(rank=0)
                assert np.all(buf == 42)

       • Accessing  a  part  of  the  RMA  window  using the target argument, which is defined as
         (offset, count, datatype):

            import numpy as np
            from mpi4py import MPI
            from mpi4py.util import dtlib

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            datatype = MPI.FLOAT
            np_dtype = dtlib.to_numpy_dtype(datatype)
            itemsize = datatype.Get_size()

            N = comm.Get_size() + 1
            win_size = N * itemsize if rank == 0 else 0
            win = MPI.Win.Allocate(
                size=win_size,
                disp_unit=itemsize,
                comm=comm,
            )
            if rank == 0:
                mem = np.frombuffer(win, dtype=np_dtype)
                mem[:] = np.arange(len(mem), dtype=np_dtype)
            comm.Barrier()

            buf = np.zeros(3, dtype=np_dtype)
            target = (rank, 2, datatype)
            win.Lock(rank=0)
            win.Get(buf, target_rank=0, target=target)
            win.Unlock(rank=0)
            assert np.all(buf == [rank, rank+1, 0])

   Wrapping with SWIG
       • C source:

            /* file: helloworld.c */
            #include <stdio.h>   /* for printf */
            #include <mpi.h>
            void sayhello(MPI_Comm comm)
            {
              int size, rank;
              MPI_Comm_size(comm, &size);
              MPI_Comm_rank(comm, &rank);
              printf("Hello, World! "
                     "I am process %d of %d.\n",
                     rank, size);
            }

       • SWIG interface file:

            // file: helloworld.i
            %module helloworld
            %{
            #include <mpi.h>
            #include "helloworld.c"
            %}

            %include mpi4py/mpi4py.i
            %mpi4py_typemap(Comm, MPI_Comm);
            void sayhello(MPI_Comm comm);

       • Try it in the Python prompt:

            >>> from mpi4py import MPI
            >>> import helloworld
            >>> helloworld.sayhello(MPI.COMM_WORLD)
            Hello, World! I am process 0 of 1.

   Wrapping with F2Py
       • Fortran 90 source:

            ! file: helloworld.f90
            subroutine sayhello(comm)
              use mpi
              implicit none
              integer :: comm, rank, size, ierr
              call MPI_Comm_size(comm, size, ierr)
              call MPI_Comm_rank(comm, rank, ierr)
              print *, 'Hello, World! I am process ',rank,' of ',size,'.'
            end subroutine sayhello

       • Compiling example using f2py

            $ f2py -c --f90exec=mpif90 helloworld.f90 -m helloworld

       • Try it in the Python prompt:

            >>> from mpi4py import MPI
            >>> import helloworld
            >>> fcomm = MPI.COMM_WORLD.py2f()
            >>> helloworld.sayhello(fcomm)
            Hello, World! I am process 0 of 1.

MPI4PY

   Runtime configuration options
       mpi4py.rc
              This object has attributes  exposing  runtime  configuration  options  that  become
              effective at import time of the MPI module.

       Attributes Summary

                           ┌─────────────┬──────────────────────────────────┐
                           │initialize   │ Automatic  MPI initialization at │
                           │             │ import                           │
                           ├─────────────┼──────────────────────────────────┤
                           │threads      │ Request   initialization    with │
                           │             │ thread support                   │
                           ├─────────────┼──────────────────────────────────┤
                           │thread_level │ Level   of   thread  support  to │
                           │             │ request                          │
                           ├─────────────┼──────────────────────────────────┤
                           │finalize     │ Automatic  MPI  finalization  at │
                           │             │ exit                             │
                           ├─────────────┼──────────────────────────────────┤
                           │fast_reduce  │ Use  tree-based  reductions  for │
                           │             │ objects                          │
                           ├─────────────┼──────────────────────────────────┤
                           │recv_mprobe  │ Use matched  probes  to  receive │
                           │             │ objects                          │
                           ├─────────────┼──────────────────────────────────┤
                           │errors       │ Error handling policy            │
                           └─────────────┴──────────────────────────────────┘

       Attributes Documentation

       mpi4py.rc.initialize
              Automatic MPI initialization at import.

              Type   bool

              Default
                     True

              SEE ALSO:
                 MPI4PY_RC_INITIALIZE

       mpi4py.rc.threads
              Request initialization with thread support.

              Type   bool

              Default
                     True

              SEE ALSO:
                 MPI4PY_RC_THREADS

       mpi4py.rc.thread_level
              Level of thread support to request.

              Type   str

              Default
                     "multiple"

              Choices
                     "multiple", "serialized", "funneled", "single"

              SEE ALSO:
                 MPI4PY_RC_THREAD_LEVEL

       mpi4py.rc.finalize
              Automatic MPI finalization at exit.

              Type   None or bool

              Default
                     None

              SEE ALSO:
                 MPI4PY_RC_FINALIZE

       mpi4py.rc.fast_reduce
              Use tree-based reductions for objects.

              Type   bool

              Default
                     True

              SEE ALSO:
                 MPI4PY_RC_FAST_REDUCE

       mpi4py.rc.recv_mprobe
              Use matched probes to receive objects.

              Type   bool

              Default
                     True

              SEE ALSO:
                 MPI4PY_RC_RECV_MPROBE

       mpi4py.rc.errors
              Error handling policy.

              Type   str

              Default
                     "exception"

              Choices
                     "exception", "default", "fatal"

              SEE ALSO:
                 MPI4PY_RC_ERRORS

       Example

       MPI  for  Python  features  automatic initialization and finalization of the MPI execution
       environment. By using the mpi4py.rc object, MPI initialization  and  finalization  can  be
       handled programmatically:

          import mpi4py
          mpi4py.rc.initialize = False  # do not initialize MPI automatically
          mpi4py.rc.finalize = False    # do not finalize MPI automatically

          from mpi4py import MPI # import the 'MPI' module

          MPI.Init()      # manual initialization of the MPI environment
          ...             # your finest code here ...
          MPI.Finalize()  # manual finalization of the MPI environment

   Environment variables
       The following environment variables override the corresponding attributes of the mpi4py.rc
       and MPI.pickle objects at import time of the MPI module.

       NOTE:
          For variables of boolean type, accepted values are 0 and 1 (interpreted  as  False  and
          True, respectively), and strings specifying a YAML boolean value (case-insensitive).

       MPI4PY_RC_INITIALIZE

              Type   bool

              Default
                     True

              Whether to automatically initialize MPI at import time of the mpi4py.MPI module.

              SEE ALSO:
                 mpi4py.rc.initialize

              New in version 3.1.0.

       MPI4PY_RC_FINALIZE

              Type   None | bool

              Default
                     None

              Choices
                     None, True, False

              Whether to automatically finalize MPI at exit time of the Python process.

              SEE ALSO:
                 mpi4py.rc.finalize

              New in version 3.1.0.

       MPI4PY_RC_THREADS

              Type   bool

              Default
                     True

              Whether to initialize MPI with thread support.

              SEE ALSO:
                 mpi4py.rc.threads

              New in version 3.1.0.

       MPI4PY_RC_THREAD_LEVEL

              Default
                     "multiple"

              Choices
                     "single", "funneled", "serialized", "multiple"

              The level of required thread support.

              SEE ALSO:
                 mpi4py.rc.thread_level

              New in version 3.1.0.

       MPI4PY_RC_FAST_REDUCE

              Type   bool

              Default
                     True

              Whether to use tree-based reductions for objects.

              SEE ALSO:
                 mpi4py.rc.fast_reduce

              New in version 3.1.0.

       MPI4PY_RC_RECV_MPROBE

              Type   bool

              Default
                     True

              Whether to use matched probes to receive objects.

              SEE ALSO:
                 mpi4py.rc.recv_mprobe

       MPI4PY_RC_ERRORS

              Default
                     "exception"

              Choices
                     "exception", "default", "fatal"

              Controls default MPI error handling policy.

              SEE ALSO:
                 mpi4py.rc.errors

              New in version 3.1.0.

       MPI4PY_PICKLE_PROTOCOL

              Type   int

              Default
                     pickle.HIGHEST_PROTOCOL

              Controls the default pickle protocol to use when communicating Python objects.

              SEE ALSO:
                 PROTOCOL attribute of the MPI.pickle object within the MPI module.

              New in version 3.1.0.
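       A minimal sketch of inspecting and overriding the protocol from Python (this  assumes  the
       PROTOCOL attribute of MPI.pickle is writable, as in recent mpi4py releases):

          import pickle
          from mpi4py import MPI

          print(MPI.pickle.PROTOCOL)                     # pickle.HIGHEST_PROTOCOL by default
          MPI.pickle.PROTOCOL = pickle.DEFAULT_PROTOCOL  # assumed-writable attribute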

       MPI4PY_PICKLE_THRESHOLD

              Type   int

              Default
                     262144

              Controls   the  default  buffer  size  threshold  for  switching  from  in-band  to
              out-of-band buffer handling when using pickle protocol version 5 or higher.

              SEE ALSO:
                 Module mpi4py.util.pkl5.

              New in version 3.1.2.

   Miscellaneous functions
       mpi4py.profile()

       mpi4py.get_config()

       mpi4py.get_include()
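
       The last two helpers are mainly useful when building extension modules against mpi4py.  A
       brief sketch of typical usage (the printed values depend on the local build):

          import mpi4py

          print(mpi4py.get_include())  # directory with the mpi4py C/Cython headers
          print(mpi4py.get_config())   # dict with build-time configuration (compiler wrappers, etc.)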

MPI4PY.MPI

   Classes
       Ancillary

                                             ┌─────────┬───┐
                                             │Datatype │   │
                                             ├─────────┼───┤
                                             │Status   │   │
                                             ├─────────┼───┤
                                             │Request  │   │
                                             ├─────────┼───┤
                                             │Prequest │   │
                                             ├─────────┼───┤
                                             │Grequest │   │
                                             ├─────────┼───┤
                                             │Op       │   │
                                             ├─────────┼───┤
                                             │Group    │   │
                                             ├─────────┼───┤
                                             │Info     │   │
                                             └─────────┴───┘

       Communication

                                          ┌──────────────┬───┐
                                          │Comm          │   │
                                          ├──────────────┼───┤
                                          │Intracomm     │   │
                                          ├──────────────┼───┤
                                          │Topocomm      │   │
                                          ├──────────────┼───┤
                                          │Cartcomm      │   │
                                          ├──────────────┼───┤
                                          │Graphcomm     │   │
                                          ├──────────────┼───┤
                                          │Distgraphcomm │   │
                                          ├──────────────┼───┤
                                          │Intercomm     │   │
                                          ├──────────────┼───┤
                                          │Message       │   │
                                          └──────────────┴───┘

       One-sided operations

                                               ┌────┬───┐
                                               │Win │   │
                                               └────┴───┘

       Input/Output

                                               ┌─────┬───┐
                                               │File │   │
                                               └─────┴───┘

       Error handling

                                            ┌───────────┬───┐
                                            │Errhandler │   │
                                            ├───────────┼───┤
                                            │Exception  │   │
                                            └───────────┴───┘

       Auxiliary

                                              ┌───────┬───┐
                                              │Pickle │   │
                                              ├───────┼───┤
                                              │memory │   │
                                              └───────┴───┘

   Functions
       Version inquiry

                                       ┌────────────────────┬───┐
                                       │Get_version         │   │
                                       ├────────────────────┼───┤
                                       │Get_library_version │   │
                                       └────────────────────┴───┘

       Initialization and finalization

                                          ┌───────────────┬───┐
                                          │Init           │   │
                                          ├───────────────┼───┤
                                          │Init_thread    │   │
                                          ├───────────────┼───┤
                                          │Finalize       │   │
                                          ├───────────────┼───┤
                                          │Is_initialized │   │
                                          ├───────────────┼───┤
                                          │Is_finalized   │   │
                                          ├───────────────┼───┤
                                          │Query_thread   │   │
                                          ├───────────────┼───┤
                                          │Is_thread_main │   │
                                          └───────────────┴───┘

       Memory allocation

                                            ┌──────────┬───┐
                                            │Alloc_mem │   │
                                            ├──────────┼───┤
                                            │Free_mem  │   │
                                            └──────────┴───┘

       Address manipulation

                                           ┌────────────┬───┐
                                           │Get_address │   │
                                           ├────────────┼───┤
                                           │Aint_add    │   │
                                           ├────────────┼───┤
                                           │Aint_diff   │   │
                                           └────────────┴───┘

       Timer

                                              ┌──────┬───┐
                                              │Wtick │   │
                                              ├──────┼───┤
                                              │Wtime │   │
                                              └──────┴───┘

       Error handling

                                         ┌─────────────────┬───┐
                                         │Get_error_class  │   │
                                         ├─────────────────┼───┤
                                         │Get_error_string │   │
                                         ├─────────────────┼───┤
                                         │Add_error_class  │   │
                                         ├─────────────────┼───┤
                                         │Add_error_code   │   │
                                         ├─────────────────┼───┤
                                         │Add_error_string │   │
                                         └─────────────────┴───┘

       Dynamic process management

                                          ┌───────────────┬───┐
                                          │Open_port      │   │
                                          ├───────────────┼───┤
                                          │Close_port     │   │
                                           ├───────────────┼───┤
                                          │Publish_name   │   │
                                          ├───────────────┼───┤
                                          │Unpublish_name │   │
                                          ├───────────────┼───┤
                                          │Lookup_name    │   │
                                          └───────────────┴───┘

       Miscellanea

                                        ┌───────────────────┬───┐
                                        │Attach_buffer      │   │
                                        ├───────────────────┼───┤
                                        │Detach_buffer      │   │
                                        ├───────────────────┼───┤
                                        │Compute_dims       │   │
                                        ├───────────────────┼───┤
                                        │Get_processor_name │   │
                                        ├───────────────────┼───┤
                                        │Register_datarep   │   │
                                        ├───────────────────┼───┤
                                        │Pcontrol           │   │
                                        └───────────────────┴───┘

       Utilities

                                            ┌───────────┬───┐
                                            │get_vendor │   │
                                            └───────────┴───┘

   Attributes
                                    ┌───────────────────────────┬───┐
                                    │UNDEFINED                  │   │
                                    ├───────────────────────────┼───┤
                                    │ANY_SOURCE                 │   │
                                    ├───────────────────────────┼───┤
                                    │ANY_TAG                    │   │
                                    ├───────────────────────────┼───┤
                                    │PROC_NULL                  │   │
                                    ├───────────────────────────┼───┤
                                    │ROOT                       │   │
                                    ├───────────────────────────┼───┤
                                    │BOTTOM                     │   │
                                    ├───────────────────────────┼───┤
                                    │IN_PLACE                   │   │
                                    ├───────────────────────────┼───┤
                                    │KEYVAL_INVALID             │   │
                                    ├───────────────────────────┼───┤
                                    │TAG_UB                     │   │
                                    ├───────────────────────────┼───┤
                                    │HOST                       │   │
                                    ├───────────────────────────┼───┤
                                    │IO                         │   │
                                    ├───────────────────────────┼───┤
                                    │WTIME_IS_GLOBAL            │   │
                                    ├───────────────────────────┼───┤
                                    │UNIVERSE_SIZE              │   │
                                    ├───────────────────────────┼───┤
                                    │APPNUM                     │   │
                                    ├───────────────────────────┼───┤
                                    │LASTUSEDCODE               │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_BASE                   │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_SIZE                   │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_DISP_UNIT              │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_CREATE_FLAVOR          │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_FLAVOR                 │   │
                                     ├───────────────────────────┼───┤
                                    │WIN_MODEL                  │   │
                                    ├───────────────────────────┼───┤
                                    │SUCCESS                    │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_LASTCODE               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_COMM                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_GROUP                  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_TYPE                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_REQUEST                │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_OP                     │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_BUFFER                 │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_COUNT                  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_TAG                    │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RANK                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_ROOT                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_TRUNCATE               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_IN_STATUS              │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_PENDING                │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_TOPOLOGY               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_DIMS                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_ARG                    │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_OTHER                  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_UNKNOWN                │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_INTERN                 │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_INFO                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_FILE                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_WIN                    │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_KEYVAL                 │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_INFO_KEY               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_INFO_VALUE             │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_INFO_NOKEY             │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_ACCESS                 │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_AMODE                  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_BAD_FILE               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_FILE_EXISTS            │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_FILE_IN_USE            │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_NO_SPACE               │   │
                                     ├───────────────────────────┼───┤
                                    │ERR_NO_SUCH_FILE           │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_IO                     │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_READ_ONLY              │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_CONVERSION             │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_DUP_DATAREP            │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_UNSUPPORTED_DATAREP    │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_UNSUPPORTED_OPERATION  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_NAME                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_NO_MEM                 │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_NOT_SAME               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_PORT                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_QUOTA                  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_SERVICE                │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_SPAWN                  │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_BASE                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_SIZE                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_DISP                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_ASSERT                 │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_LOCKTYPE               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RMA_CONFLICT           │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RMA_SYNC               │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RMA_RANGE              │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RMA_ATTACH             │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RMA_SHARED             │   │
                                    ├───────────────────────────┼───┤
                                    │ERR_RMA_FLAVOR             │   │
                                    ├───────────────────────────┼───┤
                                    │ORDER_C                    │   │
                                    ├───────────────────────────┼───┤
                                    │ORDER_F                    │   │
                                    ├───────────────────────────┼───┤
                                    │ORDER_FORTRAN              │   │
                                    ├───────────────────────────┼───┤
                                    │TYPECLASS_INTEGER          │   │
                                    ├───────────────────────────┼───┤
                                    │TYPECLASS_REAL             │   │
                                    ├───────────────────────────┼───┤
                                    │TYPECLASS_COMPLEX          │   │
                                    ├───────────────────────────┼───┤
                                    │DISTRIBUTE_NONE            │   │
                                    ├───────────────────────────┼───┤
                                    │DISTRIBUTE_BLOCK           │   │
                                    ├───────────────────────────┼───┤
                                    │DISTRIBUTE_CYCLIC          │   │
                                    ├───────────────────────────┼───┤
                                    │DISTRIBUTE_DFLT_DARG       │   │
                                     ├───────────────────────────┼───┤
                                    │COMBINER_NAMED             │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_DUP               │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_CONTIGUOUS        │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_VECTOR            │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_HVECTOR           │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_INDEXED           │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_HINDEXED          │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_INDEXED_BLOCK     │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_HINDEXED_BLOCK    │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_STRUCT            │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_SUBARRAY          │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_DARRAY            │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_RESIZED           │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_F90_REAL          │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_F90_COMPLEX       │   │
                                    ├───────────────────────────┼───┤
                                    │COMBINER_F90_INTEGER       │   │
                                    ├───────────────────────────┼───┤
                                    │IDENT                      │   │
                                    ├───────────────────────────┼───┤
                                    │CONGRUENT                  │   │
                                    ├───────────────────────────┼───┤
                                    │SIMILAR                    │   │
                                    ├───────────────────────────┼───┤
                                    │UNEQUAL                    │   │
                                    ├───────────────────────────┼───┤
                                    │CART                       │   │
                                    ├───────────────────────────┼───┤
                                    │GRAPH                      │   │
                                    ├───────────────────────────┼───┤
                                    │DIST_GRAPH                 │   │
                                    ├───────────────────────────┼───┤
                                    │UNWEIGHTED                 │   │
                                    ├───────────────────────────┼───┤
                                    │WEIGHTS_EMPTY              │   │
                                    ├───────────────────────────┼───┤
                                    │COMM_TYPE_SHARED           │   │
                                    ├───────────────────────────┼───┤
                                    │BSEND_OVERHEAD             │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_FLAVOR_CREATE          │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_FLAVOR_ALLOCATE        │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_FLAVOR_DYNAMIC         │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_FLAVOR_SHARED          │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_SEPARATE               │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_UNIFIED                │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_NOCHECK               │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_NOSTORE               │   │
                                     ├───────────────────────────┼───┤
                                    │MODE_NOPUT                 │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_NOPRECEDE             │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_NOSUCCEED             │   │
                                    ├───────────────────────────┼───┤
                                    │LOCK_EXCLUSIVE             │   │
                                    ├───────────────────────────┼───┤
                                    │LOCK_SHARED                │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_RDONLY                │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_WRONLY                │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_RDWR                  │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_CREATE                │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_EXCL                  │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_DELETE_ON_CLOSE       │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_UNIQUE_OPEN           │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_SEQUENTIAL            │   │
                                    ├───────────────────────────┼───┤
                                    │MODE_APPEND                │   │
                                    ├───────────────────────────┼───┤
                                    │SEEK_SET                   │   │
                                    ├───────────────────────────┼───┤
                                    │SEEK_CUR                   │   │
                                    ├───────────────────────────┼───┤
                                    │SEEK_END                   │   │
                                    ├───────────────────────────┼───┤
                                    │DISPLACEMENT_CURRENT       │   │
                                    ├───────────────────────────┼───┤
                                    │DISP_CUR                   │   │
                                    ├───────────────────────────┼───┤
                                    │THREAD_SINGLE              │   │
                                    ├───────────────────────────┼───┤
                                    │THREAD_FUNNELED            │   │
                                    ├───────────────────────────┼───┤
                                    │THREAD_SERIALIZED          │   │
                                    ├───────────────────────────┼───┤
                                    │THREAD_MULTIPLE            │   │
                                    ├───────────────────────────┼───┤
                                    │VERSION                    │   │
                                    ├───────────────────────────┼───┤
                                    │SUBVERSION                 │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_PROCESSOR_NAME         │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_ERROR_STRING           │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_PORT_NAME              │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_INFO_KEY               │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_INFO_VAL               │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_OBJECT_NAME            │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_DATAREP_STRING         │   │
                                    ├───────────────────────────┼───┤
                                    │MAX_LIBRARY_VERSION_STRING │   │
                                    ├───────────────────────────┼───┤
                                    │DATATYPE_NULL              │   │
                                    ├───────────────────────────┼───┤
                                    │UB                         │   │
                                     ├───────────────────────────┼───┤
                                    │LB                         │   │
                                    ├───────────────────────────┼───┤
                                    │PACKED                     │   │
                                    ├───────────────────────────┼───┤
                                    │BYTE                       │   │
                                    ├───────────────────────────┼───┤
                                    │AINT                       │   │
                                    ├───────────────────────────┼───┤
                                    │OFFSET                     │   │
                                    ├───────────────────────────┼───┤
                                    │COUNT                      │   │
                                    ├───────────────────────────┼───┤
                                    │CHAR                       │   │
                                    ├───────────────────────────┼───┤
                                    │WCHAR                      │   │
                                    ├───────────────────────────┼───┤
                                    │SIGNED_CHAR                │   │
                                    ├───────────────────────────┼───┤
                                    │SHORT                      │   │
                                    ├───────────────────────────┼───┤
                                    │INT                        │   │
                                    ├───────────────────────────┼───┤
                                    │LONG                       │   │
                                    ├───────────────────────────┼───┤
                                    │LONG_LONG                  │   │
                                    ├───────────────────────────┼───┤
                                    │UNSIGNED_CHAR              │   │
                                    ├───────────────────────────┼───┤
                                    │UNSIGNED_SHORT             │   │
                                    ├───────────────────────────┼───┤
                                    │UNSIGNED                   │   │
                                    ├───────────────────────────┼───┤
                                    │UNSIGNED_LONG              │   │
                                    ├───────────────────────────┼───┤
                                    │UNSIGNED_LONG_LONG         │   │
                                    ├───────────────────────────┼───┤
                                    │FLOAT                      │   │
                                    ├───────────────────────────┼───┤
                                    │DOUBLE                     │   │
                                    ├───────────────────────────┼───┤
                                    │LONG_DOUBLE                │   │
                                    ├───────────────────────────┼───┤
                                    │C_BOOL                     │   │
                                    ├───────────────────────────┼───┤
                                    │INT8_T                     │   │
                                    ├───────────────────────────┼───┤
                                    │INT16_T                    │   │
                                    ├───────────────────────────┼───┤
                                    │INT32_T                    │   │
                                    ├───────────────────────────┼───┤
                                    │INT64_T                    │   │
                                    ├───────────────────────────┼───┤
                                    │UINT8_T                    │   │
                                    ├───────────────────────────┼───┤
                                    │UINT16_T                   │   │
                                    ├───────────────────────────┼───┤
                                    │UINT32_T                   │   │
                                    ├───────────────────────────┼───┤
                                    │UINT64_T                   │   │
                                    ├───────────────────────────┼───┤
                                    │C_COMPLEX                  │   │
                                    ├───────────────────────────┼───┤
                                    │C_FLOAT_COMPLEX            │   │
                                    ├───────────────────────────┼───┤
                                    │C_DOUBLE_COMPLEX           │   │
                                    ├───────────────────────────┼───┤
                                    │C_LONG_DOUBLE_COMPLEX      │   │
                                    ├───────────────────────────┼───┤
                                    │CXX_BOOL                   │   │
                                     ├───────────────────────────┼───┤
                                    │CXX_FLOAT_COMPLEX          │   │
                                    ├───────────────────────────┼───┤
                                    │CXX_DOUBLE_COMPLEX         │   │
                                    ├───────────────────────────┼───┤
                                    │CXX_LONG_DOUBLE_COMPLEX    │   │
                                    ├───────────────────────────┼───┤
                                    │SHORT_INT                  │   │
                                    ├───────────────────────────┼───┤
                                    │INT_INT                    │   │
                                    ├───────────────────────────┼───┤
                                    │TWOINT                     │   │
                                    ├───────────────────────────┼───┤
                                    │LONG_INT                   │   │
                                    ├───────────────────────────┼───┤
                                    │FLOAT_INT                  │   │
                                    ├───────────────────────────┼───┤
                                    │DOUBLE_INT                 │   │
                                    ├───────────────────────────┼───┤
                                    │LONG_DOUBLE_INT            │   │
                                    ├───────────────────────────┼───┤
                                    │CHARACTER                  │   │
                                    ├───────────────────────────┼───┤
                                    │LOGICAL                    │   │
                                    ├───────────────────────────┼───┤
                                    │INTEGER                    │   │
                                    ├───────────────────────────┼───┤
                                    │REAL                       │   │
                                    ├───────────────────────────┼───┤
                                    │DOUBLE_PRECISION           │   │
                                    ├───────────────────────────┼───┤
                                    │COMPLEX                    │   │
                                    ├───────────────────────────┼───┤
                                    │DOUBLE_COMPLEX             │   │
                                    ├───────────────────────────┼───┤
                                    │LOGICAL1                   │   │
                                    ├───────────────────────────┼───┤
                                    │LOGICAL2                   │   │
                                    ├───────────────────────────┼───┤
                                    │LOGICAL4                   │   │
                                    ├───────────────────────────┼───┤
                                    │LOGICAL8                   │   │
                                    ├───────────────────────────┼───┤
                                    │INTEGER1                   │   │
                                    ├───────────────────────────┼───┤
                                    │INTEGER2                   │   │
                                    ├───────────────────────────┼───┤
                                    │INTEGER4                   │   │
                                    ├───────────────────────────┼───┤
                                    │INTEGER8                   │   │
                                    ├───────────────────────────┼───┤
                                    │INTEGER16                  │   │
                                    ├───────────────────────────┼───┤
                                    │REAL2                      │   │
                                    ├───────────────────────────┼───┤
                                    │REAL4                      │   │
                                    ├───────────────────────────┼───┤
                                    │REAL8                      │   │
                                    ├───────────────────────────┼───┤
                                    │REAL16                     │   │
                                    ├───────────────────────────┼───┤
                                    │COMPLEX4                   │   │
                                    ├───────────────────────────┼───┤
                                    │COMPLEX8                   │   │
                                    ├───────────────────────────┼───┤
                                    │COMPLEX16                  │   │
                                    ├───────────────────────────┼───┤
                                    │COMPLEX32                  │   │
                                    ├───────────────────────────┼───┤
                                    │UNSIGNED_INT               │   │
                                     ├───────────────────────────┼───┤
                                    │SIGNED_SHORT               │   │
                                    ├───────────────────────────┼───┤
                                    │SIGNED_INT                 │   │
                                    ├───────────────────────────┼───┤
                                    │SIGNED_LONG                │   │
                                    ├───────────────────────────┼───┤
                                    │SIGNED_LONG_LONG           │   │
                                    ├───────────────────────────┼───┤
                                    │BOOL                       │   │
                                    ├───────────────────────────┼───┤
                                    │SINT8_T                    │   │
                                    ├───────────────────────────┼───┤
                                    │SINT16_T                   │   │
                                    ├───────────────────────────┼───┤
                                    │SINT32_T                   │   │
                                    ├───────────────────────────┼───┤
                                    │SINT64_T                   │   │
                                    ├───────────────────────────┼───┤
                                    │F_BOOL                     │   │
                                    ├───────────────────────────┼───┤
                                    │F_INT                      │   │
                                    ├───────────────────────────┼───┤
                                    │F_FLOAT                    │   │
                                    ├───────────────────────────┼───┤
                                    │F_DOUBLE                   │   │
                                    ├───────────────────────────┼───┤
                                    │F_COMPLEX                  │   │
                                    ├───────────────────────────┼───┤
                                    │F_FLOAT_COMPLEX            │   │
                                    ├───────────────────────────┼───┤
                                    │F_DOUBLE_COMPLEX           │   │
                                    ├───────────────────────────┼───┤
                                    │REQUEST_NULL               │   │
                                    ├───────────────────────────┼───┤
                                    │MESSAGE_NULL               │   │
                                    ├───────────────────────────┼───┤
                                    │MESSAGE_NO_PROC            │   │
                                    ├───────────────────────────┼───┤
                                    │OP_NULL                    │   │
                                    ├───────────────────────────┼───┤
                                    │MAX                        │   │
                                    ├───────────────────────────┼───┤
                                    │MIN                        │   │
                                    ├───────────────────────────┼───┤
                                    │SUM                        │   │
                                    ├───────────────────────────┼───┤
                                    │PROD                       │   │
                                    ├───────────────────────────┼───┤
                                    │LAND                       │   │
                                    ├───────────────────────────┼───┤
                                    │BAND                       │   │
                                    ├───────────────────────────┼───┤
                                    │LOR                        │   │
                                    ├───────────────────────────┼───┤
                                    │BOR                        │   │
                                    ├───────────────────────────┼───┤
                                    │LXOR                       │   │
                                    ├───────────────────────────┼───┤
                                    │BXOR                       │   │
                                    ├───────────────────────────┼───┤
                                    │MAXLOC                     │   │
                                    ├───────────────────────────┼───┤
                                    │MINLOC                     │   │
                                    ├───────────────────────────┼───┤
                                    │REPLACE                    │   │
                                    ├───────────────────────────┼───┤
                                    │NO_OP                      │   │
                                    ├───────────────────────────┼───┤
                                    │GROUP_NULL                 │   │
                                     ├───────────────────────────┼───┤
                                    │GROUP_EMPTY                │   │
                                    ├───────────────────────────┼───┤
                                    │INFO_NULL                  │   │
                                    ├───────────────────────────┼───┤
                                    │INFO_ENV                   │   │
                                    ├───────────────────────────┼───┤
                                    │ERRHANDLER_NULL            │   │
                                    ├───────────────────────────┼───┤
                                    │ERRORS_RETURN              │   │
                                    ├───────────────────────────┼───┤
                                    │ERRORS_ARE_FATAL           │   │
                                    ├───────────────────────────┼───┤
                                    │COMM_NULL                  │   │
                                    ├───────────────────────────┼───┤
                                    │COMM_SELF                  │   │
                                    ├───────────────────────────┼───┤
                                    │COMM_WORLD                 │   │
                                    ├───────────────────────────┼───┤
                                    │WIN_NULL                   │   │
                                    ├───────────────────────────┼───┤
                                    │FILE_NULL                  │   │
                                    ├───────────────────────────┼───┤
                                    │pickle                     │   │
                                    └───────────────────────────┴───┘

MPI4PY.FUTURES

       New in version 3.0.0.

       This package provides a high-level interface for asynchronously executing callables  on  a
       pool of worker processes using MPI for inter-process communication.

   concurrent.futures
       The  mpi4py.futures  package  is  based  on  concurrent.futures  from  the Python standard
       library. More precisely, mpi4py.futures provides the MPIPoolExecutor class as  a  concrete
       implementation  of  the  abstract  class  Executor.   The  submit()  interface schedules a
       callable to be executed asynchronously  and  returns  a  Future  object  representing  the
       execution  of  the  callable.   Future  instances  can  be  queried for the call result or
       exception. Sets of Future instances  can  be  passed  to  the  wait()  and  as_completed()
       functions.
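
        A minimal sketch (assuming an MPI-enabled environment) combining submit()  with  the
        as_completed() helper from the standard concurrent.futures module:

           from concurrent.futures import as_completed
           from mpi4py.futures import MPIPoolExecutor

           if __name__ == '__main__':
               with MPIPoolExecutor(max_workers=2) as executor:
                   futures = [executor.submit(pow, 2, n) for n in range(4)]
                   for fut in as_completed(futures):
                       print(fut.result())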

       NOTE:
          The  concurrent.futures  package  was  introduced  in  Python 3.2. A backport targeting
          Python 2.7 is available on PyPI. The mpi4py.futures package uses concurrent.futures  if
          available,  either  from  the  Python  3 standard library or the Python 2.7 backport if
          installed.  Otherwise,  mpi4py.futures  uses  a  bundled  copy  of  core  functionality
          backported from Python 3.5 to work with Python 2.7.

       SEE ALSO:

          Module concurrent.futures
                 Documentation of the concurrent.futures standard module.

   MPIPoolExecutor
        The MPIPoolExecutor class uses a pool of MPI processes to execute calls asynchronously. By
        performing computations in separate processes, it allows the global interpreter lock to be
        side-stepped, but it also means that only picklable objects can be executed and  returned.
        The __main__ module must be importable by worker processes, thus MPIPoolExecutor instances
        may not work in the interactive interpreter.

       MPIPoolExecutor  takes  advantage of the dynamic process management features introduced in
       the MPI-2 standard. In particular, the MPI.Intracomm.Spawn method of MPI.COMM_SELF is used
       in  the  master  (or  parent)  process  to spawn new worker (or child) processes running a
       Python  interpreter.  The  master  process  uses  a  separate   thread   (one   for   each
       MPIPoolExecutor  instance)  to  communicate  back  and forth with the workers.  The worker
       processes serve the execution of tasks in the  main  (and  only)  thread  until  they  are
       signaled for completion.

       NOTE:
          The  worker  processes  must  import  the main script in order to unpickle any callable
          defined in the __main__ module and submitted from the master process. Furthermore,  the
          callables  may  need  access  to  other  global  variables.  At  the  worker processes,
          mpi4py.futures executes the main  script  code  (using  the  runpy  module)  under  the
          __worker__ namespace to define the __main__ module. The __main__ and __worker__ modules
          are added to sys.modules (both at the master and worker  processes)  to  ensure  proper
          pickling and unpickling.

       WARNING:
          During  the  initial import phase at the workers, the main script cannot create and use
          new MPIPoolExecutor instances. Otherwise, each worker would attempt to spawn a new pool
          of  workers,  leading  to  infinite  recursion.  mpi4py.futures  detects such recursive
          attempts to spawn new workers and aborts the MPI execution  environment.  As  the  main
          script  code  is  run  under  the  __worker__ namespace, the easiest way to avoid spawn
          recursion is using the idiom if __name__ == '__main__': ... in the main script.
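
        A minimal sketch of that idiom (the task function and worker count are illustrative):

           from mpi4py.futures import MPIPoolExecutor

           def task(n):
               return n * n

           if __name__ == '__main__':
               # Only the master process enters this block; workers merely import
               # the definitions above, so no recursive spawning occurs.
               with MPIPoolExecutor(max_workers=4) as executor:
                   print(list(executor.map(task, range(8))))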

       class  mpi4py.futures.MPIPoolExecutor(max_workers=None,   initializer=None,   initargs=(),
       **kwargs)
              An  Executor  subclass  that  executes calls asynchronously using a pool of at most
              max_workers processes.   If  max_workers  is  None  or  not  given,  its  value  is
              determined  from the MPI4PY_FUTURES_MAX_WORKERS environment variable if set, or the
              MPI universe size if set,  otherwise  a  single  worker  process  is  spawned.   If
               max_workers is less than or equal to 0, then a ValueError will be raised.

              initializer  is  an  optional  callable  that is called at the start of each worker
              process before executing any tasks; initargs is a tuple of arguments passed to  the
              initializer.  If initializer raises an exception, all pending tasks and any attempt
              to submit new tasks to the pool will raise a BrokenExecutor exception.

               Other parameters (a brief usage sketch follows this list):

               • python_exe: Path to the Python interpreter  executable  used  to  spawn  worker
                 processes; if not set, sys.executable is used.

              • python_args:  list  or iterable with additional command line flags to pass to the
                Python executable. Command line flags determined from  inspection  of  sys.flags,
                 sys.warnoptions and sys._xoptions are passed unconditionally.

              • mpi_info: dict or iterable yielding (key, value) pairs.  These (key, value) pairs
                are passed (through an MPI.Info object) to the MPI.Intracomm.Spawn call  used  to
                spawn  worker  processes.  This  mechanism  allows telling the MPI runtime system
                where and how to start the processes. Check the documentation of the backend  MPI
                implementation  about  the set of keys it interprets and the corresponding format
                for values.

              • globals: dict or iterable yielding (name, value) pairs  to  initialize  the  main
                module namespace in worker processes.

              • main:  If  set  to  False, do not import the __main__ module in worker processes.
                Setting main to False prevents worker processes from accessing definitions in the
                parent __main__ namespace.

              • path:  list  or  iterable with paths to append to sys.path in worker processes to
                extend the module search path.

              • wdir: Path to set  the  current  working  directory  in  worker  processes  using
                os.chdir().  The  initial  working  directory  is  set by the MPI implementation.
                Quality MPI implementations should honor a wdir info key passed through mpi_info,
                 although such a feature is not mandatory.

              • env:  dict or iterable yielding (name, value) pairs with environment variables to
                update os.environ in worker processes.  The initial environment is set by the MPI
                implementation.  MPI  implementations  may  allow setting the initial environment
                 through mpi_info; however, such a feature is neither required nor recommended  by
                 the MPI standard.
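
               A minimal usage sketch of some of these keyword arguments (the path and environment
               values are hypothetical and shown only for illustration):

                  from mpi4py.futures import MPIPoolExecutor

                  executor = MPIPoolExecutor(
                      max_workers=2,
                      path=['/opt/myproject/lib'],   # appended to sys.path in workers (hypothetical path)
                      env={'OMP_NUM_THREADS': '1'},  # used to update os.environ in workers
                      main=False,                    # do not import __main__ in workers
                  )
                  executor.shutdown()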

              submit(func, *args, **kwargs)
                      Schedules the callable, func, to be executed as func(*args, **kwargs)  and
                      returns a Future object representing the execution of the callable.

                        executor = MPIPoolExecutor(max_workers=1)
                        future = executor.submit(pow, 321, 1234)
                        print(future.result())

              map(func, *iterables, timeout=None, chunksize=1, **kwargs)
                     Equivalent to map(func, *iterables) except func is  executed  asynchronously
                     and  several  calls  to  func  may  be  made  concurrently, out-of-order, in
                     separate  processes.   The  returned  iterator  raises  a  TimeoutError   if
                     __next__()  is  called  and the result isn’t available after timeout seconds
                     from the original call to map().  timeout can be an  int  or  a  float.   If
                     timeout  is not specified or None, there is no limit to the wait time.  If a
                     call raises an exception, then that exception will be raised when its  value
                     is retrieved from the iterator. This method chops iterables into a number of
                     chunks which it submits to the pool as  separate  tasks.  The  (approximate)
                     size  of  these  chunks  can be specified by setting chunksize to a positive
                     integer. For very long iterables, using a  large  value  for  chunksize  can
                     significantly  improve  performance  compared to the default size of one. By
                      default, the returned iterator yields results in-order, waiting for
                      successive tasks to complete. This behavior can be changed by passing the
                      keyword argument unordered as True; the result iterator will then yield a
                      result as soon as any of the tasks complete.

                        executor = MPIPoolExecutor(max_workers=3)
                        for result in executor.map(pow, [2]*32, range(32)):
                            print(result)
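
                      A sketch combining the chunksize, timeout, and unordered options described
                      above; results are yielded as soon as tasks finish:

                         executor = MPIPoolExecutor(max_workers=3)
                         results = executor.map(pow, [2]*32, range(32),
                                                chunksize=4, timeout=60, unordered=True)
                         for result in results:
                             print(result)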

              starmap(func, iterable, timeout=None, chunksize=1, **kwargs)
                     Equivalent  to itertools.starmap(func, iterable). Used instead of map() when
                     argument parameters are already grouped in tuples  from  a  single  iterable
                     (the  data  has  been  “pre-zipped”).  map(func, *iterable) is equivalent to
                     starmap(func, zip(*iterable)).

                        executor = MPIPoolExecutor(max_workers=3)
                        iterable = ((2, n) for n in range(32))
                        for result in executor.starmap(pow, iterable):
                            print(result)

              shutdown(wait=True, cancel_futures=False)
                     Signal the executor that it should free any resources that it is using  when
                     the  currently  pending  futures  are done executing.  Calls to submit() and
                     map() made after shutdown() will raise RuntimeError.

                     If wait is True then this method will  not  return  until  all  the  pending
                     futures  are  done  executing and the resources associated with the executor
                     have been freed.  If wait is False then this method will return  immediately
                     and  the  resources  associated  with  the  executor  will be freed when all
                     pending futures are done executing.  Regardless of the value  of  wait,  the
                     entire  Python  program  will  not  exit  until all pending futures are done
                     executing.

                     If cancel_futures is True, this method will cancel all pending futures  that
                     the  executor  has  not  started  running. Any futures that are completed or
                     running won’t be cancelled, regardless of the value of cancel_futures.

                      You can avoid having to call this method explicitly if you use the with
                      statement, which will shut down the executor instance (waiting as if
                      shutdown() were called with wait set to True).

                        import time
                        with MPIPoolExecutor(max_workers=1) as executor:
                            future = executor.submit(time.sleep, 2)
                        assert future.done()
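
                      A sketch of a non-blocking shutdown that also cancels tasks not yet started:

                         executor = MPIPoolExecutor(max_workers=1)
                         futures = [executor.submit(pow, 2, n) for n in range(8)]
                         # return immediately; pending tasks not yet running are cancelled
                         executor.shutdown(wait=False, cancel_futures=True)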

              bootup(wait=True)
                     Signal the executor that it should allocate eagerly any  required  resources
                     (in  particular,  MPI worker processes). If wait is True, then bootup() will
                     not return until the executor resources are ready  to  process  submissions.
                     Resources  are  automatically  allocated in the first call to submit(), thus
                     calling bootup() explicitly is seldom needed.
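
                      A sketch of eager worker allocation before the first submission:

                         executor = MPIPoolExecutor(max_workers=4)
                         executor.bootup(wait=True)   # spawn worker processes up front
                         future = executor.submit(pow, 2, 10)
                         assert future.result() == 1024
                         executor.shutdown()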

       MPI4PY_FUTURES_MAX_WORKERS
               If the max_workers parameter to MPIPoolExecutor is None or not given, the
               MPI4PY_FUTURES_MAX_WORKERS environment variable provides a fallback value for the
               maximum number of MPI worker processes to spawn.
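
               For example, assuming a hypothetical script.py that creates MPIPoolExecutor()
               without passing max_workers, the fallback can be set from the shell:

                  $ env MPI4PY_FUTURES_MAX_WORKERS=16 mpiexec -n 1 python script.py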

       NOTE:
          As the master process uses a separate thread to  perform  MPI  communication  with  the
          workers, the backend MPI implementation should provide support for MPI.THREAD_MULTIPLE.
           However, some popular MPI implementations do not yet support concurrent MPI calls from
           multiple threads. Additionally, users may decide to initialize MPI with a lower level
          of thread support. If the level of thread support in  the  backend  MPI  is  less  than
          MPI.THREAD_MULTIPLE,  mpi4py.futures  will use a global lock to serialize MPI calls. If
          the level of thread support is less  than  MPI.THREAD_SERIALIZED,  mpi4py.futures  will
          emit a RuntimeWarning.
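
           As a minimal sketch, applications can query the provided level of thread support at
           runtime and anticipate this behavior:

              from mpi4py import MPI

              provided = MPI.Query_thread()   # thread support level of the backend MPI
              if provided < MPI.THREAD_MULTIPLE:
                  print("mpi4py.futures will serialize MPI calls with a global lock")
              if provided < MPI.THREAD_SERIALIZED:
                  print("expect a RuntimeWarning from mpi4py.futures")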

       WARNING:
          If  the  level  of thread support in the backend MPI is less than MPI.THREAD_SERIALIZED
           (i.e., it is either MPI.THREAD_SINGLE or MPI.THREAD_FUNNELED), in theory mpi4py.futures
          cannot  be  used.  Rather than raising an exception, mpi4py.futures emits a warning and
          takes a “cross-fingers” attitude to continue execution in the hope that serializing MPI
          calls with a global lock will actually work.

   MPICommExecutor
       Legacy MPI-1 implementations (as well as some vendor MPI-2 implementations) do not support
       the dynamic process management features introduced in the  MPI-2  standard.  Additionally,
       job  schedulers  and  batch  systems  in  supercomputing  facilities  may  pose additional
       complications to applications using the MPI_Comm_spawn() routine.

        With these issues in mind, mpi4py.futures supports an additional, more traditional,
        SPMD-like usage pattern requiring MPI-1 calls only. Python applications are started the
        usual way, e.g., using the mpiexec command. Python code should make a collective call to
        the MPICommExecutor context manager to partition the set of MPI processes within an MPI
        communicator into one master process and many worker processes. The master process gets
        access to an MPIPoolExecutor instance to submit tasks. Meanwhile, the worker processes
        follow a different execution path and team up to execute the tasks submitted from the
        master.

        Besides alleviating the lack of dynamic process management features in legacy MPI-1 or
       partial MPI-2 implementations, the  MPICommExecutor  context  manager  may  be  useful  in
       classic MPI-based Python applications willing to take advantage of the simple, task-based,
       master/worker approach available in the mpi4py.futures package.

       class mpi4py.futures.MPICommExecutor(comm=None, root=0)
               Context manager for MPIPoolExecutor. This context manager splits an MPI
               (intra)communicator comm (defaults to MPI.COMM_WORLD if not provided or None) into
              two disjoint sets: a single master  process  (with  rank  root  in  comm)  and  the
              remaining   worker   processes.   These   sets   are   then  connected  through  an
              intercommunicator.  The  target  of  the  with  statement  is  assigned  either  an
              MPIPoolExecutor instance (at the master) or None (at the workers).

                 from mpi4py import MPI
                 from mpi4py.futures import MPICommExecutor

                 with MPICommExecutor(MPI.COMM_WORLD, root=0) as executor:
                     if executor is not None:
                        future = executor.submit(abs, -42)
                        assert future.result() == 42
                        answer = set(executor.map(abs, [-42, 42]))
                        assert answer == {42}

       WARNING:
           If MPICommExecutor is passed a communicator of size one (e.g., MPI.COMM_SELF), then the
           executor instance assigned to the target of the with statement will execute all
           submitted tasks in a single worker thread, thus ensuring that task execution still
           progresses asynchronously. However, the GIL will prevent the main and worker threads
           from running concurrently on multicore processors. Moreover, the thread context
           switching may noticeably harm the performance of CPU-bound tasks. In the case of
           I/O-bound tasks the GIL is not usually an issue; however, as a single worker thread is
           used, only one task can progress at a time. We advise against using MPICommExecutor
           with communicators of size one and suggest refactoring your code to use a
           ThreadPoolExecutor instead, as sketched below.
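
           A minimal sketch of the suggested alternative using the standard library:

              from concurrent.futures import ThreadPoolExecutor

              with ThreadPoolExecutor(max_workers=1) as executor:
                  future = executor.submit(abs, -42)
                  assert future.result() == 42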

   Command line
        Recalling the issues related to the lack of support for dynamic process management features
       in MPI implementations, mpi4py.futures supports an alternative usage pattern where  Python
       code (either from scripts, modules, or zip files) is run under command line control of the
       mpi4py.futures package by  passing  -m  mpi4py.futures  to  the  python  executable.   The
       mpi4py.futures   invocation   should   be   passed  a  pyfile  path  to  a  script  (or  a
       zipfile/directory containing a __main__.py file).  Additionally, mpi4py.futures accepts -m
       mod  to  execute  a module named mod, -c cmd to execute a command string cmd, or even - to
       read commands from standard input (sys.stdin).  Summarizing, mpi4py.futures can be invoked
       in the following ways:

        • $ mpiexec -n numprocs python -m mpi4py.futures pyfile [arg] ...

        • $ mpiexec -n numprocs python -m mpi4py.futures -m mod [arg] ...

        • $ mpiexec -n numprocs python -m mpi4py.futures -c cmd [arg] ...

        • $ mpiexec -n numprocs python -m mpi4py.futures - [arg] ...

        Before starting the main script execution, mpi4py.futures splits MPI.COMM_WORLD into one
        master (the process with rank 0 in MPI.COMM_WORLD) and numprocs - 1 workers and connects
        them through an MPI intercommunicator. Afterwards, the master process proceeds with the
        execution of the user script code, which eventually creates MPIPoolExecutor instances to
        submit tasks. Meanwhile, the worker processes follow a different execution path to serve
        the master. Upon successful termination of the main script at the master, the entire MPI
        execution environment exits gracefully. In case of any unhandled exception in the main
        script, the master process calls MPI.COMM_WORLD.Abort(1) to prevent deadlocks and force
        termination of the entire MPI execution environment.

       WARNING:
          Running  scripts  under  command  line  control  of  mpi4py.futures is quite similar to
           executing a single-process application that spawns additional workers as required.
          However,  there  is  a  very  important  difference  users  should  be  aware  of.  All
          MPIPoolExecutor instances created at the master will share the pool of  workers.  Tasks
          submitted  at  the master from many different executors will be scheduled for execution
          in random order as soon as a worker is idle. Any executor can  easily  starve  all  the
          workers  (e.g.,  by  calling  MPIPoolExecutor.map()  with long iterables). If that ever
          happens, submissions from other executors will not be serviced until free  workers  are
          available.

       SEE ALSO:

          Command line
                 Documentation on Python command line interface.

   Examples
       The  following julia.py script computes the Julia set and dumps an image to disk in binary
       PGM format. The code starts by importing MPIPoolExecutor from the mpi4py.futures  package.
       Next,  some global constants and functions implement the computation of the Julia set. The
        computations are protected with the standard if __name__ == '__main__': ... idiom. The
        image is computed by whole scanlines, submitting all these tasks at once using the map
        method. The result iterator yields scanlines in-order as the tasks complete. Finally, each
       scanline is dumped to disk.

       julia.py

          from mpi4py.futures import MPIPoolExecutor

          x0, x1, w = -2.0, +2.0, 640*2
          y0, y1, h = -1.5, +1.5, 480*2
          dx = (x1 - x0) / w
          dy = (y1 - y0) / h

          c = complex(0, 0.65)

          def julia(x, y):
              z = complex(x, y)
              n = 255
              while abs(z) < 3 and n > 1:
                  z = z**2 + c
                  n -= 1
              return n

          def julia_line(k):
              line = bytearray(w)
              y = y1 - k * dy
              for j in range(w):
                  x = x0 + j * dx
                  line[j] = julia(x, y)
              return line

          if __name__ == '__main__':

              with MPIPoolExecutor() as executor:
                  image = executor.map(julia_line, range(h))
                  with open('julia.pgm', 'wb') as f:
                      f.write(b'P5 %d %d %d\n' % (w, h, 255))
                      for line in image:
                          f.write(line)

       The  recommended  way to execute the script is by using the mpiexec command specifying one
       MPI process (master) and (optional but recommended) the desired MPI universe  size,  which
       determines  the  number  of  additional  dynamically  spawned processes (workers). The MPI
       universe size is provided either by a batch system or set by  the  user  via  command-line
       arguments  to  mpiexec  or  environment variables. Below we provide examples for MPICH and
       Open MPI implementations [1]. In all of these examples, the  mpiexec  command  launches  a
       single  master  process running the Python interpreter and executing the main script. When
       required, mpi4py.futures spawns the pool of 16 worker processes. The master submits  tasks
       to  the  workers  and  waits  for the results. The workers receive incoming tasks, execute
       them, and send back the results to the master.

        When using the MPICH implementation or its derivatives based on the Hydra process manager,
       users can set the MPI universe size via the -usize argument to mpiexec:

          $ mpiexec -n 1 -usize 17 python julia.py

       or, alternatively, by setting the MPIEXEC_UNIVERSE_SIZE environment variable:

          $ MPIEXEC_UNIVERSE_SIZE=17 mpiexec -n 1 python julia.py

       In the Open MPI implementation, the MPI universe size can be set via the -host argument to
       mpiexec:

          $ mpiexec -n 1 -host <hostname>:17 python julia.py

       Another way to specify the  number  of  workers  is  to  use  the  mpi4py.futures-specific
       environment variable MPI4PY_FUTURES_MAX_WORKERS:

          $ MPI4PY_FUTURES_MAX_WORKERS=16 mpiexec -n 1 python julia.py

       Note that in this case, the MPI universe size is ignored.

       Alternatively,  users may decide to execute the script in a more traditional way, that is,
       all the MPI processes are started at once. The  user  script  is  run  under  command-line
       control of mpi4py.futures passing the -m flag to the python executable:

          $ mpiexec -n 17 python -m mpi4py.futures julia.py

       As  explained  previously,  the 17 processes are partitioned in one master and 16 workers.
       The master process executes the main script while the workers execute the tasks  submitted
       by the master.

       [1]  When  using  an  MPI  implementation  other  than MPICH or Open MPI, please check the
            documentation of the implementation and/or batch system for the ways to  specify  the
            desired MPI universe size.

       GIL    See global interpreter lock.

MPI4PY.UTIL

       New in version 3.1.0.

       The mpi4py.util package collects miscellaneous utilities within the intersection of Python
       and MPI.

   mpi4py.util.pkl5
       New in version 3.1.0.

        Pickle protocol 5 (see PEP 574) introduced support for out-of-band buffers, allowing for
       more efficient handling of certain object types with large memory footprints.

       MPI  for  Python  uses  the  traditional  in-band  handling  of  buffers. This approach is
       appropriate for communicating non-buffer Python objects, or buffer-like objects with small
       memory  footprints.  For  point-to-point communication, in-band buffer handling allows for
       the communication of a pickled stream with  a  single  MPI  message,  at  the  expense  of
       additional CPU and memory overhead in the pickling and unpickling steps.

       The   mpi4py.util.pkl5   module   provides  communicator  wrapper  classes  reimplementing
        pickle-based point-to-point communication methods using pickle protocol 5. Handling
        out-of-band buffers necessarily involves multiple MPI messages, thus increasing latency
        and hurting performance for small-size data. However, for large-size data, the zero-copy
        savings of out-of-band buffer handling more than offset the extra latency costs.
       Additionally, these wrapper methods overcome the infamous 2 GiB message count limit (MPI-1
       to MPI-3).

       NOTE:
          Support  for  pickle  protocol  5  is  available in the pickle module within the Python
          standard library since Python 3.8. Previous Python  3  releases  can  use  the  pickle5
          backport, which is available on PyPI and can be installed with:

              python -m pip install pickle5

       class mpi4py.util.pkl5.Request(*args, **kwargs)
              Custom request class for nonblocking communications.

              NOTE:
                 Request is not a subclass of mpi4py.MPI.Request

              Free()

              cancel()

              get_status()

              test()

              wait()

               classmethod testall()

               classmethod waitall()

       class mpi4py.util.pkl5.Message(*args, **kwargs)
              Custom message class for matching probes.

              NOTE:
                 Message is not a subclass of mpi4py.MPI.Message

              recv()

              irecv()

               classmethod probe()

               classmethod iprobe()

       class mpi4py.util.pkl5.Comm(*args, **kwargs)
              Base communicator wrapper class.

              send()

              bsend()

              ssend()

              isend()

              ibsend()

              issend()

              recv()

              irecv()

                     WARNING:
                        This method cannot be supported reliably and raises RuntimeError.

              sendrecv()

              mprobe()

              improbe()

              bcast()

       class mpi4py.util.pkl5.Intracomm(*args, **kwargs)
              Intracommunicator wrapper class.

       class mpi4py.util.pkl5.Intercomm(*args, **kwargs)
              Intercommunicator wrapper class.

   Examples
       test-pkl5-1.py

          import numpy as np
          from mpi4py import MPI
          from mpi4py.util import pkl5

          comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
          size = comm.Get_size()
          rank = comm.Get_rank()
          dst = (rank + 1) % size
          src = (rank - 1) % size

          sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
          sreq = comm.isend(sobj, dst, tag=42)
          robj = comm.recv (None, src, tag=42)
          sreq.Free()

          assert np.min(robj) == src
          assert np.max(robj) == src

       test-pkl5-2.py

          import numpy as np
          from mpi4py import MPI
          from mpi4py.util import pkl5

          comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
          size = comm.Get_size()
          rank = comm.Get_rank()
          dst = (rank + 1) % size
          src = (rank - 1) % size

          sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
          sreq = comm.isend(sobj, dst, tag=42)

          status = MPI.Status()
          rmsg = comm.mprobe(status=status)
          assert status.Get_source() == src
          assert status.Get_tag() == 42
          rreq = rmsg.irecv()
          robj = rreq.wait()

          sreq.Free()
          assert np.max(robj) == src
          assert np.min(robj) == src

   mpi4py.util.dtlib
       New in version 3.1.0.

       The mpi4py.util.dtlib module provides converter routines between NumPy and MPI datatypes.

        mpi4py.util.dtlib.from_numpy_dtype(dtype)
               Convert a NumPy dtype-like object to an MPI datatype.

               Parameters
                      dtype -- NumPy dtype-like object.

        mpi4py.util.dtlib.to_numpy_dtype(datatype)
               Convert an MPI datatype to a NumPy dtype.

               Parameters
                      datatype -- MPI datatype.
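
        A minimal round-trip sketch of these converters (assuming NumPy is available):

           import numpy as np
           from mpi4py.util import dtlib

           np_dtype = np.dtype('i4')
           datatype = dtlib.from_numpy_dtype(np_dtype)   # MPI datatype matching the dtype
           assert datatype.Get_size() == np_dtype.itemsize
           assert dtlib.to_numpy_dtype(datatype) == np_dtype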

MPI4PY.RUN

       New in version 3.0.0.

        At import time, mpi4py initializes the MPI execution environment by calling
        MPI_Init_thread() and installs an exit hook to automatically call MPI_Finalize() just
        before the Python process terminates. Additionally, mpi4py overrides the default
        ERRORS_ARE_FATAL error handler in favor of ERRORS_RETURN, which allows translating MPI
        errors into Python exceptions. These departures from standard MPI behavior may be
        controversial, but they are quite convenient within the highly dynamic Python programming
        environment. Third-party code using mpi4py can just from mpi4py import MPI and perform
        MPI calls without the tedious initialization/finalization handling. MPI errors, once
        automatically translated into Python exceptions, can be dealt with using the common
        try...except...finally clauses; unhandled MPI exceptions will print a traceback, which
        helps in locating problems in source code.
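
        For instance, with the ERRORS_RETURN handler installed by mpi4py, an erroneous call can
        be handled as a regular Python exception. A minimal sketch using a deliberately invalid
        destination rank:

           from mpi4py import MPI

           comm = MPI.COMM_WORLD
           try:
               # an out-of-range destination rank triggers an MPI error
               comm.send(None, dest=comm.Get_size(), tag=0)
           except MPI.Exception as exc:
               print("MPI error:", exc.Get_error_string())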

       Unfortunately, the interplay of automatic MPI finalization and  unhandled  exceptions  may
       lead  to  deadlocks.  In  unattended  runs, these deadlocks will drain the battery of your
       laptop, or burn precious allocation hours in your supercomputing facility.

       Consider the following snippet of Python code. Assume this code is stored  in  a  standard
       Python script file and run with mpiexec in two or more processes.

          from mpi4py import MPI
          assert MPI.COMM_WORLD.Get_size() > 1
          rank = MPI.COMM_WORLD.Get_rank()
          if rank == 0:
              1/0
              MPI.COMM_WORLD.send(None, dest=1, tag=42)
          elif rank == 1:
              MPI.COMM_WORLD.recv(source=0, tag=42)

       Process  0  raises ZeroDivisionError exception before performing a send call to process 1.
       As the exception is not handled, the Python interpreter running in process 0 will  proceed
       to  exit  with  non-zero  status.  However,  as  mpi4py installed a finalizer hook to call
       MPI_Finalize() before exit, process 0 will block waiting for other processes to also enter
       the  MPI_Finalize()  call. Meanwhile, process 1 will block waiting for a message to arrive
        from process 0, thus never reaching MPI_Finalize(). The whole MPI execution environment
       is irremediably in a deadlock state.

       To  alleviate  this  issue,  mpi4py  offers  a  simple, alternative command line execution
        mechanism based on using the -m flag and implemented with the runpy module. To use this
        feature, Python code should be run passing -m mpi4py in the command line invoking the
       Python interpreter. In  case  of  unhandled  exceptions,  the  finalizer  hook  will  call
       MPI_Abort()  on  the  MPI_COMM_WORLD  communicator,  thus  effectively  aborting  the  MPI
       execution environment.
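
        For instance, assuming the failing snippet shown earlier is stored in a hypothetical
        script.py, it can be run under this mechanism as:

           $ mpiexec -n 2 python -m mpi4py script.py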

       WARNING:
           When a process is forced to abort, resources (e.g. open files) are not cleaned up and
          any  registered  finalizers  (either  with the atexit module, the Python C/API function
          Py_AtExit(), or even the C standard library function atexit()) will  not  be  executed.
          Thus,  aborting execution is an extremely impolite way of ensuring process termination.
          However, MPI provides no other mechanism to recover from a deadlock state.

   Interface options
       The use of -m mpi4py to execute Python code on the command  line  resembles  that  of  the
       Python interpreter.

        • mpiexec -n numprocs python -m mpi4py pyfile [arg] ...

        • mpiexec -n numprocs python -m mpi4py -m mod [arg] ...

        • mpiexec -n numprocs python -m mpi4py -c cmd [arg] ...

        • mpiexec -n numprocs python -m mpi4py - [arg] ...

       <pyfile>
              Execute  the  Python  code  contained  in  pyfile,  which must be a filesystem path
              referring to either a Python file, a directory containing a __main__.py file, or  a
              zipfile containing a __main__.py file.

       -m <mod>
              Search sys.path for the named module mod and execute its contents.

       -c <cmd>
              Execute the Python code in the cmd string command.

       -      Read commands from standard input (sys.stdin).

       SEE ALSO:

          Command line
                 Documentation on Python command line interface.

REFERENCE

    mpi4py.MPI
        Reference documentation for the mpi4py.MPI module.

CITATION

        If MPI for Python has been significant to a project that leads to an academic publication,
       please acknowledge that fact by citing the project.

       • L. Dalcin and Y.-L. L. Fang, mpi4py:  Status  Update  After  12  Years  of  Development,
         Computing       in       Science       &       Engineering,      23(4):47-54,      2021.
         https://doi.org/10.1109/MCSE.2021.3083216

       • L. Dalcin, P. Kler, R. Paz, and A. Cosimo, Parallel Distributed Computing using  Python,
         Advances         in         Water        Resources,        34(9):1124-1139,        2011.
         https://doi.org/10.1016/j.advwatres.2011.04.013

       • L. Dalcin, R. Paz, M. Storti, and J. D’Elia, MPI for  Python:  performance  improvements
         and  MPI-2  extensions,  Journal  of  Parallel and Distributed Computing, 68(5):655-662,
         2008.  https://doi.org/10.1016/j.jpdc.2007.09.005

       • L. Dalcin, R. Paz, and M. Storti, MPI for Python, Journal of  Parallel  and  Distributed
         Computing, 65(9):1108-1115, 2005.  https://doi.org/10.1016/j.jpdc.2005.03.010

INSTALLATION

   Requirements
       You  need  to  have  the  following  software properly installed in order to build MPI for
       Python:

       • A working MPI implementation, preferably supporting MPI-3 and built with  shared/dynamic
         libraries.

         NOTE:
            If  you want to build some MPI implementation from sources, check the instructions at
            Building MPI from sources in the appendix.

       • Python 2.7, 3.5 or above.

         NOTE:
            Some MPI-1 implementations do require the actual command line arguments to be  passed
            in  MPI_Init().  In  this  case,  you will need to use a rebuilt, MPI-enabled, Python
            interpreter executable. MPI for Python has some support for alleviating you from this
            task. Check the instructions at MPI-enabled Python interpreter in the appendix.

   Using pip
        If you already have a working MPI (whether you installed it from sources or using a
        pre-built package from your favourite GNU/Linux distribution) and the mpicc compiler
        wrapper is on your search path, you can use pip:

          $ python -m pip install mpi4py

       NOTE:
          If  the  mpicc  compiler  wrapper  is not on your search path (or if it has a different
          name) you can use env to pass the environment variable MPICC providing the full path to
          the MPI compiler wrapper executable:

              $ env MPICC=/path/to/mpicc python -m pip install mpi4py

       WARNING:
           pip keeps previously built wheel files in its cache for future reuse. If you want to
          reinstall the mpi4py package using a different or updated MPI implementation, you  have
          to either first remove the cached wheel file with:

              $ python -m pip cache remove mpi4py

          or ask pip to disable the cache:

              $ python -m pip install --no-cache-dir mpi4py

   Using distutils
       The  MPI  for  Python  package is available for download at the project website generously
       hosted by GitHub. You can use curl or wget to get a release tarball.

       • Using curl:

            $ curl -O https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz

       • Using wget:

            $ wget https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz

       After unpacking the release tarball:

          $ tar -zxf mpi4py-X.Y.Z.tar.gz
          $ cd mpi4py-X.Y.Z

       the package is ready for building.

       MPI for Python uses a standard  distutils-based  build  system.  However,  some  distutils
       commands (like build) have additional options:

       --mpicc=
              Lets you specify a special location or name for the mpicc compiler wrapper.

       --mpi= Lets you pass a section with MPI configuration within a special configuration file.

       --configure
               Runs exhaustive tests checking for missing MPI types, constants, and
              functions. This option should be passed in order to build MPI  for  Python  against
              old MPI-1 or MPI-2 implementations, possibly providing a subset of MPI-3.

        If you use an MPI implementation providing an mpicc compiler wrapper (e.g., MPICH, Open
       MPI), it will be used for compilation and linking. This is the preferred and  easiest  way
       of building MPI for Python.

       If mpicc is located somewhere in your search path, simply run the build command:

          $ python setup.py build

       If  mpicc is not in your search path or the compiler wrapper has a different name, you can
       run the build command specifying its location:

          $ python setup.py build --mpicc=/where/you/have/mpicc

       Alternatively, you can provide all the relevant information about your MPI  implementation
       by  editing  the  file called mpi.cfg. You can use the default section [mpi] or add a new,
       custom section, for example [other_mpi] (see the examples provided in the mpi.cfg file  as
       a starting point to write your own section):

          [mpi]

          include_dirs         = /usr/local/mpi/include
          libraries            = mpi
          library_dirs         = /usr/local/mpi/lib
          runtime_library_dirs = /usr/local/mpi/lib

          [other_mpi]

          include_dirs         = /opt/mpi/include ...
          libraries            = mpi ...
          library_dirs         = /opt/mpi/lib ...
           runtime_library_dirs = /opt/mpi/lib ...

          ...

        and then run the build command, perhaps specifying your custom configuration section:

          $ python setup.py build --mpi=other_mpi

       After building, the package is ready for install.

        If you have root privileges (either by logging in as the root user or by using sudo) and
        you want to install MPI for Python in your system for all users, just do:
       want to install MPI for Python in your system for all users, just do:

          $ python setup.py install

       The  previous   steps   will   install   the   mpi4py   package   at   standard   location
       prefix/lib/pythonX.X/site-packages.

       If  you do not have root privileges or you want to install MPI for Python for your private
       use, just do:

          $ python setup.py install --user

   Testing
       To quickly test the installation:

          $ mpiexec -n 5 python -m mpi4py.bench helloworld
          Hello, World! I am process 0 of 5 on localhost.
          Hello, World! I am process 1 of 5 on localhost.
          Hello, World! I am process 2 of 5 on localhost.
          Hello, World! I am process 3 of 5 on localhost.
          Hello, World! I am process 4 of 5 on localhost.

       If you installed from source, issuing at the command line:

          $ mpiexec -n 5 python demo/helloworld.py

       or (in the case of ancient MPI-1 implementations):

          $ mpirun -np 5 python `pwd`/demo/helloworld.py

       will launch a five-process  run  of  the  Python  interpreter  and  run  the  test  script
       demo/helloworld.py from the source distribution.

       You can also run all the unittest scripts:

          $ mpiexec -n 5 python test/runtests.py

        or, if you have the nose unit testing framework installed:

          $ mpiexec -n 5 nosetests -w test

        or, if you have the py.test unit testing framework installed:

          $ mpiexec -n 5 py.test test/

APPENDIX

   MPI-enabled Python interpreter
          WARNING:
              These  days  it  is no longer required to use the MPI-enabled Python interpreter in
              most cases, and, therefore, it is not built by default anymore because  it  is  too
              difficult  to  reliably  build a Python interpreter across different distributions.
              If you know that you still really need it, see below on how to  use  the  build_exe
              and install_exe commands.

       Some MPI-1 implementations (notably, MPICH 1) do require the actual command line arguments
       to be passed at the time MPI_Init() is called. In this  case,  you  will  need  to  use  a
       re-built,  MPI-enabled,  Python  interpreter  binary  executable.  A  basic implementation
       (targeting Python 2.X) of what is required is shown below:

           #include <Python.h>
           #include <mpi.h>

           int main(int argc, char *argv[])
           {
              int status, flag;
              MPI_Init(&argc, &argv);         /* initialize MPI before Python starts */
              status = Py_Main(argc, argv);   /* run the regular Python interpreter  */
              MPI_Finalized(&flag);           /* Python code may have finalized MPI  */
              if (!flag) MPI_Finalize();
              return status;
           }

        The source code above is straightforward; compiling it should also be. However, the
        linking step is trickier: special flags have to be passed to the linker depending on your
        platform. To relieve you of such low-level details, MPI for Python provides some
        pure-distutils based support to build and install an MPI-enabled Python interpreter
        executable:

          $ cd mpi4py-X.X.X
          $ python setup.py build_exe [--mpi=<name>|--mpicc=/path/to/mpicc]
          $ [sudo] python setup.py install_exe [--install-dir=$HOME/bin]

       After  the  above  steps  you  should  have  the  MPI-enabled  interpreter  installed   as
       prefix/bin/pythonX.X-mpi   (or  $HOME/bin/pythonX.X-mpi).  Assuming  that  prefix/bin  (or
       $HOME/bin) is listed on your PATH, you should be able to  enter  your  MPI-enabled  Python
       interactively, for example:

          $ python2.7-mpi
          Python 2.7.8 (default, Nov 10 2014, 08:19:18)
          [GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] on linux2
          Type "help", "copyright", "credits" or "license" for more information.
          >>> import sys
          >>> sys.executable
          '/usr/bin/python2.7-mpi'
          >>>

   Building MPI from sources
        The list below provides basic instructions for building some of the open-source MPI
        implementations with support for shared/dynamic libraries on POSIX environments.

       • MPICH

            $ tar -zxf mpich-X.X.X.tar.gz
            $ cd mpich-X.X.X
            $ ./configure --enable-shared --prefix=/usr/local/mpich
            $ make
            $ make install

       • Open MPI

             $ tar -zxf openmpi-X.X.X.tar.gz
            $ cd openmpi-X.X.X
            $ ./configure --prefix=/usr/local/openmpi
            $ make all
            $ make install

       • MPICH 1

            $ tar -zxf mpich-X.X.X.tar.gz
            $ cd mpich-X.X.X
            $ ./configure --enable-sharedlib --prefix=/usr/local/mpich1
            $ make
            $ make install

        You may need to set the LD_LIBRARY_PATH environment variable (using export, setenv, or
        whatever applies to your system) to point to the directory containing the MPI libraries.
        If you get runtime linking errors when running MPI programs, the following lines can be
        added to the user login shell script (.profile, .bashrc, etc.).

       • MPICH

            MPI_DIR=/usr/local/mpich
            export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

       • Open MPI

            MPI_DIR=/usr/local/openmpi
            export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

       • MPICH 1

            MPI_DIR=/usr/local/mpich1
             export LD_LIBRARY_PATH=$MPI_DIR/lib/shared:$LD_LIBRARY_PATH
            export MPICH_USE_SHLIB=yes

         WARNING:
            MPICH 1 support for dynamic libraries is not completely transparent. Users should set
            the  environment variable MPICH_USE_SHLIB to yes in order to avoid link problems when
            using the mpicc compiler wrapper.

AUTHOR

       Lisandro Dalcin

COPYRIGHT

       2024, Lisandro Dalcin