NAME
ibacm - address and route resolution services for InfiniBand.
SYNOPSIS
ibacm [-D] [-P] [-A addr_file] [-O option_file]
DESCRIPTION
The IB ACM implements and provides a framework for name, address, and route (path) resolution services
over InfiniBand. It is intended to address connection setup scalability issues when running MPI applications
on large clusters. The IB ACM provides information needed to establish a connection, but does not
implement the CM protocol.
A primary user of the ibacm service is the librdmacm library. This enables applications to make use of
the ibacm service without code changes or needing to be aware that the service is in use. librdmacm
versions 1.0.12 - 1.0.15 can invoke IB ACM services when built using the --with-ib_acm option. Version
1.0.16 and newer of librdmacm will automatically use the IB ACM if it is installed. The IB ACM services
tie in under the rdma_resolve_addr, rdma_resolve_route, and rdma_getaddrinfo routines. For maximum
benefit, the rdma_getaddrinfo routine should be used; however, existing applications should still see
significant connection scaling benefits using the calls available in librdmacm 1.0.11 and earlier
releases.
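For applications coded directly to librdmacm, the preferred rdma_getaddrinfo path looks like the
minimal sketch below. This is illustrative only: the program name and port string are arbitrary, and
when the ibacm service is unavailable librdmacm simply falls back to normal resolution.

    #include <stdio.h>
    #include <string.h>
    #include <rdma/rdma_cma.h>

    int main(int argc, char **argv)
    {
        struct rdma_addrinfo hints, *res;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <dest_name_or_ip>\n", argv[0]);
            return 1;
        }

        memset(&hints, 0, sizeof hints);
        hints.ai_port_space = RDMA_PS_TCP;

        /* If the ibacm service is running, librdmacm may satisfy this
         * lookup through ibacm; on failure it falls back to normal
         * resolution. The port string is arbitrary. */
        if (rdma_getaddrinfo(argv[1], "7174", &hints, &res)) {
            perror("rdma_getaddrinfo");
            return 1;
        }

        printf("resolved %s\n", argv[1]);
        rdma_freeaddrinfo(res);
        return 0;
    }

Compile with something like 'cc -o acm_resolve acm_resolve.c -lrdmacm -libverbs' and run it against a
destination host to exercise the same lookup path that the service accelerates.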
The IB ACM is focused on being scalable and efficient. The current implementation limits network
traffic, SA interactions, and centralized services. ACM supports multiple resolution protocols in order
to handle different fabric topologies.
The IB ACM package consists of two components: the ibacm service and a test/configuration utility,
ib_acme. Both are userspace components and are available for Linux and Windows. Additional details are
given below.
OPTIONS
-D run in daemon mode (default)
-P run as standard process
-A addr_file
address configuration file
-O option_file
option configuration file
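For example, to run the service in the foreground with explicit configuration files (the file names
below are placeholders):

    ibacm -P -A my_addr.cfg -O my_opts.cfg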
QUICK START GUIDE
1. Prerequisites: libibverbs and libibumad must be installed. The IB stack should be running with IPoIB
configured. These steps assume that the user has administrative privileges.
2. Install the IB ACM package. This installs ibacm, ib_acme, and init.d scripts.
3. Run 'ibacm' as administrator to start the ibacm daemon.
4. Optionally, run 'ib_acme -d <dest_ip> -v' to verify that the ibacm service is running.
5. Install librdmacm, using the build option --with-ib_acm if needed. This build option is not needed
with librdmacm 1.0.17 or newer. The librdmacm will automatically use the ibacm service. On failures,
the librdmacm will fall back to normal resolution.
6. You can use ib_acme -P to gather performance statistics from the local ibacm daemon to see if the
service is working correctly (see the example session below).
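An example session covering steps 3, 4, and 6, with an illustrative destination address:

    ibacm
    ib_acme -d 192.168.0.1 -v
    ib_acme -P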
NOTES
ib_acme:
The ib_acme program serves a dual role. It acts as a utility to test ibacm operation and helps verify
whether the ibacm service and the selected protocol are usable for a given cluster configuration.
Additionally, it automatically generates ibacm configuration files to assist with or eliminate manual
setup.
ibacm configuration files:
The ibacm service relies on two configuration files.
The ibacm_addr.cfg file contains name and address mappings for each IB <device, port, pkey> endpoint.
Although the names in the ibacm_addr.cfg file can be anything, ib_acme maps the host name and IP
addresses to the IB endpoints. If the address file cannot be found, the ibacm service will attempt to
create one using default values.
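The sketch below shows the general shape of the file only; the column layout and values are
illustrative, and the authoritative version is the one generated by ib_acme on the target system.
Each line maps one name or address to a <device, port, pkey> endpoint:

    # address        device       port  pkey
    node1            ibv_device0  1     default
    192.168.0.1      ibv_device0  1     default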
The ibacm_opts.cfg file provides a set of configurable options for the ibacm service, such as timeout,
number of retries, logging level, etc. ib_acme generates the ibacm_opts.cfg file using static
information. If an option file cannot be found, ibacm will use default values.
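An illustrative fragment is shown below; the option names follow this manual and common ibacm builds,
but the values are placeholders rather than shipped defaults, and ib_acme can generate a complete file:

    log_level 1
    timeout 2000
    retries 2
    route_preload none
    addr_preload none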
ibacm:
The ibacm service is responsible for resolving names and addresses to InfiniBand path information and
caching such data. It should execute with administrative privileges.
The ibacm service implements a client interface over TCP sockets, which is abstracted by the librdmacm
library. One or more back-end protocols are used by the ibacm service to satisfy user requests.
Although the ibacm service supports standard SA path record queries on the back-end, it also supports a
resolution protocol based on multicast traffic. The latter is not usable on all fabric topologies,
specifically ones that may not have reversible paths or fabrics using torus routing. Users should use
the ib_acme utility to verify that the multicast protocol is usable before running other applications.
Conceptually, the ibacm service implements an ARP-like protocol and either uses IB multicast records to
construct path record data or queries the SA directly, depending on the selected route protocol. By
default, the ibacm service uses and caches SA path record queries.
Specifically, all IB endpoints join a number of multicast groups. Multicast groups differ based on
rate, MTU, SL, etc., and are prioritized. All participating endpoints must be able to communicate on
the lowest priority multicast group. The ibacm assigns one or more names/addresses to each IB endpoint
using the ibacm_addr.cfg file. Clients provide source and destination names or addresses as input to the
service, and receive as output path record data.
The service maps a client's source name/address to a local IB endpoint. If a client does not provide a
source address, then the ibacm service will select one based on the destination and local routing tables.
If the destination name/address is not cached locally, the service sends a multicast request out on the
lowest priority multicast group on the local endpoint. The request carries a list of multicast groups
that the sender can use. The recipient of the request selects the highest priority multicast group that
it can also use and returns that information directly to the sender. The request data is cached by all
endpoints that receive the multicast request message. The source endpoint also caches the response and
uses the selected multicast group to construct or obtain path record data, which is returned to the
client.
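At the librdmacm level, this flow is what the rdma_resolve_addr and rdma_resolve_route calls exercise.
The sketch below shows that sequence under stated assumptions: error events are handled minimally, and
building the destination sockaddr with getaddrinfo is just one convenient option.

    #include <stdio.h>
    #include <netdb.h>
    #include <rdma/rdma_cma.h>

    /* Wait for one CM event and check that it is the expected type.
     * A real application would inspect error events as well. */
    static int wait_for(struct rdma_event_channel *ch,
                        enum rdma_cm_event_type type)
    {
        struct rdma_cm_event *event;
        int ok;

        if (rdma_get_cm_event(ch, &event))
            return -1;
        ok = (event->event == type);
        rdma_ack_cm_event(event);
        return ok ? 0 : -1;
    }

    int main(int argc, char **argv)
    {
        struct rdma_event_channel *ch;
        struct rdma_cm_id *id;
        struct addrinfo *dst;

        if (argc < 2 || getaddrinfo(argv[1], NULL, NULL, &dst))
            return 1;

        ch = rdma_create_event_channel();
        if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP))
            return 1;

        /* No source address is given; one is selected based on the
         * destination and local routing tables. */
        if (rdma_resolve_addr(id, NULL, dst->ai_addr, 2000) ||
            wait_for(ch, RDMA_CM_EVENT_ADDR_RESOLVED))
            return 1;

        /* Route (path) resolution; this is the step satisfied by path
         * record data, possibly cached by ibacm. */
        if (rdma_resolve_route(id, 2000) ||
            wait_for(ch, RDMA_CM_EVENT_ROUTE_RESOLVED))
            return 1;

        printf("address and route resolved for %s\n", argv[1]);
        rdma_destroy_id(id);
        rdma_destroy_event_channel(ch);
        freeaddrinfo(dst);
        return 0;
    }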
The current implementation of the IB ACM has several additional restrictions:
- The ibacm is limited in its handling of dynamic changes. ibacm must be stopped and restarted if a
cluster is reconfigured.
- Cached data does not time out and is only updated if a new resolution request is received from a
different QPN than a cached request.
- Support for IPv6 has not been verified.
- The number of addresses that can be assigned to a single endpoint is limited to 4.
- The number of multicast groups that an endpoint can support is limited to 2.
The ibacm contains several internal caches, including caches for GID and LID destination addresses.
These caches can be optionally preloaded. ibacm supports the "full" PathRecord format produced by the
OpenSM dump_pr plugin, which can be used to preload these caches. The file format is selected in the
ibacm_opts.cfg file via the route_preload setting, which should be set to full_opensm_v1 for this
format. The default is none, which does not preload these caches. See dump_pr.notes.txt in dump_pr for
more information on the full_opensm_v1 file format and how to configure OpenSM to generate this file.
Additionally, the name, IPv4, and IPv6 caches can be preloaded by using the addr_preload option. The
default is none, which does not preload these caches. To preload these caches, set this option to
acm_hosts and configure the addr_data_file appropriately.
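For example, enabling both preload mechanisms could look like the following ibacm_opts.cfg fragment;
the file paths are placeholders, and the route_data_file option name is assumed to parallel
addr_data_file:

    route_preload full_opensm_v1
    route_data_file /path/to/opensm-dump.txt
    addr_preload acm_hosts
    addr_data_file /path/to/acm_hosts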
SEE ALSO
ibacm(7), ib_acme(1), rdma_cm(7)
ibacm 2013-07-18 ibacm(1)