Provided by: freebsd-manpages_11.1-3_all

NAME

     NUMA — Non-Uniform Memory Access

SYNOPSIS

     options SMP
     options MAXMEMDOM=16

     #include <sys/numa.h>
     #include <sys/cpuset.h>
     #include <sys/bus.h>

DESCRIPTION

     Non-Uniform Memory Access is a computer architecture design in which the costs of access between
     processors, memory, and I/O devices in a given system are unequal.

     In a NUMA architecture, the latency to access specific memory or IO devices depends upon which processor
     the memory or device is attached to.  Accessing memory local to a processor is faster than accessing memory
     that is connected to one of the other processors.

     NUMA is enabled when the MAXMEMDOM option is used in a kernel configuration file and is set to a value
     greater than 1.

     Thread and process NUMA policies are controlled with the numa_setaffinity(2) and numa_getaffinity(2)
     syscalls.
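
     As an illustration, the sketch below queries the calling thread's current policy and switches it to
     round-robin.  The structure, constant, and argument conventions used here (struct
     vm_domain_policy_entry, VM_POLICY_ROUND_ROBIN, CPU_WHICH_TID, and an id of -1 meaning the calling
     thread) are assumed to match the interface documented in numa_setaffinity(2) and should be checked
     against that page.

           #include <sys/types.h>
           #include <sys/numa.h>
           #include <sys/cpuset.h>

           #include <err.h>
           #include <stdio.h>

           int
           main(void)
           {
                   struct vm_domain_policy_entry pol;

                   /*
                    * An id of -1 with CPU_WHICH_TID is assumed to mean the
                    * calling thread, as with cpuset_getaffinity(2).
                    */
                   if (numa_getaffinity(CPU_WHICH_TID, -1, &pol) != 0)
                           err(1, "numa_getaffinity");
                   printf("policy %d, domain %d\n", (int)pol.policy, pol.domain);

                   /*
                    * Request round-robin allocation across all domains; the
                    * domain field is assumed to be ignored for this policy.
                    */
                   pol.policy = VM_POLICY_ROUND_ROBIN;
                   pol.domain = -1;
                   if (numa_setaffinity(CPU_WHICH_TID, -1, &pol) != 0)
                           err(1, "numa_setaffinity");
                   return (0);
           }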

     The numactl(1) tool is available for starting processes with a non-default policy, or for changing the
     policy of an existing thread or process.

     Systems with non-uniform access to I/O devices may mark those devices with the local VM domain identifier.
     Drivers can find out their local domain information by calling bus_get_domain(9).
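
     For example, a driver attach routine might record its local domain as sketched below, where
     bus_get_domain(9) fails (returning ENOENT) when no domain information is available for the device.
     The driver name is hypothetical; only the bus_get_domain(9) call itself is taken from the interface
     referenced above.

           #include <sys/param.h>
           #include <sys/bus.h>

           static int
           mydrv_attach(device_t dev)              /* hypothetical driver */
           {
                   int domain;

                   if (bus_get_domain(dev, &domain) != 0)
                           domain = -1;            /* no locality information */
                   device_printf(dev, "local VM domain: %d\n", domain);

                   /* ... normal attach work would continue here ... */
                   return (0);
           }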

   MIB Variables
     These sysctl(8) MIB variables control the operation of NUMA and expose information about it; a short
     query example follows the list:

     vm.ndomains
             The number of VM domains which have been detected.

     vm.default_policy
              The default VM domain allocation policy.  Defaults to "first-touch-rr".  The valid values are
              "first-touch", "first-touch-rr", and "rr", where "rr" is shorthand for "round-robin".  See
              numa_setaffinity(2) for more information about the available policies.

     vm.phys_locality
              A table indicating the relative access cost between each pair of VM domains.  A value of 10
              indicates equal cost.  A value of -1 means that no locality information is available.

     vm.phys_segs
             The map of physical memory, grouped by VM domain.
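
     These variables can be inspected from the command line with sysctl(8), or programmatically with
     sysctlbyname(3) as in the sketch below, which assumes vm.default_policy is returned as a string, as
     its values above suggest.

           #include <sys/types.h>
           #include <sys/sysctl.h>

           #include <err.h>
           #include <stdio.h>

           int
           main(void)
           {
                   char policy[32];
                   size_t len;
                   int ndomains;

                   /* Number of detected VM domains. */
                   len = sizeof(ndomains);
                   if (sysctlbyname("vm.ndomains", &ndomains, &len, NULL, 0) != 0)
                           err(1, "vm.ndomains");

                   /* Default allocation policy, e.g. "first-touch-rr". */
                   len = sizeof(policy);
                   if (sysctlbyname("vm.default_policy", policy, &len, NULL, 0) != 0)
                           err(1, "vm.default_policy");

                   printf("%d VM domain(s), default policy %s\n", ndomains, policy);
                   return (0);
           }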

IMPLEMENTATION NOTES

     The current NUMA implementation is VM-focused.  The hardware NUMA domains are mapped into a contiguous,
     non-sparse VM domain space, starting from 0.  Thus, VM domain information (for example, the domain
     identifier) is not necessarily the same as is found in the hardware-specific information.

     The NUMA allocation policies are implemented as a policy and iterator in sys/vm/vm_domain.c and
     sys/vm/vm_domain.h.  Policy information is available in both struct thread and struct proc.  Processes
     inherit NUMA policy from parent processes and threads inherit NUMA policy from parent threads.  Note that
     threads do not explicitly inherit their NUMA policy from processes.  Instead, if no thread policy is set,
     the system will fall back to the process policy.

     For now, NUMA domain policies only influence physical page allocation in sys/vm/vm_phys.c.  This is useful
     for userland memory allocation, but not for kernel and driver memory allocation.  These features will be
     implemented in future work.

SEE ALSO

     numactl(1), numa_getaffinity(2), numa_setaffinity(2), bus_get_domain(9)

HISTORY

     NUMA first appeared in FreeBSD 9.0 as a first-touch allocation policy with a fail-over to round-robin
     allocation and was not configurable.  It was then modified in FreeBSD 10.0 to implement a round-robin
     allocation policy and was also not configurable.

     The numa_getaffinity(2) and numa_setaffinity(2) syscalls first appeared in FreeBSD 11.0.

     The numactl(1) tool first appeared in FreeBSD 11.0.

AUTHORS

     This manual page was written by Adrian Chadd <adrian@FreeBSD.org>.

NOTES

     No statistics are kept to indicate how often NUMA allocation policies succeed or fail.