Provided by: slurm-llnl_2.2.7-1_i386

NAME

       salloc  -  Obtain  a  SLURM  job allocation (a set of nodes), execute a
       command, and then release the allocation when the command is finished.

SYNOPSIS

       salloc [options] [<command> [command args]]

DESCRIPTION

       salloc is used to allocate a SLURM job allocation, which is  a  set  of
       resources  (nodes),  possibly with some set of constraints (e.g. number
       of  processors  per  node).   When  salloc  successfully  obtains   the
       requested  allocation,  it then runs the command specified by the user.
       Finally,  when  the  user  specified  command   is   complete,   salloc
       relinquishes the job allocation.

       The  command may be any program the user wishes.  Some typical commands
       are xterm, a shell script containing srun commands, and srun  (see  the
       EXAMPLES  section).  If  no  command  is  specified,  then the value of
       SallocDefaultCommand in slurm.conf is used. If SallocDefaultCommand  is
       not set, then salloc runs the user's default shell.

       NOTE:  The  salloc  logic  includes  support  to  save  and restore the
       terminal  line  settings  and  is  designed  to  be  executed  in   the
       foreground.  If  you  need to execute salloc in the background, set its
       standard input to some file, for example: "salloc -n16 a.out </dev/null
       &"

OPTIONS

       -A, --account=<account>
              Charge  resources  used  by  this job to specified account.  The
              account is an arbitrary string. The account name may be  changed
              after job submission using the scontrol command.

       --acctg-freq=<seconds>
              Define  the  job accounting sampling interval.  This can be used
              to override  the  JobAcctGatherFrequency  parameter  in  SLURM's
              configuration  file,  slurm.conf.  A value of zero disables  the
              periodic job sampling and  provides  accounting  information
              only  on  job  termination (reducing SLURM interference with the
              job).

       -B, --extra-node-info=<sockets[:cores[:threads]]>
              Request a specific allocation of resources with  details  as  to
              the number and type of computational resources within a cluster:
              number of sockets (or physical processors) per node,  cores  per
              socket,  and  threads  per  core.  The total amount of resources
              being requested is the product of all of the terms.  Each  value
              specified  is considered a minimum.  An asterisk (*) can be used
              as a placeholder indicating that all available resources of that
              type  are  to be utilized.  As with nodes, the individual levels
              can also be specified in separate options if desired:
                  --sockets-per-node=<sockets>
                  --cores-per-socket=<cores>
                  --threads-per-core=<threads>
              If  task/affinity  plugin  is  enabled,   then   specifying   an
              allocation  in this manner also sets a default --cpu_bind option
              of threads if the -B option specifies a thread count,  otherwise
              an  option  of  cores if a core count is specified, otherwise an
              option   of   sockets.    If   SelectType   is   configured   to
              select/cons_res,   it   must   have   a  parameter  of  CR_Core,
              CR_Core_Memory, CR_Socket, or CR_Socket_Memory for  this  option
              to be honored.  This option is not supported on BlueGene systems
              (select/bluegene plugin is configured).  If not  specified,
              scontrol show job will display 'ReqS:C:T=*:*:*'.

       --begin=<time>
              Submit  the  job  request  to the SLURM controller  immediately,
              as usual, but tell the controller to defer the allocation  of
              the job until the specified time.

              Time may be of the form HH:MM:SS to run a job at a specific time
              of day (seconds are optional).  (If that time is  already  past,
              the  next day is assumed.)  You may also specify midnight, noon,
              or teatime (4pm) and you can have a time-of-day suffixed with AM
              or  PM  for running in the morning or the evening.  You can also
              say what day the job will be run, by specifying a  date  of  the
              form MMDDYY, MM/DD/YY, or YYYY-MM-DD. Combine date and time
              using the following format YYYY-MM-DD[THH:MM[:SS]]. You can also  give
              times  like  now + count time-units, where the time-units can be
              seconds (default), minutes, hours, days, or weeks  and  you  can
              tell  SLURM  to  run the job today with the keyword today and to
              run the job tomorrow with the keyword tomorrow.  The  value  may
              be changed after job submission using the scontrol command.  For
              example:
                 --begin=16:00
                 --begin=now+1hour
                 --begin=now+60           (seconds by default)
                 --begin=2010-01-20T12:34:00

              Notes on date/time specifications:
               -  Although  the  'seconds'  field   of   the   HH:MM:SS   time
              specification is allowed by the code, note that the poll time of
              the SLURM scheduler is not precise enough to guarantee  dispatch
              of  the  job  on  the exact second.  The job will be eligible to
              start on the next poll following the specified time.  The  exact
              poll  interval  depends on the SLURM scheduler (e.g., 60 seconds
              with the default sched/builtin).
               -  If  no  time  (HH:MM:SS)  is  specified,  the   default   is
              (00:00:00).
               -  If a date is specified without a year (e.g., MM/DD) then the
              current year is assumed, unless the  combination  of  MM/DD  and
              HH:MM:SS  has  already  passed  for that year, in which case the
              next year is used.

       --bell Force salloc to ring the terminal bell when the  job  allocation
              is  granted  (and  only if stdout is a tty).  By default, salloc
              only rings the bell if the allocation is pending for  more  than
              ten  seconds  (and only if stdout is a tty). Also see the option
              --no-bell.

       --comment=<string>
              An arbitrary comment.

       -C, --constraint=<list>
              Specify a list of constraints.   The  constraints  are  features
              that have been assigned to the nodes by the slurm administrator.
              The list of constraints may include multiple features  separated
              by  ampersand  (AND)  and/or  vertical  bar (OR) operators.  For
              example:             --constraint="opteron&video"             or
              --constraint="fast|faster".   In  the  first example, only nodes
              having both the feature "opteron" AND the feature  "video"  will
              be  used.   There  is  no mechanism to specify that you want one
              node with  feature  "opteron"  and  another  node  with  feature
              "video" in case no node has both features.  If only one of a set
              of possible options should be used for all allocated nodes, then
              use  the  OR  operator  and  enclose  the  options within square
              brackets.  For example: "--constraint=[rack1|rack2|rack3|rack4]"
              might  be  used to specify that all nodes must be allocated on a
              single rack of the cluster, but any of those four racks  can  be
              used.   A  request  can  also specify the number of nodes needed
              with some feature by appending an asterisk and count  after  the
              feature     name.      For     example     "salloc    --nodes=16
              --constraint=graphics*4 ..."  indicates that the job requires 16
              nodes and that at least four of those nodes must have the feature
              "graphics."  Constraints with node counts may only  be  combined
              with  AND  operators.   If no nodes have the requested features,
              then the job will be rejected by the slurm job manager.

       --contiguous
              If set, then the allocated nodes must  form  a  contiguous  set.
              Not honored with the topology/tree or topology/3d_torus plugins,
              both of which can modify the node ordering.

       --cores-per-socket=<cores>
              Restrict node selection to nodes with  at  least  the  specified
              number of cores per socket.  See additional information under -B
              option above when task/affinity plugin is enabled.

       --cpu_bind=[{quiet,verbose},]type
              Bind tasks to CPUs. Used only when the task/affinity  plugin  is
              enabled.    The   configuration  parameter  TaskPluginParam  may
              override these options.   For  example,  if  TaskPluginParam  is
              configured  to  bind to cores, your job will not be able to bind
              tasks to sockets.  NOTE: To have  SLURM  always  report  on  the
              selected  CPU  binding for all commands executed in a shell, you
              can  enable  verbose  mode   by   setting   the   SLURM_CPU_BIND
              environment variable value to "verbose".

              The  following  informational environment variables are set when
              --cpu_bind is in use:
                      SLURM_CPU_BIND_VERBOSE
                      SLURM_CPU_BIND_TYPE
                      SLURM_CPU_BIND_LIST

              See  the  ENVIRONMENT  VARIABLE  section  for  a  more  detailed
              description of the individual SLURM_CPU_BIND* variables.

              When  using --cpus-per-task to run multithreaded tasks, be aware
              that CPU binding is inherited from the parent  of  the  process.
              This  means that the multithreaded task should either specify or
              clear the CPU binding itself to avoid having all threads of  the
              multithreaded   task  use  the  same  mask/CPU  as  the  parent.
              Alternatively, fat masks (masks  which  specify  more  than  one
              allowed  CPU)  could  be  used for the tasks in order to provide
              multiple CPUs for the multithreaded tasks.

              By default, a job step has access to every CPU allocated to  the
              job.   To  ensure  that  distinct CPUs are allocated to each job
              step, use the --exclusive option.

              If the job step allocation includes an allocation with a  number
              of sockets, cores, or threads equal to the number of tasks to be
              started  then  the  tasks  will  by  default  be  bound  to  the
              appropriate  resources.   Disable  this  mode  of  operation  by
              explicitly setting "--cpu_bind=none".

              Note that a job step can be allocated different numbers of  CPUs
              on each node or be allocated CPUs not starting at location zero.
              Therefore one of the options which  automatically  generate  the
              task  binding  is  recommended.   Explicitly  specified masks or
              bindings are only honored when the job step has  been  allocated
              every available CPU on the node.

              Binding  a task to a NUMA locality domain means to bind the task
              to the set of CPUs that belong to the NUMA  locality  domain  or
              "NUMA  node".   If  NUMA  locality  domain  options  are used on
              systems with no NUMA support, then each socket is  considered  a
              locality domain.

              Supported options include:

              q[uiet]
                     Quietly bind before task runs (default)

              v[erbose]
                     Verbosely report binding before task runs

              no[ne] Do not bind tasks to CPUs (default)

              rank   Automatically  bind  by task rank.  Task zero is bound to
                     socket (or core or  thread)  zero,  etc.   Not  supported
                     unless the entire node is allocated to the job.

              map_cpu:<list>
                     Bind  by  mapping  CPU  IDs  to  tasks as specified where
                     <list> is  <cpuid1>,<cpuid2>,...<cpuidN>.   CPU  IDs  are
                     interpreted  as  decimal  values unless they are preceded
                     with  '0x'  in  which  case  they  are   interpreted   as
                     hexadecimal values.  Not supported unless the entire node
                     is allocated to the job.

              mask_cpu:<list>
                     Bind by setting CPU masks on  tasks  as  specified  where
                     <list>  is  <mask1>,<mask2>,...<maskN>.   CPU  masks  are
                     always interpreted  as  hexadecimal  values  but  can  be
                     preceded with an optional '0x'.

              sockets
                     Automatically  generate  masks  binding tasks to sockets.
                     Only the CPUs on the socket which have been allocated  to
                     the  job  will  be  used.  If the number of tasks differs
                     from the number of allocated sockets this can  result  in
                     sub-optimal binding.

              cores  Automatically  generate masks binding tasks to cores.  If
                     the number of tasks differs from the number of  allocated
                     cores this can result in sub-optimal binding.

              threads
                     Automatically  generate  masks  binding tasks to threads.
                     If the  number  of  tasks  differs  from  the  number  of
                     allocated threads this can result in sub-optimal binding.

              ldoms  Automatically   generate  masks  binding  tasks  to  NUMA
                     locality domains.  If the number of  tasks  differs  from
                     the  number of allocated locality domains this can result
                     in sub-optimal binding.

              help   Show this help message
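
              For example, to request four tasks bound one per core with
              verbose binding reports (a.out stands in for the user's own
              program):

                 salloc -n4 --cpu_bind=verbose,cores srun a.out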

       -c, --cpus-per-task=<ncpus>
              Advise the SLURM controller that ensuing job steps will  require
              ncpus  number  of processors per task.  Without this option, the
              controller will just try to allocate one processor per task.

              For instance, consider an application that  has  4  tasks,  each
              requiring   3  processors.   If  our  cluster  is  comprised  of
              quad-processor  nodes and we simply ask for 12  processors,  the
              controller  might  give  us only 3 nodes.  However, by using the
              --cpus-per-task=3 option, the controller knows that  each  task
              requires  3 processors on the same node, and the controller will
              grant an allocation of 4 nodes, one for each of the 4 tasks.
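
              For example, the scenario above could be requested as follows
              (a.out stands in for the user's own program):

                 salloc -n4 -c3 srun a.out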

       -d, --dependency=<dependency_list>
              Defer the start of this job  until  the  specified  dependencies
              have been satisfied.  <dependency_list> is of the form
              <type:job_id[:job_id][,type:job_id[:job_id]]>.   Many  jobs  can
              share  the  same  dependency  and  these jobs may even belong to
              different  users.  The value may be changed after job submission
              using the scontrol command.

              after:job_id[:jobid...]
                     This  job  can  begin  execution after the specified jobs
                     have begun execution.

              afterany:job_id[:jobid...]
                     This job can begin execution  after  the  specified  jobs
                     have terminated.

              afternotok:job_id[:jobid...]
                     This  job  can  begin  execution after the specified jobs
                     have terminated in some failed state (non-zero exit code,
                     node failure, timed out, etc).

              afterok:job_id[:jobid...]
                     This  job  can  begin  execution after the specified jobs
                     have successfully executed (ran  to  completion  with  an
                     exit code of zero).

              singleton
                     This   job  can  begin  execution  after  any  previously
                     launched jobs sharing the same job  name  and  user  have
                     terminated.
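
              For example, to defer an allocation until a previously
              submitted job completes successfully (the job id and script
              name are illustrative):

                 salloc --dependency=afterok:1234 -N2 ./my_script.sh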

       -D, --chdir=<path>
              Change directory to path before beginning execution.

       --exclusive
              The  job  allocation cannot share nodes with other running jobs.
              This is the opposite of --share; whichever option is seen last on
              the  command  line  will  win.   (The  default  shared/exclusive
              behavior depends on system configuration.)

       -F, --nodefile=<node file>
              Much like --nodelist, but the node list is contained in a  file
              with the specified name.  The node names in the list may span
              multiple lines in the file.  Duplicate node names in the  file
              will be ignored.  The order of the node names in the list is not
              important; the node names will be sorted by SLURM.
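
              For example, a node file (here named my_nodes, with
              illustrative node names) might contain:

                 tux1
                 tux2,tux3

              and could be used as:

                 salloc --nodefile=my_nodes ./my_script.sh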

       --get-user-env[=timeout][mode]
              This option will load login environment variables for  the  user
              specified  in  the  --uid option.  The environment variables are
              retrieved by running something of this sort "su - <username>  -c
              /usr/bin/env"  and  parsing  the  output.   Be  aware  that  any
              environment variables already set in salloc's  environment  will
              take  precedence  over  any  environment variables in the user's
              login environment.  The optional timeout value  is  in  seconds.
              Default value is 3 seconds.  The optional mode value controls
              the "su" options.  With a  mode  value  of  "S",  "su"  is  executed
              without  the  "-"  option.   With  a  mode value of "L", "su" is
              executed with the "-" option, replicating the login environment.
              If  mode is not specified, the mode established at SLURM build
              time is used.  Examples  of  use  include  "--get-user-env",
              "--get-user-env=10",          "--get-user-env=10L",          and
              "--get-user-env=S".  NOTE: This option only works if the  caller
              has  an  effective  uid  of  "root".  This option was originally
              created for use by Moab.

       --gid=<group>
              If salloc is run as root, and the --gid option is  used,  submit
              the job with group's group access permissions.  group may be the
              group name or the numerical group ID.

       --gres=<list>
              Specifies  a  comma  delimited  list   of   generic   consumable
              resources.    The   format   of   each  entry  on  the  list  is
              "name[:count[*cpu]]".   The  name  is  that  of  the  consumable
              resource.   The  count  is  the number of those resources with a
              default value of 1.  The specified resources will  be  allocated
              to  the job on each node allocated unless "*cpu" is appended, in
              which case the resources will be allocated on a per  cpu  basis.
              The  available  generic  consumable resources are configurable by
              the  system  administrator.   A  list   of   available   generic
              consumable  resources  will be printed and the command will exit
              if the option argument  is  "help".   Examples  of  use  include
              "--gres=gpus:2*cpu,disk=40G" and "--gres=help".

       -H, --hold
              Specify  the job is to be submitted in a held state (priority of
              zero).  A held job can now be released using scontrol  to  reset
              its priority (e.g. "scontrol update jobid=<id> priority=1").

       -h, --help
              Display help information and exit.

       --hint=<type>
              Bind tasks according to application hints

              compute_bound
                     Select  settings  for compute bound applications: use all
                     cores in each socket, one thread per core

              memory_bound
                     Select settings for memory bound applications:  use  only
                     one core in each socket, one thread per core

              [no]multithread
                     [don't]  use  extra  threads with in-core multi-threading
                     which can benefit communication intensive applications

              help   show this help message

       -I, --immediate[=<seconds>]
              Exit if resources are  not  available  within  the  time  period
              specified.  If no argument is given, resources must be available
              immediately for the request to succeed.  By default, --immediate
              is  off,  and  the  command  will  block  until resources become
              available.

       -J, --job-name=<jobname>
              Specify a name for the job allocation. The specified  name  will
              appear  along  with the job id number when querying running jobs
              on the system.   The  default  job  name  is  the  name  of  the
              "command" specified on the command line.

       --jobid=<jobid>
              Allocate  resources  as  the specified job id.  NOTE: Only valid
              for user root.

       -K, --kill-command[=signal]
              salloc always runs a user-specified command once the  allocation
              is  granted.   salloc will wait indefinitely for that command to
              exit.  If you specify the --kill-command option salloc will send
              a  signal  to  your  command  any time that the SLURM controller
              tells salloc that its job allocation has been revoked.  The  job
              allocation  can be revoked for a couple of reasons: someone used
              scancel to revoke the allocation, or the allocation reached  its
              time  limit.  If you do not specify a signal name or number, the
              default signal is SIGTERM.
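
              For example, to have SIGHUP sent to the command when the
              allocation is revoked (my_script.sh is an illustrative name):

                 salloc --kill-command=SIGHUP -n16 ./my_script.sh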

       -k, --no-kill
              Do not automatically terminate a job if one of the nodes it  has
              been allocated fails.  The user will assume the responsibilities
              for fault-tolerance should a node fail.  When there  is  a  node
              failure,  any  active  job steps (usually MPI jobs) on that node
              will almost certainly suffer a fatal error, but with  --no-kill,
              the  job  allocation  will not be revoked so the user may launch
              new job steps on the remaining nodes in their allocation.

              By default SLURM terminates the entire  job  allocation  if  any
              node fails in its range of allocated nodes.

       -L, --licenses=<license>
              Specification  of  licenses (or other resources available on all
              nodes of the cluster) which  must  be  allocated  to  this  job.
              License  names  can  be  followed  by an asterisk and count (the
              default count is one).  Multiple license names should  be  comma
              separated (e.g.  "--licenses=foo*4,bar").

       -m, --distribution=
              <block|cyclic|arbitrary|plane=<options>[:block|cyclic]>

              Specify alternate distribution methods for remote processes.  In
              salloc, this only sets environment variables that will  be  used
              by   subsequent   srun   requests.   This  option  controls  the
              assignment of tasks to the nodes on which  resources  have  been
              allocated,  and the distribution of those resources to tasks for
              binding (task affinity). The first distribution  method  (before
              the  ":")  controls  the distribution of resources across nodes.
              The optional second distribution method (after the ":") controls
              the  distribution  of  resources  across  sockets within a node.
              Note that with select/cons_res, the number of cpus allocated  on
              each   socket   and   node   may  be  different.  Refer  to  the
              mc_support.html  document  for  more  information  on   resource
              allocation,  assignment  of tasks to nodes, and binding of tasks
              to CPUs.

              First distribution method:

              block  The block distribution method will distribute tasks to  a
                     node  such  that  consecutive  tasks  share  a  node. For
                     example, consider an allocation of three nodes each  with
                     two  cpus.  A  four-task  block distribution request will
                     distribute those tasks to the nodes with  tasks  one  and
                     two on the first node, task three on the second node, and
                     task four on the third node.  Block distribution  is  the
                     default  behavior  if  the  number  of  tasks exceeds the
                     number of allocated nodes.

              cyclic The cyclic distribution method will distribute tasks to a
                     node  such  that  consecutive  tasks are distributed over
                     consecutive  nodes  (in  a  round-robin   fashion).   For
                     example,  consider an allocation of three nodes each with
                     two cpus. A four-task cyclic  distribution  request  will
                     distribute  those  tasks  to the nodes with tasks one and
                     four on the first node, task two on the second node,  and
                     task  three on the third node.  Note that when SelectType
                     is select/cons_res, the same number of CPUs  may  not  be
                     allocated   on  each  node.  Task  distribution  will  be
                     round-robin among all the  nodes  with  CPUs  yet  to  be
                     assigned  to  tasks.   Cyclic distribution is the default
                     behavior if the number of tasks is  no  larger  than  the
                     number of allocated nodes.

              plane  The  tasks are distributed in blocks of a specified size.
                     The options include a number representing the size of the
                     task   block.    This   is   followed   by   an  optional
                     specification of the task distribution  scheme  within  a
                     block of tasks and between the blocks of tasks.  For more
                     details (including examples and diagrams), please see
                     https://computing.llnl.gov/linux/slurm/mc_support.html
                     and
                     https://computing.llnl.gov/linux/slurm/dist_plane.html.

              arbitrary
                     The  arbitrary  method  of  distribution  will   allocate
                     processes  in-order  as listed in the file designated by
                     the environment variable SLURM_HOSTFILE.  If this
                     variable is set, it will override any other method
                     specified.  If not set, the method will default to block.
                     The hostfile must contain at a minimum the number of
                     hosts requested, one per line or comma separated.  If
                     specifying  a  task  count  (-n, --ntasks=<number>), your
                     tasks will be laid out on the nodes in the order  of  the
                     file.

              Second distribution method:

              block  The  block  distribution  method will distribute tasks to
                     sockets such that consecutive tasks share a socket.

              cyclic The cyclic distribution method will distribute  tasks  to
                     sockets  such that consecutive tasks are distributed over
                     consecutive sockets (in a round-robin fashion).
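
              For example, to distribute six tasks cyclically across three
              nodes (a.out stands in for the user's own program):

                 salloc -N3 -n6 -m cyclic srun a.out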

       --mail-type=<type>
              Notify user by email when certain event types occur.  Valid type
              values  are  BEGIN,  END,  FAIL,  REQUEUE,  and  ALL  (any state
              change). The user to be notified is indicated with --mail-user.

       --mail-user=<user>
              User to receive email notification of state changes  as  defined
              by --mail-type.  The default value is the submitting user.

       --mem=<MB>
              Specify the real memory required per node in MegaBytes.  Default
              value is DefMemPerNode and the maximum value  is  MaxMemPerNode.
              If configured, both parameters can be seen using the scontrol
              show config command.  This parameter would generally be used  if
              whole  nodes  are  allocated to jobs (SelectType=select/linear).
              Also see --mem-per-cpu.  --mem and  --mem-per-cpu  are  mutually
              exclusive.

       --mem-per-cpu=<MB>
              Minimum memory required per allocated CPU in MegaBytes.  Default
              value is DefMemPerCPU and the maximum value is MaxMemPerCPU (see
              exception  below). If configured, both parameters can be seen
              using the scontrol show config command.  Note that if the  job's
              --mem-per-cpu  value  exceeds  the configured MaxMemPerCPU, then
              the user's limit will be treated as a  memory  limit  per  task;
              --mem-per-cpu  will  be  reduced  to  a  value  no  larger  than
              MaxMemPerCPU;  --cpus-per-task  will  be  set   and   value   of
              --cpus-per-task  multiplied  by the new --mem-per-cpu value will
              equal the original --mem-per-cpu value specified  by  the  user.
              This  parameter would generally be used if individual processors
              are allocated to jobs  (SelectType=select/cons_res).   Also  see
              --mem.  --mem and --mem-per-cpu are mutually exclusive.
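
              As an illustration of the MaxMemPerCPU exception (the values
              are hypothetical): if MaxMemPerCPU is 2048 MB and a job
              specifies "--mem-per-cpu=4096", then --mem-per-cpu is reduced
              to 2048 MB and --cpus-per-task is set to 2, so that 2 CPUs
              times 2048 MB still provide the 4096 MB per task originally
              requested.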

       --mem_bind=[{quiet,verbose},]type
              Bind tasks to memory. Used only when the task/affinity plugin is
              enabled and the NUMA memory functions are available.  Note  that
              the  resolution  of  CPU  and  memory binding may differ on some
              architectures. For example, CPU binding may be performed at  the
              level  of the cores within a processor while memory binding will
              be performed at the level of  nodes,  where  the  definition  of
              "nodes"  may  differ  from system to system. The use of any type
              other than "none" or "local" is not recommended.   If  you  want
              greater control, try running a simple test code with the options
              "--cpu_bind=verbose,none --mem_bind=verbose,none"  to  determine
              the specific configuration.

              NOTE: To have SLURM always report on the selected memory binding
              for all commands executed in a shell,  you  can  enable  verbose
              mode by setting the SLURM_MEM_BIND environment variable value to
              "verbose".

              The following informational environment variables are  set  when
              --mem_bind is in use:

                      SLURM_MEM_BIND_VERBOSE
                      SLURM_MEM_BIND_TYPE
                      SLURM_MEM_BIND_LIST

              See  the  ENVIRONMENT  VARIABLES  section  for  a  more detailed
              description of the individual SLURM_MEM_BIND* variables.

              Supported options include:

              q[uiet]
                     quietly bind before task runs (default)

              v[erbose]
                     verbosely report binding before task runs

              no[ne] don't bind tasks to memory (default)

              rank   bind by task rank (not recommended)

              local  Use memory local to the processor in use

              map_mem:<list>
                     bind by mapping a node's memory  to  tasks  as  specified
                     where  <list>  is <cpuid1>,<cpuid2>,...<cpuidN>.  CPU IDs
                      are interpreted as decimal values unless they are
                      preceded with '0x', in which case they are interpreted
                      as hexadecimal values (not recommended)

              mask_mem:<list>
                     bind by setting memory masks on tasks as specified  where
                      <list> is <mask1>,<mask2>,...<maskN>.  Memory masks are
                      always interpreted as hexadecimal values.  Note that
                     masks  must  be  preceded with a '0x' if they don't begin
                     with [0-9] so they are seen as numerical values by srun.

              help   show this help message
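
               A hedged sketch of the map_mem form; the task count, mapping,
               and program name are illustrative rather than prescribed:

```shell
# Hypothetical mapping: tasks 0..3 bind to the memory of locality
# domains 0..3, with the binding reported verbosely.  "./a.out" is a
# placeholder program; the salloc call is skipped where SLURM is not
# installed.
mem_map="0,1,2,3"
if command -v salloc >/dev/null; then
    salloc -n4 --mem_bind=verbose,map_mem:"$mem_map" srun ./a.out
fi
```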

       --mincpus=<n>
              Specify a minimum number of logical cpus/processors per node.

       -N, --nodes=<minnodes[-maxnodes]>
              Request that a minimum of minnodes nodes be  allocated  to  this
              job.   The  scheduler  may decide to launch the job on more than
              minnodes nodes.  A limit  on  the  maximum  node  count  may  be
              specified  with  maxnodes (e.g. "--nodes=2-4").  The minimum and
              maximum node count may be the same to specify a specific  number
              of  nodes  (e.g.  "--nodes=2-2"  will  ask  for two and ONLY two
              nodes).  The partition's node limits supersede those of the job.
              If  a  job's  node limits are outside of the range permitted for
              its associated partition, the job will  be  left  in  a  PENDING
              state.   This  permits  possible execution at a later time, when
              the partition limit is changed.  If a job node limit exceeds the
              number  of  nodes  configured  in the partition, the job will be
              rejected.  Note that the environment variable SLURM_NNODES  will
              be  set to the count of nodes actually allocated to the job. See
              the ENVIRONMENT VARIABLES  section for more information.  If  -N
              is  not  specified,  the  default behavior is to allocate enough
              nodes to satisfy the requirements of the -n and -c options.  The
              job will be allocated as many nodes as possible within the range
              specified and without delaying the initiation of the job.
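
               The range syntax can be sketched as follows; the commands are
               illustrative and are skipped where SLURM is not installed:

```shell
# "2-4": at least two and at most four nodes; "2-2": exactly two.
node_range="2-4"
if command -v salloc >/dev/null; then
    salloc --nodes="$node_range" hostname
    salloc --nodes=2-2 hostname
fi
```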

       -n, --ntasks=<number>
               salloc does not launch tasks; it requests an allocation of
               resources and executes a command. This option advises the
              SLURM controller that job steps run within this allocation  will
              launch  a  maximum  of number tasks and sufficient resources are
              allocated to accomplish this.  The default is one task per node,
              but  note  that  the  --cpus-per-task  option  will  change this
              default.
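
               A sketch of sizing an allocation for the steps that will run
               inside it; the program name is hypothetical:

```shell
# Resources for up to 8 tasks; job steps launched with srun inside
# the allocation inherit this limit.  Skipped where SLURM is absent.
ntasks=8
if command -v salloc >/dev/null; then
    salloc --ntasks="$ntasks" srun ./my_app
fi
```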

       --network=<type>
              Specify the communication protocol to be used.  This  option  is
               supported on AIX systems.  Since POE is used to launch tasks,
               this option is not normally used directly; it may instead be
               specified using the SLURM_NETWORK environment variable.  The
               interpretation of type
              is system dependent.  For systems with an IBM Federation switch,
              the  following  comma-separated  and  case insensitive types are
              recognized: IP (the default is user-space),  SN_ALL,  SN_SINGLE,
              BULK_XFER  and  adapter  names   (e.g. SNI0 and SNI1).  For more
              information,  on  IBM  systems  see  poe  documentation  on  the
              environment  variables  MP_EUIDEVICE and MP_USE_BULK_XFER.  Note
               that only four job steps may be active at once on a node with
              the BULK_XFER option due to limitations in the Federation switch
              driver.

       --nice[=adjustment]
              Run the job with an adjusted scheduling priority  within  SLURM.
              With no adjustment value the scheduling priority is decreased by
              100. The adjustment range is from -10000 (highest  priority)  to
              10000  (lowest  priority).  Only  privileged users can specify a
              negative adjustment. NOTE: This option is presently  ignored  if
              SchedulerType=sched/wiki or SchedulerType=sched/wiki2.

       --ntasks-per-core=<ntasks>
              Request the maximum ntasks be invoked on each core.  Meant to be
              used with the --ntasks  option.   Related  to  --ntasks-per-node
              except  at the core level instead of the node level.  Masks will
               automatically be generated to bind the tasks to specific cores
              unless  --cpu_bind=none  is specified.  NOTE: This option is not
              supported      unless      SelectTypeParameters=CR_Core       or
              SelectTypeParameters=CR_Core_Memory is configured.

       --ntasks-per-socket=<ntasks>
              Request  the maximum ntasks be invoked on each socket.  Meant to
              be used with the --ntasks option.  Related to  --ntasks-per-node
              except  at  the  socket  level instead of the node level.  Masks
              will automatically be generated to bind the  tasks  to  specific
              sockets  unless --cpu_bind=none is specified.  NOTE: This option
              is  not  supported  unless   SelectTypeParameters=CR_Socket   or
              SelectTypeParameters=CR_Socket_Memory is configured.

       --ntasks-per-node=<ntasks>
              Request the maximum ntasks be invoked on each node.  Meant to be
              used  with   the   --nodes   option.    This   is   related   to
              --cpus-per-task=ncpus,  but  does  not  require knowledge of the
              actual number of cpus on each node.  In some cases, it  is  more
              convenient  to  be  able to request that no more than a specific
              number of tasks be invoked  on  each  node.   Examples  of  this
              include  submitting  a  hybrid MPI/OpenMP app where only one MPI
              "task/rank" should be assigned to each node while  allowing  the
              OpenMP  portion to utilize all of the parallelism present in the
              node, or submitting a  single  setup/cleanup/monitoring  job  to
              each  node  of a pre-existing allocation as one step in a larger
              job script.
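
               The hybrid MPI/OpenMP case above can be sketched as follows;
               the program name and thread count are illustrative:

```shell
# One MPI rank per node across four nodes; OpenMP threads consume
# the remaining parallelism within each node.  Skipped where SLURM
# is not installed.
export OMP_NUM_THREADS=8
if command -v salloc >/dev/null; then
    salloc --nodes=4 --ntasks-per-node=1 srun ./hybrid_app
fi
```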

       --no-bell
              Silence salloc's use of the terminal bell. Also see  the  option
              --bell.

       --no-shell
              immediately  exit  after allocating resources, without running a
              command. However, the SLURM job will still be created  and  will
              remain active and will own the allocated resources as long as it
              is active.  You will have a SLURM  job  id  with  no  associated
              processes  or  tasks.  You can submit srun commands against this
              resource allocation, if you specify the --jobid= option with the
              job  id  of this SLURM job.  Or, this can be used to temporarily
              reserve a set of resources so that other jobs  cannot  use  them
              for some period of time.  (Note that the SLURM job is subject to
              the normal constraints on jobs, including time limits,  so  that
              eventually  the  job  will  terminate  and the resources will be
              freed, or you can terminate the job manually using  the  scancel
              command.)
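
               The workflow described above might look like this; the job id
               65541 is hypothetical (in practice it would be parsed from
               salloc's "Granted job allocation" message), and the commands
               are skipped where SLURM is not installed:

```shell
if command -v salloc >/dev/null; then
    salloc --no-shell -N2          # prints "Granted job allocation <id>"
    srun --jobid=65541 hostname    # run a step against that allocation
    scancel 65541                  # release the nodes when finished
fi
```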

       -O, --overcommit
              Overcommit  resources.   Normally, salloc will allocate one task
              per processor.  By specifying --overcommit  you  are  explicitly
               allowing more than one task per processor.  However, no more
               than
              MAX_TASKS_PER_NODE tasks are permitted to execute per node.

       -p, --partition=<partition_names>
              Request a specific partition for the  resource  allocation.   If
              not  specified,  the  default  behaviour  is  to allow the slurm
              controller to select the default partition as designated by  the
              system   administrator.  If  the  job  can  use  more  than  one
               partition, specify their names in a comma-separated list and
               the
              one offering earliest initiation will be used.

       -Q, --quiet
              Suppress  informational  messages from salloc. Errors will still
              be displayed.

       --qos=<qos>
              Request a quality of service for the job.   QOS  values  can  be
              defined  for  each user/cluster/account association in the SLURM
              database.  Users will be limited to their association's  defined
              set   of   qos's   when   the   SLURM  configuration  parameter,
               AccountingStorageEnforce, includes "qos" in its definition.

       --reservation=<name>
              Allocate resources for the job from the named reservation.

       -s, --share
              The job allocation can share  nodes  with  other  running  jobs.
              (The   default  shared/exclusive  behaviour  depends  on  system
               configuration.)  This may result in the allocation being granted
              sooner  than  if the --share option was not set and allow higher
              system utilization,  but  application  performance  will  likely
              suffer due to competition for resources within a node.

       --signal=<sig_num>[@<sig_time>]
              When  a  job is within sig_time seconds of its end time, send it
              the signal sig_num.  Due to the resolution of event handling  by
              SLURM,  the  signal  may  be  sent up to 60 seconds earlier than
              specified.  sig_num may either be a signal number or name  (e.g.
               "10" or "USR1").  sig_time must have an integer value between
               zero
              and 65535.  By default, no signal is sent before the  job's  end
              time.   If  a  sig_num  is  specified  without any sig_time, the
              default time will be 60 seconds.
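
               A hedged sketch of a checkpointing use of this option; the
               program name is a placeholder:

```shell
# Hypothetical request: deliver SIGUSR1 roughly 120 seconds before
# the time limit, so the application can checkpoint.  Skipped where
# SLURM is not installed.
sig_spec="USR1@120"
if command -v salloc >/dev/null; then
    salloc --signal="$sig_spec" -t 30 ./checkpointing_app
fi
```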

       --sockets-per-node=<sockets>
              Restrict node selection to nodes with  at  least  the  specified
              number  of  sockets.  See additional information under -B option
              above when task/affinity plugin is enabled.

       -t, --time=<time>
              Set a limit on the total run time of the job allocation.  If the
              requested time limit exceeds the partition's time limit, the job
              will be left in a PENDING state  (possibly  indefinitely).   The
              default time limit is the partition's time limit.  When the time
               limit is reached, each task in each job step is sent SIGTERM
              followed  by  SIGKILL. The interval between signals is specified
              by the SLURM configuration parameter KillWait.  A time limit  of
              zero  requests  that  no time limit be imposed.  Acceptable time
              formats       include       "minutes",        "minutes:seconds",
              "hours:minutes:seconds",  "days-hours", "days-hours:minutes" and
              "days-hours:minutes:seconds".
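
               The accepted formats can be sketched as follows; the commands
               are illustrative and skipped where SLURM is not installed:

```shell
# Three equivalent ways to request a 90-minute limit.
if command -v salloc >/dev/null; then
    salloc --time=90 hostname        # minutes
    salloc --time=1:30:00 hostname   # hours:minutes:seconds
    salloc --time=0-1:30 hostname    # days-hours:minutes
fi
```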

       --threads-per-core=<threads>
              Restrict node selection to nodes with  at  least  the  specified
              number of threads per core.  See additional information under -B
              option above when task/affinity plugin is enabled.

       --time-min=<time>
              Set a minimum time limit on the job allocation.   If  specified,
               the job may have its --time limit lowered to a value no lower
              than --time-min if doing so permits the job to  begin  execution
              earlier  than otherwise possible.  The job's time limit will not
              be changed after  the  job  is  allocated  resources.   This  is
              performed   by  a  backfill  scheduling  algorithm  to  allocate
              resources  otherwise  reserved   for   higher   priority   jobs.
              Acceptable  time  formats  include "minutes", "minutes:seconds",
              "hours:minutes:seconds", "days-hours", "days-hours:minutes"  and
              "days-hours:minutes:seconds".

       --tmp=<MB>
              Specify a minimum amount of temporary disk space.

       -u, --usage
              Display brief help message and exit.

       --uid=<user>
              Attempt  to  submit  and/or  run  a  job  as user instead of the
              invoking user id. The invoking user's credentials will  be  used
              to  check access permissions for the target partition. User root
              may use this option to run jobs as a normal user in  a  RootOnly
              partition  for  example.  If  run  as root, salloc will drop its
              permissions to  the  uid  specified  after  node  allocation  is
              successful. user may be the user name or numerical user ID.

       -V, --version
              Display version information and exit.

       -v, --verbose
              Increase  the  verbosity  of  salloc's  informational  messages.
              Multiple -v's will  further  increase  salloc's  verbosity.   By
              default only errors will be displayed.

       -W, --wait=<seconds>
              This option has been replaced by --immediate=<seconds>.

       -w, --nodelist=<node name list>
              Request  a  specific  list  of  node  names.   The  list  may be
              specified as a comma-separated list of node names, or a range of
              node  names  (e.g.  mynode[1-5,7,...]).  Duplicate node names in
              the list will be ignored.  The order of the node  names  in  the
              list is not important; the node names will be sorted by SLURM.

       --wait-all-nodes=<value>
              Controls  when  the execution of the command begins.  By default
              the job will begin execution as soon as the allocation is made.

              0    Begin execution as soon as allocation can be made.  Do  not
                   wait for all nodes to be ready for use (i.e. booted).

              1    Do not begin execution until all nodes are ready for use.

       --wckey=<wckey>
              Specify  wckey  to be used with job.  If TrackWCKey=no (default)
              in the slurm.conf this value is ignored.

       -x, --exclude=<node name list>
              Explicitly exclude certain nodes from the resources  granted  to
              the job.

       The  following options support Blue Gene systems, but may be applicable
       to other systems as well.

       --blrts-image=<path>
              Path to blrts image for bluegene block.  BGL only.  Default from
               bluegene.conf if not set.

       --cnload-image=<path>
              Path  to  compute  node  image  for  bluegene  block.  BGP only.
               Default from bluegene.conf if not set.

       --conn-type=<type>
              Require the partition connection type to be of a  certain  type.
               On Blue Gene the acceptable values of type are MESH, TORUS and
               NAV.  If NAV, or if not set, then SLURM will try to fit a
               TORUS, else a MESH.  You should not normally set this option;
               SLURM will normally allocate a TORUS if possible for a given
               geometry.  If running on a BGP system and wanting to run in
               HTC mode (only for 1 midplane and below), you can use HTC_S
               for SMP, HTC_D for Dual, HTC_V for virtual node mode, and
               HTC_L for Linux mode.

       -g, --geometry=<XxYxZ>
              Specify the geometry requirements for the job. The three numbers
              represent the required geometry giving dimensions in  the  X,  Y
               and Z directions.  For example, "--geometry=2x3x4" specifies a
              block of nodes having 2 x 3  x  4  =  24  nodes  (actually  base
              partitions on Blue Gene).

       --ioload-image=<path>
              Path  to  io  image for bluegene block.  BGP only.  Default from
               bluegene.conf if not set.

       --linux-image=<path>
              Path to linux image for bluegene block.  BGL only.  Default from
               bluegene.conf if not set.

       --mloader-image=<path>
              Path   to  mloader  image  for  bluegene  block.   Default  from
               bluegene.conf if not set.

       -R, --no-rotate
              Disables rotation of the job's requested geometry  in  order  to
              fit an appropriate block.  By default the specified geometry can
              rotate in three dimensions.

       --ramdisk-image=<path>
              Path to ramdisk image for bluegene block.   BGL  only.   Default
               from bluegene.conf if not set.

       --reboot
              Force the allocated nodes to reboot before starting the job.

INPUT ENVIRONMENT VARIABLES

       Upon  startup,  salloc  will  read  and  handle  the options set in the
       following environment variables.  Note:  Command  line  options  always
       override environment variable settings.

       SALLOC_ACCOUNT        Same as -A, --account

       SALLOC_ACCTG_FREQ     Same as --acctg-freq

       SALLOC_BELL           Same as --bell

       SALLOC_CONN_TYPE      Same as --conn-type

       SALLOC_CPU_BIND       Same as --cpu_bind

       SALLOC_DEBUG          Same as -v, --verbose

       SALLOC_EXCLUSIVE      Same as --exclusive

       SLURM_EXIT_ERROR      Specifies  the  exit  code generated when a SLURM
                             error occurs (e.g. invalid options).  This can be
                             used  by a script to distinguish application exit
                             codes from various SLURM error conditions.   Also
                             see SLURM_EXIT_IMMEDIATE.

       SLURM_EXIT_IMMEDIATE  Specifies   the  exit  code  generated  when  the
                             --immediate option is used and resources are  not
                             currently  available.   This  can  be  used  by a
                             script to distinguish application exit codes from
                             various   SLURM   error   conditions.   Also  see
                             SLURM_EXIT_ERROR.

       SALLOC_GEOMETRY       Same as -g, --geometry

       SALLOC_IMMEDIATE      Same as -I, --immediate

       SALLOC_JOBID          Same as --jobid

       SALLOC_MEM_BIND       Same as --mem_bind

       SALLOC_NETWORK        Same as --network

       SALLOC_NO_BELL        Same as --no-bell

       SALLOC_NO_ROTATE      Same as -R, --no-rotate

       SALLOC_OVERCOMMIT     Same as -O, --overcommit

       SALLOC_PARTITION      Same as -p, --partition

       SALLOC_QOS            Same as --qos

       SALLOC_SIGNAL         Same as --signal

       SALLOC_TIMELIMIT      Same as -t, --time

       SALLOC_WAIT           Same as -W, --wait

       SALLOC_WAIT_ALL_NODES Same as --wait-all-nodes
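
       As a sketch, a user might set session-wide defaults this way; the
       partition and account names are hypothetical:

```shell
# Defaults picked up by every later salloc in this shell; each can
# still be overridden on the command line.
export SALLOC_PARTITION=debug     # same as -p debug
export SALLOC_TIMELIMIT=30        # same as -t 30
export SALLOC_ACCOUNT=myproject   # same as -A myproject
```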

OUTPUT ENVIRONMENT VARIABLES

       salloc will set the following environment variables in the  environment
       of the executed program:

       BASIL_RESERVATION_ID
              The reservation ID on Cray systems running ALPS/BASIL only.

       SLURM_CPU_BIND
              Set to value of the --cpu_bind option.

       SLURM_JOB_ID (and SLURM_JOBID for backwards compatibility)
              The ID of the job allocation.

       SLURM_JOB_CPUS_PER_NODE
              Count of processors available to the job on this node.  Note the
              select/linear plugin allocates entire  nodes  to  jobs,  so  the
              value  indicates  the  total  count  of  CPUs on each node.  The
              select/cons_res plugin allocates individual processors to  jobs,
              so  this  number indicates the number of processors on each node
              allocated to the job allocation.

       SLURM_JOB_NODELIST (and SLURM_NODELIST for backwards compatibility)
              List of nodes allocated to the job.

       SLURM_JOB_NUM_NODES (and SLURM_NNODES for backwards compatibility)
              Total number of nodes in the job allocation.

       SLURM_MEM_BIND
              Set to value of the --mem_bind option.

       SLURM_SUBMIT_DIR
              The directory from which salloc was invoked.

       SLURM_NTASKS_PER_NODE
              Set to value of the --ntasks-per-node option, if specified.

       SLURM_TASKS_PER_NODE
              Number of tasks to be initiated on each node. Values  are  comma
              separated  and  in  the same order as SLURM_NODELIST.  If two or
              more consecutive nodes are to have the  same  task  count,  that
              count  is  followed by "(x#)" where "#" is the repetition count.
               For example, "SLURM_TASKS_PER_NODE=2(x3),1" indicates that the
               first three nodes will each execute two tasks and the fourth
               node will execute one task.
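
               The compact notation expands mechanically; a minimal
               POSIX-shell sketch (the helper name is ours, not part of
               SLURM):

```shell
# Expand SLURM_TASKS_PER_NODE notation, e.g. "2(x3),1", into one
# task count per node, printed one per line: 2, 2, 2, 1.
expand_tasks_per_node() {
    echo "$1" | tr ',' '\n' | while read -r item; do
        case "$item" in
            *"(x"*)
                count=${item%%\(*}                # count before "(x"
                reps=${item#*x}; reps=${reps%\)}  # repetition factor
                i=0
                while [ "$i" -lt "$reps" ]; do
                    echo "$count"
                    i=$((i + 1))
                done ;;
            *)  echo "$item" ;;
        esac
    done
}

expand_tasks_per_node "2(x3),1" | xargs   # prints: 2 2 2 1
```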

       MPIRUN_NOALLOCATE
              Do not allocate a block.  Blue Gene systems only.

       MPIRUN_NOFREE
              Do not free a block.  Blue Gene systems only.

       MPIRUN_PARTITION
              The block name.  Blue Gene systems only.

SIGNALS

       While salloc is waiting for a PENDING job allocation, most signals will
       cause salloc to revoke the allocation request and exit.

       However  if  the  allocation  has  been  granted and salloc has already
       started the specified command, then salloc will  ignore  most  signals.
       salloc will not exit or release the allocation until the command exits.
       One notable exception is SIGHUP. A SIGHUP signal will cause  salloc  to
       release  the  allocation  and  exit  without waiting for the command to
       finish.  Another exception is SIGTERM, which will be forwarded  to  the
       spawned process.

EXAMPLES

       To  get  an allocation, and open a new xterm in which srun commands may
       be typed interactively:

              $ salloc -N16 xterm
              salloc: Granted job allocation 65537
              (at this point the xterm appears, and salloc waits for xterm  to
              exit)
              salloc: Relinquishing job allocation 65537

       To grab an allocation of nodes and launch a parallel application on
       one command line:

              salloc -N5 srun -n10 myprogram

COPYING

       Copyright (C) 2006-2007 The Regents of the  University  of  California.
       Copyright (C) 2008-2010 Lawrence Livermore National Security.  Produced
       at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
       CODE-OCEC-09-009. All rights reserved.

       This  file  is  part  of  SLURM,  a  resource  management program.  For
       details, see <https://computing.llnl.gov/linux/slurm/>.

       SLURM is free software; you can redistribute it and/or modify it  under
       the  terms  of  the GNU General Public License as published by the Free
       Software Foundation; either version 2  of  the  License,  or  (at  your
       option) any later version.

       SLURM  is  distributed  in the hope that it will be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of  MERCHANTABILITY  or
       FITNESS  FOR  A PARTICULAR PURPOSE.  See the GNU General Public License
       for more details.

SEE ALSO

       sinfo(1), sattach(1), sbatch(1),  squeue(1),  scancel(1),  scontrol(1),
       slurm.conf(5), sched_setaffinity(2), numa(3)