Provided by: slurm-llnl_2.6.5-1_amd64

NAME

       sbatch - Submit a batch script to SLURM.

SYNOPSIS

       sbatch [options] script [args...]

DESCRIPTION

       sbatch  submits  a batch script to SLURM.  The batch script may be given to sbatch through a file name on
       the command line, or if no file name is specified, sbatch will read in a script from standard input.  The
       batch script may contain options preceded with "#SBATCH" before any executable commands in the script.
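
       For example, a minimal batch script might look like the following sketch (the partition name "debug" and
       the job name are illustrative; use values configured on your cluster):

           #!/bin/sh
           #SBATCH --job-name=hello
           #SBATCH --partition=debug
           #SBATCH --ntasks=1
           #SBATCH --time=00:05:00
           # Executable commands follow the #SBATCH directives.
           srun hostname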

       sbatch  exits  immediately  after  the  script  is  successfully  transferred to the SLURM controller and
       assigned a SLURM job ID.  The batch script is not necessarily granted resources immediately; it may sit
       in the queue of pending jobs for some time before its required resources become available.

       By  default  both  standard  output and standard error are directed to a file of the name "slurm-%j.out",
       where the "%j" is replaced with the job allocation number.

       When the job allocation is finally granted for the batch script, SLURM runs a single copy  of  the  batch
       script on the first node in the set of allocated nodes.

       The  following  document describes the influence of various options on the allocation of cpus to jobs and
       tasks.
       http://slurm.schedmd.com/cpu_management.html

OPTIONS

       -a, --array=<indexes>
              Submit a job array,  multiple  jobs  to  be  executed  with  identical  parameters.   The  indexes
              specification  identifies what array index values should be used. Multiple values may be specified
              using a comma separated list and/or  a  range  of  values  with  a  "-"  separator.  For  example,
              "--array=0-15"  or  "--array=0,6,16-32".   A  step  function  can  also be specified with a suffix
              containing a colon and number. For example, "--array=0-15:4" is equivalent to  "--array=0,4,8,12".
               The minimum index value is 0; the maximum value is one less than the configuration parameter
              MaxArraySize.
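
               For example, a sketch of an array job in which each task processes one input file selected by
               its index (the data file naming and program name are illustrative):

                   #!/bin/sh
                   #SBATCH --array=0-15
                   # Each array task receives its index in SLURM_ARRAY_TASK_ID.
                   ./process_data "input_${SLURM_ARRAY_TASK_ID}.dat"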

       -A, --account=<account>
              Charge resources used by this job to specified account.  The account is an arbitrary  string.  The
              account name may be changed after job submission using the scontrol command.

       --acctg-freq
              Define  the  job  accounting  and  profiling sampling intervals.  This can be used to override the
              JobAcctGatherFrequency parameter in SLURM's configuration file, slurm.conf.  The supported  format
              is as follows:

              --acctg-freq=<datatype>=<interval>
                          where   <datatype>=<interval>   specifies   the   task   sampling   interval  for  the
                          jobacct_gather  plugin  or  a  sampling  interval  for  a  profiling   type   by   the
                          acct_gather_profile  plugin. Multiple, comma-separated <datatype>=<interval> intervals
                          may be specified. Supported datatypes are as follows:

                          task=<interval>
                                 where  <interval>  is  the  task  sampling  interval   in   seconds   for   the
                                 jobacct_gather  plugins  and  for  task  profiling  by  the acct_gather_profile
                                 plugin.

                          energy=<interval>
                                 where <interval> is the sampling interval in seconds for energy profiling using
                                 the acct_gather_energy plugin

                          network=<interval>
                                 where <interval> is the sampling interval in seconds for  infiniband  profiling
                                 using the acct_gather_infiniband plugin.

                          filesystem=<interval>
                                 where  <interval>  is the sampling interval in seconds for filesystem profiling
                                 using the acct_gather_filesystem plugin.

              The default value for the task sampling interval is 30.
              The default value for all other intervals is 0.   An  interval  of  0  disables  sampling  of  the
              specified  type.   If the task sampling interval is 0, accounting information is collected only at
              job termination (reducing SLURM interference with the job).
              Smaller (non-zero) values have a greater impact upon job performance, but a value of 30 seconds is
              not likely to be noticeable for applications having less than 10,000 tasks.
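
               For example, the following samples task statistics every 15 seconds and energy use every 30
               seconds (assuming the relevant gathering plugins are configured; the script name is a
               placeholder):

                   sbatch --acctg-freq=task=15,energy=30 my_script.sh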

       -B, --extra-node-info=<sockets[:cores[:threads]]>
              Request a specific allocation of resources with details as to the number and type of computational
              resources within a cluster: number of sockets (or physical processors) per node, cores per socket,
              and threads per core.  The total amount of resources being requested is the product of all of  the
              terms.   Each  value  specified  is  considered  a  minimum.   An  asterisk  (*)  can be used as a
              placeholder indicating that all available resources of that type are  to  be  utilized.   As  with
              nodes, the individual levels can also be specified in separate options if desired:
                  --sockets-per-node=<sockets>
                  --cores-per-socket=<cores>
                  --threads-per-core=<threads>
              If  task/affinity  plugin  is  enabled,  then  specifying an allocation in this manner also sets a
              default --cpu_bind option of threads if the -B option  specifies  a  thread  count,  otherwise  an
              option  of  cores  if a core count is specified, otherwise an option of sockets.  If SelectType is
              configured to select/cons_res, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket,  or
              CR_Socket_Memory  for this option to be honored.  This option is not supported on BlueGene systems
              (select/bluegene plugin is configured).  If not specified, the  scontrol  show  job  will  display
              'ReqS:C:T=*:*:*'.

       --begin=<time>
              Submit  the batch script to the SLURM controller immediately, like normal, but tell the controller
              to defer the allocation of the job until the specified time.

              Time may be of the form HH:MM:SS to run a job at a specific time of day  (seconds  are  optional).
              (If  that time is already past, the next day is assumed.)  You may also specify midnight, noon, or
              teatime (4pm) and you can have a time-of-day suffixed with AM or PM for running in the morning  or
              the  evening.   You  can  also  say what day the job will be run, by specifying a date of the form
               MMDDYY, MM/DD/YY, or YYYY-MM-DD.  Combine date and time using the following format
              YYYY-MM-DD[THH:MM[:SS]]. You can also give times like now + count time-units, where the time-units
              can  be  seconds  (default),  minutes, hours, days, or weeks and you can tell SLURM to run the job
              today with the keyword today and to run the job tomorrow with the keyword tomorrow.  The value may
              be changed after job submission using the scontrol command.  For example:
                 --begin=16:00
                 --begin=now+1hour
                 --begin=now+60           (seconds by default)
                 --begin=2010-01-20T12:34:00

              Notes on date/time specifications:
               - Although the 'seconds' field of the HH:MM:SS time specification is allowed by  the  code,  note
              that  the  poll time of the SLURM scheduler is not precise enough to guarantee dispatch of the job
              on the exact second.  The job will be eligible to start on the next poll following  the  specified
              time.  The  exact  poll interval depends on the SLURM scheduler (e.g., 60 seconds with the default
              sched/builtin).
               - If no time (HH:MM:SS) is specified, the default is (00:00:00).
               - If a date is specified without a year (e.g., MM/DD) then the current year  is  assumed,  unless
              the  combination  of  MM/DD  and HH:MM:SS has already passed for that year, in which case the next
              year is used.

       --checkpoint=<time>
              Specifies the interval between creating checkpoints of the job step.  By  default,  the  job  step
              will  have  no checkpoints created.  Acceptable time formats include "minutes", "minutes:seconds",
              "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".

       --checkpoint-dir=<directory>
              Specifies the directory into which the job or job step's checkpoint should be written (used by the
               checkpoint/blcr and checkpoint/xlch plugins only).  The default value is the current working
              directory.    Checkpoint   files   will   be   of   the   form   "<job_id>.ckpt"   for   jobs  and
              "<job_id>.<step_id>.ckpt" for job steps.

       --comment=<string>
              An arbitrary comment enclosed in double quotes if using spaces or some special characters.

       -C, --constraint=<list>
              Nodes can have features assigned to them by the SLURM administrator.  Users can specify  which  of
              these  features are required by their job using the constraint option.  Only nodes having features
              matching the job constraints will be used to satisfy the request.   Multiple  constraints  may  be
              specified with AND, OR, exclusive OR, resource counts, etc.  Supported constraint options include:

              Single Name
                     Only   nodes   which   have   the   specified   feature   will   be   used.   For  example,
                     --constraint="intel"

              Node Count
                     A request can specify the number of nodes needed with some feature by appending an asterisk
                     and count after the feature name.  For  example  "--nodes=16  --constraint=graphics*4  ..."
                      indicates that the job requires 16 nodes and that at least four of those nodes must have the
                     feature "graphics."

               AND    Only nodes with all of the specified features will be used.  The ampersand is used for an
                     AND operator.  For example, --constraint="intel&gpu"

               OR     Only nodes with at least one of the specified features will be used.  The vertical bar is
                     used for an OR operator.  For example, --constraint="intel|amd"

              Exclusive OR
                     If  only  one of a set of possible options should be used for all allocated nodes, then use
                     the  OR  operator  and  enclose  the  options  within  square   brackets.    For   example:
                     "--constraint=[rack1|rack2|rack3|rack4]"  might  be  used to specify that all nodes must be
                     allocated on a single rack of the cluster, but any of those four racks can be used.

              Multiple Counts
                     Specific counts of multiple resources may be  specified  by  using  the  AND  operator  and
                     enclosing      the      options      within      square     brackets.      For     example:
                     "--constraint=[rack1*2&rack2*4]" might be used to specify that two nodes must be  allocated
                     from nodes with the feature of "rack1" and four nodes must be allocated from nodes with the
                     feature "rack2".

       --contiguous
              If  set,  then the allocated nodes must form a contiguous set.  Not honored with the topology/tree
              or topology/3d_torus plugins, both of which can modify the node ordering.

       --cores-per-socket=<cores>
              Restrict node selection to nodes with at least the specified number  of  cores  per  socket.   See
              additional information under -B option above when task/affinity plugin is enabled.

       --cpu_bind=[{quiet,verbose},]type
              Bind  tasks  to  CPUs.   Used  only  when the task/affinity or task/cgroup plugin is enabled.  The
              configuration  parameter  TaskPluginParam  may  override   these   options.    For   example,   if
              TaskPluginParam  is  configured  to  bind  to  cores,  your  job will not be able to bind tasks to
              sockets.  NOTE: To have SLURM always report on the selected CPU binding for all commands  executed
              in  a  shell, you can enable verbose mode by setting the SLURM_CPU_BIND environment variable value
              to "verbose".

              The following informational environment variables are set when --cpu_bind is in use:
                      SLURM_CPU_BIND_VERBOSE
                      SLURM_CPU_BIND_TYPE
                      SLURM_CPU_BIND_LIST

              See  the  ENVIRONMENT  VARIABLE  section  for  a  more  detailed  description  of  the  individual
              SLURM_CPU_BIND* variables.

              When using --cpus-per-task to run multithreaded tasks, be aware that CPU binding is inherited from
              the  parent of the process.  This means that the multithreaded task should either specify or clear
              the CPU binding itself to avoid having all threads of the multithreaded task use the same mask/CPU
              as the parent.  Alternatively, fat masks (masks which specify more than one allowed CPU) could  be
              used for the tasks in order to provide multiple CPUs for the multithreaded tasks.

              By default, a job step has access to every CPU allocated to the job.  To ensure that distinct CPUs
              are allocated to each job step, use the --exclusive option.

              If  the  job  step  allocation  includes an allocation with a number of sockets, cores, or threads
              equal to the number of tasks to be started then  the  tasks  will  by  default  be  bound  to  the
               appropriate resources.  Disable this mode of operation by explicitly setting "--cpu_bind=none".

              Note  that a job step can be allocated different numbers of CPUs on each node or be allocated CPUs
              not starting at location zero. Therefore one of the options which automatically generate the  task
              binding is recommended.  Explicitly specified masks or bindings are only honored when the job step
              has been allocated every available CPU on the node.

              Binding  a task to a NUMA locality domain means to bind the task to the set of CPUs that belong to
              the NUMA locality domain or "NUMA node".  If NUMA locality domain options are used on systems with
              no NUMA support, then each socket is considered a locality domain.

              Supported options include:

              q[uiet]
                     Quietly bind before task runs (default)

              v[erbose]
                     Verbosely report binding before task runs

              no[ne] Do not bind tasks to CPUs (default)

              rank   Automatically bind by task rank.  Task zero is bound to socket (or core  or  thread)  zero,
                     etc.  Not supported unless the entire node is allocated to the job.

              map_cpu:<list>
                     Bind    by    mapping    CPU    IDs    to    tasks    as    specified   where   <list>   is
                     <cpuid1>,<cpuid2>,...<cpuidN>.  CPU IDs are interpreted as decimal values unless  they  are
                     preceded with '0x' in which case they are interpreted as hexadecimal values.  Not supported
                     unless the entire node is allocated to the job.

              mask_cpu:<list>
                     Bind by setting CPU masks on tasks as specified where <list> is <mask1>,<mask2>,...<maskN>.
                     CPU masks are always interpreted as hexadecimal values but can be preceded with an optional
                     '0x'.

              sockets
                     Automatically  generate  masks binding tasks to sockets.  Only the CPUs on the socket which
                     have been allocated to the job will be used.  If the  number  of  tasks  differs  from  the
                     number of allocated sockets this can result in sub-optimal binding.

              cores  Automatically  generate  masks binding tasks to cores.  If the number of tasks differs from
                     the number of allocated cores this can result in sub-optimal binding.

              threads
                     Automatically generate masks binding tasks to threads.  If the number of tasks differs from
                     the number of allocated threads this can result in sub-optimal binding.

              ldoms  Automatically generate masks binding tasks to NUMA locality  domains.   If  the  number  of
                     tasks  differs from the number of allocated locality domains this can result in sub-optimal
                     binding.

              help   Show this help message

       -c, --cpus-per-task=<ncpus>
              Advise the SLURM controller that ensuing job steps will require ncpus  number  of  processors  per
              task.  Without this option, the controller will just try to allocate one processor per task.

              For  instance,  consider  an  application  that  has 4 tasks, each requiring 3 processors.  If our
              cluster is comprised of quad-processors nodes and we simply ask for 12 processors, the  controller
               might give us only 3 nodes.  However, by using the --cpus-per-task=3 option, the controller knows
              that each task requires 3 processors on the same node, and the controller will grant an allocation
              of 4 nodes, one for each of the 4 tasks.
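
               Expressed as batch script directives, the example above might look like this sketch (the
               application name is a placeholder):

                   #!/bin/sh
                   #SBATCH --ntasks=4
                   #SBATCH --cpus-per-task=3
                   # Each of the 4 tasks is allocated 3 processors on the same node.
                   srun ./my_app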

       -d, --dependency=<dependency_list>
               Defer the start of this job until the specified dependencies have been satisfied.
              <dependency_list> is of the form  <type:job_id[:job_id][,type:job_id[:job_id]]>.   Many  jobs  can
              share  the  same  dependency and these jobs may even belong to different  users. The  value may be
              changed after job submission using the scontrol command.

              after:job_id[:jobid...]
                     This job can begin execution after the specified jobs have begun execution.

              afterany:job_id[:jobid...]
                     This job can begin execution after the specified jobs have terminated.

              afternotok:job_id[:jobid...]
                     This job can begin execution after the specified jobs have terminated in some failed  state
                     (non-zero exit code, node failure, timed out, etc).

              afterok:job_id[:jobid...]
                     This  job  can  begin execution after the specified jobs have successfully executed (ran to
                     completion with an exit code of zero).

              expand:job_id
                     Resources allocated to this job should be used to expand the specified  job.   The  job  to
                     expand  must  share  the  same  QOS (Quality of Service) and partition.  Gang scheduling of
                     resources in the partition is also not supported.

              singleton
                     This job can begin execution after any previously launched jobs sharing the same  job  name
                     and user have terminated.
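
               For example, a common pattern for chaining jobs is to parse the job ID from sbatch's
               "Submitted batch job <jobid>" output and feed it to the next submission (the script names are
               placeholders):

                   jobid=$(sbatch first_step.sh | awk '{print $4}')
                   # second_step.sh runs only if first_step.sh exits with code zero.
                   sbatch --dependency=afterok:${jobid} second_step.sh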

       -D, --workdir=<directory>
              Set the working directory of the batch script to directory before it is executed.

       -e, --error=<filename pattern>
              Instruct SLURM to connect the batch script's standard error directly to the file name specified in
              the  "filename  pattern".   By default both standard output and standard error are directed to the
              same file.  For job arrays, the default file name is "slurm-%A_%a.out", "%A" is  replaced  by  the
              job  ID  and  "%a" with the array index.  For other jobs, the default file name is "slurm-%j.out",
              where the "%j" is replaced by the job ID.  See  the  --input  option  for  filename  specification
              options.

       --exclusive
               The job allocation can not share nodes with other running jobs.  This is the opposite of --share;
               whichever option is seen last on the command line will be used.  The default shared/exclusive
              behavior  depends  on system configuration and the partition's Shared option takes precedence over
              the job's option.

       --export=<environment variables | ALL | NONE>
              Identify which environment variables are  propagated  to  the  batch  job.   Multiple  environment
              variable  names  should  be  comma  separated.   Environment  variable  names  may be specified to
              propagate the current value of those variables (e.g. "--export=EDITOR") or specific values for the
               variables may be exported (e.g. "--export=EDITOR=/bin/vi").  This option is particularly important
              for jobs that are submitted on one cluster and execute on a different cluster (e.g. with different
              paths).  By  default all environment variables are propagated. If the argument is NONE or specific
              environment variable names, then the --get-user-env option will implicitly be set  to  load  other
              environment variables based upon the user's configuration on the cluster which executes the job.

       --export-file=<filename | fd>
              If  a  number  between 3 and OPEN_MAX is specified as the argument to this option, a readable file
              descriptor will be assumed (STDIN and STDOUT are not supported as valid arguments).   Otherwise  a
              filename  is assumed.  Export environment variables defined in <filename> or read from <fd> to the
              job's execution environment. The content is one or more environment variable  definitions  of  the
              form NAME=value, each separated by a null character.  This allows the use of special characters in
              environment definitions.
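
               For example, a sketch of building a null-separated definitions file with printf and submitting
               with it (the variable values and script name are illustrative):

                   printf 'EDITOR=/bin/vi\0DISPLAY=:0' > env.list
                   sbatch --export-file=env.list my_script.sh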

       -F, --nodefile=<node file>
              Much  like  --nodelist,  but the list is contained in a file of name node file.  The node names of
              the list may also span multiple lines in the file.    Duplicate node names in  the  file  will  be
              ignored.   The order of the node names in the list is not important; the node names will be sorted
              by SLURM.

       --get-user-env[=timeout][mode]
              This option will tell sbatch to retrieve the login environment variables for the user specified in
              the --uid option.  The environment variables are retrieved by running something of this sort "su -
              <username> -c /usr/bin/env" and parsing the output.   Be  aware  that  any  environment  variables
              already  set  in  sbatch's  environment will take precedence over any environment variables in the
              user's login environment. Clear any environment variables before calling sbatch that  you  do  not
              want  propagated  to the spawned program.  The optional timeout value is in seconds. Default value
               is 8 seconds.  The optional mode value controls the "su" options.  With a mode value of "S", "su"
               is executed without the "-" option.  With a mode value of "L", "su" is executed with the "-"
               option, replicating the login environment.  If the mode is not specified, the mode established at
               SLURM build time is used.  Examples of use include "--get-user-env", "--get-user-env=10",
               "--get-user-env=10L", and "--get-user-env=S".  This option was originally created for use by Moab.

       --gid=<group>
              If sbatch is run as root, and the --gid option is used, submit the job with group's  group  access
              permissions.  group may be the group name or the numerical group ID.

       --gres=<list>
              Specifies a comma delimited list of generic consumable resources.  The format of each entry on the
              list  is "name[:count]".  The name is that of the consumable resource.  The count is the number of
              those resources with a default value of 1.  The specified resources will be allocated to  the  job
              on  each  node.   The  available  generic  consumable  resources  is  configurable  by  the system
              administrator.  A list of available generic consumable resources will be printed and  the  command
               will exit if the option argument is "help".  Examples of use include "--gres=gpu:2,mic:1" and
              "--gres=help".

       -H, --hold
              Specify the job is to be submitted in a held state (priority of zero).  A  held  job  can  now  be
              released using scontrol to reset its priority (e.g. "scontrol release <job_id>").

       -h, --help
              Display help information and exit.

       --hint=<type>
               Bind tasks according to application hints.

              compute_bound
                     Select  settings  for  compute bound applications: use all cores in each socket, one thread
                     per core

              memory_bound
                     Select settings for memory bound applications: use only one core in each socket, one thread
                     per core

              [no]multithread
                     [don't] use extra threads with in-core  multi-threading  which  can  benefit  communication
                     intensive applications

              help   show this help message

       -I, --immediate
              The  batch script will only be submitted to the controller if the resources necessary to grant its
              job allocation are immediately available.  If the job allocation will have to wait in a  queue  of
              pending  jobs,  the  batch  script will not be submitted.  NOTE: There is limited support for this
              option with batch jobs.

       --ignore-pbs
              Ignore any "#PBS" options specified in the batch script.

       -i, --input=<filename pattern>
              Instruct SLURM to connect the batch script's standard input directly to the file name specified in
              the "filename pattern".

              By default, "/dev/null" is open on the batch script's standard input and both standard output  and
              standard  error are directed to a file of the name "slurm-%j.out", where the "%j" is replaced with
              the job allocation number, as described below.

              The filename pattern may contain one or more replacement symbols, which are  a  percent  sign  "%"
              followed by a letter (e.g. %j).

              Supported replacement symbols are:

              %A     Job array's master job allocation number.

              %a     Job array ID (index) number.

              %j     Job allocation number.

              %N     Node  name.  Only one file is created, so %N will be replaced by the name of the first node
                     in the job, which is the one that runs the script.

              %u     User name.
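
               These symbols also apply to the --output and --error options.  For example, to give each task of
               a four-task job array its own output file (the script name is a placeholder):

                   sbatch --array=0-3 --output="myjob_%A_%a.out" my_script.sh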

       -J, --job-name=<jobname>
              Specify a name for the job allocation. The specified name will appear along with the job id number
              when querying running jobs on the system. The default is the name of the  batch  script,  or  just
              "sbatch" if the script is read on sbatch's standard input.

       --jobid=<jobid>
              Allocate resources as the specified job id.  NOTE: Only valid for user root.

       -k, --no-kill
               Do not automatically terminate a job if one of the nodes it has been allocated fails.  The user
              will assume the responsibilities for fault-tolerance should a node fail.  When  there  is  a  node
              failure, any active job steps (usually MPI jobs) on that node will almost certainly suffer a fatal
              error,  but  with --no-kill, the job allocation will not be revoked so the user may launch new job
              steps on the remaining nodes in their allocation.

              By default SLURM terminates the entire job allocation if any node fails in its range of  allocated
              nodes.

       -L, --licenses=<license>
              Specification of licenses (or other resources available on all nodes of the cluster) which must be
              allocated  to  this job.  License names can be followed by a colon and count (the default count is
              one).  Multiple license names should be comma separated (e.g.  "--licenses=foo:4,bar").

       -M, --clusters=<string>
              Clusters to issue commands to.  Multiple cluster names may be comma separated.  The  job  will  be
              submitted  to  the  one  cluster  providing the earliest expected job initiation time. The default
              value is the current cluster. A value of 'all' will query  to  run  on  all  clusters.   Note  the
              --export option to control environment variables exported between clusters.

       -m, --distribution=
              <block|cyclic|arbitrary|plane=<options>[:block|cyclic]>

              Specify  alternate  distribution  methods  for  remote  processes.   In  sbatch,  this  only  sets
              environment variables that will be used by subsequent srun requests.   This  option  controls  the
              assignment  of  tasks to the nodes on which resources have been allocated, and the distribution of
              those resources to tasks for binding (task affinity). The first distribution  method  (before  the
              ":")  controls the distribution of resources across nodes. The optional second distribution method
              (after the ":") controls the distribution of resources across sockets within a  node.   Note  that
              with select/cons_res, the number of cpus allocated on each socket and node may be different. Refer
              to   http://slurm.schedmd.com/mc_support.html   for   more  information  on  resource  allocation,
              assignment of tasks to nodes, and binding of tasks to CPUs.

              First distribution method:

              block  The block distribution method will distribute tasks to a node such that  consecutive  tasks
                     share  a  node.  For  example,  consider an allocation of three nodes each with two cpus. A
                     four-task block distribution request will distribute those tasks to the  nodes  with  tasks
                     one  and  two  on the first node, task three on the second node, and task four on the third
                     node.  Block distribution is the default behavior if the number of tasks exceeds the number
                     of allocated nodes.

              cyclic The cyclic distribution method will distribute tasks to a node such that consecutive  tasks
                     are distributed over consecutive nodes (in a round-robin fashion). For example, consider an
                     allocation  of three nodes each with two cpus. A four-task cyclic distribution request will
                     distribute those tasks to the nodes with tasks one and four on the first node, task two  on
                     the  second  node,  and  task  three  on  the  third  node.   Note  that when SelectType is
                     select/cons_res, the same  number  of  CPUs  may  not  be  allocated  on  each  node.  Task
                     distribution will be round-robin among all the nodes with CPUs yet to be assigned to tasks.
                     Cyclic  distribution  is  the default behavior if the number of tasks is no larger than the
                     number of allocated nodes.

              plane  The tasks are distributed in blocks of a specified size.   The  options  include  a  number
                     representing  the size of the task block.  This is followed by an optional specification of
                     the task distribution scheme within a block of tasks and between the blocks of tasks.   The
                     number  of  tasks  distributed to each node is the same as for cyclic distribution, but the
                     taskids assigned to each node depend on  the  plane  size.   For  more  details  (including
                     examples and diagrams), please see
                     http://slurm.schedmd.com/mc_support.html
                     and
                     http://slurm.schedmd.com/dist_plane.html

              arbitrary
                     The  arbitrary  method  of  distribution will allocate processes in-order as listed in file
                      designated by the environment variable SLURM_HOSTFILE.  If this variable is set, it will
                      override any other method specified.  If not set, the method will default to block.  The
                      hostfile must contain at minimum the number of hosts requested, one per line or comma
                      separated.  If specifying a task count (-n, --ntasks=<number>), your tasks will be
                     laid out on the nodes in the order of the file.
                     NOTE: The arbitrary distribution option on a job allocation only controls the nodes  to  be
                     allocated  to  the  job and not the allocation of CPUs on those nodes. This option is meant
                     primarily to control a job step's task layout in an existing job allocation  for  the  srun
                     command.

              Second distribution method:

              block  The  block distribution method will distribute tasks to sockets such that consecutive tasks
                     share a socket.

              cyclic The cyclic distribution method will distribute tasks to sockets such that consecutive tasks
                     are distributed over consecutive sockets (in a round-robin fashion).
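
               For example, to distribute consecutive tasks over consecutive nodes while packing consecutive
               tasks onto the same socket within each node (the script name is a placeholder):

                   sbatch --distribution=cyclic:block my_script.sh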

       --mail-type=<type>
              Notify user by email when certain event types occur.  Valid type  values  are  BEGIN,  END,  FAIL,
              REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.

       --mail-user=<user>
              User  to receive email notification of state changes as defined by --mail-type.  The default value
              is the submitting user.

       --mem=<MB>
              Specify the real memory required per node in MegaBytes.  Default value is  DefMemPerNode  and  the
               maximum value is MaxMemPerNode. If configured, both parameters can be seen using the scontrol
              show config command.  This parameter would generally be used if whole nodes are allocated to  jobs
              (SelectType=select/linear).   Also  see  --mem-per-cpu.   --mem  and  --mem-per-cpu  are  mutually
              exclusive.  NOTE: Enforcement of memory limits currently relies upon  the  task/cgroup  plugin  or
              enabling  of  accounting,  which  samples memory use on a periodic basis (data need not be stored,
              just collected). In both cases memory use is based upon the job's Resident Set Size (RSS). A  task
              may exceed the memory limit until the next periodic accounting sample.

       --mem-per-cpu=<MB>
               Minimum memory required per allocated CPU in MegaBytes.  Default value is DefMemPerCPU and the
               maximum value is MaxMemPerCPU (see exception below). If configured, both parameters can be seen
              using the scontrol show config command.  Note that if the job's --mem-per-cpu  value  exceeds  the
              configured  MaxMemPerCPU,  then  the  user's  limit  will  be  treated as a memory limit per task;
               --mem-per-cpu will be reduced to a value no larger than MaxMemPerCPU; --cpus-per-task will be set,
               and the value of --cpus-per-task multiplied by the new --mem-per-cpu value will equal the original
              --mem-per-cpu value specified by the user.  This parameter would generally be used  if  individual
              processors  are  allocated  to  jobs  (SelectType=select/cons_res).   Also  see  --mem.  --mem and
              --mem-per-cpu are mutually exclusive.
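
               For example, assuming a configured MaxMemPerCPU of 2048 MB, a job submitted with
               "--mem-per-cpu=4096" would have --mem-per-cpu reduced to 2048 and --cpus-per-task set to 2,
               since 2 CPUs x 2048 MB preserves the requested 4096 MB per task.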

       --mem_bind=[{quiet,verbose},]type
              Bind tasks to memory. Used only when the task/affinity plugin  is  enabled  and  the  NUMA  memory
              functions  are  available.   Note that the resolution of CPU and memory binding may differ on some
              architectures. For example, CPU binding may be performed at  the  level  of  the  cores  within  a
              processor  while  memory  binding will be performed at the level of nodes, where the definition of
              "nodes" may differ from system to system. The use of any type other than "none" or "local" is  not
              recommended.   If  you  want  greater  control,  try  running  a simple test code with the options
              "--cpu_bind=verbose,none --mem_bind=verbose,none" to determine the specific configuration.

              NOTE: To have SLURM always report on the selected memory binding for all commands  executed  in  a
              shell,  you  can  enable  verbose mode by setting the SLURM_MEM_BIND environment variable value to
              "verbose".

              The following informational environment variables are set when --mem_bind is in use:

                      SLURM_MEM_BIND_VERBOSE
                      SLURM_MEM_BIND_TYPE
                      SLURM_MEM_BIND_LIST

              See the  ENVIRONMENT  VARIABLES  section  for  a  more  detailed  description  of  the  individual
              SLURM_MEM_BIND* variables.

              Supported options include:

              q[uiet]
                     quietly bind before task runs (default)

              v[erbose]
                     verbosely report binding before task runs

              no[ne] don't bind tasks to memory (default)

              rank   bind by task rank (not recommended)

              local  Use memory local to the processor in use

              map_mem:<list>
                     bind   by   mapping   a   node's   memory   to   tasks   as   specified   where  <list>  is
                     <cpuid1>,<cpuid2>,...<cpuidN>.  CPU IDs are interpreted as decimal values unless  they  are
                      preceded with '0x' in which case they are interpreted as hexadecimal values (not recommended).

              mask_mem:<list>
                     bind    by    setting    memory   masks   on   tasks   as   specified   where   <list>   is
                     <mask1>,<mask2>,...<maskN>.  memory masks are always  interpreted  as  hexadecimal  values.
                     Note  that  masks  must  be preceded with a '0x' if they don't begin with [0-9] so they are
                     seen as numerical values by srun.

              help   show this help message

       --mincpus=<n>
              Specify a minimum number of logical cpus/processors per node.

       -N, --nodes=<minnodes[-maxnodes]>
              Request that a minimum of minnodes nodes be allocated to this job.  A maximum node count may  also
              be specified with maxnodes.  If only one number is specified, this is used as both the minimum and
              maximum  node  count.   The  partition's  node limits supersede those of the job.  If a job's node
              limits are outside of the range permitted for its associated partition, the job will be left in  a
              PENDING  state.   This  permits  possible  execution  at a later time, when the partition limit is
              changed.  If a job node limit exceeds the number of nodes configured in  the  partition,  the  job
              will  be  rejected.   Note  that the environment variable SLURM_NNODES will be set to the count of
              nodes actually allocated to the job. See the ENVIRONMENT VARIABLES  section for more  information.
              If  -N  is  not  specified,  the  default  behavior  is  to  allocate  enough nodes to satisfy the
              requirements of the -n and -c options.  The job will be allocated as many nodes as possible within
              the range specified and without delaying the initiation of the job.  The node count  specification
              may include a numeric value followed by a suffix of "k" (multiplies numeric value by 1,024) or "m"
              (multiplies numeric value by 1,048,576).

       -n, --ntasks=<number>
              sbatch  does  not launch tasks, it requests an allocation of resources and submits a batch script.
              This option advises the SLURM controller that job steps run within the allocation  will  launch  a
              maximum  of  number  tasks  and  to provide for sufficient resources.  The default is one task per
              node, but note that the --cpus-per-task option will change this default.

       --network=<type>
              Specify the communication protocol to be used.  The interpretation of type  is  system  dependent.
               This option is currently supported on systems with IBM's Parallel Environment (PE).  See IBM's
              LoadLeveler job command keyword documentation about the keyword "network"  for  more  information.
               Multiple values may be specified in a comma separated list.  All options are case insensitive.
              Supported values include:

              BULK_XFER[=<resources>]
                          Enable bulk transfer of data using Remote Direct-Memory Access (RDMA).   The  optional
                          resources  specification  is a numeric value which can have a suffix of "k", "K", "m",
                          "M",  "g"  or  "G"  for  kilobytes,  megabytes  or  gigabytes.   NOTE:  The  resources
                          specification  is  not  supported  by the underlying IBM infrastructure as of Parallel
                          Environment version 2.2 and no value should be specified at this time.

              CAU=<count> Number of Collective Acceleration Units (CAU) required.  Applies only to IBM Power7-IH
                          processors.  Default value is zero.   Independent  CAU  will  be  allocated  for  each
                          programming interface (MPI, LAPI, etc.)

              DEVNAME=<name>
                          Specify the device name to use for communications (e.g. "eth0" or "mlx4_0").

              DEVTYPE=<type>
                          Specify  the device type to use for communications.  The supported values of type are:
                          "IB" (InfiniBand), "HFI" (P7 Host Fabric Interface),  "IPONLY"  (IP-Only  interfaces),
                          "HPCE"  (HPC  Ethernet), and "KMUX" (Kernel Emulation of HPCE).  The devices allocated
                           to a job must all be of the same type.  The default value depends upon what hardware
                           is available and, in order of preference, is IPONLY (which is not considered in User
                           Space mode), HFI, IB, HPCE, and KMUX.

               IMMED=<count>
                          Number of immediate send slots per window required.  Applies  only  to  IBM  Power7-IH
                          processors.  Default value is zero.

               INSTANCES=<count>
                           Specify the number of network connections for each task on each network.  The
                          default instance count is 1.

              IPV4        Use Internet Protocol (IP) version 4 communications (default).

              IPV6        Use Internet Protocol (IP) version 6 communications.

              LAPI        Use the LAPI programming interface.

              MPI         Use the MPI programming interface.  MPI is the default interface.

              PAMI        Use the PAMI programming interface.

              SHMEM       Use the OpenSHMEM programming interface.

              SN_ALL      Use all available switch networks (default).

              SN_SINGLE   Use one available switch network.

              UPC         Use the UPC programming interface.

              US          Use User Space communications.

              Some examples of network specifications:

              Instances=2,US,MPI,SN_ALL
                          Create two user space connections for MPI communications on every switch  network  for
                          each task.

              US,MPI,Instances=3,Devtype=IB
                          Create three user space connections for MPI communications on every InfiniBand network
                          for each task.

              IPV4,LAPI,SN_Single
                          Create  a  IP  version  4 connection for LAPI communications on one switch network for
                          each task.

              Instances=2,US,LAPI,MPI
                          Create two user space connections each for LAPI and MPI communications on every switch
                          network for each task. Note that SN_ALL is the default option so every switch  network
                          is used. Also note that Instances=2 specifies that two connections are established for
                          each  protocol (LAPI and MPI) and each task.  If there are two networks and four tasks
                          on the node then a total of 32 connections are established (2 instances x 2  protocols
                          x 2 networks x 4 tasks).

       --nice[=adjustment]
              Run  the  job  with  an  adjusted  scheduling priority within SLURM.  With no adjustment value the
              scheduling priority is decreased by 100. The adjustment range is from -10000 (highest priority) to
              10000 (lowest priority). Only privileged users can  specify  a  negative  adjustment.  NOTE:  This
              option is presently ignored if SchedulerType=sched/wiki or SchedulerType=sched/wiki2.

       --no-requeue
              Specifies  that the batch job should not be requeued after node failure.  Setting this option will
              prevent system administrators from being able to restart the job (for example, after  a  scheduled
              downtime).   When  a  job is requeued, the batch script is initiated from its beginning.  Also see
              the --requeue option.  The JobRequeue configuration parameter controls the default behavior on the
              cluster.

       --ntasks-per-core=<ntasks>
              Request the maximum ntasks be invoked on each core.  Meant to be used with  the  --ntasks  option.
              Related  to  --ntasks-per-node  except  at  the  core level instead of the node level.  Masks will
               automatically be generated to bind the tasks to specific cores unless --cpu_bind=none is specified.
              NOTE:    This    option    is    not    supported    unless    SelectTypeParameters=CR_Core     or
              SelectTypeParameters=CR_Core_Memory is configured.

       --ntasks-per-socket=<ntasks>
              Request  the maximum ntasks be invoked on each socket.  Meant to be used with the --ntasks option.
              Related to --ntasks-per-node except at the socket level instead of the  node  level.   Masks  will
              automatically  be  generated  to  bind  the  tasks  to  specific sockets unless --cpu_bind=none is
              specified.   NOTE:  This  option  is  not  supported  unless   SelectTypeParameters=CR_Socket   or
              SelectTypeParameters=CR_Socket_Memory is configured.

       --ntasks-per-node=<ntasks>
              Request  the  maximum  ntasks  be invoked on each node.  Meant to be used with the --nodes option.
              This is related to --cpus-per-task=ncpus, but does not require knowledge of the actual  number  of
              cpus on each node.  In some cases, it is more convenient to be able to request that no more than a
              specific  number  of  tasks be invoked on each node.  Examples of this include submitting a hybrid
              MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while  allowing  the
              OpenMP  portion  to  utilize  all  of  the parallelism present in the node, or submitting a single
              setup/cleanup/monitoring job to each node of a pre-existing allocation as one step in a larger job
              script.
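
               For example, a sketch of the hybrid MPI/OpenMP case (the 16-CPU node size and application name
               are illustrative):

                   #!/bin/sh
                   #SBATCH --nodes=4
                   #SBATCH --ntasks-per-node=1
                   #SBATCH --cpus-per-task=16
                   # One MPI rank per node; OpenMP threads fill the node's remaining CPUs.
                   export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
                   srun ./hybrid_app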

       -O, --overcommit
              Overcommit resources.  Normally, sbatch will allocate  one  task  per  processor.   By  specifying
              --overcommit  you  are explicitly allowing more than one task per processor.  However no more than
              MAX_TASKS_PER_NODE tasks are permitted to execute per node.

       -o, --output=<filename pattern>
              Instruct SLURM to connect the batch script's standard output directly to the file  name  specified
              in the "filename pattern".  By default both standard output and standard error are directed to the
              same  file.   For  job arrays, the default file name is "slurm-%A_%a.out", "%A" is replaced by the
              job ID and "%a" with the array index.  For other jobs, the default file  name  is  "slurm-%j.out",
              where  the  "%j"  is  replaced  by  the job ID.  See the --input option for filename specification
              options.

       --open-mode=append|truncate
              Open the output and error files using append or truncate mode as specified.  The default value  is
              specified by the system configuration parameter JobFileAppend.

       -p, --partition=<partition_names>
              Request  a specific partition for the resource allocation.  If not specified, the default behavior
              is to allow the slurm controller to select the default  partition  as  designated  by  the  system
              administrator. If the job can use more than one partition, specify their names in a comma separate
              list and the one offering earliest initiation will be used.

       --profile=<all|none|[energy[,|task[,|lustre[,|network]]]]>
              enables  detailed  data collection by the acct_gather_profile plugin.  Detailed data are typically
              time-series that are stored in an HDF5 file for the job.

               All       All data types are collected.  (Cannot be combined with other values.)

               None      No data types are collected.  This is the default.  (Cannot be combined with other
                         values.)

              Energy    Energy data is collected.

              Task      Task (I/O, Memory, ...) data is collected.

              Lustre    Lustre data is collected.

              Network   Network (InfiniBand) data is collected.

       --propagate[=rlimits]
              Allows users to specify which of the modifiable (soft) resource limits to propagate to the compute
              nodes and apply to their jobs.  If rlimits is not specified, then  all  resource  limits  will  be
              propagated.   The  following rlimit names are supported by Slurm (although some options may not be
              supported on some systems):

              ALL       All limits listed below

              AS        The maximum address space for a process

              CORE      The maximum size of core file

              CPU       The maximum amount of CPU time

              DATA      The maximum size of a process's data segment

              FSIZE     The maximum size of files created. Note that if the user sets FSIZE  to  less  than  the
                        current size of the slurmd.log, job launches will fail with a 'File size limit exceeded'
                        error.

              MEMLOCK   The maximum size that may be locked into memory

              NOFILE    The maximum number of open files

              NPROC     The maximum number of processes available

              RSS       The maximum resident set size

              STACK     The maximum stack size

       -Q, --quiet
              Suppress informational messages from sbatch. Errors will still be displayed.

       --qos=<qos>
              Request a quality of service for the job.  QOS values can be defined for each user/cluster/account
              association  in  the  SLURM database.  Users will be limited to their association's defined set of
               qos's when the SLURM configuration parameter, AccountingStorageEnforce, includes "qos" in its
              definition.

       --requeue
              Specifies  that  the batch job should be requeued after node failure.  When a job is requeued, the
              batch script is initiated from its beginning.  Also see the --no-requeue option.   The  JobRequeue
              configuration parameter controls the default behavior on the cluster.

       --reservation=<name>
              Allocate resources for the job from the named reservation.

       -s, --share
               The job allocation can share nodes with other running jobs.  This is the opposite of --exclusive;
               whichever option is seen last on the command line will be used.  The default shared/exclusive
              behavior  depends  on system configuration and the partition's Shared option takes precedence over
               the job's option.  This option may result in the allocation being granted sooner than if the --share
              option  was  not  set and allow higher system utilization, but application performance will likely
              suffer due to competition for resources within a node.

       --signal=<sig_num>[@<sig_time>]
              When a job is within sig_time seconds of its end time, send it the signal  sig_num.   Due  to  the
              resolution  of  event  handling  by  SLURM,  the  signal may be sent up to 60 seconds earlier than
              specified.  sig_num may either be a signal number or name (e.g. "10" or  "USR1").   sig_time  must
               have an integer value between zero and 65535.  By default, no signal is sent before the job's end
              time.  If a sig_num is specified without any sig_time, the default time will be 60 seconds.
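
               For example, to have SLURM send SIGUSR1 to the job's steps roughly ten minutes before the time
               limit expires (assuming the application traps SIGUSR1 to write a checkpoint; the script name is
               a placeholder):

                   sbatch --signal=USR1@600 --time=04:00:00 my_script.sh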

       --sockets-per-node=<sockets>
              Restrict node selection to nodes with at least the specified number of  sockets.   See  additional
              information under -B option above when task/affinity plugin is enabled.

       --switches=<count>[@<max-time>]
              When  a  tree  topology  is  used,  this defines the maximum count of switches desired for the job
              allocation and optionally the maximum time to wait for that number of switches. If SLURM finds  an
              allocation  containing  more  switches  than the count specified, the job remains pending until it
               either finds an allocation with the desired switch count or the time limit expires.  If there is no
              switch  count  limit,  there  is  no  delay  in starting the job.  Acceptable time formats include
              "minutes",  "minutes:seconds",  "hours:minutes:seconds",  "days-hours",  "days-hours:minutes"  and
              "days-hours:minutes:seconds".   The  job's  maximum  time  delay  may  be  limited  by  the system
              administrator using the  SchedulerParameters  configuration  parameter  with  the  max_switch_wait
              parameter option.  The default max-time is the max_switch_wait SchedulerParameter.

       -t, --time=<time>
              Set  a limit on the total run time of the job allocation.  If the requested time limit exceeds the
              partition's time limit, the job will be left in a  PENDING  state  (possibly  indefinitely).   The
              default  time  limit  is the partition's default time limit.  When the time limit is reached, each
              task in each job step is sent SIGTERM followed  by  SIGKILL.   The  interval  between  signals  is
              specified  by  the  SLURM configuration parameter KillWait.  A time limit of zero requests that no
              time  limit  be  imposed.   Acceptable  time   formats   include   "minutes",   "minutes:seconds",
              "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".

       --tasks-per-node=<n>
              Specify the number of tasks to be launched per node.  Equivalent to --ntasks-per-node.

       --threads-per-core=<threads>
              Restrict  node  selection  to nodes with at least the specified number of threads per core.  NOTE:
              "Threads" refers to the number of processing  units  on  each  core  rather  than  the  number  of
              application  tasks to be launched per core.  See additional information under -B option above when
              task/affinity plugin is enabled.

       --time-min=<time>
              Set a minimum time limit on the job allocation.  If specified, the job may have its --time limit
              lowered to a value no lower than --time-min if doing so permits the job to begin execution earlier
              than  otherwise  possible.   The  job's  time limit will not be changed after the job is allocated
              resources.  This is performed by a backfill scheduling algorithm to allocate  resources  otherwise
              reserved  for higher priority jobs.  Acceptable time formats include "minutes", "minutes:seconds",
              "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".

       --tmp=<MB>
              Specify a minimum amount of temporary disk space.

       -u, --usage
              Display brief help message and exit.

       --uid=<user>
              Attempt to submit and/or run a job as user instead of the invoking user id.  The invoking user's
              credentials will be used to check access permissions for the target partition.  User root may use
              this option to run jobs as a normal user in a RootOnly partition, for example.  If run as root,
              sbatch will drop its permissions to the uid specified after node allocation is successful.  user
              may be the user name or numerical user ID.

       -V, --version
              Display version information and exit.

       -v, --verbose
              Increase the verbosity of sbatch's informational messages.  Multiple -v's  will  further  increase
              sbatch's verbosity.  By default only errors will be displayed.

       -w, --nodelist=<node name list>
              Request  a  specific  list  of node names.  The list may be specified as a comma-separated list of
              node names, or a range of node names (e.g. mynode[1-5,7,...]).  Duplicate node names in  the  list
              will be ignored.  The order of the node names in the list is not important; the node names will be
              sorted by SLURM.
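
              For example, the following sketch requests four specific nodes (node names hypothetical):

                     $ sbatch -N4 -w "mynode[1-3],mynode7" myscript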

       --wait-all-nodes=<value>
              Controls  when  the  execution  of the command begins.  By default the job will begin execution as
              soon as the allocation is made.

              0    Begin execution as soon as allocation can be made.  Do not wait for all nodes to be ready for
                   use (i.e. booted).

              1    Do not begin execution until all nodes are ready for use.
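
              For example, the following sketch delays execution until every allocated node has booted
              (script name illustrative):

                     $ sbatch --wait-all-nodes=1 myscript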

       --wckey=<wckey>
              Specify wckey to be used with job.  If TrackWCKey=no (default) in the  slurm.conf  this  value  is
              ignored.

       --wrap=<command string>
              Sbatch  will  wrap  the  specified  command  string in a simple "sh" shell script, and submit that
              script to the slurm controller.  When --wrap is used, a script  name  and  arguments  may  not  be
              specified on the command line; instead the sbatch-generated wrapper script is used.
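
              For example, the following submits a one-line job without writing a script file:

                     $ sbatch --wrap="srun hostname | sort"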

       -x, --exclude=<node name list>
              Explicitly exclude certain nodes from the resources granted to the job.

       The following options support Blue Gene systems, but may be applicable to other systems as well.

       --blrts-image=<path>
              Path to Blue Gene/L Run Time Supervisor, or blrts, image for bluegene block.  BGL only.  Default
              from bluegene.conf if not set.

       --cnload-image=<path>
              Path to compute node image for bluegene block.  BGP only.  Default from bluegene.conf if not set.

       --conn-type=<type>
              Require the block connection type to be of a certain type.  On Blue Gene the acceptable values of
              type are MESH, TORUS and NAV.  If NAV, or if not set, then SLURM will try to fit what the
              DefaultConnType is set to in bluegene.conf; if that isn't set, the default is TORUS.  You should
              not normally set this option.  If running on a BGP system and wanting to run in HTC mode (only for
              1 midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V for virtual node mode,
              and HTC_L for Linux mode.  For systems that allow a different connection type per dimension, a
              comma-separated list of connection types may be specified, one for each dimension (i.e. M,T,T,T
              will give you a torus connection in all dimensions except the first).

       -g, --geometry=<XxYxZ> | <AxXxYxZ>
              Specify the geometry requirements for the job.  On BlueGene/L and BlueGene/P systems there are
              three numbers giving dimensions in the X, Y and Z directions, while on BlueGene/Q systems there
              are four numbers giving dimensions in the A, X, Y and Z directions; this option cannot be used to
              allocate sub-blocks.  For example, "--geometry=1x2x3x4" specifies a block of nodes having
              1 x 2 x 3 x 4 = 24 nodes (actually midplanes on BlueGene).

       --ioload-image=<path>
              Path to io image for bluegene block.  BGP only.  Default from bluegene.conf if not set.

       --linux-image=<path>
              Path to linux image for bluegene block.  BGL only.  Default from bluegene.conf if not set.

       --mloader-image=<path>
              Path to mloader image for bluegene block.  Default from bluegene.conf if not set.

       -R, --no-rotate
              Disables rotation of the job's requested geometry in  order  to  fit  an  appropriate  block.   By
              default the specified geometry can rotate in three dimensions.

       --ramdisk-image=<path>
              Path to ramdisk image for bluegene block.  BGL only.  Default from bluegene.conf if not set.

       --reboot
              Force the allocated nodes to reboot before starting the job.

INPUT ENVIRONMENT VARIABLES

       Upon  startup,  sbatch will read and handle the options set in the following environment variables.  Note
       that environment variables will override any options set in a batch script, and command line options will
       override any environment variables.
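
       For example, an environment variable can supply a default that a later command line option
       still overrides (partition names hypothetical):

              $ export SBATCH_PARTITION=debug
              $ sbatch myscript                # submitted to the "debug" partition
              $ sbatch -p batch myscript       # the command line option wins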

       SBATCH_ACCOUNT        Same as -A, --account

       SBATCH_ACCTG_FREQ     Same as --acctg-freq

       SBATCH_ARRAY_INX      Same as -a, --array

       SBATCH_BLRTS_IMAGE    Same as --blrts-image

       SLURM_CHECKPOINT      Same as --checkpoint

       SLURM_CHECKPOINT_DIR  Same as --checkpoint-dir

       SBATCH_CLUSTERS or SLURM_CLUSTERS
                             Same as --clusters

       SBATCH_CNLOAD_IMAGE   Same as --cnload-image

       SBATCH_CONN_TYPE      Same as --conn-type

       SBATCH_CPU_BIND       Same as --cpu_bind

       SBATCH_DEBUG          Same as -v, --verbose

       SBATCH_DISTRIBUTION   Same as -m, --distribution

       SBATCH_EXCLUSIVE      Same as --exclusive

       SLURM_EXIT_ERROR      Specifies the exit code generated when a SLURM error occurs (e.g. invalid options).
                             This can be used by a script to distinguish application  exit  codes  from  various
                             SLURM error conditions.

       SBATCH_EXPORT         Same as --export

       SBATCH_GEOMETRY       Same as -g, --geometry

       SBATCH_GET_USER_ENV   Same as --get-user-env

       SBATCH_IGNORE_PBS     Same as --ignore-pbs

       SBATCH_IMMEDIATE      Same as -I, --immediate

       SBATCH_IOLOAD_IMAGE   Same as --ioload-image

       SBATCH_JOBID          Same as --jobid

       SBATCH_JOB_NAME       Same as -J, --job-name

       SBATCH_LINUX_IMAGE    Same as --linux-image

       SBATCH_MEM_BIND       Same as --mem_bind

       SBATCH_MLOADER_IMAGE  Same as --mloader-image

       SBATCH_NETWORK        Same as --network

       SBATCH_NO_REQUEUE     Same as --no-requeue

       SBATCH_NO_ROTATE      Same as -R, --no-rotate

       SBATCH_OPEN_MODE      Same as --open-mode

       SBATCH_OVERCOMMIT     Same as -O, --overcommit

       SBATCH_PARTITION      Same as -p, --partition

       SBATCH_PROFILE        Same as --profile

       SBATCH_QOS            Same as --qos

       SBATCH_RAMDISK_IMAGE  Same as --ramdisk-image

       SBATCH_RESERVATION    Same as --reservation

       SBATCH_REQ_SWITCH     When  a  tree  topology is used, this defines the maximum count of switches desired
                             for the job allocation and optionally the maximum time to wait for that  number  of
                             switches. See --switches

       SBATCH_REQUEUE        Same as --requeue

       SBATCH_SIGNAL         Same as --signal

       SBATCH_TIMELIMIT      Same as -t, --time

       SBATCH_WAIT_ALL_NODES Same as --wait-all-nodes

       SBATCH_WAIT4SWITCH    Max time waiting for requested switches. See --switches

       SBATCH_WCKEY          Same as --wckey

       SLURM_STEP_KILLED_MSG_NODE_ID=ID
                              If set, only the specified node will log when the job or step is killed by a
                              signal.

OUTPUT ENVIRONMENT VARIABLES

       The SLURM controller will set the following variables in the environment of the batch script.

       BASIL_RESERVATION_ID
              The reservation ID on Cray systems running ALPS/BASIL only.

       MPIRUN_NOALLOCATE
              Do not allocate a block.  Blue Gene/L and Blue Gene/P systems only.

       MPIRUN_NOFREE
              Do not free a block.  Blue Gene/L and Blue Gene/P systems only.

       MPIRUN_PARTITION
              The block name on Blue Gene systems only.

       SLURM_ARRAY_TASK_ID
              Job array ID (index) number.

       SLURM_ARRAY_JOB_ID
              Job array's master job ID number.

       SLURM_CHECKPOINT_IMAGE_DIR
              Directory into which checkpoint images should be written if specified on the execute line.

       SLURM_CPU_BIND
              Set to value of the --cpu_bind option.

       SLURM_CPU_BIND_LIST
              --cpu_bind  map  or  mask  list (list of SLURM CPU IDs or masks for this node, CPU_ID = Board_ID x
              threads_per_board + Socket_ID x threads_per_socket + Core_ID x threads_per_core + Thread_ID).

       SLURM_CPUS_ON_NODE
              Number of CPUs on the allocated node.

       SLURM_DISTRIBUTION
              Same as -m, --distribution

       SLURM_GTIDS
              Global task IDs running on this node.  Zero  origin and comma separated.

       SLURM_JOB_ID (and SLURM_JOBID for backwards compatibility)
              The ID of the job allocation.

       SLURM_JOB_CPUS_PER_NODE
              Count of processors available to the job on this node.  Note the  select/linear  plugin  allocates
              entire  nodes  to  jobs,  so  the  value  indicates  the  total  count  of  CPUs on the node.  The
              select/cons_res plugin allocates individual processors to  jobs,  so  this  number  indicates  the
              number of processors on this node allocated to the job.

       SLURM_JOB_DEPENDENCY
              Set to value of the --dependency option.

       SLURM_JOB_NAME
              Name of the job.

       SLURM_JOB_NODELIST (and SLURM_NODELIST for backwards compatibility)
              List of nodes allocated to the job.

       SLURM_JOB_NUM_NODES (and SLURM_NNODES for backwards compatibility)
              Total number of nodes in the job's resource allocation.

       SLURM_LOCALID
              Node local task ID for the process within a job.

       SLURM_MEM_BIND
              Set to value of the --mem_bind option.

       SLURM_NODE_ALIASES
              Sets of node name, communication address and hostname for nodes allocated to the job from the
              cloud.  Each element in the set is colon separated and each set is comma separated.  For example:
              SLURM_NODE_ALIASES=ec0:1.2.3.4:foo,ec1:1.2.3.5:bar

       SLURM_NODEID
              ID of the current node relative to the other nodes in the job allocation.

       SLURMD_NODENAME
              Name of the node running the batch script.

       SLURM_NTASKS (and SLURM_NPROCS for backwards compatibility)
              Same as -n, --ntasks

       SLURM_NTASKS_PER_CORE
              Number of tasks requested per core.  Only set if the --ntasks-per-core option is specified.

       SLURM_NTASKS_PER_NODE
              Number of tasks requested per node.  Only set if the --ntasks-per-node option is specified.

       SLURM_NTASKS_PER_SOCKET
              Number of tasks requested per socket.  Only set if the --ntasks-per-socket option is specified.

       SLURM_PRIO_PROCESS
              The scheduling priority (nice value) at the time of job submission.  This value is propagated
              to the spawned processes.

       SLURM_PROCID
              The MPI rank (or relative process ID) of the current process.

       SLURM_PROFILE
              Same as --profile

       SLURM_RESTART_COUNT
              If the job has been restarted due to system failure or has been explicitly requeued, this will be
              set to the number of times the job has been restarted.

       SLURM_SUBMIT_DIR
              The directory from which sbatch was invoked.

       SLURM_SUBMIT_HOST
              The hostname of the computer from which sbatch was invoked.

       SLURM_TASKS_PER_NODE
              Number  of tasks to be initiated on each node. Values are comma separated and in the same order as
              SLURM_NODELIST.  If two or more consecutive nodes are to have the same task count, that  count  is
              followed by "(x#)" where "#" is the repetition count.  For example, "SLURM_TASKS_PER_NODE=2(x3),1"
              indicates that the first three nodes will each execute two tasks and the fourth node will execute
              one task.

       SLURM_TASK_PID
              The process ID of the task being started.

       SLURM_TOPOLOGY_ADDR
              This is set only if the system has the topology/tree plugin configured.  The value will be set to
              the names of the network switches which may be involved in the job's communications, from the
              system's top level switch down to the leaf switch, and ending with the node name.  A period is
              used to separate each hardware component name.

       SLURM_TOPOLOGY_ADDR_PATTERN
              This is set only if the system has the topology/tree plugin configured.  The value will be set to
              the component types listed in SLURM_TOPOLOGY_ADDR.  Each component will be identified as either
              "switch" or "node".  A period is used to separate each hardware component type.

EXAMPLES

       Specify a batch script by filename on the command line.  The batch script specifies a 1 minute time limit
       for the job.

              $ cat myscript
              #!/bin/sh
              #SBATCH --time=1
              srun hostname |sort

              $ sbatch -N4 myscript
              sbatch: Submitted batch job 65537

              $ cat slurm-65537.out
              host1
              host2
              host3
              host4

       Pass a batch script to sbatch on standard input:

              $ sbatch -N4 <<EOF
              > #!/bin/sh
              > srun hostname |sort
              > EOF
              sbatch: Submitted batch job 65541

              $ cat slurm-65541.out
              host1
              host2
              host3
              host4
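
       As an illustrative sketch (the job ID and node names shown are hypothetical), a batch script
       can inspect the output environment variables described above:

              $ cat envscript
              #!/bin/sh
              echo "job $SLURM_JOB_ID on $SLURM_JOB_NUM_NODES node(s): $SLURM_JOB_NODELIST"
              srun hostname |sort

              $ sbatch -N2 envscript
              sbatch: Submitted batch job 65542

              $ cat slurm-65542.out
              job 65542 on 2 node(s): host[1-2]
              host1
              host2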

COPYING

       Copyright (C) 2006-2007 The Regents of the University of California.  Produced at Lawrence Livermore
       National Laboratory (cf. DISCLAIMER).
       Copyright (C) 2008-2010 Lawrence Livermore National Security.
       Copyright (C) 2010-2013 SchedMD LLC.

       This file is part of SLURM, a resource management program.  For details, see <http://slurm.schedmd.com/>.

       SLURM is free software; you can redistribute it and/or modify it under  the  terms  of  the  GNU  General
       Public License as published by the Free Software Foundation; either version 2 of the License, or (at your
       option) any later version.

       SLURM  is  distributed  in  the  hope  that it will be useful, but WITHOUT ANY WARRANTY; without even the
       implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.   See  the  GNU  General  Public
       License for more details.

SEE ALSO

       sinfo(1), sattach(1), salloc(1), squeue(1), scancel(1), scontrol(1), slurm.conf(5),
       sched_setaffinity(2), numa(3)

January 2013                                        SLURM 2.6                                          sbatch(1)