Provided by: slurm-llnl_2.1.0-1_i386

NAME

       sbatch - Submit a batch script to SLURM.

SYNOPSIS

       sbatch [options] script [args...]

DESCRIPTION

       sbatch  submits a batch script to SLURM.  The batch script may be given
       to sbatch through a file name on the command line, or if no  file  name
       is  specified,  sbatch  will  read in a script from standard input. The
       batch script may contain options preceded  with  "#SBATCH"  before  any
       executable commands in the script.
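
       For example, a minimal batch script might look like the following
       (the job name, resource values, and program name are illustrative):

              #!/bin/sh
              #SBATCH --job-name=example
              #SBATCH --ntasks=4
              #SBATCH --time=10
              #SBATCH --output=example-%j.out
              srun ./my_program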

       sbatch  exits  immediately after the script is successfully transferred
       to the SLURM controller and assigned a SLURM job ID.  The batch  script
       is  not  necessarily  granted  resources immediately, it may sit in the
       queue of pending jobs for  some  time  before  its  required  resources
       become available.

       When  the job allocation is finally granted for the batch script, SLURM
       runs a single copy of the batch script on the first node in the set  of
       allocated nodes.

OPTIONS

       -A, --account=<account>
              Charge  resources  used  by  this job to specified account.  The
              account is an arbitrary string. The account name may be  changed
              after job submission using the scontrol command.

       --acctg-freq=<seconds>
              Define  the  job accounting sampling interval.  This can be used
              to override  the  JobAcctGatherFrequency  parameter  in  SLURM’s
               configuration file, slurm.conf.  A value of zero disables the
               periodic job sampling and provides accounting information
              only  on  job  termination (reducing SLURM interference with the
              job).

        -B, --extra-node-info=<sockets[:cores[:threads]]>
              Request a specific allocation of resources with  details  as  to
              the number and type of computational resources within a cluster:
              number of sockets (or physical processors) per node,  cores  per
              socket,  and  threads  per  core.  The total amount of resources
              being requested is the product of all of the terms.  Each  value
              specified  is considered a minimum.  An asterisk (*) can be used
              as a placeholder indicating that all available resources of that
              type  are  to be utilized.  As with nodes, the individual levels
              can also be specified in separate options if desired:
                  --sockets-per-node=<sockets>
                  --cores-per-socket=<cores>
                  --threads-per-core=<threads>
              When  the  task/affinity  plugin  is  enabled,   specifying   an
              allocation  in  this  manner  also  instructs SLURM to use a CPU
              affinity mask to guarantee the request is filled  as  specified.
               NOTE: Support for these options is configuration dependent.
              The task/affinity plugin must be configured.  In addition either
              select/linear  or select/cons_res plugin must be configured.  If
              select/cons_res is configured,  it  must  have  a  parameter  of
              CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory.
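
               For example, the following requests at least two sockets per
               node, four cores per socket, and one thread per core (the
               script name is illustrative):
                  sbatch --extra-node-info=2:4:1 myscript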

       --begin=<time>
              Submit  the  batch  script  to the SLURM controller immediately,
              like normal, but tell the controller to defer the allocation  of
              the job until the specified time.

              Time may be of the form HH:MM:SS to run a job at a specific time
              of day (seconds are optional).  (If that time is  already  past,
              the  next day is assumed.)  You may also specify midnight, noon,
              or teatime (4pm) and you can have a time-of-day suffixed with AM
              or  PM  for running in the morning or the evening.  You can also
              say what day the job will be run, by specifying a  date  of  the
               form MMDDYY, MM/DD/YY, or YYYY-MM-DD. Combine date and time using
              the following format YYYY-MM-DD[THH:MM[:SS]]. You can also  give
              times  like  now + count time-units, where the time-units can be
              seconds (default), minutes, hours, days, or weeks  and  you  can
              tell  SLURM  to  run the job today with the keyword today and to
              run the job tomorrow with the keyword tomorrow.  The  value  may
              be changed after job submission using the scontrol command.  For
              example:
                 --begin=16:00
                 --begin=now+1hour
                 --begin=now+60           (seconds by default)
                 --begin=2010-01-20T12:34:00

              Notes on date/time specifications:
               -  Although  the  ’seconds’  field   of   the   HH:MM:SS   time
              specification is allowed by the code, note that the poll time of
              the SLURM scheduler is not precise enough to guarantee  dispatch
              of  the  job  on  the exact second.  The job will be eligible to
              start on the next poll following the specified time.  The  exact
              poll  interval  depends on the SLURM scheduler (e.g., 60 seconds
              with the default sched/builtin).
               -  If  no  time  (HH:MM:SS)  is  specified,  the   default   is
              (00:00:00).
               -  If a date is specified without a year (e.g., MM/DD) then the
              current year is assumed, unless the  combination  of  MM/DD  and
              HH:MM:SS  has  already  passed  for that year, in which case the
              next year is used.

       --checkpoint=<time>
               Specifies the interval between creating checkpoints of the job
               step.  By default, no checkpoints will be created for the job step.
              Acceptable time formats  include  "minutes",  "minutes:seconds",
              "hours:minutes:seconds",  "days-hours", "days-hours:minutes" and
              "days-hours:minutes:seconds".

       --checkpoint-dir=<directory>
              Specifies the  directory  into  which  the  job  or  job  step’s
               checkpoint should be written (used by the checkpoint/blcr and
              checkpoint/xlch plugins only).  The default value is the current
              working  directory.   Checkpoint  files  will  be  of  the  form
              "<job_id>.ckpt" for jobs and "<job_id>.<step_id>.ckpt"  for  job
              steps.

       --comment=<string>
              An arbitrary comment.

       -C, --constraint=<list>
              Specify  a  list  of  constraints.  The constraints are features
              that have been assigned to the nodes by the slurm administrator.
              The  list of constraints may include multiple features separated
              by ampersand (AND) and/or  vertical  bar  (OR)  operators.   For
              example:             --constraint="opteron&video"             or
              --constraint="fast|faster".  In the first  example,  only  nodes
              having  both  the feature "opteron" AND the feature "video" will
              be used.  There is no mechanism to specify  that  you  want  one
              node  with  feature  "opteron"  and  another  node  with feature
              "video" in that case that no node has both  features.   If  only
              one  of  a  set  of  possible  options  should  be  used for all
              allocated nodes, then  use  the  OR  operator  and  enclose  the
              options     within     square     brackets.      For    example:
              "--constraint=[rack1|rack2|rack3|rack4]"  might   be   used   to
              specify that all nodes must be allocated on a single rack of the
              cluster, but any of those four racks can be used.  A request can
              also  specify  the  number  of nodes needed with some feature by
              appending an asterisk and count after  the  feature  name.   For
              example    "sbatch   --nodes=16   --constraint=graphics*4   ..."
               indicates that the job requires 16 nodes and that at least four
              of  those  nodes  must have the feature "graphics."  Constraints
              with node counts may only be combined with AND operators.  If no
              nodes have the requested features, then the job will be rejected
              by the slurm job manager.

       --contiguous
              If set, then the allocated nodes must  form  a  contiguous  set.
              Not honored with the topology/tree or topology/3d_torus plugins,
              both of which can modify the node ordering.

       --cpu_bind=[{quiet,verbose},]type
              Bind tasks to CPUs. Used only when the task/affinity  plugin  is
              enabled.    The   configuration  parameter  TaskPluginParam  may
              override these options.   For  example,  if  TaskPluginParam  is
              configured  to  bind to cores, your job will not be able to bind
              tasks to sockets.  NOTE: To have  SLURM  always  report  on  the
              selected  CPU  binding for all commands executed in a shell, you
              can  enable  verbose  mode   by   setting   the   SLURM_CPU_BIND
              environment variable value to "verbose".

              The  following  informational environment variables are set when
              --cpu_bind is in use:
                      SLURM_CPU_BIND_VERBOSE
                      SLURM_CPU_BIND_TYPE
                      SLURM_CPU_BIND_LIST

               See the ENVIRONMENT VARIABLES section for a more detailed
              description of the individual SLURM_CPU_BIND* variables.

              When  using --cpus-per-task to run multithreaded tasks, be aware
              that CPU binding is inherited from the parent  of  the  process.
              This  means that the multithreaded task should either specify or
              clear the CPU binding itself to avoid having all threads of  the
              multithreaded   task  use  the  same  mask/CPU  as  the  parent.
              Alternatively, fat masks (masks  which  specify  more  than  one
              allowed  CPU)  could  be  used for the tasks in order to provide
              multiple CPUs for the multithreaded tasks.

              By default, a job step has access to every CPU allocated to  the
              job.   To  ensure  that  distinct CPUs are allocated to each job
               step, use the --exclusive option.

              If the job step allocation includes an allocation with a  number
              of sockets, cores, or threads equal to the number of tasks to be
              started  then  the  tasks  will  by  default  be  bound  to  the
              appropriate  resources.   Disable  this  mode  of  operation  by
               explicitly setting "--cpu_bind=none".

              Note that a job step can be allocated different numbers of  CPUs
              on each node or be allocated CPUs not starting at location zero.
              Therefore one of the options which  automatically  generate  the
              task  binding  is  recommended.   Explicitly  specified masks or
              bindings are only honored when the job step has  been  allocated
              every available CPU on the node.

              Binding  a task to a NUMA locality domain means to bind the task
              to the set of CPUs that belong to the NUMA  locality  domain  or
              "NUMA  node".   If  NUMA  locality  domain  options  are used on
              systems with no NUMA support, then each socket is  considered  a
              locality domain.

              Supported options include:

              q[uiet]
                     Quietly bind before task runs (default)

              v[erbose]
                     Verbosely report binding before task runs

              no[ne] Do not bind tasks to CPUs (default)

              rank   Automatically  bind  by task rank.  Task zero is bound to
                     socket (or core or  thread)  zero,  etc.   Not  supported
                     unless the entire node is allocated to the job.

              map_cpu:<list>
                     Bind  by  mapping  CPU  IDs  to  tasks as specified where
                     <list> is  <cpuid1>,<cpuid2>,...<cpuidN>.   CPU  IDs  are
                     interpreted  as  decimal  values unless they are preceded
                     with  ’0x’  in  which  case  they  are   interpreted   as
                     hexadecimal values.  Not supported unless the entire node
                     is allocated to the job.

              mask_cpu:<list>
                     Bind by setting CPU masks on  tasks  as  specified  where
                     <list>  is  <mask1>,<mask2>,...<maskN>.   CPU  masks  are
                     always interpreted  as  hexadecimal  values  but  can  be
                     preceded with an optional ’0x’.

              sockets
                     Automatically  generate  masks  binding tasks to sockets.
                     If the  number  of  tasks  differs  from  the  number  of
                     allocated sockets this can result in sub-optimal binding.

              cores  Automatically generate masks binding tasks to cores.   If
                     the  number of tasks differs from the number of allocated
                     cores this can result in sub-optimal binding.

              threads
                     Automatically generate masks binding  tasks  to  threads.
                     If  the  number  of  tasks  differs  from  the  number of
                     allocated threads this can result in sub-optimal binding.

              ldoms  Automatically   generate  masks  binding  tasks  to  NUMA
                     locality domains.  If the number of  tasks  differs  from
                     the  number of allocated locality domains this can result
                     in sub-optimal binding.

              help   Show this help message
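
               For example, the following binds tasks to cores and reports the
               chosen binding (the task count and script name are
               illustrative):
                  sbatch --ntasks=8 --cpu_bind=verbose,cores myscript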

       -c, --cpus-per-task=<ncpus>
              Advise the SLURM controller that ensuing job steps will  require
              ncpus  number  of processors per task.  Without this option, the
              controller will just try to allocate one processor per task.

              For instance, consider an application that  has  4  tasks,  each
               requiring 3 processors.  If our cluster consists of
               quad-processor nodes and we simply ask for 12 processors, the
               controller might give us only 3 nodes.  However, by using the
               --cpus-per-task=3 option, the controller knows that each task
              requires  3 processors on the same node, and the controller will
              grant an allocation of 4 nodes, one for each of the 4 tasks.
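
               The example above corresponds to a submission such as the
               following (the script name is illustrative):
                  sbatch --ntasks=4 --cpus-per-task=3 myscript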

       -d, --dependency=<dependency_list>
              Defer the start of this job  until  the  specified  dependencies
               have been satisfied.  <dependency_list> is of the form
              <type:job_id[:job_id][,type:job_id[:job_id]]>.   Many  jobs  can
              share  the  same  dependency  and  these jobs may even belong to
              different  users. The  value may be changed after job submission
              using the scontrol command.

              after:job_id[:jobid...]
                     This  job  can  begin  execution after the specified jobs
                     have begun execution.

              afterany:job_id[:jobid...]
                     This job can begin execution  after  the  specified  jobs
                     have terminated.

              afternotok:job_id[:jobid...]
                     This  job  can  begin  execution after the specified jobs
                     have terminated in some failed state (non-zero exit code,
                     node failure, timed out, etc).

              afterok:job_id[:jobid...]
                     This  job  can  begin  execution after the specified jobs
                      have successfully executed (ran to completion with an
                      exit code of zero).

              singleton
                     This   job  can  begin  execution  after  any  previously
                     launched jobs sharing the same job  name  and  user  have
                     terminated.
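
               For example, a second job can be made to wait for the
               successful completion of a first job by capturing the job ID
               from sbatch's "Submitted batch job" output (the script names
               are illustrative):
                  jobid=$(sbatch first.sh | awk '{print $4}')
                  sbatch --dependency=afterok:$jobid second.sh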

       -D, --workdir=<directory>
              Set  the  working  directory  of  the  batch script to directory
               before it is executed.

       -e, --error=<filename pattern>
              Instruct SLURM to connect  the  batch  script’s  standard  error
              directly  to  the file name specified in the "filename pattern".
              See the --input option for filename specification options.

       --exclusive
              The job allocation cannot share nodes with other  running  jobs.
               This is the opposite of --share; whichever option is seen last on
              the  command  line  will  win.   (The  default  shared/exclusive
              behaviour depends on system configuration.)

       -F, --nodefile=<node file>
               Much like --nodelist, but the list is contained in a file named
               node file.  The node names of the list may also span
              multiple  lines in the file.    Duplicate node names in the file
              will be ignored.  The order of the node names in the list is not
              important; the node names will be sorted by SLURM.

       --get-user-env[=timeout][mode]
              This  option  will tell sbatch to retrieve the login environment
              variables for the user  specified  in  the  --uid  option.   The
              environment variables are retrieved by running something of this
              sort "su - <username> -c /usr/bin/env" and parsing  the  output.
              Be  aware that any environment variables already set in sbatch’s
              environment will take precedence over any environment  variables
              in the user’s login environment. Clear any environment variables
              before calling sbatch that you do not  want  propagated  to  the
              spawned  program.   The  optional  timeout  value is in seconds.
               Default value is 8 seconds.  The optional mode value controls the
               "su" options.  With a mode value of "S", "su" is executed
               without the "-" option.  With a mode value of "L", "su" is
               executed with the "-" option, replicating the login environment.
               If mode is not specified, the mode established at SLURM build
               time is used.  Examples of use include "--get-user-env",
               "--get-user-env=10", "--get-user-env=10L", and
               "--get-user-env=S".  NOTE: This option only works if the caller
              has an effective uid of  "root".   This  option  was  originally
              created for use by Moab.

       --gid=<group>
              If  sbatch  is run as root, and the --gid option is used, submit
              the job with group’s group access permissions.  group may be the
              group name or the numerical group ID.

       -h, --help
              Display help information and exit.

       --hint=<type>
              Bind tasks according to application hints

              compute_bound
                     Select  settings  for compute bound applications: use all
                     cores in each socket

              memory_bound
                     Select settings for memory bound applications:  use  only
                     one core in each socket

              [no]multithread
                     [don’t]  use  extra  threads with in-core multi-threading
                     which can benefit communication intensive applications

              help   show this help message

       -I, --immediate
              The batch script will only be submitted to the controller if the
              resources  necessary to grant its job allocation are immediately
              available.  If the job allocation will have to wait in  a  queue
              of pending jobs, the batch script will not be submitted.

       -i, --input=<filename pattern>
              Instruct  SLURM  to  connect  the  batch script’s standard input
              directly to the file name specified in the "filename pattern".

              By default, "/dev/null" is open on the batch  script’s  standard
              input  and  both standard output and standard error are directed
              to a file of the name "slurm-%j.out", where the "%j" is replaced
              with the job allocation number, as described below.

              The  filename  pattern  may  contain  one  or  more  replacement
              symbols, which are a percent sign "%" followed by a letter (e.g.
              %j).

              Supported replacement symbols are:
                 %j     Job allocation number.
                 %N     Node  name.   Only  one file is created, so %N will be
                        replaced by the name of the first  node  in  the  job,
                        which is the one that runs the script.
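
               For example, the following writes standard output and standard
               error to files named after the job allocation number (the
               script name is illustrative):
                  sbatch --output=job-%j.out --error=job-%j.err myscript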

       -J, --job-name=<jobname>
              Specify  a  name for the job allocation. The specified name will
              appear along with the job id number when querying  running  jobs
              on  the  system. The default is the name of the batch script, or
              just "sbatch" if the script is read on sbatch’s standard  input.

       --jobid=<jobid>
              Allocate  resources  as  the specified job id.  NOTE: Only valid
              for user root.

       -k, --no-kill
               Do not automatically terminate a job if one of the nodes it has
               been allocated fails.  The user will assume the responsibilities
              for fault-tolerance should a node fail.  When there  is  a  node
              failure,  any  active  job steps (usually MPI jobs) on that node
              will almost certainly suffer a fatal error, but with  --no-kill,
              the  job  allocation  will not be revoked so the user may launch
              new job steps on the remaining nodes in their allocation.

              By default SLURM terminates the entire  job  allocation  if  any
              node fails in its range of allocated nodes.

       -L, --licenses=<license>
              Specification  of  licenses (or other resources available on all
              nodes of the cluster) which  must  be  allocated  to  this  job.
              License  names  can  be  followed  by an asterisk and count (the
              default count is one).  Multiple license names should  be  comma
              separated (e.g.  "--licenses=foo*4,bar").

        -m, --distribution=<block|cyclic|arbitrary|plane=<options>>
               Specify an alternate distribution method for remote processes.
               In sbatch, this only sets environment variables that will be
               used by subsequent srun requests.
              block  The block distribution method will distribute tasks to  a
                     node  such  that  consecutive  tasks  share  a  node. For
                     example, consider an allocation of three nodes each  with
                     two  cpus.  A  four-task  block distribution request will
                     distribute those tasks to the nodes with  tasks  one  and
                     two on the first node, task three on the second node, and
                     task four on the third node.  Block distribution  is  the
                     default  behavior  if  the  number  of  tasks exceeds the
                     number of allocated nodes.
              cyclic The cyclic distribution method will distribute tasks to a
                     node  such  that  consecutive  tasks are distributed over
                     consecutive  nodes  (in  a  round-robin   fashion).   For
                     example,  consider an allocation of three nodes each with
                     two cpus. A four-task cyclic  distribution  request  will
                     distribute  those  tasks  to the nodes with tasks one and
                     four on the first node, task two on the second node,  and
                     task  three on the third node. Cyclic distribution is the
                     default behavior if the number of tasks is no larger than
                     the number of allocated nodes.
              plane  The  tasks are distributed in blocks of a specified size.
                     The options include a number representing the size of the
                     task   block.    This   is   followed   by   an  optional
                     specification of the task distribution  scheme  within  a
                     block of tasks and between the blocks of tasks.  For more
                     details (including examples and diagrams), please see
                     https://computing.llnl.gov/linux/slurm/mc_support.html
                     and
                     https://computing.llnl.gov/linux/slurm/dist_plane.html.
              arbitrary
                      The arbitrary method of distribution will allocate
                      processes in order as listed in the file designated by
                      the environment variable SLURM_HOSTFILE.  If this
                      variable is set, it will override any other method
                      specified.  If it is not set, the method will default to
                      block.  The hostfile must contain at minimum the number
                      of hosts requested.  If tasks are requested (-n), they
                      will be laid out on the nodes in the order of the file.
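
               For example, a sketch of using the arbitrary method inside a
               batch script, where the hostfile path, task count, and
               application name are illustrative:
                  export SLURM_HOSTFILE=/path/to/hostfile
                  srun --distribution=arbitrary -n 3 ./my_program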

       --mail-type=<type>
              Notify user by email when certain event types occur.  Valid type
              values are BEGIN, END, FAIL, ALL (any state change).   The  user
              to be notified is indicated with --mail-user.

       --mail-user=<user>
              User  to  receive email notification of state changes as defined
              by --mail-type.  The default value is the submitting user.

       --mem=<MB>
              Specify the real memory required per node in MegaBytes.  Default
              value  is  DefMemPerNode and the maximum value is MaxMemPerNode.
               If configured, both parameters can be seen using the scontrol
              show  config command.  This parameter would generally be used if
              whole nodes are allocated  to  jobs  (SelectType=select/linear).
              Also  see  --mem-per-cpu.   --mem and --mem-per-cpu are mutually
              exclusive.

       --mem-per-cpu=<MB>
               Minimum memory required per allocated CPU in MegaBytes.  Default
               value is DefMemPerCPU and the maximum value is MaxMemPerCPU.  If
               configured, both parameters can be seen using the scontrol
              show  config command.  This parameter would generally be used if
              individual     processors     are     allocated     to      jobs
              (SelectType=select/cons_res).    Also   see  --mem.   --mem  and
              --mem-per-cpu are mutually exclusive.
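
               For example, the following requests 16 tasks and 1024 MegaBytes
               of memory per allocated CPU (the script name is illustrative):
                  sbatch --ntasks=16 --mem-per-cpu=1024 myscript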

       --mem_bind=[{quiet,verbose},]type
              Bind tasks to memory. Used only when the task/affinity plugin is
              enabled  and the NUMA memory functions are available.  Note that
              the resolution of CPU and memory  binding  may  differ  on  some
              architectures.  For example, CPU binding may be performed at the
              level of the cores within a processor while memory binding  will
              be  performed  at  the  level  of nodes, where the definition of
              "nodes" may differ from system to system. The use  of  any  type
              other  than  "none"  or "local" is not recommended.  If you want
              greater control, try running a simple test code with the options
              "--cpu_bind=verbose,none  --mem_bind=verbose,none"  to determine
              the specific configuration.

              NOTE: To have SLURM always report on the selected memory binding
              for  all  commands  executed  in a shell, you can enable verbose
              mode by setting the SLURM_MEM_BIND environment variable value to
              "verbose".

              The  following  informational environment variables are set when
               --mem_bind is in use:

                      SLURM_MEM_BIND_VERBOSE
                      SLURM_MEM_BIND_TYPE
                      SLURM_MEM_BIND_LIST

              See the  ENVIRONMENT  VARIABLES  section  for  a  more  detailed
              description of the individual SLURM_MEM_BIND* variables.

              Supported options include:
              q[uiet]
                     quietly bind before task runs (default)
              v[erbose]
                     verbosely report binding before task runs
              no[ne] don’t bind tasks to memory (default)
              rank   bind by task rank (not recommended)
              local  Use memory local to the processor in use
              map_mem:<list>
                     bind  by  mapping  a  node’s memory to tasks as specified
                     where <list> is <cpuid1>,<cpuid2>,...<cpuidN>.   CPU  IDs
                     are   interpreted  as  decimal  values  unless  they  are
                      preceded with ’0x’ in which case they are interpreted as
                     hexadecimal values (not recommended)
              mask_mem:<list>
                     bind  by setting memory masks on tasks as specified where
                      <list> is <mask1>,<mask2>,...<maskN>.  Memory masks are
                     always  interpreted  as  hexadecimal  values.   Note that
                     masks must be preceded with a ’0x’ if  they  don’t  begin
                     with  [0-9] so they are seen as numerical values by srun.
              help   show this help message

       --mincores=<n>
              Specify a minimum number of cores per socket.

       --mincpus=<n>
              Specify a minimum number of logical cpus/processors per node.

       --minsockets=<n>
              Specify a minimum number of sockets  (physical  processors)  per
              node.

       --minthreads=<n>
              Specify a minimum number of threads per core.

       -N, --nodes=<minnodes[-maxnodes]>
              Request  that  a  minimum of minnodes nodes be allocated to this
              job.  The scheduler may decide to launch the job  on  more  than
              minnodes  nodes.   A  limit  on  the  maximum  node count may be
              specified with maxnodes (e.g. "--nodes=2-4").  The  minimum  and
              maximum  node count may be the same to specify a specific number
              of nodes (e.g. "--nodes=2-2" will  ask  for  two  and  ONLY  two
              nodes).  The partition’s node limits supersede those of the job.
              If a job’s node limits are outside of the  range  permitted  for
              its  associated  partition,  the  job  will be left in a PENDING
              state.  This permits possible execution at a  later  time,  when
              the partition limit is changed.  If a job node limit exceeds the
              number of nodes configured in the partition,  the  job  will  be
              rejected.   Note that the environment variable SLURM_NNODES will
              be set to the count of nodes actually allocated to the job.  See
              the  ENVIRONMENT VARIABLES  section for more information.  If -N
              is not specified, the default behavior  is  to  allocate  enough
              nodes to satisfy the requirements of the -n and -c options.  The
              job will be allocated as many nodes as possible within the range
              specified and without delaying the initiation of the job.

       -n, --ntasks=<number>
               sbatch does not launch tasks; it requests an allocation of
              resources and submits a batch script. This  option  advises  the
              SLURM  controller that job steps run within this allocation will
              launch a maximum of number tasks and  sufficient  resources  are
              allocated  to  accomplish  this.   The  default  is one task per
              socket   or   core   (depending   upon   the   value   of    the
              SelectTypeParameters parameter in slurm.conf), but note that the
              --cpus-per-task option will change this default.
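
               For example, the following requests 8 tasks with 2 CPUs each on
               anywhere from two to four nodes (the script name is
               illustrative):
                  sbatch --nodes=2-4 --ntasks=8 --cpus-per-task=2 myscript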

       --network=<type>
              Specify the communication protocol to be used.  This  option  is
              supported  on  AIX  systems.  Since POE is used to launch tasks,
              this option is not normally  used  or  is  specified  using  the
              SLURM_NETWORK  environment variable.  The interpretation of type
              is system dependent.  For systems with an IBM Federation switch,
              the  following  comma-separated  and  case insensitive types are
              recognized: IP (the default is user-space),  SN_ALL,  SN_SINGLE,
              BULK_XFER  and  adapter  names   (e.g. SNI0 and SNI1).  For more
               information on IBM systems, see the poe documentation on the
               environment variables MP_EUIDEVICE and MP_USE_BULK_XFER.  Note
               that only four job steps may be active at once on a node with
              the BULK_XFER option due to limitations in the Federation switch
              driver.

       --nice[=adjustment]
              Run the job with an adjusted scheduling priority  within  SLURM.
              With no adjustment value the scheduling priority is decreased by
              100. The adjustment range is from -10000 (highest  priority)  to
              10000  (lowest  priority).  Only  privileged users can specify a
              negative adjustment. NOTE: This option is presently  ignored  if
              SchedulerType=sched/wiki or SchedulerType=sched/wiki2.

       --no-requeue
              Specifies  that  the batch job should not be requeued after node
              failure.  Setting this option will prevent system administrators
              from  being  able  to  restart  the  job  (for  example, after a
              scheduled downtime).  When a job is requeued, the  batch  script
              is initiated from its beginning.  Also see the --requeue option.
              The JobRequeue  configuration  parameter  controls  the  default
              behavior on the cluster.

       --ntasks-per-core=<ntasks>
              Request  that  no  more  than  ntasks  be  invoked on each core.
              Similar to --ntasks-per-node except at the core level instead of
              the  node  level.  Masks will automatically be generated to bind
               the tasks to specific cores unless --cpu_bind=none is specified.
              NOTE:     This     option     is     not     supported    unless
              SelectTypeParameters=CR_Core                                  or
              SelectTypeParameters=CR_Core_Memory is configured.

       --ntasks-per-socket=<ntasks>
              Request  that  no  more  than  ntasks be invoked on each socket.
              Similar to --ntasks-per-node except at the socket level  instead
              of  the  node  level.   Masks will automatically be generated to
              bind the tasks to specific  sockets  unless  --cpu_bind=none  is
              specified.    NOTE:   This   option   is  not  supported  unless
              SelectTypeParameters=CR_Socket                                or
              SelectTypeParameters=CR_Socket_Memory is configured.

       --ntasks-per-node=<ntasks>
              Request  that no more than ntasks be invoked on each node.  This
              is similar to using --cpus-per-task=ncpus but does  not  require
              knowledge  of  the  actual number of cpus on each node.  In some
              cases, it is more convenient to be able to request that no  more
              than  a  specific  number  of  ntasks  be  invoked on each node.
              Examples of this include  submitting  a  hybrid  MPI/OpenMP  app
              where  only  one MPI "task/rank" should be assigned to each node
              while  allowing  the  OpenMP  portion  to  utilize  all  of  the
              parallelism   present  in  the  node,  or  submitting  a  single
              setup/cleanup/monitoring job to  each  node  of  a  pre-existing
              allocation as one step in a larger job script.
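
               For example, a sketch of a hybrid MPI/OpenMP submission with
               one task per node and the remaining parallelism given to OpenMP
               (the node count, CPU count, and application name are
               illustrative):
                  #!/bin/sh
                  #SBATCH --nodes=4
                  #SBATCH --ntasks-per-node=1
                  #SBATCH --cpus-per-task=8
                  export OMP_NUM_THREADS=8
                  srun ./hybrid_app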

       -O, --overcommit
              Overcommit  resources.   Normally, sbatch will allocate one task
              per processor.  By specifying --overcommit  you  are  explicitly
              allowing more than one task per processor.  However no more than
              MAX_TASKS_PER_NODE tasks are permitted to execute per node.

       -o, --output=<filename pattern>
              Instruct SLURM to connect the  batch  script’s  standard  output
              directly  to  the file name specified in the "filename pattern".
              See the --input option for filename specification options.

       --open-mode=append|truncate
              Open the output and error files using append or truncate mode as
              specified.   The  default  value  is  specified  by  the  system
              configuration parameter JobFileAppend.

       -p, --partition=<partition name>
              Request a specific partition for the  resource  allocation.   If
              not  specified,  the  default  behaviour  is  to allow the slurm
              controller to select the default partition as designated by  the
              system administrator.

       --propagate[=rlimits]
              Allows  users to specify which of the modifiable (soft) resource
              limits to propagate to the compute  nodes  and  apply  to  their
              jobs.   If  rlimits  is  not specified, then all resource limits
              will be propagated.  The following rlimit names are supported by
              Slurm  (although  some  options  may  not  be  supported on some
              systems):
              ALL       All limits listed below
               AS        The maximum address space for a process
              CORE      The maximum size of core file
              CPU       The maximum amount of CPU time
              DATA      The maximum size of a process’s data segment
              FSIZE     The maximum size of files created
              MEMLOCK   The maximum size that may be locked into memory
              NOFILE    The maximum number of open files
              NPROC     The maximum number of processes available
              RSS       The maximum resident set size
              STACK     The maximum stack size
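
               For example, the following propagates only the locked-memory
               limit to the compute nodes (the script name is illustrative):
                  sbatch --propagate=MEMLOCK myscript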

       -Q, --quiet
              Suppress informational messages from sbatch. Errors  will  still
              be displayed.

       --qos=<qos>
              Request  a  quality  of  service for the job.  QOS values can be
              defined for each user/cluster/account association in  the  SLURM
              database.   Users will be limited to their association’s defined
              set  of  qos’s   when   the   SLURM   configuration   parameter,
               AccountingStorageEnforce, includes "qos" in its definition.

       --requeue
              Specifies  that  the  batch  job  should  be requeued after node
              failure.  When a job is requeued, the batch script is  initiated
              from  its  beginning.   Also  see  the --no-requeue option.  The
              JobRequeue configuration parameter controls the default behavior
              on the cluster.

       --reservation=<name>
              Allocate resources for the job from the named reservation.

       -s, --share
              The  job  allocation  can  share  nodes with other running jobs.
              (The  default  shared/exclusive  behaviour  depends  on   system
               configuration.)  This may result in the allocation being granted
              sooner than if the --share option was not set and  allow  higher
              system  utilization,  but  application  performance  will likely
              suffer due to competition for resources within a node.

       --signal=<sig_num>[@<sig_time>]
              When a job is within sig_time seconds of its end time,  send  it
              the  signal sig_num.  Due to the resolution of event handling by
              SLURM, the signal may be sent up  to  60  seconds  earlier  than
              specified.   Both  sig_time and sig_num must have integer values
              between zero and 65535.  By default, no signal  is  sent  before
              the  job’s  end  time.   If  a  sig_num is specified without any
              sig_time, the default time will be 60 seconds.
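
               For example, the following asks SLURM to send signal 10
               (SIGUSR1 on most Linux systems) to the job roughly five minutes
               before its time limit is reached (the script name is
               illustrative):
                  sbatch --signal=10@300 myscript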

       -t, --time=<time>
              Set a limit on the total run time of the job allocation.  If the
              requested time limit exceeds the partition’s time limit, the job
              will be left in a PENDING state  (possibly  indefinitely).   The
              default time limit is the partition’s time limit.  When the time
              limit is reached, each task in each job  step  is  sent  SIGTERM
              followed  by SIGKILL.  The interval between signals is specified
              by the SLURM configuration parameter KillWait.  A time limit  of
              zero  requests  that  no time limit be imposed.  Acceptable time
              formats       include       "minutes",        "minutes:seconds",
              "hours:minutes:seconds",  "days-hours", "days-hours:minutes" and
              "days-hours:minutes:seconds".

       --tasks-per-node=<n>
              Specify the number of tasks to be launched per node.  Equivalent
              to --ntasks-per-node.

       --tmp=<MB>
              Specify a minimum amount of temporary disk space.

       -u, --usage
              Display brief help message and exit.

       --uid=<user>
              Attempt  to  submit  and/or  run  a  job  as user instead of the
              invoking user id. The invoking user’s credentials will  be  used
              to  check access permissions for the target partition. User root
              may use this option to run jobs as a normal user in  a  RootOnly
              partition  for  example.  If  run  as root, sbatch will drop its
              permissions to  the  uid  specified  after  node  allocation  is
              successful. user may be the user name or numerical user ID.

       -V, --version
              Display version information and exit.

       -v, --verbose
              Increase  the  verbosity  of  sbatch’s  informational  messages.
              Multiple -v’s will  further  increase  sbatch’s  verbosity.   By
              default only errors will be displayed.

       -w, --nodelist=<node name list>
              Request  a  specific  list  of  node  names.   The  list  may be
              specified as a comma-separated list of node names, or a range of
              node  names  (e.g.  mynode[1-5,7,...]).  Duplicate node names in
              the list will be ignored.  The order of the node  names  in  the
              list is not important; the node names will be sorted by SLURM.

       --wckey=<wckey>
               Specify wckey to be used with the job.  If TrackWCKey=no (default)
              in the slurm.conf this value is ignored.

       --wrap=<command string>
              Sbatch will wrap the specified command string in a  simple  "sh"
              shell  script,  and  submit that script to the slurm controller.
              When --wrap is used, a script name  and  arguments  may  not  be
              specified  on  the  command  line;  instead the sbatch-generated
              wrapper script is used.
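
               For example, the following submits a two-node job without
               writing a separate script file:
                  sbatch -N2 --wrap="srun hostname"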

       -x, --exclude=<node name list>
              Explicitly exclude certain nodes from the resources  granted  to
              the job.

       The  following options support Blue Gene systems, but may be applicable
       to other systems as well.

       --blrts-image=<path>
              Path to blrts image for bluegene block.  BGL only.  Default from
               bluegene.conf if not set.

       --cnload-image=<path>
              Path  to  compute  node  image  for  bluegene  block.  BGP only.
               Default from bluegene.conf if not set.

       --conn-type=<type>
              Require the partition connection type to be of a  certain  type.
               On Blue Gene the acceptable types are MESH, TORUS and NAV.  If
              NAV, or if not set, then SLURM will try  to  fit  a  TORUS  else
              MESH.   You  should  not  normally  set this option.  SLURM will
              normally allocate a TORUS if possible for a given geometry.   If
               running on a BGP system and wanting to run in HTC mode (only for
               1 midplane and below), you can use HTC_S for SMP, HTC_D for
               Dual, HTC_V for virtual node mode, and HTC_L for Linux mode.

       -g, --geometry=<XxYxZ>
              Specify the geometry requirements for the job. The three numbers
              represent the required geometry giving dimensions in  the  X,  Y
              and  Z  directions.  For example "--geometry=2x3x4", specifies a
              block of nodes having 2 x 3  x  4  =  24  nodes  (actually  base
              partitions on Blue Gene).

       --ioload-image=<path>
              Path  to  io  image for bluegene block.  BGP only.  Default from
               bluegene.conf if not set.

       --linux-image=<path>
              Path to linux image for bluegene block.  BGL only.  Default from
               bluegene.conf if not set.

       --mloader-image=<path>
              Path   to  mloader  image  for  bluegene  block.   Default  from
               bluegene.conf if not set.

       -R, --no-rotate
              Disables rotation of the job’s requested geometry  in  order  to
              fit an appropriate partition.  By default the specified geometry
              can rotate in three dimensions.

       --ramdisk-image=<path>
              Path to ramdisk image for bluegene block.   BGL  only.   Default
               from bluegene.conf if not set.

       --reboot
              Force the allocated nodes to reboot before starting the job.

INPUT ENVIRONMENT VARIABLES

       Upon  startup,  sbatch  will  read  and  handle  the options set in the
       following environment variables.  Note that environment variables  will
       override  any  options  set in a batch script, and command line options
       will override any environment variables.

       SBATCH_ACCOUNT        Same as -A, --account
       SBATCH_ACCTG_FREQ     Same as --acctg-freq
       SLURM_CHECKPOINT      Same as --checkpoint
       SLURM_CHECKPOINT_DIR  Same as --checkpoint-dir
       SBATCH_CONN_TYPE      Same as --conn-type
       SBATCH_CPU_BIND       Same as --cpu_bind
       SBATCH_DEBUG          Same as -v, --verbose
       SBATCH_DISTRIBUTION   Same as -m, --distribution
       SBATCH_EXCLUSIVE      Same as --exclusive
       SLURM_EXIT_ERROR      Specifies the exit code generated  when  a  SLURM
                             error occurs (e.g. invalid options).  This can be
                             used by a script to distinguish application  exit
                             codes from various SLURM error conditions.
       SBATCH_GEOMETRY       Same as -g, --geometry
       SBATCH_IMMEDIATE      Same as -I, --immediate
       SBATCH_JOBID          Same as --jobid
       SBATCH_JOB_NAME       Same as -J, --job-name
       SBATCH_MEM_BIND       Same as --mem_bind
       SBATCH_NETWORK        Same as --network
       SBATCH_NO_REQUEUE     Same as --no-requeue
       SBATCH_NO_ROTATE      Same as -R, --no-rotate
       SBATCH_OPEN_MODE      Same as --open-mode
       SBATCH_OVERCOMMIT     Same as -O, --overcommit
       SBATCH_PARTITION      Same as -p, --partition
       SBATCH_QOS            Same as --qos
       SBATCH_SIGNAL         Same as --signal
       SBATCH_TIMELIMIT      Same as -t, --time
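
        For example, exporting one of these variables before invoking sbatch
        has the same effect as supplying the corresponding command line option
        (the partition name and script name are illustrative):
               $ export SBATCH_PARTITION=debug
               $ sbatch myscript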

OUTPUT ENVIRONMENT VARIABLES

       The   SLURM   controller  will  set  the  following  variables  in  the
       environment of the batch script.
       BASIL_RESERVATION_ID
              The reservation ID on Cray systems running ALPS/BASIL only.
       SLURM_CPU_BIND
              Set to value of the --cpu_bind option.
       SLURM_JOB_ID (and SLURM_JOBID for backwards compatibility)
              The ID of the job allocation.
       SLURM_JOB_CPUS_PER_NODE
              Count of processors available to the job on this node.  Note the
              select/linear  plugin  allocates  entire  nodes  to jobs, so the
              value indicates the total  count  of  CPUs  on  the  node.   The
              select/cons_res  plugin allocates individual processors to jobs,
              so this number indicates the number of processors on  this  node
              allocated to the job.
       SLURM_JOB_DEPENDENCY
              Set to value of the --dependency option.
       SLURM_JOB_NAME
              Name of the job.
       SLURM_JOB_NODELIST (and SLURM_NODELIST for backwards compatibility)
              List of nodes allocated to the job.
       SLURM_JOB_NUM_NODES (and SLURM_NNODES for backwards compatibility)
              Total number of nodes in the job’s resource allocation.
       SLURM_MEM_BIND
              Set to value of the --mem_bind option.
       SLURM_TASKS_PER_NODE
              Number  of  tasks to be initiated on each node. Values are comma
              separated and in the same order as SLURM_NODELIST.   If  two  or
              more  consecutive  nodes  are  to have the same task count, that
              count is followed by "(x#)" where "#" is the  repetition  count.
              For  example,  "SLURM_TASKS_PER_NODE=2(x3),1" indicates that the
               first three nodes will each execute two tasks and the fourth
              node will execute one task.
       MPIRUN_NOALLOCATE
               Do not allocate a block (Blue Gene systems only).
       MPIRUN_NOFREE
               Do not free a block (Blue Gene systems only).
       SLURM_NTASKS_PER_CORE
              Number   of   tasks   requested  per  core.   Only  set  if  the
              --ntasks-per-core option is specified.
       SLURM_NTASKS_PER_NODE
              Number  of  tasks  requested  per  node.   Only   set   if   the
              --ntasks-per-node option is specified.
       SLURM_NTASKS_PER_SOCKET
              Number   of  tasks  requested  per  socket.   Only  set  if  the
              --ntasks-per-socket option is specified.
       SLURM_RESTART_COUNT
              If the job has been restarted due to system failure or has  been
               explicitly requeued, this will be set to the number of times
              the job has been restarted.
       SLURM_SUBMIT_DIR
              The directory from which sbatch was invoked.
       MPIRUN_PARTITION
               The block name (Blue Gene systems only).
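
        For example, a batch script can report on its own allocation using
        these variables (a minimal sketch):
               #!/bin/sh
               echo "Job $SLURM_JOB_ID is running on: $SLURM_JOB_NODELIST"
               srun hostname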

EXAMPLES

       Specify a batch script by filename on  the  command  line.   The  batch
       script specifies a 1 minute time limit for the job.
              $ cat myscript
              #!/bin/sh
              #SBATCH --time=1
              srun hostname |sort

              $ sbatch -N4 myscript
               sbatch: Submitted batch job 65537

              $ cat slurm-65537.out
              host1
              host2
              host3
              host4

       Pass a batch script to sbatch on standard input:
              $ sbatch -N4 <<EOF
              > #!/bin/sh
              > srun hostname |sort
              > EOF
              sbatch: Submitted batch job 65541

              $ cat slurm-65541.out
              host1
              host2
              host3
              host4

COPYING

       Copyright  (C)  2006-2007  The Regents of the University of California.
       Copyright (C) 2008-2009 Lawrence Livermore National Security.  Produced
       at   Lawrence   Livermore   National   Laboratory   (cf,   DISCLAIMER).
       CODE-OCEC-09-009. All rights reserved.
       This file is  part  of  SLURM,  a  resource  management  program.   For
       details, see <https://computing.llnl.gov/linux/slurm/>.
       SLURM  is free software; you can redistribute it and/or modify it under
       the terms of the GNU General Public License as published  by  the  Free
       Software  Foundation;  either  version  2  of  the License, or (at your
       option) any later version.
       SLURM is distributed in the hope that it will be  useful,  but  WITHOUT
       ANY  WARRANTY;  without even the implied warranty of MERCHANTABILITY or
       FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General  Public  License
       for more details.

SEE ALSO

       sinfo(1),  sattach(1),  salloc(1),  squeue(1), scancel(1), scontrol(1),
       slurm.conf(5), sched_setaffinity(2), numa(3)