NAME

       dlm.conf - dlm_controld configuration file

SYNOPSIS

       /etc/dlm/dlm.conf

DESCRIPTION

       The configuration options in dlm.conf mirror the dlm_controld
       command line options.  The config file additionally allows advanced
       fencing and lockspace configuration that is not supported on the
       command line.

Command line equivalents

       If  an  option  is  specified on the command line and in the config file, the command line
       setting overrides the config file  setting.   See  dlm_controld(8)  for  descriptions  and
       dlm_controld -h for defaults.
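
       For example, if dlm.conf sets post_join_delay=10 but dlm_controld
       is started with the corresponding command line option set to 5, the
       effective value is 5.  (See dlm_controld(8) for the exact option
       syntax.)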

       Format:

       setting=value

       Example:

       log_debug=1
       post_join_delay=10
       protocol=tcp

       Options:

       daemon_debug(*)
       log_debug(*)
       protocol
       mark
       debug_logfile(*)
       enable_plock
       plock_debug(*)
       plock_rate_limit(*)
       plock_ownership
       drop_resources_time(*)
       drop_resources_count(*)
       drop_resources_age(*)
       post_join_delay(*)
       enable_fencing
       enable_concurrent_fencing
       enable_startup_fencing
       enable_quorum_fencing(*)
       enable_quorum_lockspace(*)
       repeat_failed_fencing(*)
       enable_helper

       Options with (*) can be reloaded; see Reload config below.

Reload config

       Some dlm.conf settings can be changed while dlm_controld is running
       using dlm_tool reload_config.  Edit dlm.conf, adding, removing,
       commenting out, or changing values, then run dlm_tool reload_config
       to apply the changes in dlm_controld.  dlm_tool dump_config will
       show the new settings.
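
       Example, enabling log_debug at runtime (log_debug is one of the
       reloadable options listed above):

       Edit /etc/dlm/dlm.conf and set:

       log_debug=1

       Then apply and verify the change:

       dlm_tool reload_config
       dlm_tool dump_config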

Fencing

       A fence device definition begins with a device line,  followed  by  a  number  of  connect
       lines, one for each node connected to the device.

       A blank line separates device definitions.

       Devices are used in the order they are listed.

       The device keyword is followed by a unique dev_name, the agent
       program to be used, and args, which are agent arguments specific to
       the device.

       The connect keyword is followed by the dev_name of the device
       section, the node ID of the connected node in the format
       node=nodeid, and args, which are agent arguments specific to the
       node for the given device.

       The format of args is key=val on both device and connect lines, each pair separated  by  a
       space, e.g. key1=val1 key2=val2 key3=val3.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3

       device  bar fence_bar ipaddr=2.2.2.2 login=x password=y
       connect bar node=1 port=1
       connect bar node=2 port=2
       connect bar node=3 port=3
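
       With the example above, fencing a node uses device foo first; bar
       is used only if fencing with foo fails, since devices are tried in
       the order they are listed.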

   Parallel devices
       Some devices, like dual power or dual path, must all be turned off in parallel for fencing
       to succeed.  To define multiple devices as being parallel to each other, use the same base
       dev_name with different suffixes and a colon separator between base name and suffix.

       Format:

       device  dev_name:1 agent [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]
       connect dev_name:1 node=nodeid [args]

       device  dev_name:2 agent [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]
       connect dev_name:2 node=nodeid [args]

       Example:

       device  foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
        connect foo:1 node=1 port=1
        connect foo:1 node=2 port=2
        connect foo:1 node=3 port=3

       device  foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
       connect foo:2 node=1 port=1
       connect foo:2 node=2 port=2
       connect foo:2 node=3 port=3

   Unfencing
       A  node  may  sometimes  need  to  "unfence"  itself when starting.  The unfencing command
       reverses the effect of a previous fencing operation  against  it.   An  example  would  be
       fencing that disables a port on a SAN switch.  A node could use unfencing to re-enable its
       switch port when starting up after rebooting.  (Care must be taken to ensure it's safe for
       a  node  to  unfence  itself.   A node often needs to be cleanly rebooted before unfencing
       itself.)

       To specify that a node should unfence itself for a given device, the unfence line is added
       after the connect lines.

       Format:

       device  dev_name agent [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       connect dev_name node=nodeid [args]
       unfence dev_name

       Example:

       device  foo fence_foo ipaddr=1.1.1.1 login=x password=y
       connect foo node=1 port=1
       connect foo node=2 port=2
       connect foo node=3 port=3
       unfence foo

   Simple devices
       In  some  cases,  a  single  fence  device is used for all nodes, and it requires no node-
       specific args.  This would typically be a "bridge" fence  device  in  which  an  agent  is
       passing a fence request to another subsystem to handle.  (Note that a "node=nodeid" arg is
       always automatically included in agent args, so a node-specific nodeid is  always  present
       to minimally identify the victim.)

       In such a case, a simplified, single-line fence configuration is possible, with format:

       fence_all agent [args]

       Example:

       fence_all dlm_stonith
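
       With this example, fencing node 2 runs the dlm_stonith agent with
       the arg node=2 (the node=nodeid arg being included automatically,
       as noted above).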

       A fence_all configuration is not compatible with a fence device configuration (above).

       Unfencing can optionally be applied with:

       fence_all agent [args]
       unfence_all
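
       Example (using the same hypothetical fence_foo agent as in the
       examples above):

       fence_all fence_foo ipaddr=1.1.1.1 login=x password=y
       unfence_all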

Lockspace configuration

       A lockspace definition begins with a lockspace line, followed by a number of master lines.
       A blank line separates lockspace definitions.

       Format:

       lockspace ls_name [ls_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]
       master    ls_name node=nodeid [node_args]

   Disabling resource directory
       Lockspaces usually use a resource directory to keep track of which node is the  master  of
       each  resource.  The dlm can operate without the resource directory, though, by statically
       assigning the master of a resource using a hash of the resource name.  To enable, set  the
       per-lockspace nodir option to 1.

       Example:

       lockspace foo nodir=1

   Lock-server configuration
       The nodir setting can be combined with node weights to create a configuration where select
       node(s) are the master of all resources/locks.  These master nodes can be viewed as  "lock
       servers" for the other nodes.

       Example of nodeid 1 as master of all resources:

       lockspace foo nodir=1
       master    foo node=1

       Example of nodeids 1 and 2 as masters of all resources:

       lockspace foo nodir=1
       master    foo node=1
       master    foo node=2

       Lock  management will be partitioned among the available masters.  There can be any number
       of  masters  defined.   The  designated  master  nodes  will  master  all  resources/locks
       (according to the resource name hash).  When no masters are members of the lockspace, then
       the nodes revert to the common fully-distributed configuration.  Recovery is faster,  with
       little disruption, when a non-master node joins/leaves.

       There is no special mode in the dlm for this lock server
       configuration; it's just a natural consequence of combining the
       "nodir" option with node weights.  When a lockspace has master
       nodes defined, each master node has a default weight of 1 and all
       non-master nodes have a weight of 0.  An explicit non-zero weight
       can also be assigned to master nodes, e.g.

       lockspace foo nodir=1
       master    foo node=1 weight=2
       master    foo node=2 weight=1

       in which case node 1 will master 2/3 of the total resources and
       node 2 will master the other 1/3 (weights 2 and 1 out of a total
       weight of 3).

   Node configuration
       Node configuration can be set with the node keyword followed by
       key-value pairs.

       Keys:

       mark  The mark key can be used to set a specific mark value, which
       is then used by the in-kernel DLM when creating sockets for the
       node.  This can be used to match DLM-specific packets, e.g. for
       routing.

       Example of setting a per-socket mark value of 42 for nodeid 1:

       node id=1 mark=42

       For the local node this value has no effect.
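
       Such a mark could then be matched outside of dlm_controld, for
       example by a routing rule that sends marked DLM traffic through a
       dedicated routing table (a sketch using ip(8); the table number,
       network, and device are placeholder values):

       ip rule add fwmark 42 table 42
       ip route add 10.0.0.0/24 dev eth1 table 42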

SEE ALSO

       dlm_controld(8), dlm_tool(8)