Provided by: cman_3.1.7-0ubuntu2_amd64

NAME
       dlm_controld - daemon that configures dlm according to cluster events

SYNOPSIS
       dlm_controld [OPTIONS]

DESCRIPTION
       The dlm lives in the kernel, and the cluster infrastructure (corosync membership and group
       management) lives in user space.  The dlm  in  the  kernel  needs  to  adjust/recover  for
       certain  cluster  events.   It's  the  job  of  dlm_controld  to  receive these events and
       reconfigure the kernel dlm as  needed.   dlm_controld  controls  and  configures  the  dlm
       through sysfs and configfs files that are considered dlm-internal interfaces.
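
       For illustration only (these are dlm-internal interfaces and should not be modified by
       hand), on a typical system where configfs is mounted at /sys/kernel/config, the files in
       question appear under:

              /sys/kernel/config/dlm/cluster/   configfs entries written by dlm_controld
              /sys/kernel/dlm/<lockspace>/      sysfs files for each lockspace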

       The cman init script usually starts the dlm_controld daemon.
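
       For example, on systems using the cman init script, starting the cluster stack will
       typically start dlm_controld as well:

              service cman start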

OPTIONS
       Command line options override a corresponding setting in cluster.conf.

       -D     Enable debugging to stderr and don't fork.
              See also dlm_tool dump in dlm_tool(8).

       -L     Enable debugging to log file.
              See also logging in cluster.conf(5).

       -K     Enable kernel dlm debugging messages.
              See also log_debug below.

       -r num dlm  kernel  lowcomms protocol, 0 tcp, 1 sctp, 2 detect.  2 selects tcp if corosync
              rrp_mode is "none", otherwise sctp.
              Default 2.

       -g num groupd compatibility mode, 0 off, 1 on.
              Default 0.

       -f num Enable (1) or disable (0) fencing recovery dependency.
              Default 1.

       -q num Enable (1) or disable (0) quorum recovery dependency.
              Default 0.

       -d num Enable (1) or disable (0) deadlock detection code.
              Default 0.

       -p num Enable (1) or disable (0) plock code for cluster fs.
              Default 1.

       -l num Limit the rate of plock operations, 0 for no limit.
              Default 0.

       -o num Enable (1) or disable (0) plock ownership.
              Default 1.

       -t ms  Plock ownership drop resources time (milliseconds).
              Default 10000.

       -c num Plock ownership drop resources count.
              Default 10.

       -a ms  Plock ownership drop resources age (milliseconds).
              Default 10000.

       -P     Enable plock debugging messages (can produce excessive output).

       -h     Print a help message describing available options, then exit.

       -V     Print program version information, then exit.
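
       For example, to run dlm_controld in the foreground with debugging output and the TCP
       lowcomms protocol forced (an illustrative invocation; adjust the options to suit your
       cluster):

              dlm_controld -D -r 0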

FILES
       cluster.conf(5) is usually located at /etc/cluster/cluster.conf.  It is not read directly.
       Other  cluster  components  load  the  contents  into  memory, and the values are accessed
       through the libccs library.

       Configuration options for dlm (kernel) and dlm_controld are added to the <dlm  />  section
       of cluster.conf, within the top level <cluster> section.
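
       For example (the cluster name and config_version are placeholders, and required sections
       such as clusternodes are omitted for brevity):

              <cluster name="example" config_version="1">
                <dlm protocol="detect" log_debug="0"/>
              </cluster>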

   Kernel options
               The network protocol can be set to tcp, sctp or detect, which selects tcp or
               sctp based on the corosync rrp_mode configuration (redundant ring protocol).
               An rrp_mode of "none" results in tcp.  Default detect.

              <dlm protocol="detect"/>

              After waiting timewarn centiseconds, the dlm will emit a warning via netlink.  This
              only applies to lockspaces created with the DLM_LSFL_TIMEWARN flag, and is used for
              deadlock detection.  Default 500 (5 seconds).

              <dlm timewarn="500"/>

              DLM kernel debug messages can be enabled by setting log_debug to 1.  Default 0.

              <dlm log_debug="0"/>

               The lock directory weight can be specified on the clusternode lines.  Weights
               would usually be used in the lock-server configurations shown below instead.

              <clusternode name="node01" nodeid="1" weight="1"/>

   Daemon options
               See the -f command line description.

              <dlm enable_fencing="1"/>

               See the -q command line description.

              <dlm enable_quorum="0"/>

               See the -d command line description.

              <dlm enable_deadlk="0"/>

               See the -p command line description.

              <dlm enable_plock="1"/>

               See the -l command line description.

              <dlm plock_rate_limit="0"/>

               See the -o command line description.

              <dlm plock_ownership="1"/>

               See the -t command line description.

              <dlm drop_resources_time="10000"/>

               See the -c command line description.

              <dlm drop_resources_count="10"/>

               See the -a command line description.

              <dlm drop_resources_age="10000"/>

              Enable (1) or disable (0) plock debugging messages (can produce excessive  output).
              Default 0.

              <dlm plock_debug="0"/>
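
               The separate snippets above would normally be combined into a single dlm
               element in cluster.conf, for example (the values shown here are simply the
               defaults listed above):

               <dlm enable_fencing="1" enable_plock="1" plock_ownership="1"
                    drop_resources_time="10000" drop_resources_count="10"
                    drop_resources_age="10000"/>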

   Disabling resource directory
       Lockspaces  usually  use a resource directory to keep track of which node is the master of
       each resource.  The dlm can operate without the resource directory, though, by  statically
       assigning  the master of a resource using a hash of the resource name.  To enable, set the
       per-lockspace nodir option to 1.

         <lockspace name="foo" nodir="1"/>

   Lock-server configuration
       The nodir setting can be combined with node weights to create a configuration where one
       or more selected nodes master all resources/locks.  These master nodes can be viewed as
       "lock servers" for the other nodes.

          <lockspace name="foo" nodir="1">
            <master name="node01"/>
          </lockspace>


          <lockspace name="foo" nodir="1">
            <master name="node01"/>
            <master name="node02"/>
          </lockspace>

       Lock management will be partitioned among the available masters.  There can be any  number
       of  masters  defined.   The  designated  master  nodes  will  master  all  resources/locks
       (according to the resource name hash).  When no masters are members of the lockspace, then
       the  nodes revert to the common fully-distributed configuration.  Recovery is faster, with
       little disruption, when a non-master node joins/leaves.

       There is no special mode in the dlm for this lock-server configuration; it is simply a
       natural consequence of combining the "nodir" option with node weights.  When a lockspace
       has master nodes defined, each master has a default weight of 1 and all non-master nodes
       have a weight of 0.  An explicit non-zero weight can also be assigned to master nodes,
       e.g.

          <lockspace name="foo" nodir="1">
            <master name="node01" weight="2"/>
            <master name="node02" weight="1"/>
          </lockspace>

       In this case node01 will master 2/3 of the total resources and node02 will master the
       other 1/3.

SEE ALSO
       dlm_tool(8), fenced(8), cman(5), cluster.conf(5)