Provided by: pacemaker_2.1.8-1ubuntu1_amd64

NAME

       pacemaker-controld - Pacemaker controller options

SYNOPSIS

       [dc-version=string] [cluster-infrastructure=string] [cluster-name=string]
       [dc-deadtime=time] [cluster-recheck-interval=time] [fence-reaction=select]
       [election-timeout=time] [shutdown-escalation=time] [join-integration-timeout=time]
       [join-finalization-timeout=time] [transition-delay=time] [stonith-watchdog-timeout=time]
       [stonith-max-attempts=integer] [load-threshold=percentage] [node-action-limit=integer]

DESCRIPTION

       Cluster options used by Pacemaker's controller

SUPPORTED PARAMETERS

       dc-version = string
            Pacemaker version on the cluster node elected as Designated Controller (DC)

           Includes a hash which identifies the exact revision the code was built from. Used for
           diagnostic purposes.

       cluster-infrastructure = string
           The messaging layer on which Pacemaker is currently running

           Used for informational and diagnostic purposes.

       cluster-name = string
           An arbitrary name for the cluster

            This optional value is mostly for administrators' convenience, but it may also be
            used in Pacemaker configuration rules via the #cluster-name node attribute, and by
            higher-level tools and resource agents.

       dc-deadtime = time [20s]
           How long to wait for a response from other nodes during start-up

           The optimal value will depend on the speed and load of your network and the type of
           switches used.

       cluster-recheck-interval = time [15min]
           Polling interval to recheck cluster state and evaluate rules with date specifications

           Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster
           state for failure-timeout settings and most time-based rules. However, it will also
           recheck the cluster after this amount of inactivity, to evaluate rules with date
           specifications and serve as a fail-safe for certain types of scheduler bugs. A value
           of 0 disables polling. A positive value sets an interval in seconds, unless other
           units are specified (for example, "5min").
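            As a rough sketch (not Pacemaker's parsing code), the unit handling described
            above can be illustrated in shell; the helper name is hypothetical:

```shell
# Hypothetical helper illustrating the documented unit handling:
# a bare positive number is taken as seconds, while suffixed values
# such as "5min" or "2h" are scaled accordingly.
to_seconds() {
  case "$1" in
    *min) echo $(( ${1%min} * 60 )) ;;
    *h)   echo $(( ${1%h} * 3600 )) ;;
    *s)   echo $(( ${1%s} )) ;;
    *)    echo "$1" ;;   # bare value: interpreted as seconds
  esac
}

to_seconds 5min   # -> 300
to_seconds 90     # -> 90
```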

       fence-reaction = select [stop]
           How a cluster node should react if notified of its own fencing

           A cluster node may receive notification of a "succeeded" fencing that targeted it if
           fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster
           communication. Use "stop" to attempt to immediately stop Pacemaker and stay stopped,
           or "panic" to attempt to immediately reboot the local node, falling back to stop on
           failure. Allowed values: stop, panic

       election-timeout = time [2min]
           *** Advanced Use Only ***

           Declare an election failed if it is not decided within this much time. If you need to
           adjust this value, it probably indicates the presence of a bug.

       shutdown-escalation = time [20min]
           *** Advanced Use Only ***

           Exit immediately if shutdown does not complete within this much time. If you need to
           adjust this value, it probably indicates the presence of a bug.

       join-integration-timeout = time [3min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the presence of a bug.

       join-finalization-timeout = time [30min]
           *** Advanced Use Only ***

           If you need to adjust this value, it probably indicates the presence of a bug.

       transition-delay = time [0s]
           *** Advanced Use Only *** Enabling this option will slow down cluster recovery under
           all conditions

           Delay cluster recovery for this much time to allow for additional events to occur.
           Useful if your configuration is sensitive to the order in which ping updates arrive.

       stonith-watchdog-timeout = time [0]
           How long before nodes can be assumed to be safely down when watchdog-based
           self-fencing via SBD is in use

           If this is set to a positive value, lost nodes are assumed to achieve self-fencing
           using watchdog-based SBD within this much time. This does not require a fencing
            resource to be explicitly configured, though a fence_watchdog resource can be
            configured to limit use to specific nodes. If this is set to 0 (the default), the
           cluster will never assume watchdog-based self-fencing. If this is set to a negative
           value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT`
           environment variable if that is positive, or otherwise treat this as 0. WARNING: When
           used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use
           watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where
           this is not true for the local value or SBD is not active. When this is set to a
           negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that
           use SBD, otherwise data corruption or loss could occur.
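            The negative-value rule above can be sketched in shell (this mirrors the
            description only; it is not Pacemaker's actual code):

```shell
# Hypothetical sketch of the documented resolution rule: a negative
# stonith-watchdog-timeout resolves to twice SBD_WATCHDOG_TIMEOUT when
# that environment variable is a positive number, and to 0 otherwise.
resolve_swt() {
  if [ "$1" -ge 0 ]; then
    echo "$1"                                  # non-negative: use as-is
  elif [ "${SBD_WATCHDOG_TIMEOUT:-0}" -gt 0 ]; then
    echo $(( 2 * SBD_WATCHDOG_TIMEOUT ))      # negative: twice the SBD timeout
  else
    echo 0                                     # no usable SBD timeout: treat as 0
  fi
}

SBD_WATCHDOG_TIMEOUT=5
resolve_swt -1   # -> 10
```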

       stonith-max-attempts = integer [10]
           How many times fencing can fail before it will no longer be immediately re-attempted
           on a target

       load-threshold = percentage [80%]
           Maximum amount of system load that should be used by cluster nodes

            The cluster will slow down its recovery process when the amount of system resources
            used (currently CPU) approaches this limit.

       node-action-limit = integer [0]
           Maximum number of jobs that can be scheduled per node (defaults to 2x cores)
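            The documented default can be illustrated with a small shell sketch (the helper
            and its arguments are hypothetical, not a Pacemaker interface):

```shell
# Hypothetical illustration: with node-action-limit left at 0, the
# effective per-node job limit is twice the node's CPU core count.
effective_limit() {
  if [ "$1" -gt 0 ]; then
    echo "$1"            # explicit positive limit wins
  else
    echo $(( 2 * $2 ))   # default: 2 x cores
  fi
}

effective_limit 0 8    # -> 16
effective_limit 5 8    # -> 5
```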

AUTHOR

       Andrew Beekhof <andrew@beekhof.net>
           Author.