
NAME

       lock_gulm - configuration section for lock_gulmd

DESCRIPTION

       This is the subsection of the cluster section in the cluster.conf
       file for configuring gulm (which is both the server, lock_gulmd,
       and the kernel module, lock_gulm.o).

       Most configurations need only the lockserver entries.  All other
       keys are optional, as the defaults work for nearly all cases.

       All gulm options go within the <gulm></gulm> section, which should
       be placed above the <clusternodes/> section.

       Most of the config keys are equal to the long options on the
       lock_gulmd command line.
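
       For instance, a value from cluster.conf can usually also be given
       as a long option when starting the daemon by hand.  The option
       names below are assumed from the key names; see lock_gulmd(8) for
       the authoritative list:

         lock_gulmd --heartbeat_rate 20 --allowed_misses 3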

       lockserver
              An IP or host name that is allowed to be a server.  There
              is no default value.  You must supply 1, 3, 4, or 5 of
              these entries, and the named nodes must also be listed in
              the <clusternodes/> section (see EXAMPLES below).

              You can use either IPs or names, and either IPv4 or IPv6
              addresses.  Node names are resolved to IPs via libresolv.
              IPv6 addresses are preferred over IPv4 addresses.

       heartbeat_rate
              The interval, in seconds, at which the server checks
              heartbeats.  Heartbeats are sent at two-thirds of this
              interval.  Default is 15.

       allowed_misses
              How many consecutive heartbeats a node can miss before it
              is marked expired.  Default is 2.
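
              With the defaults (heartbeat_rate of 15 and allowed_misses
              of 2), heartbeats are sent every 10 seconds, and a silent
              node is declared expired after roughly 30 to 45 seconds,
              depending on how its last heartbeat aligns with the
              server's checks.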

       coreport
              The port used by the gulm core.  Default is 40040.

       ltpx_port
              The port used by the LTPX (the lock table proxy).  Default
              is 40042.

       lt_port
              The port used by the first lock table.  When lt_partitions
              is greater than 1, each additional lock table increments
              this value to get its port.  Default is 41040.
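
              For example, with the default lt_port of 41040 and
              lt_partitions set to 4, the four lock tables listen on
              ports 41040, 41041, 41042, and 41043.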

       lt_partitions
              How many partitions the lock space should be divided into.
              One partition per CPU on the server nodes typically works
              best.  Default is 1.

EXAMPLES


       Using IPv4 addresses:

         <cluster name="example" config_version="1">

           <gulm>
             <lockserver name="192.168.1.1"/>
             <lockserver name="192.168.1.2"/>
             <lockserver name="192.168.1.3"/>
           </gulm>

           <!-- other required sections covered elsewhere -->

         </cluster>

       Using IPv6 addresses (while still on an IPv4 network):

         <cluster name="example" config_version="1">

           <gulm>
             <lockserver name="::ffff:192.168.1.1"/>
             <lockserver name="::ffff:192.168.1.2"/>
             <lockserver name="::ffff:192.168.1.3"/>
           </gulm>

           <!-- other required sections covered elsewhere -->

         </cluster>

       Using node names, which are looked up with libresolv to get IPs:

         <cluster name="example" config_version="1">

           <gulm>
             <lockserver name="node01"/>
             <lockserver name="node02"/>
             <lockserver name="node03"/>
           </gulm>

           <!-- other required sections covered elsewhere -->

         </cluster>
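
       Tuning the optional keys.  This sketch assumes the tuning keys are
       written as attributes of the <gulm> element (by analogy with the
       name attribute of lockserver); verify the exact form against your
       ccs schema:

         <cluster name="example" config_version="1">

           <gulm heartbeat_rate="20" allowed_misses="3" lt_partitions="2">
             <lockserver name="node01"/>
             <lockserver name="node02"/>
             <lockserver name="node03"/>
           </gulm>

           <!-- other required sections covered elsewhere -->

         </cluster>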

SEE ALSO

       lock_gulmd(8), ccs(7)

                                                                 lock_gulmd(5)