lock_gulmd - Grand Unified Lock Manager
lock_gulmd --cluster_name <string> [options]
lock_gulmd is a lock manager for GFS that was designed to take
advantage of the way GFS uses locks, and the way data is transferred.
lock_gulmd supports failover, so your GFS cluster can keep running
if the lockserver machine dies (or if one machine of a lockserver
cluster dies). You can also run lock_gulmd on the same nodes that
mount your GFS filesystem(s).
lock_gulmd is really three servers in one: It contains the core,
locktable interface, and locktable servers. Each of these gets its own
process, and the locktable server may get more than one process
depending on your config. Core is responsible for client membership
and heartbeats. Locktable and locktable interface handle the locking.
Multiple locktable processes can be run to improve performance on SMP
systems via the lt_partitions option in the configuration.
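As a rough illustration: with the default of one lock table, a node runs a core process, a locktable process, and a locktable interface process. A process listing might look something like the following (the lock_gulmd_* process names are illustrative, not guaranteed; the LT numbering follows the lt_partitions count):
ps -e | grep lock_gulmd
lock_gulmd_core
lock_gulmd_LT000
lock_gulmd_LTPX
With lt_partitions set to 3, you would see LT000, LT001, and LT002.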
You can completely configure gulm from the command line. If you do
this, you need to use the same options on every node in the cluster.
You should always specify the --cluster_name option, and then either
the --use_ccs or the --servers option. Other options should follow these.
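For example, a purely command-line configuration might look like this (node names are hypothetical; every node in the cluster must be started with the same line):
lock_gulmd --cluster_name foo --servers node1,node2,node3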
-h --help
Print usage information, then exit.
-V --version
Print version information, then exit.
-c --use_ccs
Make calls out to ccsd to load the config from cluster.conf. This
will override any previous options; likewise, options that follow
this one will override settings from ccs.
-v --verbosity verbose flags
Sets which types of messages can be logged.
verbose flags is a comma separated list of the possible flags.
If a flag is prefixed with a '-', it is unset; otherwise it is
set. The special flag 'clear' unsets all verbosity flags. Any
flag that is not recognized is ignored.
The verbosity flags for gulm:
Network      Basic network related messages
Network2     More specific network messages
Network3     Nothing currently
Fencing      When calling out to the fencing subsystem
Heartbeat    Every heartbeat sent and received
Locking      Various internal informational messages about the locks
Forking      Any time a child process is spawned
ServerState  Print out a message whenever the server changes state, saying what state it is now in
JIDMap       Details of each JID Mapping request
JIDUpdates   JID Mapping updates sent to slaves
LockUpdates  Lock requests sent to slaves
LoginLoops   Messages related to searching for and becoming the Master
ReallyAll    All of the messages above
Default      same as -v "Network,Fencing,Forking"
All          same as -v
The verbose flags can be changed while lock_gulmd is running.
Default is Default. (witty, no?)
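For example, to keep the default set but drop the Forking messages and add heartbeat tracing (a sketch using the set/unset semantics described above):
lock_gulmd -n foo -c -v "-Forking,Heartbeat"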
-s --servers server list
Comma separated list of nodes that can be master servers. No default.
You can use either IPs or names, and either IPv4 or IPv6 addresses.
Node names are looked up via libresolv into IPs. IPv6 addresses
will be used over IPv4 addresses.
-n --cluster_name string
The name of this cluster. No default.
Number of seconds to wait before checking for missed heartbeats.
2/3 of this value is the rate at which nodes send heartbeats to
the master server. You can specify this as a floating point
number to get sub-second times.
Use sub-second values at your own risk, as varying network loads
can cause false node expirations.
Default is 15.
How many heartbeats can be missed before the node is considered
to have expired. Default is 2.
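To make the timing concrete with the defaults above: nodes send a heartbeat every 2/3 x 15 = 10 seconds, and the master checks for missed heartbeats every 15 seconds. With 2 allowed misses, a node is expired on its third consecutive missed check, so a dead node is typically detected within roughly 30 to 45 seconds of its last heartbeat.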
How many seconds to wait before deciding a new socket is bogus
and dropping it. Can use floating point for sub-second values.
Default is 15.
How many seconds between each probe for a new master server.
Can use floating point for sub-second values. Default is 1.
The port that the core server listens on and connects to.
Default is 40040.
The port that the ltpx server listens on and that lock clients
connect to. Default is 40042.
The port that the LT server listens on, and that LT and LTPX clients
connect to. If you have multiple lt_partitions, the LT's id is
added to this to get its port. (Using the default, LT000 is at
41040, LT001 is at 41041, etc.) Default is 41040.
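For example, with the default ports and lt_partitions set to 3, the full set of gulm ports on a lockserver would be (a worked layout following the rules above):
40040  core
40042  ltpx
41040  LT000
41041  LT001
41042  LT002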
The name of the program that handles fencing nodes for gulm.
This needs to be a full path.
The program takes a single argument, the name of the node to be
fenced. If the program returns an exit status of 0, then the
fencing was successful. Otherwise gulm waits 5 seconds, and
calls it again.
Default is /sbin/fence_node
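Because the interface is just one argument and an exit status, a custom fence program can be a small wrapper script. A minimal sketch (the log path is hypothetical, and the real fencing action must be supplied):
#!/bin/sh
# Fence the node named in $1; gulm retries every 5 seconds until we exit 0.
node="$1"
echo "$(date): fencing $node" >> /var/log/my_fence.log
# Replace this with a real fencing action; the stock helper is the default:
/sbin/fence_node "$node"
Give gulm the full path to the script so it is called in place of /sbin/fence_node.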
User to switch into and run as. Default is root. (which is not recommended)
The directory to place and store the pid lock files. Default is
Number of Lock Tables to run. If more than one, there will be
multiple LTs, and the LTPXes will stripe the locks across the
LTs. This is for performance on servers with multiple CPUs.
Default is 1.
--name string
Set the name of this gulm server to string instead of the
default. Default is the output of `uname -n`.
--ip ip addr
Set the IP of this gulm server to ip addr instead of trying to
resolve it from the name. Default is to resolve this gulm
server's name into an IP.
--ifdev network device name
Use the IP that is configured for this network device name. If
there is an IPv6 address, that is used. Otherwise the IPv4
address will be used. No default.
When switching into daemon mode, leave stderr and stdout open.
Do not daemonize. (Each server will still be forked.)
Load all config items (command line arguments and ccs data),
print the configuration as we see it, and exit.
This signal is ignored. To stop gulm, use the shutdown command
of gulm_tool.
Dump out internal tables for debugging. This creates a bunch of
files in /tmp (or whatever you have TMPDIR set to). All of
these start with the prefix Gulm_ and will be appended to if the
file already exists.
Much of the information in these dump files is available via
gulm_tool, and gulm_tool is the preferred method of getting this
information; the action of dumping these tables stops all
other activity and thus can have negative effects on the
performance of lock_gulmd. You should not send this signal
unless you really want those dump files and know what to do with them.
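For example, much of the same information can be read from a running server with gulm_tool; the getstats subcommand and node:service addressing shown here are assumptions, so check gulm_tool(8) for the exact syntax:
gulm_tool getstats localhost:core
gulm_tool getstats localhost:ltpx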
Getting gulm to use a private network is a matter of not relying on IP
lookups. Specify all of the <lockserver/> entries with IP addresses.
Then, when starting lock_gulmd, make use of either the --ip option or
the --ifdev option.
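A minimal sketch of such a setup (the surrounding cluster.conf XML and the private addresses are assumptions; only the <lockserver/> element name is taken from above):
<gulm>
<lockserver name="10.0.0.1"/>
<lockserver name="10.0.0.2"/>
<lockserver name="10.0.0.3"/>
</gulm>
Then, on the node whose private address is 10.0.0.1:
lock_gulmd --cluster_name foo --use_ccs --ip 10.0.0.1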
When using ccs to configure gulm, start the daemon with:
lock_gulmd --cluster_name foo --use_ccs
or, equivalently, using the short options:
lock_gulmd -n foo -c
lock_gulmd can be run without CCS:
lock_gulmd -n foo -s 192.168.1.1,192.168.1.2,192.168.1.3
This adds the following two verbose flags to the default set:
lock_gulmd -n foo -c -v "Heartbeat,Locking"
Show only the Network messages:
lock_gulmd -n foo -c -v "clear,Network"
Use the IP on the second ethernet device, and call this server bar:
lock_gulmd -n foo -c --name bar --ifdev eth1
Stopping the server:
gulm_tool shutdown localhost
These are the pid lock files that keep more than one instance of
the servers from running per node. They can be put elsewhere via a
command line option.
gulm_tool(8), lock_gulmd(5), ccs(7)