Red Hat Cluster Suite: Configuring and Managing a Cluster
Appendix F. Cluster Command-line Utilities
If power switches are used in the cluster hardware configuration, run the clufence utility on each cluster system to ensure that it can remotely power-cycle the other cluster members.
If the command succeeds, run the shutil -p command on both cluster systems to display a summary of the header data structure for the quorum partitions. If the output differs between the systems, the quorum partitions do not point to the same devices on both systems. Check that the raw devices exist and are correctly specified in the /etc/sysconfig/rawdevices file. See the section Configuring Shared Cluster Partitions for more information.
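The comparison above can be sketched as a small shell helper. The file paths, member names, and the headers_match function are illustrative assumptions, not part of the cluster suite:

```shell
# Illustrative sketch: capture each member's 'shutil -p' output to a file,
# copy the files to one node, and diff them. The paths and the
# headers_match helper are hypothetical.
headers_match() {
    # Exit status 0 when the two captured summaries are identical.
    diff -u "$1" "$2"
}

# On each member:   shutil -p > /tmp/quorum-$(hostname).txt
# Then, on one node:
#   headers_match /tmp/quorum-clu2.txt /tmp/quorum-clu3.txt \
#       && echo "quorum headers match" \
#       || echo "headers differ: check /etc/sysconfig/rawdevices"
```

A nonzero exit status from the helper means the header summaries disagree and the raw device configuration should be rechecked.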
If either network- or serial-attached power switches are employed in the cluster hardware configuration, install the cluster software and invoke the clufence command to test the power switches. Invoke the command on each cluster system to ensure that it can remotely power-cycle the other cluster members. If testing is successful, then the cluster can be started.
The clufence command can accurately test a power switch. The format of the clufence command is as follows:
usage: clufence [-d] [-[furs] <member>]
  -d           Turn on debugging
  -f <member>  Fence (power off) <member>
  -u <member>  Unfence (power on) <member>
  -r <member>  Reboot (power cycle) <member>
  -s <member>  Check status of all switches controlling <member>
When testing power switches, the first step is to ensure that each cluster member can successfully communicate with its attached power switch. The following example of the clufence command output shows that the cluster member is able to communicate with its power switch:
info: STONITH: baytech at 192.168.1.31, port 1 controls clu2
info: STONITH: baytech at 192.168.1.31, port 2 controls clu3
info: STONITH: wti_nps at 192.168.1.29, port clu4 controls clu4
info: STONITH: wti_nps at 192.168.1.29, port clu5 controls clu5
Errors in the output can indicate the following types of problems:
For serial-attached power switches:
Verify that the device special file for the serial port connected to the remote power switch (for example, /dev/ttyS0) is specified correctly in the cluster database, as established via the Cluster Configuration Tool. If necessary, use a terminal emulation package such as minicom to test whether the cluster system can access the serial port.
Ensure that a non-cluster program (for example, a getty program) is not using the serial port for the remote power switch connection. You can use the lsof command to perform this task.
Check that the cable connection to the remote power switch is correct. Verify that the correct type of cable is used (for example, an RPS-10 power switch requires a null modem cable), and that all connections are securely fastened.
Verify that any physical dip switches or rotary switches on the power switch are set properly.
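The serial-port checks above can be combined into a small pre-flight sketch. The serial_port_check helper is an illustrative assumption, and /dev/ttyS0 is only an example path:

```shell
# Hypothetical helper: verify that a serial port exists as a character
# device and is not already held by another program (a getty, for example).
serial_port_check() {
    port=$1
    if [ ! -c "$port" ]; then
        echo "missing: $port is not a character device"
    elif command -v lsof >/dev/null && lsof "$port" >/dev/null 2>&1; then
        echo "busy: $port is in use (run 'lsof $port' to see the process)"
    else
        echo "free: $port appears unused"
    fi
}

serial_port_check /dev/ttyS0
```

A "busy" result typically means a getty or similar program must be disabled on that port before the power switch can use it.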
For network-based power switches:
Verify that the network connection to network-based switches is operational. Most switches have a link light that indicates connectivity.
It should be possible to ping the network switch; if not, then the switch may not be properly configured for its network parameters.
Verify that the correct password and login name (depending on switch type) have been specified in the cluster configuration database (as established by running Cluster Configuration Tool). A useful diagnostic approach is to verify Telnet access to the network switch using the same parameters as specified in the cluster configuration.
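A minimal reachability probe for the network checks above might look like the following sketch. The switch_reachable helper is an illustrative assumption, and the IP address is the example address from the clufence output shown earlier:

```shell
# Hypothetical probe: one ICMP echo with a short timeout tells whether the
# network power switch answers at all before deeper Telnet-level checks.
switch_reachable() {
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
        echo "reachable: $1"
    else
        echo "unreachable: $1 (check the switch's network settings)"
    fi
}

switch_reachable 192.168.1.31
```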
After successfully verifying communication with the switch, attempt to power-cycle the other cluster member. Before doing so, verify that the other member is not actively performing any important functions (such as serving cluster services to active clients). Then execute the following command:
clufence -r clu3
The following depicts a successful power cycle operation:
Successfully power cycled host clu3.
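As a precaution, the status check and the power cycle can be combined in a small wrapper. This is an illustrative sketch, not part of the suite; it assumes clufence is on the PATH and exits nonzero when the -s status check fails:

```shell
# Hypothetical wrapper: refuse to power-cycle a member unless the switch
# status check succeeds first. Member names are examples (e.g. clu3).
safe_power_cycle() {
    member=$1
    if clufence -s "$member"; then
        clufence -r "$member"
    else
        echo "refusing to power cycle $member: switch status check failed" >&2
        return 1
    fi
}
```

Used as `safe_power_cycle clu3`, the wrapper only reboots the member when its controlling switches report healthy status.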