2.2. Cluster Hardware Components

Use the tables in this section to identify the hardware components required for your cluster configuration.

Hardware: Cluster nodes
Quantity: 16 (maximum supported)
Description: Each node must provide enough PCI slots, network slots, and storage adapters for the cluster hardware configuration. Because attached storage devices must have the same device special file on each node, it is recommended that the nodes have symmetric I/O subsystems. It is also recommended that the processor speed and amount of system memory be adequate for the processes run on the cluster nodes. Refer to Section 2.3.1 Installing the Basic Cluster Hardware for more information.
Required: Yes

Table 2-4. Cluster Node Hardware
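The symmetric-I/O recommendation above can be spot-checked before the cluster software is installed by comparing the device special files each node reports. The sketch below is a minimal illustration: the hostnames and device listings are hypothetical stand-ins inlined as strings; on a live cluster, each listing would be captured with a remote shell, for example `ssh node1 'ls /dev/sd?'`.

```shell
# Sketch: verify that two nodes expose shared storage under the same
# device special files. The listings below are hypothetical stand-ins;
# on real nodes, capture them remotely, e.g.:  ssh node1 'ls /dev/sd?'
node1_devs="/dev/sda /dev/sdb /dev/sdc"
node2_devs="/dev/sda /dev/sdb /dev/sdc"

if [ "$node1_devs" = "$node2_devs" ]; then
    echo "I/O subsystems look symmetric"
else
    echo "WARNING: device special files differ between nodes"
fi
```

If the listings differ, reorder or re-cable the storage adapters until both nodes enumerate the shared devices identically.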

Table 2-5 includes several different types of fence devices.

A single cluster requires only one type of power switch.

Type: Network-attached power switches
Description: Remote (LAN, Internet) fencing using RJ45 Ethernet connections and remote terminal access to the device.
Models: APC MasterSwitch 92xx/96xx; WTI NPS-115/NPS-230, IPS-15, IPS-800/IPS-800-CE, and TPS-2

Type: Fabric switches
Description: Fence control interface integrated into several models of fabric switches used for Storage Area Networks (SANs). Used as a way to fence a failed node from accessing shared data.
Models: Brocade Silkworm 2x00, McData Sphereon, Vixel 9200

Type: Integrated power management interfaces
Description: Remote power management features in various brands of server systems; can be used as a fencing agent in cluster systems.
Models: HP Integrated Lights-Out (iLO), IBM BladeCenter with firmware dated 7-22-04 or later

Table 2-5. Fence Devices
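For illustration, later Red Hat cluster stacks declare a network-attached power switch of this kind as a fence device in the cluster configuration file. The fragment below is a hypothetical sketch only: the device name, IP address, and credentials are invented placeholders, and attribute names vary across agent versions.

```xml
<!-- Hypothetical fragment: an APC MasterSwitch declared as a fence device.
     The name, ipaddr, login, and passwd values are placeholders. -->
<fencedevices>
  <fencedevice name="apc-switch-1" agent="fence_apc"
               ipaddr="10.0.0.50" login="apc" passwd="apc"/>
</fencedevices>
```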

Table 2-6 through Table 2-9 show a variety of hardware components for an administrator to choose from. An individual cluster does not require all of the components listed in these tables.

Hardware: Network interface
Quantity: One for each network connection
Description: Each network connection requires a network interface installed in a node.
Required: Yes

Hardware: Network switch or hub
Quantity: One
Description: A network switch or hub allows connection of multiple nodes to a network.
Required: Yes

Hardware: Network cable
Quantity: One for each network interface
Description: A conventional network cable, such as a cable with an RJ45 connector, connects each network interface to a network switch or a network hub.
Required: Yes

Table 2-6. Network Hardware Table

Hardware: Host bus adapter
Quantity: One per node
Description: To connect to shared disk storage, install either a parallel SCSI or a Fibre Channel host bus adapter in a PCI slot in each cluster node. For parallel SCSI, use a low voltage differential (LVD) host bus adapter. Adapters have either HD68 or VHDCI connectors.
Required: Yes

Hardware: External disk storage enclosure
Quantity: At least one
Description: Use Fibre Channel or single-initiator parallel SCSI to connect the cluster nodes to a single- or dual-controller RAID array. To use single-initiator buses, a RAID controller must have multiple host ports and provide simultaneous access to all the logical units on the host ports. To use a dual-controller RAID array, a logical unit must fail over from one controller to the other in a way that is transparent to the operating system. SCSI RAID arrays that provide simultaneous access to all logical units on the host ports are recommended. To ensure symmetry of device IDs and LUNs, many RAID arrays with dual redundant controllers must be configured in an active/passive mode. Refer to Appendix A Supplementary Hardware Information for more information.
Required: Yes

Hardware: SCSI cable
Quantity: One per node
Description: SCSI cables with 68 pins connect each host bus adapter to a storage enclosure port. Cables have either HD68 or VHDCI connectors, depending on the adapter type.
Required: Only for parallel SCSI configurations

Hardware: SCSI terminator
Quantity: As required by hardware configuration
Description: For a RAID storage enclosure that uses "out" ports (such as the FlashDisk RAID Disk Array) and is connected to single-initiator SCSI buses, connect terminators to the "out" ports to terminate the buses.
Required: Only for parallel SCSI configurations, and only as necessary for termination

Hardware: Fibre Channel hub or switch
Quantity: One or two
Description: A Fibre Channel hub or switch may be required, depending on the configuration.
Required: Only for some Fibre Channel configurations

Hardware: Fibre Channel cable
Quantity: As required by hardware configuration
Description: A Fibre Channel cable connects a host bus adapter to a storage enclosure port, a Fibre Channel hub, or a Fibre Channel switch. If a hub or switch is used, additional cables are needed to connect the hub or switch to the storage adapter ports.
Required: Only for Fibre Channel configurations

Table 2-7. Shared Disk Storage Hardware Table
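After cabling the shared storage, it is worth confirming that every logical unit is visible from each node. The kernel's SCSI inventory in /proc/scsi/scsi lists one "Host:" line per attached unit. The sketch below counts them; the listing is a captured stand-in inlined as a string (the vendor and model values are illustrative), whereas on a live node the file itself would be read.

```shell
# Sketch: count the logical units the SCSI layer reports. The listing
# below is an inlined stand-in; on a live node use:  cat /proc/scsi/scsi
scsi_listing='Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: WINSYS   Model: FlashDisk        Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: WINSYS   Model: FlashDisk        Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 03'

echo "$scsi_listing" | grep -c '^Host:'
```

Running the same count on every node and comparing the results is a quick way to catch a missing LUN or an asymmetric controller configuration.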

Hardware: UPS system
Quantity: One or more
Description: Uninterruptible power supply (UPS) systems protect against downtime if a power outage occurs. UPS systems are highly recommended for cluster operation. Connect the power cables for the shared storage enclosure and both power switches to redundant UPS systems. Note that a UPS system must be able to provide voltage for an adequate period of time, and should be connected to its own power circuit.
Required: Strongly recommended for availability

Table 2-8. UPS System Hardware Table

Hardware: Terminal server
Quantity: One
Description: A terminal server enables you to manage many nodes remotely.
Required: No

Hardware: KVM switch
Quantity: One
Description: A KVM switch enables multiple nodes to share one keyboard, monitor, and mouse. Cables for connecting nodes to the switch depend on the type of KVM switch.
Required: No

Table 2-9. Console Switch Hardware Table