3.7. Adding and Deleting Members

The procedure to add a member to a cluster varies depending on whether the cluster is a newly-configured cluster or a cluster that is already configured and running. To add a member to a new cluster, refer to Section 3.7.1 Adding a Member to a Cluster. To add a member to an existing cluster, refer to Section 3.7.2 Adding a Member to a Running Cluster. To delete a member from a cluster, refer to Section 3.7.3 Deleting a Member from a Cluster.

3.7.1. Adding a Member to a Cluster

To add a member to a new cluster, follow these steps:

  1. Click Cluster Node.

  2. At the bottom of the right frame (labeled Properties), click the Add a Cluster Node button. Clicking that button causes a Node Properties dialog box to be displayed. For a DLM cluster, the Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes (refer to Figure 3-7). For a GULM cluster, the Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes, and presents a checkbox for GULM Lockserver (refer to Figure 3-8).

    Figure 3-7. Adding a Member to a New DLM Cluster

    Figure 3-8. Adding a Member to a New GULM Cluster

  3. At the Cluster Node Name text box, specify a node name. The entry can be a name or an IP address of the node on the cluster subnet.

    Note

    Each node must be on the same subnet as the node from which you are running the Cluster Configuration Tool and must be defined either in DNS or in the /etc/hosts file of each cluster node (an example /etc/hosts entry is sketched after this procedure).

    Note

    The node on which you are running the Cluster Configuration Tool must be explicitly added as a cluster member; the node is not automatically added to the cluster configuration as a result of running the Cluster Configuration Tool.

  4. Optionally, at the Quorum Votes text box, you can specify a value; however, in most configurations you can leave it blank. Leaving the Quorum Votes text box blank causes the quorum votes value for that node to be set to the default value of 1.

  5. If the cluster is a GULM cluster and you want this node to be a GULM lock server, click the GULM Lockserver checkbox (marking it as checked).

  6. Click OK.

  7. Configure fencing for the node:

    1. Click the node that you added in the previous step.

    2. At the bottom of the right frame (below Properties), click Manage Fencing For This Node. Clicking Manage Fencing For This Node causes the Fence Configuration dialog box to be displayed.

    3. At the Fence Configuration dialog box, at the bottom of the right frame (below Properties), click Add a New Fence Level. Clicking Add a New Fence Level causes a fence-level element (for example, Fence-Level-1, Fence-Level-2, and so on) to be displayed below the node in the left frame of the Fence Configuration dialog box.

    4. Click the fence-level element.

    5. At the bottom of the right frame (below Properties), click Add a New Fence to this Level. Clicking Add a New Fence to this Level causes the Fence Properties dialog box to be displayed.

    6. At the Fence Properties dialog box, click the Fence Device Type drop-down box and select the fence device for this node. Also, provide any additional information required (for example, Port and Switch for an APC Power Device).

    7. At the Fence Properties dialog box, click OK. Clicking OK causes a fence device element to be displayed below the fence-level element.

    8. To create additional fence devices at this fence level, return to step 4 of this fencing procedure (clicking the fence-level element). Otherwise, proceed to the next step.

    9. To create additional fence levels, return to step 3 of this fencing procedure (clicking Add a New Fence Level). Otherwise, proceed to the next step.

    10. If you have configured all the fence levels and fence devices for this node, click Close.

  8. Choose File => Save to save the changes to the cluster configuration.
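
For reference, the member you added in this procedure is recorded by the Cluster Configuration Tool in /etc/cluster/cluster.conf. The following sketch shows roughly what such an entry might look like for a node with one fence level containing one APC fence device; the node name (node-03.example.com), fence device name (apc-switch), and port and switch values are hypothetical, and the exact attributes depend on the fence device type you selected:

    <clusternode name="node-03.example.com" votes="1">
        <fence>
            <method name="1">
                <device name="apc-switch" switch="1" port="3"/>
            </method>
        </fence>
    </clusternode>

Similarly, if the node name is not defined in DNS, each cluster node needs a matching /etc/hosts entry, for example (hypothetical address):

    10.0.0.3    node-03.example.com node-03

These sketches are for illustration only; make the changes themselves through the Cluster Configuration Tool as described above.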

3.7.2. Adding a Member to a Running Cluster

The procedure for adding a member to a running cluster depends on whether the cluster contains only two nodes or more than two nodes. To add a member to a running cluster, follow the steps in one of the following sections according to the number of nodes in the cluster: for a cluster that contains only two nodes, refer to Section 3.7.2.1 Adding a Member to a Running Cluster That Contains Only Two Nodes; for a cluster that contains more than two nodes, refer to Section 3.7.2.2 Adding a Member to a Running Cluster That Contains More Than Two Nodes.
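
For DLM clusters, the distinction exists largely because a two-node cluster is typically configured with a special two-node quorum setting in /etc/cluster/cluster.conf, roughly of the following form:

    <cman two_node="1" expected_votes="1"/>

Changing that setting to accommodate a third node takes effect only after the cluster software has been stopped and restarted on every member, which is why the two-node procedure restarts the entire cluster while the procedure for larger clusters does not.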

3.7.2.1. Adding a Member to a Running Cluster That Contains Only Two Nodes

To add a member to an existing cluster that is currently in operation, and contains only two nodes, follow these steps:

  1. Add the node and configure fencing for it as in Section 3.7.1 Adding a Member to a Cluster.

  2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.

  3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node (an example command is shown after this procedure).

  4. At the Red Hat Cluster Suite management GUI Cluster Status Tool tab, disable each service listed under Services.

  5. Stop the cluster software on the two running nodes by running the following commands at each node in this order:

    1. service rgmanager stop

    2. service gfs stop, if you are using Red Hat GFS

    3. service clvmd stop

    4. service fenced stop

    5. service cman stop

    6. service ccsd stop

  6. Start cluster software on all cluster nodes (including the added one) by running the following commands in this order:

    1. service ccsd start

    2. service cman start

    3. service fenced start

    4. service clvmd start

    5. service gfs start, if you are using Red Hat GFS

    6. service rgmanager start

  7. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
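
Step 3 of this procedure is run from a shell on one of the existing members. Assuming a hypothetical new node named node-03.example.com and that root can log in to it over ssh, the command might look like the following:

    scp /etc/cluster/cluster.conf root@node-03.example.com:/etc/cluster/

This gives the new node the same configuration file that was propagated to the running members, so that all nodes start from an identical /etc/cluster/cluster.conf when the cluster software is restarted in steps 5 and 6.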

3.7.2.2. Adding a Member to a Running Cluster That Contains More Than Two Nodes

To add a member to an existing cluster that is currently in operation, and contains more than two nodes, follow these steps:

  1. Add the node and configure fencing for it as in Section 3.7.1 Adding a Member to a Cluster.

  2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.

  3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.

  4. Start cluster services on the new node by running the following commands in this order:

    1. service ccsd start

    2. service lock_gulmd start or service cman start according to the type of lock manager used

    3. service fenced start (DLM clusters only)

    4. service clvmd start

    5. service gfs start, if you are using Red Hat GFS

    6. service rgmanager start

  5. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected. (A command-line check is sketched after this procedure.)
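
In addition to the Cluster Status Tool, you can check membership and service status from a shell on any running member. A minimal sketch (output format varies by release, and clustat reports services only when rgmanager is running):

    cman_tool nodes    # DLM clusters: list the member nodes and their status
    clustat            # list cluster members and the state of each service

The new node should appear as a member, and the cluster services you use should show as started on their expected nodes.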

3.7.3. Deleting a Member from a Cluster

To delete a member from an existing cluster that is currently in operation, follow these steps:

  1. At one of the running nodes (not to be removed), run the Red Hat Cluster Suite management GUI. At the Cluster Status Tool tab, under Services, disable or relocate each service that is running on the node to be deleted.

  2. Stop the cluster software on the node to be deleted by running the following commands at that node in this order:

    1. service rgmanager stop

    2. service gfs stop, if you are using Red Hat GFS

    3. service clvmd stop

    4. service fenced stop (DLM clusters only)

    5. service lock_gulmd stop or service cman stop according to the type of lock manager used

    6. service ccsd stop

  3. At the Cluster Configuration Tool (on one of the running members), delete the member as follows:

    1. If necessary, click the triangle icon to expand the Cluster Nodes property.

    2. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.

    3. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion (Figure 3-9).

      Figure 3-9. Confirm Deleting a Member

    4. At that dialog box, click Yes to confirm deletion.

    5. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.)

  4. Stop the cluster software on all remaining running nodes (including GULM lock-server nodes for GULM clusters) by running the following commands at each node in this order:

    1. service rgmanager stop

    2. service gfs stop, if you are using Red Hat GFS

    3. service clvmd stop

    4. service fenced stop (DLM clusters only)

    5. service lock_gulmd stop or service cman stop according to the type of lock manager used

    6. service ccsd stop

  5. Start cluster software on all remaining cluster nodes (including the GULM lock-server nodes for a GULM cluster) by running the following commands in this order:

    1. service ccsd start

    2. service lock_gulmd start or service cman start according to the type of lock manager used

    3. service fenced start (DLM clusters only)

    4. service clvmd start

    5. service gfs start, if you are using Red Hat GFS

    6. service rgmanager start

  6. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
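
If the deleted node will remain powered on outside the cluster, you may also want to prevent its cluster software from starting at boot and attempting to rejoin. A minimal sketch, run on the deleted node (the service names match those used earlier in this section; omit gfs if you are not using Red Hat GFS, and for a GULM cluster use lock_gulmd in place of cman and omit fenced):

    chkconfig rgmanager off
    chkconfig gfs off
    chkconfig clvmd off
    chkconfig fenced off
    chkconfig cman off
    chkconfig ccsd off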