Appendix A. Upgrading GFS

To upgrade a node to Red Hat GFS 6.1 from earlier versions of Red Hat GFS, you must convert the GFS cluster configuration archive (CCA) to a Red Hat Cluster Suite cluster configuration system (CCS) configuration file (/etc/cluster/cluster.conf) and convert GFS pool volumes to LVM2 volumes.

This appendix contains instructions for upgrading from GFS 6.0 (or GFS 5.2.1) to Red Hat GFS 6.1, using GULM as the lock manager.

Note

You must retain GULM lock management for the upgrade to Red Hat GFS 6.1; that is, you cannot change from GULM lock management to DLM lock management during the upgrade to Red Hat GFS 6.1. However, after the upgrade to GFS 6.1, you can change lock managers. Refer to Red Hat Cluster Suite Configuring and Managing a Cluster for information about changing lock managers.

The following procedure demonstrates upgrading from a GFS 6.0 (or GFS 5.2.1) configuration to Red Hat GFS 6.1, using an example pool configuration for a pool volume named argus (refer to Example A-1).

 poolname argus
 subpools 1
 subpool 0 512 1 gfs_data
 pooldevice 0 0 /dev/sda1

Example A-1. Example pool Configuration Information for Pool Volume Named argus

  1. Halt the GFS nodes and the lock server nodes as follows:

    1. Unmount GFS file systems from all nodes.
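
      For example, assuming a GFS file system is mounted at /mnt/gfs1 (the mount point shown is illustrative):

      # umount /mnt/gfs1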

    2. Stop the lock servers; at each lock server node, stop the lock server as follows:

      # service lock_gulmd stop
    3. Stop ccsd at all nodes; at each node, stop ccsd as follows:

      # service ccsd stop
    4. Deactivate pools; at each node, deactivate GFS pool volumes as follows:

      # service pool stop
    5. Uninstall Red Hat GFS RPMs.
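
      For example, you can list the installed GFS-related packages and then remove them with rpm; the package names shown are illustrative and vary by release:

      # rpm -qa | egrep 'GFS|pool|ccs'
      # rpm -e GFS GFS-modules pool ccs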

  2. Install new software:

    1. Install Red Hat Enterprise Linux version 4 software (or verify that it is installed).
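
      For example, to verify the installed release (the output shown is illustrative):

      # cat /etc/redhat-release
      Red Hat Enterprise Linux AS release 4 (Nahant)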

    2. Install Red Hat Cluster Suite and Red Hat GFS RPMs.
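
      For example, if the required Red Hat Cluster Suite and Red Hat GFS RPMs are in the current directory:

      # rpm -Uvh *.rpm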

  3. At all GFS 6.1 nodes, create a cluster configuration file directory (/etc/cluster) and upgrade the CCA (in this example, located in /dev/pool/cca) to the new Red Hat Cluster Suite CCS configuration file format by running the ccs_tool upgrade command as shown in the following example:

    # mkdir /etc/cluster
    # ccs_tool upgrade /dev/pool/cca > /etc/cluster/cluster.conf
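
    The converted file is an XML file in the Red Hat Cluster Suite CCS format; its exact contents depend on the CCA being upgraded. A minimal sketch for a GULM cluster (the cluster, node, and fence device names shown are illustrative) looks similar to the following:

    # cat /etc/cluster/cluster.conf
    <?xml version="1.0"?>
    <cluster name="alpha" config_version="1">
        <gulm>
            <lockserver name="n01"/>
        </gulm>
        <clusternodes>
            <clusternode name="n01">
                <fence>
                    <method name="single">
                        <device name="man" nodename="n01"/>
                    </method>
                </fence>
            </clusternode>
        </clusternodes>
        <fencedevices>
            <fencedevice name="man" agent="fence_manual"/>
        </fencedevices>
    </cluster>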
  4. At all GFS 6.1 nodes, start ccsd, run the lock_gulmd -c command, and start clvmd as shown in the following example:

    # ccsd
    # lock_gulmd -c 
    Warning! You didn't specify a cluster name before --use_ccs
      Letting ccsd choose which cluster we belong to.
    # clvmd

    Note

    Ignore the warning message following the lock_gulmd -c command. Because the cluster name is already included in the converted configuration file, there is no need to specify a cluster name when issuing the lock_gulmd -c command.

  5. At all GFS 6.1 nodes, run vgscan as shown in the following example:

    # vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "argus" using metadata type pool
  6. At one GFS 6.1 node, convert the pool volume to an LVM2 volume by running the vgconvert command as shown in the following example:

    # vgconvert -M2 argus
      Volume group argus successfully converted
  7. At all GFS 6.1 nodes, run vgchange -ay as shown in the following example:

    # vgchange -ay
      1 logical volume(s) in volume group "argus" now active
  8. At the first node to mount a GFS file system, run the mount command with the upgrade option as shown in the following example:

    # mount -t gfs -o upgrade /dev/pool/argus /mnt/gfs1

    Note

    This step only needs to be done once — on the first mount of the GFS file system.
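
    At the remaining GFS 6.1 nodes (and on all subsequent mounts), mount the file system without the upgrade option; for example, assuming the same mount point:

    # mount -t gfs /dev/pool/argus /mnt/gfs1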

    Note

    If static minor numbers were used on pool volumes and the GFS 6.1 nodes are using LVM2 for other purposes (for example, the root file system), you may encounter problems activating the converted volumes under GFS 6.1 because of conflicting static minor numbers. Refer to the following Bugzilla report for more information:

    https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=146035