1.2. Performance, Scalability, and Economy

You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected to a LAN with servers that use GNBD (Global Network Block Device). A GNBD provides block-level storage access over an Ethernet LAN. (For more information about GNBD, refer to Chapter 11 Using GNBD.)

The following sections provide examples of how GFS can be deployed to suit your needs for performance, scalability, and economy:

Note

The deployment examples in this chapter reflect basic configurations; your needs might require a combination of configurations shown in the examples.

1.2.1. Superior Performance and Scalability

You can obtain the highest shared-file performance when applications access storage directly. The GFS SAN configuration in Figure 1-1 provides superior file performance for shared files and file systems. Linux applications run directly on GFS clustered application nodes. Without file protocols or storage servers to slow data access, performance is comparable to that of individual Linux servers with directly connected storage; yet each GFS application node has equal access to all data files. GFS supports over 300 GFS application nodes.

Figure 1-1. GFS with a SAN
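
As an illustration of this configuration, the commands below sketch how a GFS file system might be created once on a SAN logical unit and then mounted on each application node. The cluster name (alpha), file system name (gfs1), device name (/dev/sdb1), journal count, and mount point are examples only, and the locking protocol shown (lock_dlm) depends on the lock manager configured for your cluster; refer to the later chapters on creating and mounting GFS file systems for the complete procedures.

  # Run once, from any node with access to the SAN device
  # (create one journal per node that will mount the file system):
  gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 3 /dev/sdb1

  # Run on each GFS application node:
  mount -t gfs /dev/sdb1 /mnt/gfs1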

1.2.2. Performance, Scalability, Moderate Price

Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1-2. GNBD servers present SAN block storage to network clients as block storage devices. From the perspective of a client application, storage is accessed as if it were directly attached to the server on which the application is running. Stored data actually resides on the SAN. Storage devices and data can be equally shared by network client applications. File locking and sharing functions are handled by GFS for each network client.

Note

Clients running ext2 or ext3 file systems can be configured to access their own dedicated slice of SAN storage.

Figure 1-2. GFS and GNBD with a SAN
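
For illustration, the following sketch shows how a SAN device might be exported from a GNBD server and then imported and mounted on a LAN client. The device name (/dev/sdc1), export name (gamma), server host name (gnbdserv1), and mount point are placeholders; the exact steps, including starting the GNBD server daemon, are covered in Chapter 11 Using GNBD.

  # On the SAN-attached GNBD server, with the GNBD server daemon running:
  gnbd_export -d /dev/sdc1 -e gamma

  # On each LAN client, import the exports from that server:
  gnbd_import -i gnbdserv1

  # The imported device appears under /dev/gnbd/ and is mounted like
  # any other GFS block device:
  mount -t gfs /dev/gnbd/gamma /mnt/gfs1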

1.2.3. Economy and Performance

Figure 1-3 shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client. Application failover can be fully automated with Red Hat Cluster Suite.

Figure 1-3. GFS and GNBD with Directly Connected Storage
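
As a sketch of this topology, each GNBD server exports its directly connected disk over the LAN, and each client imports the exports from all of the GNBD servers; the imported block devices can then carry GFS file systems exactly as in the previous example. The device names, export names, and host names below are placeholders.

  # On GNBD server gnbdserv1 (directly connected disk):
  gnbd_export -d /dev/sdb1 -e data0

  # On GNBD server gnbdserv2 (directly connected disk):
  gnbd_export -d /dev/sdb1 -e data1

  # On each client node, import from both servers:
  gnbd_import -i gnbdserv1
  gnbd_import -i gnbdserv2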