1.2. A Detailed Look at the Boot Process

The beginning of the boot process varies depending on the hardware platform being used. However, once the kernel is found and loaded by the boot loader, the default boot process is identical across all architectures. This chapter focuses primarily on the x86 architecture.

1.2.1. The BIOS

When an x86 computer is booted, the processor looks at the end of system memory for the Basic Input/Output System or BIOS program and runs it. The BIOS controls not only the first step of the boot process, but also provides the lowest level interface to peripheral devices. For this reason it is written into read-only, permanent memory and is always available for use.

Other platforms use different programs to perform low-level tasks roughly equivalent to those of the BIOS on an x86 system. For instance, Itanium-based computers use the Extensible Firmware Interface (EFI) Shell, while Alpha systems use the SRM console.

Once loaded, the BIOS tests the system, looks for and checks peripherals, and then locates a valid device with which to boot the system. Usually, it first checks any diskette drives and CD-ROM drives present for bootable media and, failing that, looks to the system's hard drives. In most cases, the order of the drives searched while booting is controlled by a setting in the BIOS; among the hard drives, the BIOS typically looks at the master IDE device on the primary IDE bus first. The BIOS then loads into memory whatever program resides in the first sector of this device, called the Master Boot Record or MBR. The MBR is only 512 bytes in size and contains machine code instructions for booting the machine, called a boot loader, along with the partition table. Once the BIOS finds and loads the boot loader program into memory, it yields control of the boot process to it.
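
To illustrate, the contents of the MBR can be examined from a running system with standard tools. The following commands are only a sketch; the device name /dev/hda is an example and depends on the hardware.

# Copy the 512-byte MBR from the first IDE disk to a file (device name is an example)
dd if=/dev/hda of=/tmp/mbr.bin bs=512 count=1

# Dump the sector in hexadecimal; the boot loader code occupies the first 446 bytes,
# followed by the 64-byte partition table and the 2-byte 0x55 0xAA boot signature
od -A d -t x1 /tmp/mbr.bin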

1.2.2. The Boot Loader

This section looks at the boot loaders for the x86 platform. Depending on the system's architecture, the boot process may differ slightly. Refer to Section 1.2.2.1 Boot Loaders for Other Architectures for a brief overview of non-x86 boot loaders.

When using Red Hat Enterprise Linux, two boot loaders are available: GRUB and LILO. GRUB is the default boot loader, but LILO is available for those who require or prefer it. For more information about configuring and using GRUB or LILO, see Chapter 2 Boot Loaders.

Both boot loaders for the x86 platform are broken into at least two stages. The first stage is a small machine code binary on the MBR. Its sole job is to locate the second stage boot loader and load the first part of it into memory.

GRUB is the newer boot loader and has the advantage of being able to read ext2 and ext3[1] partitions and load its configuration file, /boot/grub/grub.conf, at boot time. Refer to Section 2.7 GRUB Menu Configuration File for information on how to edit this file.
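
The following is a minimal sketch of what a grub.conf might contain; the kernel version and device names are placeholders, not values taken from this guide.

default=0
timeout=10

# Placeholder kernel version and root device, for illustration only
title Red Hat Enterprise Linux (2.4.21-1)
        root (hd0,0)
        kernel /vmlinuz-2.4.21-1 ro root=/dev/hda2
        initrd /initrd-2.4.21-1.img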

With LILO, the second stage boot loader uses information on the MBR to determine the boot options available to the user. This means that any time a configuration change is made or the kernel is manually upgraded, the /sbin/lilo -v -v command must be executed to write the appropriate information to the MBR. For details about doing this, refer to Section 2.8 LILO.
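
For illustration, a hypothetical stanza from the LILO configuration file, /etc/lilo.conf, might look like the following; the kernel version and device names are placeholders.

boot=/dev/hda
map=/boot/map
prompt
timeout=50
default=linux

image=/boot/vmlinuz-2.4.21-1
        label=linux
        initrd=/boot/initrd-2.4.21-1.img
        read-only
        root=/dev/hda2

After saving changes to this file, running /sbin/lilo -v -v writes the updated boot information to the MBR.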

Tip

If upgrading the kernel using the Red Hat Update Agent, the boot loader configuration file is updated automatically. More information on Red Hat Network can be found online at the following URL: https://rhn.redhat.com.

Once the second stage boot loader is in memory, it presents the user with a graphical screen showing the different operating systems or kernels it has been configured to boot. On this screen a user can use the arrow keys to choose which operating system or kernel they wish to boot and press [Enter]. If no key is pressed, the boot loader loads the default selection after a configurable period of time has passed.

Note

If Symmetric Multi-Processor (SMP) kernel support is installed, there will be more than one option present the first time the system is booted. In this situation, LILO will display linux, which is the SMP kernel, and linux-up, which is for single processors. GRUB displays Red Hat Enterprise Linux (<kernel-version>-smp), which is the SMP kernel, and Red Hat Enterprise Linux (<kernel-version>), which is for single processors.

If any problems occur using the SMP kernel, try selecting a non-SMP kernel upon rebooting.

Once the second stage boot loader has determined which kernel to boot, it locates the corresponding kernel binary in the /boot/ directory. The kernel binary is named using the format /boot/vmlinuz-<kernel-version>, where <kernel-version> corresponds to the kernel version specified in the boot loader's settings.
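
For example, the installed kernel binaries and the currently running kernel version can be listed as follows; the exact file names vary from system to system.

# List the installed kernel binaries in /boot/
ls -1 /boot/vmlinuz-*

# Display the version of the kernel that is currently running
uname -r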

For instructions on using the boot loader to supply command line arguments to the kernel, refer to Chapter 2 Boot Loaders. For information on changing the runlevel at the GRUB or LILO prompt, refer to Section 2.10 Changing Runlevels at Boot Time.

The boot loader then places the appropriate initial RAM disk image, called an initrd, into memory. The initrd is used by the kernel to load drivers necessary to boot the system. This is particularly important if SCSI hard drives are present or if the system uses the ext3 file system[2].
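
As an illustration, an initrd image for a particular kernel can typically be rebuilt with the mkinitrd utility; the version string below is a placeholder.

# Build an initial RAM disk image for the specified kernel version
/sbin/mkinitrd /boot/initrd-2.4.21-1.img 2.4.21-1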

Warning

Do not remove the /initrd/ directory from the file system for any reason. Removing this directory will cause the system to fail with a kernel panic error message at boot time.

Once the kernel and the initrd image are loaded into memory, the boot loader hands control of the boot process to the kernel.

For a more detailed overview of the GRUB and LILO boot loaders, refer to Chapter 2 Boot Loaders.

1.2.2.1. Boot Loaders for Other Architectures

Once the kernel loads and hands off the boot process to the init command, the same sequence of events occurs on every architecture. So the main difference between each architecture's boot process is in the application used to find and load the kernel.

For example, the Alpha architecture uses the aboot boot loader, the Itanium architecture uses the ELILO boot loader, IBM pSeries uses YABOOT, and IBM s390 systems use the z/IPL boot loader.

Consult the Red Hat Enterprise Linux Installation Guide specific to these platforms for information on configuring their boot loaders.

1.2.3. The Kernel

When the kernel is loaded, it immediately initializes and configures the computer's memory and the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed initrd image in a predetermined location in memory, decompresses it, mounts it, and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID, before unmounting the initrd disk image and freeing up all the memory the disk image once occupied.

The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory.

At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with the system.

To set up the user environment, the kernel executes the /sbin/init program.

1.2.4. The /sbin/init Program

The /sbin/init program (also called init) coordinates the rest of the boot process and configures the environment for the user.

When the init command starts, it becomes the parent or grandparent of all of the processes that start up automatically on the system. First, it runs the /etc/rc.d/rc.sysinit script, which sets the environment path, starts swap, checks the file systems, and executes all other steps required for system initialization. For example, most systems use a clock, so rc.sysinit reads the /etc/sysconfig/clock configuration file to initialize the hardware clock. Similarly, if there are special serial port processes that must be initialized, rc.sysinit executes the /etc/rc.serial file.
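
As an example, a typical /etc/sysconfig/clock file contains entries such as the following; the time zone shown is only illustrative.

UTC=true
ARC=false
ZONE="America/New_York"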

The init command then runs the /etc/inittab script, which describes how the system should be set up in each SysV init runlevel [3]. Among other things, the /etc/inittab file sets the default runlevel and dictates that /sbin/update should be run whenever a given runlevel starts [4].
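
The following excerpt shows the kinds of lines involved, using runlevel 5 as an example.

# Default runlevel to enter at boot
id:5:initdefault:

# System initialization script, run once at boot
si::sysinit:/etc/rc.d/rc.sysinit

# Run the scripts for the selected runlevel (runlevel 5 shown here)
l5:5:wait:/etc/rc.d/rc 5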

Next, the init command sets the source function library, /etc/rc.d/init.d/functions, for the system, which configures how to start, kill, and determine the PID of a program.
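
A minimal sketch of an init script that sources this library is shown below; the service name exampled and its daemon path are hypothetical.

#!/bin/bash
#
# exampled    Start and stop the hypothetical exampled service.

# Source the function library to get daemon, killproc, and status
. /etc/rc.d/init.d/functions

case "$1" in
  start)
        echo -n "Starting exampled: "
        daemon /usr/sbin/exampled
        echo
        ;;
  stop)
        echo -n "Stopping exampled: "
        killproc exampled
        echo
        ;;
  status)
        status exampled
        ;;
  *)
        echo "Usage: exampled {start|stop|status}"
        exit 1
        ;;
esac

exit 0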

The init program starts all of the background processes by looking in the appropriate rc directory for the runlevel specified as the default in /etc/inittab. The rc directories are numbered to correspond to the runlevel they represent. For instance, /etc/rc.d/rc5.d/ is the directory for runlevel 5.

When booting to runlevel 5, the init program looks in the /etc/rc.d/rc5.d/ directory to determine which processes to start and stop.

Below is an example listing of the /etc/rc.d/rc5.d/ directory:

K05innd -> ../init.d/innd
K05saslauthd -> ../init.d/saslauthd
K10psacct -> ../init.d/psacct
K10radiusd -> ../init.d/radiusd
K12mysqld -> ../init.d/mysqld
K15httpd -> ../init.d/httpd
K15postgresql -> ../init.d/postgresql
K16rarpd -> ../init.d/rarpd
K20iscsi -> ../init.d/iscsi
K20netdump-server -> ../init.d/netdump-server
K20nfs -> ../init.d/nfs
K20tomcat -> ../init.d/tomcat
K24irda -> ../init.d/irda
K25squid -> ../init.d/squid
K28amd -> ../init.d/amd
K34dhcrelay -> ../init.d/dhcrelay
K34yppasswdd -> ../init.d/yppasswdd
K35dhcpd -> ../init.d/dhcpd
K35smb -> ../init.d/smb
K35vncserver -> ../init.d/vncserver
K35winbind -> ../init.d/winbind
K36lisa -> ../init.d/lisa
K45arpwatch -> ../init.d/arpwatch
K45named -> ../init.d/named
K45smartd -> ../init.d/smartd
K46radvd -> ../init.d/radvd
K50netdump -> ../init.d/netdump
K50snmpd -> ../init.d/snmpd
K50snmptrapd -> ../init.d/snmptrapd
K50tux -> ../init.d/tux
K50vsftpd -> ../init.d/vsftpd
K54pxe -> ../init.d/pxe
K61ldap -> ../init.d/ldap
K65kadmin -> ../init.d/kadmin
K65kprop -> ../init.d/kprop
K65krb524 -> ../init.d/krb524
K65krb5kdc -> ../init.d/krb5kdc
K70aep1000 -> ../init.d/aep1000
K70bcm5820 -> ../init.d/bcm5820
K74ntpd -> ../init.d/ntpd
K74ypserv -> ../init.d/ypserv
K74ypxfrd -> ../init.d/ypxfrd
K84bgpd -> ../init.d/bgpd
K84ospf6d -> ../init.d/ospf6d
K84ospfd -> ../init.d/ospfd
K84ripd -> ../init.d/ripd
K84ripngd -> ../init.d/ripngd
K85zebra -> ../init.d/zebra
K92ipvsadm -> ../init.d/ipvsadm
K95firstboot -> ../init.d/firstboot
S00microcode_ctl -> ../init.d/microcode_ctl
S08ip6tables -> ../init.d/ip6tables
S08iptables -> ../init.d/iptables
S09isdn -> ../init.d/isdn
S10network -> ../init.d/network
S12syslog -> ../init.d/syslog
S13irqbalance -> ../init.d/irqbalance
S13portmap -> ../init.d/portmap
S14nfslock -> ../init.d/nfslock
S17keytable -> ../init.d/keytable
S20random -> ../init.d/random
S24pcmcia -> ../init.d/pcmcia
S25netfs -> ../init.d/netfs
S26apmd -> ../init.d/apmd
S28autofs -> ../init.d/autofs
S44acpid -> ../init.d/acpid
S55sshd -> ../init.d/sshd
S56rawdevices -> ../init.d/rawdevices
S56xinetd -> ../init.d/xinetd
S59hpoj -> ../init.d/hpoj
S80sendmail -> ../init.d/sendmail
S85gpm -> ../init.d/gpm
S90canna -> ../init.d/canna
S90crond -> ../init.d/crond
S90cups -> ../init.d/cups
S90FreeWnn -> ../init.d/FreeWnn
S90xfs -> ../init.d/xfs
S95atd -> ../init.d/atd
S97rhnsd -> ../init.d/rhnsd
S99local -> ../rc.local
S99mdmonitor -> ../init.d/mdmonitor

As illustrated in this listing, none of the scripts that actually start and stop the services are located in the /etc/rc.d/rc5.d/ directory. Rather, all of the files in /etc/rc.d/rc5.d/ are symbolic links pointing to scripts located in the /etc/rc.d/init.d/ directory. Symbolic links are used in each of the rc directories so that the runlevels can be reconfigured by creating, modifying, and deleting the symbolic links without affecting the actual scripts they reference.

The name of each symbolic link begins with either a K or an S. The K links are processes that are killed on that runlevel, while those beginning with an S are started.

The init command first stops all of the K symbolic links in the directory by issuing the /etc/rc.d/init.d/<command> stop command, where <command> is the process to be killed. It then starts all of the S symbolic links by issuing /etc/rc.d/init.d/<command> start.

Tip

After the system is finished booting, it is possible to log in as root and execute these same scripts to start and stop services. For instance, the command /etc/rc.d/init.d/httpd stop stops the Apache HTTP Server.

Each of the symbolic links is numbered to dictate start order. The order in which the services are started or stopped can be altered by changing this number. The lower the number, the earlier the service is started. Symbolic links with the same number are started in alphabetical order.
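
In practice, these symbolic links are usually managed with the chkconfig utility rather than created by hand. For example, using httpd as an illustrative service:

# Show whether httpd is an S (start) or K (kill) link in each runlevel
chkconfig --list httpd

# Create the S link for httpd in runlevel 5; the number in the link name
# is taken from the chkconfig header inside the service's init script
chkconfig --level 5 httpd on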

Note

One of the last things the init program executes is the /etc/rc.d/rc.local file. This file is useful for system customization. Refer to Section 1.3 Running Additional Programs at Boot Time for more information about using the rc.local file.
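
For instance, a simple rc.local might look like the following; the commented command is purely illustrative.

#!/bin/sh
#
# This script is executed after all the other init scripts for the runlevel.
# Place site-specific customization commands here.

touch /var/lock/subsys/local
# /usr/local/bin/site-monitor &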

After the init command has progressed through the appropriate rc directory for the runlevel, the /etc/inittab script forks an /sbin/mingetty process for each virtual console (login prompt) allocated to the runlevel. Runlevels 2 through 5 have all six virtual consoles, while runlevel 1 (single user mode) has one and runlevels 0 and 6 have none. The /sbin/mingetty process opens communication pathways to tty devices[5], sets their modes, prints the login prompt, accepts the user's username and password, and initiates the login process.
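
These processes are defined by lines in /etc/inittab similar to the following excerpt.

# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6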

In runlevel 5, /etc/inittab runs a script called /etc/X11/prefdm. The prefdm script executes the preferred X display manager[6] (gdm, kdm, or xdm), depending on the contents of the /etc/sysconfig/desktop file.
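
In runlevel 5, prefdm is started by a line in /etc/inittab such as x:5:respawn:/etc/X11/prefdm -nodaemon. One common form of the /etc/sysconfig/desktop file is a single line like the following, shown here only as an example.

# Selects the GNOME display manager (gdm); KDE and XDM select kdm and xdm
DESKTOP="GNOME"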

Once finished, the system is operating in runlevel 5 and displaying a login screen.

Notes

[1]

GRUB reads ext3 file systems as ext2, disregarding the journal file. Refer to the chapter titled The ext3 File System in the Red Hat Enterprise Linux System Administration Guide for more information on the ext3 file system.

[2]

For details on making an initrd, refer to the chapter titled The ext3 File System in the Red Hat Enterprise Linux System Administration Guide.

[3]

For more information on SysV init runlevels, refer to Section 1.4 SysV Init Runlevels.

[4]

The update command is used to flush dirty buffers back to disk.

[5]

Refer to Section 5.3.11 /proc/tty/ for more information about tty devices.

[6]

Refer to Section 7.5.2 Runlevel 5 for more information about display managers.