Rebuilding Your Kernel

Rebuilding your kernel is the process of recompiling the source code to create a new kernel that you can then install and boot. A rebuild is important (actually, necessary) when you want to add or remove drivers or change parameters. Often, changes or updates to the kernel are made that you want to include to keep yourself up to date. (Note that adding new drivers does not always require you to rebuild your kernel. This is where kernel modules come in, which we talk about later.)

Usually, updates and patches come too fast for commercial distributions to keep up with them and include them on their CD-ROMs. Normally, new CD-ROMs are created only for minor releases and not revisions. So, if you want to keep up to date, you will have to apply them yourself. This is also nice because you don’t need to get a new CD-ROM every time.

One very important thing to keep in mind is that many applications require specific kernel versions to run correctly (ps is a notable example). Therefore, if you are planning to install a new version of a program or application, make sure that it is compatible with your kernel.

The kernel source code (normally) is kept in /usr/src/linux. Usually, on a fresh installation, the linux directory is a symbolic link to a directory whose name is based on the kernel release number (for example, linux-2.4.14). A README file lists this version number and instructions on how to update the kernel. The version numbers take the following form:

<major release>.<minor release>.<patch level>

For example, in linux-2.4.14 the major release is 2, the minor release is 4, and the patch level is 14.
To find out what release you are on, you can run

uname -a

Linux saturn 2.4.9 #5 Thu Nov 15 19:15:26 CET 2001 i686 unknown

Here, the release is 2.4.9.

When problems (that is, bugs) are discovered in either the kernel or programs, patches are released. Depending on the extent of the patch, the patch level will change. By convention, releases with even minor numbers (1.2.?, for example) are “stable” releases. When patches are added to stable releases, they are only bug fixes and contain no new features. When a new feature is to be added, it goes into a release with an odd minor number (1.3.?). These are called development releases, and they usually contain bug fixes as well. Often both the stable even-numbered version and the development odd-numbered version are being worked on at the same time (for example, 1.2 and 1.3).

When bug fixes are made, they are added to both versions, but only the 1.3 version gets new features. When it becomes stable enough, the version might be changed to 1.4 and the entire process begins again.

Patches are available from all the Linux ftp sites, as well as many other places. The best idea is to check the newsgroups or one of the Web search engines like Yahoo.

Normally, patches are distributed compressed. A patch file is actually the output of the diff command: the old version of the source is compared to the new version, and the differences are written to the patch file. The patch command (which is used to apply the patches) then compares the source code on your system and makes the appropriate changes. Because the patch command compares lines in a particular location and makes changes, it is vital that patches be applied in the correct order. Otherwise, changes might be made where they don’t belong and nothing works anymore.
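The diff/patch round trip is easy to see on a small scale. The following is a self-contained sketch; the file names are invented for the example and have nothing to do with the kernel source:

```shell
# Create an "old" and a "new" version of a file.
printf 'line one\nline two\n'       > hello.c.orig
printf 'line one\nline two fixed\n' > hello.c.new

# The patch file is just the output of diff.
# (diff exits with status 1 when the files differ, hence the || true.)
diff -u hello.c.orig hello.c.new > fix.patch || true

# Apply the recorded differences to a copy of the old version.
cp hello.c.orig hello.c
patch hello.c < fix.patch
```

After the patch command runs, hello.c matches the new version, exactly as if you had edited it by hand.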

Before applying a patch, I would suggest making a backup copy of your entire source directory. First, change to the source directory (/usr/src) and run

cp -R linux linux.010997

Use -R for recursive so that all the subdirectories are included. At the end, include the date, so that you know when the copy was made.
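If you’d rather have the shell compute the date suffix for you, command substitution does it. A small sketch, using a stand-in directory instead of the real /usr/src/linux:

```shell
# Stand-in for the kernel source tree in this example.
mkdir -p linux/drivers

# Day-month-year, matching the 010997 style used above.
cp -R linux "linux.$(date +%d%m%y)"
```

The copy’s name now records exactly when it was made, with no chance of a typo.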

To decompress and extract the new source directory at the same time, the command would be

tar xzf v.1.4.0.tar.gz

Let’s assume that you have a patch file called patch.42. The command would be

gunzip -c patch.42 | patch -p0

where the -c option to gunzip tells it to write the output to stdout, and the -p0 option tells patch not to strip off any of the path names. Often there are multiple patches between the release you installed and the current release, so you need to get all the patch files.

Some more clever administrators might think about putting all the patch files into a single directory and running a single command using wild cards, such as

gunzip -c patch.* | patch -p0

The problem with this is the way the shell expands the wild cards. The shell doesn’t understand the concept of numbers as we do. All it knows is ASCII, so patch.* would expand to something like this:

patch.1 patch.10 patch.11 patch.12 patch.2 patch.3 patch.4

If you were to do this, you’d put the patches in the wrong order!
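One way around the trap is to sort the suffixes numerically yourself. A minimal, self-contained sketch (the patch names here are invented):

```shell
# ASCII order is what the shell gives you; numeric order is what you need.
printf 'patch.2\npatch.10\npatch.1\n' | sort             # ASCII:   patch.1, patch.10, patch.2
printf 'patch.2\npatch.10\npatch.1\n' | sort -t. -k2 -n  # numeric: patch.1, patch.2, patch.10

# So, to apply a directory full of patches in the right order, you could loop:
# for p in $(ls patch.* | sort -t. -k2 -n); do gunzip -c "$p" | patch -p0; done
```

The -t. option makes sort split each name at the dot, and -k2 -n sorts on the second field as a number rather than as a string.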

Even if you are going to apply only one patch, you need to be careful about what version you have and what patch you are applying. You may find the script patch-kernel in the /usr/src/linux/tools directory; it will allow you to put on multiple patches and will figure out the order in which they should be applied.

Also keep in mind that the development kernels (those with odd-numbered minor releases) are in development (that is, experimental). Just because a driver is included in the development kernel does not mean it will work with all devices. Speaking from experience, I know the problems this can cause. Several versions of Linux that I installed were still on a 1.2 kernel. However, the driver for my host adapter (an AHA-2940) was not included. On the CD-ROM was a copy of a 1.3 kernel that contained the driver. I created a floppy using the driver and all was well, I thought.

I was able to install the entire system and get everything configured, and things looked good. Then my SCSI tape drive arrived and I added it to the system. Well, the host adapter driver recognized the tape drive correctly, but immediately after printing the message with the tape drive model, the system stopped. The message on the screen said a SCSI command had timed out.

The kernel that was booting was the one from the installation floppy and, therefore, the 1.3 kernel. However, the source on the hard disk was for the 1.2 kernel. I added the patch, rebuilt the kernel, and rebooted. It was smooth sailing from then on. (Note that the 2.0 kernel already included the changes; this was an attempt to see what worked, not just to get the system working.)

In the kernel source directory /usr/src/linux is a standard make file. Running make config will ask you a series of questions about which drivers to include in your kernel. These are yes/no questions that are pretty straightforward, provided you know your system.

If you are new to Linux, you should consider running make menuconfig instead. This is a menu interface to the configuration routines that even has extensive on-line help. If you are running X, you should look at running

make xconfig

which will bring up a full GUI front-end. The commercial vendors can learn something from this baby!

The defaults that the systems provide are fine for normal operations, but you need to know how your system is configured. (Now are you beginning to understand why we got to this so far into the book?) In a lot of cases, the responses you give to questions will add functionality, while others simply change parameters.

I can’t go step-by-step through the rebuild process without explaining how make files work, but there are a few key points that I need to address. First, there are major changes between 1.2 and 2.0. A large number of drivers have been added and the overall flow is different. In several cases, where you were previously prompted to configure one general aspect, you can now be more specific about what you define.

In /usr/src/linux, there is a new subdirectory: Documentation. This contains, as you might expect, documentation for the kernel source. Before you reconfigure your kernel, I would suggest taking a look at this subdirectory. It has some very good information on what options to select depending on the hardware you have.

When you rebuild, what is actually run is a script in the arch/<type> subdirectory, where <type> is the type of architecture you have. If you run it on an Intel machine, the script that is run lives under /usr/src/linux/arch/i386/. This is the script that prompts you for all of the configuration options. After each question, you will see a cryptic name. This is the variable that will be defined (or undefined, depending on your answer).

Based on your input, variables are set to particular values. These appear as #define statements and are written to a temporary file as the configuration script runs. If something stops the configuration process (such as your pressing the interrupt key or entering an unacceptable value), this temporary file can simply be ignored. Once the configuration is complete, the temporary file becomes the new version of <linux/autoconf.h>. If you look in the autoconf.h file, you’ll see all of the cryptic variable names that you encountered when you ran the configure script.

If we have a Boolean variable (one that is either defined or undefined), then you’ll have the applicable definition in the autoconf.h file. For example, on my machine, I answered yes to configuring normal floppy support, so the line in autoconf.h looks like this:

#define CONFIG_BLK_DEV_FD 1
Here we have defined the constant CONFIG_BLK_DEV_FD to 1. Certain parts of the kernel source code or the make file will test for this value and include certain modules or otherwise change its behavior based on how the constant is defined.

Let’s look at an example. Because all I have is a SCSI hard disk, there was no need to include support for IDE hard disks, so the constant is undefined. The entry looks like this:

#undef CONFIG_ST506

If you want to add support for a particular device without having to go through the configuration script each time, all you need to do is edit the autoconf.h file.
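You can also check such settings mechanically. The sketch below recreates a one-line autoconf.h locally (so the example runs anywhere) and greps it; as mentioned above, parts of the kernel source and make files do their own, more elaborate version of this kind of test:

```shell
# Stand-in autoconf.h with the floppy line shown earlier.
cat > autoconf.h <<'EOF'
#define CONFIG_BLK_DEV_FD 1
EOF

# Test whether the constant is defined, the way a script might.
if grep -q '^#define CONFIG_BLK_DEV_FD' autoconf.h; then
    echo "floppy support is compiled in"
fi
```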

Next, gather the dependencies for the sources and include them in the various make files used during the kernel rebuild. This is done with make dep.

If you have done a kernel rebuild before, there may be a lot of files left lying around, so it’s a good idea to run make clean. Finally, run make with no options to start the kernel rebuild.
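Putting the steps together, a typical rebuild looks like the recipe below. (It is shown here only as a recipe; it obviously needs a real, configured kernel source tree to run.)

```shell
cd /usr/src/linux
make config     # or: make menuconfig, or make xconfig under X
make dep        # gather the dependencies for the make files
make clean      # clear out files left over from a previous rebuild
make            # build the new kernel
```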

One advantage that Linux has over other OSes is that it is completely component-based and is not distributed as a complete, non-divisible unit. This allows you to update/upgrade those components that have changed and leave the rest intact. This has been made very simple through the Red Hat Package Manager (RPM). (See the chapter on installing for more details.)

Unfortunately, this process is not always “plug-n-play”: Often, differences between the kernel and programs (or between different programs) can cause problems. For example, when the kernel was updated to 2.0, I simply tossed the source code onto a system that had the 1.2 kernel, rebuilt the kernel, and rebooted. All looked well. However, on shutting down the system, I encountered a stream of errors that occur only with the 2.0 kernel. Installing the distribution with the 2.0 kernel in its entirety did not cause this problem.

A fairly new concept is the idea of a loadable module. These are parts of the kernel that are not linked directly into the kernel but are pulled in when you need them. For example, you only use your tape drive when you do backups, so what’s the point of having its driver in your kernel all the time? If you load it only when you need it, you save memory for other tasks.

To be able to use kernel modules like these, your kernel must support them. When you run make config, you are asked about this, among other things. Modules can be loaded into the kernel by hand, using the insmod command, or automatically, using the kerneld daemon. See the kernel HOWTO or the kernel mini-HOWTO for more specifics.
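By hand, the cycle looks something like the recipe below. (This is only a sketch: it needs root privileges and a module built for your kernel, and ftape is just an example name standing in for whatever driver you actually use.)

```shell
insmod ftape    # load the tape driver into the running kernel
lsmod           # list the modules that are currently loaded
rmmod ftape     # remove the driver again once the backup is done
```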