3.9. Partitioning for Experts

This section provides detailed information for tailoring system partitioning to your needs. This information is mainly of interest for those who want to optimize a system for security and speed and who are prepared to reinstall the entire existing system if necessary.

The procedures described here require a basic understanding of the functions of a UNIX file system. You should be familiar with mount points and physical, extended, and logical partitions.

First, consider the following points:

3.9.1. Size of the Swap Partition

Many sources state the rule that the swap size should be at least twice the size of the main memory. This is a relic of times when 8 MB RAM was considered a lot. In the past, the aim was to equip the machine with about 30 to 40 MB of virtual memory (RAM plus swap). Modern applications require even more memory. For normal users, 512 MB of virtual memory is a reasonable value. Never configure your system without any swap memory.
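As a sketch of this sizing rule, the following shell fragment starts from twice the RAM size and then tops the result up so that RAM plus swap reaches at least 512 MB of virtual memory. The RAM value is a hypothetical example; on a real system you would derive it, for instance, from /proc/meminfo.

```shell
#!/bin/sh
# Sketch of the sizing rule above: start from twice the RAM size,
# then make sure RAM + swap reaches at least 512 MB of virtual memory.
ram_mb=128                     # hypothetical value; on a real system,
                               # read MemTotal from /proc/meminfo
swap_mb=$((ram_mb * 2))        # "twice the main memory" rule
if [ $((ram_mb + swap_mb)) -lt 512 ]; then
    swap_mb=$((512 - ram_mb))  # top up to 512 MB of virtual memory
fi
echo "RAM: ${ram_mb} MB, suggested swap: ${swap_mb} MB"
```

For the example value of 128 MB RAM, doubling gives only 384 MB of virtual memory, so the suggestion is raised to 384 MB of swap.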

3.9.2. Partitioning Proposals for Special Purposes

File Server

Here, hard disk performance is crucial. Use SCSI devices if possible. Keep in mind the performance of the disk and the controller. A file server is used to save data, such as user directories, a database, or other archives, centrally. This approach greatly simplifies the data administration.

Optimizing the hard disk access is vital for file servers in networks of more than twenty users. Suppose you want to set up a Linux file server for the home directories of 25 users. If the average user requires 100–150 MB for personal data, a 4 GB partition mounted under /home is probably sufficient. For fifty users, you would need 8 GB. If possible, split /home across two 4 GB hard disks that share the load (and access time).
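The arithmetic behind these estimates can be written out as a small shell calculation. The user count and per-user quota below are the example figures from the text:

```shell
#!/bin/sh
# Example figures from the text: 50 users at up to 150 MB each.
users=50
per_user_mb=150
needed_mb=$((users * per_user_mb))   # raw data volume in MB
echo "${users} users need about ${needed_mb} MB under /home"
```

This yields 7500 MB of raw data; rounded up to leave some headroom, it matches the 8 GB figure quoted above.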


Web browser caches should be stored on local hard disks.

Compute Server

A compute server is generally a powerful machine that carries out extensive calculations in the network. Normally, such a machine is equipped with a large main memory (more than 512 MB of RAM). Fast disk throughput is only needed for the swap partitions. If possible, distribute swap partitions to multiple hard disks.
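One way to make the kernel use several swap partitions in parallel on Linux is to give them the same priority with the pri= option, so swap pages are distributed across the disks in round-robin fashion. The device names in this /etc/fstab sketch are examples; adapt them to your setup:

```
/dev/sda2   swap   swap   pri=1   0 0
/dev/sdb2   swap   swap   pri=1   0 0
```

With equal priorities on partitions located on different disks, swap load is shared between both spindles; see swapon(8) for details.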

3.9.3. Optimization

The hard disks are normally the limiting factor. To avoid this bottleneck, combine the following three possibilities:

  • Distribute the load evenly to multiple disks.

  • Use an optimized file system, such as reiserfs.

  • Equip your file server with a sufficient amount of memory (at least 256 MB).

Parallel Use of Multiple Disks

The total amount of time needed for providing requested data consists of the following elements:

  1. Time elapsed until the request reaches the disk controller.

  2. Time elapsed until this request is sent to the hard disk.

  3. Time elapsed until the hard disk positions its head.

  4. Time elapsed until the media turns to the respective sector.

  5. Time elapsed for the transmission.

The first item depends on the network connection and must be regulated there. Item two is a relatively insignificant period that depends on the hard disk controller itself. Items three and four are the main parts. The positioning time is measured in milliseconds. Compared to the access times of main memory, which are measured in nanoseconds, this represents a factor of about one million. Item four, the rotational latency, depends on the disk rotation speed and usually amounts to several milliseconds. Item five depends on the rotation speed, the number of heads, and the current position of the head (inside or outside).

To optimize the performance, the third item should be improved. For SCSI devices, the disconnect feature comes into play. When this feature is used, the controller sends the command Go to track x, sector y to the connected device (in this case, the hard disk). Now the inactive disk mechanism starts moving. If the disk is smart (if it supports disconnect) and the controller driver also supports this feature, the controller immediately sends the hard disk a disconnect command and the disk is disconnected from the SCSI bus. Now, other SCSI devices can proceed with their transfers. After some time (depending on the strategy or load on the SCSI bus) the connection to the disk is reactivated. In the ideal case, the device will have reached the requested track.

On a multitasking, multiuser system like Linux, these parameters can be optimized effectively. For example, examine the excerpt of the output of the command df in Example 3.1. “Example df Output”.

Example 3.1. Example df Output

Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda5    1.8G  1.6G   201M   89%  /
/dev/sda1     23M  3.9M    17M   18%  /boot
/dev/sdb1    2.9G  2.1G   677M   76%  /usr
/dev/sdc1    1.9G  958M   941M   51%  /usr/lib
shmfs        185M     0   184M    0%  /dev/shm

To demonstrate the advantages, consider what happens if root enters the following in /usr/src:

tar xzf package.tgz -C /usr/lib

This command extracts package.tgz to /usr/lib/package. To do this, the shell runs tar and gzip (both located in /bin on /dev/sda), then package.tgz is read from /usr/src (on /dev/sdb). Finally, the extracted data is written to /usr/lib (on /dev/sdc). Thus, the positioning as well as the reading and writing of the disks' internal buffers can be performed almost concurrently.

This is only one of many examples. As a general rule, if you have several hard disks (with the same speed), /usr and /usr/lib should be placed on separate disks. /usr/lib should have about seventy percent of the capacity of /usr. Due to the frequency of access, / should be placed on the disk containing /usr/lib.

Speed and Main Memory

In Linux, the size of main memory is often more important than the processor speed. One reason, if not the main reason, for this is the ability of Linux to create dynamic buffers containing hard disk data. For this purpose, Linux uses various tricks, such as read ahead (reading of sectors in advance) and delayed write (postponement and bundling of write access). The latter is the reason why you should not simply switch off your Linux machine. Both factors contribute to the fact that the main memory seems to fill up over time and that Linux is so fast. See Section 10.2.6. “The free Command”.
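Because of delayed write, data may still sit in these buffers when the machine is powered off. A clean shutdown flushes them automatically; the sync command does the same by hand:

```shell
# Force all delayed (buffered) writes out to disk. A regular shutdown
# does this for you; never simply cut the power instead.
sync
```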