This document was originally written for Slackware 8.0. It has been recently updated to reflect changes in Slackware 9.1 and 10.0 RC1.
Setting up a RAID system with Slackware is not extremely difficult once you have a good idea of what you are doing. It can be a slightly daunting task the first few times, especially if you aren't comfortable with the intricacies of LILO and working on the command line. The existing documentation for configuring software RAID that I found on the web and Usenet was either from 1998 or specific to one distribution, such as Red Hat. The most recent document I could find was dated January 2000, which was already quite old at the time of the original writing in late September 2001. So why a Slackware-specific RAID document? Other than the fact that Slackware is the only distribution I use, it is such a 'generic' distribution that almost all of what is presented here can be easily ported to other distributions.
The goal of this document is to configure a system with software RAID 1 (mirroring) arrays under recent releases of Slackware. Slackware has used the 2.4 series of Linux kernels since Slackware 8.0. This document depends on a 2.4 kernel, as it has all of the necessary RAID support built in. If you are working with an older 2.2 kernel, either by choice in Slackware 8 or from an older release, you will need to compile a custom kernel with all of the appropriate RAID patches.
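If you are unsure whether the kernel you are running already has this support, the presence of the /proc/mdstat file is a quick indicator that the md driver is available; a minimal check:

# cat /proc/mdstat

If the file exists, the md driver is in the kernel, and the arrays configured later in this document will also show up there.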
Some assumptions:

- You are comfortable partitioning disks with the fdisk command, or a reasonable facsimile.
- You have a recent version of LILO installed (newer than 22.0); older versions do not support the raid-extra-boot option used later and may not be able to boot from an md device at all.
If you are using IDE disks, configure your system so that each of the disks resides on separate
IDE busses. Not only will this increase your redundancy, it will also do
wonders for your performance. Also, ensure that your BIOS is configured
to boot from the CD-ROM drive. In systems with two IDE busses I usually
add the CD-ROM to the secondary IDE chain. When the system comes up to
the Boot:
prompt, boot an appropriate kernel for your hardware (e.g. a kernel with drivers for your SCSI adapter, if necessary).
You should now see the system come up as it normally would. So far, nothing is different from a regular
install.
Before running setup, we must create partitions on the disks. It is sufficient at this time to only create partitions on the one disk to which we will be installing the operating system. It is important to note that this will be the secondary disk, NOT the primary disk. Once at the command prompt, type the following (for example) to partition the second disk:
# fdisk /dev/hdc

There is no need for a special partitioning scheme; the disk can be set up as it would be for a normal system. When building a general purpose server, I like to create partitions for /, swap, /usr/local, /var, and /home. Because disk space is so cheap I usually create generous partitions for / and swap. Using a 20GB disk, here is the partition table for the disk used in this example:
Disk /dev/hdc: 255 heads, 63 sectors, 2434 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdc1   *         1       262   2104483+  83  Linux native
/dev/hdc2           263       524   2104515   82  Linux swap
/dev/hdc3           525      1314   6345675   83  Linux native
/dev/hdc4          1315      2434   8996400    5  Extended
/dev/hdc5          1315      1837   4200966   83  Linux native
/dev/hdc6          1838      2434   4795371   83  Linux native

Note: /dev/hdc1 has the bootable flag set. This is important; without it the system will not boot properly.
Once an appropriate partition table has been created, exit
fdisk
(make sure to write your table to the disk!). It is always a good idea
to reboot after creating partitions and before entering setup, especially if you will be using
reiserfs. Once the system is back up, enter setup and proceed as with any other install,
with the following notes and caveats:
- A bare minimum software installation is sufficient for now; additional packages can be installed once the arrays are complete.
- When configuring LILO, install it to the MBR of /dev/hdc and specify /dev/hdc1 as the partition to boot (it should have an * by it, representing the bootable flag from fdisk).
Once setup is complete, reboot the system (remove the CD-ROM from the drive). If all went well, the system
should come up using /dev/hdc1
as the root partition. If the
system does not boot on its own, boot from CD or a boot floppy and
double-check the LILO configuration. Also, double-check that the system
BIOS is setup to boot from the second IDE disk. This can be somewhat
confusing, especially if the two disks are reported exactly the same way
in the BIOS. Another thing to try at this point if you are having trouble booting is to install
LILO on the MBR of the first disk and specify the root partition on the second disk. This is OK for now, as we
will be re-installing LILO later. This trick worked well for me with a particular SCSI card which would not let
me specify the boot device.
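In lilo.conf terms, that workaround amounts to something like the following sketch (using this example's device names; it is not the configuration we will end up with):

boot = /dev/hda
root = /dev/hdc1
image = /vmlinuz
  label = Linux
  read-only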
The next step is to properly partition the primary disk, /dev/hda. For optimal use of drive space, both
drives should be partitioned exactly the same. For quick reference, the fdisk -l /dev/hdc command can be
used to print the partition table of the secondary disk that was set up during the install.
The one significant change to make when partitioning the primary disk is to set the partition type of each
partition that will be mirrored to Linux raid autodetect, which is type fd (the swap partition keeps its
normal type). Here is what the properly partitioned primary disk looks like in our example:
Disk /dev/hda: 255 heads, 63 sectors, 2434 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1       262   2104483+  fd  Linux raid autodetect
/dev/hda2           263       524   2104515   82  Linux swap
/dev/hda3           525      1314   6345675   fd  Linux raid autodetect
/dev/hda4          1315      2434   8996400    5  Extended
/dev/hda5          1315      1837   4200966   fd  Linux raid autodetect
/dev/hda6          1838      2434   4795371   fd  Linux raid autodetect
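The type changes themselves are made from within fdisk using its t command; a minimal sketch of the keystrokes for one partition (repeat for each partition that will be mirrored, then write the table):

# fdisk /dev/hda
  (at the fdisk prompt)
  t     change a partition's system id
  1     number of the partition to change
  fd    hex code for Linux raid autodetect
  w     write the table and exit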
The /etc/raidtab File
Once the primary disk has been partitioned it is now necessary to create
the /etc/raidtab
file, which is used by the md
device driver to properly configure the various arrays. Create this file
using your favorite text editor to look like the following example:
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        failed-disk             1
        chunk-size              32

raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda3
        raid-disk               0
        device                  /dev/hdc3
        failed-disk             1
        chunk-size              32

raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda5
        raid-disk               0
        device                  /dev/hdc5
        failed-disk             1
        chunk-size              32

raiddev /dev/md3
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdc6
        failed-disk             1
        chunk-size              32

For each pair of matching partitions on the physical disks there should be a raiddev block in the file corresponding to an md device. Alter the file so that it matches the partitions created on your disks.
Note here the use of the failed-disk
directive for the
secondary disk. This ensures that the md
driver will not
alter the secondary disk (the current operating system disk) at this time.
For more information on the directives used in this file, see the raidtab(5) man page.
Once the /etc/raidtab
file has been built we can now make the
actual RAID devices. Issue the mkraid
command for each RAID
device specified in the raidtab
. For example,
# mkraid /dev/md0

The progress of the device creation can be watched through the
/proc/mdstat
file, which is an extremely useful tool to
determine what is going on with RAID devices on the system. Looking at
/proc/mdstat
should show each of the devices being
constructed. Here is an example of what /proc/mdstat
looks
like with only one half of the array present:
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[0]
      2104384 blocks [2/1] [U_]
md1 : active raid1 hda3[0]
      6345600 blocks [2/1] [U_]
md2 : active raid1 hda5[0]
      4200896 blocks [2/1] [U_]
md3 : active raid1 hda6[0]
      4795264 blocks [2/1] [U_]
unused devices: <none>
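To follow the progress continuously instead of re-reading the file by hand, the watch utility from the procps package can be used; for example:

# watch -n 1 cat /proc/mdstat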
Once the md
devices have been created through
mkraid
, they must be further setup by having filesystems
created on them. At this point, the md
devices are
treated just as any other disk device, so file system creation is
done using the standard mkreiserfs
command. For example:
# mkreiserfs /dev/md0

The standard options to
mkreiserfs
can be used here to specify
the block size of the new filesystem, or any other tunable parameter.
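Each md device needs its own filesystem, so in this example the command is simply repeated for the remaining arrays:

# mkreiserfs /dev/md1
# mkreiserfs /dev/md2
# mkreiserfs /dev/md3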
Once all of the RAID devices have filesystems on them, it is necessary to
mount them and copy all of the existing files on the secondary disk to the
md
devices. There are many ways to do this, some easier and
some harder. The way presented here is neither elegant nor quick, but it
should be easy to understand and reproduce for all skill levels.
First, mount the new root partition on /mnt:
# mount /dev/md0 /mnt

Next, create directories on the newly mounted filesystem to be used as mount points for all of our other filesystems. Change these to match the filesystems that were created on your disks.
# mkdir -p /mnt/usr/local
# mkdir -p /mnt/var
# mkdir -p /mnt/home

Now, mount the remaining file systems on those new mount points:
# mount /dev/md1 /mnt/usr/local
# mount /dev/md2 /mnt/var
# mount /dev/md3 /mnt/home

Now, copy the contents of the existing disk to the newly mounted filesystems and create any special directories:
# cp -a /bin /mnt
# cp -a /boot /mnt
# cp -a /dev /mnt
# cp -a /etc /mnt
# cp -a /home /mnt
# cp -a /lib /mnt
# cp -a /root /mnt
# cp -a /sbin /mnt
# cp -a /tmp /mnt
# cp -a /usr /mnt
# cp -a /var /mnt
# mkdir -p /mnt/mnt
# mkdir -p /mnt/proc
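For reference, one of the alternative approaches alluded to above is a single tar pipeline over the same directories; this is only a sketch and is not used in the rest of this procedure:

# (cd / && tar cf - bin boot dev etc home lib root sbin tmp usr var) | (cd /mnt && tar xpf -)
# mkdir -p /mnt/mnt /mnt/proc

Either way, the end result is the same set of files on the new arrays.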
This should leave you with an exact duplicate of the running OS on the new
RAID devices. A few final changes need to be made at this point. The
/etc/fstab
file must be modified so that the system will
mount the new md
devices at boot time. Make sure that the
fstab
being edited is the new copy that resides in
/mnt/etc
, not the old copy on the second disk! Here is our
example fstab
:
/dev/hda2        swap        swap       defaults         0   0
/dev/md0         /           reiserfs   defaults         1   1
/dev/md1         /usr/local  reiserfs   defaults         1   1
/dev/md2         /var        reiserfs   defaults         1   1
/dev/md3         /home       reiserfs   defaults         1   1
none             /dev/pts    devpts     gid=5,mode=620   0   0
none             /proc       proc       defaults         0   0
The final step before booting off of our new RAID arrays is to reconfigure
LILO. This is where having the most recent version of LILO mentioned
above comes in handy. Edit the /mnt/etc/lilo.conf
file so
that the boot
and root
directives point to the
new /dev/md0
device. Earlier versions of LILO will not
support booting from an md
device. Note that the raid-extra-boot
option is only
supported in versions of LILO greater than 22.0. Here is what our
lilo.conf
file looks like after the changes:
# LILO configuration file
# generated by 'liloconfig'
#
# Start LILO global section
boot = /dev/md0
raid-extra-boot = mbr
#compact        # faster, but won't work on all systems.
# delay = 5
# Normal VGA console
vga = normal
# ramdisk = 0     # paranoia setting
# End LILO global section
# Linux bootable partition config begins
image = /vmlinuz
  root = /dev/md0
  label = Linux
  read-only # Non-UMSDOS filesystems should be mounted read-only for checking
# Linux bootable partition config ends

In order for these changes to take effect, the
lilo
command
must be run. We need to ensure that lilo
knows we want to
use the new config file and alter the boot records on
/dev/md0
, not the currently mounted root, which is
/dev/hdc1
. Using the -r
flag will achieve this,
as shown below:
# lilo -r /mnt

This instructs
lilo
to change its root directory to
/mnt
before doing anything else.
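If you would like more feedback while the boot records are written, lilo's verbose flag can be added as well; for example:

# lilo -v -r /mnt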
The system can now be safely rebooted. As the system reboots, ensure that
the BIOS is now configured to boot the primary IDE disk. If everything
was successful so far the system should boot from the RAID devices. A
quick check with df
should show something similar to the
following:
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/md0               2071288    566236   1399836  29% /
/dev/md1               6245968    185484   5743204   4% /usr/local
/dev/md2               4134832      3220   3921568   1% /var
/dev/md3               4719868     38036   4442072   1% /home
We have now successfully booted the system using our newly created RAID devices. Many will notice, though, that we are still only running on one half of the RAID 1 mirrors. Completing the mirrors is a simple process compared to the previous task.
In order to add the partitions on the secondary disk to the mirrors, the partition types must be properly
set. Using fdisk, change the type of each partition on /dev/hdc that will join a mirror to Linux raid
autodetect, type fd, just as was done on /dev/hda. Again, be sure to write the new partition table to the
disk.
Before beginning this final step, ensure that there is no data on the second disk that you wish to keep, as adding these devices to the mirror will wipe out the disk contents. If you have correctly followed these steps everything that was on the disk was already copied to the first half of the mirror.
Using the raidhotadd
command, we are now going to complete
our RAID 1 mirrors. As each device is added to the array the
md
driver will begin "reconstruction", which in the case of
RAID 1 is duplicating the contents of the first disk onto the second. The
reconstruction process can be monitored through the
/proc/mdstat
file.
# raidhotadd /dev/md0 /dev/hdc1
# raidhotadd /dev/md1 /dev/hdc3
# raidhotadd /dev/md2 /dev/hdc5
# raidhotadd /dev/md3 /dev/hdc6

Once reconstruction of all of the devices is complete, the
/proc/mdstat
file will look like this:
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      2104384 blocks [2/2] [UU]
md1 : active raid1 hdc3[1] hda3[0]
      6345600 blocks [2/2] [UU]
md2 : active raid1 hdc5[1] hda5[0]
      4200896 blocks [2/2] [UU]
md3 : active raid1 hdc6[1] hda6[0]
      4795264 blocks [2/2] [UU]
unused devices: <none>
Once the mirrors are complete, the /etc/raidtab file can be updated to replace each failed-disk directive with a standard raid-disk directive.
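For example, the /dev/md0 stanza from the raidtab shown earlier would then read as follows (the other stanzas change in the same way):

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        chunk-size              32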
If you chose to do a bare minimum install earlier it is now time to go back and install whatever additional software packages you desire. A final setup step for many people may be upgrading and rebuilding their Linux kernel. When building a new kernel, or rebuilding the existing kernel, it is important to make sure that general RAID support and specific RAID-1 support are built directly into the kernel, and not as modules.
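When configuring such a kernel, the relevant 2.4 options should end up set to y (built in) rather than m (module) in the .config; a sketch of what to look for:

CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y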
The lilo.conf
man page has the following useful description of the raid-extra-boot
option:
This option only has meaning for RAID1 installations. The