4. RAID setup

4.1 General setup

For any of the RAID levels, you need a kernel with RAID support and the raidtools package. All of this is included as standard in most GNU/Linux distributions today.

If your system has RAID support, you should have a file called /proc/mdstat. Remember it, that file is your friend. If you do not have that file, maybe your kernel does not have RAID support. See what the file contains by doing a cat /proc/mdstat. It should tell you that you have the right RAID personality (e.g. RAID mode) registered, and that no RAID devices are currently active.
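For example, on a system with RAID support compiled in but no arrays configured yet, the output might look roughly like the sketch below (the exact format and the personalities listed vary with your kernel configuration):

  cat /proc/mdstat
  Personalities : [linear] [raid0] [raid1] [raid5]
  unused devices: <none>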

Create the partitions you want to include in your RAID set.
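If you still need to create such a partition, a minimal fdisk session could look like this sketch (the device name /dev/sdb is only an example; answer the size prompts to suit your layout):

  fdisk /dev/sdb
    n        (create a new partition)
    p        (make it a primary partition)
    1        (partition number)
             (accept the default start, then give the size you want)
    w        (write the partition table and exit)

If you plan to use autodetection later, also set the partition type to ``fd'', as described in the Autodetection section.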

Now, let's go mode-specific.

4.2 Linear mode

Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.

Set up the /etc/raidtab file to describe your setup. I set up a raidtab for two disks in linear mode, and the file looked like this:

raiddev /dev/md0
        raid-level      linear
        nr-raid-disks   2
        chunk-size      32
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
Spare-disks are not supported here. If a disk dies, the array dies with it. There's no information to put on a spare disk.

You're probably wondering why we specify a chunk-size here when linear mode just appends the disks into one large array with no parallelism. Well, you're completely right, it's odd. Just put in some chunk size and don't worry about this any more.

Ok, let's create the array. Run the command

  mkraid /dev/md0

This will initialize your array, write the persistent superblocks, and start the array.

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your fstab and so on.
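For example (the mount point and the choice of ext2 are only illustrative):

  mke2fs /dev/md0
  mkdir /mnt/linear
  mount /dev/md0 /mnt/linear

and a corresponding /etc/fstab line could be:

  /dev/md0     /mnt/linear     ext2    defaults    1 2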

4.3 RAID-0

You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.

Set up the /etc/raidtab file to describe your configuration. An example raidtab looks like:

raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        persistent-superblock 1
        chunk-size     4
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
As in linear mode, spare disks are not supported here either. RAID-0 has no redundancy, so when a disk dies, the array goes with it.

Again, you just run

  mkraid /dev/md0
to initialize the array. This should initialize the superblocks and start the raid device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running.

/dev/md0 is now ready to be formatted, mounted, used and abused.

4.4 RAID-1

You have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have more devices, which you want to keep as stand-by spare disks, and which will automatically become part of the mirror if one of the active devices breaks.

Set up the /etc/raidtab file like this:

raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        chunk-size     4
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
If you have spare disks, you can add them to the end of the device specification like
        device          /dev/sdd5
        spare-disk      0
Remember to set the nr-spare-disks entry correspondingly.
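For clarity, a complete raidtab with one spare disk would then look something like this (the spare device name is just an example):

raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  1
        chunk-size     4
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1
        device          /dev/sdd5
        spare-disk      0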

Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized.

Issue the

  mkraid /dev/md0
command to begin the mirror initialization.

Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and give an estimated time to completion of the reconstruction.

Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.

The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.

Try formatting the device while the reconstruction is running; it will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck.

4.5 RAID-4

Note! I haven't tested this setup myself. The setup below is my best guess, not something I have actually had up running. If you use RAID-4, please write to the author and share your experiences.

You have three or more devices of roughly the same size, one device is significantly faster than the other devices, and you want to combine them all into one larger device, still maintaining some redundancy information. You may also have a number of devices you wish to use as spare disks.

Set up the /etc/raidtab file like this:

raiddev /dev/md0
        raid-level      4
        nr-raid-disks   4
        nr-spare-disks  0
        persistent-superblock 1
        chunk-size      32
        device          /dev/sdb1
        raid-disk       0
        device          /dev/sdc1
        raid-disk       1
        device          /dev/sdd1
        raid-disk       2
        device          /dev/sde1
        raid-disk       3
If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications;
        device         /dev/sdf1
        spare-disk     0
as usual.

Your array can be initialized with the

   mkraid /dev/md0
command as usual.

You should see the section on special options for mke2fs before formatting the device.

4.6 RAID-5

You have three or more devices of roughly the same size, you want to combine them into a larger device, but still maintain a degree of redundancy for data safety. You may also have a number of devices to use as spare disks, which will not take part in the array until another device fails.

If you use N devices where the smallest has size S, the size of the entire array will be (N-1)*S. This ``missing'' space is used for parity (redundancy) information. Thus, if any single disk fails, all data stays intact; but if two disks fail, all data is lost.

Set up the /etc/raidtab file like this:

raiddev /dev/md0
        raid-level      5
        nr-raid-disks   7
        nr-spare-disks  0
        persistent-superblock 1
        parity-algorithm        left-symmetric
        chunk-size      32
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb1
        raid-disk       1
        device          /dev/sdc1
        raid-disk       2
        device          /dev/sdd1
        raid-disk       3
        device          /dev/sde1
        raid-disk       4
        device          /dev/sdf1
        raid-disk       5
        device          /dev/sdg1
        raid-disk       6
If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications;
        device         /dev/sdh1
        spare-disk     0
And so on.

A chunk size of 32 kB is a good default for many general purpose filesystems of this size. The array described by the raidtab above consists of seven 6 GB disks, giving (N-1)*S = (7-1)*6 GB = 36 GB of usable space. It holds an ext2 filesystem with a 4 kB block size. You could go higher with both array chunk-size and filesystem block-size if your filesystem is either much larger or just holds very large files.

Ok, enough talking. You set up the raidtab, so let's see if it works. Run the

  mkraid /dev/md0
command, and see what happens. Hopefully your disks start working like mad, as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on.

If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.

See the section on special options for mke2fs before formatting the array.

Ok, now when you have your RAID device running, you can always stop it or re-start it using the

  raidstop /dev/md0
or
  raidstart /dev/md0
commands.

Instead of putting these into init-files and rebooting a zillion times to make that work, read on, and get autodetection running.

4.7 The Persistent Superblock

Back in ``The Good Old Days'' (TM), the raidtools would read your /etc/raidtab file, and then initialize the array. However, this would require that the filesystem on which /etc/raidtab resided was mounted. This is unfortunate if you want to boot on a RAID.

Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab file as usual, but would have to be mounted from the init-scripts.

The persistent superblocks solve these problems. When an array is initialized with the persistent-superblock option in the /etc/raidtab file, a special superblock is written in the beginning of all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.

You should however still maintain a consistent /etc/raidtab file, since you may need this file for later reconstruction of the array.

The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot. This is described in the Autodetection section.

4.8 Chunk sizes

The chunk-size deserves an explanation. You can never write completely in parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk; in fact, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest ``atomic'' amount of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB will cause the first and the third 4 kB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk size.

Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.

For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.

The argument to the chunk-size option in /etc/raidtab specifies the chunk-size in kilobytes. So ``4'' means ``4 kB''.

RAID-0

Data is written ``almost'' in parallel to the disks in the array. Actually, chunk-size bytes are written to each disk, serially.

If you specify a 4 kB chunk size, and write 16 kB to an array of three disks, the RAID system will write 4 kB to disks 0, 1 and 2, in parallel, then the remaining 4 kB to disk 0.

A 32 kB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.

RAID-0 with ext2

The following tip was contributed by michael@freenet-ag.de:

There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID0, if all block groups happen to begin on the same disk. Example:

With 4k stripe size and 4k block size, each block occupies one stripe. With two disks, the stripe-#disk-product is 2*4k=8k. The default block group size is 32768 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance. Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), so you can not avoid the problem by adjusting the block group size with the -g option of mkfs(8).

If you add a disk, the stripe-#disk-product is 12, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1. The load caused by disk activity at the block group beginnings spreads over all disks.

In case you can not add a disk, try a stripe size of 32k. The stripe-#disk-product is 64k. Since you can change the block group size in steps of 8 blocks (32k), using a block group size of 32760 solves the problem.
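As a sketch, with a 32k stripe size and a 4 kB block size, that workaround would be applied at filesystem creation time like this:

  mke2fs -b 4096 -g 32760 /dev/md0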

Additionally, the block group boundaries should fall on stripe boundaries. That is no problem in the examples above, but it could easily happen with larger stripe sizes.

RAID-1

For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, the RAID layer has complete freedom in choosing from which disk information is read - this is used by the RAID code to improve average seek times by picking the disk best suited for any given read operation.

RAID-4

When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well.

The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.

RAID-5

On RAID-5, the chunk size has the same meaning for reads as for RAID-0. Writing on RAID-5 is a little more complicated: when a chunk is written on a RAID-5 array, the corresponding parity chunk must be updated as well. Updating a parity chunk requires either that the old data chunk and the old parity chunk are read back so the new parity can be computed from the difference, or that all the other data chunks in the stripe are read so the parity can be recalculated from scratch.

The RAID code will pick the easiest way to update each parity chunk as the write progresses. Naturally, if your server has lots of memory and/or if the writes are nice and linear, updating the parity chunks will only impose the overhead of one extra write going over the bus (just like RAID-1). The parity calculation itself is extremely efficient, so while it does of course load the main CPU of the system, this impact is negligible. If the writes are small and scattered all over the array, the RAID layer will almost always need to read in all the untouched chunks from each stripe that is written to, in order to calculate the parity chunk. This will impose extra bus-overhead and latency due to extra reads.

A reasonable chunk-size for RAID-5 is 128 kB, but as always, you may want to experiment with this.

Also see the section on special options for mke2fs. This affects RAID-5 performance.

4.9 Options for mke2fs

There is a special option available when formatting RAID-4 or RAID-5 devices with mke2fs. The -R stride=nn option allows mke2fs to place the ext2-specific data structures more intelligently on the RAID device.

If the chunk-size is 32 kB, it means that 32 kB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with a 4 kB block-size, we see that there will be eight filesystem blocks in one array chunk. We can pass this information on to the mke2fs utility when creating the filesystem:

  mke2fs -b 4096 -R stride=8 /dev/md0

RAID-{4,5} performance is severely influenced by this option. I am unsure how the stride option will affect other RAID levels. If anyone has information on this, please send it in my direction.

The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.

4.10 Autodetection

Autodetection allows the RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done.

This requires several things:

  1. You need autodetection support in the kernel. Check this
  2. You must have created the RAID devices using persistent-superblock
  3. The partition-types of the devices used in the RAID must be set to 0xFD (use fdisk and set the type to ``fd'')

NOTE: Be sure that your RAID is NOT RUNNING before changing the partition types. Use raidstop /dev/md0 to stop the device.
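A sketch of doing this for one of the disks from the earlier examples (the partition number is whatever your layout uses):

  raidstop /dev/md0
  fdisk /dev/sdb
    t        (change a partition's type)
    6        (the partition used in the RAID, /dev/sdb6 in the earlier examples)
    fd       (Linux raid autodetect)
    w        (write the table and exit)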

If you set up 1, 2 and 3 from above, autodetection should be set up. Try rebooting. When the system comes up, cat'ing /proc/mdstat should tell you that your RAID is running.

During boot, you could see messages similar to these:

 Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512
  bytes. Sectors= 12657717 [6180 MB] [6.2 GB]
 Oct 22 00:51:59 malthe kernel: Partition check:
 Oct 22 00:51:59 malthe kernel:  sda: sda1 sda2 sda3 sda4
 Oct 22 00:51:59 malthe kernel:  sdb: sdb1 sdb2
 Oct 22 00:51:59 malthe kernel:  sdc: sdc1 sdc2
 Oct 22 00:51:59 malthe kernel:  sdd: sdd1 sdd2
 Oct 22 00:51:59 malthe kernel:  sde: sde1 sde2
 Oct 22 00:51:59 malthe kernel:  sdf: sdf1 sdf2
 Oct 22 00:51:59 malthe kernel:  sdg: sdg1 sdg2
 Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays
 Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sdb1,1>
 Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sdc1,2>
 Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sdd1,3>
 Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872
 Oct 22 00:51:59 malthe kernel: bind<sde1,4>
 Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376
 Oct 22 00:51:59 malthe kernel: bind<sdf1,5>
 Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376
 Oct 22 00:51:59 malthe kernel: bind<sdg1,6>
 Oct 22 00:51:59 malthe kernel: autorunning md0
 Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1>
 Oct 22 00:51:59 malthe kernel: now!
 Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean --
  starting background reconstruction 
This is output from the autodetection of a RAID-5 array that was not cleanly shut down (e.g. the machine crashed). Reconstruction is automatically initiated. Mounting this device is perfectly safe, since reconstruction is transparent and all data are consistent (it's only the parity information that is inconsistent - but that isn't needed until a device fails).

Autostarted devices are also automatically stopped at shutdown. Don't worry about init scripts. Just use the /dev/md devices as any other /dev/sd or /dev/hd devices.

Yes, it really is that easy.

You may want to look in your init-scripts for any raidstart/raidstop commands. These are often found in the standard RedHat init scripts. They are used for old-style RAID, and have no use in new-style RAID with autodetection. Just remove the lines, and everything will be just fine.

4.11 Booting on RAID

There are several ways to set up a system that mounts its root filesystem on a RAID device. Some distributions allow for RAID setup in the installation process, and this is by far the easiest way to get a nicely set up RAID system.

Newer LILO distributions can handle RAID-1 devices, and thus the kernel can be loaded at boot-time from a RAID device. LILO will correctly write boot-records on all disks in the array, to allow booting even if the primary disk fails.

The author does not yet know of any easy method for making the Grub boot-loader write the boot-records on all disks of a RAID-1. Please share your wisdom if you know how to do this.

Another way of ensuring that your system can always boot is to create a boot floppy when all the setup is done. If the disk on which the /boot filesystem resides dies, you can always boot from the floppy. On RedHat and RedHat-derived systems, this can be accomplished with the mkbootdisk command.
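For example, on a RedHat system (the kernel version argument is simply whatever kernel your system runs):

  mkbootdisk --device /dev/fd0 `uname -r`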

4.12 Root filesystem on RAID

In order to have a system booting on RAID, the root filesystem (/) must be mounted on a RAID device. Two methods for achieving this are supplied below. The methods below assume that you install on a normal partition, and then - when the installation is complete - move the contents of your non-RAID root filesystem onto a new RAID device. Please note that this is no longer needed in general, as most newer GNU/Linux distributions support installation on RAID devices (and creation of the RAID devices during the installation process). However, you may still want to use the methods below if you are migrating an existing system to RAID.

Method 1

This method assumes you have a spare disk you can install the system on, which is not part of the RAID you will be configuring.

If you're doing this with IDE disks, be sure to tell your BIOS that all disks are ``auto-detect'' types, so that the BIOS will allow your machine to boot even when a disk is missing.

Method 2

This method requires that your kernel and raidtools understand the failed-disk directive in the /etc/raidtab file - if you are working on a really old system this may not be the case, and you will need to upgrade your tools and/or kernel first.

You can only use this method on RAID levels 1 and above, as the method uses an array in "degraded mode", which in turn is only possible if the RAID level has redundancy. The idea is to install a system on a disk which is purposely marked as failed in the RAID, then copy the system to the RAID, which will be running in degraded mode, and finally make the RAID use the no-longer-needed ``install-disk'', zapping the old installation but letting the RAID run in non-degraded mode.
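As a sketch only (device names are examples, not a recipe): a raidtab for such a degraded RAID-1, with the installation disk purposely marked as failed, could look like this:

raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        persistent-superblock 1
        chunk-size      4
        device          /dev/sdb1
        raid-disk       0
        device          /dev/sda1
        failed-disk     1

Once the system has been copied onto the degraded array and boots from it, the old installation disk can be added to the array (for example with raidhotadd) after changing the failed-disk line to a raid-disk line.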

4.13 Making the system boot on RAID

For the kernel to be able to mount the root filesystem, all support for the device on which the root filesystem resides must be present in the kernel. Therefore, in order to mount the root filesystem on a RAID device, the kernel must have RAID support.

The normal way of ensuring that the kernel can see the RAID device is to simply compile a kernel with all necessary RAID support compiled in. Make sure that you compile the RAID support into the kernel, and not as loadable modules. The kernel cannot load a module (from the root filesystem) before the root filesystem is mounted.

However, since RedHat-6.0 ships with a kernel that has new-style RAID support as modules, I here describe how one can use the standard RedHat-6.0 kernel and still have the system boot on RAID.

Booting with RAID as module

You will have to instruct LILO to use a RAM-disk in order to achieve this. Use the mkinitrd command to create a ramdisk containing all kernel modules needed to mount the root partition. This can be done as:

 mkinitrd --with=<module> <ramdisk name> <kernel>
For example:
 mkinitrd --preload raid5 --with=raid5 raid-ramdisk 2.2.5-22

This will ensure that the specified RAID module is present at boot-time, for the kernel to use when mounting the root device.
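You then have to point LILO at that ramdisk with an initrd line. A sketch of the relevant lilo.conf image section (paths and label are assumptions; the ramdisk is assumed to have been copied to /boot, and the kernel version matches the mkinitrd example above):

image=/boot/vmlinuz-2.2.5-22
    label=linux
    initrd=/boot/raid-ramdisk
    read-only
    root=/dev/md0

Remember to re-run lilo after editing lilo.conf.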

4.14 Converting a non-RAID RedHat System to run on Software RAID

This section was written and contributed by Mark Price, IBM. It was formatted by the HOWTO author. All remaining text in this section is the work of Mark Price.

Notice: the following information is provided "AS IS" with no representation or warranty of any kind either express or implied. You may use it freely at your own risk, and no one else will be liable for any damages arising out of such usage.

Introduction

This technote details how to convert a Linux system with non-RAID devices to run with a Software RAID configuration.

Scope

This scenario was tested with Redhat 7.1, but should be applicable to any release which supports Software RAID (md) devices.

Pre-conversion example system

The test system contains two SCSI disks, sda and sdb, both of which are the same physical size. As part of the test setup, I configured both disks to have the same partition layout, using fdisk to ensure the number of blocks for each partition was identical.

DEVICE      MOUNTPOINT  SIZE        DEVICE      MOUNTPOINT  SIZE
/dev/sda1   /           2048MB      /dev/sdb1               2048MB
/dev/sda2   /boot       80MB        /dev/sdb2               80MB
/dev/sda3   /var/       100MB       /dev/sdb3               100MB
/dev/sda4   SWAP        1024MB      /dev/sdb4   SWAP        1024MB
In our basic example, we are going to set up a simple RAID-1 Mirror, which requires only two physical disks.

Step-1 - boot rescue cd/floppy

The RedHat installation CD provides a rescue mode which boots into Linux from the CD and mounts any filesystems it can find on your disks.

At the lilo prompt type

    lilo: linux rescue ide=nodma

With the setup described above, the installer may ask you which disk your root filesystem is on, either sda or sdb. Select sda.

The installer will mount your filesystems in the following way.

DEVICE      MOUNTPOINT  TEMPORARY MOUNT POINT
/dev/sda1   /           /mnt/sysimage
/dev/sda2   /boot       /mnt/sysimage/boot
/dev/sda3   /var        /mnt/sysimage/var
/dev/sda6   /home       /mnt/sysimage/home

Note: - Please bear in mind other distributions may mount your filesystems on different mount points, or may require you to mount them by hand.

Step-2 - create a raidtab file

Create the file /mnt/sysimage/etc/raidtab (or wherever your real /etc filesystem has been mounted).

For our test system, the raidtab file would look like this.

raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              4
    persistent-superblock   1
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              4
    persistent-superblock   1
    device                  /dev/sda2
    raid-disk               0
    device                  /dev/sdb2
    raid-disk               1

raiddev /dev/md2
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              4
    persistent-superblock   1
    device                  /dev/sda3
    raid-disk               0
    device                  /dev/sdb3
    raid-disk               1

Note: - It is important that the devices are in the correct order, i.e. that /dev/sda1 is raid-disk 0 and not raid-disk 1. This instructs the md driver to sync from /dev/sda1; if it were the other way around, it would sync from /dev/sdb1, which would destroy your filesystem.

Now copy the raidtab file from your real root filesystem to the current root filesystem.

(rescue)# cp /mnt/sysimage/etc/raidtab /etc/raidtab

Step-3 - create the md devices

There are two ways to do this: copy the device files from /mnt/sysimage/dev, or use mknod to create them. The md device is a block device with major number 9.

(rescue)# mknod /dev/md0 b 9 0
(rescue)# mknod /dev/md1 b 9 1
(rescue)# mknod /dev/md2 b 9 2

Step-4 - unmount filesystems

In order to start the raid devices, and sync the drives, it is necessary to unmount all the temporary filesystems.

(rescue)# umount /mnt/sysimage/var
(rescue)# umount /mnt/sysimage/boot
(rescue)# umount /mnt/sysimage/proc
(rescue)# umount /mnt/sysimage

Step-5 - start raid devices

Because there are filesystems on /dev/sda1, /dev/sda2 and /dev/sda3 it is necessary to force the start of the raid device.

(rescue)# mkraid --really-force /dev/md2

You can check the completion progress by cat'ing the /proc/mdstat file. It shows you the status of the raid device and the percentage left to sync.

Continue with /boot and /

(rescue)# mkraid --really-force /dev/md1
(rescue)# mkraid --really-force /dev/md0

The md driver syncs one device at a time.

Step-6 - remount filesystems

Mount the newly synced filesystems back into the /mnt/sysimage mount points.

(rescue)# mount /dev/md0 /mnt/sysimage
(rescue)# mount /dev/md1 /mnt/sysimage/boot
(rescue)# mount /dev/md2 /mnt/sysimage/var

Step-7 - change root

You now need to change your current root directory to your real root file system.

(rescue)# chroot /mnt/sysimage

Step-8 - edit config files

You need to configure lilo and /etc/fstab appropriately to boot from and mount the md devices.

Note: - The boot device MUST be a non-raided device. The root device is your new md0 device, e.g.:

boot=/dev/sda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
message=/boot/message
linear
default=linux

image=/boot/vmlinuz
    label=linux
    read-only
    root=/dev/md0

Alter /etc/fstab

/dev/md0               /                       ext3    defaults        1 1
/dev/md1               /boot                   ext3    defaults        1 2
/dev/md2               /var                    ext3    defaults        1 2
/dev/sda4              swap                    swap    defaults        0 0

Step-9 - run LILO

With the /etc/lilo.conf edited to reflect the new root=/dev/md0 and with /dev/md1 mounted as /boot, we can now run /sbin/lilo -v on the chrooted filesystem.
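From inside the chroot, this is simply:

(rescue)# /sbin/lilo -v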

Step-10 - change partition types

The partition type of all the partitions on ALL drives used by the md driver must be changed to type 0xFD.

Use fdisk to change the partition type, using option 't'.

(rescue)# fdisk /dev/sda
(rescue)# fdisk /dev/sdb

Use the 'w' option after changing all the required partitions to save the partition table to disk.

Step-11 - resize filesystem

When we created the raid devices, the usable space on each physical partition became slightly smaller because the RAID superblock is stored at the end of the partition. If you reboot the system now, the reboot will fail with an error indicating that the superblock is corrupt.

Resize the filesystems prior to the reboot: ensure that all md-based filesystems except root are unmounted, and remount root read-only.

(rescue)# mount / -o remount,ro

You will be required to fsck each of the md devices. This is the reason for remounting root read-only. The -f flag is required to force fsck to check a clean filesystem.

(rescue)# e2fsck -f /dev/md0

This will generate the same error about inconsistent sizes and a possibly corrupted superblock. Say N to 'Abort?'.

(rescue)# resize2fs /dev/md0

Repeat for all /dev/md devices.
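With the example layout used here (and with /boot and /var still unmounted), that means:

(rescue)# e2fsck -f /dev/md1
(rescue)# resize2fs /dev/md1
(rescue)# e2fsck -f /dev/md2
(rescue)# resize2fs /dev/md2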

Step-12 - checklist

The next step is to reboot the system. Prior to doing this, run through the checklist below and ensure all tasks have been completed.

Step-13 - reboot

You can now safely reboot the system; when it comes up, it will auto-detect the md devices (based on the partition types).

Your root filesystem will now be mirrored.

4.15 Pitfalls

Never NEVER never re-partition disks that are part of a running RAID. If you must alter the partition table on a disk which is a part of a RAID, stop the array first, then repartition.

It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus can sustain 10 MB/s, which is less than many disks can do alone today. Putting six such disks on the bus will of course not give you the expected performance boost. It is becoming equally easy to saturate the PCI bus - remember, a normal 32-bit 33 MHz PCI bus has a theoretical maximum bandwidth of around 133 MB/sec; considering command overhead etc., you will see a somewhat lower real-world transfer rate. Some disks today have a throughput in excess of 30 MB/sec, so just four of those disks will actually max out your PCI bus! When designing high-performance RAID systems, be sure to take the whole I/O path into consideration - there are boards with more PCI busses, with 64-bit and 66 MHz busses, and with PCI-X.

More SCSI controllers will only give you extra performance if the SCSI busses are nearly maxed out by the disks on them. You will not see a performance improvement from using two 2940s with two old SCSI disks, instead of just running the two disks on one controller.

If you forget the persistent-superblock option, your array may not start up willingly after it has been stopped. Just re-create the array with the option set correctly in the raidtab. Please note that this will destroy the information on the array!

If a RAID-5 fails to reconstruct after a disk was removed and re-inserted, this may be because of the ordering of the devices in the raidtab. Try moving the first ``device ...'' and ``raid-disk ...'' pair to the bottom of the array description in the raidtab file.

