Sysop:SWRaidLinux
Not yet finished
Goal
The goal is to migrate an existing system installation without RAID to a RAID 1 (mirroring) setup. For that we need two disks of identical size, or at least the same partition setup on both disks.
Links
Mostly based on: http://wwwhomes.uni-bielefeld.de/schoppa/raid/woody-raid-howto.html
Gentoo notes: http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID and http://www.gentoo.org/doc/de/gentoo-x86-tipsntricks.xml#software-raid
See also: http://www.tldp.org/HOWTO/Software-RAID-HOWTO-1.html
mismatch_cnt
From: http://en.gentoo-wiki.com/wiki/Software_RAID_Install#Abnormal_Causes_of_Inconsistent_Blocks
Inconsistent blocks may occur spontaneously, as disk drives may discover and replace unreadable blocks on their own or as a result of SMART tests. Ideally an error occurs when an attempt is made to read the block, and the software RAID transparently corrects the problem. Due to flaws in the drive, it is also possible for errors not to be reported.
The check command attempts to rewrite unreadable blocks, but it does not correct mismatched blocks. A count of these mismatched blocks is available after the check command runs:
cat /sys/block/mdX/md/mismatch_cnt
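The check itself is started by writing to the same sync_action file that is used for the repair below:
echo check > /sys/block/mdX/md/sync_action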
If the mismatch occurs in free space there is no impact.
Running an fsck may not fix the problem, because fsck may read data from the correct block rather than from the block containing undefined data.
The repair command can be used to place the mismatched blocks into a consistent state:
echo repair > /sys/block/mdX/md/sync_action
You can monitor the progress of the repair with:
watch -n .1 cat /proc/mdstat
For RAID with parity (e.g. RAID-5) the parity block will be reconstructed. For RAID without parity (e.g. RAID-1) a block will be chosen at random as the correct block. Therefore, although running the repair command will make your RAID consistent it will not guarantee your partition is not corrupt.
To ensure the partition is not corrupt, repair the RAID device and then reformat the partition.
HowTo
Description
First we prepare the second disk, then we copy all the data onto it and configure GRUB. Then we reboot the machine from the second disk as a degraded RAID 1 (in which the first disk is marked as failed), add the first disk to the RAID, let it synchronize, and finally reboot with the complete RAID.
Sounds easy, so let's do it...
Preparing the second disk
An example partition setup could look like this:
Mountpoint | hda Partition (first HD)
/          | /dev/hda3
/boot      | /dev/hda1
swap       | /dev/hda2
/data      | /dev/hda5
So you have to set up your second hard disk with cfdisk (for example) like this:
hdb Partition (second HD) | partition type
/dev/hdb3                 | Linux raid autodetect
/dev/hdb1                 | Linux raid autodetect
/dev/hdb2                 | Linux swap
/dev/hdb5                 | Linux raid autodetect
Note: swap is in a way automatically "raided" if more than one swap partition is active.
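If both disks are identical in size, the partition table can also be copied from the first disk instead of partitioning by hand. A minimal sketch, assuming sfdisk is installed (careful: this overwrites hdb's partition table):
sfdisk -d /dev/hda | sfdisk /dev/hdb
Afterwards still change the type of hdb1, hdb3 and hdb5 to "fd" (Linux raid autodetect), e.g. with cfdisk or fdisk.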
Enabling KernelSupport
So that we can access RAID disks, we may have to enable support for them in the kernel.
To find out whether it is already enabled, you can check the running system or the kernel configuration as shown below.
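A quick way to check (the config file locations are assumptions that depend on your setup):
cat /proc/mdstat                             # only exists if the md driver is active
zgrep CONFIG_MD_RAID1 /proc/config.gz        # if the running kernel exposes its config
grep CONFIG_MD_RAID1 /usr/src/linux/.config  # or look at the build configuration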
You find the option in this section of the kernel configuration:
Device Drivers -> Multi-device support (RAID and LVM) -> RAID support
Just compile your preferred RAID level (in our case RAID 1) into the kernel (do not build it as a module, since we also want the / partition on RAID).
Rebuild the kernel and boot with it. See here for how to build your own kernel.
Installing RaidTools
Install the RAID tools (nowadays mostly replaced by mdadm):
Gentoo: emerge sys-fs/mdadm raidtools
Debian: apt-get install mdadm raidtools (not sure whether this is still correct)
Generating Raidtab
Remember, we want something like this:
Mountpoint | RAID Partition | hda Partition (first HD) | hdb Partition (second HD)
/boot      | /dev/md0       | /dev/hda1                | /dev/hdb1
/          | /dev/md1       | /dev/hda3                | /dev/hdb3
/data      | /dev/md2       | /dev/hda5                | /dev/hdb5
So let's generate /etc/raidtab with the following entries:
It's important that the first disk (hda) is still marked as "failed-disk" here!
# example /etc/raidtab
# md0 is the boot array
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        chunk-size              32
        # Spare disks for hot reconstruction
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hdb1
        raid-disk               0
        # this is our old disk, mark it as failed for now
        device                  /dev/hda1
        failed-disk             1

# md1 is the root array
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        chunk-size              32
        # Spare disks for hot reconstruction
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hdb3
        raid-disk               0
        # the old disk is marked as failed as well
        device                  /dev/hda3
        failed-disk             1

# md2 is the /data array
raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        chunk-size              32
        # Spare disks for hot reconstruction
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hdb5
        raid-disk               0
        # the old disk is marked as failed as well
        device                  /dev/hda5
        failed-disk             1
Now we create the RAID devices:
mkraid /dev/md0
mkraid /dev/md1
mkraid /dev/md2
Note: if mkraid reports that /dev/md* does not yet exist, you may first have to run: cd /dev ; MAKEDEV md
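If you use mdadm instead of the raidtools, the same degraded arrays can be created directly; the keyword "missing" takes the place of the failed first disk. A sketch, assuming the partition layout from the table above:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb3 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdb5 missing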
Format Raid
mkfs.ext2 /dev/md0
mkfs.ext3 /dev/md1
mkfs.ext3 /dev/md2
copy the system
So let's copy the system onto the new RAID:
create dir structure:
mkdir /mnt/root
mkdir /mnt/boot
mkdir /mnt/data
mount it:
mount /dev/md0 /mnt/boot
mount /dev/md1 /mnt/root
mount /dev/md2 /mnt/data
copy the non root partitions:
cp -a /boot/* /mnt/boot/
cp -a /data/* /mnt/data/
copy root partition:
cp -a /bin /mnt/root/bin
cp -a /dev /mnt/root/dev
cp -a /etc /mnt/root/etc
cp -a /home /mnt/root/home
cp -a /lib /mnt/root/lib
cp -a /opt /mnt/root/opt
cp -a /root /mnt/root/root
cp -a /sbin /mnt/root/sbin
cp -a /usr /mnt/root/usr
cp -a /var /mnt/root/var
cp -a /*.* /mnt/root/
Depending on your structure, some directories may be missing or there may be additional ones, but the concept should be clear ;)
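Alternatively the root filesystem can be copied in one go. A sketch, assuming rsync is installed; -x keeps rsync on the root filesystem, so mounts like /proc, /sys and the new RAID under /mnt are skipped automatically:
rsync -aHx / /mnt/root/
# if /dev is a mounted tmpfs (udev), copy it separately with cp -a as above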
creating additional dirs:
mkdir /mnt/root/proc
mkdir /mnt/root/boot
mkdir /mnt/root/sys
mkdir /mnt/root/data
changing fstab
change the fstab" in "/mnt/root/etc/" as the following:
/dev/md0    /boot    ext2    noauto,noatime    1 2
/dev/md1    /        ext3    noatime           0 1
/dev/hda2   none     swap    sw                0 0
/dev/hdb2   none     swap    sw                0 0
/dev/md2    /data    ext3    noatime           0 1
changing grub
Add a second entry before the first one in /boot/grub/grub.conf (or menu.lst), exactly like the first except that you change /dev/hda3 to /dev/md1, like this:
title=Gentoo Linux 2.6.16-r9 (RAID)
# Partition where the kernel image (or operating system) is located
root (hd0,0)
kernel /vmlinuz-2.6.16-gentoo-r9 root=/dev/md1

title=Gentoo Linux 2.6.16-r9
# Partition where the kernel image (or operating system) is located
root (hd0,0)
kernel /vmlinuz-2.6.16-gentoo-r9 root=/dev/hda3
Then reboot!
If something fails you can still boot with the second entry and try to fix it.
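Once the machine comes up cleanly from the RAID, the remaining step from the description above is to add the first disk back and let it synchronize. A minimal sketch using mdadm, assuming the partition layout from the tables above (first change the type of the hda partitions to Linux raid autodetect):
mdadm /dev/md0 --add /dev/hda1
mdadm /dev/md1 --add /dev/hda3
mdadm /dev/md2 --add /dev/hda5
cat /proc/mdstat    # watch the resync
With the raidtools the equivalent is raidhotadd; in that case also change the failed-disk entries in /etc/raidtab to raid-disk.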
Hints
For Debian users with modules
Modular RAID on Debian GNU/Linux after move to RAID
Debian users may encounter problems using an initrd to mount their root filesystem from RAID, if they have migrated a standard non-RAID Debian install to root on RAID.
If your system fails to mount the root filesystem on boot (you will see this in a "kernel panic" message), then the problem may be that the initrd filesystem does not have the necessary support to mount the root filesystem from RAID.
Debian seems to produce its initrd.img files on the assumption that the root filesystem to be mounted is the current one. This will usually result in a kernel panic if the root filesystem is moved to the RAID device and you attempt to boot from that device using the same initrd image. The solution is to use the mkinitrd command and specify the proposed new root filesystem. For example, the following commands should create and set up the new initrd on a Debian system:
% mkinitrd -r /dev/md0 -o /boot/initrd.img-2.4.22raid
% mv /initrd.img /initrd.img-nonraid
% ln -s /boot/initrd.img-2.4.22raid /initrd.img
Things to do with mdadm
"rename" a md device
Renaming md1 to md2:
mdadm --stop /dev/md1
mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mdadm will then ask for confirmation, because an array already exists on these devices; answer y and it will resync.
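You can then verify the new array and watch the resync:
mdadm --detail /dev/md2
cat /proc/mdstat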