SMEServer LVMRAID


Lvmraid - copied from the French translation of:

http://wiki.contribs.org/Raid#Upgrading_the_Hard_Drive_Size

On a physical server, install the first new disk and restart your server.

We will use sdb as the name of the disk added at each step, bearing in mind that the disk added in the previous step takes the name sda at the next step.

Adding the first new disk to the RAID array

Once rebooted you may find that the RAID is running in degraded mode:

 cat /proc/mdstat
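For illustration only (your array numbers, device names and block counts will differ), a degraded RAID1 setup typically shows [2/1] [U_] for each array, something like:

 Personalities : [raid1]
 md2 : active raid1 sda2[0]
       38056960 blocks [2/1] [U_]
 md1 : active raid1 sda1[0]
       104320 blocks [2/1] [U_]
 unused devices: <none>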

partition the drive

First of all, check the size of the boot partition:

 df /boot

should return:

 Filesystem  1K-blocks   Used  Available  Use%  Mounted on
 /dev/md1       101018  20802      75000   22%  /boot

Then create the RAID partitions:

 fdisk /dev/sdb

type p to view the partition table: it should be empty if your drive is new; if it is not, you are probably about to destroy a system disk !!!

type n, p, 1, then 1 for the first cylinder and +101018K for the boot partition size (adjust according to your case)

then the second partition: n, p, 2, then fill the rest of the disk with the second partition (press Enter at the prompts to use all remaining space)

set the type of each partition to Linux raid autodetect: type t, then 1, then fd, then t, then 2, then fd

add the boot flag: a then 1

write the partition table and leave fdisk: type w
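Before adding the partitions to the arrays it is worth double-checking the result; both partitions should show Id fd (Linux raid autodetect) and the first one should carry the boot flag:

 fdisk -l /dev/sdb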

Add the disks to the RAID arrays

Looking at the RAID with cat /proc/mdstat you will see that there are two arrays (one for /boot and one for the system), so add the newly created partitions to both arrays (assuming md1 is the /boot array and md2 is the system array, which you can confirm with mdstat):

mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdb2
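If you want more detail than /proc/mdstat gives, mdadm can report the state of each array and should show the newly added partitions rebuilding (assuming md1 and md2 really are your two arrays):

 mdadm --detail /dev/md1
 mdadm --detail /dev/md2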

Now your partitions have been added and (provided the new partitions are at least as large as the old ones) the arrays will synchronize their data.

check the progress with:

 watch -n 3 cat /proc/mdstat
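As an illustration only (percentages, sizes, times and device names will differ), a rebuilding array looks something like this in /proc/mdstat:

 md2 : active raid1 sdb2[2] sda2[0]
       38056960 blocks [2/1] [U_]
       [==>..................]  recovery = 12.6% (4799744/38056960) finish=12.4min speed=44600K/sec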

Wait until everything is synchronized.

grub and MBR

 grub

then, in the grub console (assuming sdb really is the name of your new drive):

device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
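Once both the root and setup commands report success you can leave the grub shell:

 quit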

Install the second new disk

Turn off the server, remove the second old drive, connect the second new drive and restart.

Partition the disk

First check the size of the boot partition

 df /boot

should return:

 Filesystem  1K-blocks   Used  Available  Use%  Mounted on
 /dev/md1       101018  20802      75000   22%  /boot

You must first create the RAID partitions (careful: sdb is not necessarily the name of the newly added disk)

 fdisk /dev/sdb

type n, p, 1, then 1 for the first cylinder and +101018K for the boot partition size (adjust according to your case)

then the second partition: n, p, 2, then fill the rest of the disk with the second partition (press Enter at the prompts to use all remaining space)

set the type of each partition to Linux raid autodetect: type t, then 1, then fd, then t, then 2, then fd

add the boot flag: a then 1

write the partition table and leave fdisk: type w


Alternatively, copy the partition table from the first disk by typing:

 sfdisk -d /dev/sda | sfdisk /dev/sdb
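This copies the whole partition layout, including the raid partition types and the boot flag, from sda to sdb. If you want to confirm that the two tables now match, list them both:

 sfdisk -l /dev/sda
 sfdisk -l /dev/sdb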

grub and MBR

 grub

then, in the grub console (assuming sdb really is the name of your drive):

device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)

Add the disks to the RAID arrays

Looking at the RAID with cat /proc/mdstat you will see that there are two arrays (one for /boot and one for the system), so add the newly created partitions to both arrays:

mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdb2

Now your partitions have been added and (provided the new partitions are at least as large as the old ones) the arrays will synchronize their data.

check the progress with:

 watch -n 3 cat /proc/mdstat

Wait until everything is synchronized.

Larger space

First increase the size of the RAID arrays:

mdadm --grow /dev/md1 --size=max
mdadm --grow /dev/md2 --size=max
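Growing a RAID1 array to a larger size makes the kernel synchronize the newly added space, so /proc/mdstat may show another resync. You can also confirm the new size with mdadm:

 mdadm --detail /dev/md2 | grep "Array Size"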

Then increase the size of the LVM physical volume (only the array that actually holds the LVM physical volume, normally md2, needs this; md1 is the /boot array):

pvresize /dev/md1
pvresize /dev/md2
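To confirm the physical volume now sees the extra space, check the PV size and the free physical extents (assuming /dev/md2 is the PV holding your volume group):

 pvdisplay /dev/md2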

Finally increase the size of the LVM logical volume.

 lvresize -l +$(vgdisplay main -c | cut -d: -f16) main/root

or, if you installed your SME Server from a 7 RC:

 lvresize -l +$(vgdisplay vg_primary -c | cut -d: -f16) vg_primary/lv_root

You can check which one applies by typing "df -h": if you see "/dev/mapper/vg_primary-lv_root", use the second command; if you see "/dev/mapper/main-root", use the first. Warning: the -l option is a lowercase L.
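For reference, vgdisplay -c prints one colon-separated record per volume group, and field 16 should be the number of free physical extents, which is exactly what the lvresize command above adds to the logical volume. Assuming your volume group is called main, you can look at the number it will use:

 vgdisplay main -c
 vgdisplay main -c | cut -d: -f16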


Then grow the filesystem online:

 ext2online -C0 /dev/main/root

or

 ext2online -C0 /dev/mapper/vg_primary-lv_root

Attention: -C0 is a dash, a capital C and a zero.
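When ext2online finishes you can check that the root filesystem really has grown:

 df -h /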

 * These instructions should work for any RAID level you have, as long as you have >= 2 drives
 * If you have disabled LVM:
 1. you do not need the pvresize and lvresize commands above
 2. the final line becomes ext2online -C0 /dev/md2 (or whatever / is mounted on)


 mdadm /dev/md1 --add /dev/sdb3


http://www.debian-administration.org/articles/424