LVM on RAID
LVM can group multiple disks and RAID arrays into a Volume Group.
- This Volume Group can be split into Logical Volumes; in essence different Partitions.
- LVM provides the ability to resize these Logical Volumes very easily.
- LVM is not a replacement for RAID, as LVM does not provide any options for redundancy or parity that RAID provides.
- Therefore it is best to use LVM in conjunction with RAID.
|---------------------|-----------------------------------------------------------------------|
| LVM Logical Volumes |      /      |    /var     |    /usr     |    /home     |     /mnt     |
|---------------------|-----------------------------------------------------------------------|
| LVM Volume Group    |                          /dev/VolGroupArray                           |
|---------------------|-----------------------------------------------------------------------|
| RAID Arrays         |             /dev/md0              |             /dev/md1              |
|---------------------|-----------------------------------------------------------------------|
| Physical Partitions | /dev/sda1 | /dev/sda2 | /dev/sdb1 | /dev/sdb2 | /dev/sdc1 | /dev/sdc2 |
|---------------------|-----------------------------------------------------------------------|
| Devices             |       /dev/sda        |       /dev/sdb        |       /dev/sdc        |
|---------------------|-----------------------------------------------------------------------|
| Hard Drives         |        Drive 1        |        Drive 2        |        Drive 3        |
|---------------------|-----------------------------------------------------------------------|
NOTE: To take advantage of LVM resizing, a filesystem that can grow must be used (ext4 and XFS both can).
Resizing a RAID array
To resize an existing RAID5:
mdadm --add /dev/md1 /dev/sdb1
mdadm --grow /dev/md1 --raid-devices=4
NOTE: The --raid-devices=4 is the new total number of disks in the array.
- This will result in the array having to restripe itself, which can take a very long time.
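The reshape runs in the background; its progress can be followed via /proc/mdstat:
watch cat /proc/mdstat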
Prepare The Disks
fdisk /dev/sda
then:
- Enter an n to create a new partition.
- Enter an e for an extended partition.
- Enter a p for a primary partition (1-4).
- Enter a 1 for primary partition number 1.
- Accept the default sizes.
- Enter a t to change the partition type.
- Enter fd to change to Linux raid autodetect.
- Enter a w to write the changes.
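To give the remaining disks an identical layout, one option is to copy the partition table with sfdisk rather than repeating the fdisk steps (this sketch assumes /dev/sdb and /dev/sdc are at least as large as /dev/sda):
sfdisk -d /dev/sda | sfdisk /dev/sdb
sfdisk -d /dev/sda | sfdisk /dev/sdc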
Wipe everything
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
Create the RAID Array
mdadm -v --create /dev/md0 --chunk=128 --level=raid5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 missing
returns:
mdadm: layout defaults to left-symmetric
mdadm: size set to 245111616K
mdadm: array /dev/md0 started.
NOTE: The fourth device is specified as missing.
- This will create the array as if one of the disks is dead.
- At a later stage the real disk can be hot-added, which will result in the array rebuilding itself.
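A sketch of that later hot-add (the partition name /dev/sdd1 is illustrative; use whatever the replacement disk is partitioned as):
mdadm --add /dev/md0 /dev/sdd1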
Check the RAID Array
mdadm --detail /dev/md0
returns:
/dev/md0:
        Version : 00.90.01
  Creation Time : Thu Jun  3 20:24:17 2004
     Raid Level : raid5
     Array Size : 735334656 (701.27 GiB 752.98 GB)
    Device Size : 245111552 (233.76 GiB 250.99 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jun  3 20:24:17 2004
          State : clean, no-errors
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

    Number   Major   Minor   RaidDevice   State
       0       8       1        0         active sync   /dev/sda1
       1       8      17        1         active sync   /dev/sdb1
       2       8      33        2         active sync   /dev/sdc1
       3       0       0       -1         removed

           UUID : d6ac1605:db6659e1:6460b9c0:a451b7c8
         Events : 0.5078
Update the mdadm config file
echo 'DEVICE /dev/sd*' > /etc/mdadm/mdadm.conf
echo 'PROGRAM /bin/echo' >> /etc/mdadm/mdadm.conf
echo 'MAILADDR some@email_address.com' >> /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /etc/mdadm/mdadm.conf
NOTE: This should be done whenever the configuration changes, including adding a spare disk, marking a disk as faulty, etc.
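For reference, the line appended by mdadm --detail --scan above is roughly of the form below (the exact fields vary between mdadm versions):
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=d6ac1605:db6659e1:6460b9c0:a451b7c8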
Create a LVM Physical Volume
pvcreate /dev/md0
returns:
No physical volume label read from /dev/md0
Physical volume "/dev/md0" successfully created
NOTE: This makes the RAID Array usable by LVM.
The LVM Physical Volume can be checked by running
pvdisplay
Create a LVM Volume Group
vgcreate myVolumeGroup1 /dev/md0
returns:
Adding physical volume '/dev/md0' to volume group 'myVolumeGroup1'
Archiving volume group "myVolumeGroup1" metadata.
Creating volume group backup "/etc/lvm/backup/myVolumeGroup1"
Volume group "myVolumeGroup1" successfully created
NOTE: Logical Volumes can now be created from this Volume Group.
NOTE: The LVM Volume Group can be checked by running
vgdisplay
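If a second array such as /dev/md1 from the layering diagram is created later, it can be added to the Volume Group. A minimal sketch, assuming /dev/md1 already exists:
pvcreate /dev/md1
vgextend myVolumeGroup1 /dev/md1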
Create a LVM Logical Volume
lvcreate -L 100G --name myDataVolume myVolumeGroup1
returns:
Logical volume "myDataVolume" created
NOTE: The LVM Logical Volume can be checked by running
lvdisplay
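The layering diagram at the top shows several Logical Volumes in one Volume Group; additional volumes are carved out the same way. A sketch (the name myHomeVolume and the 20G size are illustrative):
lvcreate -L 20G --name myHomeVolume myVolumeGroup1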
Format the LVM Logical Volume
mkfs.ext4 /dev/myVolumeGroup1/myDataVolume
NOTE: Other filesystems besides ext4 can be used if preferred.
- ReiserFS
mkfs.reiserfs /dev/myVolumeGroup1/myDataVolume
- XFS
mkfs.xfs /dev/myVolumeGroup1/myDataVolume
Mount the LVM Logical Volume
mkdir -p /mnt/data
mount /dev/myVolumeGroup1/myDataVolume /mnt/data
Check the mount
df -k
returns:
Filesystem                        1K-blocks      Used Available Use% Mounted on
/dev/hda2                          19283776    697988  18585788   4% /
/dev/hda1                             97826     13003     79604  15% /boot
/dev/myVolumeGroup1/myDataVolume  629126396     32840 629093556   1% /mnt/data
NOTE: To auto mount, add following to /etc/fstab:
- /etc/fstab
/dev/myVolumeGroup1/myDataVolume  /mnt/data  ext4  defaults  1  2
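Easy resizing is the main reason to layer LVM on RAID. A minimal sketch of growing this volume and the ext4 filesystem on it while mounted (the +50G figure is illustrative and assumes free space remains in the Volume Group):
lvextend -L +50G /dev/myVolumeGroup1/myDataVolume
resize2fs /dev/myVolumeGroup1/myDataVolume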
Performance Enhancements / Tuning
There may be performance issues with LVM on RAID.
A potential fix is to increase the read-ahead on both the RAID array and the Logical Volume:
blockdev --setra 4096 /dev/md0
blockdev --setra 4096 /dev/myVolumeGroup1/myDataVolume
WARNING: This may lock up the machine and destroy data.
- Ensure data is backed up before running this!
If the system has sufficient RAM, the size of the software RAID stripe cache can be increased:
echo 8192 > /sys/block/md0/md/stripe_cache_size
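The current value can be read back from the same sysfs file. Note that neither this setting nor the read-ahead above persists across reboots, so they would need to be reapplied from a local startup script:
cat /sys/block/md0/md/stripe_cache_size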