====== RAID - mdadm - Growing an array ======
If a RAID array is running out of space, additional disks can be added to grow it.
* Multiple drives can be added at once if the array needs to grow by a larger amount.
**NOTE:** Each new drive must be at least as large as the existing members of the array.
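A quick way to confirm a candidate drive is large enough is to compare the reported device sizes before adding it. The device names below (/dev/sdb as an existing member, /dev/sdg as the new drive) are only examples and will differ on other systems:
lsblk -b -d -o NAME,SIZE /dev/sdb /dev/sdg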
----
===== Initial Array =====
mdadm --detail /dev/md0
returns:
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 6 18:31:41 2011
Raid Level : raid6
Array Size : 3144192 (3.00 GiB 3.22 GB)
Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Thu Sep 8 18:54:26 2011
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : raidtest.loc:0 (local to host raidtest.loc)
UUID : e0748cf9:be2ca997:0bc183a6:ba2c9ebf
Events : 2058
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
5 8 48 2 active sync /dev/sdd
4 8 64 3 active sync /dev/sde
6 8 80 4 active sync /dev/sdf
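For a quicker overview of the array state (for example while a rebuild or reshape is running), /proc/mdstat can also be checked as a lightweight alternative to mdadm --detail:
cat /proc/mdstat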
----
===== Add more drives =====
mdadm --add /dev/md0 /dev/sdg /dev/sdh
returns:
mdadm: added /dev/sdg
mdadm: added /dev/sdh
**NOTE:** In this example, 2 drives (/dev/sdg and /dev/sdh) are added to the RAID.
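Until the array is actually grown, the newly added drives are normally listed as spares. It can be worth confirming this before starting the reshape; a simple check (the grep pattern is only an illustration):
mdadm --detail /dev/md0 | grep -i spare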
----
===== Grow the array =====
mdadm --grow /dev/md0 --raid-devices=7
returns:
mdadm: Need to backup 7680K of critical section..
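The reshape started by --grow runs in the background and can take a long time on large arrays. Progress can be monitored with:
watch cat /proc/mdstat
On some setups mdadm will refuse to start the reshape without a backup file for the critical section. In that case the grow command can be re-run with --backup-file; the path below is only an example:
mdadm --grow /dev/md0 --raid-devices=7 --backup-file=/root/md0-grow.backup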
----
===== Expand the File System Volume =====
A RAID device behaves like a single hard drive.
* Just because the underlying device is bigger does not mean the file system sees the extra space!
* Therefore, the file system volume needs to be expanded as well.
* This step has nothing to do with mdadm; the command depends on the file system in use (see the note after the resize2fs output below).
resize2fs /dev/md0
returns:
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md0 is mounted on /mnt/md0; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/md0 to 1310080 (4k) blocks.
The filesystem on /dev/md0 is now 1310080 blocks long.
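resize2fs only applies to ext2/ext3/ext4. Other file systems have their own grow tools; for example, an XFS file system mounted on the same mount point as in this example would be grown with:
xfs_growfs /mnt/md0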
----
===== Display Disks =====
df -hl
returns:
Filesystem Size Used Avail Use% Mounted on
...
/dev/md0 5.0G 70M 4.7G 2% /mnt/md0
----
===== Final Result =====
mdadm --detail /dev/md0
returns:
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 6 18:31:41 2011
Raid Level : raid6
Array Size : 5240320 (5.00 GiB 5.37 GB)
Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
Raid Devices : 7
Total Devices : 7
Persistence : Superblock is persistent
Update Time : Thu Sep 8 19:01:15 2011
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : raidtest.loc:0 (local to host raidtest.loc)
UUID : e0748cf9:be2ca997:0bc183a6:ba2c9ebf
Events : 2089
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
5 8 48 2 active sync /dev/sdd
4 8 64 3 active sync /dev/sde
6 8 80 4 active sync /dev/sdf
8 8 112 5 active sync /dev/sdh
7 8 96 6 active sync /dev/sdg
**NOTE:** The RAID size has been increased without ever taking the file system offline.
* VMs or any other workloads can keep running on the RAID during all of these procedures.
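If the mdadm configuration file pins the array layout (for example with num-devices=), it should be updated after the grow so the array still assembles cleanly at boot. The current definition can be regenerated with the command below and used to replace the ARRAY line for /dev/md0; the config file location varies by distribution (/etc/mdadm.conf or /etc/mdadm/mdadm.conf):
mdadm --detail --scan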
----