I have 4 disks in a RAID5 array. I want to add a 5th. So I did:

  mdadm --add /dev/md3 /dev/sde1

This worked but, as expected, the disk isn't being used in the RAID5 array:
md3 : active raid5 sde1[4] sdd4[3] sdc3[2] sdb2[1] sda1[0]
      2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
So then I tried the next step:

  mdadm --grow --raid-devices=5 /dev/md3

But now I have problems...

  mdadm: Cannot set device size/shape for /dev/md3: Invalid argument
Can CentOS 4.6 grow md RAID5 arrays? Or are the kernel and mdadm versions too old?
( http://www.economysizegeek.com/2006/07/15/migrate-raid1-to-raid5-and-grow/ hints that I need kernel 2.6.17 and mdadm 2.5.2, but it's hard to know what the RHEL/CentOS kernel has in it because the version numbers no longer match.)
I wonder if I could boot off a Ubuntu CD or something and grow the array that way. Would be annoying (many hours of server downtime)...
So then I tried the next step:
  mdadm --grow --raid-devices=5 /dev/md3
But now I have problems...
  mdadm: Cannot set device size/shape for /dev/md3: Invalid argument
What happens if you add --size=max?
- Jussi
On Fri, Aug 22, 2008 at 08:26:01PM +0300, Jussi Hirvi wrote:
So then I tried the next step:
  mdadm --grow --raid-devices=5 /dev/md3
But now I have problems...
  mdadm: Cannot set device size/shape for /dev/md3: Invalid argument
What happens if you add --size=max?
% mdadm --grow --raid-devices=5 --size=max /dev/md3
mdadm: can change at most one of size, raiddisks, and layout
"--size=max" is for use when a failed disk is replaced with a bigger one.
Good thought, though.
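(For illustration, the use-case that option is meant for would look something like this - a sketch only, assuming every member has already been swapped for a bigger partition:

  % mdadm --grow /dev/md3 --size=max

That just tells md to use the full size of each component device; it doesn't add a fifth active member.)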
What happens if you add --size=max?
Stephen Harris lists@spuddy.org wrote (22.8.2008 20:27):
% mdadm --grow --raid-devices=5 --size=max /dev/md3
mdadm: can change at most one of size, raiddisks, and layout
"--size=max" is for use when a failed disk is replaced with a bigger one.
OK. I haven't done RAID5 myself - just as a disclaimer.
How about simply % mdadm --grow /dev/md3
What do you get with % mdadm --detail /dev/md3 ?
- Jussi
On Fri, Aug 22, 2008 at 08:41:25PM +0300, Jussi Hirvi wrote:
How about simply % mdadm --grow /dev/md3
% mdadm --grow /dev/md3
mdadm: no changes to --grow
What do you get with % mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90.01
  Creation Time : Wed Aug 20 08:44:30 2008
     Raid Level : raid5
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
    Device Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug 22 13:56:47 2008
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 8263db8a:f99c070f:349a59c2:2129ca73
         Events : 0.80605

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       18        1      active sync   /dev/sdb2
       2       8       35        2      active sync   /dev/sdc3
       3       8       52        3      active sync   /dev/sdd4

       4       8       65        -      spare   /dev/sde1
Stephen Harris wrote:
On Fri, Aug 22, 2008 at 08:41:25PM +0300, Jussi Hirvi wrote:
How about simply % mdadm --grow /dev/md3
% mdadm --grow /dev/md3
mdadm: no changes to --grow
What do you get with % mdadm --detail /dev/md3
/dev/md3:
<snip>
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       18        1      active sync   /dev/sdb2
       2       8       35        2      active sync   /dev/sdc3
       3       8       52        3      active sync   /dev/sdd4
       4       8       65        -      spare   /dev/sde1
Stephen,
I don't think you can grow it without backing it up, destroying it, rebuilding it with 5 devices, then restoring.
From the man page:
Grow Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options including changing the active size of component devices in RAID level 1/4/5/6 and changing the number of active devices in RAID1.
I take it to mean you can grow the segment size on all devices in the array - say you swapped out 160GB drives for 320GB drives one by one and now want the array to use the remaining 160GB on each - but you can only add devices to a RAID1...
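For what it's worth, the two operations that man page text does cover would look roughly like this (a sketch only; device names are examples):

  # grow the per-device size after all members were swapped for bigger drives
  % mdadm --grow /dev/md0 --size=max

  # turn a spare into an extra active mirror in a RAID1
  % mdadm --add /dev/md1 /dev/sdc1
  % mdadm --grow /dev/md1 --raid-devices=3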
What do you think this is ZFS?
Sheesh!
-Ross
On Fri, Aug 22, 2008 at 02:25:20PM -0400, Ross S. W. Walker wrote:
I don't think you can grow it without backing it up, destroying it, rebuilding it with 5 devices, then restoring.
You _can_... but it requires a newer kernel. See, for example,
http://linux-raid.osdl.org/index.php/Growing#Adding_partitions
Newer kernels have this option:
config MD_RAID5_RESHAPE
	bool "Support adding drives to a raid-5 array"
	depends on MD_RAID456
	default y
	---help---
	  A RAID-5 set can be expanded by adding extra drives. This
	  requires "restriping" the array which means (almost) every
	  block must be written to a different place.

	  This option allows such restriping to be done while the
	  array is online.
But it seems this isn't available in the CentOS 4.6 kernel.
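(If you want to check a given machine, the distro kernels ship their build config in /boot, so something like this should tell you - assuming the usual config file location:

  % grep MD_RAID5_RESHAPE /boot/config-$(uname -r)

No output, or "is not set", means no online RAID5 reshape.)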
Stephen Harris wrote:
A RAID-5 set can be expanded by adding extra drives. This requires "restriping" the array which means (almost) every block must be written to a different place. This option allows such restriping to be done while the array is online.
That's also a very risky operation, as it's extremely difficult to make it restartable in case of a mishap during the many-hours-long restriping operation. I wouldn't undertake this on any production system without a full backup first.
John R Pierce wrote:
Stephen Harris wrote:
A RAID-5 set can be expanded by adding extra drives. This requires "restriping" the array which means (almost) every block must be written to a different place. This option allows such restriping to be done while the array is online.
That's also a very risky operation, as it's extremely difficult to make it restartable in case of a mishap during the many-hours-long restriping operation. I wouldn't undertake this on any production system without a full backup first.
It would probably be faster to backup, rebuild and restore too...
Besides, saying it is available in the latest kernels is like saying it's available in another OS... That's nice, but it does nobody here any good.
-Ross
On Fri, Aug 22, 2008 at 02:50:29PM -0400, Ross S. W. Walker wrote:
It would probably be faster to backup, rebuild and restore too...
The whole reason I need to extend like this is because I don't have any easy way of backing up 1.3Tbytes of data.
While the rebuild is happening the existing volume is still available.
Besides, saying it is available in the latest kernels is like saying it's available in another OS... That's nice, but it does nobody here any good.
Well... there's the potential for me to build a kernel with the latest vanilla sources, temporarily boot into that, extend the array and then boot back to a supported kernel afterwards... Maybe!
Stephen Harris wrote:
On Fri, Aug 22, 2008 at 02:50:29PM -0400, Ross S. W. Walker wrote:
It would probably be faster to backup, rebuild and restore too...
The whole reason I need to extend like this is because I don't have any easy way of backing up 1.3Tbytes of data.
While the rebuild is happening the existing volume is still available.
Besides, saying it is available in the latest kernels is like saying it's available in another OS... That's nice, but it does nobody here any good.
Well... there's the potential for me to build a kernel with the latest vanilla sources, temporarily boot into that, extend the array and then boot back to a supported kernel afterwards... Maybe!
Or you could just boot from a LiveCD of a distro that has this support and run the conversion there; it would make the array unavailable during the conversion, though.
If the array was part of a LVM VG, you could create another 4 drive array and add it to the VG and extend the LVs that way, or do a pvmove and move everything from the old array to the new.
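Something along these lines, say - the VG/LV names here are just placeholders:

  % pvcreate /dev/md4
  % vgextend myvg /dev/md4
  % lvextend -l +100%FREE /dev/myvg/mylv    # or an explicit -L size on older LVM2

...or, to migrate off the old array entirely:

  % pvmove /dev/md3
  % vgreduce myvg /dev/md3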
-Ross
On Fri, Aug 22, 2008 at 03:05:30PM -0400, Ross S. W. Walker wrote:
Stephen Harris wrote:
Or you could just boot from a LiveCD of a distro that has this support and run the conversion there; it would make the array unavailable during the conversion, though.
*grin* My first email on this subject...
I wonder if I could boot off a Ubuntu CD or something and grow the array that way. Would be annoying (many hours of server downtime)...
If the array was part of a LVM VG, you could create another 4 drive array and add it to the VG and extend the LVs that way, or do a pvmove and move everything from the old array to the new.
Well, it _is_... the old array was 4*500GB. The new array is 5*1TB. In each I've built a single VG/LV. But my machine can't handle 9 SATA disks (power, controller limitations, space). So what I did was use one of the 1TB disks to copy the data, built the other 4 into an array, copied the data from the last disk onto the array and then... failed to extend the array.
I still have the old 4*500GB on a shelf, but I don't have anything I can plug it into.
(My other option is to buy a couple of SATA controllers, build a second machine then transfer data over the network)
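(If it came to that, a plain rsync over the network would do the copy - hostname and paths below are just examples:

  % rsync -aH --progress /Media/ otherbox:/Media/

and it can be re-run to pick up anything that changed before the final cutover.)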
Stephen Harris wrote:
On Fri, Aug 22, 2008 at 03:05:30PM -0400, Ross S. W. Walker wrote:
Stephen Harris wrote:
Or you could just boot from a LiveCD of a distro that has this support and run the conversion there; it would make the array unavailable during the conversion, though.
*grin* My first email on this subject...
I wonder if I could boot off a Ubuntu CD or something and grow the array that way. Would be annoying (many hours of server downtime)...
I wouldn't use Ubuntu or any Debian-based distro because its EVMS just might bugger up the LVM config...
Try Fedora or OpenSuse; they use straight LVM.
If the array was part of a LVM VG, you could create another 4 drive array and add it to the VG and extend the LVs that way, or do a pvmove and move everything from the old array to the new.
Well, it _is_... the old array was 4*500GB. The new array is 5*1TB. In each I've built a single VG/LV. But my machine can't handle 9 SATA disks (power, controller limitations, space). So what I did was use one of the 1TB disks to copy the data, built the other 4 into an array, copied the data from the last disk onto the array and then... failed to extend the array.
I still have the old 4*500GB on a shelf, but I don't have anything I can plug it into.
(My other option is to buy a couple of SATA controllers, build a second machine then transfer data over the network)
Instead of a second machine, how about an external disk enclosure?
You can get them rack mountable or tower based. Look for a nice 15 drive enclosure, then you have room to build 2 arrays...
A nice hardware RAID card with battery-backed cache would make the arrays scream too; for RAID5/6 I always go hardware with a BBU cache. I almost always do the OS disks as software RAID1.
Hey, with the enclosure going you can use the internal drives for volume snapshots and keep quite a few without killing the storage performance.
-Ross
On Fri, Aug 22, 2008 at 03:31:31PM -0400, Ross S. W. Walker wrote:
I wouldn't use Ubuntu or any Debian-based distro because its EVMS just might bugger up the LVM config...
Huh. Dunno what EVMS is, but thanks for the warning!
Instead of a second machine, how about an external disk enclosure?
You can get them rack mountable or tower based. Look for a nice 15 drive enclosure, then you have room to build 2 arrays...
This is a home server; I'm not made of money :-)
Stephen Harris wrote:
On Fri, Aug 22, 2008 at 03:31:31PM -0400, Ross S. W. Walker wrote:
I wouldn't use Ubuntu or any Debian-based distro because its EVMS just might bugger up the LVM config...
Huh. Dunno what EVMS is, but thanks for the warning!
EVMS is a storage management framework that LVM is just a component of. It's very ambitious, but also very complex, and it writes out its own metadata for volumes that are managed by it.
Instead of a second machine, how about an external disk enclosure?
You can get them rack mountable or tower based. Look for a nice 15 drive enclosure, then you have room to build 2 arrays...
This is a home server; I'm not made of money :-)
Ah, Ok, well a JBOD enclosure needn't break the bank, especially if it's an empty one. Google around and you can probably find a white box JBOD enclosure that fits your budget. There are even nice desktop enclosures with 4x SATA/SAS connectors for 6 or 8 drives.
-Ross
On Fri, Aug 22, 2008 at 03:21:51PM -0400, Stephen Harris wrote:
I wonder if I could boot off a Ubuntu CD or something and grow the array that way. Would be annoying (many hours of server downtime)...
In the end I booted a CentOS 5.2 DVD in "rescue" mode. This appears to have improved since 5.0 because that version threw up lots of I/O errors (which is why I ran 4.x, which worked fine).
Once I loaded the modules in the right order I was able to grow the RAID5. It took around 25 hours. But, oddly, after the build the 5th disk was taken offline. The remaining 4 disks correctly maintained the data.
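For anyone trying the same thing, the rescue-mode part looks roughly like this (a sketch only; the module name is what CentOS 5 uses, and the device list is the one from earlier in the thread):

  % modprobe raid456
  % mdadm --assemble /dev/md3 /dev/sda1 /dev/sdb2 /dev/sdc3 /dev/sdd4 /dev/sde1
  % mdadm --grow --raid-devices=5 /dev/md3
  % cat /proc/mdstat          # watch the reshape progress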
So then I rebooted back to the main build, re-added the disk, resized the PV, extended the LV, resized the FS. And now...
% df -hP /Media
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/Raid5-Media  3.6T  1.3T  2.4T  35% /Media
The RAID is still rebuilding. In another 5 hours I should know if it's worked properly!
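For completeness, the post-reshape steps look roughly like this (again a sketch; the VG/LV names come from the df output above, and the filesystem is assumed to be ext3):

  % mdadm --add /dev/md3 /dev/sde1         # re-add the dropped disk
  % pvresize /dev/md3
  % lvextend -l +100%FREE /dev/Raid5/Media
  % resize2fs /dev/Raid5/Media             # online grow; older e2fsprogs use ext2online for a mounted fs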