Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5 TB drives.
-Jason
At Thu, 25 Mar 2010 12:24:57 -0700 (PDT) CentOS mailing list centos@centos.org wrote:
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5 TB drives.
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 /dev/sd[a-h]1
The above will create a level 5 RAID named /dev/md0 from /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1, with /dev/sdh1 as hot spare (--spare-devices=1 is what reserves the last listed device as the spare).
Note: RAID5 is not really recommended for such large disks. You run the risk of complete data loss if one disk fails and another disk fails during the rebuild.
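Once created, the array syncs in the background. A couple of stock md commands for keeping an eye on it while that happens (nothing here is specific to this particular setup):

cat /proc/mdstat              # overall state and resync progress
mdadm --detail /dev/md0       # per-device status of the new array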
I used this guide for my first RAID on an Ubuntu box; it's very straightforward. It's all command-line based, so everything in it I have used on CentOS (apart from the writer setting the RAID flag on his drives via the GParted GUI, but this can be done from the terminal):
http://bfish.xaedalus.net/2006/11/software-raid-5-in-ubuntu-with-mdadm/
On Thu, Mar 25, 2010 at 3:36 PM, Robert Heller heller@deepsoft.com wrote:
At Thu, 25 Mar 2010 12:24:57 -0700 (PDT) CentOS mailing list centos@centos.org wrote:
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5 TB drives.
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 /dev/sd[a-h]1
The above will create a level 5 RAID named /dev/md0 from /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1, with /dev/sdh1 as hot spare (--spare-devices=1 is what reserves the last listed device as the spare).
Note: RAID5 is not really recommended for such large disks. You run the risk of complete data loss if one disk fails and another disk fails during the rebuild.
Robert,
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
Thanks.
Boris.
On Thu, Mar 25, 2010 at 9:07 PM, Boris Epstein borepstein@gmail.com wrote:
Note: RAID5 is not really recommended for such large disks. You run the risk of complete data loss if one disk fails and another disk fails during the rebuild.
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
As the disks get bigger, rebuild time also increases, and the performance of the disks doesn't increase linearly with their capacity. This means that while you are rebuilding, the chance of one of your other disks failing becomes significant. Most suggest RAID6 these days as a minimum; mirroring plus striping appears to be the most popular.
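For reference, the mdadm invocations for those two alternatives would look something like this -- a sketch only, assuming the same eight partitions as in the command earlier in the thread:

mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]1    # RAID6: survives any two failures
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[a-h]1   # RAID10: mirror + stripe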
On Thu, Mar 25, 2010 at 5:27 PM, Hakan Koseoglu hakan@koseoglu.org wrote:
On Thu, Mar 25, 2010 at 9:07 PM, Boris Epstein borepstein@gmail.com wrote:
Note: RAID5 is not really recommended for such large disks. You run the risk of complete data loss if one disk fails and another disk fails during the rebuild.
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
As the disks get bigger, rebuild time also increases, and the performance of the disks doesn't increase linearly with their capacity. This means that while you are rebuilding, the chance of one of your other disks failing becomes significant. Most suggest RAID6 these days as a minimum; mirroring plus striping appears to be the most popular.
Hakan,
You surely do have a point there. However, it is still not all that likely that a disk will fail during the rebuild time in question (what are we talking about, a few hours at most?)
Boris.
On 3/25/2010 4:43 PM, Boris Epstein wrote:
On Thu, Mar 25, 2010 at 5:27 PM, Hakan Koseoglu hakan@koseoglu.org wrote:
On Thu, Mar 25, 2010 at 9:07 PM, Boris Epstein borepstein@gmail.com wrote:
Note: RAID5 is not really recommended for such large disks. You run the risk of complete data loss if one disk fails and another disk fails during the rebuild.
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
As the disks get bigger, rebuild time also increases, and the performance of the disks doesn't increase linearly with their capacity. This means that while you are rebuilding, the chance of one of your other disks failing becomes significant. Most suggest RAID6 these days as a minimum; mirroring plus striping appears to be the most popular.
You surely do have a point there. However, it is still not all that likely that a disk will fail during the rebuild time in question (what are we talking about, a few hours at most?)
The common problem is that there are unused portions of the drives that go bad but stay unnoticed for a long time. Then one drive fails badly enough to get kicked out of the RAID, and when you rebuild, you have to reconstruct parity for even the unused parts of the drives -- and you hit the previously unnoticed bad spots in the process. I think the last CentOS update added some sort of RAID scan as a cron job that might detect bad spots earlier, but I'm not sure what it actually does.
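For what it's worth, that kind of scan can also be kicked off by hand through the md sysfs interface (the cron script location is from memory and may differ between releases -- I believe it is /etc/cron.weekly/99-raid-check):

echo check > /sys/block/md0/md/sync_action   # start a background consistency check
cat /proc/mdstat                             # shows the check's progress
cat /sys/block/md0/md/mismatch_cnt           # non-zero after the check means inconsistencies were found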
Boris Epstein wrote:
You surely do have a point there. However, it is still not all that likely that a disk will fail during the rebuild time in question (what are we talking about, a few hours at most?)
8 disks is about the upper limit I'd suggest for a single RAID group on any sort of system.
Rebuilding an 8 x 1.5 TB RAID5 could easily take a full day or more... Will you have an online hot spare? If not, then the rebuild time includes how long it takes you to realize there's a bad drive, procure and install the replacement, /AND/ the umpteen hours for the rebuild.
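If you go without a spare at creation time, one can be added later, and mdadm can mail you when a drive drops out, which shrinks that "realize there's a bad drive" window (the address and device names below are placeholders):

mdadm --add /dev/md0 /dev/sdh1         # on a healthy array, this becomes a hot spare
mdadm --monitor --scan --daemonise     # alerts go to the MAILADDR line in /etc/mdadm.conf,
                                       # e.g. MAILADDR admin@example.com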
Personally, I prefer using RAID10 or 1+0 (more or less the same thing), and for anything above a 2-disk mirror I prefer a proper hardware RAID controller... For 8+ disks, I'd likely be looking at external storage arrays such as the IBM DS3000 family.
On 25.03.2010 at 22:07, Boris Epstein wrote:
Robert,
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
Thanks.
Boris.
This has been discussed before.
The root of the problem lies in the fact that when a disk fails, you have to read out the data from all the other disks to rebuild the RAID. Reads from disks have a certain probability of returning an error. The larger the disks and the larger the array, the more probable it is that you encounter such an error while rebuilding the RAID (and if that happens, your RAID is just a piece of scrap metal).
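To put rough numbers on this, assuming the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits read: rebuilding the 8-drive array discussed above means reading the 7 surviving disks in full, 7 x 1.5 TB = 10.5 TB, which is about 8.4 x 10^13 bits -- an expected 0.84 read errors per rebuild, or close to even odds of hitting at least one.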
http://www.google.com/search?q=the+end+of+raid
RAID5 works OK-ish for a couple of 146 GB SAS disks.
Rainer
At Thu, 25 Mar 2010 22:27:56 +0100 CentOS mailing list centos@centos.org wrote:
On 25.03.2010 at 22:07, Boris Epstein wrote:
Robert,
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
Thanks.
Boris.
This has been discussed before.
The root of the problem lies in the fact that when a disk fails, you have to read out the data from all the other disks to rebuild the RAID. Reads from disks have a certain probability of returning an error. The larger the disks and the larger the array, the more probable it is that you encounter such an error while rebuilding the RAID (and if that happens, your RAID is just a piece of scrap metal).
Or as was done recently at the Wendell Free Library, your disks become raw materials for an after school art project... :-)
http://www.google.com/search?q=the+end+of+raid
RAID5 works OK-ish for a couple of 146 GB SAS disks.
More than a couple of disks, actually -- at least 3 are needed for RAID5.
Rainer
Robert Heller wrote:
At Thu, 25 Mar 2010 22:27:56 +0100 CentOS mailing list centos@centos.org wrote:
[...]
The root of the problem lies in the fact that when a disk fails, you have to read out the data from all the other disks to rebuild the RAID. Reads from disks have a certain probability of returning an error. The larger the disks and the larger the array, the more probable it is that you encounter such an error while rebuilding the RAID (and if that happens, your RAID is just a piece of scrap metal).
Or as was done recently at the Wendell Free Library, your disks become raw materials for an after school art project... :-)
It depends on how redundant the array is. With enough redundancy, one can rebuild even if more than one disc fails. RAID is essentially indistinguishable from ECC. If the number of errors (failed reads from discs) does not exceed the correction ability of the code used (usually a Reed-Solomon/BCH-style code), then the reconstruction can proceed.
Mike
At Thu, 25 Mar 2010 17:07:47 -0400 CentOS mailing list centos@centos.org wrote:
On Thu, Mar 25, 2010 at 3:36 PM, Robert Heller heller@deepsoft.com wrote:
At Thu, 25 Mar 2010 12:24:57 -0700 (PDT) CentOS mailing list centos@centos.org wrote:
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5 TB drives.
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 /dev/sd[a-h]1
The above will create a level 5 RAID named /dev/md0 from /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1, with /dev/sdh1 as hot spare (--spare-devices=1 is what reserves the last listed device as the spare).
Note: RAID5 is not really recommended for such large disks. You run the risk of complete data loss if one disk fails and another disk fails during the rebuild.
Robert,
Why is the size a factor here? Why would this be OK with smaller disks? How would you partition this instead?
There was a thread some time back (a few weeks? A couple of months?) about how, as disk sizes have gotten so much larger, the error rate hasn't really gotten much better. With such large disks, the number of I/O operations needed to rebuild a RAID5 array is so large that you become increasingly likely to hit an error, at which point all bets are off. (There are some papers talking about this -- I don't have the links, but I think they are in the list archives.)
The preferred way to go would be RAID10 (RAID1 (mirror) + RAID0 (stripe)). Form pairs as RAID1, then stripe across the pairs. With 8 disks, this would give 4 pairs at 1.5 TB usable each: 1.5 * 4 = 6 TB total.
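As a sketch, the native-mdraid version of that layout is a single command (same eight assumed partitions as earlier in the thread):

mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[a-h]1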
On Thursday 25 March 2010 18:10, Robert Heller wrote:
The preferred way to go would be RAID10 (RAID1 (mirror) + RAID0 (stripe)). Form pairs as RAID1, then stripe across the pairs. With 8 disks, this would give 4 pairs at 1.5 TB usable each: 1.5 * 4 = 6 TB total.
I am just starting to look into RAID and I was wondering: wouldn't RAID01 be better than RAID10? In a 4-disc system, have the first two disks striped and then back them up with a second set as mirrors?
My thought is having D1 and D2 as the primary drives, striped, and then having D3 back up D1 and D4 back up D2.
And if there's enough room, place a couple more drives in the system as hot standbys.
Or am I looking at this all wrong?
Robert Spangler wrote:
On Thursday 25 March 2010 18:10, Robert Heller wrote:
The preferred way to go would be RAID10 (RAID1 (mirror) + RAID0 (stripe)). Form pairs as RAID1, then stripe across the pairs. With 8 disks, this would give 4 pairs at 1.5 TB usable each: 1.5 * 4 = 6 TB total.
I am just starting to look into RAID and I was wondering: wouldn't RAID01 be better than RAID10? In a 4-disc system, have the first two disks striped and then back them up with a second set as mirrors?
My thought is having D1 and D2 as the primary drives, striped, and then having D3 back up D1 and D4 back up D2.
And if there's enough room, place a couple more drives in the system as hot standbys.
Or am I looking at this all wrong?
For all practical purposes it's the same thing. If it were really stripe-then-mirror, a naive mirror handler would think it had to remirror both drives when one half of one of the stripe sets failed and was replaced, but in fact the mirror handlers tend to be well aware of what's going on. Mirror 0+1 and stripe that with mirrored 2+3, and it's really all the same. The native raid10 in newer mdraid is cleaner because you don't end up with extra partial-volume metadevices...
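To make the difference concrete, here is a sketch of both approaches for four hypothetical partitions sdc1..sdf1 -- the nested build leaves the intermediate md1/md2 metadevices lying around, the native one doesn't:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1   # mirror pair 1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1   # mirror pair 2
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2     # stripe the pairs

mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sd[c-f]1        # native raid10, one device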
On Mar 27, 2010, at 5:07 AM, John R Pierce pierce@hogranch.com wrote:
Robert Spangler wrote:
On Thursday 25 March 2010 18:10, Robert Heller wrote:
The preferred way to go would be RAID10 (RAID1 (mirror) + RAID0 (stripe)). Form pairs as RAID1, then stripe across the pairs. With 8 disks, this would give 4 pairs at 1.5 TB usable each: 1.5 * 4 = 6 TB total.
I am just starting to look into RAID and I was wondering: wouldn't RAID01 be better than RAID10? In a 4-disc system, have the first two disks striped and then back them up with a second set as mirrors?
My thought is having D1 and D2 as the primary drives, striped, and then having D3 back up D1 and D4 back up D2.
And if there's enough room, place a couple more drives in the system as hot standbys.
Or am I looking at this all wrong?
For all practical purposes it's the same thing. If it were really stripe-then-mirror, a naive mirror handler would think it had to remirror both drives when one half of one of the stripe sets failed and was replaced, but in fact the mirror handlers tend to be well aware of what's going on. Mirror 0+1 and stripe that with mirrored 2+3, and it's really all the same. The native raid10 in newer mdraid is cleaner because you don't end up with extra partial-volume metadevices...
RAID0+1 is never a good configuration, because a single drive failure in a RAID0 stripe fails out the whole stripe, and with, say, a 4x2 RAID0+1 you are more likely to hit a disk failure among 4 drives in a RAID0 than among 2 in a RAID1.
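To put numbers on it for those hypothetical 8 disks: in the 4x2 RAID0+1, once one drive has failed, the surviving stripe set is all that protects you, so any of its 4 drives failing next (4 of the 7 remaining) kills the array. In the equivalent RAID1+0 (four mirrored pairs, striped), only the failed drive's mirror partner is fatal -- 1 of the 7 remaining. The 0+1 layout is therefore roughly four times as likely to die on the second failure.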
That's why RAID1+0 came about.
-Ross
On Saturday 27 March 2010 09:22, Ross Walker wrote:
For all practical purposes it's the same thing. If it were really stripe-then-mirror, a naive mirror handler would think it had to remirror both drives when one half of one of the stripe sets failed and was replaced, but in fact the mirror handlers tend to be well aware of what's going on. Mirror 0+1 and stripe that with mirrored 2+3, and it's really all the same. The native raid10 in newer mdraid is cleaner because you don't end up with extra partial-volume metadevices...
RAID0+1 is never a good configuration, because a single drive failure in a RAID0 stripe fails out the whole stripe, and with, say, a 4x2 RAID0+1 you are more likely to hit a disk failure among 4 drives in a RAID0 than among 2 in a RAID1.
That's why RAID1+0 came about.
Thanks for clearing this up.
On Saturday 27 March 2010 05:07, John R Pierce wrote:
For all practical purposes it's the same thing. If it were really stripe-then-mirror, a naive mirror handler would think it had to remirror both drives when one half of one of the stripe sets failed and was replaced, but in fact the mirror handlers tend to be well aware of what's going on. Mirror 0+1 and stripe that with mirrored 2+3, and it's really all the same. The native raid10 in newer mdraid is cleaner because you don't end up with extra partial-volume metadevices...
Thank you kindly for your reply.
On 3/25/2010 2:24 PM, Slack-Moehrle wrote:
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5 TB drives.
Make matching partitions on each disk with fdisk, setting the partition type to FD (Linux raid autodetect), then use 'mdadm --create ...' to specify the options and start the array. See the create section in 'man mdadm'. You'll need at least --level=, --raid-devices=, and --auto=yes.
Then you'll probably want to add an entry in /etc/fstab to mount the new md device somewhere.
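For the archives, a minimal end-to-end sketch of the whole procedure -- device names, filesystem, and mount point are assumptions to adjust before running anything:

fdisk /dev/sda                                   # one FD-type partition per disk, repeat for each
mdadm --create /dev/md0 --level=5 --raid-devices=7 \
      --spare-devices=1 --auto=yes /dev/sd[a-h]1 # 7 active disks + 1 hot spare
mkfs.ext3 /dev/md0                               # or your filesystem of choice
mdadm --detail --scan >> /etc/mdadm.conf         # so the array assembles at boot
mkdir -p /mnt/raid
echo '/dev/md0 /mnt/raid ext3 defaults 1 2' >> /etc/fstab
mount /mnt/raid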