I have a Dell PowerEdge 2950 with 6 SAS disks and a PERC 5/i controller that won't do RAID 6. I plan to have 2 drives as RAID 1 for the OS, and the remaining 4 as RAID 6. Granted, the PERC can do RAID 1, but I'm tempted to do everything via software RAID. Thus, if anything goes wrong with the controller, I just need to obtain a new SAS controller and I'm back up.
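(That controller-swap scenario is exactly what Linux md is good at: the RAID metadata lives on the disks themselves, not on the controller, so on the replacement controller reassembly should be roughly this, assuming all the disks show up:)

# scan the attached disks for md superblocks and reassemble the arrays
mdadm --assemble --scan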
I learned my lesson about not using RAID 0 for important data storage when a drive went bad!
What are people's performance and reliability experiences with 64-bit C5's software RAID, particularly RAID 6?
Thanks.
Scott
Scott R. Ehrlich wrote:
I have a Dell PowerEdge 2950 with 6 SAS disks and a PERC 5/i controller that won't do RAID 6. I plan to have 2 drives as RAID 1 for the OS, and the remaining 4 as RAID 6. [...]
4 disks in RAID 6? Isn't that a bit overkill? You would use half the space for parity.
If it were me, I would:
- Partition all drives identically, with a 200 MB partition 1 and the rest as partition 2
- Use sda1 and sdb1 in RAID 1 as /boot
- Use sd[cdef]1 as swap in RAID 5
- Use sd[abcdef]2 as / in RAID 5
You could use RAID 6 for /, but then you're still not 100% safe: if you lose disks 1 and 2 simultaneously, the /boot mirror is gone even though / would survive.
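Roughly, that layout could be assembled like this (a sketch only; the md device numbers and the mkswap/mkfs choices are my assumptions, not tested on this box):

# /boot: RAID 1 across the two small partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# swap: RAID 5 across the other four small partitions
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# /: RAID 5 across all six large partitions
mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
mkswap /dev/md1
mkfs.ext3 /dev/md2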
Mogens
On Sat, March 8, 2008 19:18, Mogens Kjaer wrote:
[...] 4 disks in RAID 6? Isn't that a bit overkill? You would use half the space for parity. [...] Use sd[cdef]1 as swap in RAID 5. Use sd[abcdef]2 as / in RAID 5. [...]
Mogens, RAID 5 can only handle 1 drive failure. So if he had 3 drives in the set and 1 spare, that spare would be used to rebuild the stripe, which could take a long time if it's big. If anything goes wrong during that window, he could effectively lose all his data. So he'll be as screwed (sorry, I couldn't think of a better word) as a RAID 6 stripe that has already lost 2 drives.
RAID 6 with 4 drives will give the same usable volume (4 disks minus 2 for parity = 2 disks' worth, the same as 3-disk RAID 5 plus a spare), but it tolerates 2 drive failures.
I would rather suggest RAID 10 (two RAID 1 mirrors, striped together as RAID 0), as it will give you better performance and will be much more reliable.
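(On that rebuild window: you can at least watch how exposed you are while a resync runs; these are the standard md status interfaces:)

# per-array state and resync/rebuild progress
cat /proc/mdstat
# full detail for one array
mdadm --detail /dev/md0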
On Sat, 8 Mar 2008, Rudi Ahlers wrote:
[...] RAID 5 can only handle 1 drive failure. [...] I would rather suggest RAID 10 (two RAID 1 mirrors, striped together as RAID 0), as it will give you better performance and will be much more reliable.
What would be the recommended way to create a software RAID 10 array?
Again, two disks for the OS as s/w RAID 1, the remaining four disks for data as RAID 10, with possibly 1 of the 4 as a [hot] spare.
Thanks.
Scott
Scott R. Ehrlich wrote:
What would be the recommended way to create a software RAID 10 array?
Again, two disks for the OS as s/w RAID 1, the remaining four disks for data as RAID 10, with possibly 1 of the 4 as a [hot] spare.
Well, if you only have 4 drives for that RAID 10, I don't see how you'd have a hot spare. If you reserve one as a hot spare, you only have 3 left, and RAID 10 requires an even number of drives. So, forgetting the hot spare:
# partition sdc-sdf with one full-disk primary partition each, partition type FD (Linux raid autodetect)
mdadm --create /dev/md/myraid10 --level=raid10 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mkfs /dev/md/myraid10
mkdir -p /u01
mount /dev/md/myraid10 /u01
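One thing I'd add to that (my addition, not part of the recipe above): record the array and the mount so they come back at boot.

# capture the array definition so it assembles at boot (CentOS reads /etc/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm.conf
# bare mkfs above creates ext2; adjust the fstype if you used mkfs.ext3 instead
echo "/dev/md/myraid10 /u01 ext2 defaults 1 2" >> /etc/fstab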