On Fri, Jan 29, 2010 at 8:09 PM, Ian Blackwell ian@ikel.id.au wrote:
On 30/01/2010 12:09 PM, Victor Padro wrote:
Hello,
I was wondering if someone could help me,
I'll try...
I want to put the 2 500GB HDDs in a RAID1 array for the OS and for some VMs,
That will work OK.
and with the other 4 1TB HDDs I want to create a RAID5 or RAID10 array for file sharing across my home network.
You can use these disks in a RAID5 array, but not RAID10. I'm fairly sure you need more than 4. RAID10 is mirrored, so you only have "2" disks in the array, which isn't enough for parity/striping stuff. You need at least "3", which would mean 6 disks for RAID10.
Having said that, I'm assuming you want to use the entire hard disk as a participant in an array. You could create 2 x 500GB partitions on each disk and then you have 8 x 500GB partitions to use in a RAID10 array. This approach sacrifices some redundancy though. If a disk dies entirely, then you will lose two participants in the RAID array, which may or may not be catastrophic - it depends on what you put where...
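If you do go with RAID5 across the four 1TB disks, building the array by hand after install is only a couple of commands. Something along these lines should do it (the /dev/sd[c-f]1 device names are just an example, substitute whatever your disks actually come up as):

    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
    mkfs.ext3 /dev/md1

I haven't typed that in recently, so check it against the mdadm man page before you run it.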
I found a guide but it's a little bit outdated and it's for Debian...
Do you have any other pointer I can read/use?
http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
I've mostly installed RAID arrays at install time, which you'll need to do as well if you want to put the OS on a RAID1 array.
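Whichever way the arrays get built, it's worth keeping an eye on them afterwards. From memory, something like:

    cat /proc/mdstat
    mdadm --detail /dev/md0
    mdadm --detail --scan >> /etc/mdadm.conf

will show you the state of the arrays and record the configuration so they assemble at boot (have a look at /etc/mdadm.conf first so you don't end up with duplicate entries).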
TIA.
Ian
Thank you Ian, but I disagree with your description of RAID10:
[quote]
RAID 10
RAID 1+0 (or 10) is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the "1+0" name. A RAID 1+0 array requires a minimum of four drives: two mirrored drives to hold half of the striped data, plus another two mirrored for the other half of the data. In Linux, MD RAID 10 is a non-nested RAID type like RAID 1 that only requires a minimum of two drives, and may give read performance on the level of RAID 0.
[/quote]
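So with the Linux MD driver I should be able to use the four 1TB drives directly in a RAID10 array. If I'm reading the mdadm man page right, it would be something like this (the device names are just a guess):

    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

Please correct me if I've got that wrong.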
I'll read that howto; it's for fakeRAID though...
TIA