Hey Guys,
I have some questions regarding a new home server I am going to build in the hopefully very near future (ASAP, I just need to finish planning everything and this is the penultimate hurdle). I will be creating a software RAID...
Let's say I have three 1TB SATA II drives "knocking" around, each made by a different manufacturer. I am going to guess that these couldn't be used in a RAID 5? Or could they?
However, could a similar result of 2TB of data with redundancy be achieved with JBOD?
Also, regarding RAID 5: three drives of data to one for parity is the maximum ratio, I believe? I.e. to expand this by adding another data drive, the original parity drive would no longer cover it and another would be required. Is this correct?
One more question, about hot-swappable drives. I understand that you can create RAID arrays with and without hot-swappable drives, but I am confused by this concept. In my experience with RAID I have only ever dealt with a RAID 1 that had degraded. I simply set the drive as offline, replaced it, set it to online, and the RAID rebuilt itself, all without restarting the server, and operation was never interrupted. So we can presume the server had hot-swappable drives enabled, yes? (It was a hardware RAID.) With a software RAID is this still achievable?
Thank you for reading.
Regards, James ;)
Charles de Gaulle - "The better I get to know men, the more I find myself loving dogs."
Let's say I have three 1TB SATA II drives "knocking" around, each made by a different manufacturer. I am going to guess that these couldn't be used in a RAID 5? Or could they?
They can, in fact. There might be minor differences of a few sectors between the drives, but md RAID accounts for those by using the 'smallest' (lowest sector count) drive as the base.
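For reference, building the array from the three mixed-manufacturer drives with mdadm looks something like this - a rough sketch only, where /dev/md0 and the /dev/sd?1 partitions are placeholder names you would adjust to your own system:

  # create a 3-drive RAID 5 from one partition on each disk
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
  # watch the initial build/resync progress
  cat /proc/mdstat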
However, could a similar result of 2TB of data with redundancy be achieved with JBOD?
JBOD (Just a Bunch Of Disks) has no redundancy. The redundancy enters when you assign those disks to RAID sets.
Also, regarding RAID 5: three drives of data to one for parity is the maximum ratio, I believe? I.e. to expand this by adding another data drive, the original parity drive would no longer cover it and another would be required. Is this correct?
No. The parity always costs the equivalent of one drive, regardless of how many drives are in the array.
One more question, about hot-swappable drives. I understand that you can create RAID arrays with and without hot-swappable drives, but I am confused by this concept. In my experience with RAID I have only ever dealt with a RAID 1 that had degraded. I simply set the drive as offline, replaced it, set it to online, and the RAID rebuilt itself, all without restarting the server, and operation was never interrupted. So we can presume the server had hot-swappable drives enabled, yes? (It was a hardware RAID.) With a software RAID is this still achievable?
All hot-swappable drives allow you to do is replace them without having to shut the machine down completely. In hardware RAID this is often built in, such that you can replace a drive without telling the RAID ahead of time and it will compensate; IBM xSeries servers are a good example. Software RAID can also do this, but you have to tell the RAID system that you plan to remove the drive, and then tell it when you add a new drive back. Hot-swapping disks is also dependent on the drive controller supporting hot swap.
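With Linux md, that "telling" is roughly the following sequence (a sketch with placeholder device names, not a recipe for any particular controller):

  # tell md the drive is going away, then detach it from the array
  mdadm /dev/md0 --fail /dev/sdc1
  mdadm /dev/md0 --remove /dev/sdc1
  # physically swap the drive, then hand the replacement back to md
  mdadm /dev/md0 --add /dev/sdc1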
James Bensley wrote:
Let's say I have three 1TB SATA II drives "knocking" around, each made by a different manufacturer. I am going to guess that these couldn't be used in a RAID 5? Or could they?
RAID is a manufacturer-independent concept. That said, depending on who you ask, it is typically good to have the same drive make and model number throughout the array, mainly so the drives have very similar if not identical performance characteristics; if some drives are faster than others, performance won't be consistent.
However, could a similar result of 2TB of data with redundancy be achieved with JBOD?
There are a couple of ways of interpreting JBOD. In my experience the most common is a shelf of dumb disks, often fibre-attached; here is an example of such a system:
http://www.infortrend.com/main/2_product/es_f16f-r2j2_s2j2.asp
Another way of interpreting it is presenting a bunch of disks to the OS without any sort of RAID protection, either individually or in a concatenated group (set by the host controller).
Also, regarding RAID 5: three drives of data to one for parity is the maximum ratio, I believe? I.e. to expand this by adding another data drive, the original parity drive would no longer cover it and another would be required. Is this correct?
It depends on the implementation. I can't speak for Linux software RAID, but it is not too uncommon to have 5 data drives to 1 parity (5+1), or 8+1, and some even go as high as 12+1 or higher (shudder). The higher the ratio, the lower the performance in general, especially on writes, and disk rebuilds take far longer with bigger ratios, resulting in a better chance of a double disk failure during the rebuild.
hardware RAID). With a software RAID is this still achievable?
If the hardware supports it, yes. Some controllers don't support hot swap well, especially older ones, and if you yank a drive while the system is running it could crash the system, reboot the box, or hang the I/O. But it certainly is possible; just be sure to test it out before putting it into production.
If it were me I would go for a 3ware RAID card and do it right; the only time I might use software RAID these days is for RAID 0, which I haven't done since probably 2001. I was considering it for some new web servers, because it didn't matter if a disk died even if it took the whole box with it and performance was the most important thing, but we ended up going with hardware RAID 1+0 anyway.
nate
James Bensley wrote:
I have some questions regarding a new home server I am going to build in the hopefully very near future (ASAP, I just need to finish planning everything and this is the penultimate hurdle). I will be creating a software RAID...
Let's say I have three 1TB SATA II drives "knocking" around, each made by a different manufacturer. I am going to guess that these couldn't be used in a RAID 5? Or could they?
However, could a similar result of 2TB of data with redundancy be achieved with JBOD?
If you use software RAID to combine the JBOD disks, yes.
Also, regarding RAID 5: three drives of data to one for parity is the maximum ratio, I believe? I.e. to expand this by adding another data drive, the original parity drive would no longer cover it and another would be required. Is this correct?
Yes, but if I were doing it I'd either run two drives in RAID 1, or get another drive and have either two RAID 1 mount points or a RAID 0+1. The advantage of RAID 1 is that you can recover the data from any single disk, and it still runs at full speed even with a missing disk. RAID 5 works, but there is a performance hit - and a big one when a disk is bad.
One more question, about hot-swappable drives. I understand that you can create RAID arrays with and without hot-swappable drives, but I am confused by this concept. In my experience with RAID I have only ever dealt with a RAID 1 that had degraded. I simply set the drive as offline, replaced it, set it to online, and the RAID rebuilt itself, all without restarting the server, and operation was never interrupted. So we can presume the server had hot-swappable drives enabled, yes? (It was a hardware RAID.) With a software RAID is this still achievable?
SATA drives on all but a few controllers are designed to be hot-swappable, but you need a special drive bay that permits swapping. It probably doesn't matter for a home server, where you can shut it down for repair anyway. With software RAID, after the drive is recognized (either via hotswap or a reboot) you need to fdisk a matching partition and then use an 'mdadm --add ...' command to sync the new drive into the RAID.
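The partition-matching step can be done by hand in fdisk, or by copying the layout from a surviving member of the array; for example (assuming old-style MBR partition tables and placeholder device names):

  # copy the partition layout from a healthy member to the replacement disk
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  # then sync the new partition into the array
  mdadm /dev/md0 --add /dev/sdb1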
Thanks, all, for the promptness of your responses and the details you have provided; it is greatly appreciated.
Since this is a home media server, performance isn't imperative, and mirroring and RAID 10/0+1 are too expensive, so I am going to use my three existing drives of different manufacture (they are all reasonably new - each was purchased at a different time this year) and throw in two more, giving four drives for data and one for parity. That will suffice in terms of storage size (4TB), and a ratio of four data drives to one parity drive is as far as I feel comfortable going in terms of hardware redundancy.
I have read a few articles about mdadm and have devised the following strategy in my head; I am looking for some confirmation of its theoretical success:
Two of my existing three drives are full of data. I will purchase two more drives to go with my existing blank drive and set them up as a RAID 5, then copy my existing data onto the new filesystem one drive at a time; after each drive has been copied I will use mdadm --grow to incorporate that drive into the RAID before adding the next. Can anyone point out a flaw in this plan, or a more preferred method of doing this, or have I, dare I say it, got it right?
Also, I was initially going to get a PCI-E SATA card to connect up all these drives and use mdadm to make a software RAID. For this particular setup, is that ill advised, or do people think it will suffice? (Simply because my budget is low and hardware RAID controller cards are more expensive, in my experience - but if you know of a good bargain I'm all ears!) Just quickly, this brings me back to the issue of hot-swappable drives. Uptime isn't critical as it's a home server, so I don't believe a hot spare is needed; in the event of a drive failure I can shut down the server, fire up single-user mode, have the RAID filesystem unmounted, and then replace the failed drive and rebuild the array. Is this correct?
Thank you for your time guys, it has been very much appreciated.
Regards, James ;)
Joan Crawford - "I, Joan Crawford, I believe in the dollar. Everything I earn, I spend." - http://www.brainyquote.com/quotes/authors/j/joan_crawford.html
James Bensley wrote:
suffice? (Simply because my budget is low and hardware RAID controller cards are more expensive, in my experience - but if you know of a good bargain I'm all ears!)
Budget RAID cards should be avoided like swine flu; you're better off with non-RAID cards and Linux software RAID, like you said...
With the exception of older cards, that is - I still run a pair of 3ware 8006-2 cards (2-port, as the name implies); they look to run about $150 at the moment. The cards came out about 5 years ago so their performance isn't great, but I trust them more than other controllers with my own data. Eventually I will replace the systems with something more modern (both systems are ~4-5 years old as well).
nate
James Bensley wrote:
I have read a few articles about mdadm and have devised the following strategy in my head; I am looking for some confirmation of its theoretical success:
Two of my existing three drives are full of data. I will purchase two more drives to go with my existing blank drive and set them up as a RAID 5, then copy my existing data onto the new filesystem one drive at a time; after each drive has been copied I will use mdadm --grow to incorporate that drive into the RAID before adding the next. Can anyone point out a flaw in this plan, or a more preferred method of doing this, or have I, dare I say it, got it right?
Growing RAID 5 by adding drives is an inherently dangerous operation, as every single stripe of the array has to be reorganized, and it can take hours or even days.
http://linux-raid.osdl.org/index.php/Growing
I prefer, wherever possible, to build a new RAID, copy the data, and, once I'm assured the copy is correct, repurpose the old drives.
James Bensley wrote:
Thanks, all, for the promptness of your responses and the details you have provided; it is greatly appreciated.
Since this is a home media server, performance isn't imperative, and mirroring and RAID 10/0+1 are too expensive, so I am going to use my three existing drives of different manufacture (they are all reasonably new - each was purchased at a different time this year) and throw in two more, giving four drives for data and one for parity. That will suffice in terms of storage size (4TB), and a ratio of four data drives to one parity drive is as far as I feel comfortable going in terms of hardware redundancy.
I have read a few articles about mdadm and have devised the following strategy in my head; I am looking for some confirmation of its theoretical success:
Two of my existing three drives are full of data. I will purchase two more drives to go with my existing blank drive and set them up as a RAID 5, then copy my existing data onto the new filesystem one drive at a time; after each drive has been copied I will use mdadm --grow to incorporate that drive into the RAID before adding the next. Can anyone point out a flaw in this plan, or a more preferred method of doing this, or have I, dare I say it, got it right?
The flaw is that you don't have a backup. And if you grow the space of a RAID you also have to resize the filesystem to use it.
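As a rough sketch of that two-step process (placeholder device names, ext3 assumed for the filesystem, and only to be attempted with a backup in hand):

  # add the new disk (it joins as a spare), then reshape to use it as a data disk
  mdadm /dev/md0 --add /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=4
  # once the reshape finishes (watch /proc/mdstat), grow the filesystem to match
  resize2fs /dev/md0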
Also, I was initially going to get a PCI-E SATA card to connect up all these drives and use mdadm to make a software RAID. For this particular setup, is that ill advised, or do people think it will suffice? (Simply because my budget is low and hardware RAID controller cards are more expensive, in my experience - but if you know of a good bargain I'm all ears!) Just quickly, this brings me back to the issue of hot-swappable drives. Uptime isn't critical as it's a home server, so I don't believe a hot spare is needed; in the event of a drive failure I can shut down the server, fire up single-user mode, have the RAID filesystem unmounted, and then replace the failed drive and rebuild the array. Is this correct?
Yes, but again, backups are a good thing... If you hit a bad sector on one of the other drives during the rebuild, you lose - and this is moderately likely, since the rebuild has to read the whole of every disk, parts of which probably haven't been accessed in a long time.
On Wed, Nov 11, 2009 at 12:07:21AM +0000, James Bensley wrote:
Also, I was initially going to get a PCI-E SATA card to connect up all these drives and use mdadm to make a software RAID. For this particular setup, is that ill advised, or do people think it will suffice? (Simply because my budget is low and hardware RAID controller cards are more expensive, in my experience - but if you know of a good bargain I'm all ears!)
The 3ware 9550SXU-4LP is ~US$300 (the -8LP is almost $500). If you only need to support four disks in your RAID this seems cheap (but your low budget may be different from mine). A bunch of other people on the list swear by Areca, which I believe is slightly less expensive than 3ware.
Just quickly, this brings me back to the issue of hot-swappable drives. Uptime isn't critical as it's a home server, so I don't believe a hot spare is needed; in the event of a drive failure I can shut down the server, fire up single-user mode, have the RAID filesystem unmounted, and then replace the failed drive and rebuild the array. Is this correct?
Theoretically, yes. The hot spare saves you from being unable to perform that shutdown before another drive fails and kills the array. If your motherboard supports SATA, you could put four disks on your new SATA card and two disks on the motherboard to accommodate a five-disk RAID 5 plus a hot spare (or run a four-disk RAID 5 plus hot spare if you don't want to add more disks). You don't need a hotswap bay for this; simply let the RAID array rebuild onto the hot spare, then shut down and remove/replace the dead drive. The hotswap bay is what helps uptime; the hot spare is what helps reliability.
One other suggestion would be to run five disks in a RAID 6 with no hot spare. When one disk dies you get (more or less) performance equivalent to RAID 5 and still have a redundant disk; when you replace the dead disk you can rebuild the RAID 6 and have two redundant disks again. It's slightly more fault tolerant than RAID 5, with the drawback that all the disks are in use, instead of having one disk that sits idle until needed. (I haven't benchmarked RAID 5 vs. RAID 6, so perhaps someone else can chime in.)
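Creating that layout with mdadm would look roughly like this (placeholder device names again):

  # five disks, double parity, no hot spare
  mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1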
If you are really trying to go super-cheap, then you can skip the hot spare and go RAID 5, but be sure to have a computer store close by in case you do lose a drive, because if you lose another drive before you get to the store, say bye-bye to your data. (Don't forget the backups!)
You probably don't even need single-user mode to rebuild the array with the new disk; just make sure your disks are marked clearly so you know which one has failed.
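To see which member has failed before you power down, something like this is usually enough (smartctl assumes the smartmontools package is installed):

  cat /proc/mdstat          # failed members are flagged with (F)
  mdadm --detail /dev/md0   # shows each member device and its state
  smartctl -i /dev/sdc      # print the drive's serial number so you can match it to the physical disk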
--keith
Just for a practical scenario, here's the machine I have at my home.
The front has four hot-swap drive bays, backed by a 3ware 4-port controller (IIRC a 9500 series). Two drives form a RAID 1, which is where everything is kept. One drive is a hot spare. The fourth drive is a backup drive: my /home is on an XFS filesystem, so I use xfsdump to back it up and store the dump on the fourth disk. Since it's hot-swap, I can periodically remove the fourth drive and replace it with an empty disk; I can then store the old drive ''safely'' somewhere (if I were smart I would take it to work, but I haven't been smart yet).
Now, to be fair, my storage needs are quite small: /home is only about 100GB (and is only 60% full). Obviously for storage in the multi-terabyte range you'll need more disks.
--keith
Ok, I'm back again...
Thanks again to all for the further replies and info; the list members' input is really appreciated.
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it? I have been trying to find some examples online of people using this card with CentOS in a software RAID, but nothing yet, so I wondered if anyone here has any input?
James Bensley wrote:
Ok, I'm back again...
Thanks again to all for the further replies and info; the list members' input is really appreciated.
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it? I have been trying to find some examples online of people using this card with CentOS in a software RAID, but nothing yet, so I wondered if anyone here has any input?
how about:
http://markmail.org/message/2odawealo6ktbz2b
and:
http://lists.centos.org/pipermail/centos-devel/2007-April/001565.html
and:
http://osdir.com/ml/linux-raid/2009-09/msg00340.html
Maybe it works fine on CentOS 5 with the kernel drivers?
-- Eero
On Mon, 2009-11-16 at 21:49 +0000, James Bensley wrote:
Ok, I'm back again...
Thanks again to all for the further replies and info; the list members' input is really appreciated.
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it? I have been trying to find some examples online of people using this card with CentOS in a software RAID, but nothing yet, so I wondered if anyone here has any input?
-- Regards, James ;)
If I am not mistaken, that card uses the Marvell MV88SX5081 chipset and the sata_mv module (after a little googling).
I have been reading around Google and found that support for this chipset (sata_mv) was added in CentOS 4.
I also saw some posts regarding instability with this driver... sort of a use at own risk scenario.
Some were saying disabling hw RAID helped, but I haven't used that card so I couldn't tell you. Maybe someone else on the list has used a card with the MV88SX5081 chipset?
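One quick sanity check before buying is to see whether the stock kernel ships the module and whether it binds to the card; something along these lines (chipset/module names are taken from the discussion above, not verified by me):

  modinfo sata_mv           # does the running CentOS kernel ship the driver?
  lspci | grep -i marvell   # is the card detected on the PCI bus?
  lsmod | grep sata_mv      # is the module actually loaded once the card is installed?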
I have only played with the AOC-USAS-L8i card, and that wasn't in Linux (it was for an OpenSolaris build); we just needed to buy SFF-8087 to 4x SATA cables to make it work. If you want to use SW RAID, I might suggest going for a card that just adds SATA ports and has no onboard RAID, but it is up to you.
Tait
James Bensley wrote:
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it?
Do you feel lucky?
To me, something that specifically calls out one version of a product suggests there is often a binary driver behind it, so compatibility with CentOS 5.x is not a sure thing.
I poked around quite a bit online but could not find any indication of this card/chipset's level of support in Linux. I saw a few people asking about support but no replies to any of them.
Myself, I would skip this card and go for something that specifically indicates it supports Red Hat 5.x - unless you can find something/someone that can tell you this card is supported on CentOS/RHEL 5.x.
I use PCI-X 3ware 8006-2 RAID controllers in two of my own personal systems and they work pretty well. More recently I got an ATTO SAS HBA for a system with a tape drive; they have lots of SAS and SATA HBAs and lots of Linux support, but I don't see anything that is PCI-X.
nate
nate wrote:
James Bensley wrote:
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it?
Do you feel lucky?
To me, something that specifically calls out one version of a product suggests there is often a binary driver behind it, so compatibility with CentOS 5.x is not a sure thing.
I poked around quite a bit online but could not find any indication of this card/chipset's level of support in Linux. I saw a few people asking about support but no replies to any of them.
Myself, I would skip this card and go for something that specifically indicates it supports Red Hat 5.x - unless you can find something/someone that can tell you this card is supported on CentOS/RHEL 5.x.
I use PCI-X 3ware 8006-2 RAID controllers in two of my own personal systems and they work pretty well. More recently I got an ATTO SAS HBA for a system with a tape drive; they have lots of SAS and SATA HBAs and lots of Linux support, but I don't see anything that is PCI-X.
I was having regular filesystem problems with an Adaptec and a Promise card installed in the same box, and they all went away when I swapped them out for the 8-port Marvell card above - which I got because it was recommended for Solaris and I might eventually switch (especially if Nexenta releases a version that includes ZFS dedup soon).
Thanks for the speedy replies guys,
I had an itch, so I scratched it: in the back of my head I couldn't help but think I had misread the details about my mobo and that it was PCI-E, not PCI-X, and I was right, so the previous card is no longer an option - although I wasn't liking the look of it anyway, thanks to the list members finding various problems for me (thanks guys, saved me some time and hassle there!). So instead I am looking at one of these (http://www.adaptec.com/en-US/support/sata/sataii/AAR-1430SA/). It's based on the Marvell 88SX7042 chipset, which seems to work under CentOS 5. Hurray!
-- Regards, James ;)
Mike Ditka - "If God had wanted man to play soccer, he wouldn't have given us arms."
On Tue, Nov 17, 2009 at 12:53 AM, James Bensley jwbensley@gmail.com wrote:
Thanks for the speedy replies guys,
I had an itch, so I scratched it: in the back of my head I couldn't help but think I had misread the details about my mobo and that it was PCI-E, not PCI-X, and I was right, so the previous card is no longer an option - although I wasn't liking the look of it anyway, thanks to the list members finding various problems for me (thanks guys, saved me some time and hassle there!). So instead I am looking at one of these (http://www.adaptec.com/en-US/support/sata/sataii/AAR-1430SA/). It's based on the Marvell 88SX7042 chipset, which seems to work under CentOS 5. Hurray!
I have a card with the MV88SX6081 and can confirm that it works... but not with the stock kernel; you have to build the module on your own. So if you are planning to keep the system itself on hard drives connected to the motherboard, it can work out just fine.
James Bensley wrote:
Ok, I'm back again...
Thanks again to all for the further replies and info; the list members' input is really appreciated.
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it? I have been trying to find some examples online of people using this card with CentOS in a software RAID, but nothing yet, so I wondered if anyone here has any input?
Yes, I have one and it works fine with the stock drivers in CentOS. Most of the drives are in hot-swap trays and I don't have any trouble swapping them in and out of software RAID 1 sets. As long as you have a PCI-X slot it should work for you.
On Mon, 2009-11-16 at 21:49 +0000, James Bensley wrote:
So I have found this card and wondered if anyone has a second opinion on it (http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm). Seeing as the website states that the CD that comes with the card has Red Hat drivers on it, and I will be using CentOS 5.4 i386 on my little home server, everything should work just dandy, shouldn't it?
Supermicro has a good pre-sales team. I would call them and ask. They should be able to give you a definitive answer.
-- Neil Aggarwal, http://UnmeteredVPS.net
At Tue, 10 Nov 2009 21:26:37 +0000 CentOS mailing list centos@centos.org wrote:
Hey Guys,
I have some questions regarding a new home server I am going to build in the hopefully very near future (ASAP, I just need to finish planning everything and this is the penultimate hurdle). I will be creating a software RAID...
Let's say I have three 1TB SATA II drives "knocking" around, each made by a different manufacturer. I am going to guess that these couldn't be used in a RAID 5? Or could they?
They probably could be. The RAID system will use the size of the 'smallest' disk as the base size for each disk. That is, if your drives were *actually* 1.02TB, 0.985TB, and 1.12TB, the RAID system would use 0.985TB of each disk, fully utilizing the 0.985TB disk and leaving a 'small' amount of unused space on each of the 1.02TB and 1.12TB disks. In practice the sizes will be much closer than that - you might be losing only a few sectors here and there.
However, could a similar result of 2TB of data with redundancy be achieved with JBOD?
Also, regarding RAID 5: three drives of data to one for parity is the maximum ratio, I believe? I.e. to expand this by adding another data drive, the original parity drive would no longer cover it and another would be required. Is this correct?
No. You can use as many disks as you like in a RAID 5. The 'parity' is not actually one bit; it consumes the equivalent of one whole disk, distributed across the array. The capacity of an N-disk RAID 5 (where N >= 3) is (N-1)*sizeof(one disk).
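For example, five 1TB drives in a RAID 5 give (5-1) x 1TB = 4TB of usable space; the parity is spread across all five drives but only ever costs the equivalent of one of them.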
One more question, about hot-swappable drives. I understand that you can create RAID arrays with and without hot-swappable drives, but I am confused by this concept. In my experience with RAID I have only ever dealt with a RAID 1 that had degraded. I simply set the drive as offline, replaced it, set it to online, and the RAID rebuilt itself, all without restarting the server, and operation was never interrupted. So we can presume the server had hot-swappable drives enabled, yes? (It was a hardware RAID.) With a software RAID is this still achievable?
Hardware RAID systems almost always had hot-swappable drives, especially SCSI ones, and many of the old hardware RAID SCSI server boxes were equipped with hot-swappable drive bays.
For software RAID, it depends on the controller and what the driver for that controller supports. It also depends on how the drives are mounted. You *can* 'hot' [un]plug conventionally mounted drives (e.g. remove the cover of the *running* machine, reach in, and pull the data and power plugs off the disk in the *correct* order), but it is tricky (and not really recommended). It is far easier to get a hot-swap chassis. If the controller supports hot swapping AND the driver supports the controller's hot swapping, then yes, you can hot swap with software RAID.
With RAID 5, what you want is a 'hot' spare: an additional disk that is not part of the array but is associated with it. What you do is 'fail' the drive you want to pull; the system will then 'rebuild' itself using the 'hot' spare. You then 'remove' the 'failed' disk from the RAID set, spin it down (which requires the proper incantation, such as with sg_start from sg3_utils), pull the drive, insert a new drive, scan for the new drive (e.g. sg_scan), spin it up (sg_start), partition it (if necessary), and then add it as a hot spare.
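For what it's worth, declaring a hot spare to mdadm looks roughly like this (a sketch with placeholder device names; the exact sg_start/sg_scan incantations vary with the sg3_utils version, so check the man pages on your system):

  # create a 4-disk RAID 5 with one hot spare...
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
  # ...or add a spare to an existing, healthy array
  mdadm /dev/md0 --add /dev/sdf1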
Thank you for reading.
Regards, James ;)
Charles de Gaulle - "The better I get to know men, the more I find myself loving dogs."
Robert Heller wrote:
No. You can use as many disks as you like in a RAID 5. The 'parity' is not actually one bit; it consumes the equivalent of one whole disk, distributed across the array. The capacity of an N-disk RAID 5 (where N >= 3) is (N-1)*sizeof(one disk).
Within reason. It's usually not a good idea to make a single RAID 5 set much over 7-8 disks, as the repair time becomes brutal and the performance degradation after a single drive failure is extreme. For very large numbers of drives, build multiple RAID 5 sets and stripe them (RAID 5+0, sometimes called RAID 50).
I will say it again: I really prefer mirroring and RAID 10/0+1... disks are cheap; data loss and/or downtime is expensive.