We have a Dell 6800 server with 12 internal disks. The OS is CentOS 4.6 and the SCSI controller is a PERC 4e/di.
We plan to configure 4 disks (5, 8, 9, 10) as RAID 5 or RAID 50. This logical volume will be used as a file system to store database backup files.
Can anyone tell me which one gives better performance?
Thanks.
mcclnx mcc wrote:
We have a Dell 6800 server with 12 internal disks. The OS is CentOS 4.6 and the SCSI controller is a PERC 4e/di.
We plan to configure 4 disks (5, 8, 9, 10) as RAID 5 or RAID 50. This logical volume will be used as a file system to store database backup files.
Can anyone tell me which one gives better performance?
RAID 50 requires two or more RAID 5 volumes.
With 4 disks, that's just not an option.
For file storage (including backup files from a database), RAID 5 is probably fine... for primary database tablespace storage, I'd only use RAID 1 or RAID 10.
John R Pierce wrote:
RAID 50 requires two or more RAID 5 volumes.
With 4 disks, that's just not an option.
For file storage (including backup files from a database), RAID 5 is probably fine... for primary database tablespace storage, I'd only use RAID 1 or RAID 10.
RAID-10 has only one perfect application, and that's with exactly four disks. It can't use fewer, and the next larger step is 8, where other flavors of RAID usually make more sense. But, for the 4-disk configuration, it's unbeatable unless you need capacity more than speed and redundancy. (In that case, you go with RAID-5.)
RAID-10 gives the same redundancy as RAID-50: guaranteed tolerance of a single disk lost, and will tolerate a second disk lost at the same time if it's in the other half of the RAID. RAID-10 may also give better performance than RAID-50. I'm not sure because you're trading off more spindles against more parity calculation with the RAID-50. At any rate, RAID-10 shouldn't be *slower*.
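For a rough sense of that trade-off with four hypothetical 146 GB disks: RAID-5 yields (4 - 1) x 146 = 438 GB usable and tolerates any single failure; RAID-10 yields (4 / 2) x 146 = 292 GB usable, tolerates any single failure, and tolerates a second failure as long as it falls in the other mirror pair.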
Warren Young wrote:
RAID-10 has only one perfect application, and that's with exactly four disks. It can't use fewer, and the next larger step is 8, where other flavors of RAID usually make more sense. But, for the 4-disk configuration, it's unbeatable unless you need capacity more than speed and redundancy. (In that case, you go with RAID-5.)
RAID-10 gives the same redundancy as RAID-50: guaranteed tolerance of a single disk lost, and will tolerate a second disk lost at the same time if it's in the other half of the RAID. RAID-10 may also give better performance than RAID-50. I'm not sure because you're trading off more spindles against more parity calculation with the RAID-50. At any rate, RAID-10 shouldn't be *slower*.
It seems like you know / like RAID-10 a lot :)
So, how does it perform with 6 disks, for example? Say I have 3 HDDs in RAID-0, and another 3 in RAID-0, then RAID-1 the two RAID-0 stripes. How well would that work? And what would you recommend for 8 / 10 HDDs?
on 5-22-2008 9:12 AM Rudi Ahlers spake the following:
So, how does it perform with 6 disks, for example? Say I have 3 HDDs in RAID-0, and another 3 in RAID-0, then RAID-1 the two RAID-0 stripes. How well would that work? And what would you recommend for 8 / 10 HDDs?
What you are describing would be RAID 0+1, not RAID 10. Most docs I have read state that RAID 10 is more fault tolerant. Here is one that explains it better: http://www.pcguide.com/ref/hdd/perf/raid/levels/multXY-c.html
Rudi Ahlers wrote:
So, how does it perform with 6 disks, for example? Say I have 3 HDDs in RAID-0, and another 3 in RAID-0, then RAID-1 the two RAID-0 stripes.
There are actually two kinds of RAID-10. Some like to say RAID-0+1 or RAID-1+0 or things like that to distinguish them. It's a matter of whether it's mirrors over stripes or stripes over mirrors. You're talking about mirrors over stripes, but I'm talking about doing it the other way around.
Your way has the advantage of letting you add disks in pairs, but in exchange you get only single-disk redundancy: if a second disk goes out, your array is gone, no matter which disk it is.
If you do it the other way, you have to use groups of 4 (two mirrors striped together) but you get the advantage that with a single disk missing, you can lose another if it's in the other mirror. Of course, if you lose two in the same mirror, you're toast.
And what would you recommend for 8 / 10 HDDs?
As I said, RAID-5 or -6 usually makes more sense with so many spindles. If you're talking RAID-10 (my way) with so many disks, it starts getting expensive with 8, 12, etc.
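For a concrete sketch of the difference using Linux md, assuming six hypothetical spare partitions (device names made up; the same idea applies in a hardware controller's BIOS):

# Mirrors over stripes (the layout Rudi described, often called RAID 0+1):
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1   # usable volume
# Stripes over mirrors (RAID 1+0) simply swaps the nesting:
# build RAID-1 pairs first, then stripe the resulting md devices with --level=0.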
I prefer raid level of ibm....
For Dell, you can find more information about RAID levels at
http://support.dell.com/support/edocs/storage/RAID/RAIDbk0.pdf
But add hot spare disks....
Nightduke
Your way has the advantage of letting you add disks in pairs, but in exchange you get only single-disk redundancy: if a second disk goes out, your array is gone, no matter which disk it is.
Nah, if you lose both disks that belong to the same stripe array, the other stripe array is still around.
Warren Young wrote:
If you do it the other way, you have to use groups of 4 (two mirrors striped together) but you get the advantage that with a single disk missing, you can lose another if it's in the other mirror. Of course, if you lose two in the same mirror, you're toast.
OK, so striping mirrors is more redundant then, from what you say? But it's limited to groups of 4 HDDs, which means a bigger chassis, and a motherboard / PCI controller that can support 8 HDDs if I want to add more?
But if I want to use 6+ drives, should I rather use RAID 6? How does RAID-6 perform in relation to RAID-5 or RAID-10 (RAID-0+1)?
OK, so striping mirrors is more redundant then, from what you say? But it's limited to groups of 4 HDDs, which means a bigger chassis, and a motherboard / PCI controller that can support 8 HDDs if I want to add more?
Huh? Says who? 6 disks = 3 mirrors = 3 striped mirrors, or 3x single-disk speed.
But if I want to use 6+ drives, should I rather use RAID 6? How does RAID-6 perform in relation to RAID-5 or RAID-10 (RAID-0+1)?
No comment on RAID 10... the new md raid10 module from Neil Brown is not available on CentOS 4, and I do not know how that module handles 6 disks.
However, using RAID 0+1 as Neil calls it, with 6 disks you get three mirrors that are striped, giving you the possibility of surviving three disk failures provided they are all from different mirrors, but you will also lose everything if both disks of any one mirror fail. RAID 6 gives you the guarantee of surviving any two disk failures. I have heard horror stories about md RAID 5, and besides, with 6 disks you would be better off using a hardware RAID card that supports a BBU cache, like the 3ware 9550 and above, to cut down on the data traffic over the bus if you want RAID 5/6.
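A minimal md sketch of that three-mirrors-striped layout, with hypothetical device names, plus a quick way to convince yourself about the failure domains:

mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # mirror 1
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1   # mirror 2
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1   # mirror 3
mdadm --create /dev/md13 --level=0 --raid-devices=3 /dev/md10 /dev/md11 /dev/md12   # stripe them
# simulate losing one disk per mirror: md13 keeps running, each mirror just degrades
mdadm /dev/md10 --fail /dev/sdb1
mdadm /dev/md11 --fail /dev/sdd1
mdadm /dev/md12 --fail /dev/sdf1
cat /proc/mdstat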
On Thu, May 22, 2008 at 12:01 PM, Warren Young warren@etr-usa.com wrote:
At any rate, RAID-10 shouldn't be *slower*.
I've actually seen equipment where RAID-10 was slower for reading than RAID-5 with the same number of disks. RAID-10 depends on the ability of the controller to balance reads between the two disks of a mirror (since both have the same information, it can choose which one to read from). Most implementations in cheap controllers (cheap as opposed to hundreds-of-thousands-of-dollars SAN controllers) do not implement this in the smartest possible way.
With RAID-5 there is no such choice; the information must be read from the disk that holds it, which means all implementations must do the right thing, and reads end up striped across all (but one) of the disks.
In any case, I've used RAID-5 with databases and it works pretty well. The biggest problem with RAID-5 (especially on big volumes) would be the time to reconstruct if a disk fails. But if you're using good quality SCSI drives that tend to last long, I would consider using RAID-5.
As with any other performance-related issue, the answer, as usual, comes from the benchmarks you do with your own application. Anything else would be just theoretical and might not even apply to your particular case.
HTH, Filipe
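A very crude sequential check, assuming a scratch mount point at /mnt/backup (cache effects will flatter the numbers, so treat it as a sanity test and prefer timing your real backup and restore jobs):

dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=4096 && sync   # sequential write
dd if=/mnt/backup/testfile of=/dev/null bs=1M                      # sequential read
rm -f /mnt/backup/testfile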
You're going to need two RAID controllers and 6 drives to do RAID 50. RAID 50 will be faster, but costs more in drives and controllers.
Jason www.cyborgworkshop.org
mcclnx mcc wrote:
We have a Dell 6800 server with 12 internal disks. The OS is CentOS 4.6 and the SCSI controller is a PERC 4e/di.
We plan to configure 4 disks (5, 8, 9, 10) as RAID 5 or RAID 50. This logical volume will be used as a file system to store database backup files.
Can anyone tell me which one gives better performance?
Thanks.
Jason Clark wrote:
You're going to need two RAID controllers and 6 drives to do RAID 50. RAID 50 will be faster, but costs more in drives and controllers.
Jason www.cyborgworkshop.org
First, Jason, please do not top-post! It makes life harder on mailing lists.
http://www.centos.org/modules/tinycontent/index.php?id=16 (item 2, "Guidelines for CentOS Mailing List posts")
You do not need two (2) raid controllers unless you want to have redundancy at the controller level. Adaptec, 3Ware, etc do RAID 50. For RAID 50, you need at least 6 disks.
http://en.wikipedia.org/wiki/RAID
For a database, I'd go with RAID 10. As Joseph pointed out in a previous post, RAID 5 rebuilding would slow the array down.
As for RAID 10, I didn't run extensive benchmarks, but here are the rough results I got with an Adaptec 3405 and four (4) Seagate 15K SAS drives:
RAID 5: Read = 170 MiB/s Write = 135 MiB/s
RAID 10: Read = 170 MiB/s Write = 160 MiB/s
And the write gap should widen in favor of RAID 10 as one adds disks (provided the controller uses more PCIe lanes than the Adaptec 3405, which uses 4, or even uses the PCI-X bus). RAID 5 has to do XOR parity calculations.
Guy Boisvert, ing. IngTegration inc.
On Thu, May 22, 2008 at 7:12 PM, Guy Boisvert boisvert.guy@videotron.ca wrote:
You do not need two (2) raid controllers unless you want to have redundancy at the controller level. Adaptec, 3Ware, etc do RAID 50. For RAID 50, you need at least 6 disks.
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
Nowadays hardware RAID frightens me because of the need to keep spare RAID controllers for every hardware RAID configuration I have. They are neither interchangeable nor easily recoverable.
md RAID 10 can be set up with any number of disks (at least 3, but better check with Google).
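For what it's worth, a sketch of an md raid10 array over an odd number of disks, with hypothetical device names (this needs the raid10 personality, which, as noted elsewhere in this thread, is not in CentOS 4):

# near-2 layout keeps two copies of every chunk, spread across the three disks
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --detail /dev/md0     # usable size is roughly half the raw capacity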
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
Pretty strong opinion that would be disputed by many, don't you think? I would venture to say that any large system bound by SLAs with five nines, etc. would have very good equipment, all using hardware-based controllers. Not to mention you just invalidated almost every SAN in production, of which there are many :)
jlc
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
This was valid until...quite a few years ago.
Nowadays hardware RAID frightens me because of the need to keep spare RAID controllers for every hardware RAID configuration I have. They are neither interchangeable nor easily recoverable.
You seem to have been living under a rock for the last half decade.
md RAID 10 can be set up with any number of disks (at least 3, but better check with Google).
Hmm, I think your advice must be taken with a grain of salt. Have you actually tried to do what you suggest? In any case, I will give you the benefit of the doubt that you just made a typo.
On Fri, May 23, 2008 at 4:19 AM, Christopher Chan christopher@ias.com.hk wrote:
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
This was valid until...quite a few years ago.
Have hardware RAID vendors open-sourced their firmware, then?
Nowadays hardware RAID frightens me because of the need to keep spare RAID controllers for every hardware RAID configuration I have. They are neither interchangeable nor easily recoverable.
You seem to have been living under a rock for the last half decade.
For each hardware RAID configuration I keep a redundant RAID controller. In case of controller failure, it's the best way to recover the data on my disks. I tried simple test cases once (yes, within the last half decade) and most failed except simple RAID-1 configurations.
md RAID 10 can be set up with any number of disks (at least 3, but better check with Google).
Hmm, I think your advice must be taken with a grain of salt. Have you actually tried to do what you suggest? In any case, I will give you the benefit of the doubt that you just made a typo.
mdadm raid10 is neither RAID 1+0 nor RAID 0+1. Go check man mdadm or Google. Each stripe is written to 2 different disks in a rolling fashion, and the loss of 1 disk in a 3-disk configuration can be recovered from online.
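A sketch of that online recovery on such an array, with hypothetical names, assuming /dev/sdc1 is the dead member:

mdadm /dev/md0 --fail /dev/sdc1     # only needed if the kernel has not already failed it
mdadm /dev/md0 --remove /dev/sdc1
# swap in the replacement drive, partition it the same way, then:
mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat                    # rebuild runs while the array stays in service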
Linux wrote:
On Fri, May 23, 2008 at 4:19 AM, Christopher Chan christopher@ias.com.hk wrote:
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
This was valid until...quite a few years ago.
Have hardware RAID vendors open-sourced their firmware, then?
So? Has the vendor of your motherboard open-sourced their firmware? Do you flash open-source BIOS code into your motherboard's chip if not?
Nowadays hardware RAID frightens me because of the need to keep spare RAID controllers for every hardware RAID configuration I have. They are neither interchangeable nor easily recoverable.
You seem to have been living under a rock for the last half decade.
For each hardware RAID configuration I keep a redundant RAID controller. In case of controller failure, it's the best way to recover the data on my disks. I tried simple test cases once (yes, within the last half decade) and most failed except simple RAID-1 configurations.
Sorry, I have never had a 3ware card fail on me during my four years at Outblaze Ltd., and besides, other 3ware users just had to plug in another card and they got all their data back. Of course, I have heard horror stories about other brands like Mylex, which might act up on a reboot.
md RAID 10 can be set up with any number of disks (at least 3, but better check with Google).
Hmm, I think your advice must be taken with a grain of salt. Have you actually tried to do what you suggest? In any case, I will give you the benefit of the doubt that you just made a typo.
mdadm raid10 is neither RAID 1+0 nor RAID 0+1. Go check man mdadm or Google. Each stripe is written to 2 different disks in a rolling fashion, and the loss of 1 disk in a 3-disk configuration can be recovered from online.
Oh, you were talking about that new module that is not available on CentOS 4. That is the problem these days: terminology is not necessarily uniform. Sorry, no experience with that particular module, and I think this should clear up a lot of the misunderstanding when answering questions about how to make a "raid10" array during installation.
Linux wrote:
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
"More intelligently designed" -> Could you please tell us more on this one?
Nowadays hardware RAID frightens me because of the need to keep spare RAID controllers for every hardware RAID configuration I have. They are neither interchangeable nor easily recoverable.
md RAID 10 can be set up with any number of disks (at least 3, but better check with Google).
Not easily recoverable? I did recoveries many times without a hitch (Adaptec, 3Ware, LSI, PERC)!
As for RAID 10 with 3 disks, mmm... go see:
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks
Lastly, it's kinda strange that your name is "Linux": maybe you're young and your parents decided to honor this great OS! Well, I may name my next child "Cento"!!! ;-)
Hey, have a nice day "Linuxito" !
Guy Boisvert, ing. IngTegration inc.
On Fri, May 23, 2008 at 8:28 AM, Guy Boisvert boisvert.guy@videotron.ca wrote:
And stick with md RAID 10 (also known as software RAID) because it is much more intelligently designed than any closed-source embedded RAID controller.
"More intelligently designed" -> Could you please tell us more on this one?
Simple answer: open source (and for a long time). I guess you know what that means. But I wonder if the source of Adaptec's RAID controller firmware has been opened in recent years.
Nowadays hardware RAID frightens me because of the need to keep spare RAID controllers for every hardware RAID configuration I have. They are neither interchangeable nor easily recoverable.
md RAID 10 can be set up with any number of disks (at least 3, but better check with Google).
Not easily recoverable? I did recoveries many times without a hitch (Adaptec, 3Ware, LSI, PERC)!
Try recovering failed 3Ware disks with an Adaptec controller, then. Nearly every vendor has its own way of doing things. Yes, mostly documented, but not interchangeable. And I do not mean only RAID-1.
As for RAID 10 with 3 disks, mmm... go see:
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks
mdadm raid10 is neither 1+0 nor 0+1, so 3 disks are enough to supply a minimum level of redundancy. You have 2 copies of each stripe spread across 2 of the 3 disks. But in a 3-disk configuration, loss of 2 disks means total loss. Go check man mdadm.
Lastly, it's kinda strange that your name is "Linux": maybe you're young and your parents decided to honor this great OS! Well, I may name my next child "Cento"!!! ;-)
Well, my parents taught me to understand what I read better than you do (although I'm not a native English speaker).
Hey, have a nice day "Linuxito" !
Thanks, buddy.
And for reference, try reading this [1]
I do not want to start a flame war, just to share my experience with different hardware. This software-versus-hardware RAID comparison excludes SANs and other external RAID solutions. Externally attached storage is outside the scope of this discussion. Externally connected solutions can obviously be SAN, software RAID, hardware RAID, or a combination thereof. [1]
Linux wrote:
On Fri, May 23, 2008 at 8:28 AM, Guy Boisvert boisvert.guy@videotron.ca wrote:
"More intelligently designed" -> Could you please tell us more on this one?
Simple answer: open source (and for a long time). I guess you know what that means. But I wonder if the source of Adaptec's RAID controller firmware has been opened in recent years.
Well, I respect open source (and your opinion) very much, but your comparison implies that you had access to Adaptec's code! Maybe you really had access, I don't know. If that's the case, then thank you for having shared this knowledge.
Not easily recoverable? I did recoveries many times without a hitch (Adaptec, 3Ware, LSI, PERC)!
Try recovering failed 3Ware disks with an Adaptec controller, then. Nearly every vendor has its own way of doing things. Yes, mostly documented, but not interchangeable. And I do not mean only RAID-1.
Are you talking about failed disks or a failed controller?
With the controller, it's easy with my backups (or a backup card). People with no tolerance for a failing controller arrange things accordingly, like I do.
With disks, irrelevant.
As for RAID 10 with 3 disks, mmm... go see:
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks
mdadm raid10 is neither 1+0 nor 0+1, so 3 disks are enough to supply a minimum level of redundancy. You have 2 copies of each stripe spread across 2 of the 3 disks. But in a 3-disk configuration, loss of 2 disks means total loss. Go check man mdadm.
Well, educate me (and maybe others), m8. I learn things every day and I like it. How would you do RAID 10 with 3 disks? I know how to do it with at least 4, then 6, and so on.
As for RAID-10, more below.
Well, my parents taught me to understand what I read better than you do (although I'm not a native English speaker).
Well, English is not my native language either! As for reading, I'm not that bad, but I may have misunderstood what you really meant. In that case, please forgive me! I didn't mean to be rude or anything.
Hey, have a nice day "Linuxito" !
Thanks, buddy.
And for reference, try reading this [1]
I do not want to start a flame war, just to share my experience with different hardware. This software-versus-hardware RAID comparison excludes SANs and other external RAID solutions. Externally attached storage is outside the scope of this discussion. Externally connected solutions can obviously be SAN, software RAID, hardware RAID, or a combination thereof. [1]
I agree that the compatibility is great with software RAID. However, there are some limitations at least in performance (Bus saturation, etc).
I "tried to read" your reference (the URL you kindly provided me, thanks) and, quote:
"When the top array is a RAID 0 (such as in RAID 10 and RAID 50) most vendors omit the "+", though RAID 5+0 is clearer."
"RAID 1+0: mirrored sets in a striped set (minimum four disks; even number of disks) provides fault tolerance and improved performance but increases complexity. The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives. In a failed disk situation RAID 1+0 performs better because all the remaining disks continue to be used. The array can sustain multiple drive losses so long as no mirror loses both its drives."
So they say, and correct me if I'm wrong, that RAID 10 is a RAID 1 of RAID 0. A mirror of stripe sets. You said it's not that, so I'm lost on this one.
|-------- Mirror --------|
|                        |
+-- D1a              +-- D1b
|                    |
| Striped            | Striped
|                    |
+-- D2a              +-- D2b
|                    |
...                  ...
|                    |
+-- Dna              +-- Dnb
So that's why I don't get what you mean by RAID 10 with 3 disks. Please explain.
Guy Boisvert, ing. IngTegration inc.
On Fri, May 23, 2008 at 5:31 PM, Guy Boisvert boisvert.guy@videotron.ca wrote:
Well, I respect open source (and your opinion) very much, but your comparison implies that you had access to Adaptec's code! Maybe you really had access, I don't know. If that's the case, then thank you for having shared this knowledge.
No need to see Adaptec's source code. Actively developed and widely used open-source projects have had great success over closed-source, big-budget projects. But you are correct on one point: I have no right to blame any vendor without a fair comparison. However, none of them tends to show their code for comparison. But again, going by the general trend, I have a strong feeling that I am right. Besides, this is all about the philosophy of open source. Linux kernel RAID still has my vote.
Nevertheless, closed-source firmware is everywhere; should we become paranoid? Maybe one day, but today Linux kernel software RAID is a good competitor in the RAID world, so I think it is a good choice for anyone paranoid about RAID. (And of course we should be; it is cheap and provides great redundancy, for both data safety and service continuity.)
As an example, I think IBM's SAN devices are great. I've used one and loved its performance, simplicity, and flexibility. No open-source software solution can easily compete with it.
Are you talking about failed disks or a failed controller?
With the controller, it's easy with my backups (or a backup card). People with no tolerance for a failing controller arrange things accordingly, like I do.
With disks, irrelevant.
This is what I'm trying to explain. Even the same vendor breaks compatibility between different cards, and I'm still talking about controller cards. I have to keep backup cards for all the configurations I have. After using a backup card, I either have to get a new backup for that controller or have to migrate my configuration to a new card.
For external solutions, I have only managed one configuration so far, so no comment/comparison on them.
Well, educate me (and maybe others), m8. I learn things every day and I like it. How would you do RAID 10 with 3 disks? I know how to do it with at least 4, then 6, and so on.
As for RAID-10, more below.
Do not ask me; ask the Linux kernel raid10 developer [2].
Well, English is not my native language either! As for reading, I'm not that bad, but I may have misunderstood what you really meant. In that case, please forgive me! I didn't mean to be rude or anything.
Please accept my apologies. I think I behaved somewhat rudely. No need to talk about such non-technical issues on this kind of list :)
I agree that the compatibility is great with software RAID. However, there are some limitations at least in performance (Bus saturation, etc).
I "tried to read" your reference (the URL you kindly provided me, thanks) and, quote:
"When the top array is a RAID 0 (such as in RAID 10 and RAID 50) most vendors omit the "+", though RAID 5+0 is clearer."
"RAID 1+0: mirrored sets in a striped set (minimum four disks; even number of disks) provides fault tolerance and improved performance but increases complexity. The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives. In a failed disk situation RAID 1+0 performs better because all the remaining disks continue to be used. The array can sustain multiple drive losses so long as no mirror loses both its drives."
So they say, and correct me if i'm wrong, that RAID10 is a RAID 1 of RAID 0. A mirror of stripe sets. You said it's not that, i lost you on this one.
Linux kernel raid10 is a combination of RAID 0 and RAID 1, not a layering of one on top of the other, as the developer himself says in [2]. So you have 3x500 GB disks and a 750 GB RAID volume.
[2] http://neil.brown.name/blog/20040827225440
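(With the default of two copies per chunk, usable space is roughly the raw capacity divided by two whatever the disk count, i.e. 3 x 500 GB / 2 = 750 GB, which matches the figure above.)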
Have a nice Sunday....
P.S.: Once more, I am sorry for hijacking someone else's thread, which is about RAID 5 / RAID 50, but I am currently using raid10 in many configurations, and even after some disk failures I have recovered easily. So I can honestly recommend raid10 over RAID 5(0) configurations.
So they say, and correct me if I'm wrong, that RAID 10 is a RAID 1 of RAID 0. A mirror of stripe sets. You said it's not that, so I'm lost on this one.
Heh, I dare say most of us are lost on this one. It is a blinking new module for md that is not available on CentOS 4. This should help us deal with any future questions from people asking, "How do I create a raid10 array for root during installation?" or similar. Answer: you cannot. But you can do RAID 1+0 / 0+1.
Christopher Chan wrote:
So they say, and correct me if I'm wrong, that RAID 10 is a RAID 1 of RAID 0. A mirror of stripe sets. You said it's not that, so I'm lost on this one.
Heh, I dare say most of us are lost on this one. It is a blinking new module for md that is not available on CentOS 4. This should help us deal with any future questions from people asking, "How do I create a raid10 array for root during installation?" or similar. Answer: you cannot. But you can do RAID 1+0 / 0+1.
Why are you still using CentOS 4?
Why are you still using CentOS 4?
Do you have an issue with CentOS 4? I prefer to wait for RH to work out most of the kinks in their new releases. CentOS 5 has new versions of various libraries and software. They have never been able to guarantee zero breakage. E.g., I have heard of Firefox constantly crashing - a known issue, too.
Christopher Chan wrote:
Why are you still using CentOS 4?
Do you have an issue with CentOS 4? I prefer to wait for RH to work out most of the kinks in their new releases. CentOS 5 has new versions of various libraries and software. They have never been able to guarantee zero breakage. E.g., I have heard of Firefox constantly crashing - a known issue, too.
Just asking. I don't use CentOS as a desktop OS, so the Firefox problem doesn't bother me at all, but CentOS 5 is an upgrade in many regards, and I find it very stable. I have yet to try RAID 10 with it, though - as soon as I can get my hands on enough spare HDDs :)
Just asking. I don't use CentOS as a desktop OS, so the Firefox problem doesn't bother me at all, but CentOS 5 is an upgrade in many regards, and I find it very stable. I have yet to try RAID 10 with it, though - as soon as I can get my hands on enough spare HDDs :)
I believe you cannot do it via the installer yet. Can anybody confirm the presence of the raid10 personality in CentOS 5?
On Mon, May 26, 2008 at 3:16 AM, Christopher Chan christopher@ias.com.hk wrote:
I believe you cannot do it via the installer yet. Can anybody confirm the presence of the raid10 personality in CentOS 5?
The installer does not offer raid10 as an option. I'm not sure whether the boot CD has this module or not, but after installing, it exists.
The current mdadm version in CentOS 5 is a little old (v2.5.4, 13 October 2006) and has a bug which sometimes kicks one drive from the raid10 array after the initial resync, and repeats the kick-after-resync when the drive is hot-added, again and again and again.
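A quick sanity check on a CentOS 5 box, assuming nothing unusual about the install:

modprobe raid10 && grep Personalities /proc/mdstat   # should list [raid10] once the module is loaded
mdadm --version                                      # shows the mdadm release, e.g. v2.5.4 here
modinfo raid10 | head -3                             # confirms the kernel module itself exists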
Linux wrote:
On Mon, May 26, 2008 at 3:16 AM, Christopher Chan christopher@ias.com.hk wrote:
I believe you cannot do it via the installer yet. Can anybody confirm the presence of the raid10 personality in CentOS 5?
The installer does not offer raid10 as an option. I'm not sure whether the boot CD has this module or not, but after installing, it exists.
The current mdadm version in CentOS 5 is a little old (v2.5.4, 13 October 2006) and has a bug which sometimes kicks one drive from the raid10 array after the initial resync, and repeats the kick-after-resync when the drive is hot-added, again and again and again.
In other words, broken. So do not use the raid10 personality on CentOS 5. Okay, back to striping mirrors, people.
Linux wrote:
The current mdadm version in CentOS 5 is a little old (v2.5.4, 13 October 2006) and has a bug which sometimes kicks one drive from the raid10 array after the initial resync, and repeats the kick-after-resync when the drive is hot-added, again and again and again.
Do you have some bug numbers?
The only bugs that exist are the ones that come with bug numbers; everything else is just user error.
On Mon, May 26, 2008 at 12:24 PM, Karanbir Singh mail-lists@karan.org wrote:
Linux wrote:
The current mdadm version in CentOS 5 is a little old (v2.5.4, 13 October 2006) and has a bug which sometimes kicks one drive from the raid10 array after the initial resync, and repeats the kick-after-resync when the drive is hot-added, again and again and again.
Do you have some bug numbers?
The only bugs that exist are the ones that come with bug numbers; everything else is just user error.
I guess I'm not the only one who is trying to be an a**hole :)
Ask the md RAID developer [1], because he mentions the bug there (and also elsewhere).
Go find the bug number yourself. If you have any counter-information, share it here, correct me, and let us all learn.
For everyone else: 60-70% of raid10 configurations may fail one drive with the current raid10 module shipped with CentOS 5, because it is somewhat old.
Thanks...
PS: For further information about bugs see [2].
[1] http://www.issociate.de/board/post/411069/RAID_10_resync_leading_to_attempt_... [2] http://en.wikipedia.org/wiki/Software_bug
Linux wrote:
On Mon, May 26, 2008 at 12:24 PM, Karanbir Singh mail-lists@karan.org wrote:
Linux wrote:
The current mdadm version in CentOS 5 is a little old (v2.5.4, 13 October 2006) and has a bug which sometimes kicks one drive from the raid10 array after the initial resync, and repeats the kick-after-resync when the drive is hot-added, again and again and again.
Do you have some bug numbers?
The only bugs that exist are the ones that come with bug numbers; everything else is just user error.
I guess I'm not the only one who is trying to be an a**hole :)
OK, so let me be clear about something here - if you are going to say there is a bug in the CentOS code, you need to back that up with a bug number at either bugzilla.redhat.com or bugs.centos.org. If you can't, then you are just a ranting idiot.
Also, the idea of CentOS and the whole enterprise platform seems new to you; version numbers are not the only thing you need to rely on to check functionality within the code base.
On Mon, May 26, 2008 at 1:48 PM, Karanbir Singh mail-lists@karan.org wrote:
OK, so let me be clear about something here - if you are going to say there is a bug in the CentOS code, you need to back that up with a bug number at either bugzilla.redhat.com or bugs.centos.org. If you can't, then you are just a ranting idiot.
There is definitely a bug in the raid10 module distributed in CentOS 5, which I ran into myself. I even gave you a reference to the md raid10 developer's own words, and so far none of the RHEL/CentOS changelogs mention whether some patch has dealt with this *bug* short of updating to a newer mdadm (which would solve this funny situation). And if there is one, I think *you* should show me the changelog or patch log, because you are the one disputing this here.
However, if you insist on seeing a bug number, I can create one for you on Bugzilla. On the other hand, you have the right not to believe me and to stumble onto it yourself one day.
Also, the idea of CentOS and the whole enterprise platform seems new to you; version numbers are not the only thing you need to rely on to check functionality within the code base.
I guess you are trying to say, "Hey folks, don't believe some anonymous egotistical idiot; use the raid10 module safely." Otherwise, please stop disparaging me. If you have any further knowledge, share it with us, or keep quiet.
Thanks...
Q: Why don't you, idiot "linux", just enter a new bug at either bugzilla.redhat.com or bugs.centos.org? Don't you know this is your responsibility as an open-source software user? A: Because after running into this *bug* and simply finding the workaround, I changed my RAID configurations to RAID 1 on the CentOS boxes. I cannot reproduce it now.
On Mon, May 26, 2008 at 3:05 AM, Linux linuxlist@gmail.com wrote:
[1] http://www.issociate.de/board/post/411069/RAID_10_resync_leading_to_attempt_...
At the risk of getting handed my head here, I offer the following:
[1] above refers to a bug in a "custom compiled unpatched 2.6.20 kernel" raid10 driver, which was being used with FC4. What this has to do with CentOS 4 or 5 I don't know, since we are not up to the 2.6.20 kernel yet.
"Linux," although this suggestion may be a little late, you might want to turn down the tone of your postings. The core developers on the CentOS team don't usually take well to insults, profanity and so on. Unless you are a Linux kernel developer or a member of the aforementioned team, you might read and write with a small, 4 or 5 kg grain of salt....
Regards.
mhr
On Mon, May 26, 2008 at 8:20 PM, MHR mhullrich@gmail.com wrote:
kernel" raid10 driver, which was being used with FC4. What this has to do with CentOS 4 or 5 I don't know, since we are not up to the 2.6.20 kernel yet.
According to me, it also exists in CentOS 5 with kernel 2.6.18-53.1.19.el5
"Linux," although this suggestion may be a little late, you might want to turn down the tone of your postings. The core developers on the CentOS team don't usually take well to insults, profanity and so on. Unless you are a Linux kernel developer or a member of the aforementioned team, you might read and write with a small, 4 or 5 kg grain of salt....
I accept that "a**hole", "keep quiet", etc. - even if meant in humor - were beyond the limits of respect, not only toward anyone with a title like "core developer" or "kernel developer" but probably toward any person on this high-level list. This can be taken as an apology to the whole audience...
As for this thread, here is my *toned-down* final note: if I were you, I'd think twice before using raid10 on the current kernel (2.6.18-53.1.19.el5 seems to be my latest installed), because I have a strong feeling that it may break down after a resync in some cases. At least, that is what I experienced, but I might be wrong, have misspelled something, misread some log, mistyped some commands, mixed it up with another distribution, etc... Try something else - RAID 5, 50, 60, anything you want; I have no experience with them on CentOS.
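If anyone does want to test it, a minimal way to watch for the symptom described, assuming a hypothetical /dev/md0:

watch -n 10 cat /proc/mdstat                                # follow the initial resync
mdadm --detail /dev/md0 | grep -Ei 'state|failed|removed'   # look for a kicked member afterwards
dmesg | grep -i 'md:'                                       # kernel messages about failed/kicked drives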
Have a light day :)
On Mon, May 26, 2008 at 11:06 AM, Linux linuxlist@gmail.com wrote:
In my experience, it also exists in CentOS 5 with kernel 2.6.18-53.1.19.el5.
I have no reason to doubt you since I don't use RAID at all (yet), but that being the case, I offer the following:
1) File a bug report with as much detail as you can, including, probably, your reference to Neil's thread on your [1] link to which my original reply was directed.
2) Recognize that, if this bug is validated (which it probably will be, if Neil is _the_ developer involved):
a) it won't be fixed in CentOS at all until the fix is propagated down from Red Hat (although there is a slim possibility that it might get fixed in a centosplus update);
b) it won't be fixed in Red Hat until the fix is propagated down from either the kernel developers or the driver writer (whichever one RH uses), and that may take some time.
3) Banging on the CentOS developers for a fix, or this list, probably won't get any of us far.
However, for myself, I thank you for raising the issue and providing a reference where we can all look at the details. As I said, I don't use RAID (yet), but if I ever do, I'll keep this issue in mind (or somewhere that has reliable recovery methods) and see if it's fixed.
Regards.
mhr