on 5-22-2008 9:58 PM Bahadir Kiziltan spake the following:
> You need at least 6 drives for RAID5. I don't know if the Perc 4e/Di allows configuring RAID5.
Where did you get this bit of information? You can create a raid 5 with 3 or more disks.
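For reference, RAID5 stripes data with one disk's worth of distributed parity, which is why 3 disks is the floor. A quick sketch of the capacity arithmetic (assuming equal-sized disks; the function name is just for illustration):

```python
def raid5_usable(n_disks: int, disk_gb: float) -> float:
    """Usable capacity of a RAID5 set: one disk's worth of space
    is consumed by the distributed parity, whatever the disk count."""
    if n_disks < 3:
        raise ValueError("RAID5 requires at least 3 disks")
    return (n_disks - 1) * disk_gb

# 3 x 500GB drives -> 1000GB usable; 6 x 500GB -> 2500GB usable
print(raid5_usable(3, 500))  # 1000
print(raid5_usable(6, 500))  # 2500
```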
Scott Silva wrote:
> on 5-22-2008 9:58 PM Bahadir Kiziltan spake the following:
>> You need at least 6 drives for RAID5. I don't know if the Perc 4e/Di allows configuring RAID5.
> Where did you get this bit of information? You can create a raid 5 with 3 or more disks.
> CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
3 drives is not really recommended, since if 1 dies, you'll probably lose the whole set. Rather use a minimum of 4 drives, where 1 drive is a hot spare.
I'm not a fan of RAID 5 at all since it can tolerate only one failure. Go with raid 10 or something like that which is able to handle more than one failure. Intermittent, uncorrectable sector failures during rebuilds are becoming an increasing problem with today's drives.
Rudi Ahlers wrote:
> Scott Silva wrote:
>> on 5-22-2008 9:58 PM Bahadir Kiziltan spake the following:
>>> You need at least 6 drives for RAID5. I don't know if the Perc 4e/Di allows configuring RAID5.
>> Where did you get this bit of information? You can create a raid 5 with 3 or more disks.
> 3 drives is not really recommended, since if 1 dies, you'll probably lose the whole set. Rather use a minimum of 4 drives, where 1 drive is a hot spare.
William Warren wrote:
> I'm not a fan of RAID 5 at all since it can tolerate only one failure. Go with raid 10 or something like that which is able to handle more than one failure. Intermittent, uncorrectable sector failures during rebuilds are becoming an increasing problem with today's drives.
Is that raid10 or raid 1+0 or raid 0+1? :D
At least for the latter two, their ability to handle more than one failure depends on which disks blow. Not sure how the raid10 module handles things.
Christopher Chan wrote:
> William Warren wrote:
>> I'm not a fan of RAID 5 at all since it can tolerate only one failure. Go with raid 10 or something like that which is able to handle more than one failure. Intermittent, uncorrectable sector failures during rebuilds are becoming an increasing problem with today's drives.
> Is that raid10 or raid 1+0 or raid 0+1? :D
> At least for the latter two, their ability to handle more than one failure depends on which disks blow. Not sure how the raid10 module handles things.
Whoever implements RAID10 will want RAID1+0, which is a stripe set of mirrors, rather than RAID0+1, which is a mirror of stripe sets.
The problem is twofold: 1) in a RAID0+1, a single drive failure on either side of the mirror will put the whole array into total failure jeopardy, and a failure on both sides is a total loss; 2) the pathway for simultaneous operations is cut down from (say X is an even number of disks) X reads, X/2 writes, to 2 reads, 1 write.
On a RAID5/6 array you are limited to a pathway of 1 read and 1 write at a time, and all writes must write across the entire stripe. So if you do choose RAID5/6, it is highly recommended to use a hardware RAID controller with a battery-backed (BBU) write-back and read-ahead cache, which can minimize the impact of this by caching a whole stripe set to write at once and having a stripe set of reads waiting for I/O requests.
For database log files and other applications that do a lot of random I/O it is recommended to use fast-RPM drives in a RAID10, which has the multiple pathways for reads and writes that will maximize the total number of random IOPS (I/Os per second).
Typically most vendors recommend a two-pronged approach: keep the database data files on a RAID5/RAID6 type array and keep the log files on a RAID10 array.
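The RAID5/RAID10 trade-off above can be roughed out with the usual write-penalty figures. A back-of-the-envelope sketch (the per-drive IOPS number and the helper names are assumptions for illustration, not a benchmark):

```python
# Rough random-IOPS estimate for an array of n drives.
# Write penalties: RAID10 = 2 (two mirror writes), RAID5 = 4
# (read data + read parity + write data + write parity),
# RAID6 = 6 (two parity blocks to update).
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def random_iops(level: str, n_drives: int, drive_iops: int,
                write_fraction: float) -> float:
    """Effective random IOPS after the backend cost of each write."""
    penalty = WRITE_PENALTY[level]
    raw = n_drives * drive_iops
    # Each logical write costs `penalty` backend I/Os; reads cost 1.
    return raw / (write_fraction * penalty + (1 - write_fraction))

# 6 drives at ~175 IOPS each (an assumed 15k-RPM figure), 50% writes:
for level in ("raid10", "raid5", "raid6"):
    print(level, round(random_iops(level, 6, 175, 0.5)))
```

The gap widens as the write fraction grows, which is exactly why write-mostly log files go on the RAID10.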
-Ross
______________________________________________________________________ This e-mail, and any attachments thereto, is intended only for use by the addressee(s) named herein and may contain legally privileged and/or confidential information. If you are not the intended recipient of this e-mail, you are hereby notified that any dissemination, distribution or copying of this e-mail, and any attachments thereto, is strictly prohibited. If you have received this e-mail in error, please immediately notify the sender and permanently delete the original and any copy or printout thereof.
From: Ross S. W. Walker Sent: May 25, 2008 08:56
> Typically most vendors recommend a two-pronged approach: keep the database data files on a RAID5/RAID6 type array and keep the log files on a RAID10 array.
I cannot comment on "most vendors", but for the PROGRESS RDBMS, RAID5 is definitely not recommended. It will work, but you will see a significant reduction in performance. We strongly recommend that our clients go with RAID10 (as in RAID 1+0). In-house we only use RAID10.
Just my 0.02CA.
Regards, Hugh
Hugh E Cruickshank wrote:
> From: Ross S. W. Walker Sent: May 25, 2008 08:56
>> Typically most vendors recommend a two-pronged approach: keep the database data files on a RAID5/RAID6 type array and keep the log files on a RAID10 array.
> I cannot comment on "most vendors", but for the PROGRESS RDBMS, RAID5 is definitely not recommended. It will work, but you will see a significant reduction in performance. We strongly recommend that our clients go with RAID10 (as in RAID 1+0). In-house we only use RAID10.
OK, "most vendors" meaning MS, Oracle, Sybase. I am unfamiliar with PROGRESS (Postgresql variant?), but in my experience the aforementioned typically do all their writing to the db log files, which are recommended to be kept on a RAID10; then, when transactions are checkpointed, they are written to the DB files. The software makes all attempts to keep the data written to the database files as linear as possible to make sequential access possible and dump/restore fast. This makes the log files write-mostly and the database files read-mostly, which is of course why the two different RAID types.
Of course that really only pays off if your databases are large enough to justify two separate storage systems. Right now my databases are small enough to be kept together with the logs on a RAID10, but when they grow unwieldy I will move the databases off the RAID10 onto a RAID5/6/50/60, whatever, and leave the log files on the RAID10.
-Ross
> I cannot comment on "most vendors", but for the PROGRESS RDBMS, RAID5 is definitely not recommended. It will work, but you will see a significant reduction in performance. We strongly recommend that our clients go with RAID10 (as in RAID 1+0). In-house we only use RAID10.
+1. Write performance of RAID5 on the hardware MegaRAID SATA 150-6D is *very* poor.
Nikolay Ulyanitsky wrote:
>> I cannot comment on "most vendors", but for the PROGRESS RDBMS, RAID5 is definitely not recommended. It will work, but you will see a significant reduction in performance. We strongly recommend that our clients go with RAID10 (as in RAID 1+0). In-house we only use RAID10.
> +1. Write performance of RAID5 on the hardware MegaRAID SATA 150-6D is *very* poor.
So? That thing is 1) ancient, with what looks like a half-baked chip solution for raid5 calculations, and 2) comes with only 64MB of cache.
You can get a 3ware card with much more cache (9550 and above) and blow away that LSI piece of rubbish.
Ross S. W. Walker wrote:
> Christopher Chan wrote:
>> William Warren wrote:
>>> I'm not a fan of RAID 5 at all since it can tolerate only one failure. Go with raid 10 or something like that which is able to handle more than one failure. Intermittent, uncorrectable sector failures during rebuilds are becoming an increasing problem with today's drives.
>> Is that raid10 or raid 1+0 or raid 0+1? :D
>> At least for the latter two, their ability to handle more than one failure depends on which disks blow. Not sure how the raid10 module handles things.
> Whoever implements RAID10 will want RAID1+0, which is a stripe set of mirrors, rather than RAID0+1, which is a mirror of stripe sets.
Here we go. Please go and hammer Neil Brown about his version of RAID10 for md which is decidedly different from doing md 0+1/1+0. http://neil.brown.name/blog/20040827225440
Feel free to also hammer him on his definition of raid 1+0/0+1 as he calls raid 0+1 "a raid0 array built over a collection of raid1 arrays".
> The problem is twofold: 1) in a RAID0+1, a single drive failure on either side of the mirror will put the whole array into total failure jeopardy, and a failure on both sides is a total loss; 2) the pathway for simultaneous operations is cut down from (say X is an even number of disks) X reads, X/2 writes, to 2 reads, 1 write.
A failure of one mirror will destroy the whole raid 1+0 array too. I do not see how having a functional raid0 array on one side of the mirror in raid 0+1 will cut writes to one disk instead of two.
However, I would personally go for a stripe of mirrored disks since a rebuild will not involve all disks.
> On a RAID5/6 array you are limited to a pathway of 1 read and 1 write at a time, and all writes must write across the entire stripe. So if you do choose RAID5/6, it is highly recommended to use a hardware RAID controller with a battery-backed (BBU) write-back and read-ahead cache, which can minimize the impact of this by caching a whole stripe set to write at once and having a stripe set of reads waiting for I/O requests.
Yes, any hardware raid doing raid5 without a decent amount of cache is going to be very poor on write performance.
> For database log files and other applications that do a lot of random I/O it is recommended to use fast-RPM drives in a RAID10, which has the multiple pathways for reads and writes that will maximize the total number of random IOPS (I/Os per second).
Next time, please follow the thread. We are yapping about the raid10 module for md by Neil Brown and how it apparently does not require the traditional way of doing raid 1+0/0+1. Like how his module can do "raid10" with just three disks.
http://neil.brown.name/blog/20040827225440
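A toy illustration of why md's raid10 can run on three disks: with the default "near" layout and 2 replicas, copies of each chunk are simply laid out round-robin across however many disks there are, no pairing required. This sketch follows Neil Brown's description linked above and is not md's actual code:

```python
def near_layout(n_chunks: int, n_disks: int, replicas: int = 2):
    """Map logical chunks to (disk, stripe-row) slots in the md
    raid10 "near" layout: a chunk's replicas occupy consecutive
    slots, so they always land on different (adjacent) disks."""
    layout = {}
    slot = 0
    for chunk in range(n_chunks):
        positions = []
        for _ in range(replicas):
            positions.append((slot % n_disks, slot // n_disks))
            slot += 1
        layout[chunk] = positions
    return layout

# 3 disks, 2 replicas: chunk 0 -> disks 0,1; chunk 1 -> disks 2,0; ...
for chunk, pos in near_layout(4, 3).items():
    print(chunk, pos)
```

With an even disk count this degenerates to the familiar 1+0 pairing; with an odd count the pairs just wrap around, which is the trick.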
> Typically most vendors recommend a two-pronged approach: keep the database data files on a RAID5/RAID6 type array and keep the log files on a RAID10 array.
Thank you for your information.
Christopher Chan wrote:
> Ross S. W. Walker wrote:
>> Christopher Chan wrote:
>>> William Warren wrote:
>>>> I'm not a fan of RAID 5 at all since it can tolerate only one failure. Go with raid 10 or something like that which is able to handle more than one failure. Intermittent, uncorrectable sector failures during rebuilds are becoming an increasing problem with today's drives.
>>> Is that raid10 or raid 1+0 or raid 0+1? :D
>>> At least for the latter two, their ability to handle more than one failure depends on which disks blow. Not sure how the raid10 module handles things.
>> Whoever implements RAID10 will want RAID1+0, which is a stripe set of mirrors, rather than RAID0+1, which is a mirror of stripe sets.
> Here we go. Please go and hammer Neil Brown about his version of RAID10 for md which is decidedly different from doing md 0+1/1+0. http://neil.brown.name/blog/20040827225440
Well, I don't want to hammer Neil and his RAID implementation, but technically it isn't really RAID10; Neil has come up with a whole new RAID level unto itself. It's quite good, don't get me wrong, it's just not RAID10.
> Feel free to also hammer him on his definition of raid 1+0/0+1 as he calls raid 0+1 "a raid0 array built over a collection of raid1 arrays".
Well, I am not going to hammer him on that either. In fact I am not out to "hammer" anybody here. We are professionals here, not children.
>> The problem is twofold: 1) in a RAID0+1, a single drive failure on either side of the mirror will put the whole array into total failure jeopardy, and a failure on both sides is a total loss; 2) the pathway for simultaneous operations is cut down from (say X is an even number of disks) X reads, X/2 writes, to 2 reads, 1 write.
> A failure of one mirror will destroy the whole raid 1+0 array too. I do not see how having a functional raid0 array on one side of the mirror in raid 0+1 will cut writes to one disk instead of two.
How about a quick picture: O = good disk, X = failed disk.
Stripe of mirrors:
X O X
-|-|-
O X O
A mirror of stripes:
X|O|O
-----
O|X|O
So on the first, as only 1 disk in each mirror was affected, the RAID array as a whole survives; but on the second, since both sides have a total loss, the array as a whole fails.
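The picture can be checked mechanically. A small sketch (hypothetical helper names, six disks numbered 0-5) that tests whether a given failure pattern kills a stripe of mirrors versus a mirror of stripes:

```python
def raid10_survives(failed: set, n_disks: int) -> bool:
    """Stripe of mirrors (1+0): disks (0,1), (2,3), ... are mirror
    pairs. The array survives unless BOTH disks of some pair fail."""
    return all(not ({2 * i, 2 * i + 1} <= failed)
               for i in range(n_disks // 2))

def raid01_survives(failed: set, n_disks: int) -> bool:
    """Mirror of stripes (0+1): disks 0..n/2-1 are stripe A, the
    rest stripe B. A stripe dies if ANY of its disks fails; the
    array survives only while at least one stripe is intact."""
    half = n_disks // 2
    side_a_ok = not any(d < half for d in failed)
    side_b_ok = not any(d >= half for d in failed)
    return side_a_ok or side_b_ok

# The example above: one disk lost in each of three mirror pairs,
# versus one disk lost on each side of the mirror of stripes.
print(raid10_survives({0, 3, 4}, 6))  # True  - no pair fully dead
print(raid01_survives({0, 4}, 6))     # False - both stripes hit
```

Three failures can leave the 1+0 array running, while two well-placed failures finish off the 0+1 array.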
> However, I would personally go for a stripe of mirrored disks since a rebuild will not involve all disks.
Another good point I forgot to mention.
>> On a RAID5/6 array you are limited to a pathway of 1 read and 1 write at a time, and all writes must write across the entire stripe. So if you do choose RAID5/6, it is highly recommended to use a hardware RAID controller with a battery-backed (BBU) write-back and read-ahead cache, which can minimize the impact of this by caching a whole stripe set to write at once and having a stripe set of reads waiting for I/O requests.
> Yes, any hardware raid doing raid5 without a decent amount of cache is going to be very poor on write performance.
>> For database log files and other applications that do a lot of random I/O it is recommended to use fast-RPM drives in a RAID10, which has the multiple pathways for reads and writes that will maximize the total number of random IOPS (I/Os per second).
> Next time, please follow the thread. We are yapping about the raid10 module for md by Neil Brown and how it apparently does not require the traditional way of doing raid 1+0/0+1. Like how his module can do "raid10" with just three disks.
My apologies, I thought the thread was "RAID5 or RAID50 for database?".
Next time I will look out for the hijacking and take appropriate action.
>> Typically most vendors recommend a two-pronged approach: keep the database data files on a RAID5/RAID6 type array and keep the log files on a RAID10 array.
> Thank you for your information.
You're welcome.
-Ross
> How about a quick picture: O = good disk, X = failed disk.
> Stripe of mirrors:
> X O X
> -|-|-
> O X O
> A mirror of stripes:
> X|O|O
> O|X|O
Ah... I forgot we were on SIX disks and not four. :P
> My apologies, I thought the thread was "RAID5 or RAID50 for database?".
> Next time I will look out for the hijacking and take appropriate action.
Yeah... it did become a general raid-whatever thing... can you imagine it becoming anything else if you put raid5 and a database question together?