Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS running Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports RAID-DP or RAID4. What I want is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group drops from 14 to 7. In the end the capacity is the same whether I use RAID-DP or RAID4.
Now, why isn't RAID5 supported? I believe I could get more storage capacity with RAID5, couldn't I? I also notice that some onboard RAID controllers only support RAID0, RAID1, or RAID1+0. No RAID5.
What's wrong with RAID5? Is there any technical limitation with it?
Thanks.
Fajar Priyanto wrote:
Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS running Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports RAID-DP or RAID4. What I want is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group drops from 14 to 7. In the end the capacity is the same whether I use RAID-DP or RAID4.
Now, why isn't RAID5 supported? I believe I could get more storage capacity with RAID5, couldn't I?
Design/marketing decision for the features of the board?
I also notice that some onboard RAID controllers only support RAID0, RAID1, or RAID1+0. No RAID5.
They don't want to go to the trouble of implementing RAID5 for that particular board, or they want RAID5 to be available only on more expensive options, etc.
What's wrong with RAID5? Is there any technical limitation with it?
Not with the technology itself. They are just not offering raid5. If you have a problem with that, get another product or go to another company.
On Thursday 24 September 2009, Fajar Priyanto wrote:
Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS running Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports RAID-DP or RAID4. What I want is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group drops from 14 to 7. In the end the capacity is the same whether I use RAID-DP or RAID4.
Both RAID4 and RAID5 could theoretically be used with 14 drives. Why they limit you to 7 drives at all is a good question (maybe you should ask IBM?). Possibly they consider arrays that large with only a single drive's worth of parity unsafe.
Now, why isn't RAID5 supported? I believe I could get more storage capacity with RAID5, couldn't I? I also notice that some onboard RAID controllers only support RAID0, RAID1, or RAID1+0. No RAID5.
This has a completely different explanation. RAID0, 1, or 1+0 is a "simple" case of juggling sectors; no parity engine is needed. RAID4, 5, or 6 would require a much more complex and powerful design.
What's wrong with RAID5? Is there any technical limitation with it?
Compared to raid4: not much at all.
Compared to raid10: less safe, longer rebuilds, slower.
Compared to raid6: less safe, more usable space, typically faster.
/Peter
Fajar Priyanto wrote:
Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS running Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports RAID-DP or RAID4. What I want is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group drops from 14 to 7. In the end the capacity is the same whether I use RAID-DP or RAID4.
Now, why isn't RAID5 supported? I believe I could get more storage capacity with RAID5, couldn't I? I also notice that some onboard RAID controllers only support RAID0, RAID1, or RAID1+0. No RAID5.
What's wrong with RAID5? Is there any technical limitation with it?
I think that's a re-branded NetApp FAS2020. NetApps let you grow volumes by simply adding disks, and with RAID4, if you initialize the new disk to all zeros you don't have to recompute/rebuild the parity, which lives on a separate disk. That doesn't work with RAID5, where the parity is distributed across all the disks. It's probably the wrong device if your first concern is being cheap.
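To illustrate why the zero-initialized-disk trick works: RAID4 parity is just the XOR of the data disks, and XOR-ing in a disk full of zeros changes nothing, so the parity disk can be left alone. Here's a toy C sketch of my own (not anything NetApp actually runs):

/* Illustration only: shows that XOR parity is unchanged when an
 * all-zero disk is added to a RAID4-style group. Not NetApp code. */
#include <stdio.h>
#include <string.h>

#define STRIPE 8   /* bytes per disk in this toy stripe */

/* XOR all data disks together to get the parity block. */
static void compute_parity(unsigned char disks[][STRIPE], int ndisks,
                           unsigned char parity[STRIPE])
{
    memset(parity, 0, STRIPE);
    for (int d = 0; d < ndisks; d++)
        for (int i = 0; i < STRIPE; i++)
            parity[i] ^= disks[d][i];
}

int main(void)
{
    unsigned char disks[4][STRIPE] = {
        "dataAAA", "dataBBB", "dataCCC",
        { 0 }              /* the newly added, zero-initialized disk */
    };
    unsigned char p_old[STRIPE], p_new[STRIPE];

    compute_parity(disks, 3, p_old);   /* parity before the new disk */
    compute_parity(disks, 4, p_new);   /* parity including the zeroed disk */

    printf("parity unchanged: %s\n",
           memcmp(p_old, p_new, STRIPE) == 0 ? "yes" : "no");
    return 0;
}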
Am 24.09.2009 um 07:43 schrieb Fajar Priyanto:
Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS running Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports RAID-DP or RAID4. What I want is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group drops from 14 to 7. In the end the capacity is the same whether I use RAID-DP or RAID4.
Now, why isn't RAID5 supported? I believe I could get more storage capacity with RAID5, couldn't I? I also notice that some onboard RAID controllers only support RAID0, RAID1, or RAID1+0. No RAID5.
What's wrong with RAID5? Is there any technical limitation with it?
Well, it depends on the disk-size: http://www.enterprisestorageforum.com/technology/features/article.php/383963...
Rainer
On 09/24/2009 07:35 AM, Rainer Duffner wrote:
Am 24.09.2009 um 07:43 schrieb Fajar Priyanto:
Hi all, sorry for the OT. I've got an IBM N3300-A10 NAS running Data ONTAP 7.2.5.1. The problem is that, according to the docs, it only supports RAID-DP or RAID4. What I want is maximum storage capacity, so I changed it from RAID-DP to RAID4, but with RAID4 the maximum number of disks in a RAID group drops from 14 to 7. In the end the capacity is the same whether I use RAID-DP or RAID4.
Now, why isn't RAID5 supported? I believe I could get more storage capacity with RAID5, couldn't I? I also notice that some onboard RAID controllers only support RAID0, RAID1, or RAID1+0. No RAID5.
What's wrong with RAID5? Is there any technical limitation with it?
Well, it depends on the disk-size: http://www.enterprisestorageforum.com/technology/features/article.php/383963...
This info is VERY relevant ... with very large RAID5 arrays you will almost ALWAYS hit a read failure during a rebuild. Since that amounts to a fault in a second drive, it will cause the loss of all the data. I would not recommend RAID5 right now ... it is not worth the risk.
On Wed, Sep 30, 2009 at 08:52:08PM -0500, Johnny Hughes wrote:
On 09/24/2009 07:35 AM, Rainer Duffner wrote:
Well, it depends on the disk-size: http://www.enterprisestorageforum.com/technology/features/article.php/383963...
This info is VERY relevant ... with very large RAID5 arrays you will almost ALWAYS hit a read failure during a rebuild. Since that amounts to a fault in a second drive, it will cause the loss of all the data. I would not recommend RAID5 right now ... it is not worth the risk.
"Almost always" is very dependent on the disks and size of the array.
Let's take a 20TiByte array as an example.
Now, the "hard error rate" is an expectation. That means that with an error rate of 1E14 you'd expect to see 1 error for every 1E14 bits read. If we make the simplifying assumption that any read is equally likely to fail, then any single bit read has a 1/1E14 chance of being wrong (see the end of this email for more thoughts on this).
Now, to rebuild a 20TiByte array you would need to read 20TiBytes of data. The chance of this happening without error is: (1-1/1E14)^(8*20*2^40) = 0.172, i.e. only a 17% chance of rebuilding a 20TiByte array without a read error! That's pretty bad. In fact it's downright awful. Do not build 20TiByte arrays with consumer disks!
Note that this doesn't care about the size of the disks or the number of disks; it's purely based on the probability of a read error.
Now an "enterprise" class disk with an error rate of 1E15 looks better: (1-1/1E15)^(8*20*2^40) = 0.839, or an 84% chance of a successful rebuild. Better. But probably not good enough.
How about an enterprise SAS disk at 1E16? (1-1/1E16)^(8*20*2^40) = 0.983, or 98%. Not "five nines", but pretty good.
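If anyone wants to play with these numbers, here's a rough little C program of my own that reproduces the figures above under the same uniform-bit-error assumption (it's only a sketch of that model, not a claim about real disks; compile with -lm):

/* Rough sketch of the rebuild-success estimate above. Assumes every bit
 * read fails independently with probability 1/error_rate, the same
 * simplification used in the text. */
#include <stdio.h>
#include <math.h>

/* Chance of reading `tib` TiB without a single unrecoverable error,
 * given a vendor "hard error rate" of 1 error per `error_rate` bits.
 * exp(bits * log1p(-1/rate)) is (1 - 1/rate)^bits, computed in a way
 * that doesn't lose precision when 1/rate is tiny. */
static double rebuild_ok(double tib, double error_rate)
{
    double bits = tib * 8.0 * pow(2.0, 40);   /* TiB -> bits */
    return exp(bits * log1p(-1.0 / error_rate));
}

int main(void)
{
    printf("20 TiB, consumer   (1E14): %.3f\n", rebuild_ok(20.0, 1e14));
    printf("20 TiB, enterprise (1E15): %.3f\n", rebuild_ok(20.0, 1e15));
    printf("20 TiB, SAS        (1E16): %.3f\n", rebuild_ok(20.0, 1e16));
    /* the 5 x 1TB RAID5 mentioned further down: a rebuild reads ~4 TB */
    printf("4 TB,   consumer   (1E14): %.3f\n",
           rebuild_ok(4e12 / pow(2.0, 40), 1e14));
    return 0;
}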
Of course you're never going to get 100%. Technology just doesn't work that way.
So, if you buy enterprise SAS disks then you do stand a good chance of rebuilding a 20TiByte RAID5. But that's still a 2% chance of a double failure. Do you want to risk your company on that?
RAID6 makes things better; you need a triple failure to cause data loss. It's possible, but the numbers are a lot lower.
Of course the error rate and other disk characteristics are really WAGs based on some statistical analysis; there are no actual measurements to show this.
Real-life numbers appear to show that disks far outlive their rated values. Error rates are much lower than the manufacturers claim (excluding bad batches and bad manufacturing, of course!).
This is just a rough "off the top of my head" analysis. I'm not totally convinced it's correct (my understanding of the error rate could be wrong; the assumption of an even failure distribution is likely to be wrong because errors on a disk cluster together - a sector goes bad, a track goes bad, etc). But the analysis _feels_ right... which means nothing :-)
I currently have 5*1Tbyte consumer disks in a RAID5. That, theoretically, gives me a 27% chance of failure during a rebuild. As it happens I've had 2 bad disks, but they went bad a month apart (I think it is a bad batch!). Each time the array has rebuilt without detectable error.
Let's not even talk about Petabyte arrays. If you're doing that then you better have multiple redundancy in place, and **** the expense! Google is a great example of this.
Slightly OT.......
OpenSolaris has just had triple-parity RAID (raidz3) added to ZFS:
http://blogs.sun.com/ahl/entry/triple_parity_raid_z
Pity we can't get an in-kernel version of ZFS for Linux.
On Thu, Oct 1, 2009 at 12:41 PM, Stephen Harris lists@spuddy.org wrote:
...
Stephen Harris wrote:
"Almost always" is very dependent on the disks and size of the array.
Let's take a 20TiByte array as an example.
...
he did say 'very large'.
Note, RAID10 has another factor to consider... say you have a 20-drive RAID10 of 1TB drives (10TB total usable). If one drive fails, a rebuild only requires reading one drive and writing to the hot-spare replacement; this is fairly quick compared with the massive restripe operation of a RAID5.
And if, during that rebuild operation, another drive fails, there are only 1-in-19 odds of it being the mirror of the previously failed drive, if we assume failures are a totally random occurrence. (Yeah, OK, if we assume that a drive is more likely to fail while it's being accessed, then the odds are somewhat higher that the mirror would fail rather than another drive in the array... but an array that does periodic sweeps of idle storage will greatly reduce the possibility of this by 'discovering' a failing drive much sooner.)
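Putting rough numbers on that (my own back-of-the-envelope in C, using the same independence assumptions as the RAID5 estimate earlier in the thread):

/* Back-of-the-envelope for the 20-drive RAID10 example above.
 * Same simplifying independence assumptions as the earlier RAID5 math. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    int drives = 20;

    /* A second failure during the rebuild only destroys data when it
     * happens to be the mirror partner of the drive already lost. */
    printf("chance a second failure hits the mirror partner: %.1f%%\n",
           100.0 / (drives - 1));

    /* The rebuild only re-reads one 1TB drive, not the whole array,
     * so the unrecoverable-read-error exposure is small too
     * (consumer-class 1E14 error rate assumed). */
    double bits = 1e12 * 8.0;
    printf("chance of a read error while copying 1TB: %.1f%%\n",
           100.0 * (1.0 - exp(bits * log1p(-1e-14))));
    return 0;
}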
What's happening here?
modules.o: In function `Module_SymX':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:680: undefined reference to `dlsym'
modules.o: In function `Module_Sym':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:658: undefined reference to `dlsym'
modules.o: In function `unload_all_modules':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:1210: undefined reference to `dlsym'
modules.o: In function `Module_free':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:571: undefined reference to `dlclose'
modules.o: In function `module_loadall':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:712: undefined reference to `dlsym'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:713: undefined reference to `dlsym'
modules.o: In function `Module_Unload':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:600: undefined reference to `dlsym'
modules.o: In function `Unload_all_testing_modules':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:500: undefined reference to `dlclose'
modules.o: In function `Unload_all_loaded_modules':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:394: undefined reference to `dlsym'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:443: undefined reference to `dlclose'
modules.o: In function `Init_all_testing_modules':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:358: undefined reference to `dlsym'
modules.o: In function `Module_Create':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:194: undefined reference to `dlopen'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:197: undefined reference to `dlsym'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:207: undefined reference to `dlsym'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:246: undefined reference to `dlsym'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:252: undefined reference to `dlsym'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:258: undefined reference to `dlsym'
modules.o:/home/ftp/Chat/irc/Unreal3.2/src/modules.c:269: more undefined references to `dlsym' follow
modules.o: In function `Module_Create':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:224: undefined reference to `dlclose'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:232: undefined reference to `dlclose'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:302: undefined reference to `dlerror'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:216: undefined reference to `dlclose'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:238: undefined reference to `dlclose'
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:210: undefined reference to `dlclose'
modules.o: In function `Module_SymEx':
/home/ftp/Chat/irc/Unreal3.2/src/modules.c:636: undefined reference to `dlsym'
collect2: ld returned 1 exit status
make[1]: *** [ircd] Error 1
make[1]: Leaving directory `/home/ftp/Chat/irc/Unreal3.2/src'
make: *** [build] Error 2
Fraternal greetings _____________________________ Sincerely, Alberto García Gómez M:.M:. Network Administrator/Webmaster IPI "Carlos Marx", Matanzas, Cuba.
Sorry for the top post but it seems most convenient.
As it says, the linker cannot find the functions dlsym, dlclose, dlerror and dlopen.
These are link-time errors rather than compile-time errors, so a missing header or include path isn't the cause. On Linux those functions live in libdl, so the most likely problem is that -ldl is missing from the link command (often because the configure step didn't detect it).
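For illustration, a minimal standalone program (a hypothetical dltest.c of my own, not part of UnrealIRCd) that uses the same calls shows the behaviour on the glibc of that era: without -ldl on the link line you get exactly those 'undefined reference' errors, with it the build links fine.

/* Minimal standalone illustration (not UnrealIRCd code): uses the same
 * dlopen/dlsym/dlclose/dlerror calls the linker is complaining about.
 *
 * On CentOS-era glibc:
 *   gcc dltest.c          gives "undefined reference to `dlopen'" etc.
 *   gcc dltest.c -ldl     links fine
 */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_NOW);   /* any shared library */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up a symbol; dlsym returns void*, so cast to the right type. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}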
Alberto García Gómez wrote:
What's happening here?
...