On Wednesday, April 13, 2011 04:00 PM, Sorin Srbu wrote:
> With today's CPU performance and RAM available, software raids are not a problem to power.
That depends. Software raid is fine for raid1 and raid0. If you want raid5 or raid6, you have to use hardware raid with a BBU cache sized to match the array, notwithstanding the separate consideration of limiting the array to 10 disks max.
CPU performance and the amount of RAM available are a non-issue, and have been for a decade.
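To put that in perspective: raid5 parity is nothing more than an XOR across the data chunks of a stripe, which any CPU from the last decade shrugs off. A quick illustrative sketch in Python (the 4-disk layout and 64 KiB chunk size are made-up numbers, not anything from a real implementation):

```python
# Illustrative only: raid5 parity is a plain XOR across a stripe's data chunks.
# The disk count and chunk size are assumptions for the example.
import os
import time

CHUNK_BYTES = 64 * 1024        # 64 KiB chunk per data disk (assumed)
DATA_DISKS = 3                 # 3 data + 1 parity = a 4-disk raid5 (assumed)

chunks = [os.urandom(CHUNK_BYTES) for _ in range(DATA_DISKS)]

def xor_parity(chunks):
    """XOR all data chunks together to produce the stripe's parity chunk."""
    acc = int.from_bytes(chunks[0], "little")
    for c in chunks[1:]:
        acc ^= int.from_bytes(c, "little")
    return acc.to_bytes(CHUNK_BYTES, "little")

t0 = time.perf_counter()
parity = xor_parity(chunks)
print(f"parity for one stripe computed in {(time.perf_counter() - t0) * 1e6:.0f} us")
```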
On Apr 13, 2011, at 8:45 AM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
> On Wednesday, April 13, 2011 04:00 PM, Sorin Srbu wrote:
>> With today's CPU performance and RAM available, software raids are not a problem to power.
> That depends. Software raid is fine for raid1 and raid0. If you want raid5 or raid6, you have to use hardware raid with a BBU cache sized to match the array, notwithstanding the separate consideration of limiting the array to 10 disks max.
> CPU performance and the amount of RAM available are a non-issue, and have been for a decade.
The battery-backed cache is essential for avoiding the parity write hole, as well as the performance penalty of short writes (those smaller than the stripe width, where the remaining chunks must be read to calculate the new parity): the cache can hold a write until it accumulates a full stripe width, and/or absorb subsequent writes while the read-calc-write cycle completes.
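To put rough numbers on that penalty, here's a toy I/O count for an assumed 4-disk raid5 (3 data chunks + 1 parity per stripe); the helper names are invented for illustration:

```python
# Disk I/Os per write on an assumed 4-disk raid5 (3 data + 1 parity).
DATA_DISKS = 3

def short_write_ios(chunks_modified):
    """Read-modify-write: read the old data chunk(s) and old parity,
    then write the new data chunk(s) and the recomputed parity."""
    return 2 * chunks_modified + 2

def full_stripe_write_ios():
    """Full-stripe write: parity comes straight from the data in hand,
    so there are no reads at all -- just data writes plus one parity write."""
    return DATA_DISKS + 1

print("short write, 1 chunk payload :", short_write_ios(1), "I/Os")      # 4
print("full stripe, 3 chunks payload:", full_stripe_write_ios(), "I/Os") # 4
```

Same four I/Os either way, but the full stripe moves three times the payload, which is exactly why the cache tries to sit on writes until it has a whole stripe.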
-Ross
On Wednesday, April 13, 2011 09:18 PM, Ross Walker wrote:
> On Apr 13, 2011, at 8:45 AM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
>> On Wednesday, April 13, 2011 04:00 PM, Sorin Srbu wrote:
>>> With today's CPU performance and RAM available, software raids are not a problem to power.
>> That depends. Software raid is fine for raid1 and raid0. If you want raid5 or raid6, you have to use hardware raid with a BBU cache sized to match the array, notwithstanding the separate consideration of limiting the array to 10 disks max.
>> CPU performance and the amount of RAM available are a non-issue, and have been for a decade.
> The battery-backed cache is essential for avoiding the parity write hole, as well as the performance penalty of short writes (those smaller than the stripe width, where the remaining chunks must be read to calculate the new parity): the cache can hold a write until it accumulates a full stripe width, and/or absorb subsequent writes while the read-calc-write cycle completes.
While we are at it, having the disks directly connected to the raid card means there won't be bus contention from NICs and whatnot, whereas software raid 5/6 would have to deal with that.
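As a back-of-envelope on that (assumed 4-disk raid5 and an invented payload figure, just to show the shape of it):

```python
# Bus traffic for a full-stripe write; 4-disk raid5 assumed, payload invented.
DATA_DISKS = 3
payload_mb = 300                       # application data to be written

# Software raid: the data *and* the parity chunks all cross the shared host bus.
sw_bus_mb = payload_mb * (DATA_DISKS + 1) / DATA_DISKS
# Hardware raid: only the payload crosses to the card; parity is computed on
# the card and the disks hang directly off it.
hw_bus_mb = payload_mb

print(f"software raid5: {sw_bus_mb:.0f} MB over the shared bus")  # 400 MB
print(f"hardware raid5: {hw_bus_mb} MB to the controller")        # 300 MB
```

And short writes widen the gap, since the old data and parity have to come back across the same bus before the new parity can go out.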
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Christopher Chan
Sent: Wednesday, April 13, 2011 3:45 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System Recommendations
> While we are at it, having the disks directly connected to the raid card means there won't be bus contention from NICs and whatnot, whereas software raid 5/6 would have to deal with that.
Could that really be an issue as well? What kind of traffic levels are we speaking of, approximately? That is to say, insofar as this can be quantified at all.
I've never really seen this problem.
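For a rough sense of scale (theoretical peak figures for older shared-PCI hardware; point-to-point PCIe links have largely made this moot):

```python
# Theoretical peaks, for scale only; real-world throughput is lower.
pci_32_33_mb_s = 133   # classic shared 32-bit/33 MHz PCI bus
gige_nic_mb_s = 125    # one saturated gigabit NIC

print("headroom left for disk traffic on shared PCI:",
      pci_32_33_mb_s - gige_nic_mb_s, "MB/s")
```

On a shared legacy PCI bus a single busy gigabit NIC could eat nearly all of it; on anything with PCIe it is hard to see in practice.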