nate wrote:
Chan Chung Hang Christopher wrote:
Complete bollocks. The bottleneck is not the drives themselves: whether SATA or PATA, disk drive performance has not changed much, which is why 15k RPM disks are still king. The bottleneck is the bus, be it PCI-X or PCIe x16/x8/x4, or at least the latencies involved due to bus traffic.
In most cases the bottleneck is the drives themselves; there are only so many I/O requests per second a drive can handle. Most workloads are random rather than sequential, so the amount of data you can pull from a particular drive can be very low depending on the workload.
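A quick back-of-envelope calculation makes the point. The seek time below is an assumed typical figure for a 15k RPM drive, not a measurement from the thread:

```python
# Back-of-envelope random-I/O ceiling for one 15k RPM drive.
# AVG_SEEK_MS is an assumed typical value, not a measured one.
AVG_SEEK_MS = 3.5
ROTATIONAL_LATENCY_MS = 0.5 * 60_000 / 15_000  # half a revolution = 2.0 ms

service_ms = AVG_SEEK_MS + ROTATIONAL_LATENCY_MS  # ~5.5 ms per random I/O
iops = 1000 / service_ms                          # roughly 180 IOPS
mib_per_s = iops * 8 / 1024                       # at 8 KiB per random read

print(f"{iops:.0f} IOPS, {mib_per_s:.2f} MiB/s")
```

Under ~2 MiB/s of random reads from a drive that can stream tens of MB/s sequentially, which is why random workloads hit the per-drive IOPS wall long before any bus limit.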
Which is true whether you are running hardware or software RAID 0/1/1+0. With software RAID, however, given enough disks the bottleneck moves from the disks to the bus, especially for RAID 5/6.
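The RAID 5 small-write penalty illustrates why the bus matters more for software RAID: every transfer in the read-modify-write cycle crosses the host bus, whereas a hardware controller absorbs most of it behind its own cache. A minimal sketch of the classic 4x factor, assuming writes smaller than a full stripe:

```python
def sw_raid5_small_write_bus_bytes(write_size: int) -> int:
    """Bytes crossing the host bus for one sub-stripe RAID 5 write
    under software RAID: read old data + read old parity, then
    write new data + new parity -- four transfers in total."""
    return 4 * write_size

def hw_raid_bus_bytes(write_size: int) -> int:
    """With a hardware controller, only the new data crosses the
    host bus; the read-modify-write happens behind the cache."""
    return write_size

print(sw_raid5_small_write_bus_bytes(8192))  # 4x bus amplification
```

Multiply that amplification across enough disks and the bus (plus the CPU doing parity) becomes the limit, even though each individual drive is loafing.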
Fortunately the large caches (12GB per controller, mirrored with another controller) on the array mask the higher response times on the disks, resulting in host response times of around 20 milliseconds for reads and 0-5 milliseconds for writes, which by most measures is excellent.
Haha, yeah, at that kind of large-scale setup nobody would even consider software RAID.