On 04/04/2011 09:00 AM, compdoc wrote:
> It's possible to set up guests to use a block device that will get you the same disk I/O as the underlying storage.
> Is that what you're seeing? What speed does the host see when benchmarking the RAID volumes, and what speeds do the guests see?
Yes, I have been going on the assumption that I get close to native block device performance, but the test results tell me otherwise. I see array rebuild data rates that seem reasonable, on the order of 60 to 80 MBytes/sec. I'm using 256k chunks, with the stride size set to match the number of data drives.
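As a concrete example of that math (assuming a six-drive RAID-6 with 4 KiB ext4 blocks; the real drive count may differ), the stride/stripe-width settings work out like this:

  # 256 KiB chunk / 4 KiB block = 64 filesystem blocks per chunk (stride)
  # 6 drives in RAID-6 leaves 4 data drives, so stripe-width = 64 * 4 = 256
  mkfs.ext4 -E stride=64,stripe-width=256 /dev/md0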
For the bonnie++ comparison, I mounted one of the Guest RAID-6 filesystems on the Host, ran the default tests, unmounted it, then booted the Guest and ran the same default tests there. The amount of RAM assigned was the same in both cases, to level the playing field a bit.
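For what it's worth, the runs looked roughly like this (the path is illustrative; -r pins the assumed RAM size so bonnie++ works on a file set twice that large, and -u is needed when running as root):

  # 2048 MB matches the RAM given to the Guest, so both sides use a ~4 GB file set
  bonnie++ -d /mnt/raid6test -r 2048 -u nobody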
A direct comparison between the two is hard to make, but the general result was that the Host was between 2:1 and 3:1 faster than the Guest, which seems like a rather large performance gap. Latency differences were all over the map, which I find puzzling. The Host is 64-bit and the Guest 32-bit, if that makes any difference. Perhaps caching differences between Host and Guest account for some of the gap.
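One thing I still need to rule out is the Host page cache; as far as I know, the libvirt disk definition can force cache='none' so the Guest talks to the block device without the Host caching in between (a sketch, with placeholder device names):

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/md0'/>
    <target dev='vda' bus='virtio'/>
  </disk>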
At the moment my questions are a bit academic. I'm primarily wondering whether RAID-10 is paranoid enough given the current quality of WD Caviar Black drives (better than dirt-cheap consumer drives, but not enterprise grade). My second question is whether the added overhead of something like qcow2 would be offset by its better space efficiency and the copy-on-write feature.
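If I do try qcow2, I assume it would be along these lines (file names made up; preallocating metadata is supposed to claw back some of the write overhead):

  # sparse qcow2 image with metadata preallocated
  qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/guest1.qcow2 100G
  # copy-on-write clone backed by a read-only base image
  qemu-img create -f qcow2 -b base.qcow2 clone1.qcow2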
I'd love to hear what other software RAID users think, especially regarding large-capacity drives. It's rare for a modern drive to hand out bad data without an accompanying error condition (which the md driver should handle), but I have read that silent bad data is possible, and it would not be flagged in RAID levels that don't do parity calculations.
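Related to that, I plan to keep running the periodic md scrub, which should at least surface mismatches on the parity arrays (md0 is a placeholder):

  # kick off a consistency check; progress shows up in /proc/mdstat
  echo check > /sys/block/md0/md/sync_action
  # afterwards, a non-zero count means data/parity mismatches were found
  cat /sys/block/md0/md/mismatch_cnt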
Chuck