On 6/30/11, Les Mikesell <lesmikesell@gmail.com> wrote:
> OK, but without knowing the cause, you already know the cure. Make the virtual servers not share physical disks - they will always want a single head to be in different places at the same time.
Same old problem: budget :D Also, I expect similar setups in the future, so I need to understand why rather than simply throw hardware at it, since the amount of disk activity is relatively low. The curious part is that this doesn't appear to happen during expected heavy usage. It almost never occurs during working hours on a weekday, ever since I ioniced the other script.
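For reference, the other script is now invoked roughly like this (the script path is just an example; -c3 is the idle I/O class from util-linux ionice):

```shell
# Run the nightly job in the idle I/O class so it only gets disk
# time when nothing else wants it; nice -n 19 also deprioritizes CPU.
# The script path is hypothetical.
ionice -c3 nice -n 19 /usr/local/bin/nightly-maintenance.sh
```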
> And there is also probably some ugly stuff about how using files for virtual disk images and perhaps LVM on both the real and virtual side makes your disk blocks misaligned. Fixing that might help too.

Unfortunately yes, the file-backed images were one part I misread/misunderstood; I should have gone with raw partitions. However, the real amount of I/O on these isn't expected to be high, especially not during a lull hour like 1am or on a Sunday.

No LVM on either side; I kept unnecessary layers off the guest. And I manually fdisk'd the drive to ensure 4K alignment.
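In case it helps anyone else, a quick sanity check on the alignment: fdisk -l reports start sectors in 512-byte units, so a start that is a multiple of 8 is 4K-aligned. A sketch (the sector values below are just the common fdisk defaults, not this box's actual layout):

```shell
# Report whether a partition's start sector (512-byte units, as shown
# by `fdisk -l` or /sys/block/<disk>/<part>/start) is 4 KiB aligned.
check_align() {
  local start=$1
  if [ $((start % 8)) -eq 0 ]; then
    echo "sector $start: 4K-aligned"
  else
    echo "sector $start: misaligned"
  fi
}

check_align 2048   # modern fdisk default start
check_align 63     # old DOS-compatible default start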
> What's the physical disk system? I remember seeing something like that long ago where a raid controller had a large write cache that normally made it seem fast, but once in a while either filling it to a high-water mark or something else would trigger it to complete catch up before responding again - which could take several minutes with everything blocked. And nothing else ever looked out of the ordinary.
Standard Intel-based board, onboard SATA controller with a pair of SATA2 disks mirrored with mdadm. As I said, budget setup :D
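Next time it stalls I'll try to catch it in the act and rule out a scheduled md resync/check, since several distros ship a cron job that checks arrays on Sunday nights, which would roughly match the 1am/weekend timing. Something like this (device names are this box's; iostat comes from the sysstat package):

```shell
# Per-device latency/utilization every 5 seconds; a stall should show
# await and %util spiking on the mirror members while md0 backs up.
iostat -dxm 5 /dev/sda /dev/sdb /dev/md0

# Check mirror health and whether a resync/check is running right now.
cat /proc/mdstat
mdadm --detail /dev/md0
```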