On Aug 21, 2009, at 6:27 AM, Karanbir Singh <mail-lists@karan.org> wrote:
On 08/20/2009 05:46 PM, Coert Waagmeester wrote:
Xen DomU
DRBD
LVM Volume
RAID 1
This makes no sense; you are losing about 12% of I/O capability here even before hitting DRBD, and then taking another hit on whatever DRBD brings in (depending on how it's set up, even on a gigabit link that DRBD load factor can be up to another 15-20%, or as low as 2-5%).
I don't think LVM adds as much latency as you think it does; tests show a negligible difference between device-mapper volumes and raw partitions. It does give a lot of advantages, though, where you want multiple phy: disk volumes on the same array.
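For example (just a sketch, the VG/LV names here are made up), carving two LVs out of one array and handing them to a domU as phy: devices looks like this in the domU config:

  # two LVs from the same array exported to one domU as phy: devices
  disk = [ 'phy:/dev/vg_array/domu1-root,xvda,w',
           'phy:/dev/vg_array/domu1-swap,xvdb,w' ]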
Now DRBD is a different story: it will add latency, a lot for synchronous replication, and a little, but still noticeable, for asynchronous.
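(For reference, that sync/async choice is the "protocol" setting in the DRBD resource definition; a minimal sketch with placeholder host names, addresses and backing devices:)

  resource r0 {
      protocol C;                  # C = fully synchronous; A = asynchronous
      on node1 {
          device    /dev/drbd0;
          disk      /dev/md1;      # backing device, placeholder
          address   192.168.1.1:7788;
          meta-disk internal;
      }
      on node2 {
          device    /dev/drbd0;
          disk      /dev/md1;
          address   192.168.1.2:7788;
          meta-disk internal;
      }
  }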
What I first wanted to do was:
DomU | DRBD
LVM Volume
RAID 1
Again, not very clear. Do one thing: post some real stuff, e.g. the /proc status file for DRBD, the pvdisplay and vgdisplay output, and the RAID status files (for whatever RAID setup you are using).
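(Concretely, assuming md software RAID - hardware RAID tools will differ - that information comes from something like:)

  cat /proc/drbd        # DRBD connection and sync state
  pvdisplay
  vgdisplay
  cat /proc/mdstat      # md RAID status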
Having a replica per domU will get really complicated really fast; imagine handling dozens of fail-over/split-brain scenarios per server. Ouch, forget about it!
If this is for live migration, then I recommend setting up two RAID arrays: one for local-only domUs and one for live-migration domUs, the latter with active-active replication and synchronous I/O. If you can't set up two physical arrays, create two partitions on the existing array. Then run DRBD on one array/partition, make the DRBD device and the local partition PVs, and create your VGs and LVs on them.
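(Roughly, and only as a sketch with made-up device and VG names, assuming the DRBD resource for the migration array is already up and running dual-primary:)

  pvcreate /dev/drbd0               # replicated array/partition for live-migration domUs
  pvcreate /dev/sdb1                # local-only array/partition
  vgcreate vg_migrate /dev/drbd0
  vgcreate vg_local   /dev/sdb1
  lvcreate -L 20G -n domu1-disk vg_migrate   # example LV for a migratable domU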
If this is for disaster recovery of all domUs, then set up a DRBD async replica of the whole array and create the LVM layer on top of that.
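(Again just a sketch with placeholder names: the resource definition uses "protocol A;" and the whole array, e.g. "disk /dev/md0;", as its backing device, and LVM is stacked on the DRBD device:)

  pvcreate /dev/drbd1
  vgcreate vg_domu /dev/drbd1       # carve the domU LVs out of this VG as usual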
-Ross