Hello all,
I am running drbd protocol A to a secondary machine to have 'backups' of my xen domUs.
Is it necessary to change the xen domain configs to use /dev/drbd* instead of the LVM volume that drbd mirrors, and which the xen domU runs off of?
regards, Coert
On Aug 20, 2009, at 10:22 AM, Coert Waagmeester <lgroups@waagmeester.co.za> wrote:
Hello all,
I am running drbd protocol A to a secondary machine to have 'backups' of my xen domUs.
Is it necessary to change the xen domain configs to use /dev/drbd* instead of the LVM volume that drbd mirrors, and which the xen domU runs off of?
Yes, otherwise the data won't be replicated, and your drbd volume will be inconsistent and need to be resynced.
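For example (just a sketch; the device paths and domU name below are placeholders), the disk line in the domU config would point at the drbd device rather than the backing LV:

    # /etc/xen/domu1.cfg  (hypothetical names)
    # before: the domU writes straight to the LV, bypassing drbd
    #   disk = [ 'phy:/dev/vg0/domu1-disk,xvda,w' ]
    # after: all domU I/O goes through drbd and gets replicated
    disk = [ 'phy:/dev/drbd1,xvda,w' ]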
-Ross
Ross Walker wrote:
Yes, otherwise the data won't be replicated, and your drbd volume will be inconsistent and need to be resynced.
-Ross
To be clear, are you saying you have a DRBD partition on both host machines, and LVM on top of that to allocate LVs for host storage?
You would not want to bypass the LVM layer in that case. The hosts would still be configured to map the LV devices into the domUs. You need to go through the LVM layer, which uses the DRBD partition as a physical block device. The writes down through the DRBD layer will still be replicated. -Alan
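A minimal sketch of the layering being described, with hypothetical names (the drbd device is the LVM physical volume, and the domUs are handed LVs carved out of it):

    # on the dom0, with /dev/drbd0 already up and consistent
    pvcreate /dev/drbd0
    vgcreate vg_drbd /dev/drbd0
    lvcreate -L 10G -n domu1-disk vg_drbd
    # the domU config keeps mapping the LV; writes still pass through drbd underneath
    #   disk = [ 'phy:/dev/vg_drbd/domu1-disk,xvda,w' ]
    # (an lvm.conf filter may be needed so LVM scans /dev/drbd0 rather than the backing partition)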
On Thu, 2009-08-20 at 09:38 -0600, Alan Sparks wrote:
To be clear, are you saying you have a DRBD partition on both host machines, and LVM on top of that to allocate LVs for host storage?
You would not want to bypass the LVM layer in that case. The hosts would still be configured to map the LV devices into the domUs. You need to go through the LVM layer, which uses the DRBD partition as a physical block device. The writes down through the DRBD layer will still be replicated. -Alan
Hello Alan,
This is my current setup:
Xen DomU
DRBD
LVM Volume
RAID 1
What I first wanted to do was:
DomU | DRBD
LVM Volume
RAID 1
Is this possible or not recommended?
Regards, Coert
Coert Waagmeester wrote:
Hello Alan,
This is my current setup:
Xen DomU
DRBD
LVM Volume
RAID 1
What I first wanted to do was:
DomU | DRBD
LVM Volume
RAID 1
If I understand your diagram, you have DRBD running inside your domU with LVM on top of it now, and you are considering moving DRBD out of the domU into your dom0. Yes, it should work, and yes, you'll have to change the domU config to map the drbd block device into the domU. I'm hoping a vgscan in the domU will pick up the fact that you will effectively be changing the physical device names. I think the arrangement of doing DRBD and LVM at the dom0 level, and mapping LVs out of that as domU disks, is not an uncommon implementation. -Alan
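A sketch of the rescan being hoped for there, run inside the domU once its storage shows up under the new device name (standard LVM commands, nothing domU-specific):

    pvscan        # rediscover the physical volume under its new device name
    vgscan        # rebuild the volume group cache
    vgchange -ay  # reactivate the volume groups
    lvs           # confirm the logical volumes are visible again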
On 08/20/2009 05:46 PM, Coert Waagmeester wrote:
Xen DomU
DRBD
LVM Volume
RAID 1
This makes no sense; you are losing about 12% of i/o capability here, even before hitting drbd, and then taking another hit on whatever drbd brings in (depending on how it's set up, even on a gigabit link that drbd load factor can be up to another 15-20%, or as low as 2-5%).
What I first wanted to do was:
DomU | DRBD
LVM Volume
RAID 1
Again, not very clear. Do one thing: post some real stuff, e.g. the /proc status files for drbd, plus pvdisplay, vgdisplay and the raid status files (whatever raid setup you are using).
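For reference, the sort of output being asked for could be gathered with (assuming Linux software RAID; adjust for hardware RAID tools):

    cat /proc/drbd     # drbd connection and sync state
    pvdisplay          # LVM physical volumes
    vgdisplay          # LVM volume groups
    cat /proc/mdstat   # software RAID status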
On Aug 21, 2009, at 6:27 AM, Karanbir Singh <mail-lists@karan.org> wrote:
This makes no sense; you are losing about 12% of i/o capability here, even before hitting drbd, and then taking another hit on whatever drbd brings in (depending on how it's set up, even on a gigabit link that drbd load factor can be up to another 15-20%, or as low as 2-5%).
I don't think LVM adds as much latency as you think it does; tests show a negligible difference between dev-mapper volumes and raw partitions. It gives a lot of advantages, though, where you want multiple phy: disk volumes on the same array.
Now drbd is a different story: it will add latency, a lot for synchronous replication and a little for asynchronous, but noticeable either way.
What I first wanted to do was:
DomU | DRBD
LVM Volume
RAID 1
Again, not very clear. Do one thing: post some real stuff, e.g. the /proc status files for drbd, plus pvdisplay, vgdisplay and the raid status files (whatever raid setup you are using).
Having a replica per domU will get really complicated really fast; imagine handling dozens of fail-over/split-brain scenarios per server. Ouch, forget about it!
If this is for live migration, then I recommend setting up two RAID arrays: one for local-only domUs and one for live-migration domUs with active-active replication and synchronous I/O. If you can't set up two physical arrays, create two partitions on the existing array. Then drbd one array/partition, make the drbd device and the local partition PVs, and then create your VGs and LVs on them.
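A rough sketch of the live-migration half of that, with hypothetical host names, partitions and addresses (DRBD 8.x style config):

    # /etc/drbd.conf -- resource for the live-migration domUs
    resource r_shared {
      protocol C;                    # synchronous replication
      net { allow-two-primaries; }   # needed for active/active (live migration)
      on xen1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;         # partition set aside for migratable domUs
        address   192.168.10.1:7789;
        meta-disk internal;
      }
      on xen2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.2:7789;
        meta-disk internal;
      }
    }
    # then on each host: the drbd device and the local-only partition both become PVs
    #   pvcreate /dev/drbd0 /dev/sdb2
    #   vgcreate vg_shared /dev/drbd0
    #   vgcreate vg_local  /dev/sdb2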
If this is for disaster recovery of all the domUs, then set up a drbd async replica of the whole array and create LVM on top of that.
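And a sketch of the disaster-recovery variant (again hypothetical names; protocol A is the asynchronous mode already in use here):

    # /etc/drbd.conf -- one async resource covering the whole array
    resource r_dr {
      protocol A;    # asynchronous: the primary does not wait for the peer's disk
      on xen1 {
        device /dev/drbd1; disk /dev/md0; address 192.168.10.1:7790; meta-disk internal;
      }
      on backuphost {
        device /dev/drbd1; disk /dev/md0; address 192.168.10.2:7790; meta-disk internal;
      }
    }
    # LVM then goes on top of the replicated device:
    #   pvcreate /dev/drbd1
    #   vgcreate vg_domu /dev/drbd1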
-Ross