Hi,
I've been using CentOS & Xen on a server that has two VMs configured. The default configuration includes one physical interface that is propagated (by a default bridge) to the VMs.
Since I wanted to configure an additional physical interface, define a new bridge, and propagate it to the virtual interfaces of the VMs, I configured the bridge and the physical interface and brought them up (here are the configurations I set up):
-> eth3
   DEVICE=eth3
   BOOTPROTO=static
   HWADDR=D4:85:64:4B:76:AB
   ONBOOT=yes
   #HOTPLUG=no
   #DHCP_HOSTNAME=kdr-3k-4r-3o-07
   BRIDGE=br0
   TYPE=Ethernet

-> br0
   DEVICE=br0
   TYPE=Bridge
   BOOTPROTO=none
   ONBOOT=yes
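(For reference, this is how I sanity-check the bridge from dom0 after restarting the network; just standard tools, nothing specific to my setup:)

   # br0 should be listed with eth3 attached to it
   brctl show
   # both links should be up
   ip link show eth3
   ip link show br0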
Then I configured new virtual interfaces on the VMs (using virsh edit; the config is listed below):
   <interface type='bridge'>
     <mac address='00:16:36:17:62:5d'/>
     <source bridge='xenbr0'/>
     <script path='vif-bridge'/>
     <target dev='vif5.0'/>
   </interface>
   <interface type='bridge'>
     <mac address='00:16:3e:ca:63:39'/>
     <source bridge='br0'/>
     <script path='vif-bridge'/>
     <target dev='vif5.1'/>
   </interface>
You can see the default bridge (xenbr0 -> vif5.0) and my new bridge br0 connected to vif5.1.
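(As a side note: instead of editing the XML by hand, the same NIC could presumably also be hot-attached with virsh; a sketch using my domain name and the MAC from above:)

   virsh attach-interface VIRT_SRV bridge br0 --mac 00:16:3e:ca:63:39 --script vif-bridge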
So, here I got stuck: when I try to start a VM (xm create VIRT_SRV), the VM starts booting but hangs (I can ping it from another server, but when I try to SSH in it says "connection refused"). When I remove the configuration for vif5.1, the VM starts up normally. I actually managed to start the VM with vif5.1 up once, but the next time I restarted it the problem was back :/
Furthermore, I noticed this in the Xen log:
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices vif.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:166) Waiting for 0.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:544) hotplugStatusCallback /local/domain/0/backend/vif/1/0/hotplug-status.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:544) hotplugStatusCallback /local/domain/0/backend/vif/1/0/hotplug-status.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:558) hotplugStatusCallback 1.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:166) Waiting for 1.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:544) hotplugStatusCallback /local/domain/0/backend/vif/1/1/hotplug-status.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:558) hotplugStatusCallback 1.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices usb.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices vbd.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:166) Waiting for 768.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:544) hotplugStatusCallback /local/domain/0/backend/vbd/1/768/hotplug-status.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:558) hotplugStatusCallback 1.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:166) Waiting for 5632.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:544) hotplugStatusCallback /local/domain/0/backend/vbd/1/5632/hotplug-status.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:558) hotplugStatusCallback 1.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices irq.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices vkbd.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices vfb.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices pci.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices ioports.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices tap.
[2011-11-18 09:27:16 xend 8213] DEBUG (DevController:160) Waiting for devices vtpm.
So the device setup proceeds normally until the last line ("Waiting for devices vtpm"). It seems like the vTPM wait is what keeps the VMs from starting properly. When I open the VM's console (with virt-manager) I can see the whole boot process, but it hangs when it tries to bring up interface eth2 (attached to the second bridge) ...
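(For completeness, this is roughly how I inspect the vif and bridge state from dom0 while the domU hangs; illustrative commands only:)

   # list the virtual interfaces xend has attached to the domU
   xm network-list VIRT_SRV
   # show which vif devices ended up on which bridge
   brctl show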
Does anybody have a clue why this is happening?
Thanks in advance ..
Hi,
2011/11/18 Matija Draganović <mdra137@gmail.com>:
> I've been using CentOS & Xen on a server that has two VMs configured. The default configuration includes one physical interface that is propagated (by a default bridge) to the VMs.
You do not mention which versions of CentOS and Xen you are using.
> Since I wanted to configure an additional physical interface, define a new bridge, and propagate it to the virtual interfaces of the VMs, I configured the bridge and the physical interface and brought them up (here are the configurations I set up):
> -> eth3
>    DEVICE=eth3
>    BOOTPROTO=static
>    HWADDR=D4:85:64:4B:76:AB
>    ONBOOT=yes
>    #HOTPLUG=no
>    #DHCP_HOSTNAME=kdr-3k-4r-3o-07
>    BRIDGE=br0
>    TYPE=Ethernet
>
> -> br0
>    DEVICE=br0
>    TYPE=Bridge
>    BOOTPROTO=none
>    ONBOOT=yes
If this is a CentOS 5 machine with the CentOS-provided Xen 3.0 packages, then here is how I got my bridges to work with that setup:
* For dom0 I configured eth0 and eth1 as usual in /etc/sysconfig/network-scripts
* I did not configure br0 or br1 in network-scripts; instead I created a file called /etc/xen/scripts/my-network-script with this content:
   #!/bin/sh
   dir=$(dirname "$0")
   "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
   "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
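  Note that the script must be executable, otherwise xend cannot run it:

   chmod +x /etc/xen/scripts/my-network-script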
* Then I modified the file /etc/xen/xend-config.sxp in dom0 as follows:
   #(network-script network-bridge)
   (network-script my-network-script)
* Now rebooting dom0 made the bridges available
* After that I could configure them in my domU config:
vif = [ "mac=00:16:3E:69:29:25,bridge=xenbr0,script=vif-bridge","mac=00:16:3E:E6:B0:6D,bridge=xenbr1,script=vif-bridge" ]
* After starting the domU I could configure the interfaces in network-scripts using the hardware addresses specified in the domU config
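To make that last step concrete, here is a minimal sketch of such a file inside the domU, keyed on the MAC from the vif line above (the IP address is made up, adjust to your network):

   # /etc/sysconfig/network-scripts/ifcfg-eth1
   DEVICE=eth1
   HWADDR=00:16:3E:E6:B0:6D
   BOOTPROTO=static
   IPADDR=192.168.0.10
   NETMASK=255.255.255.0
   ONBOOT=yes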
It seems that bridging is configured a bit differently in different Xen versions, so this might not work if you are using some other kind of setup. With CentOS 6 and third-party Xen 4.1 packages this procedure did not work at all; instead I needed to do the following:
* in dom0 create the br0 and br1 devices in network-scripts (a sketch follows below this list)
* in dom0, put this in /etc/xen/xend-config.sxp instead:
   #(network-script network-bridge)
   (network-script /bin/true)
* and in dom0, use a different configuration in the domU config files, like this:
vif = [ "mac=00:16:3E:69:29:25,bridge=br0","mac=00:16:3E:E6:B0:6D,bridge=br1" ]
Hope this helps; unfortunately I am not familiar with virsh at all.
Best, Peter