-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi list,
I searched the web for bug reports on a phenomenon I see on *multiple* machines of a customer, but I didn't find an exact match. So, I'd like to ask here whether anyone else has run into this.
I have multiple CentOS 6 hosts using KVM to run a number of LVM-backed virtual machines.
Software releases are as follows:
[root@fe00 ~]# rpm -qa|egrep '(virt|kvm)'
virt-viewer-0.5.6-8.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.7.x86_64
libvirt-client-0.10.2-29.el6_5.7.x86_64
qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64
libvirt-0.10.2-29.el6_5.7.x86_64
python-virtinst-0.600.0-18.el6.noarch
[root@fe00 ~]# uname -r
2.6.32-431.17.1.el6.x86_64
The VMs (here: two) have the "default" connection provided by KVM (heading to the internet), plus a bridged interface on a second NIC within the VMs that connects to a high-performance backbone, where sensitive data is kept and bandwidth is an issue (or rather, must not be :):
[root@fe00 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br1             8000.001b21xxxxxx       yes             eth1
                                                        vnet1
                                                        vnet3
virbr0          8000.525400xxxxxx       yes             virbr0-nic
                                                        vnet0
                                                        vnet2
br1 is the bridge connected to the backbone; virbr0 is the NAT-based default network set up by libvirt.
After some time of inactivity on the virbr0 interface, connectivity is *lost* from *within* the VMs. The interface(s) lose their IP address; running dhclient(8) does not help.
To get the machine back on track, ``service libvirtd restart'' has to be issued: the vanished iptables rules then show up again. (In contrast to what an Ubuntu document [0] suggests, this fixes it without shutting the VM(s) down.) Starting dhclient(8) within the VMs then restores connectivity.
Just out of curiosity I let one of the VMs generate some ICMP traffic on that interface now and then -- ``ping -i 10 some.nice.host'' is sufficient to keep the interface alive.
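For anyone who wants to script that workaround, here is a minimal sketch. It assumes libvirt's usual 192.168.122.0/24 default network, where the host answers on 192.168.122.1; override GATEWAY if your subnet differs.

```shell
#!/bin/sh
# Keepalive sketch for a guest's virbr0-side interface.
# Assumption: libvirt's default network gateway is 192.168.122.1.
GATEWAY=${GATEWAY:-192.168.122.1}
# One ICMP echo every 10 seconds is enough to keep the NAT path in use.
CMD="ping -i 10 $GATEWAY"
# Print the command instead of exec'ing it here, so the line can be
# copied into /etc/rc.local or a cron @reboot entry as-is:
echo "$CMD"
```

Pinging the host-side gateway instead of some remote host keeps the keepalive traffic local to the NAT network.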
The virbr0-attached interfaces are almost exclusively used to fetch updates etc. -- management and load-balanced services are served via the bridged interface.
Is this a bug or a feature?
Best,
Timo
[0] -- https://help.ubuntu.com/community/KVM/Networking#Network_Bridge_Losing_Conne...
On Wed, Jun 4, 2014 at 7:40 PM, Timo Schöler timo@riscworks.net wrote:
[snip]
> After some time of inactivity on the virbr0 interface, from *within* the VMs connection is *lost*. The interface(s) lose their IP; running dhclient(8) is not of any use.
> To get the machine back onto track, ``service libvirtd restart'' has to be issued: Vanished iptables rules show up again. (This, in contrast to an Ubuntu document [0], fixes it without shutting the VM(s) down.) Starting dhclient(8) within the VMs gets connectivity back.
Have you verified that the iptables rules disappear? That is:
* Initially, the NAT rule is present
* After inactivity, the NAT rule disappears
* After restarting libvirtd, the NAT rule re-appears?
-George
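George's three checks can be scripted. The sketch below stands in for live output of ``iptables -t nat -S POSTROUTING'' with a sample string; the MASQUERADE rule shown is the one libvirt typically installs for the default network -- treat it as an assumption and compare against your own host.

```shell
#!/bin/sh
# Hypothetical captured output of `iptables -t nat -S POSTROUTING`,
# matching what libvirt usually installs for 192.168.122.0/24:
SAMPLE='-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE'

# Report whether a MASQUERADE rule is present in the given ruleset dump.
check_nat_rule() {
    echo "$1" | grep -q 'MASQUERADE' && echo present || echo missing
}

check_nat_rule "$SAMPLE"   # step 1: check that the rule is there initially
check_nat_rule ""          # step 2: re-check after the idle period
# step 3: `service libvirtd restart` on the host, then re-run step 1.
```

On a live host you would feed ``iptables -t nat -S POSTROUTING`` output into check_nat_rule at each step instead of the canned strings.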
On 06/05/2014 12:37 PM, thus George Dunlap spake:
On Wed, Jun 4, 2014 at 7:40 PM, Timo Schöler timo@riscworks.net wrote:
>> [snip]
> Have you verified that the iptables rules disappear? That is:
> * Initially, the NAT rule is present
> * After inactivity, the NAT rule disappears
> * After restarting libvirtd, the NAT rule re-appears?
> -George
Hi,
yes, it happens exactly that way.
Timo
Timo, I can confirm that I'm seeing the same behavior. Your email thread here was very helpful in that it gave me two workarounds: (1) ``service libvirtd restart'' and (2) running a background keepalive ping.
I'm hosting on a RHEL 6.4 system with similar version numbers:
[root@redpant centosimage]# rpm -qa|egrep '(virt|kvm)'
libvirt-python-0.10.2-18.el6_4.2.x86_64
libvirt-0.10.2-18.el6_4.2.x86_64
virt-manager-0.9.0-18.el6.x86_64
virt-viewer-0.5.2-18.el6_4.2.x86_64
virt-top-1.0.4-3.15.el6.x86_64
qemu-kvm-0.12.1.2-2.355.el6_4.2.x86_64
python-virtinst-0.600.0-15.el6.noarch
virt-what-1.11-1.2.el6.x86_64
virt-who-0.8-5.el6.noarch
libvirt-java-devel-0.4.9-1.el6.noarch
libvirt-devel-0.10.2-18.el6_4.2.x86_64
libvirt-client-0.10.2-18.el6_4.2.x86_64
libvirt-java-0.4.9-1.el6.noarch
Mojo
On Thu, Jun 5, 2014 at 12:10 AM, Timo Schöler timo@riscworks.net wrote:
[root@fe00 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br1             8000.001b21xxxxxx       yes             eth1
                                                        vnet1
                                                        vnet3
virbr0          8000.525400xxxxxx       yes             virbr0-nic
                                                        vnet0
                                                        vnet2
Please share the config file that defines virbr0-nic and virbr0
-- Arun Khan
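For reference: on these hosts virbr0 and virbr0-nic are typically not defined by ifcfg files but by libvirt's "default" network, which can be dumped with ``virsh net-dumpxml default''. A typical definition (UUID and MAC elided) looks roughly like the fragment below -- treat the exact addresses as assumptions and check your own dump:

```xml
<network>
  <name>default</name>
  <bridge name='virbr0' stp='on' delay='0' />
  <forward mode='nat'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
    </dhcp>
  </ip>
</network>
```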