With the current talk on bonding, I have a few questions of my own.
I'm setting up a KVM host with CentOS 6.3 x86_64 on which I'd like to attach the VMs to a bonded interface. My target setup is one where two GigE NICs are bonded and then the KVM bridge interface is attached to the bonded interface.
Initially I tried to use the balance-alb mode (mode6), but had little luck: receiving traffic on the bond appeared to be non-functional from the perspective of a VM. After some reading [0][1], I switched the mode to balance-tlb (mode5), and hosts are now reachable.
See bottom of [0] for a note on "known ARP problem for bridge on a bonded interface".
I'd prefer mode5 or mode6, since either would balance traffic across my slave interfaces and I wouldn't need to worry about 802.3ad support (mode4) on the switch this host will be connected to. But the way things look, mode6 isn't going to work out for me. (Maybe experimenting with mode4 is the way to go.)
[0] http://www.linux-kvm.org/page/HOWTO_BONDING
[1] https://lists.linux-foundation.org/pipermail/bridge/2007-April/005376.html
My question to the members of this list is what bonding mode(s) are you using for a high availability setup? I welcome any advice/tips/gotchas on bridging to a bonded interface.
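For reference, on CentOS 6 the bonding mode can be set per-bond via BONDING_OPTS in the bond's ifcfg file, which makes switching modes for these experiments a one-line edit plus a network restart. A minimal sketch, assuming the bond is bond0 and the KVM bridge is br0 (both names are placeholders, not my actual config):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- sketch only
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0                                  # attach the bond to the KVM bridge
BONDING_OPTS="mode=balance-tlb miimon=100"  # mode5; change to mode=4 etc. to experiment
```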
Thanks!

---~~.~~---
Mike // SilverTip257 //
On 09/06/2012 06:19 PM, SilverTip257 wrote:
Initially I tried to use the balance-alb mode (mode6), but had little luck (receiving traffic on the bond appeared to be non-functional from the perspective of a VM). After some reading [0] [1] - I switched the mode to balance-tlb (mode5) and hosts are now reachable. [snip]
You probably want to either use CentOS 6.2 or wait for 6.4. Apparently there have been some changes in the network device infrastructure in the 6.3 kernels which resulted in bonding issues, especially when used with VLAN tagging. I've been bitten by this; the issues have been addressed in the Red Hat Bugzilla, but it's not entirely clear which kernel contains all the final fixes.
Regards, Dennis
On 09/06/2012 12:19 PM, SilverTip257 wrote:
My question to the members of this list is what bonding mode(s) are you using for a high availability setup? I welcome any advice/tips/gotchas on bridging to a bonded interface.
I'm not sure I'd call this high availability... but here's an example of bonding two ethernet ports (eth0 and eth1) together into a bond (mode 4) and then setting up a bridge for a VLAN (id 375) that some VMs can run on:
[root@kvm01a network-scripts]# grep -iv hwadd ifcfg-eth0
DEVICE=eth0
SLAVE=yes
MASTER=bond0
[root@kvm01a network-scripts]# grep -iv hwadd ifcfg-eth1
DEVICE=eth1
SLAVE=yes
MASTER=bond0
[root@kvm01a network-scripts]# cat ifcfg-bond0 | sed 's/[1-9]/x/g'
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=x0.xxx.xx.xx
NETMASK=xxx.xxx.xxx.0
DNSx=xx0.xxx.xxx.xxx
DNSx=x0.xxx.xx.xx
DNSx=x0.xxx.xx.x0
[root@kvm01a network-scripts]# cat ifcfg-br375
DEVICE=br375
BOOTPROTO=none
TYPE=Bridge
ONBOOT=yes
[root@kvm01a network-scripts]# cat ifcfg-bond0.375
DEVICE=bond0.375
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
BRIDGE=br375
[root@kvm01a network-scripts]# cat /etc/modprobe.d/local.conf
alias bond0 bonding
options bonding mode=4 miimon=100
[root@kvm01a network-scripts]# grep Mode /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
[root@kvm01a network-scripts]# egrep '^V|375' /proc/net/vlan/config
VLAN Dev name    | VLAN ID
bond0.375        | 375  | bond0
Repeat ad nauseam for the other VLANs you want to put VMs on (assuming your switch is trunking them to your hypervisor).
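After bringing the new config up, it's worth sanity-checking that the bond, VLAN, and bridge all wired together the way you expect. A sketch, assuming the device names above:

```shell
service network restart       # re-read the ifcfg files

cat /proc/net/bonding/bond0   # bond mode, slave status, 802.3ad partner details
cat /proc/net/vlan/config     # VLAN sub-interfaces and their parent devices
brctl show br375              # the bridge should list bond0.375 as a port
```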
See also http://backdrift.org/howtonetworkbonding via http://irclog.perlgeek.de/crimsonfu/2012-08-15#i_5900501
Phil
@Dennis Good to know, but rather than strip it back to 6.2, I'll just find a suitable solution using 6.3. Had I known I'd have these problems with mode6, I probably would have kept this box at the 6.2 release.
@Phil Thanks for the example! I'll have to give mode4 a shot.
This makes me wish that 'work' used bonding on the production KVM hosts rather than just hooking a bridge to each individual interface and attaching the hosts to the bridges. So with our setup there currently is no network load balancing or redundancy for the VMs (and I'd like to fix that).
Thank you both for the advice. Have a great weekend!

---~~.~~---
Mike // SilverTip257 //
On Thu, Sep 6, 2012 at 4:35 PM, Philip Durbin philipdurbin@gmail.com wrote:
I'm not sure I'd call this high availability... but here's an example of bonding two ethernet ports (eth0 and eth1) together into a bond (mode 4) and then setting up a bridge for a VLAN (id 375) that some VMs can run on: [snip]

_______________________________________________
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt
Could try building a 3.6 kernel from the git repository and see if they've resolved the issue there.
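Roughly like this (a sketch only; the config approach is up to you):

```shell
# fetch Linus's tree and build a mainline kernel -- sketch only
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
make localmodconfig           # base the config on the modules loaded on this box
make -j"$(nproc)"
make modules_install install  # then check the bootloader entry and reboot
```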
-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-
Eskimo North
Linux Friendly Internet Access, Shell Accounts, and Hosting.
Knowledgeable human assistance, not telephone trees or script readers.
See our web site: http://www.eskimo.com/  (206) 812-0051 or (800) 246-6874.
On Fri, 7 Sep 2012, SilverTip257 wrote:
Date: Fri, 7 Sep 2012 12:50:49 -0400
From: SilverTip257 <silvertip257@gmail.com>
Subject: Re: [CentOS-virt] [Advice] CentOS6 + KVM + bonding + bridging

@Dennis Good to know, but rather than strip it back to 6.2, I'll just find a suitable solution using 6.3. [snip]
On Fri, Sep 7, 2012 at 11:17 AM, Nanook nanook@eskimo.com wrote:
Could try building a 3.6 kernel from the git repository and see if they've resolved the issue there.
If the latest stable kernel (3.5.x) from kernel.org would be useful, you can find kernel-ml-3.5.3 in ELRepo ( http://elrepo.org/tiki/kernel-ml ).
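Installation is the usual ELRepo procedure; a sketch (the release RPM version number may differ — see the kernel-ml page for the current one):

```shell
# import the ELRepo signing key and enable the repository
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm

# kernel-ml lives in the elrepo-kernel repo, which is disabled by default
yum --enablerepo=elrepo-kernel install kernel-ml
```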
Akemi