On 05/13/2012 11:45 AM, aurfalien wrote:
Hi all,
Read many posts on the subject.
Using 802.3ad.
A few problems: I cannot ping some hosts on the network, even though they are all up. I also cannot resolve via DNS (the DNS server is one of the hosts I cannot ping), neither internal nor external DNS hosts. After unplugging the NICs and plugging them back in, I can no longer ping the default gateway.
When cold booting, it somewhat works: some hosts are pingable while others are not.
When restarting the network service via /etc/init.d/network, nothing is pingable.
Here are my configs:

ifcfg-bond0:
DEVICE=bond0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.10
NETMASK=255.255.0.0
NETWORK=10.0.0.0
TYPE=Unknown
IPV6INIT=no

ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

ifcfg-eth1:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

/etc/modprobe.d/bonding.conf:
alias bond0 bonding
options bond0 mode=5 miimon=100
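For reference, what the driver actually loaded can be checked with:

cat /proc/net/bonding/bond0

which prints the bonding mode, MII status and each slave's link state (generic check; bond0 is the name from the configs above).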
Bonding worked great in CentOS 5.x, but not so well for me in CentOS 6.2.
My goal is to get this working under bridging for KVM; I can only imagine the nightmare there, seeing that I can't even get a simple bond to work!
Any guidance is golden.
- aurf
I run KVM VMs, built and managed using libvirt, through bonded interfaces all the time.
I don't have a specific tutorial for this, but I cover all the steps to build a mode=1 (Active/Passive) bond and then route VMs through it as part of a larger tutorial.
Here are the specific sections I think will help you:
https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network
This section covers building three bonds, of which you only need one. In the tutorial, only the "IFN" bond and bridge (bond2 + vbr2) matter for your case.
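Not a substitute for the tutorial, but a rough sketch of the shape of those files on CentOS 6 (interface names follow the tutorial, the IP is borrowed from your bond0 config, and BONDING_OPTS replaces the options line in modprobe.d):

ifcfg-bond2:
DEVICE=bond2
BOOTPROTO=none
ONBOOT=yes
# mode=1 is active/passive; on EL6 the options can live here instead of modprobe.d
BONDING_OPTS="mode=1 miimon=100"
# the bond carries no IP itself; it is enslaved to the bridge
BRIDGE=vbr2

ifcfg-vbr2:
DEVICE=vbr2
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
# the host's IP moves onto the bridge
IPADDR=10.0.0.10
NETMASK=255.255.0.0

The eth* slave files stay as you have them, just with MASTER=bond2.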
https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Provisioning_vm000...
This covers all the steps used in the 'virt-install' call to provision the VMs, including telling them to use the bridge.
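The exact call is in the tutorial, but the bridge-relevant part looks roughly like this (every name, size and path below is a placeholder):

virt-install \
  --name vm0001 \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/vm0001.img,size=20 \
  --network bridge=vbr2 \
  --cdrom /path/to/install.iso \
  --os-variant rhel6

The piece that ties the VM to the bond is '--network bridge=vbr2'; everything else is ordinary provisioning.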
Hope that helps.