Hi all,
Read many posts on the subject.
Using 802.3ad.
A few problems: I cannot ping some hosts on the network, even though they are all up. I cannot resolve names via DNS; the DNS server is one of the hosts I cannot ping, and neither internal nor external DNS hosts work. Unplugging the NICs and plugging them back in then prevents pinging the default gateway.
After a cold boot it somewhat works: some hosts are pingable while others are not.
When restarting the network service via /etc/init.d/network, nothing is pingable.
Here are my configs:

ifcfg-bond0:
DEVICE=bond0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.10
NETMASK=255.255.0.0
NETWORK=10.0.0.0
TYPE=Unknown
IPV6INIT=no

ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

ifcfg-eth1:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

/etc/modprobe.d/bonding.conf:
alias bond0 bonding
options bond0 mode=5 miimon=100
Bonding worked great in CentOS 5.x, but not so well for me in CentOS 6.2.
My goal is to get this working under bridging for KVM; I can only imagine the nightmare, seeing as I can't even get a simple bond to work!
Any guidance is golden.
- aurf
SORRY typo;
options bond0 mode=5 miimon=100
is really
options bond0 mode=4 miimon=100
Confirm your switch supports LACP and those ports are configured on the switch for the aggregation.
Check dmesg and the bonding status in /sys/class/net/$BOND/bonding to make sure that the lowest layer is good before progressing up the layers to IP/TCP/Application.
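A few commands that can confirm the bond itself is healthy before chasing IP problems (paths are the standard bonding driver locations; bond0/eth0/eth1 are the names from the configs above):

# Overall bond state, per-slave link status and the LACP partner details:
cat /proc/net/bonding/bond0

# The same information exposed under sysfs:
cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/slaves

# Driver messages about slaves joining, failing over, or LACP negotiation:
dmesg | grep -i bond

# Physical link on each slave:
ethtool eth0 | grep -i 'link detected'
ethtool eth1 | grep -i 'link detected'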
I spent two months on bonding two NICs inside a box to a bridge in the same box. There is a bug about it, very prominent in the Fedora bugzillas. You cannot do it without some modification. libvirt loses some VMs, and there is no way to make it work that I know of except for the suggested changes, which I did not try.
If you look in your libvirt logs you will see XML bond errors, and thus it is impossible to do inside the box. This only applies if the bonded NICs and the bridge are all in the same box.
Also, the options should no longer go in bonding.conf, but in the bridge file itself.
In all my testing, all VMs worked except the one assigned vnet0; that one always got 'lost'. However, any attempt by the VM to send traffic out to the net would cause it to be found again.
This bug is not fixed in 6 or in the latest Fedora as of my last check, though there are self-made patches in the Fedora bugzilla.
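For reference, a minimal sketch of how EL6 typically lays out a bridge over a bond: BONDING_OPTS goes in ifcfg-bond0 in place of the old modprobe.conf options, the bond is enslaved to the bridge, and the IP address moves to the bridge file. Here br0 is a placeholder name, and the address is the one from the original post.

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BRIDGE=br0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.0.10
NETMASK=255.255.0.0
DELAY=0

# ifcfg-eth0 and ifcfg-eth1 keep MASTER=bond0 / SLAVE=yes as before.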
Hi Bob,
WOW, ok cool.
I will simply do 2 bridges and allocate some of my guests to each for a sort of manual load balancing.
Good info to know.
Really appreciate the feedback.
- aurf
And don't forget that a lot of the bond modes require a second device like a switch; I think 0 and 6 were the ones I tried.
I really wanted the bond, but decided to blow some IPs and just use extra bridges to try to balance. It was not fun at all finding this bug... lol.
I think the reason they do not want to get into it is that almost everyone uses bonds/bridges outside of the single server, where it is not an issue, and rewriting and debugging all that for the few of us crazy enough to do a single-box bridge-on-bond... well, we ain't gonna see that, according to the bugzilla responses I saw. Still, one can hope.
That's a real bummer.
I don't get why they think this.
I mean, what's so strange about bonding interfaces on a system anyway? Oh well, multiple bridges are fine for me.
Funny, my Blow Leopard (OS X 10.6.8) and Cryin (OS X 10.7) servers trunk fine with my Foundry switch at 802.3ad.
- aurf
From what I get, it is a problem with libvirt using a bridge that goes through a bond on the same machine. It must be rather involved to fix, and only a few people seem to use that route (like you and me).
I've been running 14 CentOS 5 VMs bridged over active-backup bonded interfaces (actually, over three sets of bonded interfaces) on a single Ubuntu 10.04 LTS KVM host for a couple of years now. The only real issue I have had is that during a host reboot the 'thundering herd' trying to autostart simultaneously sometimes doesn't reliably start all 14 VMs, and I have to manually launch the one or two VMs that fail to launch.
Also, I had to roll my own shutdown script because, for whatever reason, Ubuntu 10.04 thinks shooting running VMs in the head during a shutdown is a better approach than waiting for them to shut down properly on request.
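A roll-your-own graceful-shutdown script along those lines might look roughly like this. This is a hedged sketch, not Jerry's actual script; it parses plain `virsh list` output so it does not depend on newer virsh flags.

#!/bin/sh
# Ask every running guest to shut down, then wait up to TIMEOUT seconds
# before letting the host continue with its own shutdown.
TIMEOUT=300

# Skip the two header lines of `virsh list`; field 2 is the domain name.
running() {
    virsh list | awk 'NR > 2 && $2 != "" { print $2 }'
}

for guest in $(running); do
    virsh shutdown "$guest"
done

waited=0
while [ -n "$(running)" ] && [ "$waited" -lt "$TIMEOUT" ]; do
    sleep 5
    waited=$((waited + 5))
done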
On Sun, May 13, 2012 at 12:42:52PM -0700, Jerry Franz wrote:
I've been running 14 CentOS 5 VMs bridged over active-backup bonded interfaces (actually, over three sets of bonded interfaces) on a single Ubuntu 10.04 LTS KVM host for a couple of years now. The only real issue I have had is that during a host reboot the 'thundering herd' trying to autostart simultaneously sometimes doesn't reliably start all 14 VMs, and I have to manually launch the one or two VMs that fail to launch.
I used to see the same on RHEL 5-based KVM servers, but I haven't seen it with the RHEL 6-based servers I would recommend for KVM installs now.
Also, I had to roll my own shutdown script because, for whatever reason, Ubuntu 10.04 thinks shooting running VMs in the head during a shutdown is a better approach than waiting for them to shut down properly on request.
I also tend to favour shutdown/reboots for kvm guests instead of suspend/resume...
best regards,
Florian La Roche
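On EL6 this preference can also be expressed through the libvirt-guests service rather than a custom script. A hedged sketch of /etc/sysconfig/libvirt-guests follows; the values shown are illustrative, and the point is that the default behaviour of saving guest state is what brings guests back with stale clocks.

# /etc/sysconfig/libvirt-guests (relevant settings only)
ON_BOOT=start          # restart guests that were running when the host went down
ON_SHUTDOWN=shutdown   # prefer a clean guest shutdown over saving state (suspend)
SHUTDOWN_TIMEOUT=300   # seconds to wait for guests to shut down before giving up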
On Tue, May 15, 2012 at 7:41 AM, Florian La Roche Florian.LaRoche@gmx.net wrote:
I also tend to favour shutdown/reboots for kvm guests instead of suspend/resume...
Slightly OT now, but, Florian, can you take a look at
http://bugs.centos.org/view.php?id=5726
I think it has to do with shutdown/reboots versus suspend/resume.
Akemi
On Tue, May 15, 2012 at 07:52:05AM -0700, Akemi Yagi wrote:
http://bugs.centos.org/view.php?id=5726
I think it has to do with shutdown/reboots versus suspend/resume.
FWIW, I hit this last week; I had shut down the host to do a hardware upgrade (eSATA controller, 12TB of disks). After I brought the machine back up it all looked good, until the next morning when "logwatch" reports from a guest complained that ntp had shut down 'cos the time difference was too great.
Oops!
Yes, that confirms what the bug submitter described. Thanks for the note.
Akemi
Hello Akemi,
It seems this is indeed dependent on suspend/resume versus save/restore, and I'd suggest reporting this upstream to see if it can become an option rather than hardcoded behaviour.
best regards,
Florian La Roche
Thanks, Florian, for your note and also for adding a comment to the bug tracker.
Akemi
Note I'm speaking bonding only and not bridging here:
These days bonding is supposed to be done in the network-script files, not modprobe.conf:

# ifcfg-bond0:
DEVICE=bond0
IPADDR=10.0.0.6
NETMASK=255.255.255.0
#NETWORK=
#BROADCAST=
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=active-backup primary=em1 arp_interval=2000 arp_ip_target=10.0.0.1 arp_validate=all num_grat_arp=12 primary_reselect=failure"
Adjust accordingly.
-- Mikael.
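Since the original post targets 802.3ad rather than active-backup, the equivalent BONDING_OPTS line would look something like the following; lacp_rate and xmit_hash_policy here are illustrative choices and have to agree with the switch-side LACP configuration.

# ifcfg-bond0 (802.3ad variant; the switch ports must be in an LACP aggregate)
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow xmit_hash_policy=layer2+3"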
Hi Mikael,
I didn't put them in the .conf since that's deprecated in CentOS 6.
However, I did move the miimon etc. lines to my network scripts file and still no dice.
I didn't try your suggestions as it looks too much like a patch, not as clean as it used to be in version 5.
So I had basically done what you suggested, but without the arp lines.
I'm in no hurry for this, although I will keep your suggestions in my notes, as I may have an upcoming CentOS 6 server that absolutely needs bonding.
Thanks for the reply.
- aurf
I run KVM VMs, built and managed using libvirt, through bonded interfaces all the time.
I don't have a specific tutorial for this, but I cover all the steps to build a mode=1 (Active/Passive) bond and then route VMs through it as part of a larger tutorial.
Here are the specific sections I think will help you:
https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network
This section covers building three bonds, of which you only need one. In the tutorial, you only need to care about the "IFN" bond and bridge (bond2 + vbr2).
https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Provisioning_vm000...
This covers all the steps used in the 'virt-install' call to provision the VMs, which includes telling them to use the bridge.
Hope that helps.
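For the virt-install step, the bridge is picked up with --network bridge=... . A hypothetical invocation follows; every name, size and path here is a placeholder rather than something taken from the tutorial.

# Provision a guest attached to the vbr2 bridge (all values are placeholders).
virt-install \
  --name vm0001 \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/vm0001.img,size=20 \
  --network bridge=vbr2 \
  --os-variant rhel6 \
  --graphics vnc \
  --cdrom /path/to/install.iso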
WOW, thanks Digi!
Will def read it.
I've another server to test and roll out soon and want to revisit bond+bridge in CentOS 6.
This one unfortunately needs to go prime time tomorrow, and I don't have enough resources (i.e., brain power) to attempt the bond endeavor.
- aurf