Hi,
Wondering if this is the proper bridging technique to use for CentOS 6 + KVM:
http://wiki.centos.org/HowTos/KVM
Before I embark on this again, I would like to do it by the book.
Thanks in advance,
- aurf
On 20/11/13 19:04, aurfalien wrote:
<snip>
Personally, I do this:
https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Configuring_The_Bri...
It gives the VMs direct access to the outside network, as if they were normal servers. I've used this setup for years without issue under many different VMs with various OSes.
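In rough strokes, the host side of that setup looks like this on EL6 (a minimal sketch; the device names and IP here are illustrative, not from the tutorial):
====
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- the real NIC, slaved to the bridge
DEVICE=eth0
ONBOOT=yes
BRIDGE=vbr0

# /etc/sysconfig/network-scripts/ifcfg-vbr0 -- the bridge; the host's IP lives here
DEVICE=vbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
====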
cheers
On Nov 20, 2013, at 4:13 PM, Digimer wrote:
<snip>
Many many thanks, will use it.
Sounds like it will bode well concerning jumbo frames.
- aurf
On 20/11/13 19:25, aurfalien wrote:
<snip>
Jumbo frames should be fine. I don't generally use it myself, but I have tested it with success. Just be sure to enable it on the bridge and slaved devices. Simply adding 'MTU="xxxx"' to each ifcfg-x file should be sufficient.
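For example, something like this appended to both the NIC's and the bridge's config (a sketch; eth0/br0 are assumed names):
====
# in /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-br0 alike:
MTU="9000"
====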
On Nov 20, 2013, at 4:44 PM, Digimer wrote:
<snip>
Should I need to add a udev rule?
- aurf
On 20/11/13 19:47, aurfalien wrote:
<snip>
No. I only muck with udev when I am remapping the real devices' ethX names.
On Nov 20, 2013, at 4:47 PM, Digimer wrote:
<snip>
Man, really sorry to bug you; this seems benign, as I've done this numerous times, but on non-bridged interfaces.
When I add MTU=9000 to the bridged interface, I get:
RTNETLINK answers: Invalid argument
My physical interface is showing jumbo but the bridged interface is showing standard.
- aurf
On 20/11/13 20:49, aurfalien wrote:
<snip>
No bother at all. It has been a bit since I tested it though, so I will have to experiment a bit myself....
Done!
I remember the trick now: the bridge will take the MTU of the _lowest_ MTU device connected to it. So in my case here, I upped the MTU of the backing ethX and bondY devices, but the bridge stayed at 1500.
Trying to adjust it failed with 'SIOCSIFMTU: Invalid argument', which is the kernel's way of saying that the MTU is too large for the device (usually hit when surpassing the hardware's real MTU). Being a bridge though, this didn't make sense. When I upped the MTU of the vnetX devices though, the bridge jumped up on its own.
So I suspect that if you do 'brctl show' and then check the MTU of the connected devices, one of them will still have a low MTU. Push it up and then do a non-fragmenting ping 28 bytes smaller than your MTU size. If the ping works, you know the MTU is increased.
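For example, to verify a 9000-byte MTU end to end (hypothetical peer name; '-M do' forbids fragmentation, and 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000):
====
ping -M do -s 8972 peer-host
====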
All this said, my experience with Realtek NICs left me detesting them. I've seen cards advertised as supporting "jumbo frames" that only go up to silly sizes like 7200. Further, in benchmarks, the performance dropped above an MTU of something like 4000.
If you want to determine the actual maximum MTU of a given interface, this might help;
https://github.com/digimer/network_profiler/blob/master/network_profiler
It's a little script that uses passwordless SSH between two nodes and automatically determines the maximum MTU between the two machines and then benchmarks at 100 byte intervals. When it's done, it spits out a graph showing the full and half-duplex results so you can see which MTU was the best to use.
Once you've profiled the real devices, you can then work on the MTU of the higher-layer devices like bonds, bridges and virtual interfaces.
hth
On 20/11/13 23:03, Digimer wrote:
<snip>
Another update;
To make sure the VMs' vnetX devices are created with a larger MTU, you *still* need to update udev[1].
Append to /etc/udev/rules.d/70-persistent-net.rules;
====
# Make all VMs' vnetX devices come up with an MTU of 9000.
SUBSYSTEM=="net", ACTION=="add", KERNEL=="vnet*", ATTR{mtu}="9000"
====
Assuming you find that you can use an MTU of '9000', of course. No need to reboot or even restart networking. Just add that line and then provision/boot your VMs. If the VMs are already running, you can adjust the MTU of the existing 'vnetX' devices with:
ifconfig vnetX mtu 9000
Cheers!
PS - Credit for the udev rule:
http://linuxaleph.blogspot.ca/2013/01/how-to-network-jumbo-frames-to-kvm-gue...
Stay out of udev if you can. It's often overwritten by component addition and manipulation. MTU is parsed, and overridden, by options in /etc/sysconfig/network-scripts/ifcfg-[device]. I find it much safer to read and manage there, and if new devices are added or replaced, the behavior is dominated by the "HWADDR"-associated config files there, no matter what "udev" thinks the device number or name should be.
On Wed, Nov 20, 2013 at 11:32 PM, Digimer <lists@alteeve.ca> wrote:
<snip>
The problem is that there are no ifcfg-vnetX config files. They are dynamically created as VMs are created or migrated to a node. You could manually (or via script) change the MTU, but that would mean that the MTU on the bridge would drop momentarily when new VMs start. This could break network traffic for any existing VMs (or real devices) using large frames.
I'm not a fan of udev either, but in this case, it is the best option. Of course, I am certainly open to hearing alternative methods if they exist.
On 21/11/13 08:39, Nico Kadel-Garcia wrote:
<snip>
I was under the impression that the relevant MTU settings were on the *node's* local ifcfg-eth* configurations. Did something change with KVM internal networking in the last year?
On Thu, Nov 21, 2013 at 1:03 PM, Digimer <lists@alteeve.ca> wrote:
<snip>
What you do in the VMs does not impact the hosts, so I didn't speak to that. Having the bridge, interfaces, switches and vnets at 9000 (for example) doesn't immediately enable large frames in the virtual servers. It simply means that all of the links between the VM and other devices on the network are ready for jumbo frames.
Imagine this;
{real switch} | {ethX + ethY} | {bondX} | {vbr0} | {vnetX} | {VM's eth0}
All of these devices need to have their MTU set to your desired value. If any one of these is still 1500, then only standard frames will be able to traverse them. (A quick way to audit the chain is sketched after the list below.)
* real switch; Log into it and make sure jumbo frames are enabled
* ethX + ethY; If you are using bonding, be sure both/all slaved interfaces are set to use a large frame.
* bondX; Again if you use a bond, make sure the bondX interface has a large frame.
* vbr0; The bridge cannot be set to a specific MTU size. It will use the lowest MTU of the various devices / interfaces connected to it.
* vnetX; These are the "virtual network cables" that are used to "plug in" a VM's interface to the bridge. This is not new by any means. In the real world, network cables don't have settable MTUs, of course. In the virtual world, though, they do. These interfaces are spontaneously created and destroyed as VMs come and go. This is what the udev rule is for, because these "virtual network cables" don't have traditional ifcfg-X files.
* VM's eth0; This is the (emulated) network card in your virtual server. If you told the hypervisor to replicate an Intel e1000 card or use the virtio-net driver, you can set a large MTU. However, if you used something like an emulated Realtek card, those don't support jumbo frames, so their emulated counterparts will not support large frames either.
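As promised above, a quick audit of the whole chain on the host might look like this (a sketch; substitute your own device names):
====
for dev in eth0 eth1 bond0 vbr0 vnet0; do
    echo "$dev: $(ip link show "$dev" | grep -o 'mtu [0-9]*')"   # each device's current MTU
done
brctl show    # confirm which devices are actually plugged into which bridge
====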
hth
digimer
On 21/11/13 13:32, Nico Kadel-Garcia wrote:
<snip>
On Nov 21, 2013, at 10:48 AM, Digimer wrote:
<snip>
Hi,
I seem to lack a vnet to bridge device.
When I go to change my interface on the VM using the GUI, I do not see an option for "Host device vnet# (Bridge 'br6')".
Instead I see "Host device eth6 (Bridge 'br6')". So before creating one via:
brctl addif ...
Let me explain my config;
eth0 - standard MTU
eth1 - disabled
*eth6 - 10Gb at jumbo
* This card was added after KVM was set up and running.
My ifconfig output;
br0       Link encap:Ethernet  HWaddr 00:25:90:63:9F:7A
          inet addr:10.0.10.218  Bcast:10.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::225:90ff:fe63:9f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:754670 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:160904242 (153.4 MiB)  TX bytes:51752758 (49.3 MiB)

br6       Link encap:Ethernet  HWaddr 00:05:33:48:7B:29
          inet addr:10.0.10.220  Bcast:10.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::205:33ff:fe48:7b29/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:4130 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11150 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:131498 (128.4 KiB)  TX bytes:513156 (501.1 KiB)

eth0      Link encap:Ethernet  HWaddr 00:25:90:63:9F:7A
          inet6 addr: fe80::225:90ff:fe63:9f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3379929 errors:18 dropped:0 overruns:0 frame:18
          TX packets:3565007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:840911383 (801.9 MiB)  TX bytes:3519831013 (3.2 GiB)
          Memory:fbbe0000-fbc00000

eth6      Link encap:Ethernet  HWaddr 00:05:33:48:7B:29
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Memory:fbd40000-fbd7ffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:185130 errors:0 dropped:0 overruns:0 frame:0
          TX packets:185130 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:138905226 (132.4 MiB)  TX bytes:138905226 (132.4 MiB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:CE:7A:65
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11139 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:512410 (500.4 KiB)

vnet0     Link encap:Ethernet  HWaddr FE:30:48:7E:65:72
          inet6 addr: fe80::fc30:48ff:fe7e:6572/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:1045 errors:0 dropped:0 overruns:0 frame:0
          TX packets:697730 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:119723 (116.9 KiB)  TX bytes:175334262 (167.2 MiB)

vnet1     Link encap:Ethernet  HWaddr FE:16:36:0E:E7:F4
          inet6 addr: fe80::fc16:36ff:fe0e:e7f4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3494450 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3243369 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:3466241191 (3.2 GiB)  TX bytes:822212316 (784.1 MiB)
brctl show;
bridge name     bridge id               STP enabled     interfaces
br0             8000.002590639f7a       no              eth0
                                                        vnet0
                                                        vnet1
br6             8000.000533487b29       no              eth6
virbr0          8000.525400ce7a65       yes             virbr0-nic
Should I have a virbr6?
I'm obviously pretty lost and must admit I sorta hate bridging in KVM.
- aurf
I'm not sure what you are asking.
You should not see the vnetX devices from the VM (or even the VM's definition file). They're created as needed to link the VM's interface to the bridge. Think of them as simple network cables.
Some of the formatting isn't showing well on my mail client (text only), so I am having a little trouble parsing some of the data...
If the VMs are using br6, then you see that it's already at 9000, so you should be able to use 9000 from inside the VM as well. Trick is, the vnetX devices are connected to the br0 bridge instead, which is set to 1500 because eth0 is still 1500. So at this point, the VMs are traversing br0, not br6.
As for 'virbr0', that is libvirtd's default NAT'ed bridge. I don't recommend using those. I usually destroy them, personally.
So to fix your problem, you need to tell the VMs to use br6. If you want to use jumbo frames on br0, you need to increase the MTU of eth0. Remember that the bridge will use the MTU of the lowest connected device.
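If you do want to get rid of the default NAT bridge, something like this should do it (assuming the stock libvirt network name, 'default'):
====
virsh net-destroy default              # tear down virbr0 now
virsh net-autostart default --disable  # keep it from coming back at boot
====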
On Nov 21, 2013, at 2:24 PM, Digimer wrote:
<snip>
So far, for my current VMs that work, I see their network as:
Host device vnet 0 (Bridge 'br0')
I do not see a;
Host device vnet# (Bridge 'br6')
My interfaces of interest are set to jumbo, so that's not the problem. I think the problem is that I am missing the vnet bridge device for eth6.
So I'm curious why it's not there, and how do I create it?
- aurf
On 21/11/13 17:32, aurfalien wrote:
<snip>
I can't speak to the tools you are using, but I can say that this is where the bridge is defined in the VM's XML definition file:
====
[root@an-c05n01 ~]# cat /shared/definitions/vm01-win2008.xml | grep vbr -B 2 -A 5
    <interface type='bridge'>
      <mac address='52:54:00:8e:67:32'/>
      <source bridge='vbr2'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
====
Try changing: <source bridge='br0'/> -> <source bridge='br6'/> and restart the VM.
On Nov 21, 2013, at 2:36 PM, Digimer wrote:
<snip>
It already has the source bridge as br6.
But I think I need to have a;
vnet6 to br6 relationship defined somewhere.
Right now I only see;
Host device eth6 to br6 but I need vnet6 to br6 or something like that.
Currently, while my guest VM sees its interface via ifconfig, it cannot get any packets to or from it.
This is why I feel the need for a vnet to br6.
- aurf
On 21/11/13 17:42, aurfalien wrote:
<snip>
The 'vnetX' number doesn't relate to the interface, bridge or anything else. The vnetX number is a simple sequence that increments each time a VM is started. So don't think that you need 'vnet6'... it can be anything.
The 'brctl show' output from earlier showed that both vnet0 and vnet1 were connected to br0. You can try using the bridge utils to remove them from br0 and connect them to br6 as a test.
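Concretely, that test might look like this (assuming the VM's virtual cable is vnet1 and the uplink bridge is br6):
====
brctl delif br0 vnet1   # unplug vnet1 from br0
brctl addif br6 vnet1   # plug it into br6
brctl show              # vnet1 should now be listed under br6
====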
On Nov 21, 2013, at 2:45 PM, Digimer wrote:
<snip>
Well, when I remove vnet1 from br0 and add vnet1 to br1, I lose connectivity with my VMs.
No biggy so I reboot my entire host.
Then vnet1 shows back up under br0.
I just don't understand enough about this to get a clue, depressing.
- aurf
On 21/11/13 18:20, aurfalien wrote:
<snip>
Think of each bridge as if it were a physical switch.
When you detached vnet1 from br0, you unplugged it from a switch. When you attached it to br1, you plugged it into another switch.
If there is no connection out to your network/internet on a given switch, then anything plugged into that switch will go nowhere. Same with bridges.
You seemed to indicate earlier that the main connection was on br6. Is this true? If so, then "switch" br6 is the switch with the "uplink" to your network. Plug a VM into it and you can route out through it.
When you rebooted the VM, the hypervisor read the definition file. That definition file says to plug in the server to br0. So it makes sense that the reboot reconnected it to br0.
If you want to use jumbo frames on the br0 switch, you need to make sure the interfaces connected to it are all set to your desired MTU size.
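To make the move to br6 stick across VM restarts, change it in the definition rather than live on the bridge; a sketch, assuming the guest is named 'myvm':
====
virsh edit myvm
# then change
#   <source bridge='br0'/>
# to
#   <source bridge='br6'/>
# and restart the guest; its new vnetX will be created on br6.
====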
This is interesting stuff. I do note that the "virt-manager" tool, and NetworkManager, give *no* insight or management detail sufficient to resolve this stuff. Note also that dancing through all the hoops to get this working, end-to-end, is one of the big reasons that most environments refuse to even *try* to use jumbo frames, as helpful as they sometimes are for heavy data transfers.
On Thu, Nov 21, 2013 at 6:58 PM, Digimer <lists@alteeve.ca> wrote:
<snip>
It's not so much hard as it is knowing all the hops in your network. If anything along the chain has a low MTU, the whole route is effectively reduced.
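One quick way to spot that weak link from a host (hypothetical target name; tracepath ships in iputils on EL6):
====
tracepath far-host    # watch the 'pmtu' value to see where the path MTU drops
====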
On 21/11/13 20:20, Nico Kadel-Garcia wrote:
<snip>
Sorry guys, I've tried and tried, no dice.
Seems like I am missing a vnet1, vnet2, etc... to br1, br2, etc... association.
I can see where the vnet# gets created upon VM startup.
And based on how my VM XML file is set, it will go to either br0, br1, br2, etc...
But in my case, the only interface that works is vnet0 for all my VMs.
In the CentOS virtual machine manager, for whatever NIC you choose, there is a drop-down option for the virtual network interface.
For the source device, I only ever see a vnet0 to br0. For my other bridges, there is only eth# to vnet#.
The configs for this are rather simple, and I don't know where else to look;
various /etc/sysconfig/network* files
and the VM xml config.
Everything is set to the same MTU, whether standard or jumbo, but no matter what, my VMs' network interfaces only work when set to vnet0, as it's connected to br0.
I cannot get br6 to show with vnet2, for example. Even my vnet1 is connected not to br1 but rather to br0.
However, in the UI as mentioned before, I do not see a vnet1 to br1 relationship.
Are there any other config files I can look at?
- aurf On Nov 21, 2013, at 5:52 PM, Digimer wrote:
It's not so much hard as it is knowing all the hops in your network. If anything along the chain has a low MTU, the whole route is effectively reduced.
On 21/11/13 20:20, Nico Kadel-Garcia wrote:
This is int4eresting stuff. I do note that the "virt-manager" tool, and NetworkManager, give *no* insight and detailed management sufficient to resolve this stuff. Note also that dancing through all the hoops to get this working, end-to-end, is one of the big reasons that most environments refuse to even *try* to use jumbo frames, as helpful as they sometimes are to heavy data transfers.
On Thu, Nov 21, 2013 at 6:58 PM, Digimer lists@alteeve.ca wrote:
On 21/11/13 18:20, aurfalien wrote:
On Nov 21, 2013, at 2:45 PM, Digimer wrote:
The 'vnetX' number doesn't relate to the interface, bridge or anything else. The vnetX number is a simple sequence that increments each time a VM is started. So don't think that you need 'vnet6'... it can be anything.
The 'brctl show' output from earlier showed that both vnet0 and vnet1 were connected to br0. You can try using the bridge utils to remove them from br0 and connect them to br6 as a test.
Well, when I remove vnet1 from br0 and add vnet1 to br1, I lose connectivity with my VMs.
No biggie, so I reboot my entire host.
Then vnet1 shows back up under br0.
On 22/11/13 17:11, aurfalien wrote:
Why do you have so many bridges? In almost all cases, only one bridge is needed. The bridge should connect to a real interface to get to the outside world. Then all VMs should point to that bridge.
I think you might be over-complicating things.
Cancel my last email, as I peeked at a server I set up last year w/o issue that has multiple interfaces. It's working, no issue.
I don't recall but can you gentlemen tell me if there are any routes that need to be set?
My guest VMs, being on a 2nd or 3rd NIC interface, can't get an IP via DHCP, and when set statically cannot send/recv packets.
I vaguely recall setting routes on the working box from last year but forgot :)
- aurf
On 22/11/13 18:11, aurfalien wrote:
We're not all gentlemen. ;)
So you have multiple separate networks? If so, then I assume you have a bridge per network? If so, then the server you configure will see the outside network that the given bridge connects to.
Remember, a bridge is just a "software network switch". If the "switch" is not connected to a network with a DHCP server on it, it will not get an answer to DHCP requests.
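You can see at a glance which "switch" has which uplink with brctl (the output below is illustrative only; bridge IDs and interface names will differ on your host):

    # brctl show
    bridge name   bridge id           STP enabled   interfaces
    br0           8000.001122334455   no            eth0
                                                    vnet0
    br6           8000.66778899aabb   no            eth6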
On Nov 22, 2013, at 3:51 PM, Digimer wrote:
So you have multiple separate networks?
Well no, I have 1 network that my host is connected to. This host has 2 active NICs, eth0 1Gb (which has a corresponding br0) and eth6 10Gb (which has a corresponding br6).
It also has one inactive (not connected) NIC, eth1, which has a br1 associated with it.
Any and all VMs configured on this host can send/receive packets while on br0.
But when I set any of those VMs to use br6, no routing occurs.
So while I have a bridge per NIC, I only have 1 network, 1 subnet, 1 gateway etc...
I've looked at the diff between my working server having 6 NICs and my non-working server having 2 active NICs, and don't see any difference.
- aurf
OMG!!!
So let me first give an analogy.
Me; So I have a flat tire.
You; No, it's inflated and looks fine.
Me; No, I'm tellin ya it's flat.
You; I just checked tire pressure, all is well.
Me; Holy sh$#, the road is jacked up giving me the illusion of a flat!
So I am able to ping my br6 interface from another host, so I assumed it was up. But upon examining ip route, I see my default route is via my primary interface br0, so it answered back even though I pinged br6.
I walk over to the KVM host and eth6 is physically unplugged!
It's plugged in now and all is well.
Dude, so sorry.
Now, to fix it so a ping will result in accuracy.
- aurf
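One common way to stop one interface answering ARP for another's IP (standard Linux ARP-flux tuning, offered here as a suggestion rather than something tested on this box):

    # /etc/sysctl.conf
    net.ipv4.conf.all.arp_ignore = 1     # reply to ARP only if the target IP lives on the receiving interface
    net.ipv4.conf.all.arp_announce = 2   # always pick the best local source address for ARP requests

    # apply without a reboot:
    sysctl -p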
On 22/11/13 19:19, aurfalien wrote:
Been there, done that.
May I make a suggestion? If eth0 and eth1 both connect to the same network, why not set them up as a mode=1 (active/passive) bond? Then you can flip between them without interrupting traffic flow (very useful for network cabling work).
If eth6 also goes to the same place, just faster, then put it in the bond and make it the primary interface. Should it fail, the traffic will route through eth0 or eth1. When eth6 comes back up though, the bond would reselect it and your servers would start transmitting at 10 Gbps again.
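A rough sketch of that layout on EL6 (interface and bridge names are the ones from this thread; the bonding options shown are the usual active/passive set):

    # ifcfg-eth6 -- repeat for eth0, changing DEVICE
    DEVICE=eth6
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    NM_CONTROLLED=no

    # ifcfg-bond0 -- mode=1 is active/passive; eth6 is used whenever it is healthy
    DEVICE=bond0
    BONDING_OPTS="mode=1 miimon=100 primary=eth6 primary_reselect=always"
    BRIDGE=br0
    ONBOOT=yes
    NM_CONTROLLED=no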
Cheers
On Nov 20, 2013, at 8:03 PM, Digimer wrote:
OMG BINGO.
Yes, I've a few interfaces at standard MTU.
Was hoping for true isolation, but the interfaces aren't really isolated when in bridge mode.
This is good to know.
And thanks for the replies, it's like a light bulb went off.
- aurf
I wrote this last year. I've found no other description that lays out the difficulties of KVM bridges, tagged VLANs, and pair bonding.
https://wikis.uit.tufts.edu/confluence/display/TUSKpub/Configure+Pair+Bondin...
I'm not working for that university anymore, so I've not had an opportunity to update it. But it's pretty complete. The anaconda installer, with its internal use of NetworkManager tools that come from upstream, *cannot be convinced* to properly configure these settings; they're simply not available as setup options. You have to set them up manually on the KVM server after basic OS installation.
These are some of the reasons I reject NetworkManager for any server setups or virtualization environments. It lacks the most basic setup features such as pair bonding or bridge setups, and I've not yet seen evidence of improvement in the upstream codebase.
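On CentOS 6 that usually means handing the interfaces over to the classic network service (a sketch of the usual steps):

    chkconfig NetworkManager off
    service NetworkManager stop
    chkconfig network on
    service network restart
    # and set NM_CONTROLLED="no" in each ifcfg-* file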
Nico Kadel-Garcia