[CentOS-virt] proper bridging technique
aurfalien
aurfalien at gmail.com
Thu Nov 21 22:15:13 UTC 2013
On Nov 21, 2013, at 10:48 AM, Digimer wrote:
> What you do in the VMs does not impact the hosts, so I didn't speak to
> that. Having the bridge, interfaces, switches and vnets at 9000 (for
> example) doesn't immediately enable large frames in the virtual servers.
> It simply means that all of the links between the VM and other devices
> on the network are ready for jumbo frames.
>
> Imagine this;
>
> {real switch}
> |
> {ethX + ethY}
> |
> {bondX}
> |
> {vbr0}
> |
> {vnetX}
> |
> {VM's eth0}
>
> All of these devices need to have their MTU set to your desired value.
> If any one of these is still 1500, then only standard frames will be
> able to traverse them.
>
> * real switch; Log into it and make sure jumbo frames are enabled
>
> * ethX + ethY; If you are using bonding, be sure both/all slaved
> interfaces are set to use a large frame.
>
> * bondX; Again if you use a bond, make sure the bondX interface has a
> large frame.
>
> * vbr0; The bridge cannot be set to a specific MTU size. It will use
> the lowest MTU of the various devices / interfaces connected to it.
>
> * vnetX; These are the "virtual network cables" that are used to "plug
> in" a VM's interface to the bridge. This is not new by any means. In the
> real world, network cables don't have settable MTUs of course. In the
> virtual world though, they do. These interfaces are spontaneously
> created and destroyed as VMs come and go. This is what the udev rule is
> for because these "virtual network cables" don't have traditional
> ifcfg-X files.
>
> * VM's eth0; This is the (emulated) network card in your virtual server.
> If you told the hypervisor to replicate an e1000 intel card or use the
> virtio-net driver, you can set a large MTU. However, if you used
> something like an emulated Realtek card, the real cards don't support
> jumbo frames, so their emulated counterparts won't support large frames either.
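A quick way to audit that whole chain on the host is to read each device's MTU out of /sys. A minimal sketch; the device names below are examples from the diagram above and should be swapped for your own:

```shell
#!/bin/sh
# Print the MTU of each named device, or note that it doesn't exist.
# Works on any Linux host; needs nothing beyond a POSIX shell.
mtu_report() {
    for dev in "$@"; do
        if [ -r "/sys/class/net/$dev/mtu" ]; then
            printf '%s\t%s\n' "$dev" "$(cat "/sys/class/net/$dev/mtu")"
        else
            printf '%s\t(not present)\n' "$dev"
        fi
    done
}

# Example chain from the diagram above; substitute your own names.
mtu_report eth0 bond0 vbr0 vnet0
```

Any device that still reports 1500 here is the one capping the path.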
>
> hth
>
> digimer
>
> On 21/11/13 13:32, Nico Kadel-Garcia wrote:
>> I was under the impression that the relevant MTU settings were on the
>> *node's* local ifcfg-eth* configurations. Did something change with
>> KVM internal networking in the last year?
>>
>> On Thu, Nov 21, 2013 at 1:03 PM, Digimer <lists at alteeve.ca> wrote:
>>> The problem is that there are no ifcfg-vnetX config files. They are
>>> dynamically created as VMs are created or migrated to a node. You could
>>> manually (or via script) change the MTU, but that would mean that the
>>> MTU on the bridge would drop momentarily when new VMs start. This could
>>> break network traffic for any existing VMs (or real devices) using large
>>> frames.
>>>
>>> I'm not a fan of udev either, but in this case, it is the best option.
>>> Of course, I am certainly open to hearing alternative methods if they exist.
>>>
>>> On 21/11/13 08:39, Nico Kadel-Garcia wrote:
>>>> Stay out of udev if you can. It's often overwritten by component
>>>> addition and manipulation. MTU is parsed, and overridden, by options
>>>> in /etc/sysconfig/network-scripts/ifcfg-[device]. I find it much
>>>> safer to read and manage there, and if new devices are added or
>>>> replaced, the behavior is dominated by the "HWADDR"-associated config
>>>> files there, no matter what "udev" thinks the device number or name
>>>> should be.
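For reference, this is roughly what such a file looks like; the values below are illustrative, borrowed from the eth6/br6 setup discussed later in this thread:

```
# /etc/sysconfig/network-scripts/ifcfg-eth6 -- illustrative example
DEVICE=eth6
HWADDR=00:05:33:48:7B:29
ONBOOT=yes
BRIDGE=br6
MTU=9000
```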
>>>
>>> <snip>
>>>
>>>>>
>>>>> Another update;
>>>>>
>>>>> To make sure the VMs' vnetX devices are created with a larger MTU, you
>>>>> *still* need to update udev[1].
>>>>>
>>>>> Append to /etc/udev/rules.d/70-persistent-net.rules;
>>>>>
>>>>> ====
>>>>> # Make all VMs' vnetX devices come up with an MTU of 9000.
>>>>> SUBSYSTEM=="net", ACTION=="add", KERNEL=="vnet*", ATTR{mtu}="9000"
>>>>> ====
>>>>>
>>>>> Assuming you find that you can use an MTU of '9000', of course. No
>>>>> need to reboot or even restart networking. Just add that line and then
>>>>> provision/boot your VMs. If the VMs are already running, you can adjust
>>>>> the MTU of the existing 'vnetX' devices with:
>>>>>
>>>>> ifconfig vnetX mtu 9000
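With more than a couple of VMs, the same adjustment can be looped over every vnet device. A sketch that prints the commands as a dry run (remove the echo to apply; `ip link set` is the iproute2 equivalent of the ifconfig form above):

```shell
#!/bin/sh
# Dry run: print an MTU-change command for each named device.
# Remove the "echo" to actually run them (needs root and iproute2).
set_mtu_cmds() {
    mtu=$1; shift
    for dev in "$@"; do
        echo ip link set dev "$dev" mtu "$mtu"
    done
}

# Generate commands for every vnet device currently in /sys:
set_mtu_cmds 9000 $(ls -d /sys/class/net/vnet* 2>/dev/null | sed 's|.*/||')
```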
>>>>>
>>>>> Cheers!
>>>>>
>>>>> PS - Credit for the udev rule:
>>>>>
>>>>> http://linuxaleph.blogspot.ca/2013/01/how-to-network-jumbo-frames-to-kvm-guest.html
>>>>>
>>>>> --
>>>>> Digimer
Hi,
I seem to lack a vnet to bridge device.
When I go to change my interface on the VM using the GUI, I do not see an option for "Host device vnet# (Bridge 'br6')".
Instead I see "Host device eth6 (Bridge 'br6')". So before creating one via;
brctl addif...
Let me explain my config;
eth0 - standard MTU
eth1 - disabled
*eth6 - 10Gb at jumbo
* This card was added after KVM was setup and running.
My ifconfig output;
br0       Link encap:Ethernet  HWaddr 00:25:90:63:9F:7A
          inet addr:10.0.10.218  Bcast:10.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::225:90ff:fe63:9f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:754670 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:160904242 (153.4 MiB)  TX bytes:51752758 (49.3 MiB)

br6       Link encap:Ethernet  HWaddr 00:05:33:48:7B:29
          inet addr:10.0.10.220  Bcast:10.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::205:33ff:fe48:7b29/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:4130 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11150 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:131498 (128.4 KiB)  TX bytes:513156 (501.1 KiB)

eth0      Link encap:Ethernet  HWaddr 00:25:90:63:9F:7A
          inet6 addr: fe80::225:90ff:fe63:9f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3379929 errors:18 dropped:0 overruns:0 frame:18
          TX packets:3565007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:840911383 (801.9 MiB)  TX bytes:3519831013 (3.2 GiB)
          Memory:fbbe0000-fbc00000

eth6      Link encap:Ethernet  HWaddr 00:05:33:48:7B:29
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Memory:fbd40000-fbd7ffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:185130 errors:0 dropped:0 overruns:0 frame:0
          TX packets:185130 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:138905226 (132.4 MiB)  TX bytes:138905226 (132.4 MiB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:CE:7A:65
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11139 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:512410 (500.4 KiB)

vnet0     Link encap:Ethernet  HWaddr FE:30:48:7E:65:72
          inet6 addr: fe80::fc30:48ff:fe7e:6572/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:1045 errors:0 dropped:0 overruns:0 frame:0
          TX packets:697730 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:119723 (116.9 KiB)  TX bytes:175334262 (167.2 MiB)

vnet1     Link encap:Ethernet  HWaddr FE:16:36:0E:E7:F4
          inet6 addr: fe80::fc16:36ff:fe0e:e7f4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3494450 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3243369 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:3466241191 (3.2 GiB)  TX bytes:822212316 (784.1 MiB)
brctl show;
bridge name     bridge id               STP enabled     interfaces
br0             8000.002590639f7a       no              eth0
                                                        vnet0
                                                        vnet1
br6             8000.000533487b29       no              eth6
virbr0          8000.525400ce7a65       yes             virbr0-nic
Should I have a virbr6?
I'm obviously pretty lost and must admit I sorta hate bridging in KVM.
- aurf