[CentOS-virt] proper bridging technique

Thu Nov 21 04:03:04 UTC 2013
Digimer <lists at alteeve.ca>

On 20/11/13 20:49, aurfalien wrote:
> 
> On Nov 20, 2013, at 4:47 PM, Digimer wrote:
> 
>> On 20/11/13 19:47, aurfalien wrote:
>>>
>>> On Nov 20, 2013, at 4:44 PM, Digimer wrote:
>>>
>>>> On 20/11/13 19:25, aurfalien wrote:
>>>>>
>>>>> On Nov 20, 2013, at 4:13 PM, Digimer wrote:
>>>>>
>>>>>> On 20/11/13 19:04, aurfalien wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Wondering if this is the proper bridging technique to use for CentOS 6 + KVM:
>>>>>>>
>>>>>>> http://wiki.centos.org/HowTos/KVM
>>>>>>>
>>>>>>> Before I embark on this again, I would like to do it by the book.
>>>>>>>
>>>>>>> Thanks in advance,
>>>>>>>
>>>>>>> - aurf
>>>>>>
>>>>>> Personally, I do this:
>>>>>>
>>>>>> https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Configuring_The_Bridge
>>>>>>
>>>>>> It gives the VMs direct access to the outside network, as if they were
>>>>>> normal servers. I've used this setup for years without issue, with many
>>>>>> different VMs running various OSes.
>>>>>>
>>>>>> cheers
>>>>>
>>>>> Many many thanks, will use it.
>>>>>
>>>>> Sounds like it will work well with jumbo frames.
>>>>>
>>>>> - aurf
>>>>
>>>> Jumbo frames should be fine. I don't generally use them myself, but I
>>>> have tested them with success. Just be sure to enable it on the bridge and
>>>> slaved devices. Simply adding 'MTU="xxxx"' to each ifcfg-x file should
>>>> be sufficient.
>>>>
>>>> -- 
>>>> Digimer
> 
> Man, really sorry to bug you, as this seems benign; I've done this numerous times, but on non-bridged interfaces.
> 
> When I add MTU=9000 to the bridged interface, I get:
> 
> RTNETLINK answers: Invalid argument
> 
> My physical interface is showing the jumbo MTU, but the bridged interface is still showing the standard MTU.

No bother at all. It has been a bit since I tested it though, so I will
have to experiment a bit myself....

Done!

I remember the trick now: the bridge takes the MTU of the _lowest_-MTU
device connected to it. So in my case here, I upped the MTU of the
backing ethX and bondY devices, but the bridge stayed at 1500.

Trying to adjust it failed with 'SIOCSIFMTU: Invalid argument', which is
the kernel's way of saying that the MTU is too large for the device
(usually hit when surpassing the hardware's real MTU). Being a bridge,
though, this didn't make sense. When I upped the MTU of the vnetX
devices, the bridge jumped up on its own.
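
For example (the device names here are placeholders, so adjust them to
match your setup), raising a guest's tap device on the fly and watching
the bridge follow looks something like this:

  # check the current MTU of the bridge and one of its ports
  ip -o link show br0
  ip -o link show vnet0

  # raise the tap device; once every port is at the higher MTU,
  # the bridge adopts it on its own
  ip link set dev vnet0 mtu 9000
  ip -o link show br0

Keep in mind that the vnetX taps are recreated when a guest restarts, so
a change made this way may not survive a restart of the guest.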

So I suspect that if you do 'brctl show' and then check the MTU of the
connected devices, one of them will still have a low MTU. Push it up and
then do a non-fragmenting ping with a payload 28 bytes smaller than your
MTU (the 28 bytes cover the IP and ICMP headers). If the ping works, you
know the MTU is increased.
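
To make that concrete (the target address below is just an example):

  # list the bridges and the devices attached to each
  brctl show

  # check the MTU on the bridge and on each attached device
  ip -o link show | awk '{print $2, $4, $5}'

  # for a 9000-byte MTU, ping with a 9000 - 28 = 8972 byte payload
  # and the don't-fragment bit set
  ping -M do -s 8972 -c 3 192.168.1.1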

All this said, my experience with Realtek NICs left me detesting them.
I've seen cards advertised as supporting "jumbo frames" that only go up
to odd sizes like 7200. Further, in benchmarks, the performance dropped
above an MTU of roughly 4000.

If you want to determine the actual maximum MTU of a given interface,
this might help:

https://github.com/digimer/network_profiler/blob/master/network_profiler

It's a little script that uses passwordless SSH between two nodes: it
automatically determines the maximum MTU between the two machines and
then benchmarks at 100-byte intervals. When it's done, it spits out a
graph showing the full- and half-duplex results so you can see which MTU
was the best to use.
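
If you only want the "find the largest MTU that actually passes" step, a
rough sketch of the idea (not the actual script; the peer address and
step size are made up) would be:

  #!/bin/sh
  # walk down from 9000 until a non-fragmenting ping gets through
  peer=10.0.0.2             # hypothetical peer address
  mtu=9000
  while [ "$mtu" -ge 1500 ]; do
      if ping -M do -c 1 -s $((mtu - 28)) "$peer" >/dev/null 2>&1; then
          echo "largest working MTU: $mtu"
          break
      fi
      mtu=$((mtu - 100))
  done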

Once you've profiled the real devices, you can then work on the MTU of
the higher-layer devices like bonds, bridges and virtual interfaces.
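
For the persistent side on CentOS 6, that just means an MTU= line in each
layer's ifcfg file. A rough sketch for a simple eth -> bridge setup (the
device names, address and the 9000 value are only examples; a bonded
setup adds the same line to ifcfg-bondX):

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  ONBOOT=yes
  BRIDGE=br0
  MTU=9000

  # /etc/sysconfig/network-scripts/ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=10.0.0.1
  NETMASK=255.255.255.0
  MTU=9000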

hth

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?