On Nov 21, 2013, at 2:24 PM, Digimer wrote:
I'm not sure what you are asking.
You should not see the vnetX devices from the VM (or even the VM's definition file). They're created as needed to link the VM's interface to the bridge. Think of them as simple network cables.
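For illustration, a host with one running guest attached to br0 would show bridge membership something like this (the names and MACs below are examples, not taken from your setup):

    # brctl show
    bridge name     bridge id               STP enabled     interfaces
    br0             8000.001e67aabbcc       no              eth0
                                                            vnet0
    br6             8000.001e67aabbdd       no              eth6

Each running guest NIC gets its own vnetX tap device, enslaved to whichever bridge the guest's definition names, and it disappears again when the guest shuts down.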
Some of the formatting isn't showing well on my mail client (text only), so I am having a little trouble parsing some of the data...
If the VMs were using br6, which as you can see is already at 9000, you would be able to use an MTU of 9000 from inside the VMs as well. The trick is that the vnetX devices are connected to the br0 bridge instead, which is set to 1500 because eth0 is still at 1500. So at this point, the VMs are traversing br0, not br6.
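A quick way to confirm which MTU each piece of the path is actually using (the output shown is illustrative, not from your host):

    # ip link show dev eth0 | grep -o 'mtu [0-9]*'
    mtu 1500
    # ip link show dev br0 | grep -o 'mtu [0-9]*'
    mtu 1500
    # ip link show dev br6 | grep -o 'mtu [0-9]*'
    mtu 9000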
As for 'virbr0', that is libvirtd's default NAT'ed bridge. I don't recommend using those. I usually destroy them, personally.
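If you do decide to get rid of it, the usual virsh sequence is along these lines (this removes libvirt's 'default' NAT network, so only do it if no guest is using virbr0):

    # virsh net-destroy default                # stop the running network (removes virbr0)
    # virsh net-autostart default --disable    # keep it from coming back at boot
    # virsh net-undefine default               # optionally delete its definition entirely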
So to fix your problem, you need to tell the VMs to use br6. If you want to use jumbo frames on br0 instead, you need to increase the MTU of eth0. Remember that a bridge will use the lowest MTU of any device connected to it.
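As a rough sketch of both options (the guest name 'myguest' and the interface stanza below are placeholders, adjust them to your actual definition): to move a guest onto br6, edit its definition, point the interface's source at br6, then do a full shutdown and start so the change takes effect; to run jumbo frames over br0 instead, raise eth0's MTU and make that persistent in your distro's network config.

    # virsh edit myguest
        <interface type='bridge'>
          <source bridge='br6'/>     <!-- was br0 -->
          <model type='virtio'/>
        </interface>
    # virsh shutdown myguest
    # virsh start myguest

    # Or, to stay on br0 with jumbo frames:
    # ip link set dev eth0 mtu 9000

Once the guest comes back up on br6, you should see a new vnetX appear under br6 in 'brctl show', and you can then raise the MTU inside the guest as well.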
So far, for my current VMs that work, I see their network as:
Host device vnet0 (Bridge 'br0')
I do not see a:
Host device vnet# (Bridge 'br6')
My interfaces of interest are set to jumbo frames, so that's not a problem. I think the problem is that I am missing the vnet bridge device for eth6.
So I'm curious why it's not there, and how do I create it?
- aurf