As part of my initial KVM host on C8 deployment, I decided to set up some HA features on the new host, specifically NIC teaming. Teaming seems to be bond++ of a sort, so I thought I would at least try it. So here's the scenario:
1.) Server with two gigabit ethernet ports, two Cisco switches.
2.) During install, used the 'Server with GUI' group and added the virtualization packages.
3.) During install, set up team0 to include the two gig-e ports, configured active-backup (one port to each switch).
4.) During install, set up three bridges, with the slave devices being VLAN interfaces pointed at the team0 subinterfaces (using VLANs 68, 101, and 302; 101 is to be the management VLAN for the host, with guests on all three VLANs). So, for instance, bridge101 has a slave VLAN101 that points to team0.101 with a VLAN ID of 101. The bridge101 interface has a manual IP address, but bridge68 and bridge302 do not (IPv4 disabled; IPv6 Ignore).
5.) After reboot, the bridge101 interface comes up and I successfully connect to the host. Since the install is 8.1.1911, I ran a 'dnf update' up to 8.2.2004, which went well; then I successfully set up and used cockpit, cockpit-bridge, and cockpit-machines, again over the IP address on bridge101.
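For anyone wanting to reproduce this layering outside the installer, steps 3 and 4 might look roughly like this with nmcli (a sketch only; the NIC names eno1/eno2 and the connection names are assumptions, not from the original setup):

```shell
# Team of two NICs in active-backup (NIC names are assumed)
nmcli con add type team ifname team0 con-name team0 \
  team.config '{"runner": {"name": "activebackup"}}'
nmcli con add type team-slave ifname eno1 con-name team0-port1 master team0
nmcli con add type team-slave ifname eno2 con-name team0-port2 master team0

# Bridge for guest VLAN 302, with no IP on the host side
nmcli con add type bridge ifname bridge302 con-name bridge302 \
  ipv4.method disabled ipv6.method ignore

# 802.1q subinterface team0.302, enslaved to the bridge
nmcli con add type vlan ifname team0.302 dev team0 id 302 \
  master bridge302 slave-type bridge
```

The bridge101 variant would be the same, plus a manual IPv4 address on the bridge connection.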
Ok, now that the base connectivity is working:
1.) Connect to the host (traffic on bridge101 over team0.101) using virt-manager on my laptop and install a C8 guest, with the network pointed to bridge302, and a manual IP address.
2.) After reboot of guest, there is no IP connectivity to the guest's gateway on VLAN302.
3.) HOWEVER, the gateway's MAC address shows up in the host's bridge fdb for VLAN302 AND in the arp output on the guest; ALSO, the guest's MAC address shows up in the Cisco switch's 'show mac-address-table' output. The output of 'ip --br link' looks normal for this configuration, but there's a disconnect somewhere. Since VLAN101 is clearly passing traffic to the bridge correctly (the management IP is on that VLAN), I tried setting up a guest on VLAN101: no dice there either, though the management IP still works fine.
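For reference, the checks described above map onto commands like these (bridge name from this thread):

```shell
bridge fdb show br bridge302   # MAC addresses learned on each bridge port
ip --br link                   # compact one-line-per-interface link state
ip neigh show                  # the host's ARP/neighbor table
```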
So, does anyone here have a working setup with KVM guests connecting to bridges using 802.1q VLANs on top of a team? Or even on top of a bond? (I can reinstall and set it up as a bond easily enough, using active-backup, as far as I know; and, yes, I would reinstall the host from scratch to do this.)
Hi,
the first thing that comes to mind: did you enable ip_forward in /etc/sysctl.conf? net.ipv4.ip_forward = 1
That would explain why the IP on the bridge works but not on the VMs.
Regards,
Michel
On Wed, 2020-06-17 at 09:43 -0400, Lamar Owen wrote:
... So, does anyone here have a working setup with KVM guests connecting to bridges using 802.1q VLANs on top of a team? Or even on top of a bond? ...
On 6/17/20 9:59 AM, Deventer-2, M.S.J. van wrote:
... did you enable ip_forward in /etc/sysctl.conf? net.ipv4.ip_forward = 1 ...
First, thanks for the reply and the excellent suggestion. Yeah, I thought about that, and while it's not explicitly defined in /etc/sysctl.conf or /etc/sysctl.d/*, if I check:

[root@c8-kvm-pe1950-1 ~]# cat /proc/sys/net/ipv4/ip_forward
1
It shows as being defined to 1. I'm going to try adding to sysctl.conf and see if that makes any difference, though.
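For anyone following along, a minimal sketch of checking and persisting the setting (the drop-in filename is arbitrary, my choice for illustration):

```shell
# Runtime value; 1 means IPv4 forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward

# Persist it explicitly across reboots via a sysctl drop-in
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-ipforward.conf
sysctl -p /etc/sysctl.d/90-ipforward.conf
```

Note that a plain L2 bridge forwards frames regardless of ip_forward; that sysctl matters for routed setups.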
On 6/17/20 11:04 AM, Lamar Owen wrote:
... It shows as being defined to 1. I'm going to try adding to sysctl.conf and see if that makes any difference, though.
No difference. What is aggravating, though, is that virtually every howto on bridging out there refers to the deprecated brctl utility (from the bridge-utils package), and C8 no longer includes that package (even though it's still in current Fedora 32!). I know, I know, the new way is the 'bridge' command or 'ip --br'.... So I grabbed the F32 source RPM, rebuilt it on my C8 laptop, and uploaded it to the host:

[root@c8-kvm-pe1950-1 ~]# brctl show
bridge name     bridge id           STP enabled     interfaces
bridge101       8000.001ec9fcde9d   yes             team0.101
                                                    vnet0
bridge302       8000.001ec9fcde9d   yes             team0.302
bridge68        8000.001ec9fcde9d   yes             team0.68
Still no dice. Next step is tcpdump.....
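A sketch of how tcpdump can walk this stack layer by layer to localize where the frames disappear (interface names as in this thread; the vnet name depends on the guest):

```shell
# With a continuous ping running from the guest to its gateway,
# watch each layer on the host in turn:
tcpdump -ni vnet0 arp or icmp        # guest's tap device: traffic leaving the guest
tcpdump -ni bridge302 arp or icmp    # the bridge itself
tcpdump -ni team0.302 arp or icmp    # the 802.1q subinterface
tcpdump -e -ni team0 vlan 302        # tagged frames on the team device itself
```

The first interface where the traffic stops appearing brackets the broken hop.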
On 17.06.20 17:36, Lamar Owen wrote:
... So I grabbed the F32 source RPM, rebuilt it on my C8 laptop, and uploaded it to the host ... Still no dice. Next step is tcpdump.....
Just to make sure: did you try disabling firewalld? In my experience with libvirt and VLAN bridges on Fedora, libvirt may install unwanted firewall rules which drop the traffic over the bridges.
Best regards Ulf
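Ulf's suggestion can be tested along these lines (a sketch; the br_netfilter sysctl only exists once that module is loaded):

```shell
systemctl stop firewalld                    # temporary, for testing only
iptables -L -n -v                           # look for libvirt chains with DROP/REJECT hits
sysctl net.bridge.bridge-nf-call-iptables   # 1 = bridged frames traverse iptables at all
```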
On 6/17/20 1:51 PM, Ulf Volmer wrote:
... Just to make sure: did you try disabling firewalld? In my experience with libvirt and VLAN bridges on Fedora, libvirt may install unwanted firewall rules which drop the traffic over the bridges.
I haven't done that yet, so I'll try that next. Thanks for the idea.
On 6/17/20 4:07 PM, Lamar Owen wrote:
... Did you try to disable firewalld? ... I haven't done that yet, so I'll try that next.
So, I tried dropping the firewall, etc. No joy.
So I punted: I did a scratch reinstall of C8.2.2004 on the host, using the 'Virtualization Host' group, and created one bridge on the management VLAN, but this time on top of a bond, not a team. After install, reboot, and updating to latest (which verified that the management IP and VLAN had connectivity), I used nmtui to create the second bridge on the second VLAN on top of the bond. I then connected to libvirt with virt-manager on my laptop and installed a minimal C8 guest, connected to bridge302 again, with a static address. After the install and reboot, I checked with a ping from the guest to its gateway; hooray, now it works. I was able to update the guest and install the application server that is going on the guest, with good throughput. Other than this one using a bond instead of a team, I don't see any difference in the bridge setup. No extra work was required to get it to work, either.
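The working bond variant of the earlier layering might look like this with nmcli (a sketch; the NIC and connection names are my assumptions, and the original was built with nmtui rather than these commands):

```shell
# Active-backup bond instead of a team (NIC names are assumed)
nmcli con add type bond ifname bond0 con-name bond0 \
  bond.options "mode=active-backup,miimon=100"
nmcli con add type bond-slave ifname eno1 con-name bond0-port1 master bond0
nmcli con add type bond-slave ifname eno2 con-name bond0-port2 master bond0

# Same bridge-on-VLAN layering as before, now on top of the bond
nmcli con add type bridge ifname bridge302 con-name bridge302 \
  ipv4.method disabled ipv6.method ignore
nmcli con add type vlan ifname bond0.302 dev bond0 id 302 \
  master bridge302 slave-type bridge
```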
So, I set up another development host, but this time using a team instead of a bond, and I'm going to try to get that working with bridged networking to a virtual guest, since it really is supposed to work, and I'm very curious why it didn't. But the first guest running that particular application server is required in pretty short order, too short for me to play around trying to get teaming to work properly for a bridged-on-a-VLAN guest.
On 6/18/20 1:35 PM, Lamar Owen wrote:
So, I set up another development host, but this time using a team instead of a bond, and I'm going to try to get that working with bridged networking to a virtual guest, since it really is supposed to work, and I'm very curious why it didn't.
So, I finally got back around to this. On this host I had set up a team0 with a bridge192, with bridge192 having an IP address, etc. That all worked fine. However, after creating a guest on this host, the guest had no connectivity through the same bridge. So I thought I'd see what setting the hairpin option might do on the port of the bridge connected to the team. Well, lo and behold, after setting this up in the cockpit web interface, and after the networking restart initiated by cockpit, I now have connectivity to the guest. HOWEVER, the bridge port is NOT showing hairpin as configured. Could it have been the connection restart that connected the bridge port up correctly, since the guest was running while the connection restart occurred? Whatever; it is now passing traffic with no other configuration changes.
Now to see if I can duplicate this on a reboot later this week; that server is busy running some other things right now.
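For the record, hairpin mode on a bridge port can also be toggled and inspected directly with the iproute2 bridge tool (the port name team0.192 is inferred from this thread's setup):

```shell
bridge link set dev team0.192 hairpin on   # enable hairpin ("reflective relay") on the port
bridge -d link show dev team0.192          # detailed output should report "hairpin on"
```

Checking it this way, rather than through cockpit, would distinguish "hairpin was actually set" from "the connection restart fixed the port".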
On 18/6/20 1:36 am, Lamar Owen wrote:
...I know, I know, the new way is using the 'bridge' command or 'ip --br'....
I learnt something new just then. However, a search across all man pages, which I believe is accomplished via 'man -wK -- --br', did not return anything related to the iproute2 files - not even man ip-link.
Where is this gem (and possibly others) hidden?
Thanks.