I'm looking for some information regarding the interaction of KVM, VLANs, firewalld, and the kernel's forwarding configuration. I would appreciate input especially from anyone already running a similar configuration in production. In short, I'm trying to figure out if a current configuration is inadvertently opening up traffic across network segments.
On earlier versions of CentOS I've run HA clusters with and without VMs (in that case, Xen-based). On those clusters, both the host machine's IPs and the VM IPs were in the same subnet (call it the DMZ).
In the CentOS 7 test HA cluster I'm building, I want both traditional services running on the cluster and VMs running on both nodes (not necessarily under control of the cluster). In the new setup, I'd like to keep *some* VMs on the same subnet as the host machine's IP, but have other VMs on different VLANs. So the physical topology looks like this:
 ----------------- DMZ ------------------
     |                              |
 bridged-if                     bridged-if
     |                              |
   node-1 ---- heartbeat-if ---- node-2
     |                              |
   --|--                          --|--
   /   \                          /   \
 vlan2 vlan3                  vlan2 vlan3
   \   /                          \   /
 bridged-if                     bridged-if
     |                              |
  ----------------------------------
 |           managed switch         |
  ----------------------------------
       |                      |
   vlan2-net              vlan3-net
A given VM will be assigned a single network interface, either in the DMZ, on vlan2, or on vlan3. Default routes for each of those networks are essentially different gateways. (The CentOS boxes in question are *not* intended to be routers.)
I'll take a brief aside here to describe the bridge/vlan configuration:
Interface Details
=================
On the DMZ side, the physical interface is eno1 on which is layered bridge br0. br0 is assigned a static IP used by the physical node (host OS). VMs that should be on the DMZ get assigned br0 as their underlying network device.
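The ifcfg files for that side look roughly like this (the addresses here are placeholders, not the real ones):

# /etc/sysconfig/network-scripts/ifcfg-eno1
DEVICE=eno1
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1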
On the other network side, the physical interface is enp1s0, on which is layered bridge br2, on which are layered VLAN devices enp1s0.2 and enp1s0.3. None of these have IPs assigned in the host OS; the host is not supposed to have direct access to vlan2 or vlan3. VMs that are supposed to be on vlan2 and vlan3 are assigned either enp1s0.2 or enp1s0.3, respectively, as their underlying network device.
=================
A quick test with a VM using enp1s0.2 seems to show the desired connectivity.
However, I'm looking at the firewalld configuration on the host nodes and am not sure if I'm missing something. There are currently two active zones defined, 'dmz' and 'heartbeat'. The 'heartbeat' zone only contains the physical interface for the heartbeat network between nodes, which is fine.
The 'dmz' zone contains br0, br2, eno1, enp1s0, enp1s0.2, and enp1s0.3. It looks like, by default, firewall rules aren't applied to bridge devices, so we can ignore those. eno1 is an expected interface for that zone. Where it gets muddy is enp1s0, enp1s0.2, and enp1s0.3. Since the host shouldn't have any IPs on those interfaces, what is the relevance of having them in the 'dmz' zone or any other zone? By having them in the 'dmz' zone, does this mean that host firewall rules will impact VMs?
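For reference, here's how I'm inspecting the zones:

firewall-cmd --get-active-zones
firewall-cmd --zone=dmz --list-all
firewall-cmd --zone=heartbeat --list-all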
Finally, `sysctl -a | grep forward | grep ' = 1'` shows:
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.br2.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.eno1.forwarding = 1
net.ipv4.conf.enp1s0.forwarding = 1
net.ipv4.conf.enp1s0/2.forwarding = 1
net.ipv4.conf.enp1s0/3.forwarding = 1
net.ipv4.conf.enp4s0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.virbr0.forwarding = 1
net.ipv4.conf.virbr0-nic.forwarding = 1
net.ipv4.ip_forward = 1
I understand that for bridging and VLANs to work I likely need these forwarding settings active, but am I opening things up so that (for example) a maliciously crafted packet seen on the enp1s0.2 interface could jump onto the DMZ subnet on eno1?
I have to admit, the firewall-config GUI seems more like it's oriented to either the local machine or other machines behind NAT, rather than a router. (I don't want the host nodes generally acting as routers, but how can I tell if they are doing so inadvertently?)
Further, my google-fu isn't bringing up much definitive information on how all the pieces interact. I'm hoping that packets seen on the DMZ interface bound for vlan2 and vlan3 are dropped, and that the host can't be reached via vlan2 or vlan3, but it's not clear that this is the case.
Clues are welcome.
Devin
On 03/20/2016 08:51 PM, Devin Reade wrote:
In a CentOS 7 test HA cluster I'm building I want both traditional services running on the cluster and VMs running on both nodes
On a purely subjective note: I think that's a bad design. One of the primary benefits of virtualization and other containers is isolating the applications you run from the base OS. Putting services other than virtualization into the system that runs virtualization just makes upgrades more difficult later.
A given VM will be assigned a single network interface, either in the DMZ, on vlan2, or on vlan3. Default routes for each of those networks are essentially different gateways.
What do you mean by "essentially"?
On the DMZ side, the physical interface is eno1 on which is layered bridge br0.
...
On the other network side, the physical interface is enp1s0, on which is layered bridge br2, on which are layered VLAN devices enp1s0.2 and enp1s0.3.
That doesn't make any sense at all. In what way are enp1s0.2 and enp1s0.3 layered on top of the bridge device?
Look at the output of "brctl show". Are those two devices slaves of br2, like enp1s0 is? If so, you're bridging the network segments.
You should have individual bridges for enp1s0, enp1s0.2 and enp1s0.3. If there were any IP addresses needed by the KVM hosts, those would be on the bridge devices, just like on br0.
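Untested, but roughly like this (device names taken from your description):

# /etc/sysconfig/network-scripts/ifcfg-enp1s0.2
DEVICE=enp1s0.2
VLAN=yes        # initscripts derive the parent (enp1s0) and tag (2) from the name
ONBOOT=yes
BRIDGE=br2

# /etc/sysconfig/network-scripts/ifcfg-br2
DEVICE=br2
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none  # no address: the host has no presence on vlan2

...and the same pattern for enp1s0.3/br3, plus a bridge for untagged traffic on enp1s0 itself.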
VMs that are supposed to be on vlan2 and vlan3 are assigned either enp1s0.2 or enp1s0.3, respectively, as their underlying network device.
How? Are you using macvtap for those? I'd suggest sticking with one of either bridged networking or macvtap.
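For illustration, the two attachment styles look like this in a guest's libvirt XML (interface names from your setup):

<!-- bridged: the guest plugs into an existing host bridge -->
<interface type='bridge'>
  <source bridge='br2'/>
  <model type='virtio'/>
</interface>

<!-- macvtap: the guest rides directly on the vlan device -->
<interface type='direct'>
  <source dev='enp1s0.2' mode='bridge'/>
  <model type='virtio'/>
</interface>

Keep in mind that with macvtap in its usual modes, the host and its own guests can't talk to each other directly.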
The 'dmz' zone contains br0, br2, eno1, enp1s0, enp1s0.2, and enp1s0.3. It looks like, by default, firewall rules aren't applied to bridge devices, so we can ignore those.
Correct:

/usr/lib/sysctl.d/00-system.conf:# Disable netfilter on bridges.
/usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-ip6tables = 0
/usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-iptables = 0
/usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-arptables = 0
eno1 is an expected interface for that zone. Where it gets muddy is enp1s0, enp1s0.2, and enp1s0.3. Since the host shouldn't have any IPs on those interfaces, what is the relevance of having them in the 'dmz' zone or any other zone?
Interfaces are part of some zone, whether an address is assigned or not. In terms of implementation, that means that filtering is set up before addresses. If you set up addresses and then filtering, there's a *very* brief window where traffic isn't filtered, and that is bad.
By having them in the 'dmz' zone, does this mean that host firewall rules will impact VMs?
Not unless you change the net.bridge.bridge-nf-call-* settings.
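You can check the live values to be sure; 0 means bridged frames are not handed to the filters:

sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.bridge.bridge-nf-call-arptables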
I understand that for bridging and VLANs to work I likely need these forwarding settings active
No, you don't. It's active because libvirtd defines a NAT network by default, and that one requires IP forwarding.
, but am I opening things up so that (for example) a maliciously crafted packet seen on the enp1s0.2 interface could jump onto the DMZ subnet on eno1?
Not in the default firewalld rule set.
I have to admit, the firewall-config GUI seems more like it's oriented to either the local machine or other machines behind NAT, rather than a router. (I don't want the host nodes generally acting as routers, but how can I tell if they are doing so inadvertently?)
Examine the output of "iptables -L -nv" and check all of the ACCEPT rules.
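The FORWARD chain is the part that matters for routing between interfaces:

iptables -L FORWARD -nv

With the stock firewalld rule set that chain ends in a REJECT, so routing only happens where there's an explicit ACCEPT (such as the rules libvirt adds for virbr0).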
On 21.03.2016 16:57, Gordon Messmer wrote:
On 03/20/2016 08:51 PM, Devin Reade wrote:
In a CentOS 7 test HA cluster I'm building I want both traditional services running on the cluster and VMs running on both nodes
On a purely subjective note: I think that's a bad design. One of the primary benefits of virtualization and other containers is isolating the applications you run from the base OS. Putting services other than virtualization into the system that runs virtualization just makes upgrades more difficult later.
A given VM will be assigned a single network interface, either in the DMZ, on vlan2, or on vlan3. Default routes for each of those networks are essentially different gateways.
What do you mean by "essentially"?
On the DMZ side, the physical interface is eno1 on which is layered bridge br0.
...
On the other network side, the physical interface is enp1s0, on which is layered bridge br2, on which are layered VLAN devices enp1s0.2 and enp1s0.3.
That doesn't make any sense at all. In what way are enp1s0.2 and enp1s0.3 layered on top of the bridge device?
Look at the output of "brctl show". Are those two devices slaves of br2, like enp1s0 is? If so, you're bridging the network segments.
You should have individual bridges for enp1s0, enp1s0.2 and enp1s0.3. If there were any IP addresses needed by the KVM hosts, those would be on the bridge devices, just like on br0.
As a side note, it is actually possible now to have one bridge manage multiple independent vlans. Unfortunately this is basically undocumented (at least I can't find any decent documentation about it). One user of this is Cumulus Linux: https://support.cumulusnetworks.com/hc/en-us/articles/204909397-Comparing-Tr...
Apparently you can manage this with the "bridge" command. Here is what I get on my Fedora 22 system:
0 dennis@nexus ~ $ bridge fdb
01:00:5e:00:00:01 dev enp4s0 self permanent
33:33:00:00:00:01 dev enp4s0 self permanent
33:33:ff:ef:69:e6 dev enp4s0 self permanent
01:00:5e:00:00:fb dev enp4s0 self permanent
01:00:5e:00:00:01 dev virbr0 self permanent
01:00:5e:00:00:fb dev virbr0 self permanent
52:54:00:d3:ca:6b dev virbr0-nic master virbr0 permanent
52:54:00:d3:ca:6b dev virbr0-nic vlan 1 master virbr0 permanent
01:00:5e:00:00:01 dev virbr1 self permanent
52:54:00:a6:af:5d dev virbr1-nic vlan 1 master virbr1 permanent
52:54:00:a6:af:5d dev virbr1-nic master virbr1 permanent

0 dennis@nexus ~ $ bridge vlan
port            vlan ids
virbr0          1 PVID Egress Untagged

virbr0-nic      1 PVID Egress Untagged

virbr1          1 PVID Egress Untagged

virbr1-nic      1 PVID Egress Untagged
I'm not sure if the CentOS 7 kernel is recent enough to support this, but I thought I'd mention it anyway to make people aware that the "one bridge per vlan" model is no longer the only one in existence.
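For the curious, setting one up looks something like this (bridge name made up, using Devin's enp1s0 as the trunk port; needs kernel >= 3.8, untested on CentOS 7):

# create a bridge and turn on per-port VLAN filtering
ip link add name brtrunk type bridge
echo 1 > /sys/class/net/brtrunk/bridge/vlan_filtering
ip link set dev enp1s0 master brtrunk

# allow tagged vlans 2 and 3 on the trunk port, then inspect
bridge vlan add dev enp1s0 vid 2
bridge vlan add dev enp1s0 vid 3
bridge vlan show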
Regards, Dennis
--On Monday, March 21, 2016 08:57:59 AM -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
On 03/20/2016 08:51 PM, Devin Reade wrote:
In a CentOS 7 test HA cluster I'm building I want both traditional services running on the cluster and VMs running on both nodes
On a purely subjective note: I think that's a bad design. One of the primary benefits of virtualization and other containers is isolating the applications you run from the base OS. Putting services other than virtualization into the system that runs virtualization just makes upgrades more difficult later.
I understand. In this case the primary role of these machines is for a non-virtualized HA cluster. Where the VMs enter the picture is for a small number of services that I'd prefer to be isolated from the DMZ, and in this case there is sensitivity to the physical machine count. I'm aware of how this affects upgrades, having been through the cycle a few times. It is what it is. (But thanks.)
A given VM will be assigned a single network interface, either in the DMZ, on vlan2, or on vlan3. Default routes for each of those networks are essentially different gateways.
What do you mean by "essentially"?
The default routes for the DMZ, vlan2, and vlan3 go to different interfaces of the same (OpenBSD) firewall cluster; from the perspective of both the physical nodes and the VMs, however, they are different default routes. The firewall cluster itself is multihomed on the upstream side, but again that is not visible to the nodes and VMs.
The fact that both the cluster and the VMs are protected by the OpenBSD firewalls is the reason that I'm primarily concerned with vectors coming from the non-DMZ VMs onto the DMZ via the hosts.
On the DMZ side, the physical interface is eno1 on which is layered bridge br0.
...
On the other network side, the physical interface is enp1s0, on which is layered bridge br2, on which is layered VLAN devices enp1s0.2 and enp1s0.3.
That doesn't make any sense at all. In what way are enp1s0.2 and enp1s0.3 layered on top of the bridge device?
No, it doesn't. Brain fart on my part, too tired, too many noisy distractions from kids, too many cosmic rays, or something :)
br0 is layered on eno1.
br2 is layered on enp1s0.2.
br3 is layered on enp1s0.3.
The non-DMZ VMs get connected to br2 and br3.
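So "brctl show" on a node looks conceptually like this (bridge ids elided; vnetN devices appear as VMs start):

bridge name  bridge id          STP enabled  interfaces
br0          8000.xxxxxxxxxxxx  no           eno1
br2          8000.xxxxxxxxxxxx  no           enp1s0.2
br3          8000.xxxxxxxxxxxx  no           enp1s0.3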
eno1 is an expected interface for that zone. Where it gets muddy is enp1s0, enp1s0.2, and enp1s0.3. Since the host shouldn't have any IPs on those interfaces, what is the relevance of having them in the 'dmz' zone or any other zone?
Interfaces are part of some zone, whether an address is assigned or not. In terms of implementation, that means that filtering is set up before addresses. If you set up addresses and then filtering, there's a *very* brief window where traffic isn't filtered, and that is bad.
However, in this case the host won't have addresses on (based on my above correction) either br2 or br3. It does sound, though, like having enp1s0, enp1s0.2, and enp1s0.3 in the 'dmz' zone means that filtering rules on the host will affect inbound traffic to the VMs on br2 and br3.
At least that question is easy to verify empirically, and if so, it would argue that the three enp1s0* interfaces should be in their own zone, presumably with a lenient rule set.
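Something along these lines, I expect (the zone name is just illustrative):

firewall-cmd --permanent --new-zone=vmtrunk
firewall-cmd --permanent --zone=vmtrunk --change-interface=enp1s0
firewall-cmd --permanent --zone=vmtrunk --change-interface=enp1s0.2
firewall-cmd --permanent --zone=vmtrunk --change-interface=enp1s0.3
firewall-cmd --reload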
I understand that for bridging and VLANs to work I likely need these forwarding settings active
No, you don't. It's active because libvirtd defines a NAT network by default, and that one requires IP forwarding.
Ah. That makes sense. So in this case where I don't need a NAT network in the libvirtd config, I should be able to eliminate it and thus eliminate the forwarding sysctls.
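That is, something like:

virsh net-destroy default             # stop the NAT network now
virsh net-autostart default --disable
virsh net-undefine default            # or remove its definition entirely

...and then clear net.ipv4.ip_forward, since libvirtd only turns it on when it starts the NAT network.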
Thanks for all of your feedback.
Devin
On 03/21/2016 10:18 PM, Devin Reade wrote:
However, in this case the host won't have addresses on (based on my above correction) either br2 or br3. It does sound, though, like having enp1s0, enp1s0.2, and enp1s0.3 in the 'dmz' zone means that filtering rules on the host will affect inbound traffic to the VMs on br2 and br3.
No, because:
/usr/lib/sysctl.d/00-system.conf:# Disable netfilter on bridges.
/usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-ip6tables = 0
/usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-iptables = 0
/usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-arptables = 0
(Unless you change the defaults)