--On Monday, March 21, 2016 08:57:59 AM -0700 Gordon Messmer <gordon.messmer@gmail.com> wrote:
> On 03/20/2016 08:51 PM, Devin Reade wrote:
>> In a CentOS 7 test HA cluster I'm building, I want both traditional services running on the cluster and VMs running on both nodes.
> On a purely subjective note: I think that's a bad design. One of the primary benefits of virtualization and other containers is isolating the applications you run from the base OS. Putting services other than virtualization onto the system that runs virtualization just makes upgrades more difficult later.
I understand. In this case the primary role of these machines is for a non-virtualized HA cluster. Where the VMs enter the picture is for a small number of services that I'd prefer to be isolated from the DMZ, and in this case there is sensitivity to the physical machine count. I'm aware of how this affects upgrades, having been through the cycle a few times. It is what it is. (But thanks.)
>> A given VM will be assigned a single network interface, either in the DMZ, on vlan2, or on vlan3. Default routes for each of those networks are essentially different gateways.
> What do you mean by "essentially"?
The default routes for the DMZ, vlan2, and vlan3 go to different interfaces of the same (OpenBSD) firewall cluster; however, from the perspective of both the physical nodes and the VMs, they are different default routes. The firewall cluster itself is multihomed on the upstream side, but again that is not visible to the nodes and VMs.
The fact that both the cluster and the VMs are protected by the OpenBSD firewalls is the reason that I'm primarily concerned with vectors coming from the non-DMZ VMs onto the DMZ via the hosts.
>> On the DMZ side, the physical interface is eno1, on which is layered bridge br0.
>> ...
>> On the other network side, the physical interface is enp1s0, on which is layered bridge br2, on which are layered VLAN devices enp1s0.2 and enp1s0.3.
> That doesn't make any sense at all. In what way are enp1s0.2 and enp1s0.3 layered on top of the bridge device?
No, it doesn't. Brain fart on my part, too tired, too many noisy distractions from kids, too many cosmic rays, or something :)
br0 is layered on eno1.
br2 is layered on enp1s0.2.
br3 is layered on enp1s0.3.
The non-DMZ VMs get connected to br2 and br3.
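For concreteness, here is roughly what I expect the network-scripts definitions for the vlan3/br3 leg to look like. This is only a sketch; the device and bridge names are the ones above, everything else is assumed, and ifcfg-enp1s0 itself would just be ONBOOT=yes with no address:

    # /etc/sysconfig/network-scripts/ifcfg-enp1s0.3  (sketch)
    DEVICE=enp1s0.3
    VLAN=yes
    ONBOOT=yes
    BRIDGE=br3
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-br3  (no host IP on purpose)
    DEVICE=br3
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none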
>> enp1s0 is an expected interface for that zone. Where it gets muddy is enp1s0, enp1s0.2, and enp1s0.3. Since the host shouldn't have any IPs on those interfaces, what is the relevance of having them in the DMZ zone or another zone?
> Interfaces are part of some zone, whether an address is assigned or not. In terms of implementation, that means that filtering is set up before addresses. If you set up addresses and then filtering, there's a *very* brief window where traffic isn't filtered, and that is bad.
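If I follow, that binding should be visible on the host regardless of addressing, along these lines (interface and zone names are just the examples from this thread):

    # which zone an interface is bound to, addressed or not
    firewall-cmd --get-zone-of-interface=eno1
    # all interfaces currently bound to the dmz zone
    firewall-cmd --zone=dmz --list-interfaces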
However, in this case the host won't have addresses (based on my above correction) on either br2 or br3. It does sound, though, like having enp1s0, enp1s0.2, and enp1s0.3 in the 'DMZ' zone means that filtering rules on the host will affect inbound traffic to the VMs on br2 and br3.
At least that question is easy to verify empirically, and if it turns out to be true, it would argue that the three enp1s0* interfaces should be in their own zone, presumably with a lenient rule set.
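In that case, something like the following is what I'd try. This is an untested sketch; the zone name 'vmtrunk' is made up, and whether bridged frames are even run through the host's netfilter rules is governed by the net.bridge.bridge-nf-call-iptables sysctl, so that's the first thing to check:

    # does bridged traffic traverse iptables at all?
    # (requires the br_netfilter module on EL7)
    sysctl net.bridge.bridge-nf-call-iptables

    # dedicated, lenient zone for the VM trunk interfaces
    firewall-cmd --permanent --new-zone=vmtrunk
    firewall-cmd --permanent --zone=vmtrunk --set-target=ACCEPT
    firewall-cmd --permanent --zone=vmtrunk --add-interface=enp1s0
    firewall-cmd --permanent --zone=vmtrunk --add-interface=enp1s0.2
    firewall-cmd --permanent --zone=vmtrunk --add-interface=enp1s0.3
    firewall-cmd --reload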
>> I understand that for bridging and VLANs to work I likely need these forwardings active.
> No, you don't. It's active because libvirtd defines a NAT network by default, and that one requires IP forwarding.
Ah. That makes sense. So in this case where I don't need a NAT network in the libvirtd config, I should be able to eliminate it and thus eliminate the forwarding sysctls.
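If I've got that right, dropping libvirt's default network should look something like this (untested sketch):

    virsh net-destroy default               # stop the running NAT network
    virsh net-autostart default --disable   # don't bring it back on reboot
    virsh net-undefine default              # remove its definition entirely
    # afterwards, confirm nothing re-enables forwarding:
    sysctl net.ipv4.ip_forward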
Thanks for all of your feedback.
Devin