On Thu, Mar 24, 2016 at 4:30 PM, Kevin Ross <sedecim@gmail.com> wrote:
> Thanks, Mike. When running tcpdump on the VM, I'm not seeing traffic
> unless it's explicitly intended for that particular VM, so no traffic
> between the other VMs is getting forwarded from the virtual interface
> to the "network appliance" VM.

> There is connectivity between the VMs on the private network and the
> "network appliance" VM, which is acting as a gateway.

> Here's the output of brctl showstp:

> virbr1
>  bridge id              8000.5254007e2f5b
>  designated root        8000.5254007e2f5b
>  root port                 0                    path cost                  0
>  max age                  19.99                 bridge max age            19.99
>  hello time                1.99                 bridge hello time          1.99
>  forward delay             0.00                 bridge forward delay       0.00
>  ageing time             299.95
>  hello timer               0.29                 tcn timer                  0.00
>  topology change timer     0.00                 gc timer                   0.29
>  hash elasticity           4                    hash max                 512
>  mc last member count      2                    mc init query count        2
>  mc router                 1                    mc snooping                1
>  mc last member timer      0.99                 mc membership timer      259.96
>  mc querier timer        254.96                 mc query interval        124.98
>  mc response interval      9.99                 mc init query interval    31.24
>  flags


> virbr1-nic (0)
>  port id                0000                    state                  disabled
>  designated root        8000.5254007e2f5b       path cost                100
>  designated bridge      8000.5254007e2f5b       message age timer          0.00
>  designated port        8001                    forward delay timer        0.00
>  designated cost           0                    hold timer                 0.00
>  mc router                 1
>  flags

> I'm not sure why virbr1-nic is showing up as disabled, and also why

That STP output says the virbr1-nic interface is disabled -- maybe your VM is powered off?
 
> the vnet# interfaces don't show up (they do show up on another host,
> although VMs on that host are having the same non-promiscuous issue as
> these VMs). I've tried this with and without NAT, and with STP toggled
> on/off, with no effect.
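
For reference, toggling STP and checking which ports are actually attached
to the bridge can be done like this (stock bridge-utils commands; virbr1 as
in your output):

  # turn spanning tree off or on for the bridge
  brctl stp virbr1 off
  brctl stp virbr1 on
  # list the bridge's attached interfaces; the vnet# tap devices for
  # running guests should be listed here if they're actually enslaved
  brctl show virbr1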

You'll need to enable IP forwarding and set rules to route the traffic for those VMs.
http://www.linux-kvm.org/page/Networking#Routing_with_iptables
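
Roughly, that boils down to something like the following on the KVM host
(eth0 as the outward-facing interface is an assumption; see the page above
for the full set of rules):

  # enable IPv4 forwarding for the running kernel
  sysctl -w net.ipv4.ip_forward=1
  # and persist it across reboots
  echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf

  # allow forwarding between the private bridge and the LAN
  iptables -A FORWARD -i virbr1 -o eth0 -j ACCEPT
  iptables -A FORWARD -i eth0 -o virbr1 -m state --state ESTABLISHED,RELATED -j ACCEPT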

The gotcha is that if you're not doing any IP routing on the KVM node, your "network appliance" VM needs to have one NIC bridged to your real network and the other as part of virbr1. You could NAT it on the KVM host as well.
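
If you do NAT on the host, it's typically one more rule on top of the
forwarding setup above (same assumptions: eth0 facing out, and virbr1 using
192.168.100.0/24; substitute your actual subnet):

  # rewrite the private source addresses to the host's address on egress
  iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE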

Read through the KVM networking documentation; it will help you determine what configuration you have and whether it's what you want.

--
---~~.~~---
Mike
//  SilverTip257  //