[CentOS] using ip address on bonded channels in a cluster

Thu Jul 26 17:38:16 UTC 2012
Steve Campbell <campbell at cnpapers.com>

On 7/26/2012 12:01 PM, Digimer wrote:
> On 07/26/2012 08:05 AM, Steve Campbell wrote:
>> I'm creating a firewall HA cluster. The proof of concept for the basic
>> firewall cluster is OK. I can bring up the cluster, start the iptables
>> firewall, and move all of this with no problem. I'm using Conga to do
>> all of this configuration on CentOS 6.3 servers.
>>
>> To extend the "HA" part of this, I'd like to use bonded channels instead
>> of plain old NICs. The firewall uses the "IP address" service for the
>> outside firewall IP addresses. Each server behind the firewall is NATted
>> to one of these external IPs on the firewall's external interface.
>>
>> I'm not seeing how I can use bonded channels anywhere for these "IP
>> address" services. Part of the problem is that Conga will "guess" at
>> which interface to place the ip address service upon. In the case of
>> bonded channels, I don't think Conga is even aware of the "bondx"
>> interface, and Conga only uses interfaces like eth0, eth1, etc.
>>
>> I realize that the sysconfig network scripts will come into play here as
>> well, but that's another problem for me to tackle.
>>
>> Does anyone have any experience with bonded channels and Conga? I could
>> sure use some help with this.
>>
>> Thanks,
>>
>> steve campbell
>
> I use bonding extensively, but I always edit cluster.conf directly. If 
> Conga doesn't support "bond*" device names, please file a bug in Red 
> Hat's Bugzilla.
>
> Once the bondX device is up, it will have the IP and the "ethX" 
> devices can be totally ignored from the cluster's perspective. Use the 
> bondX device just as you would have used simple ethX devices.
>
> In case it helps, here is how I set up bonded interfaces on Red Hat 
> clusters for complete HA;
>
> https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network
Digimer,

Thanks very much for the reply. I believe you had pointed out that link 
to me before on a more basic query. It was very helpful in giving me a 
really nice introduction to all the new clustering stuff in CentOS 6.

After reading the page once again, I think my question isn't being 
understood. That's probably my own fault for not stating it plainly.

In your example, you use a VM to move the entire server from one VM host 
to another (or however you have that configured). The VM is a "service" 
defined under the cluster, and the IPs move along with it.

In my situation, the cluster consists of non-VM servers. The servers are 
real, each with an inside and an outside interface and their IPs. They 
become firewalls by moving the external IPs and the iptables rules as 
services, so I only use the "IP address" and "script" resources to move 
the IP addresses and to start and stop iptables. I'd like those IP 
addresses to ride on bonded channels, much like you do for your VMs.
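
For what it's worth, here's roughly the shape of the service section I 
have in mind for cluster.conf. The addresses are just placeholders, and 
I'm sketching this from memory rather than copying a working config:

    <rm>
      <service name="firewall" autostart="1" recovery="relocate">
        <!-- external IPs the inside servers are NATted to; these
             float to whichever node owns the service -->
        <ip address="192.0.2.10" monitor_link="1"/>
        <ip address="192.0.2.11" monitor_link="1"/>
        <!-- start/stop the firewall rules on the active node -->
        <script file="/etc/init.d/iptables" name="iptables"/>
      </service>
    </rm>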

If I'm not mistaken, the parameters for "IP address" don't offer 
anything like a device or interface, so I'm failing to see how I can 
move the IPs between nodes on bonded channels. Individual IP addresses 
on plain NICs are not a problem; that works as expected.
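
For concreteness, here is roughly how I expect the bonding side to look 
in the sysconfig scripts. The bonding mode, addresses and interface 
names are guesses on my part, not a tested setup:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (outside interface)
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    # mode=1 is active-backup; miimon checks link state every 100 ms
    BONDING_OPTS="mode=1 miimon=100"
    # this node's permanent address on the external subnet
    IPADDR=192.0.2.2
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (and the same for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

If the "IP address" agent picks its interface by matching the floating 
address against the subnets already configured on each interface (which 
is my guess at why there's no device parameter), then the service IPs 
should land on bond0 simply because bond0 owns the external subnet; I'd 
appreciate confirmation that that's how it actually works.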

My networking experience also isn't strong enough to see why I'd need a 
bridge in my situation.

Perhaps I should back up and consider VMs. The main problem I see there 
is the time it might take to shut down one VM and start another, as 
opposed to just moving IPs and starting iptables.

I haven't attacked conntrack yet either, so there's plenty more for me 
to do.

Thanks again for your very helpful reply.

steve