[CentOS] Cluster Failover Troubleshooting (luci and ricci)

Ryan Bunce

RBunce at micatholicconference.org
Fri Jul 1 19:54:09 UTC 2011


Hello all.  I posted this in the forum and was told to instead post it to 
the mailing list.  My apologies for the redundancy if you have already 
seen and been irritated by my blatherings.

Thanks.
_________________________

I am working on a CentOS clustered LAMP stack and running into problems. I 
have searched extensively and have come up empty.

Here's my setup:

Two-node cluster, identical hardware: IBM x226 servers with RSA II 
adapters for fencing.
Configured for active/passive failover - no load balancing.
No shared storage - manual rsync of data (shared SSH keys, rsync over SSH, 
cron job).
Single shared IP address.
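For context, the service definition in /etc/cluster/cluster.conf looks 
roughly like this (node names, the custom httpd script path, and the 
failover domain name are illustrative placeholders, not my exact config):

```xml
<rm>
  <failoverdomains>
    <!-- ordered + restricted: primary node preferred, only these two nodes -->
    <failoverdomain name="webfd" ordered="1" restricted="1">
      <failoverdomainnode name="web1" priority="1"/>
      <failoverdomainnode name="web2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service autostart="1" domain="webfd" name="web" recovery="relocate">
    <!-- monitor_link="1" is what makes rgmanager notice the pulled cable -->
    <ip address="10.6.2.25" monitor_link="1"/>
    <script file="/etc/rc.d/init.d/httpd-cluster" name="httpd"/>
    <script file="/etc/rc.d/init.d/mysqld" name="mysqld"/>
  </service>
</rm>
```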

I used luci and ricci to configure the cluster. It's a bit confusing that 
there's an 'apache' resource agent but you have to use a custom init 
script instead; I'm past that, though.

The failover function works when it's kicked off manually from the luci 
web interface: I can tell it to transfer the services (IP, httpd, mysqld) 
to the secondary server and it works fine.

I run into problems when I attempt to simulate a failure (a pulled network 
cord, for instance). The primary system recognizes the failure, shuts down 
its services, and attempts to tell the secondary server to take over - but 
the secondary never does. Here is a log excerpt from a cable-pull test:

Jun 16 15:33:27 flex kernel: tg3: eth0: Link is down.
Jun 16 15:33:34 flex clurgmgrd: [2970]: <warning> Link for eth0: Not 
detected
Jun 16 15:33:34 flex clurgmgrd: [2970]: <warning> No link on eth0...
Jun 16 15:33:34 flex clurgmgrd[2970]: <notice> status on ip "10.6.2.25" 
returned 1 (generic error)
Jun 16 15:33:34 flex clurgmgrd[2970]: <notice> Stopping service 
service:web
Jun 16 15:33:35 flex proftpd[6321]: 10.6.2.47 - ProFTPD killed (signal 15)
Jun 16 15:33:35 flex proftpd[6321]: 10.6.2.47 - ProFTPD 1.3.3c standalone 
mode SHUTDOWN
Jun 16 15:33:39 flex avahi-daemon[2850]: Withdrawing address record for 
10.6.2.25 on eth0.
Jun 16 15:33:49 flex clurgmgrd[2970]: <notice> Service service:web is 
recovering
Jun 16 15:33:49 flex clurgmgrd[2970]: <notice> Recovering failed service 
service:web
Jun 16 15:33:49 flex clurgmgrd: [2970]: <warning> Link for eth0: Not 
detected
Jun 16 15:33:49 flex clurgmgrd[2970]: <notice> start on ip "10.6.2.25" 
returned 1 (generic error)
Jun 16 15:33:49 flex clurgmgrd[2970]: <warning> #68: Failed to start 
service:web; return value: 1
Jun 16 15:33:49 flex clurgmgrd[2970]: <notice> Stopping service 
service:web
Jun 16 15:33:49 flex clurgmgrd: [2970]: <err> script:mysqld: stop of 
/etc/rc.d/init.d/mysqld failed (returned 1)
Jun 16 15:33:49 flex clurgmgrd[2970]: <notice> stop on script "mysqld" 
returned 1 (generic error)
Jun 16 15:33:49 flex clurgmgrd[2970]: <crit> #12: RG service:web failed to 
stop; intervention required
Jun 16 15:33:49 flex clurgmgrd[2970]: <notice> Service service:web is 
failed
Jun 16 15:33:49 flex clurgmgrd[2970]: <crit> #13: Service service:web 
failed to stop cleanly
Jun 16 15:36:43 flex kernel: tg3: eth0: Link is up at 100 Mbps, full 
duplex.
Jun 16 15:36:43 flex kernel: tg3: eth0: Flow control is off for TX and off 
for RX.
Jun 16 16:04:52 flex luci[2904]: Unable to retrieve batch 306226694 status 
from web2:11111: Unable to disable failed service web before starting 
it:clusvcadm failed to stop web:
Jun 16 16:05:28 flex clurgmgrd[2970]: <notice> Starting disabled service 
service:web
Jun 16 16:05:31 flex avahi-daemon[2850]: Registering new address record 
for 10.6.2.25 on eth0.
Jun 16 16:05:31 flex luci[2904]: Unable to retrieve batch 1997354692 
status from web2:11111: module scheduled for execution
Jun 16 16:05:33 flex proftpd[1926]: 10.6.2.47 - ProFTPD 1.3.3c (maint) 
(built Thu Nov 18 2010 03:38:57 CET) standalone mode STARTUP
Jun 16 16:05:33 flex clurgmgrd[2970]: <notice> Service service:web started



I have followed the HowTos for setting up the cluster (with the exception 
of the shared storage) as closely as possible.

Here's what I've already checked:

iptables is not running.
SELinux is not running.
The hosts file resolves all IP addresses and host names properly.
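For completeness, this is roughly how I verified each of those on both 
nodes ('web1' and 'web2' stand in for my real host names):

```shell
# Confirm no firewall rules are loaded (expect empty chains, policy ACCEPT)
iptables -L -n 2>/dev/null || echo "iptables not available/needs root"

# Confirm SELinux is disabled or permissive
getenforce 2>/dev/null || echo "SELinux tooling not installed"

# Confirm both cluster node names resolve (via /etc/hosts)
getent hosts web1 web2 || echo "lookup failed"
```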

I must say that I am less familiar with how all of the cluster components 
work together. All of the Linux clusters I have built thus far have been 
heartbeat+mon style clusters.

I'm looking to find out if there is an additional debug layer that I can 
put in place to get more detailed information about what is (or is not) 
happening between the two cluster members.
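The closest thing I've found to a debug layer so far - and this is me 
guessing from the docs, so corrections are welcome - is raising 
rgmanager's own log level on the <rm> tag in cluster.conf:

```xml
<!-- /etc/cluster/cluster.conf fragment: log_level 7 = debug for rgmanager -->
<rm log_level="7">
  <!-- existing failoverdomains / service definitions stay as they are -->
</rm>
```

After bumping config_version and propagating the file, my understanding is 
that `rg_test test /etc/cluster/cluster.conf` can also exercise the 
resource tree outside the running cluster, and `clustat -i 2` on both 
nodes should show whether the members even agree on membership/quorum 
during the cable pull.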

Many thanks. 

