It seems the whole problem was caused by SELinux. I booted all the nodes with the kernel option selinux=0 and now I can mount the GFS partitions without problems. Thanks,
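For anyone else who hits this, a quick way to check and flip SELinux without rebooting (assuming a RHEL/CentOS 5-style layout; permissive mode is enough to test whether SELinux is the culprit):

[root@node1 ~]# getenforce      # prints Enforcing/Permissive/Disabled
[root@node1 ~]# setenforce 0    # switch to permissive until next boot
# To make it permanent, set SELINUX=permissive (or disabled) in
# /etc/selinux/config, or add selinux=0 to the kernel line in
# /boot/grub/grub.conf as I did.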
Sandra
sandra-llistes wrote:
Hi,
I'm not a GFS expert either :-) In fact, I erased the whole cluster configuration and tried to set it up again from scratch with the luci/ricci configuration tools (perhaps I did something wrong last time).
mount -vv gives the following information:

[root@node1 gfs]# mount -t gfs -vv /dev/home2/home2 /home2
/sbin/mount.gfs: mount /dev/mapper/home2-home2 /home2
/sbin/mount.gfs: parse_opts: opts = "rw"
/sbin/mount.gfs: clear flag 1 for "rw", flags = 0
/sbin/mount.gfs: parse_opts: flags = 0
/sbin/mount.gfs: parse_opts: extra = ""
/sbin/mount.gfs: parse_opts: hostdata = ""
/sbin/mount.gfs: parse_opts: lockproto = ""
/sbin/mount.gfs: parse_opts: locktable = ""
/sbin/mount.gfs: message to gfs_controld: asking to join mountgroup:
/sbin/mount.gfs: write "join /home2 gfs lock_dlm gfs-test:gfs-data rw /dev/mapper/home2-home2"
...
It hangs at that point (the same thing happens on the other node).
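When mount.gfs blocks on that join message it is waiting on gfs_controld, so the cluster membership and groupd state are the first things to look at. A rough checklist, assuming the RHEL 5-era cman/openais tools:

[root@node1 ~]# cman_tool status    # quorum, votes and the cluster name
[root@node1 ~]# cman_tool nodes     # are both nodes shown as members (M)?
[root@node1 ~]# group_tool ls       # the fence, dlm and gfs groups; a group
                                    # stuck in a *_START_WAIT state usually
                                    # means a node is blocked or waiting to
                                    # be fenced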
I tried turning off the local firewalls on the nodes, and they could reach each other with pings without any problem. There are no other firewalls between them.
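Pings alone don't prove much for the cluster stack, though: openais heartbeats over UDP multicast and the DLM needs a TCP port open between the nodes. If the firewalls ever go back on, rules along these lines are typically needed (a sketch assuming iptables and the default ports):

iptables -A INPUT -p udp --dport 5404:5405 -j ACCEPT   # openais/totem
iptables -A INPUT -p tcp --dport 21064 -j ACCEPT       # dlm
iptables -A INPUT -p tcp --dport 11111 -j ACCEPT       # ricci
service iptables save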
The new configuration is simpler:

[root@node1 gfs]# more /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="gfs-test" config_version="6" name="gfs-test">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="node1.fib.upc.es" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="test" nodename="node1.fib.upc.es"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.fib.upc.es" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="test" nodename="node2.fib.upc.es"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_manual" name="test"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources>
      <clusterfs device="/dev/home2/home2" force_unmount="0" fsid="3280"
                 fstype="gfs" mountpoint="/home2" name="home" self_fence="0"/>
    </resources>
  </rm>
</cluster>
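One thing worth double-checking after any hand edit: the config_version attribute has to be bumped and the new file pushed to the other node, otherwise the nodes can run with different configurations. With the RHEL 5-era ccs tools that should be something like:

# after raising config_version from "6" to "7" in the file:
[root@node1 ~]# ccs_tool update /etc/cluster/cluster.conf
[root@node1 ~]# cman_tool version -r 7    # make cman pick up the new version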
Finally, I reformatted /dev/home2/home2 with the following command, which gave no errors but didn't change the outcome:

gfs_mkfs -O -j 3 -p lock_dlm -t gfs-test:gfs-data /dev/home2/home2
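For reference: -t has to be cluster_name:fsname, with cluster_name exactly matching name="gfs-test" in cluster.conf, and -j should be at least the number of nodes that will mount the filesystem (3 journals for 2 nodes leaves one spare). What the filesystem actually carries can be read back from the superblock:

[root@node1 ~]# gfs_tool sb /dev/home2/home2 all
# should report sb_lockproto = "lock_dlm" and
# sb_locktable = "gfs-test:gfs-data"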
Thanks,
Sandra
PS: I'm attaching an strace, but I can't see anything useful in it.
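In case anyone wants to reproduce the trace: mount forks the /sbin/mount.gfs helper, so strace has to follow children to catch where it blocks. Something like the following (the output file name is just an example):

strace -f -tt -o /tmp/mount-gfs.strace mount -t gfs /dev/home2/home2 /home2
# the tail of the trace should show the call where mount.gfs is blocked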