Dear all,
After some trouble finding the right multipath.conf, I'm now running CentOS
5.5 against the Sun Storage 7310 more or less successfully. Unfortunately,
one issue remains:
The 7310 has two storage heads, H1 and H2, in an
active/passive configuration. If I define a LUN on H1, everything works
great: I edit multipath.conf, re-create the initrd, and I see eight paths,
active/ready on the active head and active/ghost on the passive one:
mpath0 (3600...0001) dm-0 SUN,Sun Storage 7310
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 0:0:0:0 sda 8:0 [active][ready]
 \_ 0:0:1:0 sdd 8:48 [active][ready]
 \_ 1:0:0:0 sdg 8:96 [active][ready]
 \_ 1:0:1:0 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=4][enabled]
 \_ 1:0:2:0 sdm 8:192 [active][ghost]
 \_ 0:0:2:0 sdn 8:208 [active][ghost]
 \_ 1:0:3:0 sds 65:32 [active][ghost]
 \_ 0:0:3:0 sdt 65:48 [active][ghost]
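For reference, the kind of multipath.conf device stanza involved looks
roughly like this (a sketch only; the exact callouts and priority handling
depend on the array's ALUA setup, so treat the values as illustrative):

```
devices {
    device {
        vendor                  "SUN"
        product                 "Sun Storage 7310"
        # ALUA: paths on the active head get a high priority,
        # ghost paths on the passive head a low one
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        path_grouping_policy    group_by_prio
        path_checker            tur
        failback                immediate
        no_path_retry           queue
    }
}
```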
The thing is: as soon as I set up a machine with its LUN(s) on the second
head, everything works fine until I try to boot with the freshly created
initrd/multipath.conf. The system hangs with a kernel panic after "Creating
multipath devices... no devices found" and "mount: could not find filesystem
'/dev/root'".
Workaround: if I remove the paths to the passive storage head (the "ghost"
paths) from the SAN fabric zoning, the system boots fine, and I can re-enable
the paths once the kernel is up. After a while the paths show up again in the
FC zoning and become [active][ghost], as desired.
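(Re-enabling works without a reboot: after putting the paths back into the
zone, a rescan of the FC hosts makes them appear. A sketch, assuming the
sysfs rescan interface of the 2.6.18 kernel; host numbers vary:)

```shell
# Rescan every SCSI host so the re-zoned paths are discovered
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done
# Have the multipath tools pick up the new paths
multipath -v2
```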
This smells like a broken mkinitrd to me.
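(For the record, the initrd rebuild in question is the stock invocation,
something like the following for the kernel version below:)

```shell
# Rebuild the initrd so multipath.conf and the multipath tools are included
mkinitrd -f /boot/initrd-2.6.18-194.3.1.el5.img 2.6.18-194.3.1.el5
```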
The system is very much up to date; CentOS 5.5 package versions below:
[root@dev-db3 ~]# yum list device-mapper-multipath
Installed Packages
device-mapper-multipath.x86_64 0.4.7-34.el5_5.1 installed
[root@dev-db3 ~]# uname -r
2.6.18-194.3.1.el5
Any suggestions?
best regards from Berlin,
Jens Neu
Health Services Network Administration