Anything special needed to boot from a SAN in CentOS 5.1 x86-64?
We have a new system with a QLA2342 FC HBA connected to a SAN, have a volume exported to it, and would like to boot from it.
We PXE-boot a kickstart install and that works: the installer sees the disk, installs to it, and reboots. But when GRUB tries to load, all it prints is "GRUB" and it sits there.
When we try the Xen edition of CentOS 5.1, the screen goes black as GRUB tries to load and the system reboots.
When we try VMware ESX 3.5.0 it works perfectly, so the BIOS and SAN configurations are good; it's something specific to CentOS 5.1.
I see in the release notes that SAN boot is supported, but they don't give any special instructions.
Anyone else booting CentOS directly from an FC SAN? Any special considerations needed for multipathing and such when booting?
thanks
nate
On Mon, Mar 17, 2008 at 5:56 PM, nate <centos@linuxpowered.net> wrote:
> I see in the release notes that SAN boot is supported, but they don't give any special instructions.
> Anyone else booting CentOS directly from an FC SAN? Any special considerations needed for multipathing and such when booting?
Use the 'mpath' option when you're installing, and that should pretty much take care of it. The one issue we had is that the CentOS/RHEL mpath option defaults to active/active pathing, so storage systems that use active/passive (like our DS4700) can get angry unless you change the configuration after install. Also, I've heard reports of some folks having delays with older HBAs like the qla23xx, needing a longer timeout than the 250 listed in /proc/scsi/scsi. I've not confirmed this, and it could simply be related to the way the SAN is cabled up or how the zones are defined.
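For reference, a rough sketch of both pieces. The mpath option just goes on the installer boot line (or the PXE append line), and the multipath.conf stanza below is only an example for an RDAC-style active/passive array like the DS4700; the vendor/product strings and callout should really come from your array vendor's docs rather than from me:

    # installer boot prompt / PXE append line
    linux mpath

    # /etc/multipath.conf (after install) -- example stanza, verify against vendor docs
    devices {
        device {
            vendor                "IBM"
            product               "1814"           # DS4700 family
            path_grouping_policy  group_by_prio    # active/passive instead of multibus
            prio_callout          "/sbin/mpath_prio_rdac /dev/%n"
            failback              immediate
        }
    }

Since root is on the multipathed LUN, the multipath config gets baked into the initrd, so rebuild it after changing multipath.conf:

    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)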
Jim Perrin wrote:
> Use the 'mpath' option when you're installing, and that should pretty much
> take care of it. The one issue we had is that the CentOS/RHEL mpath option
> defaults to active/active pathing, so storage systems that use active/passive
> (like our DS4700) can get angry unless you change the configuration after
> install. Also, I've heard reports of some folks having delays with older HBAs
> like the qla23xx, needing a longer timeout than the 250 listed in
> /proc/scsi/scsi. I've not confirmed this, and it could simply be related to
> the way the SAN is cabled up or how the zones are defined.
Cool, thanks! Our array is active/active, so that's good to know. I have a theory on why the system failed to boot in CentOS: the system actually has a pair of internal disks connected to a SATA controller. We turned the SATA controller off in the BIOS, but the installer still may have seen it and installed stuff to it. VMware doesn't support that SATA controller, so it never touched the internal disks.
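If that's what happened, something like this in the kickstart should keep anaconda off the internal disks next time (drive names here are hypothetical; they may enumerate differently on our box):

    # ignore the internal SATA disks
    ignoredisk --drives=sda,sdb
    # with mpath enabled, point the bootloader at the multipathed SAN LUN
    bootloader --location=mbr --driveorder=mapper/mpath0

That, plus making sure the HBA BIOS is first in the system boot order, should rule out GRUB landing on the wrong disk.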
nate