I have an iscsi array that I'd like to mount and share using NFS, and I need it to happen without user intervention on a reboot. In the default configuration this doesn't seem to work very well, because the iscsi initiator isn't started until after the network is up (obviously), and by that time all the local filesystems are already mounted. I can't mount the partitions in rc.local either, because NFS has already started by then.
Does a tidy way to handle this already exist, or do I need to do something like hack /etc/init.d/iscsi to mount and unmount iscsi partitions as necessary?
James
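(For reference, the ordering behind this is easy to see on a stock CentOS 4 box: local filesystems are mounted from rc.sysinit before any runlevel scripts run, the runlevel scripts then start in S## order, and rc.local is the S99local entry, i.e. after nfs. Nothing below is specific to this particular setup:)

# ls /etc/rc.d/rc3.d/ | egrep 'network|iscsi|netfs|nfs|local'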
You want to share the array using both iSCSI and NFS?
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of James Fidell
Sent: Friday, December 01, 2006 10:51 AM
To: CentOS mailing list
Subject: [CentOS] another iscsi question
I have an iscsi array that I'd like to mount and share using NFS, and I need it to happen without user intervention on a reboot. In the default configuration this doesn't seem to work very well, because the iscsi initiator isn't started until after the network is up (obviously), and by that time all the local filesystems are already mounted. I can't mount the partitions in rc.local either, because NFS has already started by then.
Does a tidy way to handle this already exist, or do I need to do something like hack /etc/init.d/iscsi to mount and unmount iscsi partitions as necessary?
James
On Friday 01 December 2006 07:51, James Fidell wrote:
I have an iscsi array that I'd like to mount and share using NFS, and I need it to happen without user intervention on a reboot. In the default configuration this doesn't seem to work very well, because the iscsi initiator isn't started until after the network is up (obviously), and by that time all the local filesystems are already mounted. I can't mount the partitions in rc.local either, because NFS has already started by then.
Not too familiar with booting iscsi drives. Is there a script that gets run, or some syntax for /etc/fstab?
Does a tidy way to handle this already exist, or do I need to do something like hack /etc/init.d/iscsi to mount and unmount iscsi partitions as necessary?
If you have a startup script, the SysV init system (/etc/rc.d/rc#.d, chkconfig) is useful for just this sort of thing. Looking in /etc/rc.d/rc3.d (rc5.d if you boot to X) will show you all the symlinks to init.d for starting and stopping processes. The S## at the beginning of a symlink denotes that the process will be started (called through /etc/init.d with argument "start") and in which order (the number). The K## in a symlink denotes that the process is killed (called with argument "stop") in this runlevel, and the order. The chkconfig(8) command automates the configuration of this process fairly well, and its man page will explain more.
Looking at /etc/rc.d/rc3.d on my system shows that the network is started at priority 10, and nfs at priority 60. That leaves quite a bit of room to choose where in the boot order you want the iscsi drives mounted.
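To make that concrete, here's a minimal sketch of checking and adjusting the order with chkconfig; the priority numbers in the header line below are only an example of the format, not the stock values from the iscsi package:

# ls /etc/rc.d/rc3.d/ | egrep 'network|iscsi|netfs|nfs'

The start/stop priorities come from the "chkconfig:" header near the top of the init script, e.g. a line of the form "# chkconfig: - 25 75" (runlevels, start priority, stop priority). If the iscsi script doesn't land between S10network and S60nfs, edit that header in /etc/init.d/iscsi and rebuild the symlinks:

# chkconfig --del iscsi
# chkconfig --add iscsi
# chkconfig --list iscsi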
Kevan Benson wrote:
Not too familiar with booting iscsi drives. Is there a script that gets run, or some syntax for /etc/fstab?
I'm new to this whole iscsi thing myself. I just got a simple link working this weekend...
To automount an iscsi share after booting CentOS 4.4, I had to......
# yum -y install iscsi-initiator-utils ....
then, edit /etc/iscsi.conf and define the username, password and the iscsi target, like...

Username=iscsiuser
Password=iscsipassword
DiscoveryAddress=<ip-of-iscsi-target>
Username=iscsiuser
Password=iscsipassword
then, # tail -f /var/log/messages &
(that prints live syslog data on this terminal from now on)
# chkconfig iscsi on
# service iscsi start
and it should show a scsi discovery in the log. It can take half a minute for it to sort itself out and create any visible iscsi logical volumes....
Checking iscsi config:    [ OK ]
Loading iscsi driver:     [ OK ]
Dec 1 21:24:39 svfis-blade04 iscsi: iscsi config check succeeded
Dec 1 21:24:40 svfis-blade04 kernel: iscsi-sfnet: Loading iscsi_sfnet version 4:0.1.11-3
Dec 1 21:24:40 svfis-blade04 kernel: iscsi-sfnet: Control device major number 254
Dec 1 21:24:40 svfis-blade04 iscsi: Loading iscsi driver: succeeded
Starting iscsid:          [ OK ]
Dec 1 21:24:45 svfis-blade04 iscsid[13131]: version 4:0.1.11-3 variant (02-May-2006)
Dec 1 21:24:45 svfis-blade04 iscsi: iscsid startup succeeded
Dec 1 21:24:45 svfis-blade04 iscsid[13135]: Connected to Discovery Address 10.5.160.91
Dec 1 21:24:45 svfis-blade04 kernel: iscsi-sfnet:host8: Session established
Dec 1 21:24:45 svfis-blade04 kernel: scsi8 : SFNet iSCSI driver
Dec 1 21:24:45 svfis-blade04 kernel:   Vendor: Openfile  Model: Virtual disk  Rev: 0
Dec 1 21:24:45 svfis-blade04 kernel:   Type: Direct-Access  ANSI SCSI revision: 04
Dec 1 21:24:45 svfis-blade04 kernel: SCSI device sdc: 42205184 512-byte hdwr sectors (21609 MB)
Dec 1 21:24:45 svfis-blade04 kernel: SCSI device sdc: drive cache: write through
Dec 1 21:24:45 svfis-blade04 kernel: SCSI device sdc: 42205184 512-byte hdwr sectors (21609 MB)
Dec 1 21:24:45 svfis-blade04 kernel: SCSI device sdc: drive cache: write through
Dec 1 21:24:45 svfis-blade04 kernel:  sdc: sdc1
Dec 1 21:24:45 svfis-blade04 kernel: Attached scsi disk sdc at scsi8, channel 0, id 0, lun 0
Dec 1 21:24:45 svfis-blade04 scsi.agent[13195]: disk at /devices/platform/host8/target8:0:0/8:0:0:0
# fdisk -l /dev/sdc
Disk /dev/sdc: 21.6 GB, 21609054208 bytes
64 heads, 32 sectors/track, 20608 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       20608    21102576   83  Linux
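Following on from that, one possible way to get the new device mounted automatically at boot, rather than hacking the init scripts, is the _netdev mount option. This is only a sketch: the label, mount point and filesystem type are placeholders, and it assumes the netfs/boot scripts on this release actually honour _netdev (worth checking /etc/init.d/netfs before relying on it):

# mkfs.ext3 -L iscsi0 /dev/sdc1
# mkdir -p /srv/iscsi

then add a line like this to /etc/fstab (the _netdev option keeps it out of the early local-filesystem mount pass and marks it as needing the network):

LABEL=iscsi0   /srv/iscsi   ext3   _netdev,defaults   0 0

and test it by hand before rebooting:

# mount -a -O _netdev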
On Friday 01 December 2006 21:30, John R Pierce wrote:
# chkconfig iscsi on
# service iscsi start
and it should show a scsi discovery in the log. It can take half a minute for it to sort itself out and create any visible iscsi logical volumes....
Well, there you go. There will be an S##iscsi symlink after you run that chkconfig command. You just want to make sure it falls between network and nfs in the startup order. If there's a flag or something to make it block, not returning until it has either mounted a device correctly or waited out a set timeout, that would be useful for making sure the iscsi drive is mounted before the startup process moves on to other items.
Actually, looking at the init script, it looks like it waits 30 seconds by default after initiating the subsystem to let drives associate. If that isn't long enough for you, you can set it explicitly in /etc/sysconfig/iscsi.
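If you did want it to block explicitly, here's a rough sketch of the kind of wait loop that could go in a small custom init script ordered before nfs; the device path, mount point and 60-second timeout are all placeholders, not anything from the stock scripts:

#!/bin/sh
# wait up to 60 seconds for the iscsi disk to show up, then mount it
DEV=/dev/sdc1        # placeholder - a LABEL= based mount is safer in practice
MNT=/srv/iscsi       # placeholder mount point
i=0
while [ ! -b "$DEV" ] && [ "$i" -lt 60 ]; do
    sleep 1
    i=`expr $i + 1`
done
if [ -b "$DEV" ]; then
    mount "$DEV" "$MNT"
else
    echo "iscsi device $DEV did not appear within 60 seconds" >&2
fi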