On Nov 11, 2009, at 9:33 AM, "nate" centos@linuxpowered.net wrote:
carlopmart wrote:
Hi all,
I have setup a raid1 between two iscsi disks
Why would you think to attempt this? iSCSI is slow enough as it is; layering RAID on top of it would be even worse. Run RAID on the remote iSCSI system and don't try to do RAID between two networked iSCSI volumes, as that will hurt performance even more.
I think the OP was thinking storage subsystem redundancy. It's not that bad an idea, since Linux RAID1 can use a write-intent bitmap to track changed blocks, so the whole mirror doesn't have to be resynced after a disconnect.
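For reference, the write-intent bitmap mentioned above is a standard mdadm feature; a sketch (device names are examples, run as root):

```shell
# Add an internal write-intent bitmap to an existing RAID1 array,
# so a dropped member only resyncs the blocks written while it was gone
mdadm --grow /dev/md0 --bitmap=internal

# Or create the mirror with a bitmap from the start
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal /dev/sdb /dev/sdc

# Check that the bitmap is active
mdadm --detail /dev/md0 | grep -i bitmap
```

The internal bitmap lives in the md superblock area, so no extra file or device is needed.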
iSCSI isn't that slow; I have found that it is only limiting for extremely high-bandwidth applications, and there is 10GbE for those.
Why is md0 not activated when I reboot this server? How can I make this persistent between reboots?
Probably because the iSCSI sessions are not yet established when the software RAID autodetection runs at boot. You must start the RAID volume manually after the iSCSI sessions are established.
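In practice that manual sequence looks something like this (a sketch; the target IQNs are made-up examples, and the array is assumed to be recorded in /etc/mdadm.conf):

```shell
# Establish the iSCSI sessions first so the member disks exist
iscsiadm -m node --targetname iqn.2009-11.example:disk1 --login
iscsiadm -m node --targetname iqn.2009-11.example:disk2 --login

# Then assemble the mirror; scanning the config file avoids
# hard-coding member device names, which can change between boots
mdadm --assemble --scan /dev/md0
```

Assembling by UUID via --scan matters here because iSCSI disks are not guaranteed to come back as the same /dev/sdX names after a reboot.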
True, and I believe there is an initrd option for iSCSI devices at boot that one can set in /etc/sysconfig/initrd; if you apropos/grep/google, I'm sure you will find it. It's intended for iSCSI boot devices.
The exception would be if you were using a hardware iSCSI HBA, in which case the devices wouldn't show up as iSCSI volumes; they would appear as ordinary SCSI volumes and be available immediately upon booting, since the HBA handles session management itself.
True, HW iSCSI would make sure they were available at boot, but CentOS has had iSCSI boot-device support since 5.1; one just needs the module and tools in the initrd, and there is an option for that.
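If rebuilding the initrd is more than you want to take on, a low-tech way to make the assembly persist across reboots is to record the array in /etc/mdadm.conf and assemble it late in boot, after the iscsi initscript has run. This is a sketch under those assumptions; the mount point is an example:

```shell
# Record the array by UUID so mdadm can find it regardless of device names
mdadm --detail --scan >> /etc/mdadm.conf

# /etc/rc.d/rc.local runs near the end of boot, after the iscsi service,
# so appending the assembly (and mount) there is a simple workaround
cat >> /etc/rc.d/rc.local <<'EOF'
mdadm --assemble --scan
mount /dev/md0 /mnt/iscsi-mirror   # example mount point
EOF
```

The trade-off is that the filesystem won't be listed in /etc/fstab (or must be marked noauto,_netdev there), since a normal fstab mount would be attempted before the sessions exist.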
-Ross