Hi all,
I have set up a RAID1 between two iSCSI disks and the mdadm command works fine. The problems start when I reboot the server (CentOS 5.4, fully updated): the RAID is lost, and I don't understand why.
"mdadm --detail --scan" doesn't return any output. "mdadm --examine --scan" returns:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=9295e5a2:b28d4fbd:b61fed29:f232ebfe
and that looks OK. My mdadm.conf is:
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=9295e5a2:b28d4fbd:b61fed29:f232ebfe devices=/dev/iopsda1,/dev/iopsdb1
MAILADDR root
The mdmonitor init script is enabled.
Why isn't md0 activated when I reboot this server? How can I make it persistent across reboots?
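Manually, everything assembles fine after boot, for example (a sketch; the open-iscsi initscripts are assumed, and the device names are the ones from my mdadm.conf):

```shell
# Manual recovery after boot: log in to all known iSCSI targets,
# then assemble the array from the devices named in mdadm.conf.
iscsiadm -m node --loginall=all
mdadm --assemble /dev/md0 /dev/iopsda1 /dev/iopsdb1
cat /proc/mdstat
```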
Many thanks.
On Wed, 2009-11-11 at 12:43 +0100, carlopmart wrote:
Hi all,
I have set up a RAID1 between two iSCSI disks and the mdadm command works fine. The problems start when I reboot the server (CentOS 5.4, fully updated): the RAID is lost, and I don't understand why.
Just a thought, but are the raid partitions all marked as ID type fd (Linux raid autodetect) ?
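A quick way to check, e.g. with sfdisk from util-linux (the partition numbers here are an assumption):

```shell
# Print the partition ID byte for each member partition;
# Linux raid autodetect partitions report "fd".
sfdisk --print-id /dev/iopsda 1
sfdisk --print-id /dev/iopsdb 1
```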
regards Brendan
Brendan Minish wrote:
Just a thought, but are the raid partitions all marked as ID type fd (Linux raid autodetect) ?
regards Brendan
Yes.
Brendan Minish wrote:
Just a thought, but are the raid partitions all marked as ID type fd (Linux raid autodetect) ?
If the system wasn't installed on RAID, you may not have the RAID module in the initrd to autodetect at boot time. Or the autodetection happens before the iSCSI connections are made.
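One way to check is to unpack the current initrd and look for the relevant modules (a sketch; CentOS 5 initrds are gzipped cpio archives, and the image name here assumes the running kernel):

```shell
# List the initrd contents and search for raid/iscsi modules.
mkdir /tmp/ird && cd /tmp/ird
zcat /boot/initrd-$(uname -r).img | cpio -id --quiet
find . -name 'raid*' -o -name 'iscsi*'
```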
carlopmart wrote:
Hi all,
I have set up a RAID1 between two iSCSI disks
Why would you think to attempt this? iSCSI is slow enough as it is; layering RAID on top of it would be even worse. Run RAID on the remote iSCSI system and don't try to do RAID between two networked iSCSI volumes; it will hurt performance even more.
Why isn't md0 activated when I reboot this server? How can I make it persistent across reboots?
Probably because the iSCSI sessions are not established when the software raid stuff kicks in. You must manually start the RAID volume after the iSCSI sessions are established.
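One crude way to do that automatically (a sketch, not something I've tested here; the paths are the stock CentOS ones, and the sleep is arbitrary) is to assemble the array late in boot from rc.local:

```shell
# Append to /etc/rc.d/rc.local, which runs after the iscsi initscript
# has logged in, so the iSCSI block devices should exist by now.
sleep 10                 # give the sessions a moment to settle
mdadm --assemble --scan  # assembles every ARRAY listed in /etc/mdadm.conf
```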
The exception would be if you were using a hardware iSCSI HBA, in which case the devices wouldn't show up as iSCSI volumes; they would show up as SCSI volumes and be available immediately upon booting, as the HBA would handle session management.
nate
On Nov 11, 2009, at 9:33 AM, "nate" centos@linuxpowered.net wrote:
Why would you think to attempt this? iSCSI is slow enough as it is; layering RAID on top of it would be even worse. Run RAID on the remote iSCSI system and don't try to do RAID between two networked iSCSI volumes; it will hurt performance even more.
I think the OP was thinking about storage-subsystem redundancy. It's not that bad an idea, since Linux RAID1 uses bitmaps for changed blocks, so the whole RAID1 doesn't have to be resilvered after a disconnect.
iSCSI isn't that slow; I have found that it is only limiting for extremely high-bandwidth applications, and there is 10GbE for those.
Why isn't md0 activated when I reboot this server? How can I make it persistent across reboots?
Probably because the iSCSI sessions are not established when the software raid stuff kicks in. You must manually start the RAID volume after the iSCSI sessions are established.
True, and I believe there is an initrd option for iSCSI devices at boot that can be set in /etc/sysconfig/initrd; if you apropos/grep/google it, I'm sure you will find it. It's meant for iSCSI boot devices.
The exception would be if you were using a hardware iSCSI HBA, in which case the devices wouldn't show up as iSCSI volumes; they would show up as SCSI volumes and be available immediately upon booting, as the HBA would handle session management.
True, hardware iSCSI would make sure they were available at boot, but CentOS has had iSCSI boot-device support since 5.1; one just needs the module and tools in the initrd, and there is an option for that.
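Something along these lines (a sketch; the module names are examples, and --with is the stock RHEL/CentOS mkinitrd switch for pulling extra modules into the image):

```shell
# Rebuild the initrd with the iSCSI initiator and raid1 modules
# included, then reboot on the new image.
mkinitrd -f --with=iscsi_tcp --with=raid1 \
    /boot/initrd-$(uname -r).img $(uname -r)
```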
-Ross
Ross Walker wrote:
True, and I believe there is an initrd option for iSCSI devices at boot that can be set in /etc/sysconfig/initrd. It's meant for iSCSI boot devices.
-Ross
Thanks to all. I have reconfigured the initrd to use the iSCSI disks and everything works well.
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos