Is it possible, or will there be any problems, with using mdraid on top of mdraid?
Specifically, say mdraid 1/5 on top of mdraid multipath.
E.g. 4 storage machines exporting iSCSI targets via two different physical network switches,
then use multipath to create md block devices, then use mdraid on these md block devices.
The purpose being that the storage array survives a physical network switch failure.
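Roughly what I have in mind, as a sketch (the target IQN, portal IPs and device names below are made-up examples, not a tested config):

  # Log in to the same iSCSI target through both switches
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m discovery -t sendtargets -p 192.168.2.10
  iscsiadm -m node -T iqn.2011-03.example:store1 -p 192.168.1.10 --login
  iscsiadm -m node -T iqn.2011-03.example:store1 -p 192.168.2.10 --login

  # The two sessions appear as e.g. /dev/sdb and /dev/sdc;
  # join them into a single md multipath device
  mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc

  # Repeat for the other three storage machines (md1..md3), then
  # build RAID 5 (or RAID 1) across the four multipath devices
  mdadm --create /dev/md10 --level=5 --raid-devices=4 /dev/md0 /dev/md1 /dev/md2 /dev/md3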
On Mon, Mar 21, 2011 at 7:51 PM, Emmanuel Noobadmin <centos.admin@gmail.com> wrote:
Is it possible, or will there be any problems, with using mdraid on top of mdraid?
Since you're exporting the storage as iSCSI, the host machine will see it as a raw disk, irrespective of how it's set up on the exporting server. So, yes, you can do this.
On Mon, Mar 21, 2011 at 14:51, Emmanuel Noobadmin <centos.admin@gmail.com> wrote:
Is it possible, or will there be any problems, with using mdraid on top of mdraid?
Multipath should survive ONE switch failure, and your storage should provide a fast RAID implementation. You'll have really poor performance if you RAID over iSCSI...
Multipath should survive ONE switch failure, and your storage should provide a fast RAID implementation.
Am I correct to understand that multipath will survive up to N-1 switch failures, not just one switch, if I use more than two switches? If so, then I can see that I wouldn't need RAID on top of multipath. E.g. the storage node exports the disk as 192.168.1.10, 192.168.2.10 and 192.168.3.10 to 3 different physical switches?
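Something along these lines, if I understand md multipath correctly (the IQN, IPs and device names are again just examples):

  # Same target reachable through three portals, one per switch
  iscsiadm -m node -T iqn.2011-03.example:store1 -p 192.168.1.10 --login
  iscsiadm -m node -T iqn.2011-03.example:store1 -p 192.168.2.10 --login
  iscsiadm -m node -T iqn.2011-03.example:store1 -p 192.168.3.10 --login

  # Three sessions -> three block devices (say sdb, sdc, sdd);
  # any two of the paths could fail and the md device should stay up
  mdadm --create /dev/md0 --level=multipath --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd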
You'll have really poor performance if you RAID over iSCSI...
I was pondering the need for RAID on top of multipath but wasn't sure if multipath alone would be enough.
Would running LVM on top of iSCSI, for flexibility in expanding volume sizes, be badly affected as well?
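What I mean by flexibility is roughly this (the VG/LV names and sizes are made up, and I'm assuming an ext3/ext4 filesystem that can be grown with resize2fs):

  # Put LVM on the multipathed iSCSI device
  pvcreate /dev/md0
  vgcreate vg_iscsi /dev/md0
  lvcreate -L 200G -n lv_data vg_iscsi

  # Later, grow the volume by adding another exported disk
  pvcreate /dev/md1
  vgextend vg_iscsi /dev/md1
  lvextend -L +200G /dev/vg_iscsi/lv_data
  resize2fs /dev/vg_iscsi/lv_data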
You'll have really poor performance if you RAID over iSCSI...
I've spent more time thinking about this and reading over my research. The crux seems to be that while RAID does impose additional write IOPS costs, that may be more than offset by the read advantage. According to my own notes (which I overlooked in my first reply), the increased performance from scaling was part of the original reason I was thinking of using RAID on top of iSCSI.
Searching more on this, the consensus seems to be that RAID 1/10 on iSCSI is quite okay and may provide a read performance increase, but RAID 5's IOPS penalty will probably be a killer.
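The back-of-envelope arithmetic I'm looking at, assuming the usual read-modify-write parity update for a small random write:

  RAID 1/10 small write: 2 physical writes (one per mirror)
  RAID 5    small write: read old data + read old parity
                         + write new data + write new parity = 4 I/Os

  Over iSCSI every one of those I/Os is also a network round trip,
  so RAID 5 roughly doubles the per-write network cost of RAID 1/10.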
On 03/21/11 9:11 PM, Emmanuel Noobadmin wrote:
Searching more on this, the consensus seems to be that RAID 1/10 on iSCSI is quite okay and may provide a read performance increase, but RAID 5's IOPS penalty will probably be a killer.
If you can use 2 dedicated ethernet adapters to 2 iSCSI servers for this mirror, it's probably a win.
If you can use 2 dedicated ethernet adapters to 2 iSCSI servers for this mirror, it's probably a win.
That is my intention: 2 or 3 NICs on the host, going to 2 or 3 physical switches each carrying a dedicated VLAN, then on to the iSCSI servers with 2/3 NICs. Would doing so eliminate the impact of using RAID 5 across the network, though? I'm weighing the trade-offs, since it gets increasingly expensive to add more storage with RAID 1.