[CentOS] Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

Fri Mar 13 18:42:53 UTC 2009
nate <centos at linuxpowered.net>

nate wrote:

> If you want 2Gbit+/sec throughput, what you should probably be looking
> at instead of multipathing is something like 802.3ad. If the Equallogic
> box has different IPs on each interface you will be able to get up
> to 3Gbit/s of throughput and still have fault tolerance, without having
> to mess with multipathing (assuming the Equallogic box does IP takeover
> when one of the interfaces goes down).


Now that I think about it, this won't work for a single iSCSI volume
either. You only get more throughput with more than one volume, each
mounted at a different IP.

It would still be a simpler setup than configuring device mapper,
though (assuming the Equallogic box does IP takeover).
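
Roughly, the multiple-volume idea looks like this on the initiator
side. This is only a sketch: the portal IPs and target names are made
up, it assumes the open-iscsi iscsiadm tool (the older sfnet initiator
on CentOS 4 is configured through /etc/iscsi.conf instead), and it's
wrapped in Python just to keep the example self-contained:

#!/usr/bin/env python
# Sketch: log in one volume per portal IP so each volume's iSCSI session
# rides its own TCP connection (and so can use a different GigE link).
# Portal IPs and target IQNs below are placeholders, not real names.
import os

volumes = [
    ("10.0.0.1", "iqn.2001-05.com.equallogic:vol-data1"),
    ("10.0.0.2", "iqn.2001-05.com.equallogic:vol-data2"),
]

for portal, target in volumes:
    # discover what's behind this portal, then log in to the volume we want
    os.system("iscsiadm -m discovery -t sendtargets -p %s" % portal)
    os.system("iscsiadm -m node -T %s -p %s --login" % (target, portal))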

If the Equallogic box has a single IP across multiple interfaces and
runs 802.3ad on them, you also won't get an aggregate performance
improvement from a single host to it, because 802.3ad splits the
traffic on either a per-IP or a per-IP:port basis. If you're
connecting to a single IP:port between two systems that each have,
say, 4x1Gbps links, the maximum number of links that traffic will
ever use is 1.
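
To see why, here's a rough sketch of the kind of transmit hash 802.3ad
bonding uses to pick a link. This follows the layer3+4 policy described
in the Linux bonding documentation (the exact formula depends on the
xmit_hash_policy and kernel version), and the addresses and ports are
made up:

# Why a single connection never spreads across links: the bonding driver
# hashes the flow and always picks the same slave for the same tuple.
import socket
import struct

def ip_to_int(ip):
    # dotted-quad IPv4 -> 32-bit integer
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def xmit_slave(src_ip, src_port, dst_ip, dst_port, n_slaves):
    # roughly the layer3+4 transmit hash from the bonding docs
    h = (src_port ^ dst_port) ^ ((ip_to_int(src_ip) ^ ip_to_int(dst_ip)) & 0xffff)
    return h % n_slaves

# One iSCSI session (fixed source/dest IP:port) always lands on the same
# slave out of 4, no matter how much data you push over it.
print(xmit_slave("10.0.0.50", 51234, "10.0.0.1", 3260, 4))
print(xmit_slave("10.0.0.50", 51234, "10.0.0.1", 3260, 4))
# A second session to a different target IP can land on another slave.
print(xmit_slave("10.0.0.50", 51240, "10.0.0.2", 3260, 4))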

So I'd say split up your volumes to get faster throughput, but I've
got to say that 1Gbit of performance is VERY fast: more than 100
megabytes per second, assuming you can get line rate on the software
initiator. It depends on your workload; for mine (mostly random
writes) I typically see about 3MB/second per SATA disk before the
disk is saturated, so to reach 100 megabytes per second I'd need
about 33 disks, just for that one link.
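
Back-of-the-envelope, with those same numbers:

# How many of these SATA disks to fill one GigE link with this workload?
link_mb_per_sec = 100.0   # roughly line rate for a 1Gbit/s iSCSI session
disk_mb_per_sec = 3.0     # per-disk throughput under mostly random writes

print("about %.0f disks per 1Gbit link" % (link_mb_per_sec / disk_mb_per_sec))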

In my case I have 200 SATA disks, and can manage to get roughly
750 megabytes per second aggregate (peak), though the spindle
response times are very high at those levels.

This storage array's controllers are among the fastest in the world,
but you may still be limited by the disks. The array is rated for
6.4 gigabytes per second with 4 controllers (3.2GB/s on the front
end, 3.2GB/s on the back end). With 200 SATA disks it tops out at
about 1.3-1.4 gigabytes/second of total (front and back end)
throughput; add another 200 SATA disks and the first pair of
controllers still won't be saturated.
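
As a rough scaling check on those figures (this assumes the 6.4GB/s
rating scales down linearly to about 3.2GB/s total for a single pair
of controllers, and that disk throughput scales linearly with spindle
count, neither of which is exact):

# 200 disks do ~1.3-1.4 GB/s total; double the spindles and compare
# against what one controller pair is rated for.
pair_rating_gb = 6.4 / 2   # GB/s total, half of the 4-controller rating
per_200_disks  = 1.35      # GB/s total, midpoint of the 1.3-1.4 above

print("400 disks: ~%.1f GB/s vs pair rating %.1f GB/s"
      % (2 * per_200_disks, pair_rating_gb))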

nate