[CentOS] Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

nate centos at linuxpowered.net
Fri Mar 13 16:49:20 UTC 2009


James Pearson wrote:
> I'm trying to test out an Equallogic PS5500 with a server running CentOS 4.7
>
> I can create a volume and mount it fine using the standard
> iscsi-initiator-utils tools.
>
> The Equallogic box has 3 Gigabit interfaces and I would like to try to
> set up things so I can read/write from/to the volume using multiple NICs
> on the server i.e. get 200+ Mbyte/s access to the volume - I've had some
>   basic info from Equallogic (Dell) saying that I need to set up DM
> multipathing - however, the info I have starts by saying that I have to:
>
> "Make sure the host can see multiple devices representing the same
> target volume"
>
> However, I'm not sure how I get to this point i.e. how do I set up the
> Equallogic and/or server so that I can see a single volume over multiple
> network links?

Dell should be able to tell you this.

If you want 2Gbit+/sec of throughput, what you should probably be
looking at instead of multipathing is something like 802.3ad link
aggregation. If the Equallogic box has a different IP on each
interface, you can get up to 3Gbit/s of aggregate throughput and still
have fault tolerance, without having to mess with multipathing
(assuming the Equallogic box does IP takeover when one of its
interfaces goes down).
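
A minimal sketch of such a bonding setup on CentOS 4 (assuming eth1
and eth2 are the dedicated storage NICs and the switch ports are
configured for 802.3ad; names and addresses are placeholders):

  # /etc/modprobe.conf -- load the bonding driver in 802.3ad mode
  alias bond0 bonding
  options bond0 mode=802.3ad miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=10.0.0.10
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth1 (same for eth2)
  DEVICE=eth1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

then "service network restart". Keep in mind 802.3ad hashes each
connection onto a single link, so you only get past 1Gbit/s when you
have sessions going to more than one of the Equallogic's IPs.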

Now, I have a hard time believing the software iSCSI initiator in
Linux can handle that much throughput, but maybe it can. I'd put money
on a pretty good chunk of your CPU time being spent on it, though.

And of course use jumbo frames, a dedicated network for storage (at
least dedicated VLANs), and dedicated NICs. Last I heard Equallogic
didn't support anything other than jumbo frames, so you should have
that set up already.
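
Jumbo frames are just an MTU setting on the storage interfaces (a
sketch below; every switch port in the path has to pass 9000-byte
frames too):

  # /etc/sysconfig/network-scripts/ifcfg-eth1 (or ifcfg-bond0)
  MTU=9000

  # verify after restarting the interface:
  # 8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000
  ping -M do -s 8972 <array IP>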

Multipathing is more for storage systems that have multiple
controllers: detecting when one of those controllers is not accessible
and failing over to another. While you may be able to come up with a
device mapper multipathing config that presents different volumes down
different paths, thus aggregating the total throughput of the system
(across the different volumes), I'm not aware of a way myself to
aggregate paths in device mapper to a single volume.
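
On the "multiple devices representing the same target volume" point
from the original question: once there is an iSCSI session per
NIC/portal, the same LUN shows up as several /dev/sd devices, and you
can confirm they are the same volume by comparing WWIDs (a sketch; the
device names are assumptions):

  scsi_id -g -u -s /block/sdb
  scsi_id -g -u -s /block/sdc   # same ID as sdb = same volume, different path

device-mapper-multipath groups devices together by that ID.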

For my storage array I use device mapper with round-robin
multipathing, which alternates between however many I/O paths I have
(typically 4 paths per volume); at any given moment, though, only 1
path is used for any given volume. This can only be used on arrays
that are truly "active-active"; do not attempt it on an active-passive
system or you'll run into a heap of trouble.
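
A minimal /etc/multipath.conf sketch for that kind of round-robin
setup (active-active arrays only; the WWID and alias are placeholders,
and the real settings should come from your vendor's documentation):

  multipaths {
          multipath {
                  wwid                  36006016...   # from scsi_id
                  alias                 eql-vol0
                  path_grouping_policy  multibus      # all paths in one group
                  path_selector         "round-robin 0"
                  rr_min_io             100           # I/Os per path before switching
          }
  }

After a "service multipathd restart", "multipath -ll" should show all
the paths grouped under a single /dev/mapper device.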

I haven't used Equallogic gear before, but if it has IP takeover for
downed interfaces/controllers then your best bet is likely handling
link aggregation/failover at the network level, moving the controller
failover entirely to the Equallogic system instead of trying to track
it at the server level (though with 802.3ad you will still be tracking
local network link status at the server level).

My storage vendor provided explicit step-by-step instructions (PDF) on
what was needed to set up device mapper to work properly with their
systems. Given that Dell is a big company too, I'd expect you could
easily get that information from them as well.

While most of the connectivity on my storage array is Fibre Channel,
it also has 4x1Gbps iSCSI ports. The ports are not aggregated,
however, so the most any single system can pull from the array at a
time is 1Gbps (despite there being 4x1Gbps ports, two on each HBA,
there is actually only 1Gbps of throughput available per HBA, so in
essence there are 2x1Gbps ports available). In aggregate, though, with
multiple systems hitting the array simultaneously, you can drive more
throughput.

That is assuming you still want to go the device mapper route.

nate
