[CentOS] iSCSI best practices

Fri Dec 9 17:21:43 UTC 2011
Digimer <linux at alteeve.com>

On 12/09/2011 11:27 AM, Alan McKay wrote:
> Hey folks,
> 
> I had some general questions and when reading through the list archives I
> came across an iSCSI discussion back in February where a couple of
> individuals were going back and forth about drafting up a "best practices"
> doc and putting it into a wiki.   Did that ever happen?    And if so, where
> is it?
> 
> Now my questions :
> We are not using iSCIS yet at work but I see a few places where it would be
> useful e.g. a number of heavy-use NFS mounts (from my ZFS appliance) that I
> believe would be slightly more efficient if I converted them to iSCSI.   I
> also want to introduce some virtual machines which I think would work out
> best if I created iSCSI drives for them back on my Oracle/Sun ZFS appliance.

As you are aware, NFS vs. iSCSI is an apples/oranges comparison. As two
or more machines will see the same "block device" using iSCSI, it falls
on higher layers to ensure that the storage is accessed safely (i.e.,
using clustered LVM, GFS2, etc.). Alternatively, you need to ensure that
no two nodes access the same partitions at the same time, which
precludes live migration of VMs.
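To make the comparison concrete, here is a sketch of what attaching an
iSCSI LUN looks like on CentOS with open-iscsi. The portal IP and IQN
below are placeholders, not values from this thread:

```shell
# Requires the initiator tools: yum install iscsi-initiator-utils
# (portal address and target IQN below are made-up examples)

# Discover the targets exported by the storage appliance's portal:
iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# Log in to one of the discovered targets:
iscsiadm -m node -T iqn.2011-12.com.example:storage.vm01 \
         -p 192.168.10.50 --login

# The LUN now appears as an ordinary local block device (e.g. /dev/sdb).
# If a second node also logs in, both see the same raw device, so a
# cluster-aware layer (clustered LVM, GFS2) must sit on top before
# more than one node is allowed to touch it.
```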

> I mentioned iSCSI to the guy whose work I have taken over here so that he
> can concentrate on his real job, and when I mentioned that we should have a
> separate switch so that all iSCSI traffic is on it's own switch, he balked
> and said something like "it is a switched network, it should not matter".
>  But that does not sit right with me - the little bit I've read about iSCSI
> in the past always stresses that you should have it on its own network.

"Switched network" simply means that data going from machine A to
machine B isn't sent to machine C. It doesn't speak to capacity issues.

> So 2 questions :
> - how important is it to have it on its own network?

From what standpoint? I always have a dedicated network, primarily to
ensure that if/when the network is saturated, other traffic isn't
interrupted. This is particularly important when you have
latency-sensitive applications like clustering.

There are also arguments for security, but these are only half-true. A
VLAN would isolate the network, and encrypting the traffic (with its
performance trade-offs) would protect it from sniffing.

> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?

My concern wouldn't be with managed/unmanaged so much as slow/fast.
Cheap switches generally have low(er) internal switching bandwidth, so
be sure to look that up in the switch's specs and compare it to your
expected loads. There are other differences, too, like MAC table sizes
and whatnot.

> thanks,
> -Alan

This may or may not be of use to you, but here is a link to an
(in-progress, incomplete) tutorial I am working on. The block diagram
just below this link shows how I configure my (VM Cluster) networks. It
uses a dedicated "SN" (Storage Network) for DRBD replication traffic, so
it's not for iSCSI but the concept is similar.

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network

The network in this configuration is completely redundant (bonding
mode=1 (Active/Passive) across two switches). In my case, there are just
two managed switches, but there is no real reason that you can't use a
pair of unmanaged switches for each subnet (paired for redundancy), so
long as those switches are sufficient for your expected load/growth.
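For reference, a mode=1 bond on CentOS is just two ifcfg files; a
sketch (addresses are placeholders, adjust to your subnets):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0
# Active/Passive (mode=1) bond; miimon=100 checks link every 100 ms.
DEVICE=bond0
BONDING_OPTS="mode=1 miimon=100"
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0
# (ifcfg-eth1 is identical apart from DEVICE=eth1.)  Plug eth0 into
# switch A and eth1 into switch B, so either switch can fail or be
# rebooted without dropping the storage network.
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```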

Cheers

-- 
Digimer
E-Mail:              digimer at alteeve.com
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again.
stupid hawking radiation." - epitron