[CentOS] Re: ATA-over-Ethernet v's iSCSI -- CORAID is NOT SAN, also check multi-target SAS

Bryan J. Smith thebs413 at earthlink.net
Tue Nov 8 12:19:51 UTC 2005


On Tue, 2005-11-08 at 15:16 +1100, Nick Bryant wrote:
> Indeed - that's how I found out about them.

Indeed - that's how everyone finds out about them.  ;->

> It's not *just* the cost of the HBA though - the storage device itself is
> quite a bit more expensive.

Exactly!  The HBA is the _least_ of your concerns.

It's the multi-target intelligence that lets you address the same
storage from two different hosts.  CORAID doesn't allow that.  Some
cheaper SCSI, SAS and iSCSI devices don't either.  You get what you pay
for: if you want decent performance thanks to hardware multi-targeting
of the same storage from multiple hosts, you need that intelligence in
the target.

> Ok forgive me for my knowledge of SANs isn't great, but I thought if you
> were using a SAN that represents itself as a block device (a real SAN) that
> only one machine could have true read/write access to the "slice"?

Yes and no.  Yes, you can have a SAN that works that way.  But no, you
can also have SANs that handle multi-targeting of the same storage
area.  That requires _both_ intelligence on the target SCSI, SAS, iSCSI
or FC/FC-AL device _and_ software on the hosts that is aware of it.

> That was unless you used a file system like GFS (I wasn't intending too).

GFS in 100% software, host-controlled mode can synchronize non-unified
storage.  It's slow and pitiful.  That's the only mode I've tried GFS
in -- although I believe it can also use shared/unified space between
two hosts (never tried it myself).

You do not need GFS to use shared/unified space between two hosts.  You
merely need a way for those hosts to access and share the unified space
coherently -- i.e., one mounts read/write while the other mounts
read-only, and the writer's changes are synchronized.  Red Hat has been
doing this since well before GFS -- sharing out NFS and Samba in a
fail-over pair of hosts pointed at the same space.  That was the work
Red Hat gained from its Mission Critical Linux acquisition.

That's what I've used in the past for my clustered file services.
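
Just to make the take-over ordering concrete, here is a minimal sketch in
Python -- for illustration only, not anything out of Red Hat's cluster
suite -- of what a standby host does when it takes over the shared
volume.  The device path, mount point, peer name and fencing helper are
all hypothetical placeholders.

    import subprocess

    SHARED_DEV  = "/dev/sdb1"        # placeholder: LUN both hosts can see
    MOUNT_POINT = "/export/shared"   # placeholder: where the shared data lives
    PEER        = "node1"            # placeholder: the host we take over from

    def take_over():
        # 1. Power-fence the old active host first ("lights out" OoB), so it
        #    cannot keep writing to the volume after we mount it.
        subprocess.check_call(["fence_peer", PEER])  # hypothetical fencing helper
        # 2. Only then mount the shared block device read/write.
        subprocess.check_call(["mount", SHARED_DEV, MOUNT_POINT])
        # 3. Bring the services back up on top of the storage.
        subprocess.check_call(["exportfs", "-ra"])          # re-export NFS shares
        subprocess.check_call(["service", "smb", "start"])  # restart Samba

    if __name__ == "__main__":
        take_over()

The whole point is the ordering: fence first, mount second, serve last.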

> When I said shared storage I didn't mean it had to be accessed at the same
> time from all hosts. The RHEL cluster suite in an active/standby setup
> actually mounts the partitions as a host changes from standby to
> active after its sure the active host hasn't got access anymore with a
> "lights out" OoB setup. 
> Well that was my understanding of how it worked anyhow?

Yes, but you're missing a key point.  The system designated for
fail-over is still mounting the volume -- even if only in standby.
Think of it as a "read-only" mount that tracks the changes made by the
system holding the "read/write" mount (I know this is a mega-
oversimplification).  And when it does fail over, it still has to be
allowed to mount the volume while the other system may be in a state
that the target device believes is still accessing it.

CORAID will _refuse_ to allow anything else to access the volume after
one system mounts it.  It is not multi-targetable.  SCSI-2, iSCSI and
FC/FC-AL are.  AoE is not.
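
If you want to check whether a target even speaks the reservation side
of the SCSI command set, a quick probe looks something like the sketch
below.  It shells out to sg_persist from sg3_utils (which tests
persistent reservations, the descendant of SCSI-2 reserve/release); the
device paths are just examples, and the AoE path is whatever the aoe
driver exposes on your system.

    import os
    import subprocess

    def answers_reservations(dev):
        """True if the target answers PERSISTENT RESERVE IN (read keys)."""
        with open(os.devnull, "w") as devnull:
            rc = subprocess.call(["sg_persist", "--in", "--read-keys", dev],
                                 stdout=devnull, stderr=devnull)
        return rc == 0

    # A SCSI/iSCSI/FC LUN with multi-target intelligence will answer:
    print(answers_reservations("/dev/sda"))        # example device path
    # An AoE slice (e.g. /dev/etherd/e0.0) has no target-side reservation
    # logic at all -- whichever host grabs it first simply owns it.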

> I have not but I'll be sure to check it out now... my worry is that out of
> the vendors I'm talking to (Acer, Dell, HP and Sun) no one offered it up.

Ask where their SAS capabilities are.  In a few cases, some FC-AL
product lines are also adding SAS.  But since SAS is still young in the
multi-targeted area (even though it leverages the existing SCSI-2
command set), it wouldn't surprise me if few products are out yet -- let
alone being offered up by the vendors.

> The distance does limit the flexibility but can be worked around.

The idea behind multi-targeted SAS is "storage in the same closet."  If
that limitation is an issue, then you don't want SAS.

> Definitely one to look at in the future then.

Yep, SAS is one of the least understood technologies right now.  People
either think it's some SATA technology that's not useful for the
enterprise, or they think SCSI is dead.  Quite the opposite: it's
superior to SATA, while being compatible with it, and offers the full
SCSI-2 command set.  SAS drives roll off the same lines as U320 SCSI
drives at most HD fabs.

> That's the problem, this "chump" is paying for it himself... sadly I no
> longer have bank/telco budgets to play with :( But still 500USD isn't
> really bad.

Well, you're probably better off, price-wise, with internal storage and
using 100% GFS.

> I know they made the standard but it was my understanding it was open now?

It's always been open.  It's their standard.  And it's rather limited.

> Whether or not other vendors will use it remains to be seen though.

Doesn't matter.  I look at the features of the protocol.  It doesn't
support multi-targeting.

> I'm aware they exist but go and try to buy a product from Dell with
> anything larger than a 250GB SATA disk in it. Good luck ;)

Hitachi and Seagate started selling their 500GB and 400GB (respectively)
24x7-rated disks just a few months ago.  As for Dell, they often work
with Maxtor, who doesn't sell many 24x7-rated disks.  In fact, I
wouldn't be surprised if Dell is their only major partner.

> If you ask you'll be told that the larger disks haven't yet been approved
> in the enterprise type systems yet,

Again, Dell is using Maxtor.  Maxtor's focus has always been on
commodity price.  They don't have a heavy interest in 24x7 versions for
a premium.

Hitachi and Seagate do.

> but I imagine part of it will be they don't want to cannibalise
> part of their SCSI market by offering products with a *much* lower cost per
> GB, well not yet anyhow.

Has nothing to do with it.  Even 24x7 commodity drives are still not as
reliable as enterprise drives.

In case you didn't know, enterprise drives with low vibration and high
tolerances come in capacities of 18, 36, 73 and 146GB.  Commodity
drives, whose vibration is 3-10x worse and whose tolerances are far
lower, come in capacities up to 500GB.

The 24x7 commodity drives are either commodity drives that test to
higher tolerances or drives manufactured with improved components, but
they are still not the same as enterprise drives in capacities or
reliability.

For more on Enterprise v. Commodity v. 24x7 Commodity drives, see:  
  http://www.samag.com/documents/s=9841/sam0509a/0509a_s1.htm
  http://www.samag.com/documents/s=9841/sam0509a/0509a_t2.htm  


-- 
Bryan J. Smith     b.j.smith at ieee.org     http://thebs413.blogspot.com
----------------------------------------------------------------------
The best things in life are NOT free - which is why life is easiest if
you save all the bills until you can share them with the perfect woman



