[CentOS] connecting 2 servers using an FC card via iSCSI

nate centos at linuxpowered.net
Wed Mar 18 23:46:59 UTC 2009


Erick Perez wrote:
> Nate, Ross. Thanks for the information. I now understand the difference.
>
> Ross: I can't ditch MSSS since it is a government purchase, so I *must*
> use it until something breaks and budget is assigned; maybe in two
> years we can buy something else. The previous boss purchased this
> equipment, and I guess an HP EVA, NetApp or some other sort of NAS/SAN
> equipment would have been better suited for the job... but go figure!
>
> Nate: The whole idea is to use the MSSServer and connect several
> servers to it. It has 5 available slots, so a bunch of cards can be
> placed there.
>
> I think (after reading your comments) that I can install two dual-port
> 10Gb network cards in the MSSS, configure it for jumbo frames (9k),
> and then put 10Gb network cards in the servers that will connect to
> this MSSS and enable 9k frames on them as well. All of this, of
> course, connected to a good 10Gb switch with a good backplane. I'm
> currently using 1Gb, so switching to fiber at 1Gb will not provide a
> lot of gain.
>
> Using IOMeter we saw that we will not incur I/O wait due to slow hard
> disks.
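
On the jumbo frame point: for what it's worth, enabling 9000-byte
frames on a CentOS box is simple enough, as long as every NIC and
switch port in the path supports it (the interface name and address
below are just placeholders):

  # takes effect immediately, lost on reboot
  ip link set dev eth0 mtu 9000

  # persistent: add this line to
  # /etc/sysconfig/network-scripts/ifcfg-eth0
  MTU=9000

  # verify the whole path passes 9k frames
  # (8972 = 9000 minus 28 bytes of IP/ICMP headers)
  ping -M do -s 8972 <address of the storage server>

That last ping, with fragmentation prohibited, is the quickest way to
confirm the switches really forward jumbo frames end to end.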


Don't set your expectations too high on performance, no matter what
network card or HBA you have in the server. Getting high performance
takes a lot more than just fast hardware; your software needs to take
advantage of it. I've never heard of MSSServer, so I'm not sure what
it is. My storage array here at work is a 3PAR T400, which at maximum
capacity can push roughly 26 gigabits per second of Fibre Channel
throughput to servers (and another 26Gbit to the disks). Delivering
that level of throughput would likely take at least 640 15,000 RPM
drives (640 drives is the current maximum for my array; I expect that
to go to 1,280 within a year).
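
Some very rough arithmetic puts that drive count in perspective
(ballpark figures, assuming the random, mixed I/O most real workloads
generate rather than pure sequential streaming):

  26 Gbit/s  ~=  3.25 GB/s  ~=  3,300 MB/s
  3,300 MB/s / 640 drives  ~=  5 MB/s per drive

A 15k RPM disk can stream far more than 5MB/s sequentially, but under
random, small-block I/O a few MB/s per spindle is about all you can
count on, which is why the spindle counts get so large.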

This T400 is among the fastest storage systems on the planet, in the
top 5 or 6 I believe. My configuration is by no means fully decked
out, but I have plenty of room to grow into it. Currently it has
200x750GB SATA-II disks, and with I/O evenly distributed across all
200 drives I can get about 11Gbit of total throughput (roughly 60% to
disk and 40% to servers). It has 24GB of cache and ultra-fast ASICs
for RAID. The limitation in my system is the disks; the controllers
don't break a sweat.
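
If you want to confirm the same thing on the Linux side (Erick
mentioned doing it with IOMeter), iostat from the sysstat package is
the usual quick check:

  iostat -x 5

If %iowait stays high and the per-disk await/%util columns are pegged
while the network interfaces are loafing, the disks are the
bottleneck, not the 10gig plumbing.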

You can have twice as many drives that are twice as fast as these
SATA disks and still get poorer performance; it's all about the
software and architecture. I know this because we've been busy
migrating off just such a system onto this new one for weeks now.

So basically what I'm trying to say is: just because you put a couple
of 10gig cards in a storage server, don't expect anywhere near 10gig
performance for most workloads.

There's a reason companies like BlueArc charge $150k for 10gig-capable
NFS head units (that is the head unit only, no disks): getting that
level of performance isn't easy.

nate



