[CentOS] Re: Hardware RAID Controller -- not a "bug"

Wed May 11 14:58:45 UTC 2005
Joshua Baker-LePain <jlb17 at duke.edu>

On Wed, 11 May 2005 at 3:06pm, Chris Croome wrote

> On Wed 11-May-2005 at 08:40:33AM -0500, Bryan J. Smith wrote:
> > 
> > The 3Ware 9000 series adds a good amount of DRAM for more
> > buffering operations, such as RAID-5 writes.  But they are new,
> > and the drivers are still maturing.
> 
> Yeah...
> 
> FWIW there was a thread last month on fedora-devel:
> 
> - 3w-9xxx module version in FC4
>   https://www.redhat.com/archives/fedora-devel-list/2005-April/thread.html#00872
> 
> I thought this comment was interesting:
> 
>   We never use these systems for high usage scenarios like a
>   database server or sometimes even a home directory server.  Nearline
>   backup and slow storage is what we consider them useful for.
> 
>   https://www.redhat.com/archives/fedora-devel-list/2005-April/msg00886.html

I'm in the midst of testing a dual-9500-12-based system, and I've got all 
sorts of results (I posted tiobench numbers for XFS and ext3 recently).  
For the past couple of days I've been playing with IOR 
<http://www.llnl.gov/asci/purple/benchmarks/limited/ior/> 
<ftp://ftp.llnl.gov/pub/siop/ior/>.  This is the output from a sample 
run across 10 clients (NFS over TCP, wsize=rsize=32768, CentOS 3 
clients, CentOS 4 server):

Command line used: /home/jlb/src/IOR-2.8.4/src/C/IOR -F -W -R -b 40m -t 4m -s 103 -e
Participating tasks: 10

Summary:
        api                = POSIX
        test filename      = testFile
        access             = file-per-process
        clients            = 10 (1 per node)
        repetitions        = 1
        xfersize           = 4 MiB
        blocksize          = 40 MiB
        aggregate filesize = 40.23 GiB
        Lustre stripe size = Use default
              stripe count = Use default

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   ----
write     25.92      40960      4096       0.067203   1589.32    78.92      0
read      208.72     40960      4096       0.093569   197.39     191.23     0
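
As a sanity check on the summary above: each task writes -s x -b = 
103 x 40 MiB = 4120 MiB, and 10 tasks gives 41200 MiB, which is the 
40.23 GiB aggregate IOR reports.

For anyone wanting to try something similar: IOR runs under MPI, one 
task per client.  A rough sketch follows; the server name, export, 
mount point, machines file, and MPICH-style mpirun flags are all just 
placeholders for whatever your site uses, not my actual setup:

  # On each client, mount the export with the options noted above
  # (server name, export path, and mount point are made up):
  mount -t nfs -o tcp,rsize=32768,wsize=32768 server:/export /mnt/ior

  # From a head node, launch one IOR task per client.  "machines"
  # lists the 10 client hostnames; -o points the test file at the
  # NFS mount (the flags after IOR are the ones from the run above):
  mpirun -np 10 -machinefile machines \
    /home/jlb/src/IOR-2.8.4/src/C/IOR -F -W -R -b 40m -t 4m -s 103 -e \
    -o /mnt/ior/testFile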

Note that the server is dual-homed, with half of the clients accessing 
each address -- that's what makes the read number possible (yes, gigabit 
everywhere).  For that test the server was running XFS.  Doing the same 
test with ext3, the write number is slightly higher (~30 MiB/s) and the 
read number slightly lower (~190 MiB/s).
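
To put those numbers in perspective, some back-of-the-envelope math 
(raw link rates, ignoring TCP/NFS overhead):

  1 Gb/s link         ~= 119 MiB/s
  2 links             ~= 238 MiB/s aggregate ceiling
  208.72 MiB/s read    = ~88% of that ceiling, i.e. network-bound
  25.92 MiB/s write    = well under one link, i.e. bound by the
                         server's disk/RAID write path, not the wire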

Just putting it out there.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University