On Apr 15, 2011, at 12:32 PM, Rudi Ahlers <Rudi(a)SoftDux.com> wrote:
>
>
> On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker <rswwalker(a)gmail.com> wrote:
> On Apr 15, 2011, at 9:17 AM, Rudi Ahlers <Rudi(a)SoftDux.com> wrote:
>
>>
>>
>> On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan <christopher.chan(a)bradbury.edu.hk> wrote:
>> On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
>> > On 04/14/2011 09:00 PM, Christopher Chan wrote:
>> >>
>> >> Wanna try that again with 64MB of cache only and tell us whether there
>> >> is a difference in performance?
>> >>
>> >> There is a reason why 3ware 85xx cards were complete rubbish when used
>> >> for raid5, and it is what led to the 95xx/96xx series.
>> >
>> > I don't happen to have any systems I can test with the 1.5TB drives
>> > without controller cache right now, but I have a system with some old
>> > 500GB drives (which are about half as fast as the 1.5TB drives in
>> > individual sustained I/O throughput) attached directly to onboard SATA
>> > ports in an 8 x RAID6 with *no* controller cache at all. The machine has
>> > 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
>> >
>> > Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
>> > Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> > Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>> > pbox3        32160M   389  98 76709  22 91071  26  2209  95 264892  26 590.5  11
>> > Latency             24190us    1244ms    1580ms   60411us   69901us    42586us
>> > Version  1.96       ------Sequential Create------ --------Random Create--------
>> > pbox3               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>> >               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> >                  16 10910  31 +++++ +++ +++++ +++ 29293  80 +++++ +++ +++++ +++
>> > Latency               775us     610us     979us     740us     370us     380us
>> >
>> > Given that the underlying drives are roughly half as fast as the drives
>> > in the other test, the results are quite comparable.
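>> >
>> > (For reference, a run like the one above boils down to something like
>> > the following; the mount point is a placeholder:)
>> >
>> >      bonnie++ -d /mnt/raid6test -s 32160M -n 16 -u nobody
>> >
>> > (-s sets the data size to twice the 16GB of RAM so the page cache can't
>> > absorb it, and -n 16 matches the file-creation phase shown above.)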
>>
>> Woohoo, next we will be seeing md raid6 also giving comparable results
>> if that is the case. I am not the only person on this list who thinks
>> cache is king for raid5/6 on hardware raid boards, and using hardware
>> raid + BBU cache for better performance is one of the two reasons why
>> we don't do md raid5/6.
>>
>>
>> >
>> > Cache doesn't make a lot of difference when you quickly write a lot more
>> > data than the cache can hold. The limiting factor becomes the slowest
>> > component - usually the drives themselves. Cache isn't magic performance
>> > pixie dust. It helps in certain use cases and is nearly irrelevant in
>> > others.
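>> >
>> > (To put rough numbers on it: a hypothetical 512MB controller cache in
>> > front of a 32GB bonnie++ write can absorb well under 2% of the data,
>> > so sustained throughput is set by the drives, not the cache.)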
>> >
>>
>> Yeah, you are right - but cache is primarily there to buffer writes for
>> performance. Why else go through the expense of getting BBU cache? So
>> what happens when you tweak bonnie a bit?
>>
>>
>>
>> As a matter of interest, does anyone know how to use an SSD drive for cache purposes with Linux software RAID arrays? ZFS has this feature and it makes a helluva difference to a storage server's performance.
>
> Put the file system's log device on it.
>
> -Ross
>
>
>
>
>
> Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra protection / redundancy to the whole pool.
>
> But the cache / L2ARC device keeps frequently read data (simply put) on SSD to improve overall system performance.
>
> So I was wondering if one could do this with mdraid or even just EXT3 / EXT4?
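>
> (For reference, attaching those devices to a pool looks something like
> this; the pool name "tank" and the device names are placeholders:)
>
>      # dedicated log (ZIL) device for synchronous writes
>      zpool add tank log /dev/sdc1
>      # L2ARC read cache device
>      zpool add tank cache /dev/sdd1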
Ext3/4 and XFS allow specifying an external log device which, if it is an SSD, can speed up writes. All of these file systems aggressively use the page cache for read/write caching. The only thing you don't get is an L2ARC-type cache, but I have heard of a dm-cache project that might provide that type of cache.
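A sketch of the external journal setup, assuming the md array is /dev/md0 and the SSD partition is /dev/sdb1 (both placeholders):

     # ext4: create the external journal device, then the file system that uses it
     mke2fs -O journal_dev /dev/sdb1
     mkfs.ext4 -J device=/dev/sdb1 /dev/md0

     # XFS: put the log on the SSD at mkfs time and mount with the matching option
     mkfs.xfs -l logdev=/dev/sdb1 /dev/md0
     mount -o logdev=/dev/sdb1 /dev/md0 /mnt/data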
-Ross