Now you are telling me that somehow you have code that makes your database stuff its journal on your RAID controller's cache. Cool, mind sharing it with the rest of us?
fsync(handle);
If we -don't- do this after processing each event and the system fails catastrophically, a thousand or so events (a couple of seconds' worth of realtime data) are lost in the operating system's buffering. I feel like I'm repeating myself.
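For the curious, the pattern is nothing fancier than append-then-fsync per event. Here's a minimal, self-contained sketch, not our actual code; the file name and the append_event helper are invented for illustration:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Append one event to the queue file and force it to stable storage
     * before acknowledging it, so a crash loses at most the event in flight. */
    static int append_event(int fd, const void *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);   /* fd was opened with O_APPEND */
        if (n < 0 || (size_t)n != len)
            return -1;
        return fsync(fd);                  /* flush past the OS buffer cache */
    }

    int main(void)
    {
        int fd = open("queue.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        const char event[] = "one event's worth of data\n";
        if (append_event(fd, event, sizeof(event) - 1) != 0) {
            perror("append_event");
            return 1;
        }
        close(fd);
        return 0;
    }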
If the aggregate queues are up to 10GB, I really wonder how much faster your hardware RAID makes things, unless of course your cache is much larger than 2GB. Just on the basis of the inadequate size of your cache, I would give software RAID + RAM card the benefit of the doubt.
The combined queue files average a few to 10GB total under a normal workload. If a downstream subscriber backs up, they can grow quite a bit, up to an arbitrarily set 100GB limit. It's these queue files that we are flushing with fsync(). Each fsync writes out a few K to a few hundred K bytes, one 'event' worth of data which has been appended to one or another of the queues, from where it will eventually be forwarded to some number of downstream subscribers. What we're calling a journal is just the index/state of these queues, stored in a couple of separate, very small files that also get fsync()'d on writes; it has NOTHING to do with the file system.
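To be concrete about what that "journal" amounts to, here's a rough sketch. The struct fields and file name are assumptions for the example, not our real format; the point is only that the state is a pair of offsets rewritten and fsync()'d in step with the queue file:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The per-queue "journal": just head/tail offsets into the queue file,
     * kept in a tiny separate file and fsync()'d on every update. */
    struct queue_state {
        uint64_t head_offset;   /* next byte to forward downstream */
        uint64_t tail_offset;   /* end of the last fully written event */
    };

    static int save_state(const char *path, const struct queue_state *st)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        if (pwrite(fd, st, sizeof(*st), 0) != (ssize_t)sizeof(*st)) {
            close(fd);
            return -1;
        }
        if (fsync(fd) != 0) {   /* the index has to be as durable as the data */
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int main(void)
    {
        struct queue_state st = { 0, 4096 };
        if (save_state("queue.idx", &st) != 0) {
            perror("save_state");
            return 1;
        }
        return 0;
    }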
To store these queues on a RAM card, we'd need 100GB to handle the backup cases, which, I hope you can agree, is ludicrous.
Throughput under test load (incoming streams free-running as fast as they can be processed):

    no fsync                                 - 1000 events/second
    fsync w/ direct-connect disk             - 50-80 events/second
    fsync w/ hardware write-back cached RAID - 800 events/second
Seems like a clear win to me.