Timo Schoeler wrote:
[off list]
Thanks for your email, Ross. Reading all the discussion here, I'm really concerned about moving all our data to such a system. The main (but not the only) reason we're moving is the long fsck that UFS (FreeBSD) needs after a crash. XFS seemed to fit perfectly, as I never had fsck issues with it. However, this discussion is changing my mind. So, what would be an alternative (preferably without hardware RAID controllers, as already mentioned)? ext3 is not an option, since we get long fsck runs there too. Even ext4 doesn't seem too good in this area...
I thought 3ware would have been good. Their cards have been praised for quite some time...have things changed? What about Adaptec?
Well, the recommended LSI is fine with me, as it's my favorite vendor, too. I abandoned Adaptec quite a while ago, and my opinion was confirmed when the OpenBSD vs. Adaptec discussion came up. However, the question of the hardware RAID vendor is completely independent of the file system discussion.
Oh yeah it is. If you use hardware RAID, you do not need barriers and can afford to turn them off for better performance, or use LVM for that matter.
Hi, this is off list: could you please explain the LVM vs. barrier thing to me?
AFAIU, one should turn off the write caches on the HDs (in any case), and -- if there's a BBU-backed RAID controller -- use its cache, but turn off barriers. When does LVM come into play here? Thanks in advance! :)
No, barriers exist specifically to let you turn on the write caches on the HDs and not lose data. Before barriers, fsync/fdatasync lied: they would return before the data hit the platters. With barriers, fsync/fdatasync return only after the data has hit the platters.
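To make that concrete, here's a minimal C sketch of the write-then-fsync pattern we're talking about (the file path is just an arbitrary example): with barriers working, a successful fsync() means the record is on stable storage; without them, and with the drive's write cache on, it may still be sitting only in the volatile disk cache.

/* Minimal sketch: append a record and force it to stable storage.
 * With barriers enabled (and a drive that honors cache flushes),
 * fsync() should return only once the data is really on the platters. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/var/tmp/journal.dat";   /* arbitrary test file */
    const char buf[] = "committed record\n";

    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
        perror("write");
        return EXIT_FAILURE;
    }

    /* Without barriers and with the drive's write cache on, fsync() may
     * return while the data is still only in the volatile disk cache. */
    if (fsync(fd) != 0) {
        perror("fsync");
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}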
However, the dm layer does not support barriers, so with LVM you need to turn the write caches off if you care about your data and have no BBU cache to use.
If you use a hardware RAID card with a BBU cache, you can use LVM without worrying, and if you're not using LVM, you can (and in the case of XFS, should) turn off barriers.
I re-read the XFS FAQ on these issues; it seems we'll have to set up two machines in the lab, one driven purely by software RAID and one with a hardware RAID controller configured as JBOD, and then benchmark and stress-test both setups.
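For what it's worth, one quick way to compare the two boxes is a write+fsync latency loop; here's a rough C sketch (file name, block size and iteration count are arbitrary). Barrier-on software RAID, barrier-off, and BBU-backed write-back setups usually show very different per-fsync times, so it complements the usual throughput benchmarks.

/* Rough write+fsync latency microbenchmark sketch.
 * Run it on the filesystem under test and compare the per-cycle times
 * between the software-RAID and hardware-RAID (BBU) machines. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "fsync-test.dat";
    const int iterations = 1000;          /* arbitrary */
    char block[4096];
    memset(block, 'x', sizeof(block));

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct timeval start, end;
    gettimeofday(&start, NULL);

    for (int i = 0; i < iterations; i++) {
        if (write(fd, block, sizeof(block)) != (ssize_t)sizeof(block)) {
            perror("write");
            return EXIT_FAILURE;
        }
        if (fsync(fd) != 0) {
            perror("fsync");
            return EXIT_FAILURE;
        }
    }

    gettimeofday(&end, NULL);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d write+fsync cycles in %.2f s (%.3f ms each)\n",
           iterations, elapsed, elapsed * 1000.0 / iterations);

    close(fd);
    unlink(path);
    return EXIT_SUCCESS;
}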
JBOD? You plan to use software raid with that? Why?!
Mainly for better manageability and monitoring. Honestly, the proprietary tools aren't the best.
3dm2 for 3ware was pretty decent, whether via HTTP or the CLI...