[CentOS] XFS and LVM2 (possibly in the scenario of snapshots)
rswwalker at gmail.com
Thu Dec 10 14:59:11 UTC 2009
On Dec 10, 2009, at 4:28 AM, Timo Schoeler
<timo.schoeler at riscworks.net> wrote:
> [off list]
>>>>> Thanks for your email, Ross. So, reading all the stuff here I'm
>>>>> concerned about moving all our data to such a system. The reason
>>>>> for moving is mainly, but not only, the longish fsck of UFS
>>>>> (FreeBSD) after a crash. XFS seemed to fit perfectly, as I never
>>>>> had problems with fsck here. However, this discussion seems to be
>>>>> changing my mindset. So,
>>>>> what would be an alternative (if possible not using hardware RAID
>>>>> controllers, as already mentioned)? ext3 is not an option; here
>>>>> we have long fsck runs, too. Even ext4 seems not too good in this
>>>>> area...
>>>> I thought 3ware would have been good. Their cards have been
>>>> praised for
>>>> quite some time...have things changed? What about Adaptec?
>>> Well, for me the recommended LSI is okay as it's my favorite vendor,
>>> too. I abandoned Adaptec quite a while ago, and my opinion was
>>> confirmed when the OpenBSD vs. Adaptec discussion came up.
>>> However, the
>>> question of the hardware RAID vendor is totally independent of the
>>> file system discussion.
>> Oh yeah, it is. If you use hardware RAID, you do not need barriers
>> and can afford to turn them off for better performance, or use LVM
>> for that matter.
> Hi, this is off list: Could you please explain the LVM vs. barrier
> issue? AFAIU, one should turn off write caches on HDs (in any case),
> and -- if there's a BBU-backed RAID controller -- use its cache, but
> turn off barriers. When does LVM come into play here? Thanks in
> advance! :)
LVM, like md raid and drbd, is a layered block device, and write
barriers are not passed down through these layers, so with LVM in the
stack barriers are effectively disabled anyway.
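A quick way to see whether barriers actually survive the storage stack (assuming a CentOS-era kernel; the exact message wording varies by filesystem and kernel version):

```shell
# If the lower layer (e.g. an LVM logical volume) rejects barriers,
# the filesystem typically logs it at mount time, e.g. XFS prints
# something like:
#   Disabling barriers, not supported by the underlying device
dmesg | grep -i barrier
```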
If you turn the write caches off on the HDs then there is no problem,
but HDs aren't designed to perform to spec with the write cache
disabled; they expect important data to be written with FUA (forced
unit access), so performance will be terrible.
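The drive's write cache can be inspected and toggled with hdparm on ATA/SATA disks (SCSI disks use sdparm instead; the device name below is just an example):

```shell
# Show whether the volatile write cache is enabled on /dev/sda
hdparm -W /dev/sda

# Disable the drive's write cache (safe without barriers, but slow,
# for the FUA-related reasons noted above)
hdparm -W0 /dev/sda
```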
>>> I re-read XFS's FAQ on these issues; it seems to me that we have
>>> to set up two machines in the lab, one purely software RAID
>>> driven, and one with a JBOD-configured hardware RAID controller,
>>> and then benchmark and test the setups.
>> JBOD? You plan to use software raid with that? Why?!
> Mainly due to better manageability and monitoring. Honestly, all the
> proprietary tools are not the best.