Mark Caudill wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me: on Linux systems, on top of LVM on top of md. On IRIX, as it was intended.
That is a disastrous combination for XFS, even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantees. XFS has a long history of losing data unless it was used with a hardware RAID card that has a battery-backed (BBU) cache. That changed when XFS got write barrier support.
However, anything on LVM, be it ext3, ext4 or XFS, will not be able to use barriers even if the filesystem supports them, because device-mapper does not support barriers. Therefore, if you use LVM, it had better be on a hardware RAID array where the card has a BBU cache.
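You can actually watch the barrier being refused: mount ext3 with -o barrier=1 on a logical volume and the journal layer falls back to unsafe writes, logging something like this in dmesg (exact wording varies by kernel version):

    JBD: barrier-based sync failed on dm-0 - disabling barriers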
Wait, just to be clear: are you saying that all use of LVM is a bad idea unless it is on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
Yes. The Linux kernel has long been criticized for a fake fsync/fdatasync implementation, since at least 2001. Unless you had your hard drives' write caches turned off, you were at risk of losing data no matter what you used: ext2, ext3, reiserfs, xfs, jfs, whether on lvm or not.
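To make concrete what is at stake, here is a minimal C sketch of the usual write-then-fsync durability pattern (the filename is made up). fsync() returning 0 is supposed to mean the data is on stable storage; with the drive's write cache enabled and no barrier/flush support in the stack, it only meant the drive had received the data:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("important.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        const char buf[] = "committed record\n";
        if (write(fd, buf, sizeof buf - 1) != (ssize_t)(sizeof buf - 1)) {
            perror("write");
            return 1;
        }

        /* fsync() succeeding is supposed to mean the data is on stable
         * storage.  With the drive's write cache on and no barriers in
         * the stack, it only meant the drive had received the data. */
        if (fsync(fd) < 0) {
            perror("fsync");
            return 1;
        }

        close(fd);
        return 0;
    }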
Write barriers were introduced to give data guarantees with hard drives that have their write cache enabled. Unfortunately, not everything has been given barrier support. LVM and JFS do not have write barrier support.
So the choice is either: use LVM but turn off the write caches on the disks (painfully slow), or do not use LVM, use a filesystem with write barrier support, and enable the write caches on the disks.
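Turning the caches off is normally done with hdparm -W0. If you want to do it programmatically, here is a rough sketch using the legacy HDIO_SET_WCACHE ioctl from <linux/hdreg.h>. Note this is the old IDE-layer interface; SATA disks behind libata may not honor it, and the device path is only an example:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/hdreg.h>

    int main(int argc, char **argv)
    {
        /* device path is only an example; pass the real one as argv[1] */
        const char *dev = (argc > 1) ? argv[1] : "/dev/hda";
        int fd = open(dev, O_RDONLY | O_NONBLOCK);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* third argument: 0 = disable the drive's write cache, 1 = enable */
        if (ioctl(fd, HDIO_SET_WCACHE, 0) < 0)
            perror("HDIO_SET_WCACHE");
        else
            printf("write cache disabled on %s\n", dev);

        close(fd);
        return 0;
    }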
Hardware RAID cards with BBU caches were introduced to provide both speed and data guarantees. The other option would be to use software RAID, disable write caching, use a BBU NVRAM stick, and use ext3 with data=journal.
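For reference, data=journal is just a mount option; an example fstab line (the device and mount point are made up):

    /dev/md0   /srv/data   ext3   defaults,data=journal   1 2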