Thus Christopher Chan spake:
Ian Forde wrote:
On Dec 7, 2009, at 10:30 AM, Florin Andrei <florin@andrei.myip.org> wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyway, and B) I've heard far too many stories about catastrophic data loss and day-long fsck sessions after power failures [1] or what have you.
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
The thing is that none of you ever stated how XFS was used: with hardware RAID, software RAID, LVM, or a RAM disk...
Speaking for myself: on Linux systems, on top of LVM on top of md; on IRIX, as it was intended.
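(To be concrete, that kind of stack can be inspected with a few standard commands; the device names and mount point below are only examples:

    cat /proc/mdstat                    # md (software RAID) arrays and their member disks
    pvs -o pv_name,vg_name              # which block devices back each LVM volume group
    lvs -o lv_name,vg_name,devices      # logical volumes and the devices they sit on
    xfs_info /srv/data                  # geometry of the XFS filesystem mounted there

Including that sort of output with a data-loss report would make it much easier to tell whether the filesystem or the layers underneath it were at fault.)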
Anyway, data loss issues today should come down to improper setup, such as disabling write barriers on disks that have their write cache enabled.
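A rough sketch of what that means in practice; the device names, mount point, and fstab line are only examples, and defaults vary by kernel and distribution:

    hdparm -W /dev/sda        # query the drive's volatile write-cache setting (1 = enabled)
    grep xfs /proc/mounts     # 'nobarrier' in the options means barriers were explicitly turned off
    dmesg | grep -i barrier   # older kernels log e.g. "Disabling barriers, not supported by the underlying device"

    # Either keep write barriers enabled (the XFS default), e.g. a plain fstab entry:
    /dev/vg0/data   /data   xfs   defaults   0 0
    # ... or, if barriers have to be off, disable the drive's volatile write cache instead:
    hdparm -W0 /dev/sda

Running with the write cache on and barriers off is exactly the combination that tends to lose data on power failure.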
That's exactly the point; maybe it is due to XFS coming from an enterprise-class OS (IRIX) to the open-source community. On IRIX, there was a distinct hardware platform on which IRIX, and thus XFS, ran. When XFS was ported to GNU/Linux, it not only had to deal with different LVM and RAID devices/mechanisms, but also with some hassles when being deployed in 32-bit environments, for which it just wasn't designed.
So, to sum it up: IMHO, in most cases the data loss was surely not XFS's fault, but rather due to errors made during deployment (in GNU/Linux environments), be it 32-bit issues, (missing) write barriers, or whatever.
It'd be interesting to see some statistics on XFS issues on IRIX vs. GNU/Linux.
Regards,
Timo