----- Original Message -----
| On 5/28/2014 11:13 AM, m.roth@5-cent.us wrote:
| > We're looking at getting an HBR (that's a technical term, honkin' big
| > RAID). What I'm considering is, rather than chopping it up into 14TB
| > or 16TB filesystems, using xfs for really big filesystems. The
| > question that's come up is: what's the state of xfs on CentOS 6? I've
| > seen a number of older threads describing problems with it - has that
| > mostly been resolved? How does it work if we have some *huge* files,
| > and lots and lots of smaller files?
|
| I've had good luck with XFS file systems of 80TB or so for nearline
| archival storage. That's 36 3TB SAS drives organized as 3 x 11 raid6+0
| with 3 hot spares.
|
| I've found two minor(?) gotchas so far with XFS:
|
| 1) NFS doesn't like 64-bit inodes. You can A) only NFS-share the root
| of the giant XFS file system (this *is* the traditional way, but people
| from a Windows background seem to like to micromanage their shares), or
| B) use UUID exports (not compatible with all NFS clients in my
| experience), or C) specify fsid=NNN for an arbitrary unique NNN for
| each export on a given server. We opted for C.
|
| 2) I just discovered the other night that KVM doesn't like booting disk
| image files stored on XFS on a 4K-sector device (in my case, this was
| an SSD). The solution was to specify cache=writeback, which somehow
| bypasses O_DIRECT. There are probably other fixes, but that works well
| enough.
|
| Also, there was a bad kernel in 6.3 or so that had a serious bug with
| XFS. The fix came out 2-3 weeks after 6.3 was released, but I ran into
| internal operations people who don't update production systems: if you
| say you tested something on 6.3, then they use 6.3 forever. They
| pathologically skip my installation step 2, "yum -y update".
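For anyone wanting to try option C above, a minimal /etc/exports sketch might look like this (the paths and network here are made up for illustration; the only real requirement is that each fsid=NNN value is a small integer unique among the exports on that server):

```
# /etc/exports -- hypothetical paths and client network; each export on
# the same 64-bit-inode XFS volume gets its own unique fsid=NNN
/bigxfs/projects  192.168.0.0/24(rw,fsid=1)
/bigxfs/scratch   192.168.0.0/24(rw,fsid=2)
/bigxfs/archive   192.168.0.0/24(ro,fsid=3)
```

After editing the file, `exportfs -ra` re-reads it and refreshes the export table without restarting the NFS server.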
Would this be the xfs_asynd chewing-up-CPU-time bug? If that's the one, it was just spinning and didn't actually cause "problems", except that the load average went berserk. In any case, there have been bugs with *every* file system I've used on large volumes, but XFS remains my go-to file system for all large volumes, as it's the best performance/stability match out there.
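On the KVM gotcha in point 2 above: the cache=writeback workaround can be set either on the qemu-kvm command line or in the libvirt domain XML. A sketch, with made-up disk paths and device names:

```
# qemu-kvm command-line form (hypothetical image path):
qemu-kvm -m 2048 \
  -drive file=/var/lib/libvirt/images/guest.img,format=raw,cache=writeback

# libvirt domain XML equivalent, inside <devices>:
#   <disk type='file' device='disk'>
#     <driver name='qemu' type='raw' cache='writeback'/>
#     <source file='/var/lib/libvirt/images/guest.img'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>
```

The reason this "bypasses O_DIRECT" is that cache=none is the mode that opens the image with O_DIRECT; writeback (and writethrough) go through the host page cache instead, so the O_DIRECT alignment problem on 4K-sector devices never comes up.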