On 2014-01-21, Steve Brooks <steveb@mcs.st-and.ac.uk> wrote:
mkfs.xfs -d su=512k,sw=14 /dev/sda
where "512k" is the Stripe-unit size of the single logical device built on the raid controller. "14" is from the total number of drives minus two (raid 6 redundancy).
The usual advice on the XFS list is to use the defaults where possible. But you might want to ask there to see if they have any specific advice.
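If you want to double-check what mkfs actually picked up, xfs_info on the mount point shows the stripe geometry. The numbers in the comment below are purely illustrative (assuming a 4k block size), not from your array:

xfs_info /raidstor
# in the "data" line, sunit/swidth are reported in filesystem blocks, e.g.:
#   sunit=128 swidth=1792 blks   (128 x 4k = 512k; 1792 = 14 x 128, i.e. su x sw)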
I mounted the filesystem with the default options, assuming they would be sensible, but I now believe I should have specified the "inode64" mount option to avoid all the inodes being stuck in the first TB.
The filesystem, however, is at 87% and does not seem to have had any issues or problems:
df -h | grep raid
/dev/sda 51T 45T 6.7T 87% /raidstor
Wow, impressive! I know of a much smaller fs that got bitten by this issue. What probably happened is that, since it was a new fs, the entire first 1 TB could be reserved for inodes.
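If you're curious whether anything ever lands outside the 32-bit range, inode numbers tell the story: without inode64 they all stay below 2^32. A quick spot check (the path and depth here are just an example):

find /raidstor -maxdepth 2 -printf '%i\t%p\n' | sort -n | tail
# any inode number above 4294967296 (2^32) must have come from a 64-bit allocation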
Another question: could I now safely remount with the "inode64" option, or will this cause problems in the future? I read the passage below in the XFS FAQ but wondered if it had been fixed (backported?) in el6.4.
I have mounted, with inode64, a large XFS fs that previously didn't use it, and it went fine. (I did not attempt to roll back.) You *must* umount and remount for the option to take effect. I do not know when the inode64 option made it into CentOS, but it is there now.
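Concretely, the remount would look something like this, using the device and mount point from your df output; adding the option to /etc/fstab as well is just my assumption about how you'd want it to persist:

umount /raidstor
mount -o inode64 /dev/sda /raidstor
# and in /etc/fstab, so it survives a reboot:
# /dev/sda   /raidstor   xfs   defaults,inode64   0 0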
I also noted that "xfs_check" ran out of memory, and after some reading found that "xfs_repair -n -vv" is recommended instead as it uses far less memory. One remark: why is "xfs_check" there at all?
The XFS team is working on deprecating it. But on a 51 TB filesystem xfs_repair will still use a lot of memory. Using -P can help, but it'll still use quite a bit (depending on the extent of any damage, how many inodes there are, and probably a bunch of other factors I don't know).
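For the record, the read-only check would look like this; the filesystem has to be unmounted first, and -P is the memory-saving knob I mentioned:

umount /raidstor
xfs_repair -n -vv -P /dev/sda
# -n  = no modify, check only
# -vv = extra verbose
# -P  = disable inode/directory prefetching, trading speed for lower memory use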
--keith