On 11/9/2015 11:34 AM, Valeri Galtsev wrote:
> I wonder how the filesystem behaves when almost every file has some 400
> hard links to it. (thinking in terms of a year's worth of daily backups).

XFS handles this fine. I have a BackupPC storage pool with backups of 27
servers going back a year... now, I just have 30 days of incrementals and
12 months of fulls, but in BackupPC's implementation the distinction
between incremental and full is quite blurred, as both are fully deduped
across the whole pool via hard links.

* Pool is 5510.40GB comprising 9993293 files and 4369 directories (as of
  11/9 02:08),
* Pool hashing gives 3452 repeated files with longest chain 54,
* Nightly cleanup removed 737 files of size 1.64GB (around 11/9 02:08),
* Pool file system was recently at 35% (11/9 11:44), today's max is 35%
  (11/9 01:00) and yesterday's max was 36%.

There are 27 hosts that have been backed up, for a total of:

* 441 full backups of total size 71125.43GB (prior to pooling and
  compression),
* 623 incr backups of total size 20775.88GB (prior to pooling and
  compression).

So roughly 92TB of nominal backups take 5.5TB of actual space, about a
16:1 reduction from pooling and compression combined.

--
john r pierce, recycling bits in santa cruz
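
For anyone curious how that pooling works, here is a minimal sketch of the
idea in Python. This is not BackupPC's actual code (the real thing hashes
with MD5, compresses pool files, and keeps collision chains, which is what
the "longest chain 54" stat above refers to); the pool path and function
name here are made up for illustration. The core trick is just: hash the
file contents, and if identical content is already in the pool, hard-link
to it instead of storing a second copy.

    #!/usr/bin/env python3
    # Sketch of content-hash pooling via hard links, the same idea
    # BackupPC uses, minus compression and collision-chain handling.
    # POOL and store_into_pool() are illustrative, not BackupPC's API.
    import hashlib
    import os
    import shutil

    POOL = "/var/lib/pool"  # hypothetical pool directory

    def store_into_pool(src_path, backup_path):
        """Store src_path at backup_path, hard-linking to an existing
        pool file when the same content has been seen before."""
        h = hashlib.sha1()
        with open(src_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        pool_file = os.path.join(POOL, h.hexdigest())

        if not os.path.exists(pool_file):
            # First copy of this content: put it in the pool.
            shutil.copy2(src_path, pool_file)
        # Every backup of identical content is just another hard link,
        # so N copies cost one file's worth of disk plus N directory
        # entries. os.stat(pool_file).st_nlink shows the link count.
        os.link(pool_file, backup_path)

That link count on each pool file is exactly the number Valeri was asking
about: each extra backup of an unchanged file costs a directory entry and
an increment of the inode's link counter, not another copy of the data.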