And then Btrfs (no LVM).

mkfs.btrfs -d single /dev/sd[bcde]
mount /dev/sdb /mnt/bigbtr
cp -a /usr /mnt/bigbtr

Unmount. Power off. Kill the 3rd of 4 drives. Power on.

mount -o degraded,ro /dev/sdb /mnt/bigbtr   ## degraded,ro is required or the mount fails
cp -a /mnt/bigbtr/usr/ /mnt/btrfs           ## copy to a different volume

No dmesg errors. A bunch of I/O errors appeared only when it tried to copy data that was on the 3rd drive, but the copy continued.

# du -sh /mnt/btrfs/usr
2.5G    usr

Exactly 1GB was on the missing drive, so I recovered everything that wasn't on that drive.

One gotcha that applies to all three file systems, and that I'm not testing: in-use drive failure. I simulate drive failure by first cleanly unmounting and powering off, which is the ideal case. How the file system and anything underneath it (LVM, and possibly RAID) handles drive failure while in use is a huge factor.

Chris Murphy
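For anyone who wants to repeat this without sacrificing real disks, the same sequence can be sketched with loop devices. This is my own sketch, not the exact procedure above: the file paths, the 2G sizes, and the function name are assumptions, and it needs root plus btrfs-progs.

```shell
# Sketch: reproduce the degraded-mount test with loop devices instead of
# real drives. Paths and sizes are illustrative; run only on a scratch box.
simulate_third_drive_failure() {
    # Back each "drive" with a sparse file (stand-ins for sd[bcde]).
    for i in 1 2 3 4; do
        truncate -s 2G "/tmp/disk$i.img"
    done
    dev1=$(losetup --find --show /tmp/disk1.img)
    dev2=$(losetup --find --show /tmp/disk2.img)
    dev3=$(losetup --find --show /tmp/disk3.img)
    dev4=$(losetup --find --show /tmp/disk4.img)

    # Same layout as the test: single (non-raid) data across four devices.
    mkfs.btrfs -d single "$dev1" "$dev2" "$dev3" "$dev4"
    mkdir -p /mnt/bigbtr
    mount "$dev1" /mnt/bigbtr
    cp -a /usr /mnt/bigbtr

    # Clean unmount, then "kill" the 3rd drive by detaching its loop device.
    umount /mnt/bigbtr
    losetup -d "$dev3"

    # degraded,ro is required or the mount fails outright.
    mount -o degraded,ro "$dev1" /mnt/bigbtr
}
```

Detaching the loop device is a cleaner failure than a drive dying in use, which matches the caveat below: this only exercises the clean-shutdown case.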