On Fri, Feb 27, 2015 at 9:00 PM, Marko Vojinovic <vvmarko at gmail.com> wrote:
> And this is why I don't like LVM to begin with. If one of the drives
> dies, you're screwed not only for the data on that drive, but even for
> data on remaining healthy drives.

It has its uses, just like RAID0 has uses. But yes, as the number of
drives in the pool increases, the risk of catastrophic failure
increases. So you have to bet on consistent backups and be OK with any
intervening data loss. If not, use RAID1+ or a distributed-replication
cluster like GlusterFS or Ceph.

> Hardware fails, and storing data without a backup is just simply
> a disaster waiting to happen.

I agree. I even get a wee bit aggressive and say: if you don't have
backups, the data is by (your own) definition not important.

Anyway, changing the underlying storage as little as possible gives the
best chance of success. The linux-raid@ list is full of raid5/6
implosions caused by people panicking, reading a bunch of stuff without
identifying their actual problem, and then typing a bunch of commands
until they end up with user-induced data loss.

In the case of this thread, I'd say the best chance of success is not to
remove or replace the dead PV, but to do a partial activation:

# vgchange -a y --activationmode partial

Then for ext4 it's a scrape operation with debugfs -c. And for XFS it
looks like some amount of data may be recoverable with just a read-only
mount. I didn't try any scrape operation; it's too tedious to test.

--
Chris Murphy
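
For the archives, a sketch of what that recovery path looks like end to
end. The VG/LV names and the /mnt/rescue mount point are placeholders,
and obviously don't try this on the only copy of anything you care
about -- work on the degraded volume read-only and copy data *off* it:

```shell
# Bring up the VG even though a PV is missing. LVs with extents on the
# dead PV come up with holes in them; everything else activates normally.
vgchange -ay --activationmode partial myvg

# ext4: scrape files out with debugfs in catastrophic mode. -c tells
# debugfs not to trust the inode/block bitmaps on the damaged fs, and
# rdump recursively copies a directory tree out to a healthy disk.
debugfs -c -R "rdump /home /mnt/rescue" /dev/myvg/ext4lv

# XFS: mount read-only and skip log replay (replay would need to write),
# then copy off whatever is still readable.
mount -o ro,norecovery /dev/myvg/xfslv /mnt/rescue-xfs
cp -a /mnt/rescue-xfs/. /mnt/rescue/
```

Whether any given file survives depends entirely on whether its extents
landed on the dead PV, which is the gamble you took with linear LVM in
the first place.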