On Sun, Mar 1, 2015 at 8:07 PM, Khemara Lin <lin.kh at wicam.com.kh> wrote:
> Dear Chris, James, Valeri and all,
>
> Sorry to have not responded; I'm still struggling with the recovery,
> with no success.
>
> I've been trying to set up a new system with the exact same scenario
> (4 2TB hard drives, removing the 3rd one afterwards). I still cannot
> recover.

Well, it's effectively a raid0. While it's not block-level striping, it's
a linear allocation; given the way ext4 and XFS write, you're going to get
file extents and fs metadata strewn across all four drives. As soon as any
one drive is removed, the whole thing is sufficiently damaged that it
can't recover without a lot of work.

Imagine a (really bad, physics-wise) single-drive analogy: magically punch
a hole through a drive such that it'll still spin. The fs on that drive is
going to have all sorts of problems because of the hole, even if it can
read 3/4 of the drive.

> We did have a backup system, but it went bad a while ago and we did not
> have a replacement in time before this happened.
>
> From all of your responses, it seems recovery is almost impossible. I'm
> now trying to look at the hardware side and get the damaged hard drive
> fixed.

About the best-case scenario in such a situation is to do literally
nothing with the LVM setup and send that PV off for block-level data
recovery (you didn't say how it failed, but I'm assuming it's beyond the
ability to fix locally). Then, once the recovered replacement PV is back
in the setup, things will just work again.

*shrug* LVM linear isn't designed to be fail-safe in the face of a single
device failure.

-- 
Chris Murphy
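
To make the "why one missing drive breaks everything" point concrete: here's a
toy Python simulation (not real LVM, just an illustration under simplified
assumptions) of a linear LV as the concatenation of four small "PVs". A file
whose extents span two drives reads fine while all PVs are present, but
becomes unreadable once one of the PVs it touches is removed:

```python
# Toy model of an LVM *linear* LV: the logical address space is simply
# the PVs concatenated end-to-end, so one file's extents can land on
# several physical drives. Sizes here are tiny and purely illustrative
# (a real LVM extent defaults to 4 MiB).

PV_SIZE = 8  # bytes per "drive" in this toy model

def read_lv(pvs, offset, length):
    """Read a byte range from the linear LV.

    A removed PV is represented as None; any logical byte that maps
    onto a missing PV makes the read fail, just as the fs would see
    an I/O error for extents on the removed drive.
    """
    data = bytearray()
    for i in range(offset, offset + length):
        pv_index, pv_off = divmod(i, PV_SIZE)
        if pvs[pv_index] is None:
            raise IOError(f"PV {pv_index} missing: logical byte {i} lost")
        data.append(pvs[pv_index][pv_off])
    return bytes(data)

if __name__ == "__main__":
    # four 8-byte "2TB drives"
    pvs = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC", b"DDDDDDDD"]

    # a "file" at logical offset 12, length 12: spans PV 1 and PV 2
    print(read_lv(pvs, 12, 12))  # b'BBBBCCCCCCCC' -- fine with all PVs

    pvs[2] = None  # "remove the 3rd drive"
    try:
        read_lv(pvs, 12, 12)
    except IOError as e:
        print("unreadable:", e)
```

Note that data living entirely on the surviving PVs still reads back, which
is why a partial scrape is sometimes possible, but any file or piece of fs
metadata with even one extent on the missing drive is gone, and with ext4/XFS
allocation behavior that tends to be a large fraction of the filesystem.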