I have a CentOS 7 system with an mdraid array (RAID 1). I removed a drive from it a couple of months ago and replaced it with a new drive. Now I want to recover some information from that old drive.
I know how to mount the drive, and have done so on another system to confirm that the information I want is there.
My question is this:
What is going to happen when I try to mount a drive that the system thinks is part of an existing array?
To put it another way: I had two drives in md127. I removed one (call it drive1) and replaced it with a new drive. Some files were accidentally deleted from md127, so now I want to connect drive1 back to the same machine, mount it as a separate array from md127, and copy the files from drive1 back to md127. What do I need to do to make that happen?
Thanks,
Bowie
Once upon a time, Bowie Bailey <Bowie_Bailey@BUC.com> said:
> What is going to happen when I try to mount a drive that the system thinks is part of an existing array?
I don't _think_ anything special will happen - md RAID doesn't go actively looking for drives like that AFAIK. And RAID 1 means you should be able to ignore RAID and just access the contents directly.
However, the contents could still be a problem. If LVM was in use on it, that will be a problem, because LVM does auto-probe and will react when it sees the same UUID (IIRC LVM will only block access to the newly seen drive). I don't think any filesystems care (I know I've mounted snapshots of ext4 and IIRC xfs on the same system, haven't touched btrfs).
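For example, something like the following (untested; /dev/sdc1 and the VG name are just placeholders) would show what is actually on the old drive and let you rename a colliding volume group before anything tries to activate it:

    # look at the old drive without activating anything on it
    lsblk -f /dev/sdc
    mdadm --examine /dev/sdc1

    # if an LVM PV shows up, check for duplicate PV/VG UUIDs
    pvs -o pv_name,pv_uuid,vg_name

    # rename the VG on the old drive and regenerate its UUIDs so it no
    # longer collides with the live copy (assumes the PV sits directly
    # on the partition)
    vgimportclone --basevgname oldvg /dev/sdc1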
On 3/8/2023 4:08 PM, Chris Adams wrote:
> Once upon a time, Bowie Bailey <Bowie_Bailey@BUC.com> said:
>> What is going to happen when I try to mount a drive that the system thinks is part of an existing array?
> I don't _think_ anything special will happen - md RAID doesn't go actively looking for drives like that AFAIK. And RAID 1 means you should be able to ignore RAID and just access the contents directly.
> However, the contents could still be a problem. If LVM was in use on it, that will be a problem, because LVM does auto-probe and will react when it sees the same UUID (IIRC LVM will only block access to the newly seen drive). I don't think any filesystems care (I know I've mounted snapshots of ext4 and IIRC xfs on the same system, haven't touched btrfs).
I'm not using LVM on this drive, so that won't be an issue.
My concern is that, since the RAID metadata on the drive identifies it as part of the active array, the system will try to add it to the array (probably as a spare) when it comes online. I don't think that would be destructive, but I would have to figure out how to separate it back out if that happens. I'm hoping it won't be an issue since there are no missing drives in the existing array.
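If it does get grabbed, my rough plan for backing it out would be something like this (device names are just guesses for how the disk will show up):

    cat /proc/mdstat                     # see whether md picked up the old drive at all
    mdadm --detail /dev/md127            # check if sdc1 was pulled in as a spare
    mdadm /dev/md127 --remove /dev/sdc1  # if so, drop the spare back out of the array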
I know I will have to bring the drive up as a degraded array, but I've done that on other systems before. The only question is whether I can simply assemble it under a different name. I assume I can just do "mdadm -A --run /dev/md0 /dev/sdc1" (possibly with "--force" because of the degraded array), even though sdc1 was originally part of the existing md127 array?
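Roughly what I'm planning to run once the drive is connected (untested; /dev/sdc and the mount point are placeholders):

    mdadm --examine /dev/sdc1          # confirm the superblock is the old md127 half I expect
    mdadm -A --run /dev/md0 /dev/sdc1  # assemble the single half as its own degraded array
    mount -o ro /dev/md0 /mnt/old      # mount read-only and copy the deleted files back over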
The system in question is in a data center, so I'm trying to get ahead of any possible problems to avoid having to deal with unexpected issues while I'm there.