[CentOS] Mount removed raid disk back on same machine as original raid

Tue Mar 14 13:50:33 UTC 2023
Bowie Bailey <Bowie_Bailey at BUC.com>

On 3/8/2023 4:08 PM, Chris Adams wrote:
> Once upon a time, Bowie Bailey <Bowie_Bailey at BUC.com> said:
>> What is going to happen when I try to mount a drive that the system
>> thinks is part of an existing array?
> I don't _think_ anything special will happen - md RAID doesn't go
> actively looking for drives like that AFAIK.  And RAID 1 means you
> should be able to ignore RAID and just access the contents directly.
>
> However, the contents could still be a problem.  If LVM was in use on
> it, that will be a problem, because LVM does auto-probe and will react
> when it sees the same UUID (IIRC LVM will only block access to the newly
> seen drive).  I don't think any filesystems care (I know I've mounted
> snapshots of ext4 and IIRC xfs on the same system, haven't touched
> btrfs).

I'm not using LVM on this drive, so that won't be an issue.

My concern is that since the RAID metadata on the drive identifies it as 
a member of the active array, the system will try to add it back to that 
array (probably as a spare) when the drive comes online.  I don't think 
that would be destructive, but I would have to figure out how to 
separate it out again if that happens.  I'm hoping it won't be an issue 
since there are no missing drives in the existing array.
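
If it does get pulled back in, my plan (untested, and assuming the drive 
still comes up as sdc1) would be to detach it again with something like:

    # See whether the kernel grabbed the drive when it appeared
    cat /proc/mdstat
    mdadm --examine /dev/sdc1

    # If it was added to md127 as a spare, pull it back out
    mdadm /dev/md127 --remove /dev/sdc1
    # (an active member would need "mdadm /dev/md127 --fail /dev/sdc1"
    # first, but a spare can be removed directly)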

I know I will have to bring the drive online as a degraded array, but 
I've done that on other systems before.  The only question is whether I 
can simply assemble it under a different name.  I assume I can just do 
"mdadm -A --run /dev/md0 /dev/sdc1" (possibly with "--force" since the 
array is degraded) even though sdc1 was originally part of the existing 
md127 array?
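
For reference, the rough sequence I'd plan to run (untested; device 
names assumed, and /mnt/olddisk is just a placeholder mount point):

    # Assemble the single former member as a new, degraded array
    mdadm -A --run --force /dev/md0 /dev/sdc1

    # If the shared UUID causes confusion with md127, my understanding
    # is that --update=uuid writes a fresh random UUID at assembly time:
    # mdadm -A --run --force --update=uuid /dev/md0 /dev/sdc1

    cat /proc/mdstat

    # Mount read-only until I'm sure everything is sane
    mount -o ro /dev/md0 /mnt/olddisk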

The system in question is in a data center, so I'm trying to get ahead 
of any potential problems now rather than deal with unexpected issues 
while I'm on site.

-- 
Bowie