[CentOS] CentOS 7 install on one RAID 1 [not-so-SOLVED]

Wed Jan 25 17:31:07 UTC 2017
Gordon Messmer <gordon.messmer at gmail.com>

You didn't answer all of the questions I asked, but I'll answer as best 
I can with the information you gave.

On 01/25/2017 04:47 AM, mark wrote:
>
> Made an md RAID 1 on the raw disks - /dev/sda /dev/sdb. No partitions, 
> nothing.

OK, so right off the bat we have to note that this is not a 
configuration supported by Red Hat.  It is possible to set such a system 
up, but it may require advanced knowledge of grub2 and mdadm.  Because 
the vendor doesn't support this configuration, and because, as you've 
seen, the tools don't always find the information they need, you'll 
forever be responsible for fixing any boot problems that come up.  Do 
you really want that?

I sympathize.  I wanted to use full-disk RAID, too.  I thought that 
replacing disks would be much easier that way, since there'd be just one 
md RAID device to manage.  That was an attractive option after working 
with hardware RAID controllers that were easy to manage but expensive, 
unreliable, and performed very poorly under some workloads.  But after 
a thorough review, I found that my earlier suggestion, partitioned RAID 
with the kickstart and RAID management script I provided, was the least 
work for me in the long term.

> However, when I bring it up, fdisk shows an MBR with no partitions. I 
> can, however, mount /dev/md127p3 as /mnt/sysimage, and all is there.

I assume you're booting with BIOS, then?
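
If you want to confirm rather than assume, here's a generic check from 
the rescue shell (nothing in it is specific to your setup):

    # /sys/firmware/efi exists only when the kernel was booted via UEFI
    [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS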

One explanation for fdisk showing nothing is that the array is labeled 
with GPT instead of MBR (I think).  To boot a BIOS system from a GPT 
disk, you'd need a bios_boot partition at the beginning of the RAID 
volume, so that grub2 has room to embed its core image without stomping 
on the first filesystem partition.
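
You can check which label the installer wrote with parted; the device 
name below assumes the array still assembles as /dev/md127, as in your 
report:

    # Look for "Partition Table: gpt" or "Partition Table: msdos"
    parted -s /dev/md127 print

And purely as a sketch of what a fresh GPT layout would need -- note 
that mklabel destroys the existing table, so don't run this against a 
disk that has data on it:

    parted -s /dev/md127 mklabel gpt
    parted -s /dev/md127 mkpart biosboot 1MiB 2MiB
    parted -s /dev/md127 set 1 bios_grub on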

The other explanation that comes to mind is that you're using an mdadm 
metadata version stored at the beginning of the drive instead of the 
end.  Do you know what metadata version you used?
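
That matters because metadata versions 1.1 and 1.2 sit at the start of 
each disk and shift the array's data, including any partition table you 
wrote to the md device, away from sector 0, where the BIOS and fdisk 
expect it.  You can read the version straight off a member disk:

    # 0.90 and 1.0 live at the end of the device; 1.1 sits at the very
    # start, and 1.2 is 4K from the start
    mdadm --examine /dev/sda | grep -i version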

> Did I need to make a single partition, on each drive, and then make 
> the RAID 1 out of *those*? I don't think I need to have /boot not on a 
> RAID.

That's one option, but it still won't be a supported configuration.
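
For contrast, here's a rough sketch of the conventional partitioned 
layout; device names and sizes are illustrative, not taken from your 
setup.  Partition both disks identically, then mirror the matching 
partitions:

    # Identical GPT layout on both disks: BIOS boot, /boot, and the rest
    for d in /dev/sda /dev/sdb; do
        parted -s "$d" mklabel gpt
        parted -s "$d" mkpart biosboot 1MiB 2MiB
        parted -s "$d" set 1 bios_grub on
        parted -s "$d" mkpart boot 2MiB 1GiB
        parted -s "$d" mkpart root 1GiB 100%
    done

    # One md device per mountpoint; metadata 1.0 keeps the superblock at
    # the end of each partition, which is the friendliest choice for /boot
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
        /dev/sda2 /dev/sdb2
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

With that layout the BIOS sees an ordinary partition table on each disk, 
and replacing a dead drive is a matter of partitioning the new disk the 
same way and re-adding its partitions with mdadm --add.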