[CentOS] CentOS 7 grub.cfg missing on new install

Thu Dec 11 17:45:52 UTC 2014
Gordon Messmer <gordon.messmer at gmail.com>

On 12/10/2014 10:13 AM, Jeff Boyce wrote:
> The short story is that I got my new install completed with the 
> partitioning I wanted and using software raid, but after a reboot I 
> ended up with a grub prompt, and do not appear to have a grub.cfg file.
...
> I initially created the sda[1,2] and sdb[1,2] partitions via GParted 
> leaving the remaining space unpartitioned.

I'm pretty sure that's not necessary.  I've been able to simply change 
the device type to RAID in the installer and get mirrored partitions.  
If you do your setup entirely in Anaconda, your partitions should all 
end up fine.
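
If you ever want to script it, the same layout can be expressed in a
kickstart file.  This is only a hypothetical fragment -- the partition
names, sizes, and mount points are examples, not your actual layout:

```
# Example kickstart RAID1 layout (illustrative names/sizes only)
part raid.11 --ondisk=sda --size=500
part raid.12 --ondisk=sdb --size=500
raid /boot --level=1 --device=md0 raid.11 raid.12
```

Anaconda writes an equivalent record of whatever you build interactively
to /root/anaconda-ks.cfg, which is a good way to check what it did.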

> At this point I needed to copy my /boot/efi and /boot partitions from 
> sda[1,2] to sdb[1,2] so that the system would boot from either drive, 
> so I issued the following sgdisk commands:
>
> root#  sgdisk -R /dev/sdb1 /dev/sda1
> root#  sgdisk -R /dev/sdb2 /dev/sda2
> root#  sgdisk -G /dev/sdb1
> root#  sgdisk -G /dev/sdb2

sgdisk manipulates GPT, so you run it on the disk, not on individual 
partitions.  What you've done simply scrambled information in sdb1 and sdb2.

The correct way to run it would be:
# sgdisk -R /dev/sdb /dev/sda
# sgdisk -G /dev/sdb

However, you would only do that if sdb were completely unpartitioned.  As 
you had already made at least one partition on sdb a member of a RAID1 
set, you should not do either of those things.
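
For reference, -R replicates the whole table and -G randomizes the disk
and partition GUIDs afterward so the two disks don't collide.  You can
sanity-check the result read-only; a sketch, assuming the device names
above:

```shell
# Read-only checks of a replicated GPT (hypothetical device names).
# Wrapped in a function so nothing runs until invoked deliberately.
verify_gpt_copy() {
    sgdisk -p /dev/sda   # print the source table
    sgdisk -p /dev/sdb   # should match, aside from GUIDs changed by -G
    sgdisk -v /dev/sdb   # verify the copied table's integrity
}
```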

The entire premise of what you're attempting is flawed.  Making a 
partition into a RAID member is destructive.  mdadm writes its metadata 
inside of the member partition.  The only safe way to convert a 
filesystem is to back up its contents, create the RAID set, format the 
RAID volume, and restore the backup.  Especially with UEFI, there are a 
variety of ways that can fail.  Just set up the RAID sets in the installer.
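
If you did want to do the conversion by hand on a non-UEFI /boot, it
would look something like the sketch below.  All device names and paths
are hypothetical examples, and it is destructive -- it is wrapped in a
function so nothing runs until you invoke it deliberately:

```shell
# Sketch of back up -> create RAID -> format -> restore,
# assuming /dev/sda2 and /dev/sdb2 are the intended members.
convert_boot_to_raid1() {
    # 1. Back up the existing filesystem contents, then unmount it.
    tar -C /boot -cpf /root/boot-backup.tar .
    umount /boot
    # 2. Create the RAID1 set.  mdadm writes its metadata inside the
    #    members, which is exactly why this step destroys the old
    #    filesystem; 1.0 metadata sits at the end of the partition.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --metadata=1.0 /dev/sda2 /dev/sdb2
    # 3. Format the RAID volume, not the member partitions.
    mkfs.xfs /dev/md0
    # 4. Restore the backup onto the new filesystem.
    mount /dev/md0 /boot
    tar -C /boot -xpf /root/boot-backup.tar
}
```

Even then, fstab, the bootloader, and (with UEFI) the ESP all have to be
updated to match, which is why the installer is the safer path.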

> I then installed GRUB2 on /dev/sdb1 using the following command:
> root#  grub2-install /dev/sdb1
>    Results:  Installing for x86_64-efi platform.  Installation 
> finished. No error reported.

Again, you can do that, but it's not what you wanted to do.  GRUB2 is 
normally installed on the drive itself, unless there's a chain loader 
that will load it from the partition where you've installed it.  You 
wanted to:
# grub2-install /dev/sdb
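
Once GRUB2 is installed in the right place, the missing grub.cfg can be
regenerated.  A sketch, assuming the standard CentOS 7 paths (on a UEFI
system the file lives on the EFI system partition; adjust to taste):

```shell
# Regenerate grub.cfg on a CentOS 7 system (paths are the usual
# defaults, not verified against this machine).  Wrapped in a function
# so nothing runs until invoked.
regenerate_grub_cfg() {
    # BIOS boot: config lives under /boot/grub2
    grub2-mkconfig -o /boot/grub2/grub.cfg
    # UEFI boot: config lives on the EFI system partition instead
    grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
}
```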

> I rebooted the system now, only to be confronted with a GRUB prompt.

I'm guessing that you also constructed RAID1 volumes before rebooting, 
since you probably wouldn't have installed GRUB2 until you did so.  That 
would explain both why GRUB can't find its configuration file (the 
filesystem has been damaged) and why GRUB shows "no known filesystem 
detected" on the first partition of hd1.

If so, that's expected.  You can't convert a partition in-place.
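
You can see the damage directly: mdadm's metadata now occupies space 
inside the member, so the old filesystem's superblock is no longer where 
GRUB expects it.  A read-only check, with hypothetical device names:

```shell
# Inspect the md superblock written into a member partition
# (hypothetical device; wrapped in a function so it doesn't run here).
show_md_metadata() {
    mdadm --examine /dev/sdb2 | grep -E 'Version|Super Offset|Data Offset'
}
```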

> Looking through the directories, I see that there is no grub.cfg file.

It would normally be in the first partition, which GRUB cannot read on 
your system.

> So following the guidance I had I issued the following commands in 
> grub to boot the system.
>
> grub#  linux /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/sda2 ro
> grub#  initrd /initramfs-3.10.0-123.el7.x86_64.img
> grub#  boot
>
> Unfortunately the system hung on booting, with the following 
> information in the "journalctl" file:
> #  journalctl
> Not switching root: /sysroot does not seem to be an OS tree. 
> /etc/os-release is missing.

On your system, /dev/sda2 is "/boot" not the root filesystem.  Your 
"root=" arg should refer to your root volume, which should be something 
like "root=/dev/mapper/vg_jab-hostroot".  dracut may also need 
additional args to initialize LVM2 volumes correctly, such as 
"rd.lvm.lv=vg_jab/hostroot".  If you had encrypted your filesystems, it 
would also need the uuid of the LUKS volume.
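
Putting that together, the prompt commands would look something like 
this -- "vg_jab" and "hostroot" are placeholders for whatever your 
actual volume group and logical volume are called:

```
grub> linux /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/mapper/vg_jab-hostroot ro rd.lvm.lv=vg_jab/hostroot
grub> initrd /initramfs-3.10.0-123.el7.x86_64.img
grub> boot
```

The paths stay relative to the /boot partition GRUB is reading, which 
is why they don't start with /boot.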