Hi
I've recently rebuilt my home server using CentOS 7 and transplanted over the main storage disks
It's a 3-disk RAID 5 with an LVM volume group (vg03) on it
Activating and mounting works fine:
# vgscan
  Reading volume groups from cache.
  Found volume group "vg03" using metadata type lvm2
# vgchange -ay
  1 logical volume(s) in volume group "vg03" now active
# lvscan
  ACTIVE '/dev/vg03/storage' [<1.82 TiB] inherit
I can then mount /dev/vg03/storage as expected
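For reference, the manual mount is just something along these lines (same mount point as in the fstab entry further down):

# mount /dev/vg03/storage /mnt/storage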
However, on a reboot, boot fails if I add that entry to fstab:
'Timed out waiting for device dev-mapper-vg03\x2dstorage.device'
I then have to activate it again with vgchange. I'm guessing I'm going to need a grub option or to do something with dracut, but I'm a bit stuck here
Thanks
Duncan
On 09/30/2017 08:30 AM, Duncan Brown wrote:
However, on a reboot, boot fails if I add that entry to fstab: 'Timed out waiting for device dev-mapper-vg03\x2dstorage.device'
I then have to activate it again with vgchange. I'm guessing I'm going to need a grub option or to do something with dracut, but I'm a bit stuck here
You can add the kernel option "rd.lvm.lv=vg03/storage" but that *should* only be necessary if that mount point is needed in the early boot process.
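On CentOS 7 that would be something along these lines (a sketch; the grub.cfg path assumes a BIOS install, EFI systems use a different one):

# grubby --update-kernel=ALL --args="rd.lvm.lv=vg03/storage"

or add it to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the config:

# grub2-mkconfig -o /boot/grub2/grub.cfg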
What does your fstab entry look like?
On 30/09/2017 17:49, Gordon Messmer wrote:
On 09/30/2017 08:30 AM, Duncan Brown wrote:
However, on a reboot, boot fails if I add that entry to fstab: 'Timed out waiting for device dev-mapper-vg03\x2dstorage.device'
I then have to activate it again with vgchange. I'm guessing I'm going to need a grub option or to do something with dracut, but I'm a bit stuck here
You can add the kernel option "rd.lvm.lv=vg03/storage" but that *should* only be necessary if that mount point is needed in the early boot process.
What does your fstab entry look like?
Thanks for the reply
No joy after adding the kernel option, exactly the same issue
As for the fstab entry:
# cat /etc/fstab
UUID=84cb3521-4722-4993-8f8d-07289d6486cb /             xfs   defaults                                 0 0
UUID=3f7c32cd-49bb-4fda-8dc1-db88d2912786 /boot         xfs   defaults                                 0 0
UUID=a36c7e69-67d6-4ad2-b4c5-01228b168c4b swap          swap  defaults                                 0 0
/dev/vg03/storage                         /mnt/storage  xfs   defaults,sunit=1024,swidth=2048,inode64  1 2
On 09/30/2017 05:25 PM, Duncan Brown wrote:
No joy after adding the kernel option, exactly the same issue
Is /etc/mdadm.conf up to date? Run "mdadm --detail --scan" to get the information you need, and either replace the lines in mdadm.conf or add the one that's missing. You might need to rebuild the initrd afterward (dracut --force). I'm unclear on why any of that would be necessary, though. I don't usually add pre-existing arrays to running systems, so I'm a bit out of my experience here.
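Roughly something like this, as a sketch (merge the ARRAY line it prints with any existing entries rather than blindly appending):

# mdadm --detail --scan >> /etc/mdadm.conf
# dracut --force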
On 01/10/17 11:25, Duncan Brown wrote:
No joy after adding the kernel option, exactly the same issue
It might require a vgexport then vgimport to fix.
vgimport man page:
DESCRIPTION
    vgimport allows you to make a Volume Group that was previously exported
    using vgexport(8) known to the system again, perhaps after moving its
    Physical Volumes from a different machine. vgexport clears the VG system
    ID, and vgimport sets the VG system ID to match the host running
    vgimport (if the host has a system ID).
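As a rough sketch, with the filesystem unmounted and the VG deactivated first (and backups in place before touching VG metadata):

# umount /mnt/storage
# vgchange -an vg03
# vgexport vg03
# vgimport vg03
# vgchange -ay vg03
# mount /mnt/storage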
On 05/10/2017 12:10, Anthony K wrote:
It might require a vgexport then vgimport to fix.
On 03/10/2017 21:28, Gordon Messmer wrote:
Is /etc/mdadm.conf up to date? Run "mdadm --detail --scan" to get the information you need, and either replace the lines in mdadm.conf or add the one that's missing. You might need to rebuild the initrd afterward (dracut --force). I'm unclear on why any of that would be necessary, though. I don't usually add pre-existing arrays to running systems, so I'm a bit out of my experience here.
Thanks for the replies both
I'd already tried both of those ideas before posting; I should have mentioned that
In the end, what fixed it was copying over the lvm.conf from the old system's backup and rebuilding the initrd
I didn't think to diff the two beforehand, so I'm not sure what changed, but it's sorted now anyway!
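For anyone hitting the same thing, the fix boiled down to something like this (the backup mount point here is just an example, not my actual path):

# cp /mnt/backup/etc/lvm/lvm.conf /etc/lvm/lvm.conf
# dracut --force
# reboot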
thanks again
Duncan