[CentOS-virt] LVM Lockout

Mon Oct 19 19:34:17 UTC 2009
Ben M. <centos at rivint.com>

I JUST got it straightened out last night and wanted to make a short 
note for simpletons like me to follow. I am rereading my notes and 
starting over, cleaning up the mess I made. Thank you, Christopher.

This procedure works on nvidia (sata_nv) and probably on other 
controller types. I hope I started off as clean as I think I did, and 
that this will work for others. I wish I could have found a tutorial 
like this.

(This assumes a fresh start. If you have been messing around, like I 
did, you have to do a very thorough cleanup first. If you need a 
cleanup routine, let me know; it really is a pain. Lots of pvck and 
other LVM checks, --removemissing's, fsck's, etc. Don't do this if you 
are not sure you are 100% clean.)

Adding new arrays for simpletons (about an hour, plus or minus):

1) Put in your new RAID1 disks (with the power plug pulled, of course). 
Plug it back in when done.

2) Fire up, go into the BIOS, and make sure RAID is enabled on those 
disk ports. Save, and reboot if needed.

3) Fire up again, go into the RAID BIOS, add the new RAID set, and 
clear the MBR in the RAID setup. Save and reboot.

4) Log in to dom0 over ssh as root:


  [root@thisdom0 ~]# dmraid -s

It should yield output similar to this:

*** Active Set
name   : nvidia_fbacacad
size   : 293046656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
*** Set
name   : nvidia_hfcehfah
size   : 1250263680
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0


In this case nvidia_hfcehfah is our target, the new array. It shows up 
as a Set, but not yet as an Active Set. If it doesn't appear at all, 
STOP HERE. There is a problem you have to figure out first.
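
If the new set does not show up, one quick thing worth checking (a 
minimal sketch, not a full troubleshooting guide) is whether dmraid can 
see the raw member disks at all:

  [root@thisdom0 ~]# dmraid -r

Each member disk should be listed against a set name. If the disks are 
missing here, the problem is at the controller/RAID BIOS level rather 
than anything in Linux.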

5) Check what has been "auto-mapped"

[root@thisdom0 ~]# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 62 Oct 18 21:46 control
brw-rw---- 1 root disk 253,  0 Oct 18 21:46 nvidia_fbacacad
brw-rw---- 1 root disk 253,  1 Oct 19 01:46 nvidia_fbacacadp1
brw-rw---- 1 root disk 253,  2 Oct 18 21:46 nvidia_fbacacadp2
brw-rw---- 1 root disk 253,  8 Oct 19 01:46 serv1vg00-root
brw-rw---- 1 root disk 253,  9 Oct 18 21:46 serv1vg00-swap
brw-rw---- 1 root disk 253, 10 Oct 19 01:47 serv1vg00-vgw2k8--1
brw-rw---- 1 root disk 253, 12 Oct 19 01:46 serv1vg01-isos
brw-rw---- 1 root disk 253, 11 Oct 19 01:46 serv1vg01-storage
brw-rw---- 1 root disk 253, 13 Oct 19 01:46 serv1vg01-vgw2k8swap--1


There should be NO reference to nvidia_hfcehfah (or whatever your new 
array is named). (If there is, you have to do a cleanup. Remove all LVM 
volumes referencing it, check with  # mount  to see if anything is 
mounted, and umount it if so. There is more, including fstab cleanup 
and restoring your initrd if you already did a mkinitrd, etc. STOP here 
if you need a cleanup.)
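
If you just want to double-check that nothing references the new array 
yet, checks along these lines should come back empty. This is a minimal 
sketch of the sort of thing I mean, not a full cleanup routine; 
substitute your own array name:

  [root@thisdom0 ~]# mount | grep hfcehfah
  [root@thisdom0 ~]# grep hfcehfah /etc/fstab
  [root@thisdom0 ~]# pvs -o pv_name,vg_name | grep hfcehfah

No output from any of those means nothing on the mount, fstab or LVM 
side is holding onto the array.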

~~  If you reboot before these next items are completely done, you will 
have nasty bite marks.~~

6) Activate your new array manually.

  [root@thisdom0 ~]# dmraid -ay

RAID set "nvidia_fbacacad" already active
RAID set "nvidia_hfcehfah" was activated
RAID set "nvidia_fbacacadp1" already active
RAID set "nvidia_fbacacadp2" already active


You should see that your target (nvidia_hfcehfah in this case) 'was activated'.
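
If you want a little reassurance before touching the initrd, you can 
check that the device node has appeared (just a quick check on my part; 
the full listing shows up again in step 11):

  [root@thisdom0 ~]# ls -l /dev/mapper | grep hfcehfah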

7) Confirm which kernel you are running:

  [root@thisdom0 ~]# uname -r
2.6.18-164.el5xen

8) Confirm that the grub default entry is indeed the one you boot from, 
since that is the init image we are going to update. With default=0 it 
is the first "title" entry.

  [root@thisdom0 ~]# cat /boot/grub/grub.conf
default=0
timeout=10
# splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-164.el5xen)
         root (hd0,0)
         kernel /xen.gz-2.6.18-164.el5 dom0_mem=1G dom0-min-mem=256
         module /vmlinuz-2.6.18-164.el5xen ro root=/dev/serv1vg00/root elevator=deadline
         module /initrd-2.6.18-164.el5xen.img


That last module line is the image we are after: module /initrd-2.6.18-164.el5xen.img


9) Prepare for the worst. The initrd here is my current target. Do not 
copy and paste these names from here, or you will get it wrong.

(this is all on one line)
[root@thisdom0 ~]# cp /boot/initrd-2.6.18-164.el5xen.img /boot/initrd-2.6.18-164.el5xen.img.original

[root@thisdom0 ~]# rm /boot/initrd-2.6.18-164.el5xen.img

(this is all on one line)
[root@thisdom0 ~]# /sbin/mkinitrd -v /boot/initrd-2.6.18-164.el5xen.img 2.6.18-164.el5xen

(double-check your kernel version before you hit Enter)

With the -v flag you will see your new initrd being assembled. You will 
NOT see the new array in the output; it just doesn't show, which threw 
me off several times until I finally said the heck with it.
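
If, like me, you get nervous because the array never shows in the -v 
output, you can peek inside the new image instead. This is only a 
sketch, assuming the stock CentOS 5 initrd layout (a gzipped cpio 
archive containing a nash 'init' script); the scratch directory is just 
an example name:

  [root@thisdom0 ~]# mkdir /tmp/initrd-check
  [root@thisdom0 ~]# cd /tmp/initrd-check
  [root@thisdom0 initrd-check]# zcat /boot/initrd-2.6.18-164.el5xen.img | cpio -idm
  [root@thisdom0 initrd-check]# grep nvidia init

If the rebuild picked the array up, you should see your set name 
mentioned in that init script. If it is not there, the activation may 
be happening later in rc.sysinit instead, as Christopher mentions below.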

Sit back, count to ten. Then go over what you did. You are going to 
reboot when you feel sure.

10) Reboot. If the box came back up, you did good. Check your RAID sets 
right away.

[root@thisdom0 ~]# dmraid -s
*** Active Set
name   : nvidia_fbacacad
size   : 293046656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
*** Active Set
name   : nvidia_hfcehfah
size   : 1250263680
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

11) Hoo-effing-ray! That looks like it worked. Let's confirm.

[root@thisdom0 ~]# ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 62 Oct 18 21:46 control
brw-rw---- 1 root disk 253,  0 Oct 18 21:46 nvidia_fbacacad
brw-rw---- 1 root disk 253,  1 Oct 19 01:46 nvidia_fbacacadp1
brw-rw---- 1 root disk 253,  2 Oct 18 21:46 nvidia_fbacacadp2
brw-rw---- 1 root disk 253,  3 Oct 18 21:46 nvidia_hfcehfah
brw-rw---- 1 root disk 253,  8 Oct 19 01:46 serv1vg00-root
brw-rw---- 1 root disk 253,  9 Oct 18 21:46 serv1vg00-swap
brw-rw---- 1 root disk 253, 10 Oct 19 01:47 serv1vg00-vgw2k8--1
brw-rw---- 1 root disk 253, 12 Oct 19 01:46 serv1vg01-isos
brw-rw---- 1 root disk 253, 11 Oct 19 01:46 serv1vg01-storage
brw-rw---- 1 root disk 253, 13 Oct 19 01:46 serv1vg01-vgw2k8swap--1

And there the target is: nvidia_hfcehfah

12) Set up your partitions now. I used fdisk; don't forget the full path:

[root@thisdom0 ~]# fdisk /dev/mapper/nvidia_hfcehfah

Do what you want here. I made small partitions at the front, sized for 
a boot and swap arrangement if it is ever needed, and divided the rest 
equally in case I want to move things around later. Don't forget to set 
your partition id types to what they should be (8e, Linux LVM, in most 
cases where you are laying in VMs).
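
One note here: the kernel will not necessarily pick up the new 
partition table on a dmraid device by itself. The cold reboot described 
next sorts that out, but if you want to see the partition mappings 
before rebooting, kpartx can usually create them by hand. A sketch, 
assuming the kpartx utility is installed:

  [root@thisdom0 ~]# kpartx -a /dev/mapper/nvidia_hfcehfah
  [root@thisdom0 ~]# ls /dev/mapper | grep hfcehfahp

You should see nvidia_hfcehfahp1, p2, and so on, matching the 
partitions you just made.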

After fdisk'ing I did a 'shutdown -hF now', pulled the plug and let it 
sit for a while. I wanted to make sure everything came back up after a 
dead cold stop event.

Now you can lay in your PVs, VGs, LVs, etc. as you wish. I currently 
keep my PVs isolated to their respective physical devices. A missing 
volume in the group that holds your root filesystem can be troublesome 
and cut you off from remote access. You lose some of the convenience of 
migrating extents, but you gain a little bullet-proofing for 
maintaining remote access.
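
As a rough illustration of that last step, here is the kind of thing I 
mean. The partition number, volume group name, and logical volume name 
below are made up for the example, so substitute your own:

  [root@thisdom0 ~]# pvcreate /dev/mapper/nvidia_hfcehfahp3
  [root@thisdom0 ~]# vgcreate serv1vg02 /dev/mapper/nvidia_hfcehfahp3
  [root@thisdom0 ~]# lvcreate -L 100G -n storage2 serv1vg02

Keeping serv1vg02 entirely on the new array is what I mean by isolating 
PVs to their respective devices: if that array has a problem, serv1vg00 
and the root filesystem are not tangled up with it.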

- Ben Montanelli IV


Christopher G. Stach II wrote:
> ----- "Ben M." <centos at rivint.com> wrote:
> 
>> They show up in /dev as sd(x)s, but not in /dev/mapper with the
>> nvidia_ handle. I type dmraid -ay and there they are, but not
>> auto-magically like the first pair do at boot. I don't see a conf
>> file to make this happen.
> 
> You can temporarily edit rc.sysinit around the dmraid logic and add some debugging to see what it's doing or not doing.
> 
> Have you considered skipping dmraid and just using MD RAID? You would probably be better off.
>