[CentOS-virt] LVM Lockout SIMPLER

Thu Oct 15 01:53:00 UTC 2009
Ben M. <centos at rivint.com>

Given what I wrote before, this is what I see after boot-up:

[root@ThisXen1 ~]# dmraid -s
*** Active Set
name   : nvidia_fbacacad
size   : 293046656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
*** Set
name   : nvidia_hfcehfah
size   : 1250263680
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0


I want BOTH sets active so that the LVs for my domUs are up; as the
output above shows, nvidia_hfcehfah is not active.

If I type dmraid -ay, both will be active. I can't find a grub.conf
kernel argument to do that, which would seem "safer" than the fstab
entry and would fail more "gracefully," likely limited to just the domUs.
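
Failing a kernel argument, the kludge I keep circling back to (untested
on this box, so treat it as a sketch) is doing the activation late in
boot from rc.local:

# /etc/rc.d/rc.local -- just an idea, not a grub/kernel-line solution
/sbin/dmraid -ay      # activate any fakeraid sets the initrd didn't bring up
/sbin/vgchange -ay    # then activate whatever VGs/LVs live on them

Not elegant, but it keeps a missing set out of the early boot path.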

Ben M. wrote:
> > It may be easier to help if you explain where you're at, where you
> > want to be, and what you did to make it worse. :) The dmraid and LVM
> > layouts would help, as well as the symptoms you're having.
> 
> 
> This is all a "standard CentOS 5.3" install, with X Windows and XFCE4 
> loaded afterwards and started from the command line, not as a service. 
> No repos other than 'stock'.
> 
> I use virt-manager to spin up domUs quickly and then edit them to 
> polish them off.
> 
> Scenario
> ========
> Small machine, a test case, but I will be using pretty much the same 
> setup for the next few builds, only with twice the capacity. 4 CPU 
> cores, 16 GB RAM.
> 
> - 2X AMD 2212s (2 cores by two)
> - Tyan board, 16 GB ECC memory.
> - MCP55 nvidia fake raid (I have had good fortune with this chipset).
> - Pair of 160 GB WD VelociRaptors, RAID1 on the BIOS, dmraid on the OS.
> - DVD plus a 160 GB PATA drive on IDE for odds and ends.
> - ACPI Manufacturer's Table set to Off in BIOS; it was troublesome.
> 
> Essentially everything is good, with the exception of a couple of xm 
> dmesg WRMSR and NMI warnings that are not fatal.
> 
> I've run decently with the above. Fine on domUs with CentOS x86; not 
> sure about x64, but I don't need it yet. I have Windows 2008 Web Server 
> (x86 version, not x64) "running" okay with GPLPV. I say okay because I 
> must be getting some sloppy shutdowns, judging by some Registry Hive 
> and corruption errors. It could be that W2K8 is not the stable creature 
> that I found W2K3 to be. It certainly is loaded with an incredible 
> amount of useless crap, some of which seems to serve DRM more than 
> system needs. They should have called it Vista Server, heh.
> 
> Then came instability, caused by the addition of another RAID1 set.
> 
> 
> Situation (Where I'm at)
> ========================
> I added in a pair of WD 640 GB Caviars, not RE types, which I modded a 
> tiny bit with the WDTLER utility to make them more "RAID ready." It's a 
> minor trick, but it gives you a little more assurance.
> 
> They show up in /dev as sd(x)s, but not in /dev/mapper with the nvidia_ 
> handle. If I type dmraid -ay, there they are, but not auto-magically 
> the way the first pair does at boot. I don't see a conf file to make 
> this happen.
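> 
> My guess (and it is only a guess, I have not verified it) is that the 
> first pair comes up because the initrd only knows about the set that / 
> lives on, so one thing I may try is rebuilding the initrd now that the 
> second set exists:
> 
> cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
> mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
> 
> If that doesn't pull the second set in, rc.local is the fallback.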
> 
> Mistake 1:
> ==========
> Never mount the "primary" device in /dev/mapper; only use the ones that 
> end with "p1", "p2", etc.
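> 
> In other words (the mount point here is just for illustration):
> 
> mount /dev/mapper/nvidia_hfcehfah   /data   # wrong - whole-disk fakeraid node
> mount /dev/mapper/nvidia_hfcehfahp1 /data   # right - the partition node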
> 
> Mistake 2:
> ==========
> Never vgextend a disk into the same volume group as your / (root). I 
> wanted this so I could migrate the extents of the good test domUs over 
> to the new disks. That would have been the easy and slick way.
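> 
> Roughly the sequence I mean, reconstructed from memory (partition names 
> are approximate, and it's the thing that bit me, so don't copy it):
> 
> pvcreate /dev/mapper/nvidia_hfcehfahp1
> vgextend vg00 /dev/mapper/nvidia_hfcehfahp1   # root VG now spans a set dmraid may not activate
> pvmove /dev/mapper/nvidia_fbacacadp2          # the goal: migrate extents onto the new disks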
> 
> Probably Mistake 3:
> ===================
> Don't vgextend across hard devices at all. Keep VolumeGroups on 
> dedicated PhysicalVolumes that don't cross over. My initial raidset was 
> 'vg00' and the added-on PATA drive 'vg01', and I was fine with that.
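> 
> What I mean by dedicated, more or less (names and size are placeholders):
> 
> pvcreate /dev/mapper/nvidia_hfcehfahp1        # PV on the new set only
> vgcreate vg02 /dev/mapper/nvidia_hfcehfahp1   # VG that never crosses devices
> lvcreate -L 20G -n domu_test vg02             # domU LVs carved from vg02 alone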
> 
> The loss of a LogicalVolume that sits on a dropped device is rather 
> inelegant. I have not found out what happens if everything is contained 
> in a dedicated Volume Group (VG) on a dedicated device (PV), but if a 
> LogicalVolume (LV) is on a different device (PV) than the rest of the 
> LVs within that VG, then Xen, dom0 and all the domUs end up non-booting 
> and inaccessible via IP.
> 
> An fstab entry for a /dev/mapper/raid_device(PartNo)/LVname kills the 
> boot if the LV isn't there, whether due to a hardware drop or to dmraid 
> not initializing the set.
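> 
> One hedge I have not actually tried: flag those mounts noauto so a 
> missing LV can't stop the boot, and mount them later by hand or from a 
> script once dmraid -ay has run (paths and names below are made up):
> 
> /dev/vg02/domu_store   /var/lib/xen/images2   ext3   defaults,noauto   0 0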
> 
> All seems fine on a box with one dmraid device. It is with two RAID1s 
> that I am hitting a wall. I'm going to try one more time, right now, 
> with the additional raid set on its own PV/VG setup (e.g. vg02).
> 
> 
> Where I was:
> Stable with 1 RAID1 set, running out of space.
> 
> Where I am:
> Unstable after attempting to add an additional RAID1 set.
> 
> Where I want to be:
> 2 RAID1 sets
> 1 "miscellaneous" PATA
>   and stable.
> 
> I am going to try one more time to add the additional RAID1 set with 
> its own PV and VG, and figure out how to move the existing domUs to it 
> after I bring it to the office (150 miles away). I hope I don't have to 
> run back and forth to get on the console.
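> 
> My current thinking for the move, since pvmove only shuffles extents 
> inside one VG (LV names and sizes below are made up, and the domU has 
> to be shut down first):
> 
> lvcreate -L 10G -n w2k8web vg02                     # same size as the source LV
> dd if=/dev/vg00/w2k8web of=/dev/vg02/w2k8web bs=1M  # raw copy between VGs
> # point the domU config at the new path, test, then lvremove the old LV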
> 
> 
> 
> 
> Christopher G. Stach II wrote:
>> ----- "Ben M." <centos at rivint.com> wrote:
>>
>>>> The metadata should get everything loaded for you out of the
>>>> initrd, as long as it has all of the necessary RAID level drivers
>>>> in it.
>>>
>>> I wish. This has been agonizing and it looks like I am going to be 
>>> installing a kludge. I'm going to have to accelerate the build of the
>>> next ones. I don't have a warm and fuzzy on this machine anymore.
>> It may be easier to help if you explain where you're at, where you want to be, and what you did to make it worse. :) The dmraid and LVM layouts would help, as well as the symptoms you're having.
>>
> 