----- "Ben M." centos@rivint.com wrote:
Gack. I added an additional RAID1 pair to my machine just before I planned to bring it over to the office, and I did something dumb and locked myself out.
I have the PVs, VGs and LVs cleared. All I need to do is get on as root and remove a line from fstab, but I can't get the filesystem out of read-only mode to save the edit.
My root login at the "Repair filesystem" prompt seems unable to make the file writable. I have done this in the past with Knoppix, but I can't seem to find the utility to make the filesystem writable (same with CentOS and other live CDs; I'm downloading a newer BackTrack now).
Is the root filesystem's LV/device not writable, or is your root filesystem mounted read-only? If it's the latter, just "mount -o remount,rw /".
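For reference, the whole fix from the "Repair filesystem" prompt is only a few commands, assuming the root filesystem itself is healthy and merely mounted read-only:

  # remount the root filesystem read-write in place
  mount -o remount,rw /

  # remove or comment out the offending fstab line, then reboot cleanly
  vi /etc/fstab
  sync
  reboot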
Thanks, I am going to try this. I finally got in by disconnecting the new RAID1 (nv fakeraid); after that, linux rescue could mount the drives and volumes, I deleted the fstab entry, and I'm good again.
I'm gonna take one more shot at adding this additional storage and will try your suggestion if it happens again.
Are there any known issues with Xen/CentOS "standard" and adding new drives and dmraid "automagically" activating them?
I have to say my biggest mistake was trying to add the pv (hd) to the existing vgs so I could migrate some virtual machines to the other array.
Big mess.
----- "Ben M." centos@rivint.com wrote:
Are there any known issues with Xen/CentOS "standard" and adding new drives and dmraid "automagically" activating them?
There are problems with things like snapshots of LVs in VMs on LVs on dmraid in dom0. :) I don't know of anything about adding RAID devices in dom0. The metadata should get everything loaded for you out of the initrd, as long as it has all of the necessary RAID level drivers in it.
The metadata should get everything loaded for you out of the initrd, as long as it has all of the necessary RAID level drivers in it.
I wish. This has been agonizing and it looks like I am going to be installing a kludge. I'm going to have to accelerate the build of the next ones. I don't have a warm and fuzzy on this machine anymore.
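If the initrd was built before the second array existed, one thing worth trying, purely as a hedge, is rebuilding it for the running dom0 kernel so it gets another look at the dmraid metadata; keep a copy of the old image in case the new one misbehaves, and note that a stock mkinitrd may still only activate the set holding the root filesystem:

  # back up the current initrd for the running (xen) kernel, then rebuild it
  cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)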
It may be easier to help if you explain where you're at, where you want to be, and what you did to make it worse. :) The dmraid and LVM layouts would help, as well as the symptoms you're having.
This is all "Standard CentOS 5.3" install, with X Windows and XFCE4 loaded afterwards and initiated as command line, not as service. No repos other than 'stock'.
I use virt-manager to get domUs started quickly and then edit them to polish things off.
Scenario
========

Small machine, a test case, but I will be using pretty much the same setup for the next few builds, only with twice the capacity. 4 CPU cores, 16 GB RAM.
- 2x AMD 2212s (two cores each).
- Tyan board, 16 GB ECC memory.
- MCP55 nvidia fake raid (I have had good fortune with this chipset).
- Pair of 160 GB WD VelociRaptors, RAID1 in the BIOS, dmraid in the OS.
- DVD plus a 160 GB PATA drive on IDE for odds and ends.
- ACPI manufacturer's table set to Off in the BIOS; it was troublesome.
Essentially everything is good, with the exception of a couple of WRMSR and NMI warnings in xm dmesg that are not fatal.
It ran decently with the above. domUs are fine with CentOS x86; not sure about x64, but I don't need it yet. I have Windows 2008 Web Server (the x86 version, not x64) "running" okay with GPLPV. I say okay because I must be getting some sloppy shutdowns, judging by some registry hive corruption errors. It could be that W2K8 is not the stable creature that I found W2K3 to be. It certainly is loaded with an incredible amount of useless crap, some of which seems to serve DRM more than system needs. They should have called it Vista Server, heh.
Then came the instability caused by the addition of another RAID1 set.
Situation (Where I'm at)
========================

I added a pair of WD 640 GB Caviars, not the RE type, which I modded a tiny bit with the WDTLER utility to make them more "raid ready." It's a minor trick, but it gives you a little more assurance.
They show up in /dev as sd(x) devices, but not in /dev/mapper with the nvidia_ handle. If I type dmraid -ay they appear, but not auto-magically at boot like the first pair does. I don't see a conf file to make that happen.
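A blunt workaround, if no proper conf mechanism turns up: run the activation from rc.local, which executes at the end of boot. The set name below is the one that shows up later in this thread and may differ on another box; anything in fstab that lives on that set would then need to be mounted afterwards (or marked noauto), since rc.local runs after the normal fstab mounts.

  # activate the second fakeraid set late in boot (set name is box-specific;
  # a plain "/sbin/dmraid -ay" would activate every set instead)
  echo '/sbin/dmraid -ay nvidia_hfcehfah' >> /etc/rc.d/rc.local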
Mistake 1:
==========

Never mount the "primary" device in mapper, only the ones that end with "p1", "p2", etc.
Mistake 2:
==========

Never vgextend a disk into the same VolumeGroup as your / (root). I wanted to do this so I could migrate the extents of the good test domUs onto the new array. That would have been the easy, slick way.
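For what it's worth, that tempting flow looks roughly like this (device and LV names are made up for illustration); the catch is exactly what this thread shows, namely that the root VG then depends on the new set staying up:

  # add the new set's partition to the existing VG, then move one domU's
  # extents onto it -- hypothetical names throughout
  pvcreate /dev/mapper/nvidia_hfcehfahp1
  vgextend vg00 /dev/mapper/nvidia_hfcehfahp1
  pvmove -n vm1_disk /dev/mapper/nvidia_fbacacadp1 /dev/mapper/nvidia_hfcehfahp1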
Probably Mistake 3:
===================

Don't vgextend physical drives at all; keep VolumeGroups on dedicated PhysicalVolumes that don't cross over. My initial raid set was 'vg00' and the added-on PATA drive was 'vg01', and I was fine with that.
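A minimal sketch of that "dedicated PV/VG per array" layout, with made-up names and sizes:

  # give the new RAID1 set its own PV and VG so a failure there cannot
  # take vg00 (and root) down with it
  pvcreate /dev/mapper/nvidia_hfcehfahp1
  vgcreate vg02 /dev/mapper/nvidia_hfcehfahp1
  lvcreate -L 20G -n vm1_disk vg02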
The loss of a LogicalVolume that is on a dropped device is rather inelegant. I have not found out what happens when everything is contained in a dedicated VolumeGroup (VG) on a dedicated device (PV), but if a LogicalVolume (LV) sits on a different device (PV) than the rest of the LVs in its VG, then Xen, dom0 and all the domUs end up non-booting and inaccessible over IP.
An fstab entry for a /dev/mapper/raid_device(PartNo)/LVname kills the boot if the LV isn't there, whether because of a hardware drop or because dmraid didn't initialize it.
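One way to keep a missing LV from being fatal, assuming the filesystem on it is not needed to boot: mark the entry noauto and turn off the fsck pass, so a no-show device is skipped instead of dropping dom0 to the repair shell. A hypothetical entry:

  # /etc/fstab -- "noauto" skips the mount at boot, and the trailing 0
  # skips fsck, so the boot survives the LV being absent
  /dev/vg02/vm_store  /srv/vm_store  ext3  defaults,noauto  0 0

The trade-off is mounting it by hand (or from rc.local) once the set is active.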
Everything seems fine on a box with one dmraid device. It's with two RAID1s that I'm hitting a wall. I'm going to try one more time, right now, with the additional raid set on its own PV/VG setup (e.g. vg02).
Where I was: Stable with 1 Raid1 set, running out of space.
Where I am: Unstable after attempting to add additional Raid1 set.
Where I want to be: 2 RAID1 sets, 1 "miscellaneous" PATA drive, and stable.
I am going to try one more time to add the additional RAID1 set with its own PV and VG, and figure out how to move the existing domUs to it after I bring the machine to the office (150 miles away). I hope I don't have to run back and forth to get on the console.
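Since pvmove cannot cross volume groups, one plain way to relocate an existing domU once vg02 exists is a straight block copy with the guest shut down; the names and size below are illustrative:

  # create a destination LV at least as large as the source, then copy
  lvcreate -L 10G -n vm1_disk vg02
  dd if=/dev/vg00/vm1_disk of=/dev/vg02/vm1_disk bs=1M

  # point the domU config at /dev/vg02/vm1_disk, test, then lvremove the old LV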
Given what I wrote before, after boot-up I want this:
[root@ThisXen1 ~]# dmraid -s
*** Active Set
name   : nvidia_fbacacad
size   : 293046656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
*** Set
name   : nvidia_hfcehfah
size   : 1250263680
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
I want BOTH sets active so that the LVs for my domUs are up; nvidia_hfcehfah is not active.
If I type dmraid -ay, they will both be active, but I can't find a grub.conf kernel argument to do that at boot. That would seem "safer" than the fstab entry and would fail more "gracefully", with the damage likely limited to just the domUs.
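Short of a kernel argument, it may help to unpack the dom0 initrd and see exactly how nvidia_fbacacad gets activated there; whatever mechanism it uses for the first set is presumably what would need to know about the second one (or fall back to the rc.local approach above). A rough look, assuming the stock CentOS 5 gzipped-cpio initrd format:

  # unpack a copy of the running kernel's initrd and look for the dmraid bits
  mkdir /tmp/ird && cd /tmp/ird
  zcat /boot/initrd-$(uname -r).img | cpio -idm
  grep -i -E 'dmraid|nvidia' init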