Hi All,
I have what I believe to be a pretty basic LVM & RAID setup on my CentOS 5 machine:
Raid Partitions: /dev/sda1,sdb1 /dev/sda2,sdb2 /dev/sda3,sdb3
During the install I created a RAID 1 volume, md0, out of sda1,sdb1 for the boot partition, and added sda2,sdb2 to a separate RAID 1 volume (md1). I then set up md1 as an LVM physical volume for the volume group 'system'. I left the sda3,sdb3 partitions available for future use.
Next I created swap, /, /usr, /var, etc. logical volumes in the system volume group and continued with this install as normal. Everything went fine. I was able to use the system, reboot, etc., without problems.
I then discovered that I needed more space in my /var volume than was available in the system volume group. So, I created another RAID device, /dev/md2 (using sda3,sdb3), and created an LVM physical volume on top of that. Finally, I extended the 'system' volume group to contain this new physical volume and expanded the size of the /var logical volume.
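The steps in the previous paragraph, sketched as commands (the +4G size and the ext3 resize are illustrative assumptions, not what I necessarily ran):

```shell
# Create the new RAID 1 array from the spare partitions
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Put an LVM physical volume on the array and grow the volume group
pvcreate /dev/md2
vgextend system /dev/md2

# Grow /var (size illustrative) and resize the filesystem to match
lvextend -L +4G /dev/system/var
resize2fs /dev/system/var
```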
This worked fine, but on reboot I get a ton of errors from LVM saying that the volume with id xxxx-xxxx-xxxx... was not found, and then the system automatically reboots. This seems to happen for all volumes, not just the ones I changed. The error even occurs for a separate volume group (called 'extended') that is on a separate set of disks and existed prior to the CentOS 5 install.
Any idea what step I missed? I know things are still fine on the disks: when I boot from the CentOS DVD with the 'linux rescue' option, all RAID & LVM volumes are available for use. So it seems I need to update some CentOS config file?
Here are some config files: http://pastebin.com/m6d5075dc
Thanks! Nick
Hi. You have the full history of your LVM volumes in /etc/lvm/archive and /etc/lvm/backup. Search for the missing id in the archive to understand what happened!
Had the RAID synchronization finished before you created your new PV, or before you rebooted? It shouldn't change anything, but it's just an idea.
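One way to do that search (substitute the actual id from the boot message for the placeholder) might be:

```shell
# Search the LVM metadata history for the UUID reported at boot
grep -r "xxxx-xxxx-xxxx" /etc/lvm/archive /etc/lvm/backup

# Compare against the UUIDs LVM currently sees on disk
pvdisplay -c
vgdisplay -v system
```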
On 9/4/07, Nick Webb webbn@acm.org wrote:
[original message quoted in full; snipped]

_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Hi Alain,
On 9/4/07, Alain Spineux aspineux@gmail.com wrote:
Hi. You have the full history of your LVM volumes in /etc/lvm/archive and /etc/lvm/backup. Search for the missing id in the archive to understand what happened!
Had the RAID synchronization finished before you created your new PV, or before you rebooted? It shouldn't change anything, but it's just an idea.
I will check that. I had found those files before, but didn't know what to do with them.
Anyone have an idea how to capture LVM's output at boot? It scrolls past quickly and then the machine reboots; I have only a second or two to write it down. It took me two or three reboots just to get the gist of the message last time.
I've tried using a serial console (appending console=tty0 console=ttyS0,38400 to the kernel line in grub), but the errors only show up on the real console, not the serial one. If I could get the output onto the serial console, it would be much easier to track down which volumes have errors.
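For reference, the grub.conf entry being described looks roughly like this (the kernel version and root device are illustrative, not copied from my system):

```
title CentOS 5
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/system/root console=tty0 console=ttyS0,38400
    initrd /initrd-2.6.18-8.el5.img
```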
Nick
On 9/4/07, Nick Webb webbn@acm.org wrote:
[original message, signature, and list footer snipped]
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Nick Webb
[original message snipped]
Did you pvcreate /dev/sda3 and /dev/sdb3 directly, instead of /dev/md2, either before or after the RAID set was created?
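One way to check, from the running system or the rescue environment, might be:

```shell
# If pvscan lists /dev/sda3 or /dev/sdb3 directly (rather than /dev/md2),
# a PV was created on the bare partitions instead of on the array
pvscan
pvdisplay

# The partitions should appear as members of md2 here, not as PVs
cat /proc/mdstat
```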
-Ross
On 9/4/07, Ross S. W. Walker rwalker@medallion.com wrote:
[quoted message snipped]
Did you pvcreate /dev/sda3 and /dev/sdb3 directly, instead of /dev/md2, either before or after the RAID set was created?
-Ross
I don't think so; the partition types are 'fd', and if I boot from the rescue CD everything seems to be working fine.
sh-3.1# fdisk -l

Disk /dev/sda: 18.2 GB, 18210036736 bytes
255 heads, 63 sectors/track, 2213 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  fd  Linux raid autodetect
/dev/sda2              33        1122     8755425   fd  Linux raid autodetect
/dev/sda3            1123        2213     8763457+  fd  Linux raid autodetect

Disk /dev/sdb: 18.2 GB, 18210036736 bytes
255 heads, 63 sectors/track, 2213 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          32      257008+  fd  Linux raid autodetect
/dev/sdb2              33        1122     8755425   fd  Linux raid autodetect
/dev/sdb3            1123        2213     8763457+  fd  Linux raid autodetect
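From the rescue environment, a quick sanity check of the array and PV state might look like this (command selection is a suggestion, not what was actually run):

```shell
# Confirm md2 is assembled from sda3/sdb3 and in sync
mdadm --detail /dev/md2

# Confirm which PVs LVM sees and which volume group they belong to
pvdisplay -c
vgscan
lvscan
```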
Nick
This worked fine, but on reboot I get a ton of errors from LVM saying that the volume with id xxxx-xxxx-xxxx... was not found, and then the system automatically reboots. This seems to happen for all volumes, not just the ones I changed. The error even occurs for a separate volume group (called 'extended') that is on a separate set of disks and existed prior to the CentOS 5 install.
Any idea what step I missed? I know things are still fine on the disks: when I boot from the CentOS DVD with the 'linux rescue' option, all RAID & LVM volumes are available for use. So it seems I need to update some CentOS config file?
Did you re-create the initrd after adding the new RAID group? I'm not 100% sure about this, but my guess would be that the initrd isn't starting md2 and thus can't find the PV on that RAID device.
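A sketch of rebuilding the initrd on CentOS 5 from the rescue environment (the kernel version is illustrative; substitute the installed one):

```shell
# Chroot into the installed system from 'linux rescue'
chroot /mnt/sysimage

# Make sure the new array is listed so the initrd can assemble it
# (this appends scan results; prune any duplicate lines afterwards)
mdadm --examine --scan >> /etc/mdadm.conf

# Rebuild the initrd for the installed kernel, keeping a backup
cp /boot/initrd-2.6.18-8.el5.img /boot/initrd-2.6.18-8.el5.img.bak
mkinitrd -f /boot/initrd-2.6.18-8.el5.img 2.6.18-8.el5
```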
-Shad