How many people use LVM on their production servers? Is it reliable for use in these sorts of systems? I am asking because I was reading the LVM HOWTO about upgrading disks, etc.
Regards,
Peter
On 1/10/06, Peter Kitchener peter@registriesltd.com.au wrote:
How many people use LVM on their production servers? Is it reliable for use in these sorts of systems? I am asking because I was reading the LVM HOWTO about upgrading disks, etc.
In the same boat... I'm just building a backup server with LVM. However, people from my LUG have been using it and claim it is rock steady...
--
Sudev Barar
Learning Linux
On Monday 09 January 2006 08:49 pm, Sudev Barar wrote:
However, people from my LUG have been using it and claim it is rock steady...
Never used it. Our backup systems use multiple 250G drives (our next backup server will use multiple 400G drives). We use multiple homeX partitions, and we manage what backs up where, so this works for us.
My recollection is that if any drive in an LVM fails, the whole LVM fails.
If that's true, then I wouldn't ever use it, as it would increase the failure rate.
Jeff
On Tue, 2006-01-10 at 11:27 -0800, Jeff Lasman wrote:
On Monday 09 January 2006 08:49 pm, Sudev Barar wrote:
However, people from my LUG have been using it and claim it is rock steady...
Never used it. Our backup systems use multiple 250G drives (our next backup server will use multiple 400G drives). We use multiple homeX partitions, and we manage what backs up where, so this works for us.
My recollection is that if any drive in an LVM fails, the whole LVM fails.
LVM doesn't fail, but hard drives do. If a hard drive that was part of the LVM failed, you would lose all that data, true. That is why one should use a redundant RAID level (1, 1+0, 5) under LVM (in other words, LVM on top of RAID). :)
Then you can replace the failed drive and keep going.
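For concreteness, a minimal sketch of that layering done by hand (the device names, volume group name, and sizes below are invented for illustration, not from any particular setup):

  # build a RAID1 array from two partitions, then layer LVM on top of it
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  pvcreate /dev/md0               # make the md device an LVM physical volume
  vgcreate vg0 /dev/md0           # volume group sitting on the mirror
  lvcreate -L 20G -n home vg0     # carve out a logical volume
  mkfs.ext3 /dev/vg0/home
  # if a member disk later fails, the array keeps running degraded;
  # pull the failed partition out, swap the disk, partition it identically, re-add
  mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
  mdadm /dev/md0 --add /dev/sdb2

LVM never sees the disk swap at all; it only ever talks to /dev/md0.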
If that's true, then I wouldn't ever use it, as it would increase the failure rate.
On Tue, 2006-01-10 at 13:40, Johnny Hughes wrote:
My recollection is that if any drive in an LVM fails, the whole LVM fails.
LVM doesn't fail, but hard drives do. If a hard drive that was part of the LVM failed, you would lose all that data, true. That is why one should use a redundant RAID level (1, 1+0, 5) under LVM (in other words, LVM on top of RAID). :)
Then you can replace the failed drive and keep going.
Is there a simple way to install the system on LVM-on-top-of-RAID1?
On 1/10/06, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, 2006-01-10 at 13:40, Johnny Hughes wrote:
My recollection is that if any drive in an LVM fails, the whole LVM fails.
LVM doesn't fail, but hard drives do. If a hard drive that was part of the LVM failed, you would lose all that data, true. That is why one should use a redundant RAID level (1, 1+0, 5) under LVM (in other words, LVM on top of RAID). :)
Then you can replace the failed drive and keep going.
Is there a simple way to install the system on LVM-on-top-of-RAID1?
I'd like to second this question. The other day I tried a CentOS 4.2 install on two IDE drives. My goal was to mirror them (software RAID 1) and put them under LVM, so that I might, in the future, add two more drives and extend the existing setup.
1) Is this possible?
2) If so, how? I could not figure it out despite trying what seemed like every combination of options in disk druid.
I succeeded in setting up a RAID 1, but not in putting the RAID device under LVM.
Thanks, Matt
On Jan 10, 2006, at 4:13 PM, Matt Morgan wrote:
- Is this possible?
yes.
- If so, how? I could not figure it out despite trying what seemed
like every combination of options in disk druid.
if memory serves...
1) create /dev/hda1 with type Software RAID autodetect and size 100M
2) create /dev/hdb1 with type Software RAID autodetect and size 100M
3) click the "RAID" button and create a RAID 1 volume, /dev/md0, using those two partitions
4) format /dev/md0 as ext3 with mount point /boot
5) create /dev/hda2 with type Software RAID autodetect and size <the rest of the disk>
6) create /dev/hdb2 with type Software RAID autodetect and size <the rest of the disk>
7) click the "RAID" button and create a RAID 1 volume, /dev/md1, using those two partitions
8) format /dev/md1 as an LVM physical volume
9) click the "LVM" button and create a new volume group containing /dev/md1
10) create logical volumes within this volume group as necessary (i'd recommend at least swap, /var, and /)
proceed with your install. when finished, you've got one more step after you reboot; you need to copy grub onto the MBR of /dev/hdb. here are instructions:
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
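in case that link ever goes away, the gist is the GRUB (legacy) shell; roughly this (assuming the second disk is /dev/hdb and /boot is its first partition, matching the layout above):

  grub
  grub> device (hd1) /dev/hdb
  grub> root (hd1,0)
  grub> setup (hd1)
  grub> quit

"device" tells grub to treat /dev/hdb as hd1, "root" points it at the /boot partition on that disk, and "setup" writes the boot record onto that disk's MBR, so the box can still boot if hda dies.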
somebody else may have written this up on this list before.
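once the installed system is up, a quick sanity check of the layout might look like this (output will of course differ from box to box):

  cat /proc/mdstat   # both md0 and md1 should show two active members, [UU]
  pvs                # /dev/md1 should be the only physical volume
  vgs                # the volume group created during install
  lvs                # the logical volumes (swap, /var, / ...)
  df -h              # the mounted filesystems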
-steve
--- If this were played upon a stage now, I could condemn it as an improbable fiction. - Fabian, Twelfth Night, III,v
On 1/10/06, Steve Huff shuff@vecna.org wrote:
On Jan 10, 2006, at 4:13 PM, Matt Morgan wrote:
- Is this possible?
yes.
- If so, how? I could not figure it out despite trying what seemed
like every combination of options in disk druid.
if memory serves...
1) create /dev/hda1 with type Software RAID autodetect and size 100M
2) create /dev/hdb1 with type Software RAID autodetect and size 100M
3) click the "RAID" button and create a RAID 1 volume, /dev/md0, using those two partitions
4) format /dev/md0 as ext3 with mount point /boot
5) create /dev/hda2 with type Software RAID autodetect and size <the rest of the disk>
6) create /dev/hdb2 with type Software RAID autodetect and size <the rest of the disk>
7) click the "RAID" button and create a RAID 1 volume, /dev/md1, using those two partitions
8) format /dev/md1 as an LVM physical volume
9) click the "LVM" button and create a new volume group containing /dev/md1
10) create logical volumes within this volume group as necessary (i'd recommend at least swap, /var, and /)
proceed with your install. when finished, you've got one more step after you reboot; you need to copy grub onto the MBR of /dev/hdb. here are instructions:
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html
somebody else may have written this up on this list before.
-steve
Thanks! I'll have a chance to try this next week and will write back with confirmation, unless someone else gets to it first.
Les Mikesell wrote:
On Tue, 2006-01-10 at 13:40, Johnny Hughes wrote:
Is there a simple way to install the system on LVM-on-top-of-RAID1?
Yes, you simply set up the RAID1 array and then use it as the LVM PV.
I use software RAID1 on two SATA drives and then create the LVM structure on the RAID1 "disk". I have three servers running like this and it's great. One was running out of space in /home but had plenty free in /var, so I just reduced the size of /var and added the space to /home. The whole process took just a few minutes.
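Roughly, that shrink-one/grow-the-other dance looks like the following (the LV names and sizes here are made up; ext3 has to be unmounted to shrink, but on EL4 it can be grown online with ext2online):

  # shrink /var: unmount, fsck, shrink the filesystem a bit below the target,
  # shrink the LV to the target, then let resize2fs grow the fs to fill it exactly
  umount /var
  e2fsck -f /dev/VolGroup00/var
  resize2fs /dev/VolGroup00/var 19G
  lvreduce -L 20G /dev/VolGroup00/var
  resize2fs /dev/VolGroup00/var
  mount /var
  # grow /home with the freed extents, online
  lvextend -L +10G /dev/VolGroup00/home
  ext2online /dev/VolGroup00/home

Shrinking the filesystem slightly below the new LV size and then growing it back to fill avoids any chance of the LV ending up smaller than the filesystem it holds.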
LVM can also be useful for backing up volumes where files are constantly changing, using the "snapshot" feature.
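A small sketch of the snapshot trick (names, sizes, and paths here are invented for illustration):

  # create a 2G copy-on-write snapshot of the home LV
  lvcreate -s -L 2G -n home_snap /dev/VolGroup00/home
  mkdir -p /mnt/home_snap
  mount -o ro /dev/VolGroup00/home_snap /mnt/home_snap
  # back up the frozen view while the real /home stays in use
  tar czf /backup/home-$(date +%Y%m%d).tar.gz -C /mnt/home_snap .
  umount /mnt/home_snap
  lvremove -f /dev/VolGroup00/home_snap   # drop the snapshot when done

The 2G only has to be big enough to hold the blocks that change on the origin while the snapshot exists.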
When I installed CentOS 4, it created an LVM partition occupying the bulk of my disk. I would like to trim it back so I can add another partition for another (third) operating system.
I found some directions that I thought would let me do this. I used resize2fs to resize the filesystem and lvreduce to resize the logical volume.
My problem is that I cannot reduce the size of the partition. When I do, I get errors when I try to boot:
Found volume group "VolGroup00" using metadata type lvm2
device-mapper: dm-linear: Device lookup failed
device-mapper: dm-linear: Device lookup failed
device-mapper ioctl cmd 9 failed: Invalid argument
Couldn't load device 'VolGroup00-LogVol00'.
device-mapper ioctl cmd 9 failed: Invalid argument
Couldn't load device 'VolGroup00-LogVol01'.
Kernel panic - not syncing: Attempted to kill init!
Here is the current configuration:
lvm> vgs
  VG         #PV #LV #SN Attr  VSize   VFree
  VolGroup00   1   2   0 wz--n 154.03G 78.06G
lvm> lvs
  LV       VG         Attr   LSize   Origin Snap% Move Log Copy%
  LogVol00 VolGroup00 -wi-ao  75.00G
  LogVol01 VolGroup00 -wi-ao 992.00M
lvm> vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               154.03 GB
  PE Size               32.00 MB
  Total PE              4929
  Alloc PE / Size       2431 / 75.97 GB
  Free  PE / Size       2498 / 78.06 GB
  VG UUID               P8ykKE-BJgP-tOkw-IgU3-btPA-54Ge-7JlcHn
I would like to shrink the size of the Volume Group. Here is my partition table:
   Device Boot    Start       End      Blocks  Id  System
/dev/sda1   *       554     10280   78132127+   7  HPFS/NTFS
/dev/sda2             1       553     4441941  1b  Hidden W95 FAT32
/dev/sda3         10281     10293     104422+  83  Linux
/dev/sda4         10294     30401   161517510   5  Extended
/dev/sda5         10294     30401  161517478+  8e  Linux LVM
I thought I could just shrink sda5 to be about 77GB, but that results in the boot errors.
lvm says that pvresize is not implemented.
Any pointers?
Mike
On 1/17/06, Michael Ubell ubell@sleepycat.com wrote:
When I installed CentOS 4 it created a LVM partition occupying the bulk of my disk. I would like to trim it back so I can put another partition for another (thrid) operating system.
I found some directions that I thought would let me do this. I used resize2fs to resize the filesystem and lvreduce to resize the logical volume.
My problem is that I cannot reduce the size of the partition. When I do, I get errors when I try to boot:
Just for the record, are you up to 4.2 level? AFAIK, lvm(2) support was quite broken prior to that. Waiting for others to comment.
--
Collins Richey
Debugging is twice as hard as writing the code ... If you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. - Brian Kernighan
On Jan 17, 2006, at 6:28 PM, Collins Richey wrote:
Just for the record, are you up to 4.2 level? AFAIK, lvm(2) support was quite broken prior to that. Waiting for others to comment.
I believe this is 4.2:
Linux summer 2.6.9-22.0.1.EL #1 Thu Oct 27 14:29:45 CDT 2005 x86_64 x86_64 x86_64 GNU/Linux
no?
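(For what it's worth, the release file is a more direct check than the kernel string, though 2.6.9-22.x does correspond to the 4.2 update:)

  cat /etc/redhat-release      # e.g. "CentOS release 4.2 (Final)"
  rpm -q lvm2 device-mapper    # the userspace pieces that matter for LVM behaviour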
On Tue, 2006-01-17 at 20:28, Collins Richey wrote:
Just for the record, are you up to 4.2 level? AFAIK, lvm(2) support was quite broken prior to that. Waiting for others to comment.
Does that mean it is no longer broken, or just not quite as broken as it used to be? Are snapshots reliable now?
On Wed, 2006-01-18 at 00:20 -0600, Les Mikesell wrote:
On Tue, 2006-01-17 at 20:28, Collins Richey wrote:
Just for the record, are you up to 4.2 level? AFAIK, lvm(2) support was quite broken prior to that. Waiting for others to comment.
Does that mean it is no longer broken, or just not quite as broken as it used to be? Are snapshots reliable now?
Snapshots are not reliable yet, in my experience (in the currently released code from EL4 u2).
There are more improvements in the u3 kernel:
------
- set of device mapper changes: (Alasdair G Kergon)
  - fix dm_swap_table error cases
  - device-mapper multipath: Use private workqueue [154432]
  - device-mapper dm-emc: Fix a memset [154435]
  - device-mapper multipath: Flush workqueue when destroying [156412]
  - dm-raid1: Limit bios to size of mirror region
  - device-mapper snapshots: Handle origin extension
  - device-mapper dm.c - set of locking fixes [156569]
  - device-mapper multipath: Clean up presuspend hook
  - device-mapper multipath: Not every error is EIO [155427 151324]
  - device-mapper multipath: Fix pg initialisation races [154442]
  - device-mapper multipath: Default to SCSI error handler
------
and in the u3 LVM2
------
* Fri Dec 02 2005 Alasdair Kergon agk@redhat.com - 2.02.01-1.1
- Build against latest device-mapper package.

* Wed Nov 23 2005 Alasdair Kergon agk@redhat.com - 2.02.01-1.0
- Fix lvdisplay command line for snapshots; fix open file promotion to read/write; fix an lvcreate error path.
- Update package dependencies.

* Thu Nov 10 2005 Alasdair Kergon agk@redhat.com - 2.02.00-1.0
- Lots of fixes and new activation code.
------
We shall see if this is any better; so far it seems to be, but I am not running it in production yet.
The solution is here: http://www.redhat.com/archives/linux-lvm/2004-December/msg00049.html
In brief:
vgcfgbackup, edit backup file, vgcfgrestore.
The tricky part was figuring out that LogVol01 (the swap space) had to be moved down, as it was declared to start at a high extent. Not sure why they would do that.
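For anyone hitting the same wall, the shape of that procedure is roughly as follows (the file name is arbitrary, and the exact numbers are whatever your own layout dictates):

  vgcfgbackup -f /root/VolGroup00.cfg VolGroup00
  # edit /root/VolGroup00.cfg by hand:
  #   - reduce pe_count in the pv0 section to the new (smaller) number of extents
  #   - if any LV (here, the swap LV) has a segment whose start_extent lies beyond
  #     that limit, move it down so it fits inside the reduced area
  vgcfgrestore -f /root/VolGroup00.cfg VolGroup00
  # only once the LVM metadata is consistent, shrink the partition itself with fdisk/parted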
Mike
On Jan 17, 2006, at 11:10 AM, Michael Ubell wrote:
When I installed CentOS 4, it created an LVM partition occupying the bulk of my disk. I would like to trim it back so I can add another partition for another (third) operating system.
I found some directions that I thought would let me do this. I used resize2fs to resize the filesystem and lvreduce to resize the logical volume.
My problem is that I cannot reduce the size of the partition. When I do, I get errors when I try to boot:
Found volume group "VolGroup00" using metadata type lvm2
device-mapper: dm-linear: Device lookup failed
device-mapper: dm-linear: Device lookup failed
device-mapper ioctl cmd 9 failed: Invalid argument
Couldn't load device 'VolGroup00-LogVol00'.
device-mapper ioctl cmd 9 failed: Invalid argument
Couldn't load device 'VolGroup00-LogVol01'.
Kernel panic - not syncing: Attempted to kill init!
Here is the current configuration:
lvm> vgs
  VG         #PV #LV #SN Attr  VSize   VFree
  VolGroup00   1   2   0 wz--n 154.03G 78.06G
lvm> lvs
  LV       VG         Attr   LSize   Origin Snap% Move Log Copy%
  LogVol00 VolGroup00 -wi-ao  75.00G
  LogVol01 VolGroup00 -wi-ao 992.00M
lvm> vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               154.03 GB
  PE Size               32.00 MB
  Total PE              4929
  Alloc PE / Size       2431 / 75.97 GB
  Free  PE / Size       2498 / 78.06 GB
  VG UUID               P8ykKE-BJgP-tOkw-IgU3-btPA-54Ge-7JlcHn
I would like to shrink the size of the Volume Group. Here is my partition table:
   Device Boot    Start       End      Blocks  Id  System
/dev/sda1   *       554     10280   78132127+   7  HPFS/NTFS
/dev/sda2             1       553     4441941  1b  Hidden W95 FAT32
/dev/sda3         10281     10293     104422+  83  Linux
/dev/sda4         10294     30401   161517510   5  Extended
/dev/sda5         10294     30401  161517478+  8e  Linux LVM
I thought I could just shrink sda5 to be about 77GB, but that results in the boot errors.
lvm says that pvresize is not implemented.
Any pointers?
Mike
We use LVM directly (LVM on the whole disk, no partition tables) on top of 3ware hardware RAID on about ten NFS servers (35 TiB in total). Filesystem sizes (and therefore LVM LV and VG sizes) are 0.4 - 3.6 TiB. All is well. We are currently using a non-CentOS kernel, but LVM has behaved nicely on the stock kernel too.
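In case the "no partition table" part is not obvious, the general shape is just this (the device and names below are placeholders; the 3ware controller presents each array as a single SCSI disk):

  pvcreate /dev/sdb                    # use the whole raw device as a physical volume
  vgcreate exportvg /dev/sdb
  lvcreate -L 500G -n export0 exportvg
  mkfs.ext3 /dev/exportvg/export0
  # adding space later: pvcreate the new array, vgextend the VG,
  # lvextend the LV, then grow the filesystem

Skipping the partition table keeps the stack simple; LVM is the only layer that has to know about sizes.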
/Peter K
On Tuesday 10 January 2006 03:33, Peter Kitchener wrote:
How many people use LVM on their production servers? Is it reliable for use in these sorts of systems? I am asking because I was reading the LVM HOWTO about upgrading disks, etc.
Regards,
Peter