Hi
This topic is perhaps not quite on-topic for this list, but I'm running CentOS 4.4 and it seems that a lot of people here use 3Ware cards and RAID volumes.
I did a RAID migration on a 3Ware 9590SE-12, so that an exported disk grew from 700GB to 1400GB. The exported disk is managed by LVM. The problem is that I don't really know what to do to let LVM and my logical volume make use of this new disk size, and of probable future disk growth.
I initially imagined that I could let the physical partition grow, and then grow the LVM PV, the LV and the filesystem (ext3) to fill up the new exported disk size. Then I realized that this is not how LVM is designed to be used - you add PVs and let the LV and FS grow.
But what if the underlying exported disk from a RAID card keeps growing as you add disks to its RAID volume? Should you create new partitions on the exported disk - specifically, logical partitions inside an extended partition, since you might in the future need more than 4 partitions?
Secondly, it seems there is a problem after this migration: the graphical LVM tool shows that I have "Unpartitioned space" on the exported disk, but I can't click the "Initialize Entry" button. The properties pane on the right says: "Not initializable: Partition manually".
parted shows only one partition:
[root@acrux ~]# parted /dev/sdb print
Disk geometry for /dev/sdb: 0.000-1430490.000 megabytes
Disk label type: msdos
Minor    Start        End          Type      Filesystem  Flags
1        0.000        715245.000   primary               lvm
fdisk in expert mode (x) shows that the fields for partitions 2-4 are all zeros.
Should I now make an LVM partition inside an extended partition with parted, and then, hopefully, the graphical LVM tool will let me add the new physical volume to my volume group, extend my logical volume, and resize my filesystem (ext3)?
Thanks in advance for any input on this, Christian
I did a RAID migration on a 3Ware 9590SE-12, so that an exported disk grew from 700GB to 1400GB. The exported disk is managed by LVM. The problem is that I don't really know what to do to let LVM and my logical volume make use of this new disk size, and of probable future disk growth.
I've been doing this recently with VMware ESX Server. To save space, I create base disk images of clean OS installs on a minimally sized disk. If I need space, I use VMware's tools to make the virtual disk bigger, and then grow the bits inside Linux with LVM.
I used the following two documents for info:
http://fedoranews.org/mediawiki/index.php/Expanding_Linux_Partitions_with_LV...
http://www.knoppix.net/wiki/LVM2
In my case, I was growing the root filesystem, so I needed to boot into something like Knoppix (hence the 2nd link above).
To summarize the links (usual caveats: back up your data, etc, etc):
- Create a new partition of type 8e (Linux LVM) on the new empty space.
- Add that PV to LVM. If the new partition is /dev/sda3, then this would look like:
pvcreate /dev/sda3
- Extend the volume group that contains the logical volume you want to add this space to. If the VG is VolGroup00, then:
vgextend VolGroup00 /dev/sda3
- Here I usually run vgdisplay and get the amount of free disk space that now exists (look for the line that says Free PE / Size). If Free PE / Size says there are 2.2GB free and the LV is LogVol00, you could do:
lvextend -L+2.2G /dev/VolGroup00/LogVol00
- Extend the filesystem. For ext2/ext3, use resize2fs (you may want to fsck before this; do it with the filesystem unmounted):
resize2fs /dev/VolGroup00/LogVol00
- Fsck:
e2fsck -fy /dev/VolGroup00/LogVol00
(you may now want to use -y)
The trick I had when doing this in Knoppix for existing LVMs was, after fdisk'ing, to run vgscan; vgchange -a y to get the existing LVM volumes recognized and the /dev entries created.
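Putting those steps together in one place (the device, VG and LV names are just the examples from above, the mount point is made up, and the 2.2G figure is whatever your own vgdisplay reports as free):

# /dev/sda3 already created with partition type 8e
pvcreate /dev/sda3                          # label the new partition as an LVM PV
vgextend VolGroup00 /dev/sda3               # add the PV to the volume group
vgdisplay VolGroup00                        # note the "Free  PE / Size" line
lvextend -L+2.2G /dev/VolGroup00/LogVol00   # grow the LV by the free amount
umount /mnt/data                            # resize2fs on ext3 here is an offline resize
e2fsck -f /dev/VolGroup00/LogVol00          # check the filesystem before resizing
resize2fs /dev/VolGroup00/LogVol00          # grow ext3 to fill the enlarged LV
mount /dev/VolGroup00/LogVol00 /mnt/data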
-Shawn
On 10/16/06, Shawn K. O'Shea shawn@ll.mit.edu wrote:
(...) (thanks for the links!)
To summarize the links (usual caveats: back up your data, etc, etc):
- Create a new partition of type 8e (Linux LVM) on the new empty space.
- Add that PV to LVM. If the new partition is /dev/sda3, then this would look like:
pvcreate /dev/sda3
This part is actually the main question I had (see the Subject) - each time I add a disk to my RAID-5 volume on the RAID card, the exported disk gets bigger, and every time I do this I have to add another partition on the disk to use the new space. With primary partitions I can only repeat this 4 times.
So, is the "proper way of doing this", when you grow the exported disk, to add a logical partition inside an extended partition each time you add a disk, and then add that new PV to the VG, resize LVs, etc.? Compare this to adding a physical disk "directly" to a VG: there you create only one partition on each new disk and then let your VG and LV grow. In this other situation, with a RAID card, you make the one exported disk larger each time you add one or more physical disks to the RAID volume, and then have to add a new partition on the same (emulated) disk as seen by the BIOS and operating system.
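To be concrete, what I imagine for each growth step would look roughly like this (the partition numbers and VG name are just hypothetical examples):

# in fdisk on /dev/sdb: n (new partition), an extended partition once and then
# a logical partition in it for each new chunk of space, t -> type 8e, w to write
pvcreate /dev/sdb5              # the new logical partition (/dev/sdb6, sdb7, ... next time)
vgextend myvg /dev/sdb5         # make the new space available to the VG
# then lvextend and resize2fs as in Shawn's steps above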
Regards, Christian
On Tue, 2006-10-17 at 08:11 +0200, Christian Wahlgren wrote:
So, is the "proper way of doing this", when you grow the exported disk, to add a logical partition inside an extended partition each time you add a disk, and then add that new PV to the VG, resize LVs, etc.? Compare this to adding a physical disk "directly" to a VG: there you create only one partition on each new disk and then let your VG and LV grow. With a RAID card, you make the one exported disk larger each time you add one or more physical disks to the RAID volume, and then have to add a new partition on the same (emulated) disk as seen by the BIOS and operating system.
I have never tried this ... however, you can extend the size of a partition with fdisk by removing the partition and recreating it starting on exactly the same cylinder it started on before and ending at a larger cylinder. (Please test this on a partition that you can afford to lose :)
I don't know how (or if) that affects the PV size that was assigned to that partition, but I do know that the filesystem stays the old size and needs to be extended if it is a plain (non-PV) partition.
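To spell out the fdisk dance (the keystrokes are shown as comments; /dev/sdb and partition 1 are only examples - again, test on something you can afford to lose):

fdisk /dev/sdb
#   p   print the table and note the START cylinder of partition 1
#   d   delete partition 1 (only the table entry goes away, the data is untouched)
#   n   recreate it: primary, number 1, the SAME start cylinder, a larger end
#   t   set the type back to 8e (Linux LVM)
#   w   write the new table and quit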
Here is something else I see ... http://episteme.arstechnica.com/eve/forums/a/tpc/f/96509133/m/839007490831
(that is not what I recommended, but something I found on google)
On Tue, 2006-10-17 at 06:00 -0500, Johnny Hughes wrote:
I don't know how (or if) that affects the PV size that was assigned to that partition, but I do know that the filesystem stays the old size and needs to be extended if it is a plain (non-PV) partition.
Just a note ... looks like pvresize can be used to extend the PV after the partition is extended.
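For example, if the grown partition turns out to be /dev/sdb1 (the name is just an example):

pvresize /dev/sdb1      # the PV grows to fill the partition; vgdisplay then shows the space as free PE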
I just finished playing this "game", though with a SAN volume. It took a couple of steps to take advantage of the extra space. First, the SAN manager extended the volume from 100G to 200G. I idled everything referencing the drive by disabling the volume group ( vgchange --available n vgname ). I then ran fdisk against the volume, noting the starting cylinder of the only partition on the drive. I deleted the partition, recreated it using the same starting cylinder and let fdisk figure out the last usable cylinder on the drive, reset the partition type to LVM, and wrote the partition table. At this point I had to reboot to get the system to re-read the partition table. Once it came back up I used pvresize to extend the physical volume to use the additional space in the partition. After that, vgdisplay showed the additional space as available for allocation. This was complicated by the fact that it was actually a two-node cluster (RHCS and GFS) so I had to reboot both nodes before running pvresize.
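In command form, that sequence was roughly (the VG name and device are placeholders for what I actually used):

vgchange --available n vgname     # idle everything that uses the VG
fdisk /dev/sdX                    # delete and recreate the single partition: same start
                                  #   cylinder, larger end, type 8e, then write the table
# reboot (or otherwise get the kernel to re-read the partition table)
pvresize /dev/sdX1                # grow the PV into the enlarged partition
vgdisplay vgname                  # the extra space now shows up as free PE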
I was originally just going to take the easy way out by creating a second partition/physical volume to use the additional space, but it seemed inelegant. I'm unlikely to ever extend this LUN more than once, but you just never know!
On Tue, 2006-10-17 at 20:29 -0500, Jay Leafey wrote:
At this point I had to reboot to get the system to re-read the partition table.
It used to be possible to get the partition table re-read with
sfdisk -R /dev/XXX
Still can do?
I was originally just going to take the easy way out by creating a second partition/physical volume to use the additional space, but it seemed inelegant. I'm unlikely to ever extend this LUN more than once, but you just never know!
AHEM! Inelegant? In the eye of the beholder! I *prefer* multiple partitions because of the flexibility it gives. I keep vg sizes closer to the anticipated max and then allocate more temporarily or permanently as needed. This is useful for snapshot volumes too as you can "cross snapshot" by putting the snapshot on a different physical unit.
Moreover, with some forethought, performance gains can be had by getting different portions of a volume group onto different physical units (hda, hdb,...) or even on the same physical units (put swap smack-dab in the middle of busy logical volumes to reduce head movement if you don't have a better physical unit to put it on).
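For example, you can steer a logical volume, or its snapshot, onto a particular PV by naming the PV at the end of lvcreate (the names below are made up):

lvcreate -L 20G -n data vg00 /dev/sdb1                    # put this LV's extents on /dev/sdb1
lvcreate -s -L 2G -n data-snap /dev/vg00/data /dev/sdc1   # snapshot space on a different physical unit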
Anyway, I like multiple partitions. The problem is in not overdoing it.
MO -- Bill
William L. Maltby wrote:
It used to be possible to get the partition table re-read with
sfdisk -R /dev/XXX
Still can do?
Hmm, didn't try that. I did do a "blockdev --rereadpt /dev/XXX", which purports to do the same thing, but that didn't work either. Even though I had disabled the VG, something was still looking at it.
AHEM! Inelegant? In the eye of the beholder! I *prefer* multiple partitions because of the flexibility it gives. I keep vg sizes closer to the anticipated max and then allocate more temporarily or permanently as needed. This is useful for snapshot volumes too as you can "cross snapshot" by putting the snapshot on a different physical unit.
Point taken, but in this case the SAN admin extended an existing volume rather than giving me a new one. I thought adding the "new" space to the volume as a separate physical volume would have been sub-optimal, like using two PVs on the same physical disk in the same volume group. Just my reasoning, but I've been wrong before!
On 18/10/06, Jay Leafey jay.leafey@mindless.com wrote:
William L. Maltby wrote:
It used to be possible to get the partition table re-read with
sfdisk -R /dev/XXX
Still can do?
Hmm, didn't try that. I did do a "blockdev --rereadpt /dev/XXX", which purports to do the same thing, but that didn't work either. Even though I had disabled the VG, something was still looking at it.
If I remember right (it's been a while), if your SCSI/HBA module (e.g. qlaXXXX) isn't in use for any other devices, rmmod'ing and then modprobe'ing it will cause a re-read of the partition tables.
If you have multiple LUNs mounted via the same device I think there are scripts you can use to re-probe available devices and their partition tables.
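A minimal sketch of the module-reload approach, assuming nothing else on that HBA is in use (the driver name is only an example; substitute whatever your card actually loads):

vgchange --available n vgname     # make sure nothing holds the device open
rmmod qla2xxx                     # example HBA driver module
modprobe qla2xxx                  # partition tables are re-read when the disks reappear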
Will.
On 10/18/06, Jay Leafey jay.leafey@mindless.com wrote:
Hi, and thanks for letting me know that I'm not the only one in this situation. It also seems there is no single straightforward, recommended way to do this.
I have experimented and practiced a little with a CentOS installation on VMware to confirm that both ways of doing this work. During the last couple of days I've been backing up my volume for safety. Probably on Sunday I will put that practice to use, and I will post my exact steps.
Although a bit "raw", I still prefer editing the original partition (as described above) to adding logical partitions in an extended partition every time the "exported disk" grows (and I will most probably add more disks to my RAID5 volume in the future). That way you always have a simple and clean partition table.
I also think this scenario, and the two ways of handling it that I know of, should be mentioned in the LVM HOWTO's "Common Tasks" section. Or is this scenario very uncommon?
Regards, Christian
Christian Wahlgren spake the following on 10/19/2006 11:22 PM:
I also think this scenario, and the two ways of handling it that I know of, should be mentioned in the LVM HOWTO's "Common Tasks" section. Or is this scenario very uncommon?
Maybe it was uncommon when the HOWTO was written, but with newer RAID controllers that let you expand the array by adding drives, it will become more common. I was also experimenting in VMware, and I had a problem expanding the PV after I expanded the RAID partition. But then I was experimenting by increasing the array with one larger drive at a time, and letting the array re-sync until all 3 drives were larger (software RAID). I used fdisk to expand the LVM partition on the RAID drive, but couldn't expand the LV. Maybe I did it wrong, but then that's why I was using VMware. Next week, I'll go at it again.
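For the record, the order I mean to try next time - I suspect the piece I missed was running pvresize after growing the partition, before touching the LV (the device, VG and LV names are just from my test setup, and the 10G is whatever vgdisplay actually reports as free):

# after the array has grown and the partition has been enlarged with fdisk:
pvresize /dev/sda3                          # let the PV pick up the bigger partition
vgdisplay VolGroup00                        # check that Free PE / Size went up
lvextend -L+10G /dev/VolGroup00/LogVol00    # grow the LV by the free amount
e2fsck -f /dev/VolGroup00/LogVol00          # check before the offline resize
resize2fs /dev/VolGroup00/LogVol00          # grow the ext3 filesystem to match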