Greetings -
I had a logical volume that was running out of space on a virtual machine. I successfully expanded the LV using lvextend, and lvdisplay shows that it has been expanded. Then I went to expand the filesystem to fill the new space (# resize2fs -p /dev/vde1) and I get the message that the filesystem is already xx blocks long, nothing to do. If I do a # df -h, I can see that the filesystem has not been extended. I could kick the users off the VM, reboot the VM using a GParted live CD, and extend the filesystem that way, but I thought that it was possible to do this live and mounted? The RH docs say this is possible; the man page for resize2fs also says it is possible with ext4. What am I missing here? This is a CentOS 6.2 VM with an ext4 filesystem. The logical volumes are set up on the host system, which is also a CentOS 6.2 system.
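For reference, the sequence I ran was roughly the following (the VG/LV names are illustrative; only /dev/vde1 is from my actual setup):

```shell
# On the host: extend the logical volume backing the guest disk
# (VG/LV names here are placeholders)
lvextend -L +20G /dev/vg_host/lv_virt_guest
lvdisplay /dev/vg_host/lv_virt_guest   # shows the new, larger size

# Inside the guest: try to grow the mounted ext4 filesystem online
resize2fs -p /dev/vde1   # reports the fs is already xx blocks long, nothing to do
df -h                    # still shows the old size
```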
Jeff Boyce Meridian Environmental
On Fri, Jun 15, 2012 at 12:10:09PM -0700, Jeff Boyce wrote:
Try resize4fs (assuming your FS is ext4).
Ray
On 06/15/2012 09:10 PM, Jeff Boyce wrote:
You didn't really specify your topology, so I assume you used lvextend on the host side. That change will not be visible in the guest until you reboot it.
The only way to resize without taking the system offline is to use LVM in the guest. Add a new virtual disk on the host side, which results in a hot-plug event in the guest (i.e. you should see the new drive appear in the guest). Now create a single partition on the drive (this is important!) and use pvcreate to turn it into a physical volume. Now add the new PV to the volume group with vgextend. Finally, you can lvextend the LV in the guest and resize the filesystem.
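A rough sketch of those steps as commands (the guest name, device names, and VG/LV names below are assumptions for illustration, not from your setup):

```shell
# Host: attach a new virtual disk to the running guest
# (example for a libvirt/KVM guest named "guest1")
virsh attach-disk guest1 /dev/vg_host/lv_virt_guest1_data vdb

# Guest: the hot-plug event should make the disk appear, e.g. as /dev/vdb
fdisk /dev/vdb                        # create a single partition -> /dev/vdb1
pvcreate /dev/vdb1                    # turn the partition into a physical volume
vgextend vg_guest /dev/vdb1           # add the new PV to the volume group
lvextend -l +100%FREE /dev/vg_guest/lv_data   # grow the LV into the new space
resize2fs -p /dev/vg_guest/lv_data    # grow the mounted ext4 filesystem online
```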
The partitioning of the new disk in the guest is important because if you use the disk directly as a PV then this PV will also be shown on the host. An alternative is to modify the LVM filters in /etc/lvm/lvm.conf on the host to specifically not scan the LV for the new disk. I find it easier to create a partition though (i.e. use /dev/vda1 instead of /dev/vda as the PV).
Regards, Dennis
On Sat, Jun 16, 2012 at 4:30 AM, Dennis Jacobfeuerborn < dennisml@conversis.de> wrote:
Not sure if this link would help; I used to refer to it now and then when I needed to extend an online partition --> http://www.randombugs.com/linux/howto-extend-lvm-partition-online.html
Hi Dennis,
Thanks for your explanation. Until now I just filtered the guests' PVs on the host at the "human interface level" by simply ignoring them, but yours is definitely the cleaner and more secure way.
Maybe I missed something, but in what way is it easier to partition each and every LV one wants to use as a PV in a guest than to specify a proper filter in /etc/lvm/lvm.conf once?
I use a consistent naming scheme for the LVs, like
/dev/vg_<number>/lv_virt_<hostname>
and use the filter
filter = [ "r|/dev/vg_\d+/lv_virt_.*|" ]
to ignore all the guest's PVs. Is there any downside in doing that, or are there any advantages in using partitions instead of raw 'devices' for the PVs?
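Put together, the host-side lvm.conf fragment along those lines might look like this (a sketch for the naming scheme above; depending on LVM's regex support, [0-9]+ may be safer than \d+):

```shell
# /etc/lvm/lvm.conf on the host -- devices section, sketch only
devices {
    # reject the guests' LVs used as PVs, accept everything else
    filter = [ "r|/dev/vg_[0-9]+/lv_virt_.*|", "a|.*|" ]
}
```

After editing, running pvscan and pvs on the host should confirm that the guests' PVs no longer show up.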
Best regards,
Peter.
On 06/16/2012 10:59 AM, Peter Eckel wrote:
I don't think there are any meaningful advantages or disadvantages to either approach. The partition approach lets you copy and use the disks on any system regardless of the filter configuration, because LVM never sees the metadata directly. And even then, if you forget the filter it just makes things look a bit untidy until the filter is in place.
Hopefully the new virtio-scsi driver will allow on-the-fly resizing of virtual disks and make the live extension of disk space in virtual machines less cumbersome.
Regards, Dennis