On Tue, August 30, 2011 18:57, psprojectplanning@gmail.com wrote:
On 29/08/2011 15:46, James B. Byrne wrote:
I am experimenting with KVM and I wish to create a virtual machine image in a logical volume. I can create the new lv without a problem, but when I go to format its file system I get these warnings:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
. . .
The favour of a direct copy of any reply to the mailing list is requested as I am a digest subscriber.
You do not need to reboot every time you adjust a Logical Volume. Do you actually need to format a file system on a KVM guest's Logical Volume?
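(As an aside, when the kernel does need to re-read a changed partition table, partprobe can usually do that without a reboot; /dev/sda below is simply the device named in your warning.)

# partprobe /dev/sda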
I formatted the new lv as ext4.
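That is, roughly the following, where the volume group and lv names match the parted output quoted further down in this thread and the size is only illustrative:

# lvcreate -L 120G -n lv_guest01 vg_inet02
# mkfs.ext4 /dev/vg_inet02/lv_guest01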
I'm currently juggling servers to try and get a free machine to test KVM on CentOS 6, but I have recently found, with another RHEL clone I'm testing, that if you do not set up the logical volume with virsh (or, I suppose, virt-manager) you will have issues getting the guest machines to run.
I am using virt-manager to set up the VMs.
If you look at sections 26.1.4.1 and 26.1.4.2 of the Red Hat Virtualization Guide for RHEL 6, it explains how to use fdisk to create a partition for the Logical Volume, set it to the Linux LVM type, and create the storage pool for the KVM guests (pages 217 and 218).
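In outline, the fdisk procedure described there looks something like this (an illustrative session; /dev/sdb stands in for whatever disk will hold the storage pool):

# fdisk /dev/sdb
Command (m for help): n                 (create a new partition)
Command (m for help): t                 (change its type)
Hex code (type L to list codes): 8e     (8e = Linux LVM)
Command (m for help): w                 (write the table and exit)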
I am using that guide and I thank you for the specific reference. Nonetheless, I had the same problems when I used fdisk.
On my current RHEL clone test system, to create the VolGroup / storage pool I used the virsh commands on pages 222 and 223 of the Red Hat Virtualization Guide (which were similar to the following):
# virsh pool-define-as guest_images_lvm logical - - /dev/cciss/c0d0p3 libvirt_lvm /dev/libvirt_lvm
# virsh pool-build guest_images_lvm
# virsh pool-start guest_images_lvm
# virsh pool-autostart guest_images_lvm
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
guest_images_lvm     active     yes
To create the actual logical volume for the virtual machine I used the following command:

# virsh --connect qemu:///system vol-create-as guest_images_lvm volume1 20G
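The new volume should then show up in the pool, which can be checked with:

# virsh vol-list guest_images_lvm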
I don't remember formatting a file system prior to installing the KVM guest, but I am new to KVM and I'm experimenting as well.
jk
I believe that the main problem I experienced was due to a change in the behaviour of virt-manager from 5.6 to 6.0, a change that I consider a defect and have reported as Bug 734529.
Essentially, the parted error messages are meaningless insofar as the new lv is in fact properly formatted, and it can be found and mounted, as shown in the output of parted -l:
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_guest01: 129GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  129GB  129GB  ext4
I have no idea what is causing the errors reported by parted, but they evidently have no impact on the result.
However, the behaviour of virtual machine manager has changed so that it no longer permits the operator to specify an alternate location and image file name unless that file already exists. If one chooses to navigate to an alternate location, say /var/vms/lv_guest_01, in the file browser, and that location has no content, then the file browser enters an indefinite wait state which can only be ended by navigating to somewhere else in the file system that has content.
In 5.6, one could navigate to an empty directory and then supply a new file name, which would be used to hold the new image. In 6.0, one must first create that file in the desired location; only then can it be selected in the file browser and used by the virtual machine manager to save the new image.
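Put another way, the workaround under 6.0 is to pre-create the target file by hand before running the new-machine wizard, e.g. using the path from my example above:

# mkdir -p /var/vms
# touch /var/vms/lv_guest_01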
Otherwise, one has to enter the host's storage options and add storage volumes there. This appears at first blush to give functionality equivalent to the old behaviour, but it is far from obvious to the user.
It was the combination of the parted errors and the unexpected behaviour of the virtual machine manager that had me confused. I inferred that the second issue was a consequence of the first, when in fact the first had no effect on anything and the two had nothing to do with each other.