I am experimenting with KVM and I wish to create a virtual machine image in a logical volume. I can create the new LV without problem, but when I go to format its file system I get these warnings:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.

Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
When I take a look at things using parted I see this:
# parted -l print

Model: ATA WDC WD5000AAKS-0 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  525MB  524MB  primary  ext4         boot
 2      525MB   500GB  500GB  primary               lvm

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_guest01: 129GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  129GB  129GB  ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_log: 1049MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  1049MB  1049MB  ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_tmp: 8389MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  8389MB  8389MB  ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_home: 4194MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  4194MB  4194MB  ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_swap: 8321MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  8321MB  8321MB  linux-swap(v1)

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_root: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  53.7GB  53.7GB  ext4
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.

Error: /dev/sr0: unrecognised disk label
The host system is CentOS-6.0 with updates applied. I did a manual disc configuration on initial install but I do not recall specifically dealing with /dev/sr0 at any point.
Can anyone explain to me what is happening here and what I should do? Am I constrained to reboot the server each time that I make changes to an LV? Is there some configuration change I need to make to the base system?
The favour of a direct copy of any reply to the mailing list is requested as I am a digest subscriber.
On Mon, August 29, 2011 10:46, James B. Byrne wrote:
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.

Error: /dev/sr0: unrecognised disk label
I have discovered that this is caused by a piece of OEM software embedded in ROM in the LG DVD-RW drive that was 'formerly' installed on this system. That device has since been replaced.
However, I am still concerned about what the rest of this message means and its implications:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
I have tried using partprobe and /sbin/blockdev --rereadpt /dev/sda, and both report that the device /dev/sda is busy. Is this an artifact of using SATA-style disks, or has something changed between CentOS-5.6 and CentOS-6.0 that specifically relates to this problem? On 5.6 I can create new LVs, then mount and use them without a reboot. On 6.0 I cannot, for the moment at least, discover how this is done.
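For reference, the exact invocations were along these lines (note the long option spelling for blockdev):

# partprobe /dev/sda
# blockdev --rereadpt /dev/sda

Both fail in the same way because the partitions on /dev/sda are in use by the running system.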
James B. Byrne wrote:
On Mon, August 29, 2011 10:46, James B. Byrne wrote:
<snip>
However, I am still concerned about what the rest of this message means and its implications:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
I have tried using partprobe and /sbin/blockdev --rereadpt /dev/sda, and both report that the device /dev/sda is busy. Is this an artifact of using SATA-style disks, or has something changed between CentOS-5.6 and CentOS-6.0 that specifically relates to this problem? On 5.6 I can create new LVs, then mount and use them without a reboot. On 6.0 I cannot, for the moment at least, discover how this is done.
Were you doing it on /dev/sda?! If so, that was a *very* Bad Idea, since /dev/sda is normally your /boot and /; of *course* it's busy, it's your o/s, and doesn't want to be repartitioned, esp. while running.
mark
On 29/08/2011 15:46, James B. Byrne wrote:
I am experimenting with KVM and I wish to create a virtual machine image in a logical volume. I can create the new LV without problem, but when I go to format its file system I get these warnings:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.

Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
<snip>
Can anyone explain to me what is happening here and what I should do? Am I constrained to reboot the server each time that I make changes to an LV? Is there some configuration change I need to make to the base system?
The favour of a direct copy of any reply to the mailing list is requested as I am a digest subscriber.
You do not need to reboot every time you adjust a logical volume. Also, do you actually need to format a file system on a KVM guest's logical volume?
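To illustrate the first point, creating, formatting, and mounting a new LV is done entirely online; something like the following (the LV name here is only an example, against the vg_inet02 group shown in your parted output) needs no partition-table re-read at all:

# lvcreate -L 20G -n lv_guest02 vg_inet02
# mkfs.ext4 /dev/vg_inet02/lv_guest02
# mount /dev/vg_inet02/lv_guest02 /mnt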
I'm currently juggling servers to try to free up a machine to test KVM on CentOS 6, but I have recently found, with another RHEL clone I'm testing, that if you do not set up the logical volume with virsh (or, I suppose, virt-manager) you will have issues getting the guest machines to run.
If you look at sections 26.1.4.1 and 26.1.4.2 of the Red Hat Virtualization Guide for RHEL 6, it explains how to use fdisk to create a partition for the logical volume, set it to the Linux LVM type, and create the storage pool for the KVM guests (pages 217 and 218).
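In outline, the fdisk part looks like this (the device name is only an example, and I have left out the prompts for partition number and size that follow 'n'):

# fdisk /dev/sdb
Command (m for help): n    (create the new partition)
Command (m for help): t    (set the partition type)
Hex code (type L to list codes): 8e    (8e = Linux LVM)
Command (m for help): w    (write the table and exit)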
On my current RHEL clone test system, to create the volume group / storage pool I used the virsh commands on pages 222 and 223 of the Red Hat Virtualization Guide (which were similar to the following):
# virsh pool-define-as guest_images_lvm logical - - /dev/cciss/c0d0p3 libvirt_lvm /dev/libvirt_lvm
# virsh pool-build guest_images_lvm
# virsh pool-start guest_images_lvm
# virsh pool-autostart guest_images_lvm
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
guest_images_lvm     active     yes
To create the actual logical volume for the virtual machine I used the following command:

# virsh --connect qemu:///system vol-create-as guest_images_lvm volume1 20G
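A guest install against that volume would then look something like this (untested on my side; the guest name and ISO path are only placeholders, and the volume path follows from the pool target given above):

# virt-install --connect qemu:///system --name guest01 --ram 1024 \
    --disk path=/dev/libvirt_lvm/volume1 \
    --cdrom /path/to/install-dvd.iso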
I don't remember formatting a file system prior to installing the KVM guest, but I am new to KVM and I'm experimenting as well.
jk
On Tue, August 30, 2011 18:57, psprojectplanning@gmail.com wrote:
On 29/08/2011 15:46, James B. Byrne wrote:
I am experimenting with KVM and I wish to create a virtual machine image in a logical volume. I can create the new LV without problem, but when I go to format its file system I get these warnings:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
. . .
The favour of a direct copy of any reply to the mailing list is requested as I am a digest subscriber.
You do not need to reboot every time you adjust a logical volume. Also, do you actually need to format a file system on a KVM guest's logical volume?
I formatted the new LV as ext4.
I'm currently juggling servers to try to free up a machine to test KVM on CentOS 6, but I have recently found, with another RHEL clone I'm testing, that if you do not set up the logical volume with virsh (or, I suppose, virt-manager) you will have issues getting the guest machines to run.
I am using virt-manager to set up the VMs.
If you look at sections 26.1.4.1 and 26.1.4.2 of the Red Hat Virtualization Guide for RHEL 6, it explains how to use fdisk to create a partition for the logical volume, set it to the Linux LVM type, and create the storage pool for the KVM guests (pages 217 and 218).
I am using that guide and I thank you for the specific reference. Nonetheless, I had the same problems when I used fdisk.
<snip>
I believe that the main problem I experienced was due to a change in the behaviour of virt-manager from 5.6 to 6.0, a change that I consider a defect and have reported as Bug 734529.
Essentially, the parted error messages are meaningless insofar as the new LV is indeed properly formatted, found, and mounted, as shown in the output of parted -l:
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_inet02-lv_guest01: 129GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  129GB  129GB  ext4
I have no idea what is causing parted to report these errors, but they evidently have no impact on the result.
However, the behaviour of virtual machine manager has changed so that it no longer permits the operator to specify an alternate location and image file name unless that file already exists. If one navigates to an alternate location, say /var/vms/lv_guest_01, in the file browser, and that location has no content, then the file browser enters an indefinite wait state which can only be ended by navigating somewhere else in the file system that has content.
In 5.6, one could navigate to an empty directory and then supply a new file name, which would be used to hold the new image. In 6.0 one must first create that file in the desired location; only then can the virtual machine manager use it to save the new image, because only then can it be selected in the file browser.
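The practical workaround is to pre-create the image file from the shell before starting the wizard; something like the following, where the file name and size are only examples in the directory mentioned above:

# mkdir -p /var/vms/lv_guest_01
# qemu-img create -f raw /var/vms/lv_guest_01/guest01.img 20G

The file browser can then select guest01.img as the image for the new guest.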
Otherwise, one has to enter the host's storage options and add storage volumes there. This appears at first blush to give equivalent functionality to the old behaviour but it is far from being obvious to the user.
It was the combination of the parted errors and the unexpected behaviour of the virtual machine manager that had me confused. I inferred that the second issue was a consequence of the first, when in fact the first had no effect and the two had nothing to do with each other.