[CentOS] Question re: CentOS-6.0, KVM, and /dev/sr0

Tue Aug 30 22:57:09 UTC 2011
psprojectplanning at gmail.com <psprojectplanning at gmail.com>

On 29/08/2011 15:46, James B. Byrne wrote:
> I am experimenting with KVM and I wish to create a virtual machine
> image in a logical volume.  I can create the new lv without problem
> but when I go to format its file system then I get these warnings:
>
> Warning: WARNING: the kernel failed to re-read the partition table
> on /dev/sda (Device or resource busy).  As a result, it may not
> reflect all of your changes until after reboot.
> Warning: Unable to open /dev/sr0 read-write (Read-only file system).
>   /dev/sr0 has been opened read-only.
>
> When I take a look at things using parted I see this:
>
> # parted -l print
> Model: ATA WDC WD5000AAKS-0 (scsi)
> Disk /dev/sda: 500GB
> Sector size (logical/physical): 512B/512B
> Partition Table: msdos
>
> Number  Start   End    Size   Type     File system  Flags
>   1      1049kB  525MB  524MB  primary  ext4         boot
>   2      525MB   500GB  500GB  primary               lvm
>
>
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/vg_inet02-lv_guest01: 129GB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
>
> Number  Start  End    Size   File system  Flags
>   1      0.00B  129GB  129GB  ext4
>
>
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/vg_inet02-lv_log: 1049MB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
>
> Number  Start  End     Size    File system  Flags
>   1      0.00B  1049MB  1049MB  ext4
>
>
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/vg_inet02-lv_tmp: 8389MB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
>
> Number  Start  End     Size    File system  Flags
>   1      0.00B  8389MB  8389MB  ext4
>
>
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/vg_inet02-lv_home: 4194MB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
>
> Number  Start  End     Size    File system  Flags
>   1      0.00B  4194MB  4194MB  ext4
>
>
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/vg_inet02-lv_swap: 8321MB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
>
> Number  Start  End     Size    File system     Flags
>   1      0.00B  8321MB  8321MB  linux-swap(v1)
>
>
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/vg_inet02-lv_root: 53.7GB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
>
> Number  Start  End     Size    File system  Flags
>   1      0.00B  53.7GB  53.7GB  ext4
>
>
> Warning: Unable to open /dev/sr0 read-write (Read-only file system).
>   /dev/sr0
> has been opened read-only.
> Error: /dev/sr0: unrecognised disk label
>
> The host system is CentOS-6.0 with updates applied.  I did a manual
> disc configuration on initial install but I do not recall
> specifically dealing with /dev/sr0 at any point.
>
> Can anyone explain to me what is happening here and what I should
> do?  Am I constrained to reboot the server each time that I make
> changes to an LV?  Is there some configuration change I need to make to
> the base system?
>
> The favour of a direct copy of any reply to the mailing list is
> requested as I am a digest subscriber.
>
You do not need to reboot every time you adjust a Logical Volume. Also, do 
you actually need to format a file system on a KVM guest's Logical Volume at all?
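
Either way, LV changes take effect immediately; on my test box something 
like the following works without any reboot (lv_guest02 is just an example 
name, your VG and LV names will differ):

# lvcreate -L 20G -n lv_guest02 vg_inet02
# mkfs.ext4 /dev/vg_inet02/lv_guest02

As I understand it, the /dev/sda warning only means the kernel could not 
re-read that disk's partition table because it is in use; the new LV itself 
is available straight away.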

I'm currently juggling servers to try to free up a machine to test KVM on 
CentOS 6, but I have recently found, with another RHEL clone I'm testing, 
that if you do not set up the logical volume with virsh (or, I suppose, 
virt-manager) you will have issues getting the guest machines to run.

If you look at sections 26.1.4.1 and 26.1.4.2 of the Red Hat Virtualization 
Guide for RHEL 6, they explain how to use fdisk to create a partition for 
the Logical Volume, set it to the Linux LVM type and create the storage 
pool for the KVM guests (pages 217 and 218).
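
The fdisk part of it boils down to something like this (the device name is 
only an example; in my case the partition was /dev/cciss/c0d0p3 on HP 
hardware):

# fdisk /dev/sdb
    n    (new primary partition)
    t    (set the partition type)
    8e   (Linux LVM)
    w    (write the table and exit)

That partition is then what you hand to virsh as the source device for the 
storage pool.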

On my current RHEL clone test system, to create the volume group / storage 
pool I used the virsh commands on pages 222 and 223 of the Red Hat 
Virtualization Guide (which were similar to the following):

# virsh pool-define-as guest_images_lvm logical - - /dev/cciss/c0d0p3 \
      libvirt_lvm /dev/libvirt_lvm
# virsh pool-build guest_images_lvm
# virsh pool-start guest_images_lvm
# virsh pool-autostart guest_images_lvm
# virsh pool-list --all

     Name                 State      Autostart
     -----------------------------------------
     guest_images_lvm     active     yes
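
If you want to check that pool-build really did create the volume group on 
that device, something like this should show it (libvirt_lvm being the 
source name given to pool-define-as above):

# virsh pool-info guest_images_lvm
# vgs libvirt_lvm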

To create the actual logical volume for the virtual machine I used the 
following command:
# virsh --connect qemu:///system vol-create-as guest_images_lvm volume1 20G
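
Something along these lines should then hand that volume to the installer 
as the guest's disk (the guest name and ISO path are just placeholders):

# virt-install --name guest01 --ram 1024 \
      --disk path=/dev/libvirt_lvm/volume1 \
      --cdrom /path/to/install.iso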

I don't remember formatting a file system prior to installing the KVM 
guest, but I am new to KVM and I'm experimenting as well.

jk