Hello,
I'm currently setting up a virtualization infrastructure based on CentOS 5.4 x86_64 and KVM.
I want to use LVM because of its flexibility, especially the ability to add more disk space to a guest if needed. We are considering using a hardware RAID1 with BBU as the lowest layer.
My question is: where is it recommended to put the LVM?
1. On the host only. In that case, the guests would see logical volumes as hard drives and would be installed with "plain" partitions (ext3 / swap). When more space is needed, the logical volume (from the host's perspective) / drive (from the guest's perspective) would simply become bigger. This feels simple, but I wonder whether there may be side effects, since a real hard drive cannot grow, and some parts of the system may expect it to stay a given size.
2. On the guest only. On the host, there would only be "plain" raw partitions, which the guests would see as hard drives. The guest would be installed with LVM, and when more space is needed, an additional partition (from the host's perspective) / drive (from the guest's perspective) would be added to the guest's volume group, which would then become bigger, allowing logical volumes to grow. I'm not too comfortable with that, since my understanding of the role of the host is to manage resources for the guests (memory, CPUs), so I'd like to have some powerful abstraction there for disk space as well, which is one purpose of LVM. The guest would also need to be restarted when space is added.
3. On both host and guest. LVM would be installed on the host, which would then provide the logical volumes as hard drives for the guest. The guest would use them as additional drives to be added to its own volume group when it needs to grow. This feels the most flexible, but I wonder whether it is wrong to stack LVM layers upon LVM layers, from a performance/stability/simplicity perspective. The guest would probably have to be restarted when more space is needed (I have not tested this extensively yet). A rough sketch of what I have in mind follows.
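To make option 3 concrete, this is roughly what I imagine (a sketch only, not tested yet; all names are made up):

# on the host: carve out a new logical volume and attach it to the guest as an extra disk
lvcreate -n guest1_extra -L 10G vg_host
# in the guest: turn the new drive into a physical volume and grow the volume group
pvcreate /dev/vdb
vgextend VolGroup00 /dev/vdb
lvextend -L +10G /dev/VolGroup00/LogVol00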
I would be very interested to have your opinion / experience about this before I go further in my testing!
Cheers,
Mathieu
----- "Mathieu Baudier" mbaudier@argeo.org wrote:
- On the host only
This always turns into a total mess when you have anything more than a tiny installation.
- On the guest only
This one just makes no sense. :)
- On both host and guest
This is what I always use and recommend. It doesn't have any side effects with modern software versions, except with layered MD RAID on the host. (I don't know if that has been fixed yet, but I don't think it has.) The only issue is that there is an extra step or two when you resize guest partitions and you're out of PV space in the guest. This also doesn't lock you into this way of doing things. You can always add more targets a la #1.
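For the common case (growing an existing guest PV rather than adding a disk), the sequence is roughly as follows; this is a sketch only, names are made up, and it assumes the virtual disk is used directly as a PV (no partition table in between):

host#  lvextend -L +10G /dev/vg_host/guest1_disk   # grow the LV backing the guest disk
guest# pvresize /dev/vda                           # claim the new space (once the guest sees the new size)
guest# lvextend -L +10G /dev/VolGroup00/LogVol00   # then grow the filesystem on top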
Thanks!
I have tried this, but I don't see how to grow the guest file system without restarting the guest:
- if I grow the underlying logical volume on the host, the guest still sees the hard drive with the old size (checked with fdisk)
- if I add additional logical volumes (host) / drives (guest) in order to add them to the guest volume group, I have to restart the guest
I am using virtio disks.
Is there a command so that the guest notices that its hard drive has grown? Is there a way to add new drives without restarting the guest?
Mathieu Baudier wrote:
Is there a command so that the guest notices that its hard drive has grown? Is there a way to add new drives without restarting the guest?
Yes, you can add / remove disks on a VM without restarting the guest. Look at the xm block-attach / block-detach commands.
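For example (Xen syntax; the domain and volume names are just placeholders):

xm block-attach guest1 phy:/dev/vg_host/guest1_extra xvdb w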
My understanding is that xm is Xen-specific (I'm using Qemu/KVM).
I tried with virsh:
virsh # attach-disk 6 /dev/mapper/vg_alma_fast-lv_test_virtlvm2 vdb
Disk attached successfully
virsh # dumpxml 6
<domain type='kvm' id='6'>
  ...
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    ...
    <disk type='block' device='disk'>
      <source dev='/dev/mapper/vg_alma_fast-lv_test_virtlvm'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='phy'/>
      <source dev='/dev/mapper/vg_alma_fast-lv_test_virtlvm2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    ...
  </devices>
</domain>
But I still cannot see the disk using fdisk: there is no /dev/vdb.
Please note that I'm testing with a minimal CentOS installation (without even the Base group). So maybe it lacks some required daemons (there is no ACPI daemon, for example).
I will try again with an install including the Base group.
Mathieu Baudier wrote:
But I still cannot see the disk using fdisk: there is no /dev/vdb.
You also need to tell the guest that a new device exists... Unless it (the guest) has some hotswap abilities....
Do you know how I can do that?
I reinstalled the guest (CentOS 5.4 x86_64, just as the host) with the default non-desktop groups, but it still doesn't notice when I attach a disk.
I also tried disabling SELinux, to no effect.
I've been googling intensively around the concepts of hotplug, hotswap, PCI, HAL, etc. in relation to virsh/KVM/virtio, but without success.
On the guest, lspci only shows one drive (the initial one, I guess):
[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB Controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Qumranet, Inc. Virtio network device
00:04.0 SCSI storage controller: Qumranet, Inc. Virtio block device
00:05.0 RAM memory: Qumranet, Inc. Virtio memory balloon
Mathieu Baudier wrote:
Do you know how I can do that?
something along
echo - - - > /sys/class/scsi_host/hostX/scan   # yes, the "-" must be there!
might help
Unfortunately there is nothing under scsi:
[root@localhost ~]# ll /sys/class/scsi_*
/sys/class/scsi_device:
total 0

/sys/class/scsi_disk:
total 0

/sys/class/scsi_host:
total 0
I also tried kudzu (http://linux.die.net/man/8/kudzu), but it still shows only the original drive:
[root@localhost ~]# kudzu
[root@localhost ~]# kudzu -p
...
-
class: HD
bus: VIRTIO
detached: 0
device: vda
driver: virtio_blk
desc: "Virtio Block Device"
-
...
While looking around I found this comment (in a bug not directly related):
Yaniv Kaul 2009-05-19 06:53:14 EDT
Hot-add is not supported for RHEV 2.1.
https://bugzilla.redhat.com/show_bug.cgi?id=501468#c1
So maybe it actually cannot work?
Got it!
As per http://www.linux-kvm.org/page/Hotadd_pci_devices, you need to load the acpiphp kernel module:
[root@localhost ~]# modprobe acpiphp
(I could not find pci_hotplug, but it worked without it)
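To have the module loaded automatically at boot, something like this should work on CentOS 5, where rc.sysinit runs /etc/rc.modules if it exists (I have not verified this part):

echo "modprobe acpiphp" >> /etc/rc.modules
chmod +x /etc/rc.modules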
If you then add a new disk from virsh:
virsh # attach-disk 9 /dev/mapper/vg_alma_fast-lv_test_virtlvm2 vdb
Disk attached successfully
It is automatically detected:
[root@localhost ~]# ll /dev/vdb
brw-r----- 1 root disk 253, 16 Feb 10 16:26 /dev/vdb
I could then extend the guest's LVM:
[root@localhost ~]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created
[root@localhost ~]# vgextend VolGroup00 /dev/vdb
  Volume group "VolGroup00" successfully extended
[root@localhost ~]# lvextend -l +100%FREE /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 6.84 GB
  Logical volume LogVol00 successfully resized
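Note that the file system itself still has to be grown to use the new space; with ext3 on CentOS 5 online resizing should work (assuming LogVol00 carries an ext3 file system):

# grow the ext3 file system to fill the resized logical volume
[root@localhost ~]# resize2fs /dev/VolGroup00/LogVol00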
Thanks a lot for your help!!
For reference, here are two interesting posts from the CentOS mailing lists which are related to this topic:
- Similar procedure (with Xen), LVM on both host and guest, adding additional space as disks: http://lists.centos.org/pipermail/centos-virt/2009-September/001161.html
- Extending LVMs (a bit dated): http://lists.centos.org/pipermail/centos/2005-November/013471.html