James -
My additional comments / questions are interleaved within your comments below to specifically address certain points you raised.
----- Original Message ----- From: "James B. Byrne" byrnejb@harte-lyne.ca To: "Jeff Boyce" jboyce@meridianenv.com Cc: centos-virt@centos.org Sent: Tuesday, December 20, 2011 1:41 PM Subject: Re: [CentOS-virt] Confusion over steps to add new logical volume to guest VM
On Mon, December 19, 2011 18:04, Jeff Boyce wrote:
Greetings -
I am hoping someone can confirm for me the steps that I am using to add an LV to an existing Guest in KVM, and what I am seeing as I do some of these steps.
I think that you will find it easier to create guest storage volumes entirely from within virt-manager or virsh rather than manipulating them directly on the host. I have done so in the past, but it adds a layer of complexity to the process that yields no discernible benefit.
I considered that option, but since I was not that familiar with virsh, I find that I am more comfortable using the LVM GUI on the host to create the volumes I want and then importing them into the storage pool for the guests. I have one VG on my host that I am carving up for a few guests. I am planning on using only 2 or 3 LVs per guest, and I have a naming scheme tied to the guest name so that the volumes are easy to keep track of and manage.
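For example, the host-side setup I use looks roughly like this (a sketch; vg_host and guest1 are placeholder names, and the sizes are examples):

```shell
# On the host: create an LV in the existing VG for the guest.
# vg_host is the host volume group, guest1 the guest name (both assumed).
lvcreate -L 20G -n guest1-root vg_host

# Expose the VG to libvirt as a "logical" storage pool so the LVs
# show up in virt-manager's storage browser:
virsh pool-define-as vg_host logical --source-name vg_host --target /dev/vg_host
virsh pool-start vg_host
virsh pool-autostart vg_host
virsh pool-refresh vg_host      # pick up LVs created outside libvirt
```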
Here is what I have hit upon in my own explorations of kvm:
- Create a virtual storage pool and add it to the host.
I use an lv on the host.
- Create initial guest instance and allocate a new volume from the storage pool using virt-manager -> details -> storage window through the guest storage browser. Name the new storage volume something related to the vm guest name. Complete creating the vm guest.
- To add additional storage to an existing vm guest, first open the guest's -> details -> hardware menu tab and then select Add Storage.
- In the guest hardware storage window select VirtIO type, raw format, and press the browse button.
- In the host storage window select the storage pool to allocate storage from. Select add a New Volume.
- Assign a storage volume name (some variant of the base storage volume name, such that all volumes assigned to a single guest appear together in the host storage volume window, works best for me) and set the new volume size. Refresh the host storage display, select the new volume name, and return to the guest storage window.
- Push the Finish button. Restart the guest.
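The same flow can also be done with virsh instead of the GUI; roughly like this (the pool, volume, and guest names are examples, not anything from your setup):

```shell
# Create a new 10G volume in the storage pool, then attach it to the
# running guest as a virtio disk (target vdb => /dev/vdb in the guest).
virsh vol-create-as vg_host guest1-data 10G
virsh attach-disk guest1 /dev/vg_host/guest1-data vdb \
    --driver qemu --subdriver raw --persistent
```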
Yep, I am getting pretty comfortable going through steps 1-9 now.
- Now open the guest console, find the newly added device (fdisk -l), say /dev/vdb for example, and partition it using fdisk or parted. I always make one partition for the entire device. Refresh the devices using parted.
I leave my LVs as one partition, and put the file system (ext4) on them when I create them.
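As a sketch, the partition-the-whole-device step above might look like this inside the guest (/dev/vdb is just the example device name):

```shell
# Inside the guest, after the restart: find and partition the new disk.
fdisk -l                                  # new device should appear, e.g. /dev/vdb
parted /dev/vdb mklabel msdos             # write a new partition table
parted /dev/vdb mkpart primary 0% 100%    # one partition spanning the device
partprobe /dev/vdb                        # have the kernel re-read the table
```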
- Now add the newly partitioned device to the guest's own vg using the normal lvm tools.
I don't have a seperate VG for/within the guest. Only one VG for all LVs, and that is on the host. I chose this option in order to provide flexibility in managing the LVs from the host system. The only potential drawback to this is managing the number of LVs on the host, but as I mentioned above I think I have that covered.
- Now create new or expand existing lvs on the guest using lvm.
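Roughly, those two steps inside the guest come down to the following (VolGroup00 and lv_var are assumed names for the guest's VG and an existing LV):

```shell
# Inside the guest: put the new partition under LVM control,
# add it to the guest VG, then grow an existing LV and its filesystem.
pvcreate /dev/vdb1
vgextend VolGroup00 /dev/vdb1
lvextend -L +10G /dev/VolGroup00/lv_var
resize2fs /dev/VolGroup00/lv_var     # grow the ext filesystem to match the LV
```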
The only trouble I had, well towards the end the only trouble that I had left, was discovering that a VirtIO storage volume is not automatically partitioned when created. Until it had a partition I could not add it to the guest's vg even though I could see the device.
I initially thought that I might have hit this issue early on while going through my process to get the additional LV added to the guest, but that is not the case. Since I created the LV on the host using the LVM GUI, it created a file system on it as well.
This helps a little, since I am flying solo in my office without anyone to bounce my thoughts and ideas off of. I am still trying to understand what I saw after I manually mounted the new LV (/dev/vdb) within my guest, and how to move my /var directory over to /dev/vdb, as I described in points 4, 5, and 6 from my original message (listed again below). This is the task in front of me now, and the one I am concerned about and need to complete. (I know it is a test environment, but I want to have a little knowledge about what I am doing before I do it.) Any additional advice about this would be great.
4. At this point I was going to move /var to /mnt/var but decided to check the mount first:
[root@disect mnt]# cd /mnt/var
[root@disect var]# ls
cache  lib  lock  log  lost+found  run
5. I am wondering about the source of these directories and all the files under them. My assumption is that these were created along with the LV's file system when I created it. Is that a correct assumption? How could I confirm it?
6. Can I just add a /var directory to this and move my /var from /vda2 over to /vdb?
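For what it is worth, the rough sequence I have in mind for the move in points 4-6 is below (a sketch only; device and path names are examples, and I understand this is safest done from single-user mode or a rescue environment so nothing is writing to /var during the copy):

```shell
# Copy /var onto the new device, then switch the mount over.
mount /dev/vdb /mnt/var
rsync -avx /var/ /mnt/var/           # copy contents, preserving ownership/perms
mv /var /var.old                     # keep the original until verified
mkdir /var
echo '/dev/vdb  /var  ext4  defaults  1 2' >> /etc/fstab
mount /var
# after a reboot and a sanity check, /var.old can be removed
```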
HTH.
-- *** E-Mail is NOT a SECURE channel *** James B. Byrne mailto:ByrneJB@Harte-Lyne.ca
Thanks for your input. Jeff Boyce