Hi Virtualizers,
I just set up a CentOS 6 box (at home) to run as a KVM host. It's replacing an absolutely ancient CentOS 5 server running Xen. I have one OS drive, plus two drives in RAID 1 with LVM on top, which is being used as the KVM storage pool.
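For context, the storage stack on the host was put together roughly like this (a sketch; the device names /dev/sdb and /dev/sdc and the VG name vg_kvm stand in for my actual ones):

  # mirror the two data drives, then layer LVM on top
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  pvcreate /dev/md0
  vgcreate vg_kvm /dev/md0

  # expose the volume group to libvirt as the storage pool
  virsh pool-define-as vg_kvm logical --source-name vg_kvm --target /dev/vg_kvm
  virsh pool-start vg_kvm
  virsh pool-autostart vg_kvm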
I created a KVM guest that will run OpenMediaVault (OMV). OMV requires an OS drive (which is really an LVM volume), and a separate drive (or drives) to put all the media on. This is where I'm a little unsure how to proceed. I think I have two options:
1. Let the KVM host manage the drives (i.e. RAID with LVM on top) and just assign the single volume to OMV. OMV will see it as one HD.
2. Assign the individual drives to the OMV KVM, and let OMV manage the RAID creation, management, etc.
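To make the two options concrete, I think the difference boils down to something like this (a sketch; the domain name "omv", the LV path, and the raw disk paths are placeholders):

  # Option 1: host manages RAID+LVM; OMV sees a single virtual disk
  virsh attach-disk omv /dev/vg_kvm/lv_media vdb --persistent

  # Option 2: pass the raw drives through (not part of any host array);
  # OMV builds its own RAID out of them
  virsh attach-disk omv /dev/sdb vdb --persistent
  virsh attach-disk omv /dev/sdc vdc --persistent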
I'm not sure which one will perform better. My hunch is that if RAID management is left at the host level, I'll see better overall performance. Performance isn't exactly my number one goal here, but I don't want to kill it completely by going the wrong way, either.
On the other hand, if I let OMV do the RAID management for the media storage disks, I'll gain future flexibility because it'll be much easier to move OMV to bare metal.
Which way should I go? What would you guys do?
Regards,
Ranbir
On Fri, Aug 10, 2012 at 10:39:18AM -0400, Kanwar Ranbir Sandhu wrote:
> 1. Let the KVM host manage the drives (i.e. RAID with LVM on top) and just assign the single volume to OMV. OMV will see it as one HD.
> 2. Assign the individual drives to the OMV KVM, and let OMV manage the RAID creation, management, etc.
I recommend option 1, simply because of the recovery methodology. If you lose a disk and replace it, and the host controls the RAID, then you have one point of repair and the VMs don't even notice. If, however, each VM does RAID itself, then _each_ VM will need to perform the disk replacement and rebuild, which is a lot of admin overhead. All those simultaneous rebuilds could also cause a lot of disk contention and slow each other down.
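With host-level md RAID, a replacement is the standard mdadm dance, done exactly once (a sketch; /dev/md0 and /dev/sdb stand in for whatever your array and failed member actually are):

  # fail and remove the dead member
  mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb

  # after physically swapping the drive, add the new one and let it rebuild
  mdadm /dev/md0 --add /dev/sdb
  cat /proc/mdstat    # watch the rebuild; the VMs keep running throughout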
Today you only have one VM. Tomorrow? :-)
I agree with Stephen. Option #1 is the way to go.
On all of the KVM nodes I've personally built, I use a hardware RAID controller and let it manage the array. You could use software RAID on the host OS instead, but hardware RAID has its advantages (background array initialization, a battery-backed cache).
Keep the RAID management at the hardware _or_ host OS (software RAID) level, and it will simplify administration.
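If you do go with md software RAID on the host, that single point of administration looks something like this (the array name is a placeholder):

  # quick health check of all md arrays
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # have the mdadm monitor mail you when an array degrades
  mdadm --monitor --scan --daemonise --mail=root@localhost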
---~~.~~--- Mike // SilverTip257 //
On Fri, Aug 10, 2012 at 11:00 AM, Stephen Harris lists@spuddy.org wrote:
> I recommend option 1, simply because of the recovery methodology. [...]
On 8/10/12, Kanwar Ranbir Sandhu m3freak@thesandhufamily.ca wrote:
> 1. Let the KVM host manage the drives (i.e. RAID with LVM on top) and just assign the single volume to OMV. OMV will see it as one HD.
> 2. Assign the individual drives to the OMV KVM, and let OMV manage the RAID creation, management, etc.
I usually go with #1 now because it keeps the VM simpler and makes it easy to add further VMs later.
> I'm not sure which one will perform better. My hunch is that if RAID management is left at the host level, I'll see better overall performance. Performance isn't exactly my number one goal here, but I don't want to kill it completely by going the wrong way, either.
> On the other hand, if I let OMV do the RAID management for the media storage disks, I'll gain future flexibility because it'll be much easier to move OMV to bare metal.
You could probably plan for this by setting things up in advance to make the move easier down the road.
Right now, for want of a better/simpler solution, I'm setting up a degraded mdadm RAID 1 within the VM. The idea is that any time I want to move the VM to bare metal or to another host, I can just add a drive (or map one in), let it sync, shut the VM down, move the drive, and theoretically boot it up on the new machine.
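In case it's useful to anyone, the degraded mirror setup inside the VM is just this ("missing" is the literal keyword mdadm expects; the device names are placeholders):

  # one-legged RAID 1; "missing" reserves the slot for a future second disk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb missing

  # at migration time, attach a second disk to the VM and let it sync
  mdadm /dev/md0 --add /dev/vdc
  cat /proc/mdstat    # wait for the resync to finish before moving the drive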