In our development lab, I'm installing 4 new servers that I want to
use for hosting KVM. Each server will have its own direct-attached
RAID. I'd love to be able to 'pool' this storage, but over GigE, I
probably shouldn't even try.
Most of the VMs will be running CentOS 5 and 6. Some of the VMs will
be PostgreSQL database dev/test servers; others will be running Java
messaging workloads and various test jobs.
To date, my experience with KVM is bringing up one C6 VM on a C6 host,
manually with virt-install and virsh...
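For reference, that manual bringup looked something like this (guest
name, paths, and sizes here are just examples):

    # minimal CentOS 6 guest backed by a local disk image file
    virt-install --name c6test --ram 2048 --vcpus 2 \
        --disk path=/var/lib/libvirt/images/c6test.img,size=20 \
        --location http://mirror.centos.org/centos/6/os/x86_64/ \
        --graphics none --extra-args 'console=ttyS0'

    # day-to-day start and console access via virsh
    virsh start c6test
    virsh console c6test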
Stupid questions ...
What's the best storage setup for KVM when using direct-attached RAID?
Surely using disk image files on a parent ext4/XFS file system isn't the
best for performance? Should I use host LVM logical volumes as guest
vdisks? We're going to be running various database servers in dev/test,
and we want at least one or another at a time to really be able to get
some serious IOPS.
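In case it makes the question clearer, this is the kind of setup I
mean: an LV carved out of the RAID-backed volume group, handed to the
guest as a raw block device (volume group, LV, and guest names are
hypothetical):

    # create a logical volume on the RAID-backed volume group
    lvcreate -L 100G -n pgdata vg_raid
    # attach it to the guest as a second virtio disk,
    # bypassing the host file system entirely
    virsh attach-disk c6test /dev/vg_raid/pgdata vdb --persistent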
Is virt-manager worth using, or is it too simplistic/incomplete?
Will virt-manager or some other tool 'unify' management of these 4 VM
hosts, or will it pretty much be me-the-admin keeping track of which VM
is on which host, running the right virt-manager, and managing it all
fairly manually?
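I know virsh can at least talk to each host remotely over ssh, e.g.
(hostnames are placeholders):

    # list domains on each host over ssh, one connection per host
    virsh -c qemu+ssh://kvmhost1/system list --all
    virsh -c qemu+ssh://kvmhost2/system list --all

but that's still per-host, not a unified view.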
"That may be the easy way, but
its not the Cowboy Way"
--
john r pierce 37N 122W
somewhere on the middle of the left coast