On 20/10/13 02:22, John R Pierce wrote:
In our development lab, I am installing 4 new servers that I want to use for hosting KVM. Each server will have its own direct-attached RAID. I'd love to be able to 'pool' this storage, but over GigE I probably shouldn't even try.
I've built DRBD-backed shared storage using a 1 Gbit network for replication for years, and the network has not been an issue. As long as your apps can work with ~110 MB/sec max throughput, you're fine.
Latency is not affected, because the average seek time of a platter, even on 15k rpm SAS drives, is higher than the network latency (assuming decent equipment).
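For reference, a minimal DRBD resource definition for this kind of two-node, GigE-replicated setup might look like the sketch below. The hostnames, backing disk, and addresses are placeholders; protocol C is the fully synchronous mode, which is why the network latency point above matters.

```
resource r0 {
    protocol C;                 # synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;    # backing partition on the local RAID
        address   192.168.10.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.2:7789;
        meta-disk internal;
    }
}
```

With a dedicated replication link, writes are acknowledged only after they land on both nodes, so the ~110 MB/sec GigE ceiling is the effective write throughput limit.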
Most of the VMs will be running CentOS 5 and 6. Some of the VMs will be PostgreSQL database dev/test servers, others will be running Java messaging workloads and various test jobs.
To date, my experience with KVM is bringing up one CentOS 6 VM on a CentOS 6 host, manually with virt-install and virsh...
Stupid questions...
What's the best storage setup for KVM when using direct-attached RAID? Surely using disk image files on a parent ext4/XFS file system isn't the best for performance? Should I use host LVM logical volumes as guest vdisks? We're going to be running various database servers in dev/test and want at least one or another at a time to really be able to get some serious IOPS.
What makes the most difference is not the RAID configuration but having battery-backed (or flash-backed) write caching. With multiple VMs doing heavy disk I/O, the workload gets random in a hurry. The cache keeps the systems responsive even under these highly random writes.
As for the storage type: I use clustered LVM (with DRBD as the PVs) and give each VM a dedicated LV, as you mentioned above. This takes the filesystem overhead out of the equation.
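In practice that amounts to carving an LV per guest and pointing virt-install at the block device instead of an image file. A sketch, assuming a volume group named "vg_vms" sitting on the DRBD-backed PV (both names, sizes, and the install URL are placeholders):

```shell
# Carve a dedicated LV for the guest out of the shared VG.
lvcreate -L 20G -n vm01_disk0 vg_vms

# Hand the raw LV to the guest; cache=none avoids double-caching
# in the host page cache, virtio gives paravirtualized disk I/O.
virt-install --name vm01 --ram 2048 --vcpus 2 \
    --disk path=/dev/vg_vms/vm01_disk0,bus=virtio,cache=none \
    --os-variant rhel6 \
    --location http://mirror.centos.org/centos/6/os/x86_64/ \
    --graphics vnc
```

For database guests, cache=none plus virtio is the usual starting point; the controller's battery-backed cache then does the write coalescing instead of the host.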
Is virt-manager worth using, or is it too simplistic/incomplete?
I use it from my laptop, via an ssh tunnel, to the hosts all the time. I treat it as a "remote KVM" switch as it gives me access to the VMs regardless of their network state. I don't use it for anything else.
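You don't even need a manual tunnel; virt-manager can open the SSH connection itself. A sketch, with "kvm-host1" standing in for your hostname:

```shell
# Connect virt-manager to a remote host over SSH (libvirt's
# qemu+ssh transport); no separate tunnel setup required.
virt-manager -c qemu+ssh://root@kvm-host1/system
```

Once connected, the console works regardless of the guest's own network state, which is what makes it useful as a "remote KVM" switch.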
Will virt-manager or some other tool 'unify' management of these 4 VM hosts, or will it pretty much be me-the-admin keeping track of which VM is on which host, running the right virt-manager, and managing it all fairly manually?
Depends on what you mean by "manage". You can use virt-manager on your main computer to connect to all four hosts (and even set them to auto-connect on start). From there, it's trivial to boot, connect to, and shut down the guests.
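For scripted checks across all four hosts, plain virsh over the same qemu+ssh transport works too. A sketch, assuming hypothetical hostnames kvm-host1 through kvm-host4:

```shell
# List all guests (running and shut off) on each host in turn.
for h in kvm-host1 kvm-host2 kvm-host3 kvm-host4; do
    echo "== $h =="
    virsh -c "qemu+ssh://root@$h/system" list --all
done
```

That still leaves you tracking which guest lives where; true unified management (placement, migration) is what clustering tooling adds on top.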
If you're looking for high availability of your VMs (setting up your servers in pairs), this might be of interest:
https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial