Greetings -
I will be getting a new server for my company in the next few months and am trying to get a better understanding of some of the basic theoretical approaches to using CentOS KVM on a server. I have found plenty of things to read that discuss how to install KVM on the host, how to install a guest, how to set up the network bridge, and all the other rudimentary tasks. But all of these how-tos assume that I already have a philosophical understanding of how KVM fits into the design of how I use my server, or network of virtual servers as the case may be. So in general I am looking for some opinions or guidance from KVM experts or system administrators to help direct me along my path. Pointers to other how-tos or blogs are appreciated, but I have run out of Google search terms over the last month and haven't really found enough information to address my specific concerns. I suspect my post might get a little long, so before I give more details of my objectives and specific questions, please let me know if there is a better forum for me to post my question(s).
The basic questions I am trying to understand are: (1) what functions in my network should my host be responsible for, (2) what functions should logically be separated into different VMs, and (3) how should I organize disks, RAID, LVM, partitions, etc. to make the best use of how my system will function? Now, I know these questions are wide open without any context about the purpose(s) of my new server and my existing background and knowledge, so that is the next part.
I am an ecologist by education, but I manage all the computer systems for my company of a dozen staff. I installed my current Linux server about 7 years ago (RHEL3), primarily as a Samba file server. Since then the functions of this server have expanded to include VPN access for staff and FTP access for staff and clients. Along the way I have gained knowledge and implemented various other functions that are primarily associated with managing the system, such as tape backups, UPS shutdown configuration, Dell's OMSA hardware monitoring, and network time keeping. I am certainly not a Linux expert; my philosophy is to learn as I go and document it so that I don't have to relearn it again. My current server is a Dell (PE2600) with 1 GB RAM and 6 drives in a RAID 5 configuration, without LVM. I have been blessed with a very stable system with only a few minor hiccups in 7 years. My new server will be a Dell (T610) with 12 GB RAM, 4 drives in a RAID 5 configuration, and an iDRAC6 Enterprise card.
The primary function of my new server hardware will be as the Samba file server for the company. It may also provide all, or a subset of, the functions my existing server provides. I am considering adding a new gateway box (ClearOS) to my network and could possibly move some functions (FTP, VPN, etc.) to it if appropriate. There are also some new functions that my server will probably be responsible for in the near future (domain controller, groupware, open calendar, client backup system [BackupPC]). I am specifically planning on setting up at least one guest VM as a space to test and set up configurations for new functions for the server before making them available to all the staff.
So, to narrow down my first two questions: how should these functions be organized between the host system and any guest VMs? Should the host be responsible only for hardware maintenance and monitoring (OMSA, APC shutdown), or should it also handle the primary function of the hardware (Samba file server)? Should remote-access functions (FTP & VPN) be segregated off the host and onto a guest? Or should these be put on the gateway box?
I have never worked with LVM yet, and I am trying to understand how I should set up my storage space and allocate it to the host and any guests. I want to use LVM, because I see the many benefits it brings for flexible management of storage space. For my testing guest VM I would probably use an image file, but if the Samba file server function is in a guest VM I think I would rather have that as a raw LV partition (I think?). The more I read the more confused I get about understanding the hierarchy of the storage (disks.RAID.[PV,VG,LV].Partition.Image File) and how I should be looking at organizing and managing the file system for my functions. With this I don't even understand it enough to ask a more specific question.
Thanks to anyone who has had the patience to read this much. More thanks to anyone who provides a constructive response.
Jeff Boyce
Meridian Environmental
On 14.09.2011, at 18:51, Jeff Boyce wrote:
The primary function of my new server hardware will be as the Samba file server for the company.
I would put this in a VM. Use the dom0 (or whatever it is called in KVM terminology) purely for managing hardware and virtualization; don't put anything else there.
The main reasoning for this is that a compromise (hack) of dom0 would provide access to all your VMs, while a compromised VM has no access to dom0 or to other VMs. Fewer services on dom0 = less chance of a successful attack.
It may also provide all, or a subset of, the functions my existing server provides. I am considering adding a new gateway box (ClearOS) to my network and could possibly move some functions (FTP, VPN, etc.) to it if appropriate.
VPN is probably a good thing to move there, depending on how your network is configured of course. Not sure why you would put FTP there, but you probably have your reasons.
There are also some new functions that my server will probably be responsible for in the near future (domain controller, groupware, open calendar, client backup system [BackupPC]).
I would put these all inside virtual machine(s).
I have never worked with LVM yet, and I am trying to understand how I should set up my storage space and allocate it to the host and any guests.
My recommendation would be to use LVM on the whole disk, with a small partition for your dom0 OS. This gives you maximum flexibility to change things later on.
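For example, a rough sketch of that layout on the new box (the /dev/sda device and the VG/LV names below are just placeholders for whatever your RAID controller presents):

# sda1 = small /boot, sda2 = a modest root for the dom0 OS, sda3 = everything else for LVM
pvcreate /dev/sda3                     # turn the big partition into an LVM physical volume
vgcreate vg_guests /dev/sda3           # one volume group to hold all guest storage
lvcreate -L 50G -n lv_samba vg_guests  # carve out an LV per guest as you need them
vgs; lvs                               # see what you have and how much space is left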
I want to use LVM, because I see the many benefits it brings for flexible management of storage space. For my testing guest VM I would probably use an image file, but if the Samba file server function is in a guest VM I think I would rather have that as a raw LV partition (I think?).
With LVM, you can mount an LV on your dom0 and put an image file in it, or you can connect the VM directly to your LV. See the flexibility it gives you?
Personally I would attach the VMs directly to logical volumes, but this is a personal preference and is what I'm comfortable with. As you have discovered, there are dozens of ways to achieve the same goal and not one of them is the "right" way!
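As a rough illustration of the direct-LV approach (the names, sizes and install URL below are made up; virt-install will happily take an LV path in --disk):

lvcreate -L 100G -n lv_fileserver vg_guests
virt-install --name fileserver --ram 2048 --vcpus 2 \
  --disk path=/dev/vg_guests/lv_fileserver \
  --network bridge=br0 \
  --location http://mirror.centos.org/centos/6/os/x86_64/ \
  --nographics --extra-args "console=ttyS0"

The guest sees the LV as a plain virtio disk and partitions it however its installer likes.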
The more I read the more confused I get about understanding the hierarchy of the storage (disks.RAID.[PV,VG,LV].Partition.Image File) and how I should be looking at organizing and managing the file system for my functions. With this I don't even understand it enough to ask a more specific question.
Some examples might help. This is what I do:
Disks -> RAID -> PV -> VG -> LV -> VM filesystem /
Disks -> RAID -> PV -> VG -> LV -> VM filesystem /boot
Disks -> RAID -> PV -> VG -> LV -> VM filesystem swap
I like this method because it makes it easy to take a snapshot of the VM filesystem and run a backup from dom0 (this yields a consistent backup with no interruption to the running VM). There is a rough sketch of this below, after the alternatives.
or
Disks -> RAID -> PV -> VG -> LV -> VM's partition map -> VM filesystems (/, /boot, swap, etc)
or
Disks -> RAID -> PV -> VG -> LV -> VM's partition map -> PV -> VG -> LV -> VM filesystems
Which setup you choose depends on how much flexibility you want, and whether you want to manage LVM inside the VM or not. LVM inside the guest allows more flexibility inside the VM...
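Here is roughly what that snapshot-and-backup step looks like for the first layout (LV names and sizes are placeholders; strictly speaking the snapshot is only crash-consistent unless you sync/quiesce the guest first):

# lv_samba_root holds the guest's root filesystem in this example
lvcreate -s -L 5G -n lv_samba_snap /dev/vg_guests/lv_samba_root  # snapshot with 5G of COW space
mkdir -p /mnt/snap
mount -o ro /dev/vg_guests/lv_samba_snap /mnt/snap
tar czf /backup/samba-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg_guests/lv_samba_snap  # don't leave snapshots around; they slow down writes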
HTH
--
Chris Wik
Anu Internet Services Ltd
www.cwik.ch | www.anu.net
On 9/14/11 1:41 PM, Chris Wik wrote:
On 14.09.2011, at 18:51, Jeff Boyce wrote:
[...]
The more I read the more confused I get about understanding the hierarchy of the storage (disks.RAID.[PV,VG,LV].Partition.Image File) and how I should be looking at organizing and managing the file system for my functions. With this I don't even understand it enough to ask a more specific question.
Some examples might help. This is what I do:
Disks -> RAID -> PV -> VG -> LV -> VM filesystem /
Disks -> RAID -> PV -> VG -> LV -> VM filesystem /boot
Disks -> RAID -> PV -> VG -> LV -> VM filesystem swap
I like this method because it makes it easy to take a snapshot of the VM filesystem and run a backup from dom0 (yields a consistent backup with no interruption to the running VM)
or
Disks -> RAID -> PV -> VG -> LV -> VM's partition map -> VM filesystems (/, /boot, swap, etc)
or
Disks -> RAID -> PV -> VG -> LV -> VM's partition map -> PV -> VG -> LV -> VM filesystems
Which setup you choose depends on how much flexibility you want, and whether you want to manage LVM inside the VM or not. LVM inside the guest allows more flexibility inside the VM...
I use host LVs as whole-disk block devices for the guests (the second alternative); in my experience, guest installs with anaconda+kickstart are easier on a single drive than with multiple LVs passed in (the first alternative). I don't think KVM can map LVs to guest partitions the way Xen can (e.g. lv_root -> xvda1, lv_var -> xvda2), so I think each of your filesystems would end up as its own vda, vdb, vdc, etc. (just a hunch, though). Also, it's a lot of LVs to manage on the host, depending on how many partitions you use per system (4 or 5 VMs with a half dozen partitions each...).
Chris, do you fsync your guest before you snapshot the guest's LV, to make sure its filesystem is clean?
The first alternative provides *very* easy access to the filesystems from the host. For the second alternative, "kpartx" on the host lets you access the guest's partitions if you need to. The third alternative is very confusing if you want to mount guest volumes from the host for any sort of emergency data needs, because the LVM VG names can conflict (vg_main in the guest can't coexist with vg_main in the host).
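For the second alternative, the kpartx dance from the host looks something like this (the LV name is made up, and the exact /dev/mapper names kpartx creates vary between versions, so check its output):

kpartx -av /dev/vg_guests/lv_fileserver   # map the guest's partitions on the host
# kpartx prints the new /dev/mapper entries it created, one per partition in the guest's table
mount /dev/mapper/vg_guests-lv_fileserver1 /mnt/rescue   # mapper name shown is illustrative
# ... copy off whatever data you need ...
umount /mnt/rescue
kpartx -dv /dev/vg_guests/lv_fileserver   # tear the mappings down again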
Eric
On 09/14/2011 09:51 AM, Jeff Boyce wrote:
[...]
Hey Jeff,
Chris and Eric bring up interesting options, which deserve consideration and understanding. I don't necessarily disagree with anything they've said, but I've taken a different tack with virtualization of SMB servers. Disclaimer: while I've been using VMware Server 2 in the past, I'll be migrating to KVM/CentOS soon, and what I'd like to write about here is independent of platform.
Let me begin by agreeing with Chris that the host machine should do only what needs to be done by the host. The only service (per se) that I run on a host is ntp. VMs typically have difficulty keeping time, so I run an ntp server on the host, and all VMs use the host's time server for synchronization.
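Concretely, that just means each guest points ntpd at the host's LAN/bridge address, something like this (192.168.1.1 is a placeholder for your host's address):

# on each guest, add the host as the time source and restart ntpd
echo 'server 192.168.1.1 iburst' >> /etc/ntp.conf
service ntpd restart
chkconfig ntpd on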
Generally speaking, virtualization brings a certain freedom to how various pieces of software are brought together to act in concert. Think of it this way: if there were no limit to the number of servers you could have, how would you do things then?
One might organize servers by network topology, in which case there would be a network server (gateway, firewall, vpn, dns, dhcp), a WAN server (web, ftp, mail), and a LAN server (file services, domain controller). This fits nicely with the IPCop model of a firewall.
One might then subdivide these servers further into roles, in which case the WAN server could be replaced by separate web, mail, and ftp servers. The LAN server could be split into a separate data server and domain controller.
Now, take all of your ideal logical servers (and the networking which ties them all together), and make them VMs on your host. I've done this, and these are the VMs I presently have (the list is still evolving):
.) net (IPCop distro, provides network services, WAN/DMZ/LAN)
.) web (DMZ/STOR)
.) ftp (DMZ/STOR)
.) mail (DMZ/STOR)
.) domain control (LAN/STOR)
.) storage (LAN/STOR)
One aspect that we haven't touched on is network topology. I have 2 nics in the host, one for WAN and one for LAN. These are both bridged to the appropriate subnet. I also have host-only subnets for DMZ and STORage. The DMZ is used with IPCop port forwarding giving access to services from the internet. The STOR subnet is sort of a backplane, used by servers to access the storage VM, which provides access to user data via SMB, NFS, AFP, and SQL. All user data is accessed via this storage VM, which has access to raw (non-virtual) storage.
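Under KVM/libvirt, the equivalent of a "host-only" subnet like STOR would be an isolated network (one defined without a <forward> element). A hypothetical definition, with made-up names and addresses:

cat > /tmp/stor.xml <<'EOF'
<network>
  <name>stor</name>
  <bridge name='virbr-stor'/>
  <ip address='10.0.10.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define /tmp/stor.xml
virsh net-start stor
virsh net-autostart stor
# guests then get an extra NIC on it, e.g. --network network=stor at install time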
Which brings us to storage. With the size, speed, and affordability of hard drives these days, I believe that raid-5 and LVM are not well suited to an SMB server.
Raid-5 requires specialized hardware to do efficiently, and I don't like the idea of storage being tied to a single vendor if it's not absolutely necessary. You can't simply take the drives from a hardware raid array and put them on another manufacturer's controller. With software raid you can, but software raid-5 can be burdensome on the CPU. So I prefer raid-1 (or raid-10 if needed), which I use exclusively.
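With Linux software raid, that looks something like this (drive and array names are placeholders):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # raid-1 pair for the OS
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[cdef]1       # raid-10 set for data
mdadm --detail --scan >> /etc/mdadm.conf   # record the arrays so they assemble at boot
cat /proc/mdstat                           # watch the initial sync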
LVM provides flexible storage in a single host environment, but with virtualization, I fail to see the point/benefit of it. Raid-10 allows a filesystem to grow beyond the limits of a single disk. Other than that, what does LVM really do for you? Would you rather have a VM reside on the host in its own LV, or simply in its own directory? I'll take the directory, thank you. In a virtualized environment, I see LVM as complexity with no purpose and have ceased using it, both in the host and (especially) in the VMs.
In the configuration of VMs described above, I use 2 small drives (even 80G would be overkill) for the host and system software (raid-1), and all other drives (as big as you need) for data storage (raid-10). You can think of the data storage server as a virtualized NAS machine, and you wouldn't be far off. This way, space requirements are easily managed. Allowing 8G for each machine (host plus each VM) is more than ample; if you don't have X Windows on a machine, 4G would be sufficient. Your user data space can be as big as you need. If/when you outgrow the capacity of internal drives, you can move the storage VM out to a separate host that has adequate capacity.
Which brings us to another aspect of this configuration: it scales well. As the demands on the server approach its capacity (after appropriate tuning is done), the resource-intensive parts can be implemented on additional hosts. If/when you reach that point will depend on the number of users and the types of demand put on it, as well as how well your servers are tuned.
This all may be a bit more complicated than you intend to get, but you can take it for what it's worth. If you'd like to discuss the possibilities further regarding your situation, please feel free to subscribe to the list at https://lists.tagcose.com/mailman/listinfo/list for help with such a server. I hope to 'see' you there. :)
On Fri, 2011-09-16 at 10:46 -0700, Eric Shubert wrote:
... Now, take all of your ideal logical servers (and the networking which ties them all together), and make them VMs on your host. I've done this, and these are the VMs I presently have (the list is still evolving):
.) net (IPCop distro, provides network services, WAN/DMZ/LAN)
.) web (DMZ/STOR)
.) ftp (DMZ/STOR)
.) mail (DMZ/STOR)
.) domain control (LAN/STOR)
.) storage (LAN/STOR)
One aspect that we haven't touched on is network topology. I have 2 nics in the host, one for WAN and one for LAN. These are both bridged to the appropriate subnet. I also have host-only subnets for DMZ and STORage. The DMZ is used with IPCop port forwarding giving access to services from the internet. The STOR subnet is sort of a backplane, used by servers to access the storage VM, which provides access to user data via SMB, NFS, AFP, and SQL. All user data is accessed via this storage VM, which has access to raw (non-virtual) storage. ...
If I'm understanding you, if you split this out to multiple physical hosts, you would need to convert DMZ and STOR from virtual to physical segments; increasing the number of required network interfaces in each host to 4.
Are you concerned that your hosts are connected to WAN without a firewall? I assume you bridge the interface without assigning an IP address?
What software do you use for storage? I'd think having the host handle integrated storage would be simpler, but, of course, that doesn't scale to multiple hosts...
On 09/16/2011 11:11 AM, Ed Heron wrote:
On Fri, 2011-09-16 at 10:46 -0700, Eric Shubert wrote:
[...]
If I'm understanding you, if you split this out to multiple physical hosts, you would need to convert DMZ and STOR from virtual to physical segments; increasing the number of required network interfaces in each host to 4.
Correct. I have done this with DMZ to provide wireless access (putting a wireless router on the DMZ).
Are you concerned that your hosts are connected to WAN without a firewall?
I am not concerned. The only machine connected/accessible to WAN is the IPCop VM. Everything from/to the WAN goes through IPCop.
I assume you bridge the interface without assigning an IP address?
Right, there is no IP address (169.254.x.x or 0.0.0.0) on the WAN interface of the host. The WAN interface on the host is not accessible, only bridged to the IPCop red/wan interface.
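On CentOS that can be done with the standard initscripts, something like this (interface and bridge names are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- the physical WAN NIC, enslaved to the bridge
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
DEVICE=eth1
ONBOOT=yes
BRIDGE=brwan
EOF

# /etc/sysconfig/network-scripts/ifcfg-brwan -- the bridge itself, deliberately given no IP
cat > /etc/sysconfig/network-scripts/ifcfg-brwan <<'EOF'
DEVICE=brwan
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
EOF

service network restart
# the IPCop VM's red interface is then attached to brwan in its VM definition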
What software do you use for storage? I'd think having the host handle integrated storage would be simpler, but, of course, that doesn't scale to multiple hosts...
I simply use a Linux host, with nfs, samba, netatalk and mysql. Whatever you prefer would do.
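For what it's worth, the minimal version of that is just an NFS export plus a Samba share pointed at the same tree (the path, subnet and group below are placeholders):

echo '/srv/data 10.0.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra                              # publish the NFS export to the STOR subnet

cat >> /etc/samba/smb.conf <<'EOF'
[data]
    path = /srv/data
    read only = no
    valid users = @staff
EOF
service smb restart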
Although the host handles the physical i/o, I still like having a separate storage VM. I think it simplifies things a bit when it comes to monitoring and tuning, and it's better security-wise too. I don't think it's a good idea to have any more services than needed running on the host.
Thanks for the questions. I'm sure I left out a few things. ;)
I've been considering this type of setup for a distributed virtualization setup. I have several small locations and we would be more comfortable having a host in each.
I was nervous about running the firewall as a virtual machine, though if nobody screams bloody murder, I'll start exploring it further as it could reduce machine count at each location by 2 (backup fw).
I'm not as paranoid about the host providing storage to the VMs directly, for booting.
I'm considering using DRBD to replicate storage on 2 identical hosts to allow fail-over in the case of a host hardware failure.
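A skeleton of what that DRBD resource might look like (8.x style; the hostnames, backing devices and addresses are all placeholders, and the 'on' names must match each host's uname -n):

# /etc/drbd.d/r0.res -- identical on both hosts
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
  protocol C;
  on hostA {
    device    /dev/drbd0;
    disk      /dev/vg_guests/lv_fileserver;
    address   10.0.20.1:7789;
    meta-disk internal;
  }
  on hostB {
    device    /dev/drbd0;
    disk      /dev/vg_guests/lv_fileserver;
    address   10.0.20.2:7789;
    meta-disk internal;
  }
}
EOF
drbdadm create-md r0    # on both hosts, then start drbd and promote one side to primary
service drbd start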
What kind of VM management tool do you use; VMM or something else?
On 09/16/2011 01:10 PM, Ed Heron wrote:
I've been considering this type of setup for a distributed virtualization setup. I have several small locations and we would be more comfortable having a host in each.
I was nervous about running the firewall as a virtual machine, though if nobody screams bloody murder, I'll start exploring it further as it could reduce machine count at each location by 2 (backup fw).
I've been running IPCop as a VM for a few years now. Works like a charm. You can set up VPNs between IPCop VMs as well if you like, effectively bridging LANs at each location. Just be sure that subnets are distinct at each location.
I like less hardware. Fewer points of failure mean more reliability (with the exception of redundant parts, of course), as well as cost savings.
I'm not as paranoid about the host providing storage to the VMs directly, for booting.
There might be a good reason for doing so that hasn't occurred to me. I wouldn't lose much sleep over it. Whatever works. ;)
I'm considering using DRBD to replicate storage on 2 identical hosts to allow fail-over in the case of a host hardware failure.
A fine idea, if you can swing it. To be honest though, with the HDDs on raid-1, the likelihood of failure is rather small. Depending on your cost of down time, it might do just as well to have spare parts (or a spare machine) standing by cold. Depends on the business need though. I do like having spare hardware at hand in any case.
What kind of VM management tool do you use; VMM or something else?
As I said, I've been using VMware Server up to this point, so I've been using that web interface primarily, with cli configuration editing where needed.
As I'll be migrating to KVM/CentOS very soon, does anyone have recommendations? TIA.