Hi,
Up until now my main production server has been a "bare metal" installation of CentOS 7.9 hosting a variety of stuff.
* DNS server with BIND for eight domains
* IMAP mail server with Postfix and Dovecot for these domains, with about two dozen mail accounts
* Webmail with Roundcube for all the mail accounts
* Various WordPress-based websites and blogs
* Several instances of the management software Dolibarr
* The learning platform GEPI for our local school
* One instance of OwnCloud for half a dozen users
The hardware has no problem dealing with all that, performance-wise. But managing everything in one big lump has become a bit of a problem, since the LAMP-based PHP applications (WordPress, Dolibarr, GEPI, OwnCloud) increasingly cultivate their idiosyncrasies, so this feels more and more like herding cats.
My main goal in migrating all this stuff progressively to a series of neat VMs hosted on a KVM hypervisor is clarity and ease of maintenance.
Now I wonder what could be a smart subdivision of all these VMs. After a bit of brainstorming, here's what I can come up with.
1. It would make sense to group the applications by type, e.g. one VM for all the Dolibarr instances, a second VM for WordPress, and a third VM for OwnCloud.
2. It's tempting to have a lot of small VMs for clarity's sake. On the other hand, it may be better to have one single VM for all the mail services.
3. Should I put all the Roundcube instances in a separate VM? Or does that go with the Postfix/Dovecot mail VM?
4. DNS is a bit of a special case. I would be tempted to set up an extra (bare-metal) machine just to handle it. But since BIND provides the DNS information for the hypervisor and the backup server themselves, this becomes a bit of a chicken-and-egg situation.
5. Even if it's tempting to multiply VMs, let's not forget that I have to keep an eye on hardware resources, not to mention that I have to pay for every extra IPv4 address.
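One common way to defuse the chicken-and-egg problem in point 4 is to pin the critical names statically on the machines that must always find each other, so the hypervisor and the backup server can resolve one another even when no BIND instance is reachable. A sketch, with made-up hostnames and addresses:

```
# /etc/hosts on both the hypervisor and the backup server
# (static entries win over DNS with the default "files dns" order
# in /etc/nsswitch.conf, so these resolve even if BIND is down)
203.0.113.10   kvm1.example.com     kvm1
203.0.113.20   backup1.example.com  backup1
```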
I'd be curious to have your input, since I'm fairly new to this sort of approach.
Cheers,
Niki
Hi,
Up until now my main production server has been a "bare metal" installation of CentOS 7.9 hosting a variety of stuff.
DNS server with BIND for eight domains
IMAP mail server with Postfix and Dovecot for these domains, with about
two dozen mail accounts
Webmail with Roundcube for all the mail accounts
Various WordPress-based websites and blogs
Several instances of the management software Dolibarr
The learning platform GEPI for our local school
One instance of OwnCloud for half a dozen users
The hardware has no problem dealing with all that, performance-wise. But managing everything in one big lump has become a bit of a problem, since the LAMP-based PHP applications (WordPress, Dolibarr, GEPI, OwnCloud) increasingly cultivate their idiosyncrasies, so this feels more and more like herding cats.
My main goal in migrating all this stuff progressively to a series of neat VMs hosted on a KVM hypervisor is clarity and ease of maintenance.
Now I wonder what could be a smart subdivision of all these VMs. After a bit of brainstorming, here's what I can come up with.
- It would make sense to group the applications by type, e.g. one VM for all the Dolibarr instances, a second VM for WordPress, and a third VM for OwnCloud.
- It's tempting to have a lot of small VMs for clarity's sake. On the other hand, it may be better to have one single VM for all the mail services.
- Should I put all the Roundcube instances in a separate VM? Or does that
go with the Postfix/Dovecot mail VM?
I'd suggest having it all on one VM. I guess webmail and the other mail components don't disturb each other, and they really belong together, so why not put them into one instance?
- DNS is a bit of a special case. I would be tempted to set up an extra (bare-metal) machine just to handle it. But since BIND provides the DNS information for the hypervisor and the backup server themselves, this becomes a bit of a chicken-and-egg situation.
If the backup server and the KVM host are two hardware servers, then why not put one DNS server on each of them? Primary on one and secondary on the other, so as long as one of these hosts is up, you have working DNS.
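That primary/secondary split is a standard BIND setup; a minimal named.conf sketch, with a made-up domain and addresses (the real zone list and ACLs would of course be larger):

```
// On the KVM host (primary), one block per hosted domain
zone "example.com" {
    type master;
    file "/var/named/example.com.zone";
    allow-transfer { 203.0.113.20; };   // let the secondary pull the zone
};

// On the backup server (secondary)
zone "example.com" {
    type slave;
    masters { 203.0.113.10; };
    file "/var/named/slaves/example.com.zone";
};
```

Both servers then get listed as NS records for the domains, so resolvers fail over automatically.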
- Even if it's tempting to multiply VMs, let's not forget that I have to keep an eye on hardware resources, not to mention that I have to pay for every extra IPv4 address.
Why not give some hosts only internal addresses? I don't think all of the hosts will need public addresses, right?
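For the web VMs in particular, a single public IPv4 address can front several VMs sitting on a private libvirt network, with name-based virtual hosting on a reverse proxy. A minimal nginx sketch (hostnames and the 192.168.122.x internal addresses are made up; libvirt's default NAT network happens to use that range):

```nginx
# On the one VM (or the host) that holds the public IPv4 address
server {
    listen 80;
    server_name blog.example.com;
    location / {
        proxy_pass http://192.168.122.11;   # WordPress VM, internal address only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name cloud.example.com;
    location / {
        proxy_pass http://192.168.122.12;   # OwnCloud VM, internal address only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Only the mail VM and the DNS servers really need their own public addresses, since SMTP and authoritative DNS don't proxy as cleanly as HTTP.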
Regards, Simon
I'd be curious to have your input, since I'm fairly new to this sort of approach.
Cheers,
Niki
--
Microlinux - Solutions informatiques durables
7, place de l'église - 30730 Montpezat
Site : https://www.microlinux.fr
Blog : https://blog.microlinux.fr
Mail : info@microlinux.fr
Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12
_______________________________________________
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
On Sat, Apr 10, 2021 at 12:13 PM Nicolas Kovacs info@microlinux.fr wrote:
I'd be curious to have your input, since I'm fairly new to this sort of approach.
This is the whole pets vs. cattle choice.
IMO each VM should have a singular use/purpose/app. VMs are effectively free, and this also prevents unintended negative upgrade interactions.
Think this through to its logical end, where each process is its own environment/container (Docker), or each user execution is a unique instance (serverless).
On Tue, Apr 13, 2021 at 6:15 AM Steven Tardy sjt5atra@gmail.com wrote:
On Sat, Apr 10, 2021 at 12:13 PM Nicolas Kovacs info@microlinux.fr wrote:
I'd be curious to have your input, since I'm fairly new to this sort of approach.
This is the whole pets vs. cattle choice.
IMO each VM should have a singular use/purpose/app. VMs are effectively free, and this also prevents unintended negative upgrade interactions.
Think this through to its logical end, where each process is its own environment/container (Docker), or each user execution is a unique instance (serverless).
While my services are used by fewer people (and are fewer in number), this is where my most recent server rebuild took me. I have been trying to use containers exclusively, as that reduces the surface I have to maintain, assuming I can find a trusted container source. Additionally, I am thinking ahead to a future where I hope the OS will become available in a container-runner form where the surface is further reduced. My needs don't rise to the level of an OpenShift, but it sounds like yours may, especially given the WP instances.
regards,
bex
--On Tuesday, April 13, 2021 1:15 AM -0400 Steven Tardy sjt5atra@gmail.com wrote:
IMO each VM should have a singular use/purpose/app. VMs are effectively free. And also prevents unintended negative upgrade interactions.
Think this through to its logical end, where each process is its own environment/container (Docker), or each user execution is a unique instance (serverless).
My sense is that all the mail apps that touch the same data on disk should share a VM. But Roundcube is really an MUA, so it can live in a separate VM. One VM can hold a caching DNS server, and the rest can resolve against it. Each web server/domain/app should be in its own VM to sandbox it from the other domains.
The tricky part with DNS is that outside caching servers (like Google's) handle short-lived, low-TTL records better (some records have lifetimes of mere seconds!), but mail block lists refuse queries from Google because they charge large users, so small mail servers need their own caching DNS. Hence one might split DNS into two servers: one just for mail and one for everything else.
https://blog.apnic.net/2019/11/12/stop-using-ridiculously-low-dns-ttls/
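A local caching resolver on the mail VM is enough to keep DNSBL lookups off shared public resolvers; an unbound sketch (a minimal configuration, not a tuned one):

```
# /etc/unbound/unbound.conf on the mail VM:
# a private caching resolver, so block-list queries come from
# this host's own address instead of a shared resolver like Google's
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
```

The mail VM's /etc/resolv.conf would then point at `nameserver 127.0.0.1`, while the other VMs can keep using the shared caching DNS.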
On 4/10/21 6:13 PM, Nicolas Kovacs wrote:
I'd be curious to have your input, since I'm fairly new to this sort of approach.
I would only separate things that for some reason are "dirty", e.g. require a non-packaged installation.
All the rest (like BIND, Postfix, Dovecot) can happily live on the same machine.
Splitting things up too much will increase the maintenance effort: every stupid detail (new kernel installation, clock syncing, log rotation, security patching, etc.) gets duplicated. Not to mention that you now need to maintain a network connecting the pieces.
The same considerations apply when using containers instead of VMs; you only gain some performance by not dragging an entire kernel along for each service.
Start by isolating the service that is giving you the most trouble. Then, with a bit of experience, you can evaluate whether to proceed along that road.
Best regards.
On 4/13/21 11:48 AM, Roberto Ragusa wrote:
On 4/10/21 6:13 PM, Nicolas Kovacs wrote:
I'd be curious to have your input, since I'm fairly new to this sort of approach.
I would only separate things that for some reason are "dirty", e.g. require a non-packaged installation.
All the rest (like BIND, Postfix, Dovecot) can happily live on the same machine.
Splitting things up too much will increase the maintenance effort: every stupid detail (new kernel installation, clock syncing, log rotation, security patching, etc.) gets duplicated. Not to mention that you now need to maintain a network connecting the pieces.
This is where what I do with jails on FreeBSD differs from what you describe. All jails on a FreeBSD host share the same base system, so there is no extra overhead for the base system: it is updated for all jails in a single go.
Each jail contains only what is necessary for that particular jail. Therefore, I only put "inseparable" things in the same jail (e.g. Mailman has to have a web interface and Postfix or Sendmail, so this is the minimal bundle that has to live together). Services that do not have to live in the same jail run in different jails. The separation of services into different jails brings a lot of convenience:
1. If service "a" has to be worked on, only the other services living in the same jail may potentially be affected, nothing else.
2. If service "a" and service "b" need incompatible dependencies, there is no problem when they run in different jails
3. If you do an upgrade (as in an upgrade of the base system), you can upgrade one jail at a time, so a much smaller set of things has to be dealt with after each upgrade; this also helps reduce the downtime each service suffers from an upgrade.
4. Suppose you suffer a compromise (no one is immune to that) through some service; then only that jail is affected, and the bad guys cannot make a mess of the other services.
5. And one more important thing: the base system in a jail is mounted read-only, so any mess resulting from a compromise does not affect the base system of any jail.
And the list can continue.
I hope experts in Linux virtualization will chime in and outline how something similar (a read-only base common to all virtual systems, etc.) can be done with one of the Linux virtualization solutions, because I'm certain it must be possible. And I for one would love to learn about it.
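One Linux analogue (a sketch under the assumption of a systemd-based host, not something from this thread): systemd-nspawn containers can share a read-only base tree, with per-container changes captured in an overlay, which is quite close to the read-only-base jail setup described above. The paths and container name here are made up:

```ini
# /etc/systemd/nspawn/web.nspawn
[Exec]
Boot=yes

[Files]
# the shared base image stays read-only; writes land in the
# per-container overlay directory instead
ReadOnly=yes
Overlay=/srv/base:/srv/overlays/web:/
```

Updating /srv/base then updates the base of every container in one go, as with FreeBSD jails sharing one base system.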
I hope this helps.
Valeri
The same considerations apply when using containers instead of VMs; you only gain some performance by not dragging an entire kernel along for each service.
Start by isolating the service that is giving you the most trouble. Then, with a bit of experience, you can evaluate whether to proceed along that road.
Best regards.
On 4/13/21 6:48 PM, Roberto Ragusa wrote:
Splitting things up too much will increase the maintenance effort: every stupid detail (new kernel installation, clock syncing, log rotation, security patching, etc.) gets duplicated. Not to mention that you now need to maintain a network connecting the pieces.
I'd like to say a simple "thank you" for all your valuable input in this thread and others. There's so much purely technical documentation out there, and so little about what I'd like to call "best practices".
Cheers,
Niki