I had a recent request to improve security on my web servers by having each website use a different user to run the hosting service. So example1.com has its own Apache instance running as apache1, and example2.com has its own instance of Apache running as apache2. Is this even possible or realistic? I understand the idea of how that would be secure, much like creating a virtual machine to segregate services. The only way I can think of to do this is to chroot each website. What makes this request even stranger is that each website will be managed by the same CMS and code base. That being the case, I don't see how this is possible. Any ideas or insight are very welcome.
Thanks - Trey
On 09/29/11 6:22 PM, Trey Dockendorf wrote:
AFAIK, it's only possible to run multiple instances of Apache if you have multiple IP addresses, with each instance bound to a different address, or if you use a different port for each site (which would require specifying the :port as part of the URL).
I'd strongly question the rationale behind this request. Sounds like half-thinking to me.
On 09/30/2011 03:31 AM, John R Pierce wrote:
I wonder if SELinux/sVirt could be used for something like this. sVirt was created to isolate running virtual machine instances from one another; something similar should be possible for virtual hosts, at least in theory.
Regards, Dennis
On Fri, Sep 30, 2011 at 2:22 AM, Trey Dockendorf treydock@gmail.com wrote:
Is there a specific requirement to run different HTTP servers? If there isn't, you can just use suEXEC + FastCGI. Otherwise, just use Apache to proxy requests to backend servers (which can be anything from Apache to nginx).
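To sketch what the suEXEC + FastCGI route could look like, here is a hypothetical per-site vhost (Apache 2.2 syntax; the paths, the apache1 user, and the php.fcgi wrapper name are all placeholders, and mod_suexec plus mod_fcgid are assumed to be installed):

```apache
# Hypothetical vhost for example1.com; assumes mod_suexec and
# mod_fcgid are loaded. Paths and user names are placeholders.
<VirtualHost *:80>
    ServerName example1.com
    DocumentRoot /var/www/example1.com/html

    # suEXEC launches the FastCGI wrapper as this unprivileged user/group
    SuexecUserGroup apache1 apache1

    <Directory /var/www/example1.com/html>
        Options +ExecCGI
        AddHandler fcgid-script .php
        # Small wrapper script that execs php-cgi; must be owned by apache1
        FcgidWrapper /var/www/example1.com/cgi-bin/php.fcgi .php
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

Each site gets its own user and its own wrapper, so the PHP processes for example1.com never run with example2.com's privileges.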
HTH
On 30 September 2011 02:22, Trey Dockendorf treydock@gmail.com wrote:
Easily doable with another instance of Apache acting as the proxy. This Apache can be yet another "can't do anything"-style locked-down instance which only proxies virtual hosts to separate Apache instances.
You can set up as many Apaches as you like on separate internal ports (e.g. 127.0.0.1:8881, 127.0.0.1:8882, etc.) and then use ProxyPass to forward the virtual servers. I use a similar setup at home, where locked-down virtual machines run all by themselves and the front-facing Apache simply matches the VirtualHost name and passes requests down. The only thing I can't do is use a separate certificate for HTTPS for every one of them.
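For illustration, a minimal front-end sketch (Apache 2.2 syntax; the ports and hostnames are just examples, and mod_proxy plus mod_proxy_http are assumed to be loaded):

```apache
# Front-end proxy sketch: name-based vhosts forwarded to per-site
# backend Apache instances listening on loopback ports.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example1.com
    ProxyPass        / http://127.0.0.1:8881/
    ProxyPassReverse / http://127.0.0.1:8881/
    # Hand the original Host header through to the backend
    ProxyPreserveHost On
</VirtualHost>

<VirtualHost *:80>
    ServerName example2.com
    ProxyPass        / http://127.0.0.1:8882/
    ProxyPassReverse / http://127.0.0.1:8882/
    ProxyPreserveHost On
</VirtualHost>
```

Each backend instance then runs with its own config file (its own Listen 127.0.0.1:888x and its own User/Group directives), so only the locked-down front end is exposed to the outside.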
On Fri, 2011-09-30 at 10:47 +0100, Hakan Koseoglu wrote:
Easily doable with another instance of Apache acting as the proxy. This Apache can be yet another "can't do anything"-style locked-down instance which only proxies virtual hosts to separate Apache instances.
---- absolutely ----
You can set up as many Apaches as you like on separate internal ports (e.g. 127.0.0.1:8881, 127.0.0.1:8882, etc.) and then use ProxyPass to forward the virtual servers. I use a similar setup at home, where locked-down virtual machines run all by themselves and the front-facing Apache simply matches the VirtualHost name and passes requests down.
---- absolutely ----
The only thing I can't do is use a separate certificate for HTTPS for every one of them.
---- probably not with CentOS 5.x; possibly with CentOS 6.x, but I haven't installed it to check.
I know that with Ubuntu 10.04 LTS I have no problem whatsoever with SSL virtual hosts and different certificates on the same IP, but that does rely on users only using SNI-compliant web browsers. It's not the sort of thing I would do for a commercial site, but I do it for internal and/or employee-only web sites. The thing to note is that all current web browsers are SNI capable, and anyone using an old web browser at this point has some serious security issues. Just about all web browsers less than 2 years old are SNI capable.
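For what it's worth, an SNI setup on a single IP looks roughly like this (certificate paths are placeholders; it requires Apache and OpenSSL built with SNI support, which is why older distributions can't do it):

```apache
# SNI sketch: two certificates served from one IP:443.
# Certificate/key paths are placeholders.
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName example1.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/example1.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/example1.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName example2.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/example2.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/example2.com.key
</VirtualHost>
```

Clients that don't send the SNI extension will simply get the first (default) vhost's certificate.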
Craig
On Thu, Sep 29, 2011 at 08:22:59PM -0500, Trey Dockendorf wrote:
We used to do that a lot on FreeBSD. It was just a virtual host. We used separate IPs for each virtual host, but there are ways to do it with name-based virtual hosts. I think name-based virtual hosts didn't work with HTTPS, though.
I don't know if CentOS can do it.
////jerry
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Jerry McAllister wrote:
We used to do that a lot on FreeBSD. It was just a virtual host. We used separate IPs for each virtual host, but there are ways to do it with name-based virtual hosts. I think name-based virtual hosts didn't work with HTTPS, though.
I think Trey needs to push back - *IF* I understand him correctly, it sounds like duplicate websites, but running as different users. That, to me, literally makes no sense...mmmm, unless a) the source of the request doesn't understand what he wants, or b) there's something illegal going on, and users going to a different site have different things happening, based on data/database content.
Clarifications would be helpful.
mark
On Fri, Sep 30, 2011 at 10:06 AM, m.roth@5-cent.us wrote:
I think Trey needs to push back - *IF* I understand him correctly, it sounds like duplicate websites, but running as different users. That, to me, literally makes no sense...mmmm, unless a) the source of the request doesn't understand what he wants, or b) there's something illegal going on, and users going to a different site have different things happening, based on data/database content.
Clarifications would be helpful.
Yes, a real user-oriented setup could serve the public_html directory out of each user's home directory. But since a CMS is mentioned, the data may in fact all live in a database, with access controlled by the db login/password set up in the web server's configuration. So besides the reverse proxy to multiple web servers, you might also need multiple databases, each with its own name and credentials.
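As a sketch, the per-site databases might be created like this (MySQL syntax; the database names, users, and passwords are all placeholders):

```sql
-- Hypothetical per-site databases and credentials.
CREATE DATABASE cms_example1;
CREATE DATABASE cms_example2;

-- Each site's CMS config gets only its own credentials, so a
-- compromised site can't read the other site's tables.
GRANT ALL PRIVILEGES ON cms_example1.* TO 'cms1'@'localhost' IDENTIFIED BY 'changeme1';
GRANT ALL PRIVILEGES ON cms_example2.* TO 'cms2'@'localhost' IDENTIFIED BY 'changeme2';
FLUSH PRIVILEGES;
```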
I think Trey needs to push back - *IF* I understand him correctly, it sounds like duplicate websites, but running as different users. That, to me, literally makes no sense...mmmm, unless a) the source of the request doesn't understand what he wants, or b) there's something illegal going on, and users going to a different site have different things happening, based on data/database content.
The way I interpreted it, he wants it set up so each domain (example1.com, example2.com, etc.) runs its own Apache server under an unprivileged login (apache1, apache2, etc.). Chroots should accomplish that easily enough. He then wants to use the same CMS (Joomla, WordPress, etc.) on each site. My assumption is he's hosting several CMS sites and wants each isolated, so a compromise of one won't compromise the others.
What's confusing is what he means by "codebase". Does he want each chroot to have its own independent copy, or does he want to share the CMS core files across all instances?
On Sep 30, 2011 10:58 AM, "Drew" drew.kay@gmail.com wrote:
Sorry if my question is confusing; I don't really fully understand the request myself.
So a single codebase would be only one set of the CMS's PHP files managing each subdomain. The problem with this request, I think, is a lack of understanding of what they want versus how it should be done in Apache. The goal, I think, is to keep the sites from being affected by one another, so that if one is compromised it won't threaten all the sites. However, they also want the CMS to write to the .htaccess files to dynamically control which users can access the downloads portion of the sites. That I'm strongly against.
Really, I think this would be overkill once standard security measures are in place, with a good IDS (OSSEC) and thorough penetration testing. I also need to be able to implement all of this with Puppet, which is my requirement. Things like a chroot can't easily be done with Puppet yet, at least as far as I'm aware.
Could SELinux isolate sites while still allowing Apache access? I have little knowledge of how to do this with SELinux, but I know I could do it with Puppet.
- Trey
On 09/30/11 9:26 AM, Trey Dockendorf wrote:
However, they also want the CMS to write to the .htaccess files to dynamically control which users can access the downloads portion of the sites. That I'm strongly against.
CMS systems almost always use their own authentication and download mechanisms; they don't rely on .htaccess for anything other than possibly configuring whatever specific Apache settings they need (cgi-bin, etc.).
On Sep 30, 2011 11:43 AM, "John R Pierce" pierce@hogranch.com wrote:
I agree; unfortunately, my role is the sysadmin for this project, not the developer. I'm running dozens of instances of Drupal, WordPress, and MediaWiki, all very successfully and securely, without ever having to think about these types of security measures. Once I get through the red tape of being allowed to pen test my own servers, I'll have a better idea of how well I've done.
- Trey
I'm not sure why you would want each website on its own Apache process (that just isn't needed), but some of the ideas here are a bit... over-the-top.
There are a few options for improving the security of your Apache setup. You can use something like FastCGI-based PHP or suPHP; both will enable Apache to drop down to a lower-privileged user when serving a website. This basically eliminates the chance that one website being hacked means all your websites are hacked, because each website is owned by its own user. So, for example, example1.com would be owned by example_user_1, and the ownership of its files would be something like example_user_1:example_user_1 with rw-r--r--.
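A hypothetical suPHP vhost along those lines (assumes mod_suphp is installed, e.g. from a third-party repo; the user name and paths are placeholders):

```apache
# suPHP sketch: PHP under this vhost runs as the site's own user
# instead of the shared apache user. Names/paths are placeholders.
<VirtualHost *:80>
    ServerName example1.com
    DocumentRoot /home/example_user_1/public_html

    suPHP_Engine on
    # Scripts execute as this user/group
    suPHP_UserGroup example_user_1 example_user_1
    suPHP_AddHandler x-httpd-php
    AddHandler x-httpd-php .php
</VirtualHost>
```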
You don't really need to go beyond this to "secure" each site.
I hope this helps.
On Sep 30, 2011 1:49 PM, "Michael Crilly" mrcrilly@gmail.com wrote:
That does, thanks!
- Trey