Hey guys,
I tend to work on small production environments for a large enterprise.
Never more than 15 web servers for most sites.
But most are only 3 to 5 web servers; it depends on the needs of the client. I actually like to install Apache and PHP from source, by hand, although I know that's considered sacrilege in some shops.
I do this because on RH-flavored systems like CentOS, the versions of Apache, PHP, and most other software are a little behind the curve.
And that's intentional: the versions that go into the various repos are tested and vetted thoroughly before they're published.
I like to use the latest stable versions of Apache and PHP for my clients without having to create a custom RPM every time a new version comes out.
So what I'd like to know is: is it better, in your opinion, to install from repos or from source as a best practice? Is it always better to use Puppet, Chef, Ansible, etc., even if the environment is small? I'm sure this is a matter of preference, but I would like to know what your preferences are.
Thanks, Tim
Sent from my iPhone
On 4/26/2016 3:27 PM, Tim Dunphy wrote:
I like to use the latest stable versions of Apache and PHP for my clients without having to create a custom RPM every time a new version comes out.
So what I'd like to know is: is it better, in your opinion, to install from repos or from source as a best practice? Is it always better to use Puppet, Chef, Ansible, etc., even if the environment is small? I'm sure this is a matter of preference, but I would like to know what your preferences are.
I would set up your own private yum repo, with RPMs built from source, ideally built to run in /opt/yourstuff or /usr/local or something, as you prefer, so they don't collide with any system packages. Once you've got the RPM build down, unless there are major architectural changes in the package, it shouldn't take more than fetching the latest tarball and running your RPM build script. Then test it on a staging platform; when it meets your requirements, post it to your repo and have your sites update via yum.
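Something along these lines, as an untested sketch - the package name (httpd-custom), version, paths, and repo location are placeholders, not a recommendation:

# Fetch tarball, bump the spec, rebuild, publish - the loop described above.
# Assumes a working spec file and a yum repo directory served over HTTP.
VERSION=2.4.20
SPEC=~/rpmbuild/SPECS/httpd-custom.spec
REPO=/var/www/html/repo/el7/x86_64

curl -o ~/rpmbuild/SOURCES/httpd-${VERSION}.tar.gz \
    "https://archive.apache.org/dist/httpd/httpd-${VERSION}.tar.gz"

# Bump the version and rebuild (you'd normally bump Release and add a
# changelog entry here as well)
sed -i "s/^Version:.*/Version: ${VERSION}/" "$SPEC"
rpmbuild -ba "$SPEC"

# Publish the binary RPMs and regenerate the repo metadata
cp ~/rpmbuild/RPMS/x86_64/httpd-custom-${VERSION}-*.rpm "$REPO/"
createrepo --update "$REPO"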
I've never gotten into the Puppet/Chef/etc. stuff because every one of the 35 servers and VMs in the development lab at work is a different custom configuration, so I configure them by hand; it's not that much work in my environment. For CentOS VMs, I generally install from the minimal ISO, then paste in a few yum commands to get all my favorite tools on board. Past that, it's a custom configuration of this Java plus that database server plus whatever user accounts this app environment needs. It doesn't take half an hour to build a new system this way, and I don't have to build them that often (maybe a couple a month at most?).
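For illustration, the post-install paste might look something like this; the tool list is just an assumption, adjust to taste:

# Fresh minimal CentOS VM: update, add EPEL, pull in the usual tools.
yum -y update
yum -y install epel-release
yum -y install vim-enhanced tmux wget curl rsync bind-utils net-tools yum-utils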
If you need more recent versions, check out softwarecollections.org. It has more recent rebuilds of the big package suites that install under /opt and don't collide with the system-installed packages. There is a CentOS-specific channel in there somewhere.
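On CentOS 7 that looks roughly like the following; the collection name (rh-php56) is only an example, check what the SCL repos actually carry:

# Enable the CentOS SCL repo and install a newer PHP alongside the system one.
yum -y install centos-release-scl
yum -y install rh-php56
# SCL packages live under /opt/rh and don't replace the stock php
scl enable rh-php56 -- php -v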
On 04/26/2016 03:27 PM, Tim Dunphy wrote:
So what I'd like to know is: is it better, in your opinion, to install from repos or from source as a best practice?
Your tools should save you time.
Building packages should involve three steps: download the source, update the version number in your spec file, mock build / sign / publish (the last set should be a small shell script). Building in mock means that the package is predictable. Every time it builds, it'll detect the same available libraries during ./configure, so your build is consistent.
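A minimal version of that last script might look like this; the mock config, paths, and signing setup (%_gpg_name in ~/.rpmmacros) are assumptions:

#!/bin/bash
# Build in a clean mock chroot, sign, and publish to the local repo.
set -e
SPEC=$1                    # e.g. httpd-custom.spec
MOCKCFG=epel-7-x86_64
REPO=/var/www/html/repo/el7/x86_64

# Build the SRPM, then the binary RPMs, inside a clean chroot so ./configure
# always sees the same build dependencies.
mock -r "$MOCKCFG" --buildsrpm --spec "$SPEC" --sources ~/rpmbuild/SOURCES
SRPM=$(ls /var/lib/mock/${MOCKCFG}/result/*.src.rpm | tail -n1)
mock -r "$MOCKCFG" --rebuild "$SRPM"

# Sign and publish
rpm --addsign /var/lib/mock/${MOCKCFG}/result/*.x86_64.rpm
cp /var/lib/mock/${MOCKCFG}/result/*.x86_64.rpm "$REPO/"
createrepo --update "$REPO"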
Is it always better to use Puppet, Chef, Ansible, etc., even if the environment is small? I'm sure this is a matter of preference, but I would like to know what your preferences are.
Again, your tools should save you time.
If your configuration manager takes more effort than configuring a system by hand, you should probably look for a better tool. Personally, I like bcfg2. And yes, I use it for everything. I use templates extensively so that anything that varies from site to site or host to host is easy to adjust, and I can apply a configuration far more quickly and reliably than I can configure a system manually.
On 04/26/2016 03:27 PM, Tim Dunphy wrote: *snip*
I like to use the latest stable versions of Apache and PHP for my clients without having to create a custom RPM every time a new version comes out.
So what I'd like to know is: is it better, in your opinion, to install from repos or from source as a best practice? Is it always better to use Puppet, Chef, Ansible, etc., even if the environment is small? I'm sure this is a matter of preference, but I would like to know what your preferences are.
Thanks, Tim
I don't have PHP 7, but I do have 5.6.20 (the latest in the 5.6 branch), Apache 2.4.20, etc. at https://librelamp.com/
The purpose of that repo is a LAMP stack built against LibreSSL as opposed to OpenSSL.
I prefer LibreSSL over OpenSSL, but I like CentOS, so to use LibreSSL in CentOS I had to make that repo.
I've been told the PHP 7 RPMs maintained by Remi work just fine with it if you really need PHP 7 (PHP 7 breaks some web apps I run, so I stick to the 5.6 branch).
A lot of the RPMs are tweaked rebuilds of Fedora source RPMs.
On 26 Apr 2016 23:28, "Tim Dunphy" bluethundr@gmail.com wrote: *snip*
I like to use the latest stable versions of Apache and PHP for my clients without having to create a custom RPM every time a new version comes out.
So what I'd like to know is: is it better, in your opinion, to install from repos or from source as a best practice? Is it always better to use Puppet, Chef, Ansible, etc., even if the environment is small? I'm sure this is a matter of preference, but I would like to know what your preferences are.
Unless you are explicitly tracking upstream and religiously providing builds as upstream releases them, taking upstream sources and building from them is a disservice to your customers.
This goes doubly for just installing from source without making packages, as then it's impossible to audit the system for what is installed, or to properly clean up after it.
You need to be aware that it's not only about "vetting", but rather that auditing for a CVE becomes as simple as rpm -q --changelog | grep CVE ... Security updates from RH don't alter functional behaviour, reducing the need for regression testing.
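For example (the package and CVE number here are only illustrative):

# Did the distro package already ship a fix for a given CVE?
rpm -q --changelog httpd | grep -i CVE-2015-3183
# Or list every CVE mentioned in the changelog:
rpm -q --changelog httpd | grep -io 'CVE-[0-9]\{4\}-[0-9]*' | sort -u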
Unless you have a very specific requirement for a very bleeding-edge feature, it's fundamentally a terrible idea to move away from the distribution packages in something as exposed as a webserver ... And when you do, you absolutely need to have the mechanisms in place to efficiently and swiftly build and deploy new versions, and deal with any fallout yourself.
Finally, keep in mind that the CentOS project can only viably support what we ship, not $random source. When you do need help and head to #centos on IRC, or report something on the mailing list, keep that in mind.
As for CM? It doesn't take any significant effort or time to knock together a playbook to cover what you did by hand. It doesn't need to be high quality and distro-agnostic, ready for Galaxy (or Forge, or whatever Chef uses), but it does mean you have "documentation in code" of how that system is configured, without having to maintain info on how to rebuild it anyway. And assume every system may need a rebuild at some point - having CM in place makes that trivial, rather than an "oh, what was the special thing on this one?" scenario.
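For example, a throwaway "documentation in code" playbook for a CentOS 7 web host might be as simple as the following; the package list, the vhost template, and the inventory file are assumptions, not a recommendation:

# Write a minimal playbook and run it; templates/vhost.conf.j2 and the
# inventory file are assumed to exist.
cat > webserver.yml <<'EOF'
---
- hosts: webservers
  become: true
  tasks:
    - name: install web stack
      yum:
        name: [httpd, php, php-mysql]
        state: present
    - name: deploy vhost config
      template:
        src: templates/vhost.conf.j2
        dest: /etc/httpd/conf.d/vhost.conf
      notify: restart httpd
    - name: enable and start httpd
      service:
        name: httpd
        state: started
        enabled: true
  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted
EOF
ansible-playbook -i inventory webserver.yml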
On 04/27/2016 12:30 AM, James Hogarth wrote: *snip*
Unless you have a very specific requirement for a very bleeding-edge feature, it's fundamentally a terrible idea to move away from the distribution packages in something as exposed as a webserver ...
I used to believe that.
However, I no longer do.
First of all, advancements in TLS happen too quickly.
The RHEL philosophy of keeping API stability for as long as the release is supported means you end up running old protocols and old cipher suites and don't have the new protocols and cipher suites available.
That's a problem.
With respect to Apache and PHP -
There is a lot of benefit to HTTP/2 but you can't get that with the stock Apache in RHEL / CentOS 7. You just can't.
The PHP in stock RHEL / CentOS is so old that web application developers largely are not even using it anymore, resulting in some web applications that simply don't work unless you update the PHP to something more modern.
It's a nice idealistic philosophy to want to keep the same versions, backport security fixes, and keep everything API-compatible, but in real-world practice it makes your server stale.
On 04/27/2016 12:41 AM, Alice Wonder wrote: *snip*
Another example outside of LAMP
Postfix -
The postfix that ships with CentOS 7 does not have the ability to enforce DANE.
If you are not sure what that is -
On my DNS server, I can (and do) post a fingerprint of the TLS keys used by my SMTP server.
When other mail servers want to send an e-mail to my server, they can do a DNS query, and if I have a DANE record, then they can require that the TLS connection they make to my SMTP server uses a certificate with a fingerprint that matches.
That is the only reliable way to avoid MITM with SMTP.
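As an illustration, such a fingerprint (a "3 1 1" TLSA record, the SHA-256 of the certificate's public key) can be generated from the server certificate roughly like this; the hostname and certificate path are placeholders:

# Generate a DANE-EE (3 1 1) TLSA record for an SMTP server.
CERT=/etc/pki/tls/certs/mail.example.com.crt
HASH=$(openssl x509 -in "$CERT" -noout -pubkey \
       | openssl pkey -pubin -outform DER \
       | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "_25._tcp.mail.example.com. IN TLSA 3 1 1 ${HASH}"
# The record only means something if the zone it lives in is signed with DNSSEC.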
It's easy to set up in postfix -
smtp_dns_support_level = dnssec
smtp_host_lookup = dns
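Applied with postconf on a newer Postfix (DANE support needs 2.11 or later), that looks roughly like this; note that, per the Postfix documentation, enforcement also needs the smtp_tls_security_level line, which the snippet above doesn't show:

# Enable DNSSEC lookups and opportunistic DANE for outbound SMTP.
postconf -e "smtp_dns_support_level = dnssec"
postconf -e "smtp_host_lookup = dns"
postconf -e "smtp_tls_security_level = dane"
systemctl reload postfix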
But with the postfix that comes with CentOS 7 - it is too old for that, so Postfix with CentOS 7 will never even try to verify the TLS certificate of the servers it connects to.
It's a stale version of postfix and people running postfix on CentOS 7 should use a newer version.
On Wed, Apr 27, 2016 at 12:50 AM, Alice Wonder alice@domblogger.net wrote:
That is the only reliable way to avoid MITM with SMTP.
Except I can just strip STARTTLS and most MTAs will continue to connect.
Brandon Vincent
On 04/27/2016 12:59 AM, Brandon Vincent wrote:
On Wed, Apr 27, 2016 at 12:50 AM, Alice Wonder alice@domblogger.net wrote:
That is the only reliable way to avoid MITM with SMTP.
Except I can just strip STARTTLS and most MTAs will continue to connect.
No you can't.
Not with an SMTP server that enforces DANE.
If my postfix sees that your SMTP publishes a DANE record then it will refuse to connect unless it is a secure connection with a certificate that matches the fingerprint in the TLSA record.
See RFC 7672
But the postfix in RHEL / CentOS 7 does not support that.
On Wed, Apr 27, 2016 at 1:04 AM, Alice Wonder alice@domblogger.net wrote:
Not with an SMTP server that enforces DANE.
I'm aware of how DANE works.
The only problem is no MTA outside of Postfix implements it.
You can thank the hatred of DNSSEC for that.
Brandon Vincent
On 04/27/2016 01:06 AM, Brandon Vincent wrote:
On Wed, Apr 27, 2016 at 1:04 AM, Alice Wonder alice@domblogger.net wrote:
Not with an SMTP server that enforces DANE.
I'm aware of how DANE works.
The only problem is no MTA outside of Postfix implements it.
You can thank the hatred of DNSSEC for that.
I never understood the hatred for DNSSEC.
When I first read about it, it was like a beautiful epiphany.
But DNSSEC adoption is increasing. I keep seeing the green DNSSEC icon in my browser more and more often; when I first started using it, it was rare.
But the point is, other mail servers may not have implemented it yet, but Postfix has, and the stock version in RHEL / CentOS is too old. Barely too old, but too old.
Thus better security is achieved by running a newer version.
Especially since adoption is in fact increasing.
On 04/27/2016 01:19 AM, Alice Wonder wrote: *snip*
Comcast is a major ISP that publishes TLSA records for their MX servers.
It appears the TLSA records for IPv6 are broken, but I was told that was intentional: they can tell which mail servers don't enforce DANE by which ones continue to connect over IPv6 anyway.
The IPv4 records are good and valid.
So when any of my mail servers send e-mail to users at a Comcast address, it is extremely unlikely that a MITM would be successful.
But only because I updated the postfix from stock.
On 04/27/2016 07:50 PM, Alice Wonder wrote:
*snip*
On my DNS server, I can (and do) post a fingerprint of the TLS keys used by my SMTP server.
When other mail servers want to send an e-mail to my server, they can do a DNS query, and if I have a DANE record, then they can require that the TLS connection they make to my SMTP server uses a certificate with a fingerprint that matches.
That is the only reliable way to avoid MITM with SMTP.
It's easy to set up in postfix -
smtp_dns_support_level = dnssec
smtp_host_lookup = dns
Sounds good, but how many domain MX servers have set up these fingerprint keys - 1%, maybe 2%? So how do you code for that? I guess it uses it if available. So even if you do post it in your DNS, how many clients out there are using DANE in their setup? By the time it becomes more than a tiny percentage and generally useful, it will be in CentOS 8. It also requires certificates to be implemented more ubiquitously than at present - although we do now have affordable solutions, so this one may resolve more quickly.
But with the postfix that comes with CentOS 7 - it is too old for that, so Postfix with CentOS 7 will never even try to verify the TLS certificate of the servers it connects to.
It's a stale version of postfix and people running postfix on CentOS 7 should use a newer version.
On Wed, Apr 27, 2016 at 1:10 AM, Rob Kampen rkampen@kampensonline.com wrote:
Sounds good, but how many domain MX servers have set up these fingerprint keys - 1%, maybe 2%? So how do you code for that? I guess it uses it if available. So even if you do post it in your DNS, how many clients out there are using DANE in their setup? By the time it becomes more than a tiny percentage and generally useful, it will be in CentOS 8. It also requires certificates to be implemented more ubiquitously than at present - although we do now have affordable solutions, so this one may resolve more quickly.
I hope my prior comments weren't too off topic, but a lot of people don't seem to understand the purpose of an enterprise distribution.
DANE is a perfect example of this. Go poll the SMTP servers for any company on the S&P 500 and I can almost guarantee that 99.9% of them will not have TLSA records for DANE. It's a new/emerging technology. The same is true with DNSSEC (which is actually quite old).
Enterprises are typically behind in the technology they adopt. Stability and reliability are paramount. This is where RHEL and CentOS come in.
I know of a few companies listed on the S&P 500 who still have SSLv3 turned on to allow customers with old versions of Internet Explorer on Windows XP to connect. You can't simply assume everyone is using the latest technology.
This is the reason IBM loves System z.
Brandon Vincent
On 04/27/2016 01:21 AM, Brandon Vincent wrote:
*snip*
I hope my prior comments weren't too off topic, but a lot of people don't seem to understand the purpose of an enterprise distribution.
DANE is a perfect example of this. Go poll the SMTP servers for any company on the S&P 500 and I can almost guarantee that 99.9% of them will not have TLSA records for DANE. It's a new/emerging technology. The same is true with DNSSEC (which is actually quite old).
Last poll I saw, 2% of the top 500 did in fact have DNSSEC.
TLSA is just a record like any other DNS record; it is just meaningless without DNSSEC.
Enterprises are typically behind in the technology they adopt. Stability and reliability are paramount. This is where RHEL and CentOS come in.
Stability though should not come at the cost of halting progress.
Security and Privacy on the Internet are both severely broken.
If you read the white papers from when the Internet was first being designed, security was rarely even mentioned.
Look at how many "secure" web servers still use SSLv2 and SSLv3 - this is because the "stable" Enterprise UNIX distributions were slow to progress.
DNS is a severely insecure system, and so is SMTP.
Hell - security of SMTP is so sloppy that quite often, the TLS certificate doesn't even match the hostname.
Cipher suites that we know to be insecure are often still supported by mail servers, because they take the flawed attitude that weak ciphers are better than plaintext, and the opportunistic nature of SMTP allows for plaintext.
It was that same mindset that resulted in a lot of mail servers supporting SSLv2, which resulted in capture of the private key in the DROWN attack.
When it comes to security, we can't be stale. We have to progress because what we currently have is not good enough.
We need to embrace DNSSEC and we need to promote DNSSEC. Trust is easy to exploit, DNSSEC provides a means to verify so that trust is not needed.
Using "enterprise" as an excuse to not move forward with security progress is just plain foolish.
Enterprise or not, DNSSEC should be a top priority to deploy in your DNS zone.
Enterprise or not, if you run a mail server, you really need to publish an accurate TLSA record for TCP port 25 of your MX mail servers.
Enterprise or not, your mail servers should look for a TLSA record on port 25 of the receiving server, and if found, only connect to that server if the connection is secure and the TLS certificate matches the TLSA record.
The Internet is broken security-wise, and a big part of the solution is available now and free to deploy.
If that means upgrading software in an "Enterprise" distribution, then that's what you do.
It's called taking responsibility for the security and privacy of your users. It's called using intelligence. It's called doing the job right.
Another way I choose is to install what I need in /opt - a PHP CLI - and configure Apache. What is the difference? I run PHP 5.3 and 5.6 side by side. It always depends on your needs.
How do I configure this stuff on my virtual hosts? ISP-Config makes it easy for me.
It can be a solution for you. RPM isn't that bad, and holding the configuration in a spec file is handy. You can give a package a name like php-7 and it will never be overwritten by an update. There are many ways to track down problems. It's up to you.
On 27.04.2016 at 09:30, James Hogarth james.hogarth@gmail.com wrote: *snip*
On Tue, 26 Apr 2016, Tim Dunphy wrote:
So what I'd like to know is: is it better, in your opinion, to install from repos or from source as a best practice?
"Better" all depends on your workflow and your customers' concerns.
If you are always available to update all your customers' installations, especially when there's a security update, then installing from source may allow you to roll out new features more quickly than stock CentOS.
OTOH, if you go on vacation, or get injured, or whatever -- then your clients may be left exposed when a new exploit is released. Someone at Red Hat (and from there CentOS) will be dealing with it, and your customers get the benefit of that work with a simple "yum update".
At the very least, I'd inform the clients of the benefits and risks of both approaches and see what best matches their concerns.
Is it always better to use Puppet, Chef, Ansible, etc., even if the environment is small? I'm sure this is a matter of preference, but I would like to know what your preferences are.
Personally, I've found the break-even point to be three to four systems. That is, once I'm managing four systems, I'll spend less time over the life-cycle of those hosts spinning up puppet or cfengine than I will managing those systems by hand. Other admins may have a different opinion, but that's what I've discovered.