Ever since someone told me that one of my servers might have been hacked (not the most recent instance) because I wasn't applying updates as soon as they became available, I've been logging in and running "yum update" religiously once a week until I found out how to set the yum-updatesd service to do the equivalent automatically (once per hour, I think).
Since then, I've leased dedicated servers from several different companies, and on all of them, I had to set up yum-updatesd to run and check for updates -- by default it was off. Why isn't it on by default? Or is it being considered to make it the default in the future?
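(For reference, the setup I'm describing is roughly the following -- CentOS 5 era, and the option names are from memory, so double-check them against the file the yum-updatesd package actually ships:)

```shell
# /etc/yum/updatesd.conf (excerpt; option names from memory)
#
#   run_interval = 3600    # seconds between update checks, i.e. hourly
#   do_download = yes      # pre-fetch available packages
#   do_update = yes        # apply updates automatically, not just report them
#
# then enable the service:
#   chkconfig yum-updatesd on && service yum-updatesd start
```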
Power users can always change it if they want; the question is what would be better for the vast majority of users who don't change defaults. In that case it would seem better to have updates on, so that they'll get patched if an exploit is released but a patch is available.
If the risk is that a buggy update might crash the machine, then that has to be weighed against the possibility of *not* getting updates, and getting hacked as a result -- usually the latter being worse.
After all, if users are exhorted to log in to their machines and check for updates and apply them, that implies that the risk of getting hosed by a buggy update is outweighed by the risk of getting hacked by not applying updates. If that's true for updates that are applied manually, it ought to be true for updates that are downloaded and applied automatically, shouldn't it?
Bennett
On Wed, Dec 28, 2011 at 4:04 PM, Bennett Haselton bennett@peacefire.org wrote:
> Power users can always change it if they want; the question is what would be better for the vast majority of users who don't change defaults. In that case it would seem better to have updates on, so that they'll get patched if an exploit is released but a patch is available.
> If the risk is that a buggy update might crash the machine, then that has to be weighed against the possibility of *not* getting updates, and getting hacked as a result -- usually the latter being worse.
IMHO, the risk of applying patches blindly outweighs the benefit of automatic updates. Yum-updatesd doesn't just fix security bugs; it also pulls in other changes that may not be good for your system. Consider a database server that got automatically updated, where the sysadmin is so complacent that only after a month or so does he realize the update has caused corruption in the database. I don't think his boss would be happy.
If a sysadmin is concerned about the security of his servers, he should subscribe to a security advisory mailing list and apply any required updates in a timely fashion. Laziness is not an excuse. Anyway, should he decide to, he can always easily activate automatic updates.
On 12/28/2011 02:04 AM, Bennett Haselton wrote:
The first part of your question is answered simply as ... it defaults to do what the upstream distro does. If they (the upstream provider) set their distro to automatically run updates by default, then so will CentOS. I do not think they will do that though.
The last question (does the security risk of not applying auto updates quickly outweigh the risk of the system breaking because of a bad update) depends on the situation.
For some things, auto updates are probably fine. I build and release these packages for CentOS and I fully trust them ... however, even I do not auto update my production servers at work.
Each of my servers is a unique and complex system of several 3rd party applications/repos as well as the CentOS operating system. So while the CentOS updates almost always "just work", the 3rd party apps (or 3rd party repos) might need looking at after the update to verify everything is still functioning properly.
Now, we do have some servers that are just created and torn down to handle extra workload, and these do auto update ... but I would never do that (auto update) for things that I consider critical.
Over the years there have been updates where permissions issues prevented DNS servers from restarting, etc. ... it is just too important to me that my machines run to trust pushing auto updates to critical servers. At least that is my take. But, then again, I have test servers for my most critical stuff and I push the updates there for a couple of days to verify that they work before I move the updates into production.
All that being said, if your server is a LAMP machine with MYSQL and Apache from CentOS and other standard CentOS packages like dhcp, bind, etc., then auto updates will likely never cause you problems.
On Wednesday, December 28, 2011, Johnny Hughes johnny@centos.org wrote:
This would not be a good idea in general (just my opinion). I think back to one update (can't remember which -- 5.x something) where it swapped eth0 and eth1 on all our Dells. So every server was down after the update and then required the NICs to be reconfigured (or the cables swapped) to get proper connectivity back.
D
The 'E' in CentOS stands for Enterprise. Enterprises use change control. Servers do not update themselves whenever they see an update. Updates are tested (not so much), approved and scheduled, hopefully in line with a maintenance window. In most enterprises that I've been in, a server can't even contact the default repo servers. And remember that for a RHEL server, it has to be registered with RHN before it can officially receive updates. Defaulting yum-updatesd to on will be a no-op in almost every 'enterprise' case.
Enterprises also don't hang servers directly off the Internet. There are many layers betwixt the wild web and the OS.
In the decade plus that I've been running RHEL, I've seen 1 update that was worthy of an emergency change to push it out RIGHT NOW to the servers. And even that one didn't really need to be done.
----------------------------------------------------------------------
Jim Wildman, CISSP, RHCE   jim@rossberry.com   http://www.rossberry.net
"Society in every state is a blessing, but Government, even in its best state, is a necessary evil; in its worst state, an intolerable one." Thomas Paine
On Wed, Dec 28, 2011 at 11:33 AM, Jim Wildman jim@rossberry.com wrote:
_______________________________________________
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
To be more clear: I wasn't saying that for the particular people on this list, many of whom are professional sysadmins, automatic updates would be the best option.
I'm talking about the majority of users who have leased a dedicated server or a VPS for $5-$50 per month, and cannot ever be realistically expected to change much of the defaults. In that situation, you're weighing the likelihood, and the undesirability, of two outcomes: either (1) the machine ends up going down temporarily because of a bad update, or (2) the machine ends up being hacked and attacking other networks because it wasn't receiving updates.
(Side note: my friend replied to clarify that the "kernel exploit" he was talking about, found in March of this year, was one that allowed a local user to gain root privilege, not one that allowed a remote user to get in through the webserver or sshd. So let's say it really is true that running automatic "yum" updates is not the most important thing to keep out remote users, and that the majority of webserver hacks do occur through out-of-date web apps. Then replace everything I said with "update the web apps" instead of installing the "yum update" patches.)
Would it not be best for the vast majority of those users to have updates turned on by default? If not, why not? (Power users can always turn them off, after all.)
Look, one may think that root access to dedicated servers (and virtual dedicated servers, which are almost as powerful/dangerous) should never be given out to people who haven't been professionally trained. (Some people still say that about net-connected computers generally!) But that can never be rolled back now: as long as hosting companies can legally sell unmanaged dedicated/VPS machines to the public, they will. So what can be done to reduce the risks?
Or look at it this way: Suppose the government or some foundation offered a $1 million prize for any proposal that permanently lowered the rate at which CentOS servers were compromised. If you actually come up with a solution that lowers the rate, you get the money, but if you say that all end users "should" do such-and-such (and they don't), then you get nothing. What would your proposal be?
My suggestion would be:

1) Implement an API call on the OS for "send this message to the machine owner". When the OS is installed on the machine, the person installing it decides how the "notify" call is implemented -- send an email to an address, send an SMS message, whatever. If a hosting company sets it up, they could implement the call so that it automatically opens a new support ticket waiting for the customer's attention.

The reason for #1 is that if the OS wants to notify the machine admin that there's a problem, then -- at least in the case of a remotely hosted cheap server or VPS -- you can't rely on the admin logging in and seeing the message. You have to proactively grab their attention somehow. Then you could use this function call for lots of things, but most importantly for #2:

2) Implement some sort of scanner program (enabled by default) that would regularly scan the machine, not just for known viruses, but for *anything* that is known to be a frequent vector for attacks and is not configured to update itself automatically. And:

- If the scanner finds an app that is not configured to update itself automatically, it sends a low-priority message (using #1) saying "There are no known exploits for this thing right now, but you really ought to turn on updates for it."

- If the scanner finds a web app like WordPress that *cannot* update itself automatically, say "This app can't auto-update itself, so you're taking a certain risk just by having it at all. Just sayin'."

- If the scanner finds an out-of-date component for which there is a known exploit, send a high-priority message saying "This component needs to be updated or you will be hacked." (If the hosting company implements this by opening a support ticket, they can also see if the customer doesn't respond to the ticket, and threaten to disconnect them if they don't handle it.)
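Purely as illustration, the pieces might fit together like the sketch below -- everything in it (the function names, the yes/no flag convention, the severity rules) is hypothetical, and notify() just prints where a real implementation would send email/SMS or open a support ticket:

```shell
#!/bin/sh
# Hypothetical sketch of the proposal above.

# #1: the "send this message to the machine owner" hook. Here it just
# prints; the installer would swap in email, SMS, or ticket creation.
notify() {  # usage: notify PRIORITY MESSAGE
    echo "[$1] $2"
}

# #2: classify one scanned component and emit the appropriate notice.
# The three yes/no flags stand in for whatever a real scanner detects.
scan_component() {  # usage: scan_component NAME AUTO_UPDATE CAN_AUTO_UPDATE KNOWN_EXPLOIT
    name=$1; auto=$2; can=$3; exploit=$4
    if [ "$exploit" = yes ]; then
        notify HIGH "$name is out of date and has a known exploit - update it or you will be hacked"
    elif [ "$can" = no ]; then
        notify LOW "$name cannot auto-update itself - you take a certain risk just by having it"
    elif [ "$auto" = no ]; then
        notify LOW "no known exploits for $name right now, but you ought to turn on updates for it"
    fi
}
```

So `scan_component wordpress no no no` emits the low-priority "cannot auto-update" warning, while `scan_component openssl no yes yes` emits the high-priority one.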
What would your proposal be? (Remembering that you can't change human nature, so if it relies on the majority of end users devoting time that you think they "should" do, it won't happen :) )
Bennett
On Thu, Dec 29, 2011 at 2:01 AM, Bennett Haselton bennett@peacefire.org wrote:
> (Side note: my friend replied to clarify that the "kernel exploit" he was talking about that was found in March of this year, was one that allowed a local user to gain root privilege, not one that allowed a remote user to get in through the webserver or sshd.
Look back through the changelogs if you want to see what vulnerabilities have existed for long intervals before being fixed - but perhaps not long after being found and published. If you have a web service running, I'd say it is a fairly safe bet that there is a vulnerability somewhere in the server, language(s), libraries, or the application itself that can be exploited to execute some arbitrary command. That turns what is classified as a local root exploit into something anyone on the internet can do. And I've seen some very sophisticated attempts show up in the logs...
> So let's say it really is true that running automatic "yum" updates is not the most important thing to keep out remote users, and that the majority of webserver hacks do occur through out-of-date web apps.
I'm not convinced. Assume that some people will know the vulnerabilities before they are published (otherwise they obviously would never be published/fixed) and that a lot of other people will start attempting exploits immediately after publication. Look through your logs to see how many hits you are getting that are likely to be probes for vulnerabilities to get a feeling for how much of this is going on.
> Would it not be best for the vast majority of those users to have updates turned on by default? If not, why not? (Power users can always turn them off, after all.)
If your service is important, then it is worth testing changes before making them on your important server. But no one else can tell you whether your server is that important or not... It's fairly trivial to run a 'yum update' on a lab server daily, and if anything updates, make sure that things still work before repeating it on the production box(es). The update checks can be scripted, but the "does it still work" test will be unique to your services.
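The scriptable part of that might look something like this (a sketch, assuming `yum check-update` exits with 100 when updates are pending and 0 when there are none; the argument override exists only so the flow can be exercised without touching a real yum):

```shell
#!/bin/sh
# Daily check on the lab box: if anything would update, apply it there
# and leave the "does it still work" verification to a human before the
# same updates go to production.
check_and_update() {
    yum_bin="${1:-yum}"    # override for testing; defaults to the real yum
    "$yum_bin" -q check-update >/dev/null 2>&1
    case $? in
        100) echo "updates pending - applying on lab box"
             "$yum_bin" -y update ;;
        0)   echo "no updates today" ;;
        *)   echo "check-update failed" >&2 ;;
    esac
}
```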
> What would your proposal be? (Remembering that you can't change human nature, so if it relies on the majority of end users devoting time that you think they "should" do, it won't happen :) )
Mine is to assume that there are very good reasons for 'Enterprise' distributions to go to the trouble of publishing updates. Install them. Always assume that there are still more vulnerabilities that you don't know about yet - and if you have to ask the question, you aren't going to do better than the developers and Red Hat at keeping up with them.
On Thu, Dec 29, 2011 at 10:49 AM, Les Mikesell lesmikesell@gmail.com wrote:
>> Would it not be best for the vast majority of those users to have updates turned on by default? If not, why not? (Power users can always turn them off, after all.)
> If your service is important, then it is worth testing changes before making them on your important server. But no one else can tell you whether your server is that important or not... It's fairly trivial to run a 'yum update' on a lab server daily, and if anything updates, make sure that things still work before repeating it on the production box(es). The update checks can be scripted, but the "does it still work" test will be unique to your services.
But these are all considerations mainly for power users; I'm still talking just about the vast majority of hosting company customers who just lease a dedicated or virtual private server, and don't even have a "test server" and a "production server". Why wouldn't it be best for those servers just to pick up and install updates automatically?
>> What would your proposal be? (Remembering that you can't change human nature, so if it relies on the majority of end users devoting time that you think they "should" do, it won't happen :) )
> Mine is to assume that there are very good reasons for 'Enterprise' distributions to go to the trouble of publishing updates. Install them. Always assume that there are still more vulnerabilities that you don't know about yet - and if you have to ask the question, you aren't going to do better than the developers and Red Hat at keeping up with them.
Yes this is good advice for the individual user; what I was asking is what set of *defaults* would improve security the most for the vast majority of users (who cannot be counted on to change defaults -- or, indeed, to follow any advice that anyone thinks "everyone" "should" do!).
Bennett Haselton wrote:
> On Thu, Dec 29, 2011 at 10:49 AM, Les Mikesell lesmikesell@gmail.com wrote:
>>> Would it not be best for the vast majority of those users to have updates turned on by default? If not, why not? (Power users can always turn them off, after all.)
>> If your service is important, then it is worth testing changes before making them on your important server. But no one else can tell you whether your server is that important or not... It's fairly trivial to run a 'yum update' on a lab server daily, and if anything updates, make sure that things still work before repeating it on the production box(es). The update checks can be scripted, but the "does it still work" test will be unique to your services.
> But these are all considerations mainly for power users; I'm still talking just about the vast majority of hosting company customers who just lease a dedicated or virtual private server, and don't even have a "test server" and a "production server". Why wouldn't it be best for those servers just
<snip>
A. If you are a business and don't have a test/development server, you're an idiot, and will be out of business shortly, broke, after too many errors in production. And before you say anything: in addition to huge companies, I've worked for companies as small as 12 and even 6 people, and *everyone* had test/development servers.
B. Hosting providers, if you're not buying colo, do the testing and rollout of updates themselves, not trusting to the "vast majority of hosting company customers" to update with bug and security fixes.
mark
On Thu, Dec 29, 2011 at 1:10 PM, Bennett Haselton bennett@peacefire.org wrote:
>> If your service is important, then it is worth testing changes before making them on your important server. But no one else can tell you whether your server is that important or not... It's fairly trivial to run a 'yum update' on a lab server daily, and if anything updates, make sure that things still work before repeating it on the production box(es). The update checks can be scripted, but the "does it still work" test will be unique to your services.
> But these are all considerations mainly for power users; I'm still talking just about the vast majority of hosting company customers who just lease a dedicated or virtual private server, and don't even have a "test server" and a "production server". Why wouldn't it be best for those servers just to pick up and install updates automatically?
There's a chance it will break your service. If that isn't important enough for you to test, then yes, you should update automatically, but you don't get to blame someone else when it does break. It has to be your choice. But you are pretty much guaranteed to have known vulnerabilities if you don't update. All you have to do is look at the changelogs to see that.
>> Mine is to assume that there are very good reasons for 'Enterprise' distributions to go to the trouble of publishing updates. Install them. Always assume that there are still more vulnerabilities that you don't know about yet - and if you have to ask the question, you aren't going to do better than the developers and Red Hat at keeping up with them.
> Yes this is good advice for the individual user; what I was asking is what set of *defaults* would improve security the most for the vast majority of users (who cannot be counted on to change defaults -- or, indeed, to follow any advice that anyone thinks "everyone" "should" do!).
There is always a tradeoff between convenience and security and one size doesn't fit all. If everything on the site is public anyway then the most you have to lose is the service of the machine. If there is something valuable to steal then you should be prepared to do some extra work to protect it. In any case don't install or expose any services that aren't absolutely needed.