[CentOS] what percent of time are there unpatched exploits against default config?

Wed Dec 28 09:04:38 UTC 2011
夜神 岩男 <supergiantpotato at yahoo.co.jp>

On 12/28/2011 04:40 PM, Bennett Haselton wrote:
> On Tue, Dec 27, 2011 at 10:17 PM, Rilindo Foster<rilindo at me.com>  wrote:
>> On Dec 27, 2011, at 11:29 PM, Bennett Haselton<bennett at peacefire.org>
>>
>> What was the nature of the break-in, if I may ask?
>>
>
> I don't know how they did it, only that the hosting company had to take the
> server offline because they said it was launching a DoS attack against a remote
> host and using huge amounts of bandwidth in the process.  The top priority
> was to get the machine back online so they reformatted it and re-connected
> it, so there are no longer any logs showing what might have happened.
> (Although of course once the server is compromised, presumably the logs can
> be rewritten to say anything anyway.)

Stopping right there, it sounds like the hosting company doesn't know 
their stuff.

Logs should always be replicated remotely in a serious production 
environment, and I would say that any actual hosting company -- being a 
group whose profession it is to host things -- is the very definition of 
that category.

Yes, logs can get messed with. But everything up to the moment of 
exploit should be replicated remotely for later investigation, whether 
or not the specific physical machine itself is wiped. The only way to 
get around that completely is to compromise the remote logger as well, 
and if someone is going to that much trouble, especially across custom 
setups and tiny spins (I don't know many people who use standard 
full-blown installs for remote logging machines), then they are good 
enough to have cooked your goose anyway.
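
To be concrete, this is cheap to set up. On a stock CentOS box the 
syslog daemon can forward a copy of everything to a second machine with 
a couple of lines of configuration. A minimal sketch, assuming rsyslog 
(the default on CentOS 6; sysklogd on CentOS 5 uses the same 
"*.* @host" forwarding syntax, UDP only) and a placeholder hostname:

  # /etc/rsyslog.conf on the monitored host:
  # send a copy of every message to the remote logger
  # (@ = UDP, @@ = TCP); loghost.example.com is a placeholder
  *.* @@loghost.example.com:514

  # /etc/rsyslog.conf on the logging host:
  # accept TCP syslog from the monitored machines
  $ModLoad imtcp
  $InputTCPServerRun 514

An attacker can still scrub the local files, but the copies that left 
the box before the compromise are out of his reach.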

My point is, I think server management is at least as much to blame as 
any specific piece of software involved here.

If that were not the case, why didn't my servers start doing the same thing?

> Well that's what I'm trying to determine.  Is there any set of default
> settings that will make a server secure without requiring the admin to
> spend more than, say, 30 minutes per week on maintenance tasks like reading
> security newsletters, and applying patches?  And if there isn't, are there
> design changes that could make it so that it was?
>
> Because if an OS/webserver/web app combination requires more than, say,
> half an hour per week of "maintenance", then for the vast majority of
> servers and VPSs on the Internet, the "maintenance" is not going to get
> done.  It doesn't matter what our opinion is about whose fault it is or
> whether admins "should" be more diligent.  The maintenance won't get done
> and the machines will continue to get hacked.  (And half an hour per week
> is probably a generous estimate of how much work most VPS admins would be
> willing to do.)
>
> On the other hand, if the most common causes of break-ins can be identified,
> maybe there's a way to stop those with good default settings and automated
> processes.  For example, if exploitable web apps are a common source of
> break-ins, maybe the standard should be to have them auto-update themselves
> like the operating system.  (Last I checked, WordPress and similar programs
> could *check* if updates were available, and alert you next time you signed
> in, but they didn't actually patch themselves.  So if you never signed in
> to a web app on a site that you'd forgotten about, you might never realize
> it needed patching.)

You just paraphrased the entire market case for professional hosting 
providers and the security community, China's (correct) assumptions in 
funding a cracking army, the reason browser security is impossible, and 
so on.
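
For what it is worth, the OS half of that wish already exists on 
CentOS: the yum-cron package can apply updates automatically every 
night with nobody logging in. A minimal sketch for CentOS 5/6, run as 
root, using the stock package and service names:

  # install and enable unattended nightly updates
  yum install -y yum-cron
  chkconfig yum-cron on
  service yum-cron start

The gap Bennett is describing is on top of the OS: as he notes, 
WordPress and friends only check for updates, so the web applications 
have no equivalent mechanism unless someone wires one up themselves.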