[CentOS] an actual hacked machine, in a preserved state
lowen at pari.edu
Wed Jan 4 17:32:55 UTC 2012
On Tuesday, January 03, 2012 06:12:10 PM Bennett Haselton wrote:
> I'm not sure what their logic is for recommending 80. But 72 bits
> already means that any attack is so improbable that you'd *literally*
> have to be more worried about the sun going supernova.
I'd be more worried about Eta Carinae than our sun; with its mass, it's likely to produce a GRB. The probability of it happening in our lifetime is quite low; yet, if it does happen in our lifetime (actually, if it happened about 7,500 years ago, since it's roughly that many light-years away!) it will be an extinction event. So we watch it over time (and we have plates of it going back to the late 1800s).
Likewise for security: the Gaussian curve does have outliers, after all, and while it is highly unlikely for a brute-force attack to actually come up with anything against a single server, it is still possible, partly due to the number of servers out there coupled with the sheer number of brute-forcers running. The odds are not 1 out of 4.7x10^21; they're much better than that, since there isn't just a single host attempting the attack. If I have a botnet of 10,000,000 infected PCs available to attack 100,000,000 servers (close to the actual number), what are the odds of one of those attacks succeeding? (The fact is that it has happened already; see my excerpted 'in the wild' brute-forcer dictionary below.)
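To put rough numbers on that, here is a back-of-the-envelope sketch. The per-guess odds (1 in 4.7x10^21) and the bot/server counts come from above; the 1,000 guesses per bot/server pair is purely illustrative:

```shell
# Probability that at least one attack in the whole campaign succeeds.
# p       = per-guess chance (1 / 4.7e21, the entropy figure above)
# guesses = 10,000,000 bots x 100,000,000 servers x 1,000 guesses each (assumed)
awk 'BEGIN {
    p = 1 / 4.7e21
    guesses = 1e7 * 1e8 * 1e3           # 1e18 total guesses
    # P(at least one success) = 1 - (1 - p)^guesses ~= p * guesses for tiny p
    printf "P(success) ~= %.4g\n", p * guesses
}'
```

Even vanishingly small per-guess odds stop looking negligible once you multiply by the number of hosts doing the guessing.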
> > The critical thing to remember is that in key auth the authenticating key never leaves the client system,...
> Actually, the top answer at that link appears to say that the server
> sends the nonce to the client, and only the client can successfully
> decrypt it. (Is that what you meant?)
That's session setup, not authentication. The server has to authenticate to the client first for session setup, but then client authentication is performed. Either way, the actual client authenticating key never traverses the wire and is unsniffable.
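A minimal illustration of why the key never needs to cross the wire: the server sends a challenge, the client signs it, and only the signature goes back. This sketches the challenge-response idea with openssl; the filenames and the 32-byte nonce size are arbitrary, and real SSH uses its own protocol, not these exact commands:

```shell
# "Client" side: generate a keypair; the private key (key.pem) never leaves here.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem 2>/dev/null
openssl pkey -in key.pem -pubout -out pub.pem

# "Server" side: issue a random challenge (nonce).
head -c 32 /dev/urandom > nonce.bin

# Client: sign the nonce with the private key; only the signature goes back.
openssl dgst -sha256 -sign key.pem -out sig.bin nonce.bin

# Server: verify with the public key it already has on file.
openssl dgst -sha256 -verify pub.pem -signature sig.bin nonce.bin
# prints "Verified OK" -- only nonce.bin and sig.bin ever crossed the "wire"
```

A sniffer sees a random nonce and a signature, neither of which lets it recover the private key or replay the authentication against a fresh nonce.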
> Furthermore, when you're dealing with probabilities that ridiculously
> small, they're overwhelmed by the probability that an attack will be
> found against the actual algorithm (which I think is your point about
> possible weaknesses in the stream cipher).
This has happened; read some SANS archives. There have been, and are, exploits in the wild against SSH and SSL; one even caused OpenBSD to back down from its claim of never having had a remotely exploitable root hole.
> However, *then* you have to take into account the fact that, similarly,
> the odds of a given machine being compromised by a man-in-the-middle
> attack combined with cryptanalysis of the stream cipher, is *also*
> overwhelmed by the probability of a break-in via an exploit in the
> software it's running. I mean, do you think I'm incorrect about that?
What you're missing is that a low probability does not prevent an attack from actually succeeding; people do win the lottery even with the odds stacked against them.
> Of the compromised machines on the Internet, what proportion do you
> think were hacked via MITM-and-advanced-crypto, compared to exploits in
> the services?
I don't have sufficient data to speculate. SANS or CERT may have that information.
> and if I hadn't stood my ground about that,
> the discussion never would have gotten around to SELinux, which, if it
> works in the manner described, may actually help.
The archives of this list already had the information about SELinux contained in this thread. Not to mention the clear and easily accessible documentation from the upstream vendor linked to from the CentOS website.
> The problem with such "basic stuff" is that in any field, if there's no
> way to directly test whether something has the desired effect or not, it
> can become part of accepted "common sense" even if it's ineffective.
Direct testing of both SELinux and iptables effectiveness is doable, and is done routinely by pen-testers. EL6 has the tools necessary to instrument and control both, and adding third-party repositories (in particular, there is a security repo out there) brings in even more.
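For reference, a few of the stock EL tools one might use for that kind of instrumentation. These commands are illustrative (most need root, ausearch comes from the audit package, and the hostname is a placeholder); scan only machines you administer:

```shell
# Check SELinux enforcement state and recent policy denials.
getenforce                      # Enforcing / Permissive / Disabled
sudo ausearch -m avc -ts today  # SELinux AVC denials logged today, if any

# Inspect the live iptables ruleset with packet/byte counters,
# to see which rules are actually matching traffic.
sudo iptables -L -n -v

# Probe the host from outside to verify what is really exposed.
nmap -sS -p 1-65535 your.server.example.com
```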
> If your server does get broken into
> and a customer sues you for compromising their data, and they find that
> you used passwords instead of keys for example, they can hire an
> "expert" to say that was a foolish choice that put the customer's data
> at risk.
There is this concept called due diligence. If an admin ignores known industry standards and then gets compromised because of that, then that admin is negligent. Thus, risk analysis and management is done to weigh the costs of the security against the costs of an exploit; or, to put it in the words of a security consultant we had here (the project is, unfortunately, under NDA, so I can't drop the name of that consultant): "You will be or are compromised now; you must think and operate that way to mitigate your risks." Regardless of the security you think you have, you will be compromised at some point.
Due diligence means being aware of that and being diligent enough that a server isn't compromised for two months or longer before you find it. Secure passwords by themselves are not enough. Staying patched, by itself, is not enough. iptables and other network firewalls by themselves are not enough. SELinux and other access controls (such as TOMOYO and others) by themselves are not enough. And none of these technologies are 'set it and forget it' technologies. You need awareness, multiple layers, and diligent monitoring.
And I would change your sentence slightly; it's not a matter of 'if' your server is going to get broken into, it's 'when.'
> Case in point: in the *entire history of the Internet*, do you think
> there's been a single attack that worked because squid was allowed to
> listen on a non-standard port, that would have been blocked if squid had
> been forced to listen on a standard port?
I'm sure there has been, whether it's documented or not. I'm not aware of a comprehensive study into all the possible avenues of access, but the various STIGs exist for valid reasons (see Johnny's post pointing to those standards and best practices). Best practices are things learned in the field by looking at what is being used, and has been used, to break in; they're not just made up out of whole cloth. And if the theory says something shouldn't be possible, but in reality it has happened, then the theory has issues; empirical data trumps all. It was supposedly impossible for the Titanic to sink, but it sank anyway.
But it's not just squid; SELinux implements mandatory access controls, meaning the policy is default-deny for all services, all files, and all ports. It isn't about squid alone; it's about a consistently implemented security policy where programs only get the access they have to have to do the job they need to do. Simply enforcing that rule consistently can help eliminate backdoor processes (which can easily be implemented over encrypted GRE tunnels through a loopback device; at least the couple I've seen were).
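As a concrete example of that default-deny on ports (a hedged sketch; port 3129 is arbitrary, and semanage comes from the policycoreutils-python package on EL6): squid can only bind ports labeled squid_port_t, so even a compromised squid can't quietly listen somewhere else unless an admin explicitly allows it:

```shell
# Show which ports the policy currently lets squid bind (squid_port_t).
sudo semanage port -l | grep squid

# Explicitly allow squid on a non-standard port the policy doesn't cover;
# anything not added this way stays default-deny under enforcing mode.
sudo semanage port -a -t squid_port_t -p tcp 3129
```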
You don't seem to have much experience dealing with today's Metasploit-driven, multilayer attacks.
> What's unique about security advice is that some of it can be rejected
> just on logical grounds, to move the discussion on to something more
> likely to help.
You need to learn more about what you speak of before you say such unfounded, speculative things, really. The attacks mentioned are real; we're not making this stuff up off the top of our heads, Bennett. Nor am I making up the contents of a brute-forcer dictionary I retrieved, about three years ago, from a compromised machine here, which contains the following passwords in it:
How do you suppose those passwords (along with 68,887 others) got in that dictionary? Seriously: the passwords are there and they didn't just appear out of thin air. The people running those passwords thought they were secure enough, too, I'm sure.
The slow brute-forcers are at work, and they are spreading. This is a fact; and some of those passwords are 12-character alphanumerics (some with punctuation symbols) with 72 bits of entropy, yet there they are, and not just one of them, either.
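The 72-bit figure checks out. A quick sketch, using the usual character-set assumptions (62 alphanumerics; roughly 95 printable ASCII characters if punctuation is included):

```shell
# Entropy of a random 12-character password: length * log2(alphabet size).
# awk has only natural log, so divide by log(2) to convert.
awk 'BEGIN {
    printf "alphanumeric (62 chars): %.1f bits\n", 12 * log(62) / log(2)
    printf "printable ASCII (95):    %.1f bits\n", 12 * log(95) / log(2)
}'
```

So those dictionary entries really do sit near 72 bits, and they were captured in the wild anyway; entropy only bounds blind guessing, not reuse, sniffing, or compromise of the other end.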
Facts are stubborn things.