On 1/4/2012 9:32 AM, Lamar Owen wrote:
> On Tuesday, January 03, 2012 06:12:10 PM Bennett Haselton wrote:
>> I'm not sure what their logic is for recommending 80. But 72 bits
>> already means that any attack is so improbable that you'd *literally*
>> have to be more worried about the sun going supernova.

> I'd be more worried about Eta Carinae than our sun, as with its mass it's likely to be a GRB. The probability of it happening in our lifetime is quite low; yet, if it does happen in our lifetime (actually, if it happened about 7,500 years ago!) it will be an extinction event. So we watch it over time (and we have plates of it going back into the late 1800s).
>
> Likewise for security; the gaussian curve does have outliers, after all, and while it is highly unlikely for a brute-force attack to actually come up with anything against a single server it is still possible, partially due to the number of servers out there coupled with the sheer number of brute-forcers running. The odds are not 1 out of 4.7x10^21; they're much better than that since there isn't just a single host attempting the attack. If I have a botnet of 10,000,000 infected PCs available to attack 100,000,000 servers (close to the number), what are the odds of one of those attacks succeeding? (The fact is that it has happened already; see my excerpted 'in the wild' brute-forcer dictionary below.)

(1) Someone already raised the issue of what happens if you have 10 million infected machines instead of just 1; multiple people pointed out that it doesn't matter, because the limiting factor is the speed at which sshd can accept or reject login requests, so the attacker gains nothing by having 10 million machines instead of 1.

(2) If there are 100 million machines being attacked, that still doesn't make a brute-force attack any more likely for my machine. It's not correct to say that if 10 million of those 100 million machines are likely to get compromised, then mine has a 10% chance of being compromised, because with a 12-char random password the odds are much lower for me than for others in the sample. If *everyone* used a 12-char random password, then the odds are that *none* of the 10 million machines attacking 100 million servers would hit on a success, not when there are on the order of 10^21 possible passwords to choose from (rough arithmetic in the P.S. below).

>>> The critical thing to remember is that in key auth the authenticating key never leaves the client system,...

>> Actually, the top answer at that link appears to say that the server
>> sends the nonce to the client, and only the client can successfully
>> decrypt it. (Is that what you meant?)

> That's session setup, not authentication.

The paragraph I'm reading appears to say that the server sends the nonce to the client even for *authentication* (after session setup):

http://security.stackexchange.com/questions/3887/is-using-a-public-key-for-logging-in-to-ssh-any-better-than-saving-a-password

"After the channel is functional and secure... the server has the public key of the user stored. What happens next is that the server creates a random value (nonce), encrypts it with the public key and sends it to the user. If the user is who is supposed to be, he can decrypt the challenge and send it back to the server".

So that's what I meant... you'd said the client sends the nonce to the server, whereas the page said the server sends the nonce to the client... just wanted to make sure I wasn't missing anything.
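In case it helps pin down what we're each describing, here is a toy sketch of the challenge-response that answer talks about, written as textbook RSA with tiny numbers. It's purely illustrative (no padding, and as I understand it the real SSH protocol uses signatures rather than raw nonce decryption); the only point it shows is the one you made, that the private key never leaves the client:

# Toy, textbook-RSA illustration of the challenge-response described in the
# stackexchange answer quoted above. Tiny numbers, no padding; do NOT use
# for anything real. Needs Python 3.8+ for pow(e, -1, m).
import random

# Client's keypair. Only (n, e), the public half, is ever given to the server.
p, q = 61, 53
n = p * q                            # modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, stays on the client

# Server side: pick a random nonce and encrypt it to the stored public key.
nonce = random.randrange(2, n)
challenge = pow(nonce, e, n)

# Client side: decrypt the challenge with the private key and send the result back.
response = pow(challenge, d, n)

# Server side: a correct response proves the client holds the private key,
# yet the key itself never crossed the wire.
assert response == nonce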
> The server has to auth to the client first for session setup, but then client auth is performed. But either way the actual client authenticating key never traverses the wire and is unsniffable.

>> Furthermore, when you're dealing with probabilities that ridiculously
>> small, they're overwhelmed by the probability that an attack will be
>> found against the actual algorithm (which I think is your point about
>> possible weaknesses in the stream cipher).

> This has happened; read some SANS archives. There have been and are exploits in the wild against SSH and SSL; one even caused OpenBSD to have to back down from its claim of never having had a remotely exploitable root attack.

>> However, *then* you have to take into account the fact that, similarly,
>> the odds of a given machine being compromised by a man-in-the-middle
>> attack combined with cryptanalysis of the stream cipher, is *also*
>> overwhelmed by the probability of a break-in via an exploit in the
>> software it's running. I mean, do you think I'm incorrect about that?

> What you're missing is that low probability is not a preventer of an actual attack succeeding; people do win the lottery even with the odds stacked against them.

>> Of the compromised machines on the Internet, what proportion do you
>> think were hacked via MITM-and-advanced-crypto, compared to exploits in
>> the services?

> I don't have sufficient data to speculate. SANS or CERT may have that information.

Well, what would you guess, based on what you think is likely? If I bet you that more machines were compromised by exploits in services, and I offered you 100-to-1 odds in your favor, would you take it? :)

>> and if I hadn't stood my ground about that,
>> the discussion never would have gotten around to SELinux, which, if it
>> works in the manner described, may actually help.

> The archives of this list already had the information about SELinux contained in this thread. Not to mention the clear and easily accessible documentation from the upstream vendor linked to from the CentOS website.

Well, every one of the thousands of features and functions of Linux is indexed by Google on the web *somewhere* :) The question is whether you'll get pointed to it if you ask for help.

>> The problem with such "basic stuff" is that in any field, if there's no
>> way to directly test whether something has the desired effect or not, it
>> can become part of accepted "common sense" even if it's ineffective.

> Direct testing of both SELinux and iptables effectiveness is doable, and is done routinely by pen-testers. EL6 has the tools necessary to instrument and control both, and by adding third-party repositories (in particular there is a security repo out there

I didn't doubt that SELinux and iptables both do what they say they do, or that they reduce the risk of a break-in. My point was that other pieces of "lore" (like "ssh keys reduce the chance of a break-in more than 12-char passwords do") have the potential to become part of "folk wisdom" despite never having been tested directly and despite not actually making any difference.

>> If your server does get broken into
>> and a customer sues you for compromising their data, and they find that
>> you used passwords instead of keys for example, they can hire an
>> "expert" to say that was a foolish choice that put the customer's data
>> at risk.

> There is this concept called due diligence. If an admin ignores known industry standards and then gets compromised because of that, then that admin is negligent.

If they get compromised *because* they ignored industry standards, yes!
But if they ignored industry standards that had no bearing on security, and they then get compromised some other way (which perhaps was no more likely to happen to them than to anyone else; they were just unlucky), then they're not negligent.

> Thus, risk analysis and management is done to weigh the costs of the security against the costs of exploit; or, to put it in the words of a security consultant we had here (the project is, unfortunately, under NDA, so I can't drop the name of that consultant): "You will be or are compromised now; you must think and operate that way to mitigate your risks." Regardless of the security you think you have, you will be compromised at some point.
>
> The due diligence is being aware of that and being diligent enough so that a server won't have been compromised for two months or longer before you find it. Secure passwords by themselves are not enough. Staying patched, by itself, is not enough. IPtables and other network firewalls by themselves are not enough. SELinux and other access controls (such as TOMOYO and others) by themselves are not enough. And none of these technologies are 'set it and forget it' technologies. You need awareness, multiple layers, and diligent monitoring.
>
> And I would change your sentence slightly; it's not a matter of 'if' your server is going to get broken into, it's 'when.'

>> Case in point: in the *entire history of the Internet*, do you think
>> there's been a single attack that worked because squid was allowed to
>> listen on a non-standard port, that would have been blocked if squid had
>> been forced to listen on a standard port?

> I'm sure there has been, whether it's documented or not. I'm not aware of a comprehensive study into all the possible avenues of access; but the various STIGs exist for valid reasons (see Johnny's post pointing to those standards and best practices). Best practices are things learned in the field by looking to see what is being used and has been used to break in; they're not just made up out of whole cloth. And if the theory says it shouldn't be possible, but in reality it has happened, then the theory has issues; empirical data trumps all. It was impossible for Titanic to sink, but it did anyway.
>
> But it's not just squid; SELinux implements mandatory access controls; this means that the policy is default deny for all services, all files, and all ports. It has nothing to do with squid only; it's about a consistently implemented security policy where programs only get the access that they have to have to do the job they need to do. Simply enforcing that rule consistently can help eliminate backdoor processes (which can easily be implemented over encrypted GRE tunnels through a loopback device; at least the couple I've seen were).

Yes, the totality of SELinux restrictions sounds like it could make a system more secure, if it helps guard against exploits in the services and the OS. My point was that some individual restrictions may not make sense. But that's not a big deal if there's an easy way to disable individual restrictions.

> You don't seem to have much experience with dealing with today's metasploit-driven multilayer attacks.

>> What's unique about security advice is that some of it can be rejected
>> just on logical grounds, to move the discussion on to something more
>> likely to help.

> You need to learn more about what you speak of before you say such unfounded speculative things, really. The attacks mentioned are real; we're not making this stuff up off the top of our heads, Bennett.
> Nor am I making up the contents of a brute-forcer dictionary I retrieved, about three years ago, from a compromised machine here, which contains the following passwords in it:
>
> ...
> root:LdP9cdON88yW
> root:u2x2bz
> root:6e51R12B3Wr0
> root:nb0M4uHbI6M
> root:c3qLzdl2ojFB
> root:LX5ktj
> root:34KQ
> root:8kLKwwpPD
> root:Bl95X1nU
> root:3zSlRG73r17
> root:fDb8
> root:cAeM1KurR
> root:MXf3RX7
> root:4jpk
> root:j00U3bG1VuA
> root:HYQ9jbWbgjz3
> root:Ex4yI8
> root:k9M0AQUVS5D
> root:0U9mW4Wh
> root:2HhF19
> root:EmGKf4
> root:8NI877k8d5v
> root:K539vxaBR
> root:5gvksF8g55b
> root:TO553p9E
> root:7LX66rL7yx1F
> root:uOU8k03cK2P
> root:l9g7QmC9ev0
> root:E8Ab
> root:98WZ4C55
> root:kIpfB0Pr3fe2
> ...
>
> How do you suppose those passwords (along with 68,887 others) got in that dictionary?

Well, in that case it looks like we have the answer:

http://pastebin.com/bnu6ZSvj

Someone broke into a League of Legends server and posted a bunch of user account passwords. So someone added them to a dictionary, hoping that if just one of the users they were attacking happened to be a League of Legends player, and happened to use the same password on another system they were targeting, it would work. But to avoid that problem, just don't re-use your root password anywhere else.

> Seriously: the passwords are there and they didn't just appear out of thin air. The people running those passwords thought they were secure enough, too, I'm sure.
>
> The slow brute-forcers are at work, and are spreading. This is a fact; and some of those passwords are 12-character alphanumerics (some with punctuation symbols) with 72 bits of entropy, yet there they are, and not just one of them, either.

Well, yes, of course an attacker can try *particular* 12-character passwords; I never said they couldn't :) And in this case the attacker stole the passwords by grabbing them en masse from a database, not by brute-forcing them.

To be absolutely clear: do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine got in by brute-forcing the password? As opposed to, say, using an underground exploit?

Bennett
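P.S. To put rough numbers on the brute-forcing question, here is the back-of-the-envelope calculation I have in mind. The per-server guess rate and the machine counts below are illustrative assumptions (the ten-guesses-per-second figure just stands in for "sshd's accept/reject rate, not the botnet, is the bottleneck"), not measured values:

# Back-of-the-envelope odds of online brute-forcing one 12-char random password.
# The guess rate and machine counts are assumptions for illustration only.
keyspace = 62 ** 12                # 12 chars from [a-zA-Z0-9], about 3.2e21 (~71.5 bits)

guesses_per_server_per_sec = 10    # assumed: sshd's accept/reject rate is the limit
seconds_per_year = 365 * 24 * 3600
years = 10

guesses_per_server = guesses_per_server_per_sec * seconds_per_year * years
p_one_server = guesses_per_server / keyspace
print("P(cracking one given server within %d years): %.3g" % (years, p_one_server))

# Even across 100 million servers, all using 12-char random passwords,
# the expected number of successful brute-forces stays tiny:
servers = 100_000_000
print("Expected successes across %d servers: %.3g" % (servers, p_one_server * servers))

With those (generous) assumptions the per-server probability comes out around 10^-12, and the expected number of successes across all 100 million servers is around 10^-4, which is why I keep saying the realistic way in is an exploit, not a guessed 12-char random password.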