Hi,
Is there a nice way to put EC encryption back on CentOS?
RHEL disabled it due to "patent issues", but is a third party providing EC-enabled packages for CentOS?
-- Eero
On 02/01/14 04:16 PM, Eero Volotinen wrote:
Hi,
Is there a nice way to put EC encryption back on CentOS?
RHEL disabled it due to "patent issues", but is a third party providing EC-enabled packages for CentOS?
It would have to come from an external repo. The goal of CentOS is to be binary compatible with RHEL, warts and all. So if you know of someone with a repo that has packaged it, it's best to use that, I would think.
Eero Volotinen wrote:
Is there a nice way to put EC encryption back on CentOS?
RHEL disabled it due to "patent issues", but is a third party providing EC-enabled packages for CentOS?
*Which* elliptic curve? I trust you've been reading the revelations from Snowden about the NSA putting a backdoor in the common ones, esp. the NIST ones.
mark
On 01/02/2014 01:22 PM, m.roth@5-cent.us wrote:
Eero Volotinen wrote:
Is there a nice way to put EC encryption back on CentOS?
RHEL disabled it due to "patent issues", but is a third party providing EC-enabled packages for CentOS?
*Which* elliptic curve? I trust you've been reading the revelations from Snowden about the NSA putting a backdoor in the common ones, esp. the NIST ones.
From what I've been able to find, this is a bit overstated.
There is *one* random number algorithm (Dual_EC_DRBG) associated with ECC that is believed to have been compromised. That it appeared vulnerable has long been known; Bruce Schneier wrote about it in 2007. It also happens to be inefficient and so is not widely used (but a few commercial products use it).
http://www.wired.com/politics/security/commentary/securitymatters/2007/11/se...
I was unable to find an associated vulnerability in Linux. I trust the OpenSSL folks would be on top of this faster than you can blink an eye if it were a current issue. They have not, from what I've seen, reacted to the revelations.
http://www.reuters.com/article/2013/12/20/us-usa-security-rsa-idUSBRE9BJ1C22...
-- David Benfell see https://parts-unknown.org/node/2 if you don't understand the attachment
From what I've been able to find, this is a bit overstated.
There is *one* random number algorithm (Dual_EC_DRBG) associated with ECC that is believed to have been compromised. That it appeared vulnerable has long been known; Bruce Schneier wrote about it in 2007. It also happens to be inefficient and so is not widely used (but a few commercial products use it).
It is not just believed to be compromised; it is compromised: http://blog.0xbadc0de.be/archives/155
Apache uses it in some rare cases, e.g. 'apache2 uses NID_X9_62_prime256v1 for the ECDH exchange'. My idea is to enable EC on CentOS to get PFS (perfect forward secrecy) and better encryption levels.
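As a quick sanity check, something like the following sketch (Python, standard library only, needs a reasonably recent Python; the host name is just a placeholder) shows which cipher a server actually negotiates; an ECDHE-prefixed name means the handshake used an elliptic-curve exchange and the session has forward secrecy:

    import socket, ssl

    # Minimal TLS handshake against a server; "www.example.org" is a
    # placeholder, substitute the server you care about.
    context = ssl.create_default_context()
    with socket.create_connection(("www.example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.org") as tls:
            # cipher() returns (name, protocol, secret_bits); an ECDHE-*
            # name indicates an elliptic-curve key exchange (PFS).
            print(tls.cipher())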
-- Eero
2014/1/3 David Benfell benfell@parts-unknown.org
I was unable to find an associated vulnerability in Linux. I trust the OpenSSL folks would be on top of this faster than you can blink an eye if it were a current issue. They have not, from what I've seen, reacted to the revelations.
Interesting read on the openssl-announce list: http://www.mail-archive.com/openssl-announce@openssl.org/msg00127.html Turns out the openssl implementation of Dual_EC_DRBG was broken anyway...
- Jitse
On 01/03/2014 03:28 AM, Jitse Klomp wrote:
2014/1/3 David Benfell benfell@parts-unknown.org
I was unable to find an associated vulnerability in Linux. I trust the OpenSSL folks would be on top of this faster than you can blink an eye if it were a current issue. They have not, from what I've seen, reacted to the revelations.
Interesting read on the openssl-announce list: http://www.mail-archive.com/openssl-announce@openssl.org/msg00127.html Turns out the openssl implementation of Dual_EC_DRBG was broken anyway...
I was just blown away by this: "What almost all commentators have missed is that hidden away in the small print (and subsequently confirmed by our specific query) is that if you want to be FIPS 140-2 compliant you MUST use the compromised points."
I don't even have words to comment on this!
Adrian
On 01/03/2014 11:01 AM, Adrian Sevcenco wrote:
I was just blown away by this: "What almost all commentators have missed is that hidden away in the small print (and subsequently confirmed by our specific query) is that if you want to be FIPS 140-2 compliant you MUST use the compromised points."
I don't even have words to comment on this!
I tweeted about this exact point a few minutes ago; given what is compromised and in what manner, and working back to what FIPS is, it helps dilute the shock. A bit. But then who's got the funds and resources to re-work the FIPS process with a new codebase? Will Red Hat?
- KB
On 01/03/2014 01:15 PM, Karanbir Singh wrote:
On 01/03/2014 11:01 AM, Adrian Sevcenco wrote:
I was just blown away by this: "What almost all commentators have missed is that hidden away in the small print (and subsequently confirmed by our specific query) is that if you want to be FIPS 140-2 compliant you MUST use the compromised points."
I don't even have words to comment on this!
I tweeted about this exact point a few minutes ago; given what is compromised and in what manner, and working back to what FIPS is, it helps dilute the shock. A bit. But then who's got the funds and resources to re-work the FIPS process with a new codebase? Will Red Hat?
At this point I am thinking: why bother (with re-certification)? Because of this (among other things), trust in the "FIPS process" and other "official" processes is in free fall. IMHO the underlying problem is not that a cipher/process/code was compromised, but that the supposedly _trustworthy_ supervising entity is in fact not trustworthy at all!
Adrian
One thing you need to understand:
There is a huge difference between asymmetric encryption and a cryptographically secure pseudo-random number generator. EC itself is secure. The default random number generator on Linux is /dev/urandom, and it does not use the backdoored NSA PRNG.
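To illustrate (a minimal sketch, nothing more): the usual way to get key material on Linux is exactly that kernel CSPRNG, e.g. via os.urandom() in Python:

    import os, binascii

    # os.urandom() reads from the kernel CSPRNG (the same source behind
    # /dev/urandom), seeded from the kernel entropy pool; it has nothing
    # to do with Dual_EC_DRBG.
    key_material = os.urandom(32)  # 256 bits of key material
    print(binascii.hexlify(key_material))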
Ahmed Hassan said the following on 03/01/2014 13:47:
There is a huge difference between asymmetric encryption and a cryptographically secure pseudo-random number generator. EC itself is secure. The default random number generator on Linux is /dev/urandom, and it does not use the backdoored NSA PRNG.
The algorithm behind /dev/urandom is not robust (http://eprint.iacr.org/2013/338.pdf)
With headless and/or virtual servers the issue is even bigger, because Linux may not be able to collect enough entropy to seed /dev/urandom
Entropy generator daemons such as timer_entropyd (http://www.vanheusden.com/te/), haveged (http://www.issihosts.com/haveged/) or randomsound (http://www.digital-scurf.org/software/randomsound) can be used to generate more entropy
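You can watch the kernel's own entropy estimate to see the starvation happen; on an idle headless VM the number tends to sit very low. A quick illustration in Python:

    # The kernel exposes its current entropy estimate in bits; the
    # daemons above exist to keep this pool from starving.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("entropy estimate (bits): " + f.read().strip())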
Ciao, luigi
Luigi Rosa wrote:
With headless and/or virtual servers the issue is even bigger, because
Linux may not be able to collect enough entropy to seed /dev/urandom
Is this a meaningful statement? How do you measure the "entropy" of a seed (which I take to be a string)? And if you can, is it true that you can decrypt a string with low entropy?
NB: What you say may be perfectly valid; I'd just like to know exactly what it means, if indeed it has a mathematical meaning.
Timothy Murphy said the following on 03/01/2014 14:20:
Is this a meaningful statement? How do you measure the "entropy" of a seed (which I take to be a string)? And if you can, is it true that you can decrypt a string with low entropy?
The mathematics behind a PRNG (or DRBG, to use NIST terminology) combined with elliptic curves is beyond my comprehension, so I have to take what the experts say for granted.
The PDF I quoted in my previous message goes into deep detail; you can refer to that paper if you need more information.
NB: What you say may be perfectly valid; I'd just like to know exactly what it means, if indeed it has a mathematical meaning.
In essence it means that if an algorithm builds its foundations on the assumption that each new number in a sequence is not predictable, then when that sequence generates predictable numbers, the algorithm fails.
There are some models that define or analyze whether a sequence is "random"; you can google around or take a look at http://www.issihosts.com/haveged/ais31.html
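As a toy illustration of what such tests do, here is the simplest one, a "monobit" count of ones versus zeros (real suites like AIS31 or NIST SP 800-22 go much further than this sketch):

    import os

    def ones_fraction(data):
        # Fraction of 1-bits in the data; a good source should come out
        # very close to 0.5.  This is only the crudest of the tests.
        bits = sum(bin(byte).count("1") for byte in bytearray(data))
        return bits / float(8 * len(data))

    print(ones_fraction(os.urandom(20000)))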
Mind that you can end up with a big headache :)
Ciao, luigi
Luigi Rosa wrote:
Is this a meaningful statement? How do you measure the "entropy" of a seed (which I take to be a string)? And if you can, is it true that you can decrypt a string with low entropy?
You deleted the statement I queried. Here it is: "With headless and/or virtual servers the issue is even bigger, because Linux may not be able to collect enough entropy to seed /dev/urandom"
The mathematics behind a PRNG (or DRBG, to use NIST terminology) combined with elliptic curves is beyond my comprehension, so I have to take what the experts say for granted.
I don't believe in "proof by expertise". You used the word "entropy". I'm asking what you mean by it.
The PDF I quoted in my previous message goes into deep detail; you can refer to that paper if you need more information.
You used the word. I'm asking what you meant by it.
There are some models that define or analyze whether a sequence is "random"; you can google around or take a look at http://www.issihosts.com/haveged/ais31.html
The nearest this comes to a definition of "empirical" entropy is: "Accumulate the nearest predecessor distance between byte values in a 256000 + 2560 bit sequence and calculate the empirical entropy".
On this basis the digits of pi are random, in which case it would be easy to supply random numbers.
On 1/3/2014 8:37 AM, Timothy Murphy wrote:
I'm asking what you meant by it.
Entropy has a standard meaning in computer science, see http://en.wikipedia.org/wiki/Information_entropy for an introductory discussion with various references.
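For a concrete illustration, the usual "empirical entropy" computation over a string's byte frequencies looks like this; strictly speaking it measures the observed frequency distribution, in bits per byte, with 8.0 as the maximum:

    import math
    from collections import Counter

    def empirical_entropy(data):
        # Shannon entropy of the byte-frequency distribution, in bits
        # per byte.
        counts = Counter(bytearray(data))
        n = float(len(data))
        return -sum(c / n * math.log(c / n, 2) for c in counts.values())

    print(empirical_entropy(b"aaaaaaaa"))                   # 0.0
    print(empirical_entropy(bytes(bytearray(range(256)))))  # 8.0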
John R Pierce wrote:
Entropy has a standard meaning in computer science, see http://en.wikipedia.org/wiki/Information_entropy for an introductory discussion with various references.
Shannon entropy only makes sense when applied to a random variable. It cannot be applied to a single string, as in this case.
Algorithmic entropy (Kolmogorov complexity) can be applied to a single string, but it cannot be measured directly.
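The nearest practical proxy I know of is compression: the compressed size gives a crude upper bound on the algorithmic entropy, as in this sketch:

    import os, zlib

    low = b"a" * 1000        # highly regular string
    high = os.urandom(1000)  # incompressible with overwhelming probability
    # Compressed size crudely upper-bounds the algorithmic entropy.
    print(len(zlib.compress(low, 9)))   # tiny
    print(len(zlib.compress(high, 9)))  # about 1000 bytes or more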
On 1/3/2014 4:25 PM, Timothy Murphy wrote:
Shannon entropy only makes sense when applied to a random variable. It cannot be applied to a single string, as in this case.
The seed of an algorithm like /dev/urandom is not a single variable; it's a big array of variables. These have to be created from sufficiently random external events to achieve a reasonable level of entropy, and if you continue to generate pseudo-random numbers when those external events aren't occurring at a high enough rate relative to your demand for new random numbers, eventually the 'entropy' runs out and the sequence becomes increasingly predictable.
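A toy model of that bookkeeping, purely illustrative (note this accounting is what makes /dev/random block; /dev/urandom keeps producing CSPRNG output regardless):

    class EntropyPool(object):
        # Toy model: credits from external events, debits on reads.
        def __init__(self, size_bits=4096):
            self.size_bits = size_bits  # the kernel pool has a fixed size
            self.bits = 0

        def credit(self, bits):
            # e.g. interrupt or disk timing jitter adds a few bits
            self.bits = min(self.bits + bits, self.size_bits)

        def draw(self, nbytes):
            # reading debits the estimate; a blocking reader waits at 0
            self.bits = max(self.bits - 8 * nbytes, 0)
            return self.bits

    pool = EntropyPool()
    pool.credit(256)
    print(pool.draw(16))  # 128 bits of estimated entropy remain
    print(pool.draw(32))  # 0 -- /dev/random would now block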
John R Pierce wrote:
The seed of an algorithm like /dev/urandom is not a single variable; it's a big array of variables. These have to be created from sufficiently random external events to achieve a reasonable level of entropy, and if you continue to generate pseudo-random numbers when those external events aren't occurring at a high enough rate relative to your demand for new random numbers, eventually the 'entropy' runs out and the sequence becomes increasingly predictable.
According to Wikipedia "A random seed (or seed state, or just seed) is a number (or vector) used to initialize a pseudorandom number generator."
It is impossible to measure the entropy of a single number, or vector. If you think it is possible, tell me how you measure it.
On 01/03/2014 03:36 AM, Adrian Sevcenco wrote:
IMHO the underlying problem is not that a cipher/process/code was compromised, but that the supposedly _trustworthy_ supervising entity is in fact not trustworthy at all!
It will be interesting to see how this plays out. I have enough experience with government to know that there are indeed people who really care about what they do and I'm inclined to accept that some of them at NIST are indeed really, really upset about this.
But if I understood and am remembering correctly, NSA's involvement was mandated by statute.
Back to a more technical point: If indeed the compromised algorithm is *not* enabled in openssl (as a build option) by default, how would apache be able to use it, even in rare instances, unless somebody actually selected that option?
-- David Benfell see https://parts-unknown.org/node/2 if you don't understand the attachment
Adrian Sevcenco wrote:
I was just blown away by this: "What almost all commentators have missed is that hidden away in the small print (and subsequently confirmed by our specific query) is that if you want to be FIPS 140-2 compliant you MUST use the compromised points."
I'm a complete innocent in this area, but is it necessary to be "FIPS 140-2 compliant" if you are not dealing with the US (or other?) government?