Hello CentOS list, I have a requirement to encrypt a very large tar file on a daily basis. The tar file is over 250G in size and it is a data backup. Every night the server generates a 250G data backup, which is tarred into one tarball, and I want to encrypt this big tarball. So far I have tried two approaches with no success. 1) Generating an RSA 2048 public/private key pair via "openssl req -x509 -nodes -newkey rsa:2048 -keyout private.pem -out public.pem" and using the public key to encrypt the big tar file. The encryption command I used is "openssl smime -encrypt -aes256 -in backup.tar -binary -outform DER -out backup.tar.ssl public.pem". The resulting backup.tar.ssl file is only 2G; the encryption process stops there and refuses to go further. I cannot get past 2G. 2) Generating a GPG public/private key pair via "gpg --gen-key" and then encrypting with "gpg -e -u 'backup' -r 'backup' backup.tar". However, the GPG encryption stops at a file size of 50G and refuses to go further, and the gpg process took over 48 hours. The server is very capable: 8 Intel 2.33 GHz CPUs and 16G of RAM, running the latest RHEL 5.11 (CentOS 5 is pretty much release-compatible with RHEL 5). I searched Google and found a technique that uses symmetric encryption: generate a symmetric key every day and use the public/private key pair to encrypt that symmetric key. The drawback is that we don't know how to manage the symmetric key securely. We can't leave the unencrypted symmetric key on the server, yet we have to use the unencrypted key during the encryption process. Plus, we would then need to manage three things securely: the symmetric encryption key and the public and private key pair. Has anyone had experience managing asymmetric encryption for very large files, and what is the best practice for that? Thanks. - xinhuan
On Wed, Dec 17, 2014 at 05:14:21PM +0000, Xinhuan Zheng wrote:
used is "openssl smime -encrypt -aes256 -in backup.tar -binary -outform DEM -out backup.tar.ssl public.pem². The resulting backup.tar.ssl file is only 2G then encryption process stops there and refuse to do more. Cannot get around 2G.
It seems likely that openssl hasn't been compiled with large-file support. Not so uncommon with RH5.
Can you send the output to stdout and redirect it? Or, if that fails, send it to stdout and filter it through "dd" to write the file. At that point openssl is only writing to a pipe and won't hit the 2G limit.
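For example, something like this untested sketch, reusing the key and file names from the original post (the dd block size is arbitrary):

# openssl reads the tarball from stdin and writes DER to stdout, so it
# never does the output file I/O itself; dd handles the >2G file
openssl smime -encrypt -aes256 -binary -outform DER public.pem \
    < backup.tar | dd obs=1M of=backup.tar.ssl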
On Wed, Dec 17, 2014 at 11:14 AM, Xinhuan Zheng xzheng@christianbook.com wrote:
I have a requirement to encrypt a very large tar file on a daily basis. The tar file is over 250G in size and it is a data backup. Every night the server generates a 250G data backup, which is tarred into one tarball, and I want to encrypt this big tarball. So far I have tried two approaches with no success.
1) Generating an RSA 2048 public/private key pair via "openssl req -x509 -nodes -newkey rsa:2048 -keyout private.pem -out public.pem" and using the public key to encrypt the big tar file. The encryption command I used is "openssl smime -encrypt -aes256 -in backup.tar -binary -outform DER -out backup.tar.ssl public.pem". The resulting backup.tar.ssl file is only 2G; the encryption process stops there and refuses to go further. I cannot get past 2G.
What happens if you use a pipeline or redirection instead of the -in and -out files? I regularly write large tapes with something like: openssl aes-256-cbc -salt -k password <input.tar.gz | dd bs=10240 obs=10240 of=/dev/nst0 It's not quite the same thing, but there does not seem to be an inherent size limit in openssl as long as it is not handling the files itself, and it runs at a reasonable speed, so it must be using the Intel hardware support.
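For completeness, the read-back/verify side of that tape pipeline would presumably look something like this (untested; same hard-coded password as in the example above):

# reverse the pipeline: dd reads the tape, openssl decrypts,
# tar just lists the contents to verify the archive is intact
dd if=/dev/nst0 bs=10240 | openssl aes-256-cbc -d -k password | tar tzf - > /dev/null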
On 17.12.2014 at 18:42, Les Mikesell lesmikesell@gmail.com wrote:
On Wed, Dec 17, 2014 at 11:14 AM, Xinhuan Zheng xzheng@christianbook.com wrote:
<snip>
Furthermore - is there a need to use one big tar file? Even with a capable workstation/server for handling such big files, there are also advantages to splitting such backups (e.g. man split; see the sketch below) ...
-- LF
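Combining the piping idea with that split suggestion might look roughly like this (chunk size, paths and key name are only examples; note the pieces must be re-joined before gpg can decrypt them):

# encrypt the stream once, then cut the ciphertext into ~10G pieces
tar cf - /data | gpg -e -r backup | split -b 10240m - backup.tar.gpg.part-
# restore: cat backup.tar.gpg.part-* | gpg -d | tar xf -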
On 17/12/14 18:54, Leon Fauster wrote:
<snip>
Is it possible for you to use gpg? You could do something like: tar zcf /something - | gpg -e -r otherkey | cat - > backup.tgz
Regards
On Wed, Dec 17, 2014 at 06:58:40PM +0100, Markus wrote:
<snip>
Is it possible for you to use gpg? You could do something like: tar zcf /something - | gpg -e -r otherkey | cat - > backup.tgz
Regards
or better yet: tar zcf /something - | gpg -e -r otherkey > backup.tgz
If gpg can write to stdout, then it can write to a redirected stdout. No need for a superfluous cat in there.
And (without checking the man page) I'm not sure about the 'f' in the tar command line... I thought 'f' referred to the output file?
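A corrected form of that pipeline would presumably be something like this (same made-up path and key name as in the quoted command):

# 'f -' makes tar write the archive to stdout; gpg then reads it from
# stdin and -o names the encrypted output file
tar czf - /something | gpg -e -r otherkey -o backup.tar.gz.gpg

Calling the result .tgz, as in the quoted command, would be misleading, since the output is a gpg file rather than a plain gzip'd tar.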
hi all
Sorry for my poor English...
But you need to encrypt that large file symmetrically; use the asymmetric method (the public/private key pair) only to encrypt the symmetric key. On 17/12/2014 15:58, "Markus" markus.scharitzer@gmail.com wrote:
<snip>
I would rather work on single files, or tars on a per-directory basis. Using a single big file creates a very "large" single point of failure. Or use an encrypted file system (of course, also a single point of failure, but probably easier to handle).
Kai
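A rough sketch of one way to set up an encrypted filesystem inside an ordinary container file (device names, size and mount point are made up; EL5-era cryptsetup generally wants an explicit loop device):

dd if=/dev/zero of=/srv/vault.img bs=1M count=1 seek=307199   # ~300G sparse container file
losetup /dev/loop0 /srv/vault.img
cryptsetup luksFormat /dev/loop0          # prompts for a passphrase
cryptsetup luksOpen /dev/loop0 backupvault
mkfs -t ext3 /dev/mapper/backupvault
mount /dev/mapper/backupvault /mnt/backup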
Hello,
On Thu, 18 Dec 2014 16:51:31 +0100 Kai Schaetzl maillists@conactive.com wrote:
I would rather work on single files, or tars on a per-directory basis. Using a single big file creates a very "large" single point of failure. Or use an encrypted file system (of course, also a single point of failure, but probably easier to handle).
The bad point of using an encrypted fs, at least in the OP's case, is that to move the encrypted data somewhere else you need to move the hardware containing the fs :-(. Also, it doesn't allow changing the encryption key very often. I think an encrypted fs addresses other security/confidentiality issues, but the OP should be more precise about his needs and the context.
My 2 cents.
Regards,
On Thu, Dec 18, 2014 at 10:41 AM, wwp subscript@free.fr wrote:
<snip>
The bad point of using an encrypted fs, at least in the OP's case, is that to move the encrypted data somewhere else you need to move the hardware containing the fs :-(.
Which might be as simple as swapping a USB key or portable drive.
Also, it doesn't allow changing the encryption key very often. I think an encrypted fs addresses other security/confidentiality issues, but the OP should be more precise about his needs and the context.
Yes, how the backup copies will be managed after encryption would have a lot to do with picking the most convenient approach. One thing that would be possible on an encrypted file system would be a backup approach that stores multiple copies, de-duping unchanged files as you can do with rsync, rdiff-backup, backuppc, etc. Those can only work if the software involved sees the unencrypted files.
From: Kai Schaetzl maillists@conactive.com
I would rather work on single files, or tars on a per-directory basis. Using a single big file creates a very "large" single point of failure. Or use an encrypted file system (of course, also a single point of failure, but probably easier to handle).
Afio is supposed to have better error handling than tar, and other nice options like pgp.
JD
On Wed, Dec 17, 2014 at 12:14 PM, Xinhuan Zheng xzheng@christianbook.com wrote:
<snip>
GPG is really what you want to be using for this. OpenSSL is a general toolkit that provides a lot of good functions, but you need to cobble some things together yourself. GPG is meant to handle all of the other parts of dealing with files.
I will expand on what someone else mentioned -- asymmetric encryption is not meant for encrypting bulk data, performs very poorly at it, and has a lot of other limitations. The correct way to handle this is to create a symmetric key and use that to encrypt the data, then use asymmetric encryption to encrypt only the symmetric key.
GPG takes care of this all internally, so that's what you should be using.
❧ Brian Mathis @orev
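For comparison, a hand-rolled version of the hybrid scheme Brian describes might look like the sketch below, using the key pair from the original post (file names are illustrative, and the bulk step is piped to avoid the 2G file issue):

# 1. fresh random session key (base64 so it is a single printable line)
openssl rand -base64 48 > session.key
# 2. bulk-encrypt the tarball with the session key (symmetric, fast)
openssl enc -aes-256-cbc -salt -pass file:session.key \
    < backup.tar | dd obs=1M of=backup.tar.enc
# 3. wrap the session key with the RSA public key (the self-signed cert)
openssl rsautl -encrypt -certin -inkey public.pem -in session.key -out session.key.rsa
# 4. get rid of the plaintext session key
shred -u session.key
# restore: openssl rsautl -decrypt -inkey private.pem -in session.key.rsa -out session.key
#          openssl enc -d -aes-256-cbc -pass file:session.key < backup.tar.enc > backup.tar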
On Fri, Dec 19, 2014 at 2:40 PM, Brian Mathis brian.mathis+centos@betteradmin.com wrote:
<snip>
Will GPG use the Intel AES hardware acceleration - in the version available for CentOS 5?
On Fri, Dec 19, 2014 at 3:48 PM, Les Mikesell lesmikesell@gmail.com wrote:
<snip>
Will GPG use the Intel AES hardware acceleration - in the version available for CentOS 5?
-- Les Mikesell
It doesn't appear to be available for any program running on CentOS 5. https://www.centos.org/forums/viewtopic.php?t=17713
❧ Brian Mathis @orev
On 12/19/2014 1:22 PM, Brian Mathis wrote:
It doesn't appear to be available for any program running on CentOS 5. https://www.centos.org/forums/viewtopic.php?t=17713
That article is only talking about openssl... openssh, gpg, and others use their own crypto implementations.
Not CentOS/RHEL specific, but... Intel claims OpenSSL v1.0 has direct support, and 0.9.8k+ has support via a patch.
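A quick way to gauge whether a given openssl build is actually getting hardware help is to compare its generic and EVP speed tests (a rough check; absolute numbers depend on the CPU and build):

# the -evp path is where an AES-NI-enabled build would use the hardware;
# a large gap between the two runs suggests acceleration is in play
openssl speed aes-256-cbc
openssl speed -evp aes-256-cbc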