Hello,
I want to mount a directory from one server on another over the internet. I was looking at NFSv4, but there are no security mechanisms. I need an encrypted connection using a private key (something like SFTP).
Or, alternatively: is there a package in the CentOS repos (or EPEL) that can mount a directory over the internet using a private key and make differential backups (like rdiff-backup)?
Thank you very much for links or other resources to work from. Martin Šťastný
On 09/04/2009 11:23 AM, happymaster23 wrote:
Why not just use rsync over ssh?
Thank you for the reply,
but rsync only synchronizes data (errors included), so that is not a backup. If there is data corruption on the main server and the backup server then connects and synchronizes all the corrupted data, I have nothing :).
rdiff-backup, for example, works with increments, so you can restore data from a year back...
2009/9/4 Johnny Hughes johnny@centos.org:
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Hi, you're searching for a solution that makes snapshots with hardlinks:
1) use rsync --delete over ssh
2) use cp -al to create generations
3) rotate the generations daily, just with mv
The generations use nearly no additional disk space: only changes in the file system (i.e. additions and modifications) consume space, because hardlinks are used for the rest of the files. For a changed file, rsync replaces the hardlink in the current generation of your backup; the hardlinks of the older generations stay intact, so the older physical file stays intact. Remember, a file stays "alive" as long as there's at least one hardlink pointing to it. This mechanism is the answer to your worries: if data corruption occurs in one of your files, your backups from the past 'n' days will still contain a good version of it, where 'n' is the number of generations you keep.
An example:

day #1:
=======
* first rsync happens, lots of files will be created
daily.0/abc (hardlink to file abc with inode 2235)
daily.0/def (hardlink to file def with inode 2249)
daily.0/ghi (hardlink to file ghi with inode 3456)

day #2:
=======
* do a 'cp -al daily.0 daily.1'
* do the new rsync on daily.0, modified file "abc" coming over
* the hardlink daily.1/abc stays untouched (and so does the file)
* the hardlink daily.0/abc is a new one, as the file is a new one
daily.0/abc (NOTE: hardlink to file abc with inode 8877!)
daily.0/def (hardlink to file def with inode 2249)
daily.0/ghi (hardlink to file ghi with inode 3456)
daily.1/abc (hardlink to file abc with inode 2235)
daily.1/def (hardlink to file def with inode 2249)
daily.1/ghi (hardlink to file ghi with inode 3456)
Each of the files def and ghi consumes disk space only once, whereas abc from daily.0 and abc from daily.1 are different files with different inodes and of course consume twice the disk space.
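The whole scheme above can be tried locally in a few lines of shell. This is only a sketch with throwaway paths (in real use the "sync" step would be something like 'rsync -a --delete -e ssh src/ daily.0/'); rsync's replace-by-rename behavior is simulated here so the demo runs anywhere:

```shell
#!/bin/sh
# Demo of the hardlink-generation scheme described above.
set -e
B=/tmp/gen-demo; rm -rf "$B"; mkdir -p "$B/daily.0"

# day #1: first sync creates the files
echo "version 1" > "$B/daily.0/abc"

# day #2: rotate the generations, then hardlink-copy the newest one
if [ -d "$B/daily.1" ]; then mv "$B/daily.1" "$B/daily.2"; fi
cp -al "$B/daily.0" "$B/daily.1"

# rsync replaces a changed file by writing a temp file and renaming it,
# which breaks the hardlink; simulate that for file "abc":
echo "version 2" > "$B/daily.0/abc.tmp"
mv "$B/daily.0/abc.tmp" "$B/daily.0/abc"

cat "$B/daily.1/abc"   # prints: version 1  (old generation survives)
cat "$B/daily.0/abc"   # prints: version 2  (current generation)
```

After the rename, daily.0/abc has a new inode while daily.1/abc still points to the old one, which is exactly why older generations survive corruption of the source.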
You may secure your ssh connection even more by not using root, i.e. by using an unprivileged user. In that case you'd have to add a sudo entry (via 'visudo') allowing the unprivileged user to run /usr/bin/rsync as the superuser, i.e. on EVERY file in the system. The sudo line would be: backupuser ALL=(root) NOPASSWD: /usr/bin/rsync
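With that sudo entry in place, the client side can tell the remote rsync to run under sudo via --rsync-path; the host, key path, and directories below are placeholders:

```shell
# Run as 'backupuser'; the remote rsync runs as root via sudo,
# so it can read every file on the server.
rsync -a --delete \
  -e "ssh -i /home/backupuser/.ssh/backup_key" \
  --rsync-path="sudo /usr/bin/rsync" \
  backupuser@server.example.com:/etc/ /backup/daily.0/etc/
```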
Of course you should mount the partition you're backing up to read-write only when you have to; otherwise it should stay unmounted, or at least mounted read-only. The machine you're backing up to should be a single-user machine: user id 501 on machine A, named 'john', may be a different user on machine B, there named 'bill'. So if bill logs into machine B (and has user id 501), he'll be able to see the files of user 'john' whenever the backup partition is readable (that's also why you should keep it unmounted). Data in backups may contain e.g. mysql passwords, smtp passwords, etc.
That's not THE ULTIMATE solution, but it works for me and it seems to be quite efficient. I think the main advantage of that solution is that you're independent of any backup software except for cp and rsync.
Contact me in case you've got further questions. Michael
Not sure if anyone mentioned this yet, but you might want to have a look at a product called BackupPC, which is based on rsync but puts a really nice front end on it.
Not sure if it can work over SSH though. Just read the fine manual to find out.
Alan McKay wrote:
Yes, BackupPC can work with or without ssh - and besides hard-linking identical files, it also compresses them.
David Suhendrik wrote:
Maybe someone is using a clustering method for this case, e.g. DRBD and HA?
That's a somewhat different scenario. Backuppc and other hardlink backup tools maintain a history with snapshot copies at configurable intervals so you can restore things even if you don't notice a problem until later. DRBD is a live replication of only the current contents. If, for example, someone accidentally deletes your source code repository or an important database, it would be gone immediately on the copy too.
On Wed, 2009-09-09 at 14:19 +0200, Michael Kress wrote:
On another list someone recommended Duplicity. It does incremental backups and encrypts them with GPG, so they are compressed as well as encrypted. You can then store and maintain your backup repository on a third-party server without having to worry about security: the data is encrypted at the point of origin, so it is compressed and secured before it ever leaves your system. The only downside I can see is that it uses the tar format, so it cannot preserve extended attributes. But it would have no trouble with hard links.
http://www.nongnu.org/duplicity/
== "Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server... In theory many protocols for connecting to a file server could be supported; so far ssh/scp, local file access, rsync, ftp, HSI, WebDAV, and Amazon S3 have been written." ==
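A sketch of typical duplicity usage along those lines; the host, paths, and GPG key ID are placeholders, and the exact URL schemes supported depend on your duplicity version:

```shell
# Encrypted incremental backup over ssh/scp:
duplicity --encrypt-key DEADBEEF /srv/data \
  scp://backupuser@backup.example.com/backups/data

# Restore the tree as it looked 30 days ago:
duplicity restore --time 30D \
  scp://backupuser@backup.example.com/backups/data /srv/restore
```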
Johnny Hughes wrote:
BackupPC is ideal for this - not only does it use rsync over ssh for the transfer (along with some other choices), it also keeps the files compressed and uses hardlinks to de-duplicate the storage, so you can keep a much longer history online than you would expect. There's an rpm in EPEL; the home page is here: http://backuppc.sourceforge.net/
On 09/04/2009 11:45 AM, Les Mikesell wrote:
I agree with Les on this ... if you are looking to do that kind of backup, BackupPC is very good. I use it in production at several places.
On Fri, Sep 4, 2009 at 10:11 AM, Johnny Hughes johnny@centos.org wrote:
There is also a CentOS wiki article on backuppc:
http://wiki.centos.org/HowTos/BackupPC
Akemi
happymaster23 wrote:
rsnapshot should be available.
You could also use VPN, though NFS over a WAN is horribly slow.
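For reference, a minimal rsnapshot pull-over-ssh setup might look like the excerpt below; the host, paths, and retention counts are assumptions, and fields in rsnapshot.conf must be separated by tabs:

```
# /etc/rsnapshot.conf excerpt (rsnapshot 1.3-era syntax; illustrative)
snapshot_root   /backup/snapshots/
cmd_ssh         /usr/bin/ssh
interval        daily   7
interval        weekly  4
# pull the remote directory over ssh with key authentication
backup          backupuser@server.example.com:/srv/data/    server/
```

Cron then runs 'rsnapshot daily' and 'rsnapshot weekly'; like the cp -al scheme discussed earlier, unchanged files are hardlinked between snapshots.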
nate
Thank you,
I will look into it.
2009/9/4 nate centos@linuxpowered.net:
On Fri, 4 Sep 2009, nate wrote:
Or you could use sshfs.
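A minimal sshfs mount along those lines; the host, key path, and directories are illustrative:

```shell
# Mount the remote directory over SSH using key authentication:
sshfs -o IdentityFile=/root/.ssh/backup_key \
  backupuser@server.example.com:/srv/data /mnt/remote-data

# ...read or back up files under /mnt/remote-data...

# Unmount when finished:
fusermount -u /mnt/remote-data
```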
-steve
You could use Bacula (www.bacula.org)
On Fri, Sep 4, 2009 at 1:23 PM, happymaster23 happymaster23@gmail.com wrote:
Thank you for the reply,
I will look into it, but overall I am looking for a rock-solid solution, because security comes first.
2009/9/4 Vinicius Coque vcoque@gmail.com:
happymaster23 wrote:
If you have any questions, please post on the BackupPC mailing list - you'll probably find someone with experience of similar usage. The only real issue with the program that comes up regularly is when people want a second copy of the data archive it creates and have trouble handling the large number of hardlinks it uses to pool the data.
happymaster23 wrote:
If you want something "like" rdiff-backup, you could just install rdiff-backup from the rpmforge repository.
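rdiff-backup runs over ssh out of the box; a sketch of typical usage, with placeholder host and paths:

```shell
# Push an incremental backup to the remote machine over ssh:
rdiff-backup /srv/data backupuser@backup.example.com::/backup/data

# Restore the state from 7 days ago:
rdiff-backup -r 7D backupuser@backup.example.com::/backup/data /srv/restore

# Drop increments older than one year:
rdiff-backup --remove-older-than 1Y backupuser@backup.example.com::/backup/data
```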
Martin,
you may want to take a look at http://www.nongnu.org/storebackup/ . I have been using that program for some months now. It installs easily, runs over an SSH connection, and saves a lot of space on the target machine by hard-linking identical files between the various backups.
on Friday, September 4, 2009 at 18:23 you wrote:
best regards, Michael Schumacher ---- PAMAS Partikelmess- und Analysesysteme GmbH Dieselstr.10, D-71277 Rutesheim Tel +49-7152-99630 Fax +49-7152-996333 Geschäftsführer: Gerhard Schreck Handelsregister B Stuttgart HRB 252024