Sorin Srbu wrote:
Between compression and pooling, I get about 10x the raw data being archived with BackupPC - it beats juggling tapes, and you can let users access the backups of their own machines through a web interface. There are some downsides to plan around, though: the compression takes some CPU and is slower than a stock rsync run, and the pooling is done with hardlinks, which forces the archive onto a single filesystem and makes it hard to duplicate for offsite copies. There's an RPM in EPEL that is easy to install on CentOS.
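To make the offsite problem concrete, the usual workarounds look roughly like the sketch below. The paths and hostname are only placeholders (the EPEL package's data directory is typically /var/lib/BackupPC, but check your TopDir setting):

  # rsync can carry the hardlinks across with -H, but on a big pool it
  # has to track every link, which gets slow and memory-hungry:
  rsync -aH --delete /var/lib/BackupPC/ offsitehost:/var/lib/BackupPC/

  # Because of that, many people copy the whole filesystem instead,
  # e.g. imaging the archive volume (device name made up here):
  dd if=/dev/vg0/backuppc_lv of=/mnt/offsite/backuppc.img bs=4M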
Alan McKay wrote:
BackupPC over here - very happy with it for Linux and Windoze, at home and work
I too have found BackupPC a marvelously simple-to-use program.
In fact it seems to me much better for backing up Windows XP than Windows' own Backup program, which I have never completely understood - for example, how exactly does one recover a lost file or folder with it?
BackupPC, on the other hand, has saved me several times when restoring lost material, including the contents of one entire damaged drive.
-----Original Message-----
My Google searches would have me believe that Amanda is the more popular choice for backup on Linux. On this list it seems BackupPC is. Strange... ;-)
Sounds very interesting indeed!
I don't think performance will be a problem; the server is a scrapped calculation machine with dual Xeons at 2-something GHz and some 4 GB of RAM, IIRC. Do you think the software RAID 5 array it uses would be a problem in this case?
I've never had any problems with software RAID 5 in Linux before, but you never know...
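If it helps, sanity-checking a Linux software RAID set before trusting it with the pool is quick; a minimal sketch, assuming the array shows up as /dev/md0:

  # Kernel's summary of all md arrays, including any rebuild in progress
  cat /proc/mdstat

  # Per-array detail: state, member disks, failed/spare devices
  mdadm --detail /dev/md0

  # Optionally trigger a full consistency check of the array
  echo check > /sys/block/md0/md/sync_action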
-----Original Message-----
One thing that made me not use BackupPC was that (from the doc): "The advantage of the mod_perl setup is that no setuid script is needed, and there is a huge performance advantage.... The typical speedup is around 15 times. To use mod_perl you need to run Apache as user __BACKUPPCUSER__. If you need to run multiple Apache's for different services then you need to create multiple top-level Apache directories, each with their own config file. You can make copies of /etc/init.d/httpd and use the -d option to httpd to point each http to a different top-level directory. Or you can use the -f option to explicitly point to the config file. Multiple Apache's will run on different Ports (eg: 80 is standard, 8080 is a typical alternative port accessed via http://yourhost.com:8080)."
Since I don't have a dedicated backup server, I did not want to mess up the existing apache configurations...
JD
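For what it's worth, the separate-instance approach the doc describes boils down to something like this; the copied config name, the backuppc user, and port 8080 are just assumptions for a typical CentOS layout:

  # Give the second instance its own config so the main Apache is untouched
  cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd-backuppc.conf

  # In httpd-backuppc.conf change at least:
  #   Listen 8080
  #   User backuppc
  #   PidFile /var/run/httpd-backuppc.pid

  # Start the second instance against its own config file
  httpd -f /etc/httpd/conf/httpd-backuppc.conf

The BackupPC CGI would then be reached on port 8080 instead of the main server's port 80.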
-----Original Message-----
Thanks for the heads-up, that was actually quite an essential piece of information. In my case, however, it won't be a problem, as the server would be a dedicated backup server with no other services running for the department. It might be an issue at home, though. If all this pans out well I'll implement it at home as well.
On 1/13/2010 9:04 AM, John Doe wrote:
You really don't spend any time in the web interface, which is the only thing affected by this, and it is fast enough when run as a normal CGI anyway. Try it without mod_perl. You'd also have the option of running BackupPC as the apache user, but that is less secure if other web admins have access to the machine.
On Wed, 2010-01-13 at 10:03 -0600, Les Mikesell wrote:
As a side note, the EPEL BackupPC package does NOT use mod_perl by default, while the centos-testing package does use mod_perl by default.
I have run the centos-testing package (with mod_perl) and the EPEL package both with and without mod_perl, and I see no practical advantage to using BackupPC with mod_perl in terms of time or CPU usage.
So just use the stock EPEL package and you won't need to modify Apache.
Steve
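For anyone who wants to try the stock route Steve describes, the install is roughly the following; the package, service, and URL names are from memory for the EPEL package on CentOS, so double-check them on your box:

  # Install from EPEL (assumes the EPEL repository is already enabled)
  yum install BackupPC

  # The package drops an Apache snippet into /etc/httpd/conf.d/, so after a
  # reload the CGI is usually reachable at http://yourhost/BackupPC/
  # (access is often limited to localhost by default - adjust as needed)
  service httpd reload

  # Start the BackupPC daemon and have it come up at boot
  service backuppc start
  chkconfig backuppc on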