I have a small system that I use to support a number of churches; I provide web and email for them. My current server is running CentOS 6.3 with paired 1TB drives in a RAID1 configuration, and it works well. One filesystem is very large, >500GB, and contains numerous large files: SQL, docs, church libraries in ebook and digital form, plus stored videos of church services.
My problem is that I've found no means of backing up that filesystem. Dump and tar both error out, saying the maximum size has been exceeded. Neither will back up even just the video directory (the largest), even with compression.
Backup will be to an external (USB) removable HD.
Can anyone suggest a program or method to back up this filesystem?
mw
On 27.05.2013 19:13, Mike Watson wrote:
Can anyone suggest a program or method to back up this filesystem?
Have you tried good old rsync? Or, if you want incremental backups, check rdiff-backup. I'm sure our list colleagues will come up with even more solutions.
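Something along these lines, for example (untested, and the paths are just guesses -- say the big filesystem is /data and the USB drive is mounted at /mnt/backup):

    # mirror the whole filesystem onto the USB drive, preserving hard links
    rsync -aH --delete /data/ /mnt/backup/data/

    # or with rdiff-backup, which also keeps reverse increments you can roll back to
    rdiff-backup /data /mnt/backup/data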
I've used rsync for remote transfer of directory trees. It's been a while; I'd forgotten about it.
mw
--
"Lose not thy airspeed, lest the ground rises up and smites thee." -- William Kershner http://crucis-court.com http://www.crucis.net/1632search
On 05/27/2013 01:24 PM, Nux! wrote:
Have you tried good old rsync? Or, if you want incremental backups, check rdiff-backup. I'm sure our list colleagues will come up with even more solutions.
I'll check again, maybe NTFS. It's a single-partition 1TB HD, so it can't be FAT32.
mw
On 05/27/2013 01:43 PM, Robert Nichols wrote:
On 05/27/2013 01:13 PM, Mike Watson wrote:
Backup will be to an external (USB) removable HD.
What file system is on that external HD? FAT32 has a 4GB limit for file size.
On Mon, May 27, 2013 at 4:05 PM, Mike Watson mikew@crucis.net wrote:
I'll check again, maybe NTFS. It's a single-partition 1TB HD, so it can't be FAT32.
FAT32 can go to 2TB (you just can't format one that size in Windows), but it has the 4GB file size limit.
-- Les Mikesell lesmikesell@gmail.com
That's my problem. I'm hitting a file size limit with dump and tar.
mw
On Mon, May 27, 2013 at 7:05 PM, Mike Watson mikew@crucis.net wrote:
That's my problem. I'm hitting a file size limit with dump and tar.
Reformatting is the obvious solution so you can use one of the rsync-based backups, but if you really had to keep the FAT format you could use tar | split -b with some reasonable piece size for the output.
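Something like this, roughly (untested; pick whatever piece size and paths suit you):

    # write the archive in 4000MB pieces so each file stays under the FAT32 4GB limit
    tar -czf - /data | split -b 4000M - /mnt/usb/data-backup.tar.gz.

    # to restore, glue the pieces back together in order
    cat /mnt/usb/data-backup.tar.gz.* | tar -xzf - -C /restore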
-- Les Mikesell lesmikesell@gmail.com
On Mon, May 27, 2013 at 9:41 PM, Les Mikesell lesmikesell@gmail.com wrote:
Reformatting is the obvious solution so you can use one of the rsync-based backups, but if you really had to keep the FAT format you could use tar | split -b with some reasonable piece size for the output.
+1 for reformatting and using a file system that supports large(r) files
Although not recommended... you could create a big file with dd on the NTFS drive (that will take some time), set it up as a loopback device, and format it ext3 (or whatever you choose). Then mount that and go on your merry way with your rsync backups.
*Disclaimer*: This solution is ugly and I'm almost certain I will get tomatoes thrown at me for suggesting it. ;) It also means converting FAT32 to NTFS first if you don't already have NTFS on the drive.
There are more gotchas in what I suggested than in just reformatting the drive ext3 (or similar) and sharing it out via Samba (or NFS) to your Windows clients, if you need to.
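If anyone wants to try it anyway, the rough shape of it would be (untested; sizes, device names and mount points are made up):

    # create a big container file on the NTFS-mounted drive (here ~500GB)
    dd if=/dev/zero of=/mnt/usbdisk/backup.img bs=1M count=500000

    # put ext3 on it -- mkfs will ask for confirmation since it's a plain file, not a block device
    mkfs.ext3 /mnt/usbdisk/backup.img

    # loop-mount it and rsync into it as usual
    mkdir -p /mnt/backupfs
    mount -o loop /mnt/usbdisk/backup.img /mnt/backupfs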
Or, you can use http://sourceforge.net/projects/ext2read/ (or similar)
Copy & paste from the project page:
Ext2Read is an explorer-like utility to explore ext2/ext3/ext4 files. It now supports LVM2 and EXT4 extents. It can be used to view and copy files and folders. It can recursively copy entire folders. It can also be used to view and copy disk and file images.
How well does it run under cron?
mw
On 2013-05-27, Mike Watson mikew@crucis.net wrote:
I'll check again, maybe NTFS. It's a single-partition 1TB HD, so it can't be FAT32.
Unless you have a desperate need for this disk to be readable by Windows, I would definitely make an ext3, ext4, or xfs filesystem on that USB drive. It is absolutely not worth the headache you'll have trying to restore properly from NTFS or FAT.
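For example (this wipes the drive, and I'm assuming it shows up as /dev/sdc1 -- check dmesg or blkid for the real device name first):

    mkfs.ext4 /dev/sdc1          # or mkfs.ext3 / mkfs.xfs
    mkdir -p /mnt/backup
    mount /dev/sdc1 /mnt/backup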
--keith
On 2013-05-27, Mike Watson mikew@crucis.net wrote:
Backup will be to an external (USB) removable HD.
Can anyone suggest a program or method to back up this filesystem?
People have already suggested rsync and rdiff-backup; there's also rsnapshot, which is built on top of rsync.
Another option could be mdadm if your RAID1 is already an mdadm array. You can add your USB drive to the array, wait for it to rebuild, then remove it from the array. I'd be wary of backing up an SQL database in that way, but I'd be wary of using rsync, dump, or tar too, so be sure to take a real backup (e.g., mysqldump, pg_dump) of your database first.
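Roughly like this, if you go that route (just a sketch -- I'm assuming the array is /dev/md0 and the USB partition is /dev/sdc1; double-check everything before running it):

    mdadm /dev/md0 --add /dev/sdc1            # add the USB partition as a spare
    mdadm --grow /dev/md0 --raid-devices=3    # make it a third mirror; the resync starts
    cat /proc/mdstat                          # wait until the resync has finished
    mdadm /dev/md0 --fail /dev/sdc1           # then detach the finished copy
    mdadm /dev/md0 --remove /dev/sdc1
    mdadm --grow /dev/md0 --raid-devices=2    # shrink back to the two-disk mirror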
--keith
On Mon, May 27, 2013 at 2:55 PM, Keith Keller <kkeller@wombat.san-francisco.ca.us> wrote:
I'd take the extra step of reformatting the USB drive into an ext3 filesystem, then just use rsync in a nightly cron job.
We do it with about a dozen workstations here, with user data either on internal disks or on other external USB drives. Works great.
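The cron entry can be as simple as something like this (paths are examples; assumes the drive stays mounted at /mnt/backup):

    # /etc/cron.d/usb-backup -- run at 02:30 every night
    30 2 * * * root rsync -aH --delete /data/ /mnt/backup/data/ >> /var/log/usb-backup.log 2>&1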
-- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphelps@cfa.harvard.edu, http://www.cfa.harvard.edu
On 05/27/2013 02:13 PM, Mike Watson wrote:
I use rsync to make a daily backup of my data to a second drive that is normally left unmounted except while the backup is running. The drive is inside the same box, though, so this backup is still subject to loss if the box is stolen or destroyed. I really should be backing up to an off-site location.
If you have a fire, flood, or other general disaster, a local backup on an external drive isn't going to buy you anything unless you store that drive off site.
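A tiny wrapper script handles the mount-only-while-backing-up part (a sketch; device and paths are examples):

    #!/bin/bash
    # mount the backup drive, sync, unmount again
    mount /dev/sdc1 /mnt/backup || exit 1
    rsync -aH --delete /data/ /mnt/backup/data/
    umount /mnt/backup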
On 05/27/2013 11:06 PM, Mark LaPierre wrote:
You could place that HDD in a removable rack (eSATA?), add a second one that is kept off site, and swap them every week?
A full off-site backup would require another box off site with an internet/wireless link. Since you use rsync, the data transfer should be fairly minimal.
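With ssh access to a remote box, the same rsync approach works off site, e.g. (host and paths are made up):

    # push the data to the off-site machine over ssh; only changed files get transferred
    rsync -aHz --delete -e ssh /data/ backup@offsite.example.org:/srv/backups/churchserver/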
On 5/27/2013 11:13 AM, Mike Watson wrote:
One filesystem is very large, >500GB, and contains numerous large files: SQL, docs, church libraries in ebook and digital form, plus stored videos of church services.
Note that SQL database files generally can't be backed up safely while the SQL database server is active. Either take a dump (or whatever backup mechanism the database provides) and back up that dump rather than the raw SQL files, or stop the SQL service before making the backup and restart it afterwards. Details vary per SQL server, of course.
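For MySQL, for instance, something like this (a sketch; credentials and paths are examples -- PostgreSQL users would reach for pg_dumpall instead):

    # dump all databases to a file the regular file-level backup can then pick up
    mysqldump --all-databases --single-transaction -u root -p > /srv/backups/all-databases.sql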
What would work really well for your general backup requirements, ignoring the above issue, is BackupPC. Build an on-site BackupPC server with a filesystem large enough to hold all the backups you want kept online (I generally do 2 weeks of incrementals), and set up BackupPC archiving to move copies of completed backups to an off-site repository for disaster recovery.