[CentOS] recommendations for copying large filesystems

Rainer Duffner rainer at ultra-secure.de
Sat Jun 21 22:06:42 UTC 2008

On 21.06.2008, at 23:44, Matt Morgan wrote:

> Then if you get the network sorted out, the fastest & most reliable  
> way I know to copy lots of files is
> star --copy
> You can get star with
> yum install star
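
For reference, star's copy mode is typically invoked roughly like
this (a sketch - check the exact flags against the star man page on
your version; the paths are made up):

    # install star on CentOS, as quoted above
    yum install star

    # replicate a tree, preserving permissions (-p); -acl and -sparse
    # carry over ACLs and sparse files; -C changes into the source dir
    star -copy -p -acl -sparse -C /mnt/source . /mnt/target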

Now that I know the details - I don't think this is going to work. Not
with 100 TB of data. It kind of works with 1 TB.
Can anybody comment on the feasibility of rsync on 1 million files?
(A rough invocation is sketched below.)
Maybe DRBD would be a solution - if you can retrofit DRBD to an
existing setup (a rough config sketch follows below)...
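
For what it's worth, the kind of rsync run I have in mind is roughly
the following (an illustrative sketch only - hosts and paths are made
up, and whether it scales to 1 million files is exactly the question):

    # first pass: -a preserves perms/times/owners, -H keeps hard links,
    # -x stays on one filesystem, --numeric-ids avoids uid/gid remapping
    rsync -aHx --numeric-ids /mnt/source/ root@newhost:/mnt/target/

    # later passes only transfer changes; --delete mirrors removals
    rsync -aHx --numeric-ids --delete /mnt/source/ root@newhost:/mnt/target/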

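If DRBD were an option, a per-resource configuration would look
roughly like this (a sketch for DRBD 8.x - host names, devices and
addresses are made up):

    resource r0 {
      protocol C;
      on oldhost {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.10:7788;
        meta-disk internal;
      }
      on newhost {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.11:7788;
        meta-disk internal;
      }
    }

With internal meta-data, DRBD needs space at the end of the backing
device, which is part of what makes retrofitting it onto an existing
filesystem awkward.
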
If not, it's faster to move the drives physically - believe me, this
will cause far fewer problems.
In a SAN, you would have the option of syncing the data outside of
the filesystem (at the block level), during normal operations.

100 TB is a lot of data.
How do you back that up, BTW?
What is your estimated time to restore it from the medium you back
it up to?

Rainer Duffner
rainer at ultra-secure.de
