[CentOS] Deleting Large Files

Tue Mar 3 00:07:04 UTC 2009
JohnS <jses27 at gmail.com>

On Mon, 2009-03-02 at 17:36 -0600, Kevin Krieser wrote:
> On Mar 2, 2009, at 2:34 AM, Kay Diederichs wrote:
> 
> > Joseph L. Casale schrieb:
> >> I have an issue with a busy CentOS server exporting iSCSI and NFS/ 
> >> SMB shares.
> >> Some of the files are very large, and when they get deleted IO  
> >> climbs to an
> >> unacceptable rate. Is there a way to purge a file with little to no  
> >> IO
> >> overhead on ext3?
> >>
> >> Thanks!
> >> jlc
> >
> > Have you tried to delete locally, instead of over NFS?
> >
> > Maybe by deleting over SMB from a Windows machine, the file is not
> > deleted but rather moved to a "Trash" folder on a different disk  
> > (which
> > would explain the I/O)? (Same could happen with a Unix desktop, like  
> > KDE)
> >
> > Have you tried the "unlink" command instead of "rm" ?
> >
> > Kay
> 
> 
> I've seen lengthy delete times too when deleting a 30+ GB file on an  
> ext3 filesystem, locally.  I don't think that Windows (XP at least)  
> will move remote files to a "trash" folder, only local files.  Though  
> there is no telling what it might do if you have some 3rd party  
> application installed that may provide an "undelete" functionality.
> 
> I haven't researched it enough to check I/O stats. 

This is mostly a problem with Windows clients, especially under a Samba
server. Read the following data from Intel, where they tested this under
Samba. It has to do with the way Windows accesses the file share; this is
not something that has just risen from the dead.
http://software.intel.com/en-us/articles/windows-client-cifs-behavior-can-slow-linux-nas-performance/
I have indeed noticed the difference that Intel talks about when using
XFS and NTFS. It has nothing to do with rm -f or unlink! It has
everything to do with the number of extents that are created for the
file (fragmentation). You can replicate this for yourselves on a test
machine.
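
For anyone who wants to check the extent counts, filefrag (from
e2fsprogs) will report them. A rough sketch for a test machine follows;
the paths and sizes here are only examples:

    # Show how many extents (contiguous runs of blocks) a file occupies
    filefrag -v /srv/exports/bigfile.img

    # Rough replication on a scratch ext3 mount (example path /mnt/test):
    # write a 30 GB file, drop the caches, and time the delete.
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=30720
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time rm -f /mnt/test/bigfile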
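
As for Joseph's original question about purging a file with little I/O
overhead: one workaround worth trying (just a sketch, and it assumes
truncate(1) from a newer coreutils is available; a small C program
calling ftruncate() would do the same) is to shrink the file in steps
before unlinking it, or to run the delete at idle I/O priority:

    f=/srv/exports/bigfile.img        # example path
    size=$(stat -c%s "$f")
    step=$((1024*1024*1024))          # free roughly 1 GiB per pass
    while [ "$size" -gt 0 ]; do
        size=$(( size > step ? size - step : 0 ))
        truncate -s "$size" "$f"      # release blocks a chunk at a time
        sleep 1                       # let the journal drain between passes
    done
    rm -f "$f"

    # Or, with the CFQ scheduler, run the whole delete at idle priority:
    ionice -c3 rm -f "$f"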

JohnStanley