Hi all,
A few days ago a CentOS server suffered a manual and unexpected reset (too long a story to explain: there are silly people everywhere).
As a result, the system did not mount the root (/) partition and the boot process stopped. I repaired it easily: boot from a LiveCD (Knoppix in my case), make sure the root partition is unmounted, and run the e2fsck utility on it.
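The recovery steps above can be sketched as follows. This is only an illustration, exercised on a throwaway ext2 image rather than a real disk (the image path and size are made up); on the real machine you would point e2fsck at the actual, unmounted root partition instead.

```shell
# Create a small scratch ext2 image to stand in for the root partition.
img=$(mktemp /tmp/rootfs.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mke2fs -q -F "$img"    # -F: the target is a regular file, not a block device

# The actual repair step: force a full check (-f) and auto-confirm
# any proposed fixes (-y). The filesystem must NOT be mounted.
e2fsck -f -y "$img"

rm -f "$img"
```

On the real box the sequence is: boot the LiveCD, confirm with `mount` that the root partition is not mounted, and run `e2fsck -f -y` against the correct device node.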
In the process I used several fs tools, and the e2defrag utility surprised me. Until now I thought ext2/3 filesystems had no defragmentation problem, but I ended up running the defrag tool on the root partition because it was really fragmented.
What do you know about this kind of "trouble"?
I suppose fragmentation is less of a problem on ext2/3 than on Windows filesystems (FAT32, NTFS), but I'm not sure.
You wrote:
Hi all,
A few days ago a CentOS server suffered a manual and unexpected reset (too long a story to explain: there are silly people everywhere).
As a result, the system did not mount the root (/) partition and the boot process stopped. I repaired it easily: boot from a LiveCD (Knoppix in my case), make sure the root partition is unmounted, and run the e2fsck utility on it.
In the process I used several fs tools, and the e2defrag utility surprised me. Until now I thought ext2/3 filesystems had no defragmentation problem, but I ended up running the defrag tool on the root partition because it was really fragmented.
What do you know about this kind of "trouble"?
I suppose fragmentation is less of a problem on ext2/3 than on Windows filesystems (FAT32, NTFS), but I'm not sure.
DO NOT USE IT. Below is a message from Ted Tso to the ext3 mailing list:
Date: Tue, 31 Oct 2006 14:29:48 -0500
From: Theodore Tso <tytso@mit.edu>
To: "Magnus Månsson" <magnusm@massive.se>, ext3-users@redhat.com
Cc: submit@bugs.debian.org, brederlo@informatik.uni-tuebingen.de
Subject: Re: e2defrag - Unable to allocate buffer for inode priorities
Package: defrag
Version: 0.73pjm1-8
Severity: grave
On Wed, Nov 01, 2006 at 01:10:50AM +0800, Andreas Dilger wrote:
So now it was time to defrag, I used this command: thor:~# e2defrag -r /dev/vgraid/data
This program is dangerous to use and any attempts to use it should be stopped. It hasn't been updated in such a long time that it doesn't even KNOW that it is dangerous (i.e. it doesn't check the filesystem version number or feature flags).
In fact we need to create a Debian bug report indicating that this package should *NOT* be included when the Debian etch distribution releases.
Goswin, I am setting the severity to grave (a release-critical severity) because defrag right now is almost guaranteed to corrupt the filesystem if used with modern ext3 filesystems leading to data loss, and this satisfies the definition of grave. I believe the correct answer is either to (a) make defrag refuse to run if any filesystem features are enabled (at the very least, resize_inode, but some of the other newer ext3 filesystem features make me nervous with respect to e2defrag), or (b) since (a) would make e2defrag mostly useless especially since filesystems with resize inodes are created by default in etch, and as far as I know upstream abandoned defrag a long time ago, that we should simply remove e2defrag from etch and probably from Debian altogether.
If you are interested in doing a huge amount of auditing and testing of e2defrag with modern ext3 (and soon ext4) filesystems, that's great, but I suspect that will not at all be trivial, and even making sure e2defrag won't scramble users' data probably can't be achievable before etch releases.
Regards,
- Ted
_______________________________________________
Ext3-users mailing list
Ext3-users@redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users
Thanks for that good info, Ted. I'll keep it in mind from now on.
I've found an interesting mini-article about this: http://lwn.net/Articles/81357/
I hope it is useful to somebody.
Quoting Jordi Espasa Clofent jordi.listas@multivia.com:
In the process I used several fs tools, and the e2defrag utility surprised me. Until now I thought ext2/3 filesystems had no defragmentation problem, but I ended up running the defrag tool on the root partition because it was really fragmented.
What do you know about this kind of "trouble"?
I suppose fragmentation is less of a problem on ext2/3 than on Windows filesystems (FAT32, NTFS), but I'm not sure.
Normally, there's little need to defragment an ext2/3 file system. Of course, there are always cases where the usage pattern will cause a lot of fragmentation, especially if you keep your file systems 99% full (when there's limited free space, there's not much the file system can do to prevent excessive fragmentation).
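For reference, the fragmentation figure fsck reports is the "non-contiguous" percentage in its summary line. A minimal sketch of pulling that number out of a saved report follows; the sample line below is made up for illustration, and a real one would come from running `e2fsck -fn` on an unmounted filesystem.

```shell
# Sample e2fsck summary line (illustrative values, not from a real run).
report='/dev/sda1: 113/65536 files (9.1% non-contiguous), 12345/262144 blocks'

# Extract the percentage that appears before "% non-contiguous".
frag=$(printf '%s\n' "$report" | sed -n 's/.*(\([0-9.]*\)% non-contiguous).*/\1/p')
echo "fragmentation: ${frag}%"
```

A few percent here is normal and, as explained below, often intentional.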
The standard Unix file system model (on which ext2/3 was built) was designed to keep fragmentation under control. Actually, when fsck reports a few percent of file fragmentation, that fragmentation is there intentionally (very simplified: Unix file systems will intentionally fragment large files, introducing some small fragmentation, but doing so prevents bigger fragmentation of files in the long run and prevents large files from clogging parts of the disk and causing long seeks). On some Unix flavours (for example Solaris) you even have control over this "intentional" fragmentation via the tunefs utility, so you can optimize a file system either for storing a few huge files (by allowing them to use all the space in a cylinder group) or for countless small files (by limiting the amount of space a single file can allocate from a cylinder group before being forced to allocate space from the next one).
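ext2/3 doesn't expose a per-file-system fragmentation knob like Solaris tunefs, but you can at least inspect the allocation-related parameters with `tune2fs -l`. This is sketched here against a throwaway ext2 image (paths and sizes are made up); on a real system you would point it at the partition's device node.

```shell
# Build a throwaway ext2 image so there is something safe to inspect.
img=$(mktemp /tmp/ext2demo.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mke2fs -q -F "$img"

# Dump the superblock; block size and blocks-per-group govern how space
# is carved into the block groups (ext2's analog of cylinder groups).
tune2fs -l "$img" | grep -E 'Block size|Blocks per group'

rm -f "$img"
```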
As Tom replied already, e2defrag is dangerous to use with more recent versions of ext2/3 file systems. And even if it weren't, there are cases where total defragmentation of an ext2 or ext3 file system would hurt performance. In short, it would allow large files to clog parts of the file system, resulting in long seek times for the smaller files that live in the same directory as the big file. Historically, Unix file systems try to allocate space for files in the same cylinder group in which the directory entry lives, in order to avoid expensive long disk seeks. If you allow a single file to eat all the available space in a cylinder group (by not fragmenting it), accessing all the other files in that directory will be slow (and you are not going to gain much performance from having a 100% contiguous huge file anyhow).
Thanks for the accurate explanation. ;)