Hi CentOS developers, first of all congrats on the CentOS 5 release! I'm using XFS as the file system on several production servers and would like to migrate them from CentOS 4.4/FCx to CentOS 5. On a low-priority image server I tried the module downloaded from the site below:
http://dev.centos.org/centos/5/testing/SRPMS/xfs-kmod-0.3-2.2.6.18_8.1.1.el5...
The result was completely destroyed image files. Oddly, small files remained fine, whereas bigger files of more than 1 GB were destroyed. Has anybody else had similar experiences with this module? Any hints on how this problem could be solved? Thanks, Gernot
first of all congrats on the CentOS 5 release! I'm using XFS as the file system on several production servers and would like to migrate them from CentOS 4.4/FCx to CentOS 5. On a low-priority image server I tried the module downloaded from the site below:
http://dev.centos.org/centos/5/testing/SRPMS/xfs-kmod-0.3-2.2.6.18_8.1.1.el5...
This is the source package for the module. Is this the file you installed from or did you just not link correctly?
On Tuesday 24 April 2007, Jim Perrin wrote:
http://dev.centos.org/centos/5/testing/SRPMS/xfs-kmod-0.3-2.2.6.18_8.1.1.el5...
This is the source package for the module. Is this the file you installed from or did you just not link correctly?
I recompiled the module from this src rpm with
rpmbuild --rebuild --target i686 xfs-kmod-0.3-2.2.6.18_8.1.1.el5.src.rpm
and the resulting module loaded without any problem. I could mount the filesystem, but while accessing and copying bigger files back and forth the filesystem became corrupted. Can you reproduce this behaviour? The same partition formatted with ext3 can hold the images. Thanks, Gernot
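The copy-back-and-forth check described above can be sketched roughly like this. The mount point and file names here are examples, not the original server's paths, and a real test needs a file well over 1 GB (e.g. count=2048) to hit the reported failure:

```shell
# Round-trip integrity check: write a file, copy it onto the filesystem
# under test, and compare checksums. MNT and file names are examples.
MNT=${MNT:-/tmp/xfs-test}          # set MNT to your XFS mount point
mkdir -p "$MNT"
dd if=/dev/urandom of=/tmp/big.img bs=1M count=8 2>/dev/null
before=$(md5sum /tmp/big.img | awk '{print $1}')
cp /tmp/big.img "$MNT/big.img" && sync
after=$(md5sum "$MNT/big.img" | awk '{print $1}')
[ "$before" = "$after" ] && echo "checksums match" || echo "FILE CORRUPTED"
```

On a healthy filesystem the checksums match; on the corrupting xfs-kmod setup the copy of a large enough file comes back different.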
On Tue, 24 Apr 2007 02:55:39 +0200 Gernot Stocker gernot.stocker@tugraz.at wrote:
and the resulting module loaded without any problem. I could mount the filesystem, but while accessing and copying bigger files back and forth the filesystem became corrupted. Can you reproduce this behaviour? The same partition formatted with ext3 can hold the images.
Does the kernel output show any errors? It may also be useful to know if the filesystem is on some pseudo-device like LVM or kernel RAID.
-- Daniel
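The checks being asked for here can be gathered with commands along these lines (a sketch; /var/log/messages is the CentOS 5 default syslog location, and the patterns are examples):

```shell
# Gather XFS and software-RAID diagnostics.
dmesg | grep -iE 'xfs|md' | tail -n 20                  # recent kernel messages
grep -i xfs /var/log/messages 2>/dev/null | tail -n 5   # logged XFS lines, if any
[ -r /proc/mdstat ] && cat /proc/mdstat || echo "no md arrays active"
```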
On Tuesday 24 April 2007, Daniel de Kok wrote:
Does the kernel output show any errors?
Apr 23 20:52:39 sz-linux02 kernel: Filesystem "md10": Disabling barriers, not supported by the underlying device
Apr 23 20:52:39 sz-linux02 kernel: XFS mounting filesystem md10
Apr 23 20:52:39 sz-linux02 kernel: SELinux: initialized (dev md10, type xfs), uses xattr
Not really, just the regular mounting messages....
It may also be useful to know if the filesystem is on some pseudo-device like LVM or kernel RAID.
You are right! I forgot to mention that the underlying disk volume is a software mirror (RAID 1), which is in sync:
mdadm --detail /dev/md10
/dev/md10:
        Version : 00.90.03
  Creation Time : Fri Apr 20 00:48:23 2007
     Raid Level : raid1
     Array Size : 57062784 (54.42 GiB 58.43 GB)
    Device Size : 57062784 (54.42 GiB 58.43 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 10
    Persistence : Superblock is persistent

    Update Time : Tue Apr 24 10:47:35 2007
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 41c62a64:f68bee1f:ac11033d:e2be7f0c
         Events : 0.120

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4

cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sdb4[1] sda4[0]
      57062784 blocks [2/2] [UU]
That's a difference from the old installation: there we had just one single disk, and in the course of the new installation we added a second disk for mirroring. And just to make it clear... we have already completely deleted the image mirror and rebuilt it from scratch with mkfs.xfs etc.
Thanks, Gernot