Hi,
Every time I restart CentOS 7 I receive an error saying "metadata is corrupt", and then I have to go through the process of mounting and unmounting the disk by UUID and running xfs_repair {some uuid} or xfs_repair -L {some uuid}, which ultimately corrupts things even more.
I'm running RAID 1 with two identical drives. This has happened more than once, and I've had to reinstall. Is there any way I can prevent this when I shut down or restart? It happens whenever I reboot the machine.
Does the hard drive need to be completely zero-wiped if I do a fresh install?
Best,
Steve
On 3/23/2015 1:24 PM, Stephen Drotar wrote:
Every time I restart CentOS 7 I receive an error saying "metadata is corrupt" and then have to run xfs_repair {some uuid} or xfs_repair -L {some uuid}, which ultimately corrupts things even more. I'm running RAID 1 with two identical drives, and this has happened more than once.
Sounds to me like there are hardware problems on this system and the disks are corrupting data. Bad RAM can do this if there's no ECC to detect the corruption; so can buggy or broken hardware RAID controllers.
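If the mirror is Linux md software RAID rather than a hardware controller, checking array health is a cheap first test (a sketch; /dev/md0 is a placeholder for the actual array):

    cat /proc/mdstat          # [UU] = both halves in sync; [_U] or [U_] = degraded mirror
    mdadm --detail /dev/md0   # per-member state, plus any resync in progress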
Hi,
Can CentOS be used with ext3 or ext4 partitioning?
Steve
On Mar 23, 2015, at 4:47 PM, John R Pierce pierce@hogranch.com wrote:
Sounds to me like there are hardware problems on this system and the disks are corrupting data. Bad RAM can do this if there's no ECC to detect the corruption; so can buggy or broken hardware RAID controllers.
-- john, recycling bits in santa cruz
On Mar 23, 2015, at 3:23 PM, Stephen Drotar stephen@artifex360.com wrote:
Can CentOS be used with ext3 or ext4 partitioning?
Better to speak of ext3 and ext4 as filesystems, rather than partition types. The partition type is 83 in both cases, which doesn’t distinguish them.
All current versions of CentOS support ext3, both in the installer and after installation. ext4 was only added to the installer in the most recent version, CentOS 7.
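If you want ext4 on a data partition after installation, the steps are roughly this (a sketch; /dev/sdb1 and /mnt/data are placeholder names):

    mkfs.ext4 /dev/sdb1                 # use mkfs.ext3 instead for ext3
    mkdir -p /mnt/data
    mount -t ext4 /dev/sdb1 /mnt/data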
On 2015-03-23, Stephen Drotar stephen@artifex360.com wrote:
Can CentOS be used with ext3 or ext4 partitioning?
Yes (as someone else said, they're filesystem types, not partition types), but if there's a hardware issue, as John noted, switching to ext3 or ext4 won't solve it. XFS is usually fairly solid, so it's very unlikely that the filesystem type is the problem.
You should probably run memtest86+ for at least 24 hours to see whether RAM is the issue. If that doesn't find anything, investigate your storage system next (as John also mentioned, the hard drives and the drive controller are both suspects).
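For the drive side, something like this is a reasonable first pass (a sketch; assumes the smartmontools package, and /dev/sda stands in for each member of the mirror):

    smartctl -H /dev/sda          # overall health verdict
    smartctl -l error /dev/sda    # the drive's own error log
    smartctl -t long /dev/sda     # start a long self-test; read results later with -l selftest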
--keith
On Mon, Mar 23, 2015 at 2:24 PM, Stephen Drotar stephen@artifex360.com wrote:
Hi,
Every time I restart CentOS 7 I receive an error saying "metadata is corrupt", and then I have to go through the process of mounting and unmounting the disk by UUID and running xfs_repair {some uuid} or xfs_repair -L {some uuid}, which ultimately corrupts things even more.
For future reference: -L is a big hammer. If you use it without explicitly attempting a read-write mount (which a read-only mount at boot time will not do, because it's an ro mount by default), it will almost invariably corrupt the file system further.
My 20/20 hindsight advice: if a normal mount fails and xfs_repair fails, follow this: http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_r... and report it to the XFS list straight away. There's a much lower chance that someone on this list will know what to do than on the XFS list.
And by the way, neither list can answer your question without a complete dmesg (not just the trace; it's really annoying to get only the "cut here" portion with drive-related problems, because the real problem almost certainly occurred before XFS got P.O.'d).
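Put concretely, the safer order of operations is roughly this (a sketch; /dev/md0 stands in for the actual device):

    mount /dev/md0 /mnt           # try a plain read-write mount first; XFS replays its log here
    umount /mnt                   # if that worked, a clean unmount may be all that's needed
    xfs_repair -n /dev/md0        # no-modify dry run: reports problems without touching anything
    xfs_repair /dev/md0           # only if the dry-run output looks sane
    # xfs_repair -L /dev/md0      # last resort: zeroes the log, throwing away in-flight metadata
    dmesg > /root/dmesg-full.txt  # capture the whole log, not just the XFS trace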
I'm running RAID 1 with two identical drives. This has happened more than once, and I've had to reinstall. Is there any way I can prevent this when I shut down or restart? It happens whenever I reboot the machine.
Sorry, but you've given us absolutely no information; there are more than 8001 possibilities. The least you can do is extract the rdsosreport.txt if you're dropped to a shell. Or boot from install media with the rescue boot parameter, try mounting the volume, and if that fails, run xfs_repair and give us that result as well. Then collect dmesg, which will certainly have the mount-failure info and may also have extra messages from xfs_repair.
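Collecting that from the rescue environment might look like this (a sketch; device and file paths are examples):

    cat /run/initramfs/rdsosreport.txt               # already written if you landed in the dracut emergency shell
    mount /dev/md0 /mnt 2>&1 | tee /tmp/mount.log    # record the exact failure message
    dmesg | tee /tmp/dmesg.log                       # grab everything, including earlier drive errors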
Does the hard drive need to be completely zero-wiped if I do a fresh install?
No. The installer uses wipefs and will also zero some important sections that can cause problems down the road if they aren't zeroed.
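If you ever want to do the equivalent by hand, it's roughly this (a sketch; /dev/sda is a placeholder, and both commands are destructive):

    wipefs -a /dev/sda                           # erase all known filesystem/RAID signatures
    dd if=/dev/zero of=/dev/sda bs=1M count=10   # zero the first few MB (partition table, metadata)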
On 2015-03-23, Chris Murphy lists@colorremedies.com wrote:
For future reference: -L is a big hammer. If you use it without explicitly attempting a read-write mount (which a read-only mount at boot time will not do, because it's an ro mount by default)
...for the root filesystem, anyway. For non-root filesystems it should use whatever flags are set in fstab. (Granted, many boxes likely have / as the only on-disk fs.)
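For illustration, a hypothetical /etc/fstab entry (the UUID and mount point are made up); the fourth field holds the flags mount applies at boot:

    UUID=0f32a7e1-example  /data  xfs  defaults,noatime  0 0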
--keith
On Mon, Mar 23, 2015 at 7:14 PM, Keith Keller kkeller@wombat.san-francisco.ca.us wrote:
...for the root filesystem, anyway. For non-root filesystems it should use whatever flags are set in fstab. (Granted, many boxes likely have / as the only on-disk fs.)
Even the root ro-to-rw remount switcheroo is antiquated. Neither ext3/4, XFS, nor Btrfs wants you running an fsck on an ro-mounted volume. Both e2fsck and xfs_repair use strong wording saying not to do it, to the point that I think it's crusty weirdness to keep the code allowing things like "dangerous" mode repair. The btrfs check tool, on the other hand, will neither check nor repair a mounted volume; there it's actually nearly a last resort. Usually a normal mount fixes things, and if not, the first option is the -o recovery mount option.
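For comparison, the Btrfs order he describes would look something like this (a sketch; /dev/sdb1 is a placeholder):

    mount -o recovery /dev/sdb1 /mnt     # first resort: recovery mount; often fixes things by itself
    umount /mnt                          # btrfs check requires the volume to be unmounted
    btrfs check /dev/sdb1                # read-only check
    # btrfs check --repair /dev/sdb1     # nearly a last resort, per the above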
Every time I restart CentOS 7 I receive an error saying "metadata is corrupt" ... I'm running RAID 1 with two identical drives.
Could be totally irrelevant, but I once had serious fs corruption problems after hibernation on a Fedora laptop with Intel graphics.