I have a CentOS 5.6 system (recently installed) that, for some reason, has decided to mangle one of its drives, specifically /dev/hde1 ... No errors anywhere; I just rebooted the machine over the weekend and it's gone. Up until the reboot the drive was fine, and I was writing to it without a problem.
fdisk tells me:
----------
# fdisk -l /dev/hde

Disk /dev/hde: 160.0 GB, 160041885696 bytes
240 heads, 63 sectors/track, 20673 cylinders
Units = cylinders of 15120 * 512 = 7741440 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hde1   *           1       20673   156287848+  83  Linux
----------
There are no hardware errors in the boot log (dmesg). The only error is that it can't find the ext3 fs that was on that drive. Unfortunately, it's not a drive I can simply reformat and call it a day. There's data on it I need.
When I try to mount it, I get: "hfs: unable to find HFS+ superblock". Obviously that's not right, as the drive was formatted as ext3. If I force the filesystem type, I get this:
----------
mount -t ext3 /dev/hde1 /mnt/hde1
mount: wrong fs type, bad option, bad superblock on /dev/hde1,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
----------
So, is this just an indication that the partition table is hosed? Is there anything, any tool, any way of reading the data off of this drive and putting it elsewhere?
On 05/10/2011 02:24 PM, Ashley M. Kirchner wrote:
I have a CentOS 5.6 system (recently installed) that, for some reason, has decided to mangle one of its drives, specifically /dev/hde1 ...
Have you tried using an alternate superblock?
Or a ddrescue copy to another disk?
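A rough sketch of both suggestions, assuming the spare disk is mounted at /mnt/spare (a placeholder) and leaving mke2fs -n to report the real backup superblock locations:

----------
# List where the backup superblocks should live, without writing anything;
# dumpe2fs only reads, and mke2fs with -n is a dry run.
# (mke2fs -n assumes the filesystem was created with default options.)
dumpe2fs /dev/hde1 | grep -i superblock
mke2fs -n /dev/hde1

# Copy the raw partition to a file on a healthy disk before experimenting:
ddrescue -v /dev/hde1 /mnt/spare/hde1.img /mnt/spare/hde1.map
----------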
On 05/10/2011 02:28 PM, Steve Clark wrote:
Have you tried using an alternate superblock?
http://www.cyberciti.biz/tips/surviving-a-linux-filesystem-failures.html
I have a CentOS 5.6 system (recently installed) that, for some reason, has decided to mangle one of its drives, specifically /dev/hde1 ...
mke2fs -n /dev/hde1 should list the block addresses where copies of the superblock would be found.
Set ASB to any one of the alternate superblocks listed. Then try: mount -t ext3 -o sb=$ASB /dev/hde1 /mnt
Additional reading: http://www.cyberciti.biz/tips/surviving-a-linux-filesystem-failures.html
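One wrinkle worth noting: per mount(8), the sb= option takes the block number in 1 KiB units, so a backup at filesystem block 32768 on a 4 KiB-block filesystem becomes sb=131072. A sketch, with 32768/131072 only standing in for whatever mke2fs -n actually reports:

----------
mke2fs -n /dev/hde1              # dry run: prints the backup superblock locations

e2fsck -n -b 32768 /dev/hde1     # -n reports only, changes nothing
                                 # (add -B 4096 if e2fsck cannot guess the block size)

mount -t ext3 -o ro,sb=131072 /dev/hde1 /mnt/hde1    # 32768 * 4 for 4 KiB blocks
----------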
Insert spiffy .sig here: Life is complex: it has both real and imaginary parts.
//me
On 5/10/2011 12:30 PM, Brunner, Brian T. wrote:
mke2fs -n /dev/hde1 should list the block addresses where copies of the superblock would be found.
Set ASB to any one of the alternate superblocks listed. Then try: mount -t ext3 -o sb=$ASB /dev/hde1 /mnt
Thanks! I will try that as soon as dd_rescue is done running ... in about 16 hours or so. Considering the data on this drive, I'd rather copy everything off first before forging ahead with trying to mount it and possibly damaging it even more.
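Once the copy finishes, a cautious next step might be to poke at the image rather than the original; /backup/hde1.img and /mnt/hde1-copy below are just placeholder names for the dd_rescue output and a scratch mount point:

----------
# Mount the copy read-only via a loop device:
mount -t ext3 -o loop,ro /backup/hde1.img /mnt/hde1-copy

# If it still complains about the superblock, combine the loop mount with
# the alternate-superblock option mentioned above (value in 1 KiB units):
mount -t ext3 -o loop,ro,sb=131072 /backup/hde1.img /mnt/hde1-copy
----------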
Ashley M. Kirchner wrote:
Considering the data on this drive, I'd rather copy everything off first before forging ahead with trying to mount it and possibly damaging it even more.
I will byte and actually say it: use backups for important data you cannot afford to lose. rsync or a similar tool can be used via cron to make sure important files are saved.
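A minimal example of that, with /data and backuphost as placeholder names; one line in /etc/crontab mirrors the directory nightly at 02:30:

----------
# /etc/crontab format includes the user field ("root"); assumes passwordless
# ssh keys are already set up from root to backuphost.
30 2 * * * root rsync -a --delete /data/ backuphost:/backup/data/
----------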
Ljubomir
On 5/10/2011 2:00 PM, Ljubomir Ljubojevic wrote:
I will byte and actually say it: use backups for important data you cannot afford to lose. rsync or a similar tool can be used via cron to make sure important files are saved.
And this is normally done, except I was in the middle of working on something when I needed to reboot for unrelated reasons. So for the most part I have a backup, but it's about 24 hours behind where I was. That's 24 hours I don't necessarily want to lose.
On Tue, 10 May 2011, Ashley M. Kirchner wrote:
So for the most part I have a backup, but it's about 24 hours behind where I was.
Once you have finished your dd_rescue/ddrescue copy, you may want to look into the testdisk utility to check whether the partition table has been tampered with. testdisk can suggest different partition layouts based on filesystem patterns it finds, and it also saves the original layout so you can restore that as well.
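For example, pointed at the rescued whole-disk image (the path is a placeholder); testdisk is menu-driven from there and writes a testdisk.log in the current directory:

----------
# Works on the device itself or on an image of the whole disk:
testdisk /backup/hde.img
----------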
Also beware that a complete disk image includes the partition table, while loop-back mounting by default expects a bare filesystem image. So you may also have to provide an offset= option to tell mount where the actual filesystem starts within the image!
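A sketch of that, assuming a whole-disk image at /backup/hde.img and that the first partition starts at sector 63 (check the real start sector; that number is only the classic default):

----------
# Find the partition's start sector inside the image:
fdisk -lu /backup/hde.img        # or: parted /backup/hde.img unit s print

# offset= is in bytes: start sector * 512  (63 * 512 = 32256 here)
mount -t ext3 -o loop,ro,offset=32256 /backup/hde.img /mnt/hde1-copy
----------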
If the files on the disk are in a common format and the filesystem is for some reason nuked, photorec might help recover data from the disk. But beware: it may be very time-consuming to restore everything photorec thinks it can identify. For simple digital camera media this works much better than for a full disk with, e.g., an operating system on it.
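If it comes to that, the invocation itself is simple (photorec ships with testdisk and carves files by signature, ignoring the filesystem entirely); write the recovered files to a different disk when it asks:

----------
photorec /backup/hde.img
----------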
Before trying a real fsck on the backup copy, first run fsck -n and see whether the output is minimal. Possibly try different superblocks as well. You don't want to have to make another copy just because the filesystem is so broken that it can never be restored using fsck.
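Concretely, something like this against the copy, where the image path and the 32768 value are placeholders (mke2fs -n or dumpe2fs report the real backup locations):

----------
e2fsck -n /backup/hde1.img             # read-only check, answers "no" to everything
e2fsck -n -b 32768 /backup/hde1.img    # same, but starting from a backup superblock
----------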
Good luck, and provide feedback; we might learn a trick or two :)
On 11/05/11 02:52, Dag Wieers wrote:
Once you have finished your dd_rescue/ddrescue copy, you may want to look into the testdisk utility to check whether the partition table has been tampered with. testdisk can suggest different partition layouts based on filesystem patterns it finds.
I've had good luck with testdisk before; it's well worth playing with (on a clone of the disk, if possible).