Hi
I have a RAID 1 CentOS 4.4 setup and now have this /proc/mdstat output:
[root@server admin]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc2[1] hda2[0]
      1052160 blocks [2/2] [UU]

md1 : active raid1 hda3[0]
      77023552 blocks [2/1] [U_]

md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]
What is happening with md1?
My dmesg output is:
[root@server admin]# dmesg | grep md1
Kernel command line: ro root=/dev/md1 rhgb quiet
md: created md1
raid1: raid set md1 active with 1 out of 2 mirrors
md: md1 already running, cannot run hdc3
md: md1 already running, cannot run hdc3
EXT3-fs: md1: orphan cleanup on readonly fs
EXT3-fs: md1: 4 orphan inodes deleted
md: md1 already running, cannot run hdc3
EXT3 FS on md1, internal journal
[root@server admin]#
Thanks for any help !!!
roberto
On Wed, April 25, 2007 11:54 am, Roberto Pereyra wrote:
What is happening with md1?
hdc3 appears to be the problem. Try using mdadm (man mdadm) query mode to find out more info about md1 and /dev/hdc3. Are you sure /dev/hdc3 is set to be Linux raid autodetect? (Use fdisk to find out.) You can also try adding hdc3 back to the array with mdadm - see the man page for details.
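For example, something along these lines (just a sketch using the device names from your mdstat output, adjust if yours differ):

mdadm --query /dev/md1       # one-line summary of the array
mdadm --detail /dev/md1      # full state of the array and its members
mdadm --examine /dev/hdc3    # shows the md superblock on the dropped partition, if there is one
fdisk -l /dev/hdc            # hdc3 should show type fd (Linux raid autodetect)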
cheers;
Alex
On 4/25/07, alex@avantel.ca wrote:
hdc3 appears to be the problem. Try using mdadm (man mdadm) query mode to find out more info about md1 and /dev/hdc3. Are you sure /dev/hdc3 is set to be Linux raid autodetect? (Use fdisk to find out.) You can also try adding hdc3 back to the array with mdadm - see the man page for details.
I recently had a similar problem. One disk was reported to have failed. I ran mdadm to add it back. The reconstruction took a couple of hours (500 GB), but it seems to be working now. So, as the above advice suggests, read the man page for mdadm and try to recover the lost partition.
Akemi
2007/4/26, Akemi Yagi amyagi@gmail.com:
I recently had a similar problem. One disk was reported to have failed. I ran mdadm to add it back. The reconstruction took a couple of hours (500 GB), but it seems to be working now. So, as the above advice suggests, read the man page for mdadm and try to recover the lost partition.
Akemi
Hi !
Can I force the reconstruction without risk with this command ?
mdadm --assemble --force
roberto
On 4/26/07, Roberto Pereyra pereyra.roberto@gmail.com wrote:
Hi !
Can I force the reconstruction without risk with this command ?
mdadm --assemble --force
roberto
There is no such thing as "without risk" :)
In my particular case, after having made a full backup copy, I simply added the failed drive back with:
mdadm /dev/md0 --add /dev/sdb1
(note that /dev/sdb1 was the one failing)
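In your case that should translate to something like (assuming hdc3 itself is still usable):

mdadm /dev/md1 --add /dev/hdc3

You can then watch the resync progress with cat /proc/mdstat until md1 shows [UU] again.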
Akemi
Roberto Pereyra spake the following on 4/26/2007 5:08 AM:
Hi !
Can I force the reconstruction without risk with this command ?
mdadm --assemble --force
If you have to force it, I wouldn't do it. Why not format the partition as ext2 with mke2fs -c to test it thoroughly (or even -cc for a deeper test)? Then you can redo it as Linux raid and re-add it into the array.
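Roughly along these lines (untested, and it destroys whatever is on hdc3, so only run it on the dropped member):

mke2fs -c /dev/hdc3     # throwaway ext2 fs with a read-only bad block scan (-cc for a slower read-write test)
fdisk /dev/hdc          # make sure hdc3 is still type fd (Linux raid autodetect)
mdadm /dev/md1 --add /dev/hdc3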
On Thu, Apr 26, 2007 at 09:59:29AM -0700, Scott Silva wrote:
If you have to force it, I wouldn't do it. Why not format the partition as ext2 with mke2fs -c to test it thoroughly (or even -cc for a deeper test)? Then you can redo it as Linux raid and re-add it into the array.
Or just run badblocks against it.
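For example (read-only by default, so non-destructive, though it will take a while on a partition that size):

badblocks -sv /dev/hdc3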
Thanks to all !!!
Roberto
Hi Roberto.
The RAID 1 array md1 is degraded. The partition hdc3 has been dropped by the system, which is now running on only one of the two partitions of the mirror (hda3). You can get more details with:
mdadm --detail /dev/md1
Sorry, my English is not good
The md1 array is a RAID 1 of hda3 and hdc3, but hdc3 has failed and the system is currently running on hda3 alone.
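If hdc3 itself turns out to be healthy, re-adding it should rebuild the mirror, for example:

mdadm /dev/md1 --add /dev/hdc3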
Thanks to all !!!
This is my mdadm output:
[root@server /]# /sbin/mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.01
  Creation Time : Wed Apr 4 06:16:19 2007
     Raid Level : raid1
     Array Size : 77023552 (73.46 GiB 78.87 GB)
  Used Dev Size : 77023552 (73.46 GiB 78.87 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Apr 26 07:35:41 2007
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 39c8b2dc:8d81c3fa:ebb5bd38:dd773473
         Events : 0.712607

    Number   Major   Minor   RaidDevice State
       0       3       3        0      active sync   /dev/hda3
       1       0       0        1      removed
[root@server /]#
Is it a logical failure or a physical one?
Can someone help me fix it?
Thanks again
roberto