[CentOS] Raid 1 newbie question
Roberto Pereyra
pereyra.roberto at gmail.com
Thu Apr 26 10:38:00 UTC 2007
Thanks to all !!!
This is my mdadm output:
[root at server /]# /sbin/mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Wed Apr 4 06:16:19 2007
Raid Level : raid1
Array Size : 77023552 (73.46 GiB 78.87 GB)
Used Dev Size : 77023552 (73.46 GiB 78.87 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Apr 26 07:35:41 2007
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 39c8b2dc:8d81c3fa:ebb5bd38:dd773473
Events : 0.712607
Number Major Minor RaidDevice State
0 3 3 0 active sync /dev/hda3
1 0 0 1 removed
[root at server /]#
Is this a logical failure or a physical one?
Can someone help me fix it?
Thanks again
roberto
2007/4/25, Miguel <infopnte at pnte.cfnavarra.es>:
> Hi Roberto.
>
> The RAID1 array md1 is degraded. The partition hdc3 has been dropped by
> the system, which is now running on only one of the two mirror partitions
> (hda3).
> You can get more details with:
>
> mdadm --detail /dev/md1
>
> Sorry, my English is not good
>
>
> The array md1 is a RAID-1 of hda3 and hdc3, but hdc3 has failed and the
> system is running on hda3 alone.
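>
> If the disk itself is still healthy, you can usually re-add the partition
> and let the array resync. A rough sketch (assuming /dev/hdc3 really is the
> removed member shown in your output, and that smartmontools is installed):
>
> ```shell
> # Check the disk's SMART health first (smartmontools package)
> smartctl -a /dev/hdc
>
> # Re-add the dropped partition to the degraded array
> mdadm /dev/md1 --add /dev/hdc3
>
> # Watch the resync progress until it shows [UU] again
> cat /proc/mdstat
> ```
>
> If hdc keeps getting kicked out of the array after re-adding, that points
> to a physical problem and the disk should be replaced.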
>
> Roberto Pereyra wrote:
> > Hi
> >
> > I have a Raid 1 centos 4.4 setup and now have this /proc/mdstat output:
> >
> > [root at server admin]# cat /proc/mdstat
> > Personalities : [raid1]
> > md2 : active raid1 hdc2[1] hda2[0]
> > 1052160 blocks [2/2] [UU]
> >
> > md1 : active raid1 hda3[0]
> > 77023552 blocks [2/1] [U_]
> >
> > md0 : active raid1 hdc1[1] hda1[0]
> > 104320 blocks [2/2] [UU]
> >
> >
> > What happens with md1 ?
> >
> >
> > My dmesg output is:
> >
> > [root at server admin]# dmesg | grep md1
> >
> > Kernel command line: ro root=/dev/md1 rhgb quiet
> > md: created md1
> > raid1: raid set md1 active with 1 out of 2 mirrors
> > md: md1 already running, cannot run hdc3
> > md: md1 already running, cannot run hdc3
> > EXT3-fs: md1: orphan cleanup on readonly fs
> > EXT3-fs: md1: 4 orphan inodes deleted
> > md: md1 already running, cannot run hdc3
> > EXT3 FS on md1, internal journal
> > [root at server admin]#
> >
> >
> > Thanks for any help !!!
> >
> > roberto
> >
> >
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
>
--
Ing. Roberto Pereyra
ContenidosOnline
http://www.contenidosonline.com.ar