Hi, can someone point me to documentation on how to install Red Hat Enterprise AS 4.x (or the similar CentOS 4.x) in a 1-active / 3-passive clustering scenario?
Thanks,
On Wed, 2007-07-04 at 13:01 -0500, Erick Perez wrote:
Hi, can someone point me to documentation on how to install Red Hat Enterprise AS 4.x (or the similar CentOS 4.x) in a 1-active / 3-passive clustering scenario?
Thanks,
Have you searched the official documentation web page? It contains several documents regarding the Cluster Suite, GFS, etc.: http://www.centos.org/docs/4/
Thanks for the link; I was looking at the Red Hat site. ;))
Question: does installing GFS 6.1 with a cluster mean the Cluster Suite *must* be installed too?
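(As far as I know, GFS 6.1 on RHEL/CentOS 4 sits on top of the cluster infrastructure (ccsd, cman, fencing, dlm), so at least that part of the Cluster Suite has to be there. A rough sketch of what the install looked like -- the package names below are from memory and may not be exact, so verify against the documentation linked above:)

# sketch only -- package names assumed from the CentOS 4 csgfs package set
yum install ccs cman cman-kernel dlm dlm-kernel fence magma magma-plugins
yum install GFS GFS-kernel
yum install rgmanager system-config-cluster   # failover/service manager and config GUI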
On 7/4/07, Fabian Arrotin fabian.arrotin@arrfab.net wrote:
Have you searched the official documentation web page? It contains several documents regarding the Cluster Suite, GFS, etc.: http://www.centos.org/docs/4/
-- Fabian Arrotin fabian.arrotin@arrfab.net Solution ? echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
Anyone know where I should go to get support to repair a failed software RAID5? Actually the drives are all perfectly fine; somehow it just broke, and it cannot pull itself together again...
Any thoughts?
On 06/07/07, Andrew @ ATM Logic atmlogic@kmts.ca wrote:
Anyone know where I should go to get support to repair a failed software RAID5? Actually the drives are all perfectly fine; somehow it just broke, and it cannot pull itself together again...
Hi, can you explain what you have tried so far? All I can say is that "man mdadm" should be sufficient.
-- Regards, Sudev Barar
These are a few of the commands I have run so far...
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5] [raid6]
md1 : active raid1 hdd1[2] hdc1[1] hda1[0]
      104320 blocks [3/3] [UUU]
(Just so you know... that's missing the "needed" md2.)
lvm pvscan
  No matching physical volumes found
lvm lvscan
  No volume groups found
lvm vgscan
  Reading all physical volumes. This may take a while...
  No volume groups found
lvm vgchange -ay
  No volume groups found
lvm vgdisplay
  No volume groups found
fdisk -l /dev/hda
Disk /dev/hda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   fd  Linux raid autodetect
/dev/hda2              14       19457   156183930   fd  Linux raid autodetect
fdisk -l /dev/hdc
Disk /dev/hdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1   *           1          13      104391   fd  Linux raid autodetect
/dev/hdc2              14       19457   156183930   fd  Linux raid autodetect
fdisk -l /dev/hdd
Disk /dev/hdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1          13      104391   fd  Linux raid autodetect
/dev/hdd2              14       19457   156183930   fd  Linux raid autodetect
raidstart /dev/md1
  (no errors, just returns to the command prompt)
raidstart /dev/md2
  (no errors, just returns to the command prompt)
So then I tried to start fixing things...
mdadm --assemble -m 2 /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
I get: mdadm: Bad super-minor number: /dev/md2
mdadm --assemble --run -m 2 /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
Then... Looking at
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5] [raid6]
md1 : active raid1 hdd1[2] hdc1[1] hda1[0]
      104320 blocks [3/3] [UUU]
md2 : inactive hdc2[1] hdd2[2]
      312367616 blocks

unused devices: <none>
Then... Trying to get a little more pushy...
mdadm --stop /dev/md2
mdadm --verbose --assemble --run -m 2 /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
mdadm: looking for devices for /dev/md2
mdadm: /dev/hda2 is identified as a member of /dev/md2, slot 0
mdadm: /dev/hdc2 is identified as a member of /dev/md2, slot 1
mdadm: /dev/hdd2 is identified as a member of /dev/md2, slot 2
mdadm: added /dev/hda2 to /dev/md2 as 0
mdadm: added /dev/hdd2 to /dev/md2 as 2
mdadm: added /dev/hdc2 to /dev/md2 as 1
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
Ok... so... that is where I quit. Any idea what kind of hope I should be holding out for?
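(A note on diagnosing this: the first mdstat above shows md2 sitting inactive with only hdc2 and hdd2, so before forcing anything it is worth dumping the md superblock from each member and comparing the Events, Update Time and State fields. A sketch, using the device names from the post above:)

mdadm --examine /dev/hda2
mdadm --examine /dev/hdc2
mdadm --examine /dev/hdd2

# If one member just has a stale event count, a forced assemble of the
# remaining members will usually bring a 3-disk RAID5 up degraded:
mdadm --assemble --run --force /dev/md2 /dev/hdc2 /dev/hdd2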
Andrew @ ATM Logic spake the following on 7/6/2007 4:39 AM:
Ok... so... that is where I quit. Any idea what kind of hope I should be holding out for?
I recently had luck with the following on an array that wouldn't start after the system was unplugged:

mdadm --assemble --run --force --update=summaries /dev/"raid array" /dev/"drive1" /dev/"drive2" /dev/"drive3"
Remember to replace the quoted parts with your actual entries.
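Applied to the device names from the earlier post, that would presumably look like:

mdadm --assemble --run --force --update=summaries /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2

# then check what came up
cat /proc/mdstat
mdadm --detail /dev/md2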
Quoting "Andrew @ ATM Logic" atmlogic@kmts.ca:
Anyone know where I should go to get support to repair a failed software RAID5? Actually the drives are all perfectly fine; somehow it just broke, and it cannot pull itself together again...
Any thoughts?
echo check > /sys/block/md1/md/sync_action
This will get your RAID array to rebuild and hopefully be fine... Or be marked as dead ;)
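To watch the check run and see whether it actually found anything, something like this should work (assuming the array really is md1, as in the line above):

echo check > /sys/block/md1/md/sync_action
watch cat /proc/mdstat                 # shows the progress of the check
cat /sys/block/md1/md/mismatch_cnt     # non-zero afterwards means inconsistent stripes were found (if the kernel exposes this file)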
Oh, and if you want to read a little more on Linux RAID and data hints, check out:
http://www.crc.id.au/2007/07/06/raid-arrays-in-linux/
-- Steven Haigh
Email: netwiz@crc.id.au Web: http://www.crc.id.au Phone: (03) 9017 0597 - 0404 087 474
Thanks. If you see my reply back to Sudev Barar, let me know if you think I still stand a chance with the 'echo check', keeping in mind that the system does not actually start on its own, so this would have to be done through Knoppix or similar.
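(A rough sketch of what that recovery might look like from a live CD such as Knoppix, assuming the same device names as in the earlier posts and that the live environment ships mdadm and the LVM tools:)

# load the raid5 personality if the live kernel hasn't already
modprobe raid5

# try to assemble the broken array from its members
mdadm --assemble --run --force /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
cat /proc/mdstat

# if md2 comes up, the LVM volumes on it should become visible again
lvm vgscan
lvm vgchange -ay
lvm lvscan

# and the check pass from the earlier mail can then be started the same way
echo check > /sys/block/md2/md/sync_action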
On Thu, 5 Jul 2007 at 9:37pm, Andrew @ ATM Logic wrote
Anyone know where I should go to get support to repair a failed software RAID5? Actually the drives are all perfectly fine; somehow it just broke, and it cannot pull itself together again...
Any thoughts?
The developers hang out on the linux-raid mailing list, linux-raid@vger.kernel.org, and can be very helpful in this sort of situation.
Good luck.