About 9 months ago, I made a policy decision to adopt RAID 1 on all systems I admin (about 20 systems, all told). It's been a long, slow process.
One of the systems is being upgraded from CentOS 3 to CentOS 4. It's been rock-solid stable for over a year. It's a "frankenserver" with parts from all over the place, maintained over years of active service.
Currently, it's got an ASRock "M810LMR" motherboard in it. http://www.ciao.co.uk/ASRock_M810LMR__5410842
When I install the O/S (I've tried CentOS 4.0 and 4.2) with IDE RAID, it bombs: it takes hours just to format the drives, and it won't reboot, getting stuck at "GRUB". But when I install the SAME TWO DRIVES without RAID (e.g., some partitions on one drive, some on the other), it works perfectly.
I've tried putting both drives on separate IDE cables, using completely different drives, and trying both CentOS 4.0 and 4.2.
When I try to set up IDE RAID, formatting the drives takes a very long time, and on the alternate consoles I see lots of errors about DMA, plus complaints that there is only one drive ("unable to start RAID array, only 1 drive found"), even though the installer and Disk Druid have no problem reporting both drives. It comes down to something like "/dev/mdN delayed until /dev/mdX is synchronized because they share physical resources"...
I'm stumped, and falling back to non-RAID because this system is needed in production. And, I'm not new to using software RAID, I've been using it for years without issue.
Any idea what's going on?
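For what it's worth, that degraded state should be visible in /proc/mdstat. A minimal sketch of spotting it — the here-string below mimics the "only 1 drive" symptom, and the device names (md0, hda1) are illustrative, not taken from the real box; on the affected machine just run `cat /proc/mdstat`:

```shell
# Sample /proc/mdstat text mimicking a mirror that came up with one half
# missing (md0/hda1 are illustrative names, not from the real machine):
mdstat='md0 : active raid1 hda1[0]
      104320 blocks [2/1] [U_]'

# "[2/1]" means 2 slots but only 1 active member; "[U_]" marks the
# missing mirror half.
echo "$mdstat" | grep -q '\[2/1\]' && echo "md0 is running degraded"
```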
Check IDE cables, jumpers, and BIOS settings.
Also, download the .iso support CD from the drive manufacturer (Western Digital, Seagate, etc.), boot from that CD, check SMART and the hardware, and re-certify your disks as fault-free.
Then, try it again. I think it's a hardware issue.
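A software-side alternative to the vendor CD, if smartmontools happens to be installed: a quick SMART health query might look like the sketch below. Both `smartctl` and `/dev/hda` are assumptions here — substitute the actual device on the box.

```shell
# Hypothetical SMART health check; smartctl (from smartmontools) and
# /dev/hda are assumptions, not details from the original thread.
dev=/dev/hda
if command -v smartctl >/dev/null 2>&1 && [ -b "$dev" ]; then
    smartctl -H "$dev" || true    # overall health self-assessment
else
    echo "smartctl or $dev not available on this machine"
fi
```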
HTH Oliver
On Thursday 16 February 2006 15:07, Oliver Schulze L. wrote:
Check IDE cables, jumpers, and BIOS settings.
Also, download the .iso support CD from the drive manufacturer (Western Digital, Seagate, etc.), boot from that CD, check SMART and the hardware, and re-certify your disks as fault-free.
Then, try it again. I think it's a hardware issue.
So did I, for a full day. I replaced the hard drives completely, just to be sure. I replaced the cable, twice. I checked and rechecked the BIOS settings. As a straight system, it worked great. Add IDE RAID: instant problems.
Thanks for considering it, though.
On Wed, 2006-02-15 at 20:32 -0800, Benjamin Smith wrote:
About 9 months ago, I made a policy decision to adopt RAID 1 on all systems I admin (about 20 systems, all told). It's been a long, slow process.
One of the systems is being upgraded from CentOS 3 to CentOS 4. It's been rock-solid stable for over a year. It's a "frankenserver" with parts from all over the place, maintained over years of active service.
Currently, it's got an ASRock "M810LMR" motherboard in it. http://www.ciao.co.uk/ASRock_M810LMR__5410842
Looks like this board is based on an early socket A VIA chipset ... generally I would avoid these. My past experience with older VIA chipsets has been less than good ... I've had issues with buggy IDE implementations in the KT266 & KT133A and having to run them in PIO mode rather than DMA mode (SLOW).
I would use a cheap PCI IDE card and see what happens.
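One quick check before swapping hardware: `hdparm -d <device>` reports whether DMA is actually enabled. The sample line below mimics its output on a drive stuck in PIO mode — it is illustrative text, not captured from the real machine; run the real command as root, with the actual device in place of /dev/hda.

```shell
# Sample text mimicking what `hdparm -d /dev/hda` prints for a drive
# running without DMA (illustrative, not from the real box):
sample=' using_dma    =  0 (off)'

echo "$sample" | grep -q '= *0' && echo "DMA is off: drive is in PIO mode"
```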
Regards, Paul Berger
On Thursday 16 February 2006 18:36, Paul wrote:
Currently, it's got an ASRock "M810LMR" motherboard in it. http://www.ciao.co.uk/ASRock_M810LMR__5410842
Looks like this board is based on an early socket A VIA chipset ... generally I would avoid these. My past experience with older VIA chipsets has been less than good ... I've had issues with buggy IDE implementations in the KT266 & KT133A and having to run them in PIO mode rather than DMA mode (SLOW).
My conclusion as well. (And yes, it runs in PIO mode, and it's slow, and I don't care, since this system isn't a high-load system.)
I would use a cheap PCI IDE card and see what happens.
It's a 1U system. PCI cards are pretty much out of the question.
(sigh)
-Ben
On Fri, 2006-02-17 at 11:03 -0800, Benjamin Smith wrote:
On Thursday 16 February 2006 18:36, Paul wrote:
Currently, it's got an ASRock "M810LMR" motherboard in it. http://www.ciao.co.uk/ASRock_M810LMR__5410842
Looks like this board is based on an early socket A VIA chipset ... generally I would avoid these. My past experience with older VIA chipsets has been less than good ... I've had issues with buggy IDE implementations in the KT266 & KT133A and having to run them in PIO mode rather than DMA mode (SLOW).
My conclusion as well. (And yes, it runs in PIO mode, and it's slow, and I don't care, since this system isn't a high-load system.)
I would use a cheap PCI IDE card and see what happens.
It's a 1U system. PCI cards are pretty much out of the question.
Hmmm, have you tried switching off DMA in GRUB (IIRC, adding ide0=nodma ide1=nodma to the kernel line should do that)? It sounds like you may have tried that.
I would also run the drives on separate IDE cables; it looks like you tried that, but maybe not in conjunction with nodma.
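For reference, a grub.conf stanza with both IDE channels forced out of DMA might look like the fragment below. The kernel version, root device, and partition numbers are illustrative guesses for a CentOS 4 install, not details taken from this thread — match them to the existing entry.

```
title CentOS 4 (no DMA)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-22.EL ro root=/dev/md0 ide0=nodma ide1=nodma
        initrd /initrd-2.6.9-22.EL.img
```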
Regards, Paul Berger