So when I couldn't get the raid10 to work, I decided to do raid5. Everything installed and looked good. I left it overnight to rebuild the array, and when I came in this morning, everything was frozen. Upon reboot, it said that 2 of the 4 devices for the raid5 array failed. Luckily, I didn't have any data on it, but how do I know that the same thing won't happen when I have real data on it?
I kind of feel that the problem might be that I'm using SATA drives, and they probably tried to self-correct an error and took too long, so the raid controller assumed it was a bad drive and took it out of the array. The question is, is there anything I can do about it?
Russ
On Tue, May 01, 2007 at 01:58:55PM -0400, Ruslan Sivak wrote:
. . . snip . . .
It could also be that the drives are overheating.
That's possible, but that's no reason to kill my array. I have the server in my cube for now while I'm setting it up, but plan to put it in a server room later on. I don't, however, want a simple heat issue to cause me to lose all my data. Should I try raid6?
Russ
On Tue, May 01, 2007 at 02:09:51PM -0400, Ruslan Sivak wrote:
. . . snip . . .
You shouldn't have lost any data. The two drives failed and the array became unavailable, but you should now be able to re-add the drives.
You'll want to force the last drive that failed to become available again, and then rebuild the other one.
I'm not comfortable enough with mdadm to give the correct options, but take this time of testing to try the options listed in the manual page.
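Purely as an illustration of what the man page describes - not tested, and the device names below (/dev/md0, /dev/sd[abcd]1) are just examples - the sequence would be something like:

mdadm --examine /dev/sd[abcd]1                                     # compare event counters; the drive that failed first is the stalest
mdadm --stop /dev/md0                                              # stop whatever is half-assembled
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1    # force-assemble with the three freshest members
mdadm /dev/md0 --add /dev/sdd1                                     # then re-add the stalest drive so it rebuilds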
As for raid6, it won't help as long as the drives keep failing. You'd end up with a degraded array that is one drive away from failing completely.
Luciano Miguel Ferreira Rocha wrote:
. . . snip . . .
I tried recreating the array, but it won't mount...
Also a bit weird is that I did a SMART short self-test, and one of the drives keeps returning a Read-Failure, but the overall SMART status is passed. Could this be from overheating? Would letting it cool off fix things, or should I call in for warranty?
Russ
On Tue, May 01, 2007 at 02:24:34PM -0400, Ruslan Sivak wrote:
I tried recreating the array, but it won't mount...
Also a bit weird is that I did a SMART short self-test, and one of the drives keeps returning a Read-Failure, but the overall SMART status is passed. Could this be from overheating?
I'm not sure. If those Read-Failures are error counts, then yes, I think so. The drive is currently OK, but when it had a lot of activity it overheated and started getting read errors.
Would letting it cool off fix things, or should I call in for warranty?
Array creation creates a lot of activity for the drives, and it could cause them to overheat. Or it could cause power fluctuations if the PSU isn't powerful enough, but I'd expect PSU problems when booting, not when writing/reading.
I have an older system where I have to keep DMA disabled for my drives or the system locks when cron starts updatedb or someone copies a large file, thus my suspicion that your drives are also overheating.
About calling in for the warranty, I'd first try hddtemp (http://www.guzu.net/linux/hddtemp.php) and if the temperature does rise, then some more fans. :)
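For reference, checking temperatures and the self-test results from a shell would look something like this (assuming smartmontools and hddtemp are installed; /dev/sda is just an example):

smartctl -a /dev/sda           # full SMART report - look at Temperature_Celsius and Reallocated_Sector_Ct
smartctl -t short /dev/sda     # start a short self-test
smartctl -l selftest /dev/sda  # read the self-test log when it finishes
hddtemp /dev/sda               # quick temperature readout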
Luciano Miguel Ferreira Rocha wrote:
. . . snip . . .
Actually what I'm saying is that I ran smartctl and did a short test on the drive, and it keeps returning a read error at a certain point (LBA 271739730). I let it cool off for a few hours and tried the test again, same thing.
Does this mean that the drive has developed bad sectors and needs replacement? These are brand new drives. The error only happens on 1 of the 4.
I tried to get hddtemp, but alas, no compiler available in rescue mode. I will try Knoppix.
Russ
Ruslan Sivak spake the following on 5/1/2007 1:10 PM:
. . . snip . . .
Yes, that can be a sign of impending doom for the drive. The drive's age doesn't have anything to do with it. A drive can fail in anything between minutes and years. I have had new drives fail in days, and I have some old drives that still keep chugging along in routers and such that have really only lost their usefulness because they are so small. I have an old print server running on an old Dell 486 with an 80 MB drive (that's MB, not GB). It just refuses to die on its own, and it has been in continuous operation for over 12 years. It will get replaced when it dies, but it still works great, and it is fast enough to spool print files. If you run the manufacturer's tests on the drive, and it shows any error, return it.
I ran the manufacturer's tests, and it failed the short test again with the media error, but the long test fixed it. I will be exchanging the drive for a new one, but for now it's up and running.
This incident, however, has reinforced what I've read in many places - raid 5 is not safe. If a media error is encountered during a rebuild, you lose all your data.
I would like to set up raid 10 instead, but it doesn't seem like it's supported by my mdadm - the proper personalities are not loaded. How do I get the raid10 personality in there?
Assuming I get the raid10 personality in there, how do I convert to raid10? I suppose I can fail one of the drives in the raid5, set up a filesystem on it, copy the data from the raid5, kill the raid5, set up a raid10 from the three disks in the raid5 set with one missing, copy the data from the single drive, and add the single drive to the raid10 array. Is this the right path? Would I just cp certain directories over or use something like dd?
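In mdadm terms the plan would look roughly like this (hypothetical device and mount names, completely untested; note that the data has to fit on the single freed disk for the intermediate copy, and the finished raid1+0 holds less than the old raid5 did):

mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1        # free one disk from the raid5
mkfs.ext3 /dev/sdd1 && mount /dev/sdd1 /mnt/tmp           # temporary filesystem on the freed disk
rsync -aHx /data/ /mnt/tmp/                               # copy everything off the (now degraded) raid5
umount /data ; mdadm --stop /dev/md0                      # kill the old array
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2
mkfs.ext3 /dev/md10 && mount /dev/md10 /data              # new raid1+0, one mirror still degraded
rsync -aHx /mnt/tmp/ /data/                               # copy the data back
umount /mnt/tmp ; mdadm /dev/md1 --add /dev/sdd1          # finally give the 4th disk to the degraded mirror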
Any help would be greatly appreciated.
Russ Sent wirelessly via BlackBerry from T-Mobile.
I don't believe there's a raid10 personality. What you would do is build your 2 stripe sets, then mirror those striped md's. I tried that sort of thing a few years ago and it seemed to work, but I don't remember exactly how well it worked - it may have been flaky at boot time - I did not spend much time on it.
You could also try raid5 with a hot spare, or raid6.
I've been there with not trusting the HDs. I once had a bunch of new Maxtor SCSI disks in a raid set and several failed in succession - they got swapped out with Seagates right quick.
Toby Bluhm wrote:
I don't believe there's a raid10 personality. What you would do is build your 2 stripe sets, then mirror those striped md's...
Actually, you have it backwards. Build 2 separate mirror sets, then make a stripe set of those 2 mirrors. Here's why: In your setup, if one drive fails, that whole stripe set is down for the count. You're then running on only 2 drives, no redundancy. The other way, if one drive goes, the other drive in that mirror set keeps going; you're still using 3 drives. Hopefully I explained it clearly... :)
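In mdadm terms, that layering would look something like this (device names are only examples):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # first mirror
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1   # second mirror
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2     # stripe across the two mirrors

Either mirror can then lose one disk without taking the stripe on top down.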
David A. Woyciesjes wrote:
. . . snip . . .
I have tried this previously, but it's just not possible from anaconda. The weird thing is that there is a raid 10 personality... at least there is supposed to be: http://cgi.cse.unsw.edu.au/~neilb/01093607424
I think that's the guy who maintains mdadm, and he's talking about version 1.7 of mdadm, while I have 2.5.4.
I can go to shell from anaconda, and build the 2 raid 1 sets and put the raid 0 on top of it, but anaconda won't see the raid0. So there's no way to install to it.
I do have a SIL3114 chipset, and I think it's supposed to be supported by device mapper. When I go to rescue mode, I see it loading the driver for SIL3112, but nothing appears under /dev/mapper except control. Are there instructions somewhere on getting it to use my controller's raid?
Russ
Your controller only has a bios chip. It has no raid processing capability at all.
You need to use mdadm. Anaconda should be able to let you create two mirrors and then create a third array that stripes those md devices.
Feizhou wrote:
. . . snip . . .
Anaconda doesn't let me create a stripe raid set on top of a mirror set. And it doesn't detect it when I do it manually.
Also the bios chip presents additional issues. I believe when I don't have a raid array set up, it won't boot at all. When I have it on raid10, I had trouble booting, and when I have it on concatenation, everything works fine, until a drive is replaced. At that point, I have to recreate the array, as concatenation is not a fault tolerant set, and at this point I seem to lose all my data.
Is there a way to get it to use the raid that's part of the bios chip? Something about device mapper?
Russ
Ruslan Sivak wrote:
. . . snip . . .
It won't boot at all without a raid array setup? That sounds really funny.
Is there a way to get it to use the raid that's part of the bios chip?
Repeat after me. There is no raid that is part of the bios chip. It is just a simple table.
Something about device mapper?
You need the fake raid driver dmraid if you are going to set up stuff in the bios. What version of CentOS are you trying to install? libata in CentOS 5 should support this without having to resort to the IDE drivers.
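As a rough sketch of how dmraid gets poked at from a rescue shell (whether anaconda can then actually install onto the resulting device is a separate question):

dmraid -r          # list the raid sets described by the bios metadata on the disks
dmraid -ay         # activate them; the mapped devices should show up under /dev/mapper/
ls /dev/mapper/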
Feizhou wrote:
It won't boot at all without a raid array setup? That sounds really funny.
Actually I'm not 100% sure on this, but I think this is the case. I believe the first time I set it up as raid10, assuming that Linux would just ignore it. I installed CentOS by putting /boot on a raid1, and root on LVM over 2 raid1 sets. I had trouble getting it to boot.
Repeat after me. There is no raid that is part of the bios chip. It is just a simple table.
Yes, I know this is fakeraid, aka softraid, but I was hoping that using the drivers would make it easier to support raid 10 than with mdadm, which seems to be impossible to get to work with the installer. I'm not even sure why the raid10 personality is not loaded, as it seems to have been part of mdadm since version 1.7.
I'm trying to install CentOS 5 - the latest. How would I go about using dmraid and/or libata? The installer picks up the drives as individual drives. There is a driver on the Silicon Image website, but it's for RHEL4, and I couldn't get it to work. I'm open to using md for raid, or even LVM, if it supports it. I just want to be able to use raid10, as I can't trust raid5 anymore.
Thanks for your help so far,
Russ
I believe your problem is the fact that there is a raid array configured in the bios.
I don't know about a raid10 personality but on boxes I used to run, I had raid1 and raid0 personalities loaded...
There you go. The installer picks up the drives individually, so the installation process treats them as such, but the raid bios does not, and that leads to booting problems. Just blow away the raid array configured in the SiI311x bios and the thing should be able to boot. Check your motherboard bios settings too.
Feizhou wrote:
I don't know about a raid10 personality but on boxes I used to run, I had raid1 and raid0 personalities loaded...
According to this, as well as other sources, the raid10 personality should be in the kernel by now: http://cgi.cse.unsw.edu.au/~neilb/01093607424
I do have raid1 and raid0 personalities, but I can't seem to combine them in the way that the installer would let me install on it.
I did:
2 drives in md11 as raid1
2 drives in md12 as raid1
md11 and md12 in md10 as raid0
Anaconda only sees md11 and md12.
I can set up LVM on top of md11 and md12. Is that really raid10 though?
There you go. The installer picks up the drives individually, so the installation process treats them as such, but the raid bios does not, and that leads to booting problems. Just blow away the raid array configured in the SiI311x bios and the thing should be able to boot. Check your motherboard bios settings too.
I will try this. I do know it loads up the SIL3112 driver during the installation process. Is this only so it can read the drives? I thought the whole process was transparent and didn't need special drivers if it's just seeing them as single drives?
Russ
Striping of two mirrors via LVM is technically the same.
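A sketch of that approach, with an LVM logical volume striped across the two md mirrors (the names and the size are just examples):

pvcreate /dev/md11 /dev/md12
vgcreate vg0 /dev/md11 /dev/md12
lvcreate -i 2 -I 64 -L 200G -n data vg0    # -i 2 stripes across both mirrors, -I 64 = 64k stripe size
mkfs.ext3 /dev/vg0/data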
The problem here is that Linux uses the disks differently from what the bios expects. When the drives are treated individually, things are written to disk in a different manner than when dm is in use. Besides, I do not remember there being any ability to boot off a fakeraid array. grub certainly does not know how to interpret things the way the bios says things should be. Please just destroy that array setup in the bios if you plan to boot off disks on the SiI3112.
IIRC you had two out of four new disks die? So maybe it would be more accurate to say it's your hardware you don't trust. Raid5 is used without problems by (I assume) many, many people, myself included. You could have a raid10 and still lose the whole array if two disks in the same mirror die at once. I guess no software in the world can really overcome bad hardware. That's why we do backups :)
Anyway, perhaps exercising/stressing the disks for a few days without error would make you feel more confident about the HDs.
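A simple, read-only way of exercising each disk for a while (just a sketch; badblocks in its default mode does not write anything):

smartctl -t long /dev/sda     # long SMART self-test on each drive in turn
badblocks -sv /dev/sda        # full read pass over the disk, reporting any unreadable blocks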
Toby Bluhm wrote:
IIRC you had two out of four new disks die? So maybe it would be more accurate to say it's your hardware you don't trust. Raid5 is used without problems by ( I assume ) many, many people, myself included. You could have a raid10 and still lose the whole array if two disks that in the same mirror die at once. I guess no software in the world can really overcome bad hardware. That's why we do backups :)
Actually, 2 disks did not die. Due to the fact that it was a new raid 5 array (or for whatever reason), it was rebuilding the array. One of the drives had a media error, and this caused the whole array to be lost.
This is exactly what this article warns about:
http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
Russ
Ruslan Sivak wrote:
Actually, 2 disks did not die. Due to the fact that it was a new raid 5 array (or for whatever reason), it was rebuilding the array. One of the drives had a media error, and this caused the whole array to be lost. This is exactly what this article warns about:
The article doesn't seem to mention the fact that if a disk in a mirror set dies and the remaining disk within the set starts to have data corruption problems, the mirror will be rebuilt from corrupted data.
I don't know what you can do at this point, though. Perhaps make 2 separate mirrors and rsync them? You could keep copies of changes that way.
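If you went that route, a nightly cron entry along these lines would keep the second mirror as a day-old copy of the first (the paths are made up):

0 3 * * * rsync -aHx /srv/data/ /srv/standby/    # one-way copy of the live mirror to the standby mirror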
Toby Bluhm wrote:
The article doesn't seem to mention the fact that if a disk in a mirror set dies and the remaining disk within the set starts to have data corruption problems, the mirror will be rebuilt from corrupted data.
While this is true, it's far more likely that there will be a media error (i.e. a bad sector), and that the system will notice it. With raid 5, it will just kick out the drive, and you can say bye-bye to your data. With raid 10, if it happens on one of the disks in the other set, you don't have a problem, and if it happens to the disk in the same set (not very likely), I'm not sure what the outcome will be, but hopefully it can recover? I have just had a Windows drive develop a whole bunch of bad sectors and I was still able to boot to Windows and copy most of the data off. I can't imagine Linux being any worse.
I know there is a raid10 personality for md. I saw it in the source code. I see people's boot logs all over the web that say this:
md: linear personality registered as nr 1
md: raid0 personality registered as nr 2
md: raid1 personality registered as nr 3
md: raid10 personality registered as nr 9
md: raid5 personality registered as nr 4
Why does CentOS5 not support the raid10 personality? Do I need to custom compile md? Do I need to custom compile the kernel?
Russ
Ruslan Sivak wrote:
While this is true, it's far more likely that there will be a media error (i.e. bad sector), and that the system will notice it. With raid 5, it will just kick out the drive, and you can say bye bye to your data.
Perhaps not totally lost, but not fun either. Force mdadm to run; try to fix/relocate the bad sector and force mdadm to run; dd to an identical disk and force mdadm to run.
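Spelled out, that might look roughly like this (device names invented; dd with conv=noerror,sync keeps going past unreadable sectors and pads them instead of aborting):

mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1   # bring the degraded array up anyway
dd if=/dev/sdd of=/dev/sde bs=64k conv=noerror,sync                     # clone the failing disk onto an identical spare
mdadm /dev/md0 --add /dev/sde1                                          # then add the clone back for a rebuild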
With raid 10, if it happens on one of the disks in the other set, you don't have a problem, and if it happens to the disk in the same set (not very likely),
A 1 in 3 chance of putting the two worst disks together when using 4 disk raid10 - the second marginal disk lands in one of the three remaining slots, and only one of those pairs it with the first.
You have enlightened me to the raid10 module:
[root@tikal ~]# locate raid10
/usr/src/kernels/2.6.9-42.0.3.EL-smp-i686/include/config/md/raid10
/usr/src/kernels/2.6.9-42.0.3.EL-smp-i686/include/config/md/raid10/module.h
/usr/src/kernels/2.6.9-42.0.3.EL-smp-i686/include/linux/raid/raid10.h
/usr/src/kernels/2.6.9-42.0.10.EL-i686/include/config/md/raid10
/usr/src/kernels/2.6.9-42.0.10.EL-i686/include/config/md/raid10/module.h
/usr/src/kernels/2.6.9-42.0.10.EL-i686/include/linux/raid/raid10.h
/usr/src/kernels/2.6.9-42.0.3.EL-i686/include/config/md/raid10
/usr/src/kernels/2.6.9-42.0.3.EL-i686/include/config/md/raid10/module.h
/usr/src/kernels/2.6.9-42.0.3.EL-i686/include/linux/raid/raid10.h
/lib/modules/2.6.9-42.0.3.EL/kernel/drivers/md/raid10.ko
/lib/modules/2.6.9-42.0.10.EL/kernel/drivers/md/raid10.ko
/lib/modules/2.6.9-42.0.3.ELsmp/kernel/drivers/md/raid10.ko

[root@tikal ~]# modprobe raid10

[root@tikal ~]# lsmod | grep raid
raid10                 23233  0
raid1                  20033  1
This is not a Centos 5 machine though, it's SL4.4.
Toby Bluhm wrote:
A 1 in 3 chance of putting the two worst disks together when using 4 disk raid10.
Yes, as in a 33% chance vs a 100% chance with raid5.
Yes, this is exactly what I'm looking to use. How would I go about enabling it on CentOS?
Russ
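On an installed system it should just be a matter of loading the module before creating the array - a sketch only, since whether anaconda will offer raid10 in its UI even with the module loaded is another question:

modprobe raid10                                   # the module ships with the stock kernel
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat                                  # should now list a [raid10] personality

During installation you could try switching to the shell console and running the modprobe there before partitioning.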
Russ,
This is critical:
Turn off the raid functions of your bios. You must do this or the fakeraid of the bios WILL interfere with md's ability to perform. Then try building the md raid.
William Warren wrote:
turn off the raid functions of your bios. You must do this or the fakeraid of the bios WILL interfere with md's ability to perform. Then try building the md raid.
William,
This might be a bit of a problem. The computer is quite old, and I had to get a raid card just to be able to connect SATA drives to it. The bios boot order only lists the cdrom (I can't even get it to boot from floppy). The only way I can get it to boot off these hard drives is if I have them in some sort of raid set up through the card's bios (even concatenation works). Unfortunately, with concatenation, if 1 drive dies, I am forced to recreate the array, which kills things.
I think I need to use the fakeraid driver. It keeps popping up anyway with weird error messages. For example, when there is no raid set up, I keep getting something like this:
Error adding sda to set silxxxxxxx: Raid type 234 is not supported. (This is from memory, so probably not exactly correct).
I tried updating the bios, but I can't even seem to do that. It won't boot off the floppy, and when I boot off any DOS-based cdrom, I can't see the floppy. (I have an external USB floppy; the internal one doesn't seem to work.)
Russ
1) Blow away the RAID array in the bios of the raid card.
2) Reinstall and make sure you have /boot on its own raid1'ed partitions under the 1024 cylinder limit, since you have an old box.
3) If the motherboard bios doesn't give you an option to boot off the board's disk controller, then make yourself a grub floppy to handle the kernel and initrd image loading. This is just a grub floppy: no kernel, no initrd image will be stored on the floppy. Its sole purpose is to provide you a way to load the kernel and its initrd image from disk.
You will never be able to boot off a fakeraid from a normal installation procedure. If you are dead set on doing that, you have to get the dmraid driver loaded during the installation process, and the grub installation must be done to the dm device. Another thing: the dmraid raid1 driver does not support failover when a disk dies; it only knows mirroring. Have fun.
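Making a bare grub floppy is described in the grub manual; roughly like this (the stage1/stage2 paths vary by distro, and the kernel, initrd, and root device names below are placeholders):

dd if=/boot/grub/stage1 of=/dev/fd0 bs=512 count=1
dd if=/boot/grub/stage2 of=/dev/fd0 bs=512 seek=1

Then at the grub> prompt on the floppy you point it at the disk:

grub> root (hd0,0)
grub> kernel /vmlinuz-<version> ro root=/dev/md2
grub> initrd /initrd-<version>.img
grub> boot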
Can you add each drive as a separate stripe? I.e., 4 drives, 4 arrays, 1 drive per array. I had to do that with an old Promise card I used as a controller.
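In that setup the card just exposes four single-drive arrays as plain disks, and Linux software RAID is built across them. A rough sketch, assuming the four single-drive sets show up as sda through sdd and each carries one RAID partition (adjust names to your system):

  # partition each drive identically, partition type fd (Linux raid autodetect),
  # then build the md array across the four single-drive "arrays"
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  cat /proc/mdstat                      # watch the initial resync

The card's BIOS is still what the machine boots from, but it never sees the md array; redundancy is handled entirely by md.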
Scott Silva wrote:
Can you add each drive as a separate stripe? I.e., 4 drives, 4 arrays, 1 drive per array. I had to do that with an old Promise card I used as a controller.
William,
Thanks for the suggestion. I actually thought of the same thing this morning, and added the first drive to its own (concatenated) array. That seemed to work and allowed it to boot. I still can't get raid10 to work, but I'll start a new thread on that.
Russ
Ruslan Sivak spake the following on 5/7/2007 9:49 AM:
Thanks for the suggestion. I actually thought of the same thing this morning, and added the first drive to its own (concatenated) array. That seemed to work and allowed it to boot. I still can't get raid10 to work, but I'll start a new thread on that.
I don't think you will get raid 10 out of the anaconda installer. I don't think it is supported yet. So even if the raid 10 driver is in the installer's initrd, you probably will not get a raid 10 from anaconda.
Scott Silva wrote:
I don't think you will get raid 10 out of the anaconda installer. I don't think it is supported yet. So even if the raid 10 driver is in the installer's initrd, you probably will not get a raid 10 from anaconda.
This is what I've found as well. Would the solution be to make a raid5 (or raid6) partition of about 10GB for /, and then, once booted, set up the raid10 partition manually?
How would I go about auto-mounting it as /data or something like that?
Russ
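A rough sketch of that approach, assuming the installer left a spare partition on each disk for the data array (sda3 and friends are made-up names here):

  # build the raid10 array from the leftover partitions on all four disks
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
  mkfs.ext3 /dev/md1
  mkdir -p /data

  # record the array so it is assembled at boot
  mdadm --detail --scan >> /etc/mdadm.conf

  # mount it automatically via /etc/fstab
  echo '/dev/md1  /data  ext3  defaults  1 2' >> /etc/fstab
  mount /data

With the array listed in /etc/mdadm.conf and an fstab entry, it should come up as /data on every boot without anaconda ever knowing about it.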
I don't think you will get raid 10 out of the anaconda installer. I don't think it is supported yet. So even if the raid 10 driver is in the installer's initrd, you probably will not get a raid 10 from anaconda.
This is absolutely weird...I got raid10 working a long time ago both without disk druid support (RH9 ...) and with disk druid support (FC2).
Feizhou wrote:
This is absolutely weird...I got raid10 working a long time ago both without disk druid support (RH9 ...) and with disk druid support (FC2).
I do not mean root on raid10 though.
Would letting it cool off fix things, or should I call in for warranty?
Array creation creates a lot of activity for the drives, and it could cause them to overheat. Or it could cause power fluctuations if the PSU isn't powerful enough, but I'd expect PSU problems when booting, not when writing/reading.
I have an older system where I have to keep DMA disabled for my drives or the system locks up when cron starts updatedb or someone copies a large file; hence my suspicion that your drives are also overheating.
About calling in for the warranty, I'd first try hddtemp (http://www.guzu.net/linux/hddtemp.php), and if the temperature does rise, then add some more fans. :)
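For the record, checking temperatures with hddtemp is a one-liner once it is installed (drive names assumed):

  # query each SATA drive; needs root and a drive that exposes a temperature attribute via SMART
  hddtemp /dev/sda /dev/sdb /dev/sdc /dev/sdd

If hddtemp doesn't know the drive, smartctl -a /dev/sda from smartmontools usually still shows the raw temperature attribute.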
I tried to get hddtemp, but alas, no compiler available in rescue mode. I will try Knoppix.
I just ran Knoppix and ran hddtemp, and it says that the drive is not in the DB, but it is reporting a temperature of 36°C (about 97°F) for that drive, which is within the normal operating range of 32°F to 140°F (0°C to 60°C).
Russ
On Tue, May 01, 2007 at 04:29:46PM -0400, Ruslan Sivak wrote:
I just ran Knoppix and ran hddtemp, and it says that the drive is not in the DB, but it is reporting a temperature of 36°C (about 97°F) for that drive, which is within the normal operating range of 32°F to 140°F (0°C to 60°C).
Yes, but my question is whether it will remain inside normal parameters under intensive use, like the building of the array.
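A simple way to answer that is to log the temperature while the array is resyncing; a sketch, with device names assumed:

  # log drive temperatures once a minute while the rebuild is running
  while grep -q resync /proc/mdstat; do
      date >> /var/log/hddtemp.log
      hddtemp /dev/sd[a-d] >> /var/log/hddtemp.log
      sleep 60
  done

If the logged values climb well past the 40s (°C) during the resync, cooling is the likely culprit for the dropped drives.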