http://www.highpoint-tech.com/datasheets/RR2220%20datasheet_050430.pdf Anyone know if this is any good under CentOS 4.3? I see there is a current RHEL 3 and 4 driver. Would that driver survive kernel upgrades? I'm guessing it would, being modular.
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
Anyone using this - http://www.promise.com/product/product_detail_eng.asp?segment=RAID%20HBAs&am... Promise has never given me any joy under Linux.
cheers
On Tue, 6 Jun 2006 at 12:28pm, Tony Wicks wrote
http://www.highpoint-tech.com/datasheets/RR2220%20datasheet_050430.pdf Anyone know if this is any good under Centos 4.3 ? I see there is a current RHEL 3 and 4 driver. Would that driver survive kernel upgrades, I'm guessing it would being modular ?
Is the driver in the kernel? That's the real test for me. Relying on a vendor for driver updates for new kernels (no, you can't use a kernel module compiled against kernel 2.4.9-31 in kernel 2.4.9-32) is *never* a good place to be.
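A quick sanity check before trusting a vendor module across an update (just a sketch; the module path and name below are placeholders, not the real HighPoint driver) is to compare the module's vermagic with the running kernel:

modinfo /lib/modules/$(uname -r)/extra/example_raid.ko | grep vermagic   # placeholder path/name
uname -r

If the two strings disagree, the module simply won't load on the new kernel.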
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
I disagree, but then there's always the pointy-clickiness of 3DM2.
Tony Wicks wrote:
http://www.highpoint-tech.com/datasheets/RR2220%20datasheet_050430.pdf Anyone know if this is any good under Centos 4.3 ? I see there is a current RHEL 3 and 4 driver. Would that driver survive kernel upgrades, I'm guessing it would being modular ?
Forget these proprietary software raid drivers. Just use the kernel's software raid capabilities. There has yet to appear a company making bios-on-a-chip raid cards that provides adequate, or even minimal, Linux driver support.
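For what it's worth, a minimal md RAID1 setup with the in-kernel tools looks roughly like this (device and partition names are only examples):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat                          # watch the initial resync
mdadm --detail --scan >> /etc/mdadm.conf  # record the array so it assembles at boot

No vendor driver to chase, and the array is readable on any controller the kernel supports.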
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
If you don't like hardware raid then there is nothing else for you.
Anyone using this - http://www.promise.com/product/product_detail_eng.asp?segment=RAID%20HBAs&am...
Promise has never given me any joy under Linux.
Nor has any other bios on a chip raid card manufacturer.
Yes, Promise is supposed to have a few actual hardware raid cards...
If you don't like hardware raid then there is nothing else for you.
I love hardware raid, just can't afford it on every server.
Anyone using this - http://www.promise.com/product/product_detail_eng.asp?segment=RAID%20HBAs&am...
Promise has never given me any joy under Linux.
Nor has any other bios on a chip raid card manufacturer.
Yes, Promise is supposed to have a few actual hardware raid cards...
Does someone have an up-to-date list of actual hardware raid cards? I know that as a general rule cheap=fake raid, but there must be a better guide than that. Hell, I've spent good money on Adaptec SCSI raid cards only to find they don't have real support. In fact, Adaptec seems to have very little real Linux support nowadays. My "Dell" megaraids seem to work well, but I can't seem to get anything useful out of the supposed CLI interface.
Tony Wicks wrote:
If you don't like hardware raid then there is nothing else for you.
I love hardware raid, just can't afford it on every server.
:D ... I just find it odd that people would look for fakeraid cards and actually use their drivers when there is clearly no attempt at support by their manufacturers. It is not as if Linux's raid support is shabby now, is it?
:D ... I just find it odd that people would look for fakeraid cards and actually use their drivers when there is clearly no attempt at support by their manufacturers. It is not as if Linux's raid support is shabby now is it?
Manufacturers don't help by trying to make out that they "support" Linux when really what they support is a totally unusable sham.
On 06/06/06, Tony Wicks tony@prophecy.net.nz wrote:
little real linux support nowdays. My "Dell" megaraids seem to work well, but I can't seem to get anything useful out of the supposed CLI interface.
Is this the "afacli" utility for AACRAID based controllers? If so the attached notes might help.
Will.
2006/6/6, Feizhou feizhou@graffiti.net:
Forget this proprietary software raid drivers. Just use the kernel software raid capabilities. There has yet to appear a company that manufactures bios on a chip raid cards that provide adequate or even minimal linux driver support.
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
If you don't like hardware raid then there is nothing else for you.
Check out Areca. The driver is in tree (written by Areca), and it looks like they are actively supporting Linux.
There was a nice performance comparison between Linux-compatible SATA raid cards and Areca performed really well. If my memory serves me well, it was significantly faster than 3ware. Can't find the URL now :( so I have no arguments to support it now, but Areca is certainly worth a try.
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the raid breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die, the other drive wasn't being mirrored properly (or wouldn't boot even though we manually sync'd the bootloaders as suggested).
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you. I'm not sure why people are so ready to suggest software raid when the fact is it's pretty unreliable.
-Drew
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Lazy Sent: Thursday, June 08, 2006 2:09 PM To: CentOS mailing list Subject: Re: [CentOS] Raid Cards
2006/6/6, Feizhou feizhou@graffiti.net:
Forget this proprietary software raid drivers. Just use the kernel software raid capabilities. There has yet to appear a company that manufactures bios on a chip raid cards that provide adequate or even minimal linux driver support.
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
If you don't like hardware raid then there is nothing else for you.
Check out areca. The driver is in tree (writen by areca). And it looks like they are actively supporting linux.
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
On Thu, 2006-06-08 at 14:15 -0400, Drew Weaver wrote:
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the raid breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die the other drive wasn't being mirrored properly (or wouldn't boot even though we manually sync'd the bootloaders as suggested.).
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you, I'm not sure why people are so ready to suggest software raid when the fact is its pretty unreliable.
-Drew
Was that a CentOS-4 install (or even a 2.6 kernel)?
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Lazy Sent: Thursday, June 08, 2006 2:09 PM To: CentOS mailing list Subject: Re: [CentOS] Raid Cards
2006/6/6, Feizhou feizhou@graffiti.net:
Forget this proprietary software raid drivers. Just use the kernel software raid capabilities. There has yet to appear a company that manufactures bios on a chip raid cards that provide adequate or even minimal linux driver support.
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
If you don't like hardware raid then there is nothing else for you.
Check out areca. The driver is in tree (writen by areca). And it looks like they are actively supporting linux.
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
Drew Weaver spake the following on 6/8/2006 11:15 AM:
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the raid breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die the other drive wasn't being mirrored properly (or wouldn't boot even though we manually sync'd the bootloaders as suggested.).
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you, I'm not sure why people are so ready to suggest software raid when the fact is its pretty unreliable.
The only time I had a real software raid problem, it was self-inflicted. I tried to use all the channels on an ide card. Using desktop drives instead of enterprise drives can also jinx it.
Scott Silva wrote:
Drew Weaver spake the following on 6/8/2006 11:15 AM:
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the raid breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die the other drive wasn't being mirrored properly (or wouldn't boot even though we manually sync'd the bootloaders as suggested.).
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you, I'm not sure why people are so ready to suggest software raid when the fact is its pretty unreliable.
The only time I had a real software raid problem, it was self-inflicted. I tried to use all the channels on an ide card. Using desktop drives instead of enterprise drives can also jinx it.
I was using software RAID 0/5 back in 97-99-ish on extremely busy web servers and never lost any data or suffered any failures. I've used it on and off since then and have never lost any data whatsoever as a result of a software RAID failure. Nowadays, I don't really bother with it because 3Ware cards are so cheap, but software RAID has been stable under Linux for MANY years. If Drew is having those kinds of problems, he's either having wetware issues or is using dodgy/unsupported hardware.
Cheers,
I was using software RAID 0/5 back in 97-99-ish on extremely busy web servers and never lost any data or suffered any failures. I've used it on and off since then and have never lost any data whatsoever as a result of a software RAID failure. Nowadays, I don't really bother with
it because 3Ware cards are so cheap, but software RAID has been stable under Linux for MANY years. If Drew is having those kinds of problems, he's either having wetware issues or is using dodgy/unsupported hardware.
Cheers, ---
Dodgy unsupported hardware like plain vanilla Intel GTP motherboards and Seagate hard disks? Lol ;-) I'm not entirely certain either how you can have a 'wetware' issue when the installer sets up raid for you, and then it fails for no reason; but you could be right. It doesn't matter now because we're using Dell 1850s and the PERC sata cards work flawlessly for us. I was just giving my impression on the problems we've had in our datacenter with over 1200 servers.
Thanks!
-Drew
Chris Mauritz wrote on Thu, 08 Jun 2006 16:42:43 -0400:
3Ware cards are so cheap
I wouldn't call them "cheap". At least here in Germany the "cheapest" 3Ware ATA I can find is an Escalade 7006 at EUR 114,--. That price is ok, but not "cheap" in my eyes. The cheapest SATA I can find is well over EUR 300,--. So the controller alone costs more than the 4 disks I can attach to it. How much do you pay in the US for 3Ware?
Kai
On Fri, Jun 09, 2006 at 03:31:17PM +0200, Kai Schaetzl enlightened us:
Chris Mauritz wrote on Thu, 08 Jun 2006 16:42:43 -0400:
3Ware cards are so cheap
I wouldn't call them "cheap". At least here in Germany the "cheapest" 3Ware ATA I can find is an Escalade 7006 at EUR 114,--. That price is ok, but not "cheap" in my eyes. The cheapest SATA I can find is well over EUR 300,--. So the controller alone costs more than the 4 disks I can attach to it. How much do you pay in the US for 3Ware?
For the 8006-2LP I pay around $130 US. The 9550sx-4LP just cost me $335.45 and the 9550sx-8LP was $462.
Matt
For the 8006-2LP I pay around $130 US. The 9550sx-4LP just cost me $335.45 and the 9550sx-8LP was $462.
That is way cheaper than what I would have to pay!
For a 9550sx-8LP, I will have to fork over 600USD!
Is that what you can get online? I might just consider having the thing shipped international...
Geez.
On Tue, Jun 13, 2006 at 02:51:06PM +0800, Feizhou enlightened us:
For the 8006-2LP I pay around $130 US. The 9550sx-4LP just cost me $335.45 and the 9550sx-8LP was $462.
That is way cheaper than what I would have to pay!
For a 9550sx-8LP, I will have to fork over 600USD!
Is that what you can get online? I might just consider having the thing shipped international...
I get that from my local vendor, and that may include an educational discount. For example, another vendor I often use lists the 8LP at $609.99. My price is $491.98.
I think I should consider myself lucky!
Matt
I get that from my local vendor, and that may include an educational discount. For example, another vendor I often use lists the 8LP at $609.99. My price is $491.98.
I think I should consider myself lucky!
/me looks at Matt with envy.
The price I am quoting comes from the local distributor...and it ain't that much lower too from local resellers.
A whole hundred bucks difference. Sigh. So much for Hong Kong being a place for good deals.
Feizhou wrote:
I get that from my local vendor, and that may include an educational discount. For example, another vendor I often use lists the 8LP at $609.99. My price is $491.98. I think I should consider myself lucky!
/me looks at Matt with envy.
The price I am quoting comes from the local distributor...and it ain't that much lower too from local resellers.
A whole hundred bucks difference. Sigh. So much for Hong Kong being a place for good deals.
I think my wife would disagree. She would see the availability of Granville Road in Kowloon where she could buy unlimited supplies of dirt cheap designer bags and clothes as a good substitute. :D
How many cards did you need? I wonder if they'd be cheaper in Shenzhen? Might be worth the ferry ride if you bought a lot of them.
Cheers,
OT
Chris Mauritz wrote:
Feizhou wrote:
I get that from my local vendor, and that may include an educational discount. For example, another vendor I often use lists the 8LP at $609.99. My price is $491.98. I think I should consider myself lucky!
/me looks at Matt with envy.
The price I am quoting comes from the local distributor...and it ain't that much lower too from local resellers.
A whole hundred bucks difference. Sigh. So much for Hong Kong being a place for good deals.
I think my wife would disagree. She would see the availability of Granville Road in Kowloon where she could buy unlimited supplies of dirt cheap designer bags and clothes as a good substitute. :D
Ah, I should have specified the illusion of electronic goods being cheaper in this duty free haven :D
How many cards did you need? I wonder if they'd be cheaper in Shenzhen? Might be worth the ferry ride if you bought a lot of them.
Heh, the pricing I got was from the local distributor, who is also the local distributor in Shenzhen.
Even the proximity to Taiwan didn't make much of a difference with Areca cards... they are priced pretty much the same as the 3ware cards. I guess that means the vote is for 3ware :P given the maturity of their drivers and the known stability of their cards :) whereas Areca is pretty much unexplored territory for me.
None at the moment. I was just wondering what differences, if any, there would be if I bought one card here.
Kai Schaetzl wrote:
Chris Mauritz wrote on Thu, 08 Jun 2006 16:42:43 -0400:
3Ware cards are so cheap
I wouldn't call them "cheap". At least here in Germany the "cheapest" 3Ware ATA I can find is an Escalade 7006 at EUR 114,--. That price is ok, but not "cheap" in my eyes. The cheapest SATA I can find is well over EUR 300,--. So the controller alone costs more than the 4 disks I can attach to it. How much do you pay in the US for 3Ware?
I guess cheap is in the eye of the beholder. :-) A 7006 here costs about US$110. When I say cheap, I guess I am comparing it to the "old days" when you had to buy SCSI RAID controllers to get a real hardware RAID device and then got the double penalty of then paying several times the amount for storage compared to IDE disks.
Cheers,
Chris Mauritz wrote on Fri, 09 Jun 2006 10:28:29 -0400:
When I say cheap, I guess I am comparing it to the "old days" when you had to buy SCSI RAID controllers to get a real hardware RAID device and then got the double penalty of then paying several times the amount for storage compared to IDE disks.
I see. Thank you both for the price info. Looks like the price is "translated" 1:1 from $ to EUR, so they cost about 50% more or so.
Kai
On Thu, 2006-06-08 at 14:15 -0400, Drew Weaver wrote:
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the raid breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die the other drive wasn't being mirrored properly (or wouldn't boot even though we manually sync'd the bootloaders as suggested.).
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you, I'm not sure why people are so ready to suggest software raid when the fact is its pretty unreliable.
I've had software raid work very well for years, but mostly on scsi controllers. I've even hotswapped replacement drives and rebuilt without shutting down. I like it because it's not unreliable on my hardware and since I use RAID1 I can recover data from any single disk by connecting it to any compatible controller. Or if a motherboard dies I can shove the drives in a spare chassis without worrying about whether it has exactly the same controller and raid configuration.
If you have a problem booting, you can boot the 1st install CD in rescue mode and fix the grub setup - and 'cat /proc/mdstat' will always show you if the mirrors are active.
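For anyone who hasn't looked at it, healthy RAID1 output from /proc/mdstat looks roughly like this (sizes and device names made up); a degraded mirror shows [2/1] and [U_] instead:

Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      10485696 blocks [2/2] [UU]

unused devices: <none>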
I have heard of problems on SATA drives where errors aren't passed up to the md layer correctly but I don't have any experience with that myself. I'd probably go with a 3ware controller for SATA.
Les Mikesell wrote:
If you have a problem booting, you can boot the 1st install CD in rescue mode and fix the grub setup - and 'cat /proc/mdstat' will always show you if the mirrors are active.
I'm using software raid1 /, /boot, and swap partitions on CentOS 4.3. I've heard tell that the installer does not get the grub install/config quite right. i.e. if the first drive fails the system won't reboot.
Can someone tell me for sure what I need to do? Is it just something like grub-install hd(1,0)?
Thanks, Steve
On Thu, 2006-06-08 at 14:15 -0500, Steve wrote:
If you have a problem booting, you can boot the 1st install CD in rescue mode and fix the grub setup - and 'cat /proc/mdstat' will always show you if the mirrors are active.
I'm using software raid1 /, /boot, and swap partitions on CentOS 4.3. I've heard tell that the installer does not get the grub install/config quite right. i.e. if the first drive fails the system won't reboot.
Can someone tell me for sure what ai need to do? Is it just something like grub-install hd(1,0)?
I haven't installed on raid from the latest version so it may be fixed now, but generally mine haven't even installed grub right on the first drive of the set. But it is fairly easy to fix by hand. You can either ctrl-alt-F2 to a shell prompt at the end of an install before the reboot, or boot from the install CD with 'linux rescue' and chroot to /mnt/sysimage after it mounts the drives for you. Then:
grub
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
quit
If you are in rescue mode, exit twice to reboot.
That assumes scsi disks and /boot as the first partition of each of the 1st 2 drives.
Les Mikesell wrote:
grub
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
quit
Thanks! My machines use SATA but the names should be the same as SCSI. I've been installing CentOS 4 since 4.1 and it has at least gotten the first drive correct.
The bootloader has always been a weak spot for RH installs. I remember back in the 4.x and 5.x days I always just assumed that it wouldn't get it right and considered a rescue boot and /usr/sbin/lilo to be just another part of the installation! ;-)
They've gotten a lot better, though.
-Steve
Just as an FYI, I just did a hexdump of both /dev/sda and /dev/sdb on a machine I originally installed as 4.2 (and without manually grubbing) and both drives look like the first sector has something boot related on them, with text like:
Loading PBR for descriptor 1....done....failed....Bad boot flag... active partitions...PBR load error...Bad PBR signature........
So I guess the installer got it right.
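(If anyone wants to repeat the check, something like this dumps the first sector for inspection; the device name is just an example:

dd if=/dev/sda bs=512 count=1 2>/dev/null | hexdump -C | less

Any readable boot loader strings show up in the ASCII column on the right.)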
-Steve
Just as an FYI, I just did a hexdump of both /dev/sda and /dev/sdb on a machine I originally installed as 4.2 (and without manually grubbing) and both drives look like the first sector has something boot related on them, with text like:
Loading PBR for descriptor 1....done....failed....Bad boot flag... active partitions...PBR load error...Bad PBR signature........
So I guess the installer got it right.
-Steve
In my experience grub is not being correctly installed when the following condition exists:
There is no floppy drive and the floppy controller was not disabled in the bios of the system.
If you watch the F4 screen near the end of the install there will be an error about /dev/fd0 and grub doesn't install.
I have not documented this but it has happened to me enough that I sometimes just put a floppy on for the install so I don't have to play games before/after rebooting the shiny new system.
Rik
wrote on Thu, 8 Jun 2006 17:07:12 -0600 (MDT):
If you watch the F4 screen near the end of the install there will be an error about /dev/fd0 and grub doesn't install.
You will get an error about not being able to write to /dev/fd0 if you run the grub shell. It's just so fast that it's almost impossible to read. But this error is nothing serious. I may be wrong, but I think it's the same one you're describing on the setup screen.
Kai
The bootloader has always been a weak spot for RH installs. I remember back in the 4.x and 5.x days I always just assumed that it wouldn't get it right and considered a rescue boot and /usr/sbin/lilo to be just another part of the installaion! ;-)
grub is done properly in FC5, so we can expect the same for RHEL5 and therefore CentOS 5.
Feizhou wrote:
The bootloader has always been a weak spot for RH installs. I remember back in the 4.x and 5.x days I always just assumed that it wouldn't get it right and considered a rescue boot and /usr/sbin/lilo to be just another part of the installaion! ;-)
grub is done properly in FC5 so we can expect the same for RHEL5 and therefore Centos 5.
I just looked at the boot sector on the 3 machines that I have running software raid1. It looks like the installer got it right on all 3. IIRC, one was installed as CentOS 4.1, another as 4.2, and another as 4.3.
Hard to believe the upstream provider of an Enterprise Class OS could allow raid bootloader configuration to not work on a very large percentage of machines for very long without there being hell to pay, and big headlines in the trade journals.
Just to get my comment in on linux RAID reliability, like others who have commented, I was using Linux software raid back in 1999ish (RH 6.2) for both boot and root. It worked very well. My only complaint was that it wasn't supported by the installer and so it required a fair amount of jiggery-pokery to get it configured. I then started ordering the PERC controllers with the Dell servers we buy. I've had more trouble with them actually. You just never know when support for your (not so old) controller is going to get silently dropped. (To be fair, my negative experiences along these lines have been with FC and those guys don't seem to care what they break or when it gets fixed.)
Since I've switched to CentOS, I've been using software raid1 and everything seems to be working as advertised, including the support for /boot and / raid in the installer.
-Steve
I just looked at the boot sector on the 3 machines that I have running software raid1. It looks like the installer got it right on all 3. IIRC, one was installed as CentOS 4.1, another as 4.2, and another as 4.3.
The installer will install grub, just not properly. If the first disk dies, when the box comes up, the grub stage 1 on what was the second disk will load, but it will not look for stage 2 on the current disk and will therefore fail.
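The usual hand fix (a sketch, assuming SCSI naming and /boot on the first partition, same as earlier in the thread) is to install stage 1 on the second disk while mapping it as (hd0), so it can boot on its own once it becomes the first BIOS disk:

grub
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit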
Hard to believe the upstream provider of an Enterprise Class OS could allow raid bootloader configuration to not work on a very large percentage of machines for very long without there being hell to pay, and big headlines in the trade journals.
Heh.
Since I've switched to CentOS, I've been using software raid1 and everything seems to be working as advertised, including the support for /boot and / raid in the the installer.
Go get a spare box, install, pull the first disk and see what happens. Then repeat with FC5.
Feizhou wrote:
Go get a spare box, install, pull the first disk and see what happens. Then repeat with FC5.
As it happens, I have a newly installed CentOS 4.3 RAID1 server in the office that has not been shipped out yet. I'll give it a try and report back.
If it still doesn't work out of the box after 3 update releases it *should* have big headlines in the trade journals.
(BTW, I understand and agree completely with CentOS' policy of 100% bug for bug compatibility with upstream so this should not be taken as a criticism of CentOS.)
-Steve
On Fri, 2006-06-09 at 22:58 +0800, Feizhou wrote:
I just looked at the boot sector on the 3 machines that I have running software raid1. It looks like the installer got it right on all 3. IIRC, one was installed as CentOS 4.1, another as 4.2, and another as 4.3.
The installer will install grub. Just not properly. If the first disk dies, when the box comes up, the previous second disk grub stage 1 will load but it will not look for the stage 2 on the current disk and therefore fail.
That will depend on what your bios does when the first drive fails and whether you've had to move it to make it boot at all. On the scsi systems where I've used it, the 2nd drive will boot and everything on the cable shifts up if the first drive fails or is removed. IDE systems are different - and normally won't boot if a failed drive is still connected.
Les Mikesell wrote:
On Fri, 2006-06-09 at 22:58 +0800, Feizhou wrote:
I just looked at the boot sector on the 3 machines that I have running software raid1. It looks like the installer got it right on all 3. IIRC, one was installed as CentOS 4.1, another as 4.2, and another as 4.3.
The installer will install grub. Just not properly. If the first disk dies, when the box comes up, the previous second disk grub stage 1 will load but it will not look for the stage 2 on the current disk and therefore fail.
That will depend on what your bios does when the first drive fails and whether you've had to move it to make it boot at all. On the scsi systems where I've used it, the 2nd drive will boot and everything on the cable shifts up if the first drive fails or is removed. IDE systems are different - and normally won't boot if a failed drive is still connected.
Hmm, does it come back up on a scsi based system?
I cannot remember whether the failure is that the second disk's grub stage 1, once loaded, tries to load stage 2 from the 'second' drive (which will not work), or whether stage 2 does load but then looks on the wrong 'disk' for its config file.
So either way, it should also fail on a scsi system...
On Fri, 2006-06-09 at 23:19 +0800, Feizhou wrote:
I just looked at the boot sector on the 3 machines that I have running software raid1. It looks like the installer got it right on all 3. IIRC, one was installed as CentOS 4.1, another as 4.2, and another as 4.3.
The installer will install grub. Just not properly. If the first disk dies, when the box comes up, the previous second disk grub stage 1 will load but it will not look for the stage 2 on the current disk and therefore fail.
That will depend on what your bios does when the first drive fails and whether you've had to move it to make it boot at all. On the scsi systems where I've used it, the 2nd drive will boot and everything on the cable shifts up if the first drive fails or is removed. IDE systems are different - and normally won't boot if a failed drive is still connected.
Hmm, does it come back up on a scsi based system?
Mine does but it may depend on the bios.
I cannot remember but if the second disk's grub stage 1 does get loaded, it tries to load stage 2 from the 'second' drive which will not work and the reason for the failure or whether it does load stage2 but stage2 looks in the wrong 'disk' for its config file.
So either way, it should also fail on a scsi system...
These seem to shift the first working drive into the first bios drive as well as /dev/sda. The same disk will boot whether I put it in the 1st or 2nd (SCA hot-swap) slot. I have noticed that some newer machines must be configured specifically for the slot to use when booting and are even unhappy if you swap in a different drive type in the same slot without reconfiguring the bios boot selection so this may vary a lot among machines.
I did the grub setup by hand but have noticed that there are different sets of instructions around that apply to IDE drives. In any case, I think it is a good idea to keep a rescue CD handy and understand how to re-install grub.
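For the record, the rescue path is roughly this on a RHEL4-era install CD (the grub commands themselves are the same device/root/setup sequence quoted earlier):

linux rescue                 # typed at the install CD "boot:" prompt
chroot /mnt/sysimage
grub                         # run the device/root/setup commands here
exit                         # leave the chroot
exit                         # leave rescue mode and reboot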
On Fri, 2006-06-09 at 11:09 -0500, Les Mikesell wrote:
On Fri, 2006-06-09 at 23:19 +0800, Feizhou wrote:
<snip>
These seem to shift the first working drive into the first bios drive as well as /dev/sda. The same disk will boot whether I put it in the 1st or 2nd (SCA hot-swap) slot. I have noticed that some newer machines must be configured specifically for the slot to use when booting and are even unhappy if you swap in a different drive type in the same slot without reconfiguring the bios boot selection so this may vary a lot among machines.
IIRC, this was std. behavior for Adaptec HAs (and others?) if BIOS was enabled. Always liked that because the early 4GB drives were very unreliable and I could keep booting and running due to good backup, planning and drive relocation by the HA.
<snip sig stuff>
On Fri, 2006-06-09 at 10:10 -0500, Les Mikesell wrote:
On Fri, 2006-06-09 at 22:58 +0800, Feizhou wrote:
I just looked at the boot sector on the 3 machines that I have running software raid1. It looks like the installer got it right on all 3. IIRC, one was installed as CentOS 4.1, another as 4.2, and another as 4.3.
The installer will install grub. Just not properly. If the first disk dies, when the box comes up, the previous second disk grub stage 1 will load but it will not look for the stage 2 on the current disk and therefore fail.
That will depend on what your bios does when the first drive fails and whether you've had to move it to make it boot at all. On the scsi systems where I've used it, the 2nd drive will boot and everything on the cable shifts up if the first drive fails or is removed. IDE systems are different - and normally won't boot if a failed drive is still connected.
If the BIOS is able to get around the failed drive, HD device IDs are shifted: 0x81->0x80, 0x82->0x82, ... Ditto if you tell BIOS to boot from D:, E:, ... and IDs "wrap" if needed: 0x83->0x80, 0x80->0x81...
And if from other device(s), somewhat similar, but different, adjustments *may* occur, e.g. El Torito Spec Boot CD (Phoenix Bios/IBM circa 1995?).
On Fri, 2006-06-09 at 11:22 -0400, William L. Maltby wrote:
On Fri, 2006-06-09 at 10:10 -0500, Les Mikesell wrote:
On Fri, 2006-06-09 at 22:58 +0800, Feizhou wrote:
<snip>
If the BIOS is able to get around the failed drive, HD device IDs are shifted: 0x81->0x80, 0x82->0x82, ... Ditto if you tell BIOS to boot from
0x81
Oops.
<snip>
On Fri, 2006-06-09 at 11:22 -0400, William L. Maltby wrote:
On Fri, 2006-06-09 at 10:10 -0500, Les Mikesell wrote:
On Fri, 2006-06-09 at 22:58 +0800, Feizhou wrote:
<snip>
If the BIOS is able to get around the failed drive, HD device IDs are shifted: 0x81->0x80, 0x82->0x82, ...
I mis-spoke here. I was thinking of when I told BIOS to boot from another drive and have no knowledge for sure that any or all BIOS will automatically assign drives when a failure occurs.
Sorry.
<snip sig stuff>
On Fri, 2006-06-09 at 12:48 -0400, William L. Maltby wrote:
On Fri, 2006-06-09 at 11:22 -0400, William L. Maltby wrote:
On Fri, 2006-06-09 at 10:10 -0500, Les Mikesell wrote:
On Fri, 2006-06-09 at 22:58 +0800, Feizhou wrote:
<snip>
If the BIOS is able to get around the failed drive, HD device IDs are shifted: 0x81->0x80, 0x82->0x82, ...
I mis-spoke here. I was thinking of when I told BIOS to boot from another drive and have no knowledge for sure that any or all BIOS will automatically assign drives when a failure occurs.
If your BIOS is set to have a boot failover, then drives will be assigned.
<snip>
Les Mikesell wrote on Thu, 08 Jun 2006 14:28:43 -0500:
You can either ctl-alt-F2 to a shell prompt at the end of an install before the reboot or boot from the install cd with 'linux rescue' and chroot to /mnt/sysinstall after it mounts the drives for you.
For software IDE RAID you can do it right from the booted system after the install is done. You don't need the rescue CD.
Kai
Drew Weaver wrote:
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the raid breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die the other drive wasn't being mirrored properly (or wouldn't boot even though we manually sync'd the bootloaders as suggested.).
Dunno, I have a box that has two of those IBM IDE Deathstar drives in RAID 1 mode and I still DO NOT have raid problems, even though they have started down the road of self-destruction. Oh, BTW, it is a RH 7.1 install.
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you, I'm not sure why people are so ready to suggest software raid when the fact is its pretty unreliable.
Or maybe research the entire toolchain used by Linux software raid. Most IDE controllers will not tolerate a faulty device on the cable, so my disks are the only devices on the entire channel. No problemo.
In contrast, on another box where I run CentOS 4, I had to put a dying drive together with a new disk. Later on, the controller started acting up because the drive went into its death throes, so I got problems with the new mirror and with booting up, where the controller would not recognize the drives.
Taking the faulty drive off the channel resolved things. The problem here was not with Linux software raid but with the controller and that is a hardware problem, not even a kernel driver problem.
Oh, proprietary software raid drivers from HP, Promise and whoever are something else. If you use those, please go after the manufacturer's drivers and don't blame Linux's software raid drivers.
On Thu, Jun 08, 2006 at 08:08:57PM +0200, Lazy wrote:
2006/6/6, Feizhou feizhou@graffiti.net:
Forget this proprietary software raid drivers. Just use the kernel software raid capabilities. There has yet to appear a company that manufactures bios on a chip raid cards that provide adequate or even minimal linux driver support.
(please don't tell me that 3ware is the only make to use, tw_cli is not exactly user friendly).
If you don't like hardware raid then there is nothing else for you.
Check out areca. The driver is in tree (writen by areca). And it looks like they are actively supporting linux.
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
I don't think the areca drivers are in the kernel tree yet - I believe they're slated for inclusion in possibly 2.6.18. Certainly the CentOS4 kernels don't include areca support.
They are pretty decent cards, but I wouldn't use them for boot disks until they actually are in the stock kernel. For RHEL5/CentOS5 they might be good.
Cheers, Gavin
Check out areca. The driver is in tree (writen by areca). And it looks like they are actively supporting linux.
They have drivers for RHEL 4 update 3 among many others for their PCIe RAID cards that I looked at.
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
Maybe I will check them out. I've been looking for some PCIe solution for some time and they have great Linux support from the looks of it.
* Feizhou feizhou@graffiti.net [2006-06-09 04:44:00]:
Check out areca. The driver is in tree (writen by areca). And it looks like they are actively supporting linux.
They have drivers for RHEL 4 update 3 among many others for their PCIe RAID cards that I looked at.
I've been producing Areca/CentOS driver RPM packages since early 4.0 days:
http://www.bodgit-n-scarper.com/code.html#centos
Might be a little easier to keep updated with yum.
As for booting, provided the kernel driver is in the initrd for the kernel, it will work fine as a boot volume (the same as all of the other storage controllers). As long as you have the line:
alias scsi_hostadapter arcmsr
in /etc/modprobe.conf, the driver will get included by mkinitrd. My package will check for this on install.
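If you ever need to redo the initrd by hand after adding that line, something along these lines should do it on CentOS 4 (this only rebuilds the image for the currently running kernel):

grep arcmsr /etc/modprobe.conf || echo "alias scsi_hostadapter arcmsr" >> /etc/modprobe.conf
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)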
I need to upload the latest versions, but they Work For Me so far.
Matt
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
ftp://ftp.areca.com.tw/RaidCards/Documents/Performance/ARC1120_Single_Card_Performance.pdf
This is their own benchmark I think for one of their PCIe cards.
ftp://ftp.areca.com.tw/RaidCards/Documents/Hardware/HWCompatibilityList_011706.pdf
They appear to do their own testing so things look positive for Areca I guess.
I am not sure that they are software raid implementations... if you ask me they look more like hardware raid cards. The ARC12xx adapters all come with onboard memory: 128MB for the 4/8 port cards, and a SO-DIMM slot that supports up to 1GB for the 12/16/24 port cards, which can also be battery backed.
This looks more like an alternative to 3ware and not to HP/Promise bios on an adapter 'raid' cards.
2006/6/9, Feizhou feizhou@graffiti.net:
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
This looks more like an alternative to 3ware and not to HP/Promise bios on an adapter 'raid' cards.
Yes. I was giving an alternative to 3ware.
Lazy wrote:
2006/6/9, Feizhou feizhou@graffiti.net:
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
This looks more like an alternative to 3ware and not to HP/Promise bios on an adapter 'raid' cards.
Yes. I was giving an alternative to 3ware.
Just an update.
Areca is not quite at 3ware's level yet with regard to driver quality, according to what I've read in posts to LKML.
That is in the eyes of the kernel maintainers and developers, but the driver is very near inclusion in the mainline kernel after having been in the -mm tree for over a year.
Arecas are true hardware raid cards.
Feizhou wrote:
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
ftp://ftp.areca.com.tw/RaidCards/Documents/Performance/ARC1120_Single_Card_Performance.pdf
This is their own benchmark I think for one of their PCIe cards.
ftp://ftp.areca.com.tw/RaidCards/Documents/Hardware/HWCompatibilityList_011706.pdf
They appear to do their own testing so things look positive for Areca I guess.
I am not sure that they are software raid implementations...if you ask me they look more like hardware raid cards. The adaptors ARC12xx all come with onboard memory, 128M for 4/8 port cards and a SO-DIMM slot that supports up to 1G for 12/16/24 port cards, which can also be battery backed-up.
This looks more like an alternative to 3ware and not to HP/Promise bios on an adapter 'raid' cards.
Feizhou spake the following on 6/8/2006 9:03 PM:
There was some nice performance comparison betwean linux compatible sata raid cards and areca performed really well. If my memory serves me well it was significantly faster then 3ware. Can't find the url now :(. So i have no arguments to support it now. But certainly areca it's worth a try.
ftp://ftp.areca.com.tw/RaidCards/Documents/Performance/ARC1120_Single_Card_Performance.pdf
This is their own benchmark I think for one of their PCIe cards.
ftp://ftp.areca.com.tw/RaidCards/Documents/Hardware/HWCompatibilityList_011706.pdf
They appear to do their own testing so things look positive for Areca I guess.
I am not sure that they are software raid implementations...if you ask me they look more like hardware raid cards. The adaptors ARC12xx all come with onboard memory, 128M for 4/8 port cards and a SO-DIMM slot that supports up to 1G for 12/16/24 port cards, which can also be battery backed-up.
This looks more like an alternative to 3ware and not to HP/Promise bios on an adapter 'raid' cards.
I have a Promise SX6000 lying in a cabinet that is perfectly fine in Windows, but worthless in Linux. Surprisingly, FreeBSD supports it fine, it just won't boot from it. It isn't even heavy enough for a boat anchor!
I have a promise SX6000 laying in a cabinet that is perfectly fine in Windows, but worthless in linux. Suprisingly, FreeBSD supports it fine, it just won't boot from it. It isn't even heavy enough for a boat anchor!
Haha, Promise does not even show promise as useful junk now?