Hi,
I have a client that insists on going with software rather than hardware raid1 to save a few dollars.
Can Centos 4.2 do Boot and Root on software Raid1?
I've heard criticisms of Grub and bootable RAID. Also, if it is not set up to work out of the box, I'd like to try to talk them out of it.
I've been down that road before. If I go nonstandard, by the time I actually need to boot from the second drive in an emergency, I've forgotten the special procedure.
(Why can't businesses just spend a few extra dollars to do things right?!)
Thanks for any info.
-Steve
Just some additional info that I should have included in my original post.
This would be SATA raid using the onboard SATA interfaces on a Dell Poweredge SC1420 server. They call it a "Software RAID" controller, but I think that just means they give you 2 SATA channels and your OS can do the RAID part.
Quoting Steve Bergman steve@rueb.com:
Just some additional info that I should have included in my original post.
This would be SATA raid using the onboard SATA interfaces on a Dell Poweredge SC1420 server. They call it a "Software RAID" controller, but I think that just means they give you 2 SATA channels and your OS can do the RAID part.
Exactly. What you would do is disable RAID in the BIOS (the OS will see them as two separate drives anyhow), then configure software RAID-1 just as you would with "normal" ATA/SATA/SCSI/whatever disks.
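Roughly, the command-line version looks like this (just a sketch; /dev/sda and /dev/sdb are example names, and the installer can set the same thing up for you during install):

    # Create matching "Linux raid autodetect" (type fd) partitions on both
    # disks first, then build the mirror and watch the initial sync:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    cat /proc/mdstat
    mkfs.ext3 /dev/md0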
Thank you all for the responses. Just to put things in perspective, the last time I set up software RAID was on RH6.2, and boot/root RAID was very much a nonstandard thing to do. I never could get it to boot from the second drive, so if the first one went out I would have had to boot from a floppy drive that probably hadn't been used in years, using a special boot floppy that was just as old.
(Also, the issue of RAID always comes up in LILO vs. GRUB flamefests, and that had me nervous. ;-) )
Obviously things have changed a lot since RH6.2.
Good points as to the advantages of software RAID. This machine is fully certified by RedHat, but the CERC 6ch controller is uncertain, so software RAID seems the right choice.
Thanks Again, Steve Bergman
Steve Bergman steve@rueb.com wrote:
Good points as to the advantages of software RAID. This machine is fully certified by RedHat, but the CERC 6ch controller is uncertain,
The CERC 6ch uses the aacraid driver. Administrative tools are largely lacking, though.
It's also an old Intel i960-based design, so it's rather sluggish. At RAID-5 it's so-so; the i960 is a bit of a bottleneck with today's drives. But at RAID-0, 1 or 10, the i960 will be a massive bottleneck.
so software RAID seems the right choice.
The question is, how are you going to do software RAID? Are you going to use the AIC-7xxx SCSI channels on the CERC 6ch?
If that does bypass the sluggish i960, then it would probably be faster at RAID-5 and far, far faster at RAID-0, 1 or 10. But if you have to put the i960 in JBOD mode, the i960 still might be a bottleneck to the SCSI channels.
Which means you're not going to see any better performance, so you might want to just use the i960 hardware RAID.
[ SIDE NOTE: This is why I don't like generalized RAID discussions. How to do RAID is a question of what hardware you have -- for _both_ software or hardware. ]
Quoting Steve Bergman steve@rueb.com:
Hi,
I have a client that insists on going with software rather than hardware raid1 to save a few dollars.
Can Centos 4.2 do Boot and Root on software Raid1?
Yes on i386. No on ia64 (the partition holding the /boot directory can't be on RAID-1). Don't know about x86_64 (it should work).
I've heard criticisms of Grub and bootable RAID. Also, if it is not set up to work out of the box, I'd like to try to talk them out of it.
Yes, Anaconda had a long-standing bug of not handling /boot on RAID-1 properly when installing GRUB. With LILO it worked perfectly out of the box for a very long time. I don't know if the Anaconda/GRUB problem was fixed recently. Even if it wasn't, it is relatively trivial to adjust the GRUB configuration and install it on both drives.
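For example, something along these lines, assuming the mirrored disks are /dev/sda and /dev/sdb with /boot on the first partition of each (a sketch; adjust the device names for your system):

    # From the GRUB legacy shell, temporarily point (hd0) at the second
    # disk and install the boot code there as well:
    grub
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit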
I've been down that road before. If I go nonstandard, by the time I actually need to boot from the second drive in an emergency, I've forgotten the special procedure.
Depending on the type of failure, a special procedure might not be needed. Even if it is, it is relatively trivial (and I wouldn't really call it a special procedure) ;-)
On Thu, 2005-10-20 at 09:59, Steve Bergman wrote:
I have a client that insists on going with software rather than hardware raid1 to save a few dollars.
Software RAID has some advantages of its own.
Can Centos 4.2 do Boot and Root on software Raid1?
I've heard criticisms of Grub and bootable RAID. Also, if it is not set up to work out of the box, I'd like to try to talk them out of it.
It isn't installed on the second drive automatically, but the grub setup only has to be done once, manually.
I've been down that road before. If I go nonstandard, by the time I actually need to boot from the second drive in an emergency, I've forgotten the special procedure.
The nature of software RAID is such that you can pretend it isn't there and use a single drive like a single drive. Thus, in the worst case of a boot problem you would take whichever drive looked the most likely to work, connect it to a controller position that is configured to boot, and do what you would do with a single drive. That means if you had pre-installed grub on the second drive, it will just work. If you haven't, you can boot with the install CD in rescue mode, chroot where it tells you, and do a grub-install just like you would with any other drive.
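Roughly (a sketch; assuming the surviving disk ends up as /dev/sda):

    boot: linux rescue          # at the install CD boot prompt
    # let rescue mode find and mount the installed system, then:
    chroot /mnt/sysimage
    grub-install /dev/sda       # reinstall the bootloader on that disk
    exit                        # leave the chroot and reboot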
(Why can't businesses just spend a few extra dollars to do things right?!)
Often it is because for the extra money, you get no extra features except being locked into some particular vendor's product. With software RAID1 you can pull out any single disk and recover the data on any machine with a similar interface type. With a raid controller, if the PC or controller fails you'll have to have exactly the same model to ever access those drives again - and you may or may not have the tools to observe the status and 'smart' condition of the drives and to rebuild the mirrors online.
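With md, by contrast, the standard tools are always there. For example (a sketch, assuming an array at /dev/md0 with /dev/sda as a member disk; names are just examples):

    cat /proc/mdstat                                 # array state / resync progress
    mdadm --detail /dev/md0                          # per-array status
    smartctl -a /dev/sda                             # SMART data for a member disk
    mdadm --monitor --scan --daemonise --mail root   # mail on degraded arrays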
Les Mikesell lesmikesell@gmail.com wrote:
Often it is because for the extra money, you get no extra features except being locked into some particular vendor's product
As I have pointed out before, that depends on the vendor.
Please don't blanket all vendors as the same. Some vendors have 6+ years of proven inter-product hardware RAID volume compatibility with excellent Linux support.
With software RAID1 you can pull out any single disk and recover the data on any machine with a similar interface type.
Not always. SCSI controllers can vary in their format, and even ATA can be suspect. This case is rare, yes, but I have run into it -- especially with SCSI.
With a raid controller, if the PC or controller fails you'll have to have exactly the same model to ever access those drives again -
Once again, I will ask you to not blanket all vendors as such.
3Ware has maintained 6+ years of inter-model volume compatibility -- far more, far better and far longer than Linux's MD/LVM, which has gone through several significant changes. This is indisputable, despite the insistence of some in the MD/LVM community -- it's more about ignorance of 3Ware than usage (or improper usage of 3Ware cards for software RAID, instead of using their hardware RAID features).
and you may or may not have the tools to observe the status and 'smart' condition of the drives and to rebuild the mirrors online.
Any "quality" vendor has tools to not only rebuild the volume on-line, but the rebuild is done in hardware. And has many people have pointed out, the hardware can pick up where it left off regardless of any power, OS or other transient "incident."
As far as monitoring tools, yes, many vendors have their own. At the same time, nearly all vendors send standard syslog messages. A few are even integrating with smartd and mdadm, and most of the reason they did not prior is because of a lack of their standardization (e.g., mdadm).
Again, be careful with blanket statements. There are vendors with good track records and vendors with poor track records -- both in maintaining inter-model volume compatibility as well as Linux support.
-- Bryan
SIDE NOTE: It's clear to me that 3Ware's introduction of the new Escalade 9550 series with an embedded PowerPC signals that their legacy ASIC was not sufficient for DRAM and RAID-5 in the preceding Escalade 9500 series. I've said it before and I'll say it again, 3Ware Escalade 7000/8000 series products are best for RAID-0, 1 and 10, but I can't recommend the 3Ware Escalade 9000 series for RAID-5 yet (although this new Escalade 9550 looks very, very promising).
On Thu, 2005-10-20 at 12:09, Bryan J. Smith wrote:
Often it is because for the extra money, you get no extra features except being locked into some particular vendor's product
As I have pointed out before, that depends on the vendor.
Please don't blanket all vendors as the same. Some vendors have 6+ years of proven inter-product hardware RAID volume compatibility with excellent Linux support.
Even if they permit drives to move between different models it still means you have to have a spare compatible one handy if your primary box dies.
3Ware has maintained 6+ years of inter-model volume compatibility -- far more, far better and far longer than Linux's MD/LVM which has gone through several, significant changes. This is indisputable, despite the insistance of some in the MD/LVM community -- it's more about ignorance of 3Ware than usage (or improper usage of 3Ware cards for software RAID, instead of using their hardware RAID features).
You can mount one partition out of an md RAID1 set directly as the underlying partition, bypassing any concern about compatibility of md versions if you need to recover data. LVM is a different story.
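(That works because the old 0.90 md superblock sits at the end of the partition, so the filesystem starts right at the front. Something like this, with /dev/sdb1 as an example member:)

    mkdir -p /mnt/recover
    mount -o ro /dev/sdb1 /mnt/recover    # mount the raw RAID1 member read-only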
Nothing against the 3Ware cards - I agree they are very good, although you did forget to mention the various bugs they have had and fixed over those 6+ years.
Les Mikesell lesmikesell@gmail.com wrote:
Even if they permit drives to move between different models it still means you have to have a spare compatible one handy if your primary box dies.
Not always. There are ways to map various hardware RAID-0 and 10 block-striped volumes in Linux MD or LVM (via DeviceMapper), including 3Ware. RAID-1 is no issue at all, it's just a mirror.
RAID-5 is the only one that differs greatly.
You can mount one partition out of an md RAID1 set directly as the underlying partition, bypassing any concern about compatibility of md versions if you need to recover data. LVM is a different story.
MD hasn't always been that good -- only in more recent kernels. I've been using 3Ware since late in the 2.0 kernel days.
Nothing against the 3Ware cards - I agree they are very good, although you did forget to mention the various bugs they have had and fixed over those 6+ years.
Bugs have been limited to RAID-5 limitations.
First off, they should have never implemented RAID-5 on the Escalade 6000. Its ASIC was never designed for it.
Secondly, I have repeatedly stated that the ASIC+SRAM approach in even the 7000/8000 series is for non-blocking I/O, and not good for a buffering/XOR operation like RAID-5 writes. It's fine if you largely just read from the RAID-5 volume, but it tanks on RAID-5 writes (although it can be far better than software RAID when rebuilding).
Lastly, I have _never_ been a proponent of the Escalade 9500S, and recommended people stick with the 8506 and use RAID-10. Their recent introduction of the Escalade 9550SX series, which adds an embedded PowerPC, tells me that the ASIC design, even with DRAM added (in the 9500S), would never be a good performer for RAID-5 writes. But I'm hopeful we'll see great things out of the 9550SX series.
Until someone shows me an application where RAID-5 is faster than RAID-10, I will stick with RAID-10. With ATA drives as cheap as they are, the few extra GBs from RAID-5 are not worth giving up the write performance of RAID-10. And RAID-10 load-balances reads better than RAID-5.
I've not yet tried Software RAID 1 with Centos 4.x but I've done so with Fedora Core 1 / X86-32 so I'd assume that my comments would apply.
I tend to prefer software RAID simply because then I'm not locked to a specific vendor/controller. If a hardware failure occurs that takes out the controller but leaves at least one of the HDDs ok, I can take one software RAID HDD, stick it into another controller, and have a working system in very short order. Hardware RAID frequently does not have this advantage.
When I've set up RAID, I did so with the RH installer, and have always picked RAID1. (RAID5 is a joke for SW RAID) I've set up a number of RAID installs with "boot/root" and extensions using the Software RAID howto. (google it)
Experimentally, I've set up a RAID array, removed one drive, booted, shut down, and then replaced it with the other. Both drives booted fine, so there doesn't appear to be any particular issue with grub. When done, I had to resync the drives (again, see the Software RAID howto)
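Roughly, that resync went something like this (assuming IDE disks /dev/hda and /dev/hdb as an example, with arrays /dev/md0 and /dev/md1 -- adjust for your controller):

    sfdisk -d /dev/hda | sfdisk /dev/hdb   # copy the partition table to the re-added disk
    mdadm /dev/md0 --add /dev/hdb1         # hot-add its members back into the mirrors
    mdadm /dev/md1 --add /dev/hdb2
    cat /proc/mdstat                       # watch the rebuild progress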
The only time I ran into trouble is that when you set up a RAID array, you have to have all the partitions installed on the machine at setup time. It seems you can't add active partitions after the fact.
Other than that, in 5 cases, it's been basically perfect for me, and I plan to deploy Centos 4.x/Software RAID/Boot-root again sometime next month.
Hope this helps, =)
-Ben
On Thursday 20 October 2005 07:59, Steve Bergman wrote:
Hi,
I have a client that insists on going with software rather than hardware raid1 to save a few dollars.
Can Centos 4.2 do Boot and Root on software Raid1?
I've heard criticisms of Grub and bootable RAID. Also, if it is not set up to work out of the box, I'd like to try to talk them out of it.
I've been down that road before. If I go nonstandard, by the time I actually need to boot from the second drive in an emergency, I've forgotten the special procedure.
(Why can't businesses just spend a few extra dollars to do things right?!)
Thanks for any info.
-Steve
[ I really dislike these discussions because they are often opinions based on limited viewpoints. I've used a lot of software and hardware approaches over many different platforms and many different systems, and what I repeatedly see is absolutes applied when they are not applicable to many vendors. ]
Benjamin Smith lists@benjamindsmith.com wrote:
I've not yet tried Software RAID 1 with Centos 4.x but I've done so with Fedora Core 1 / X86-32 so I'd assume that my comments would apply.
Just be wary of changes in MD and/or LVM/LVM2.
I tend to prefer software RAID simply because then I'm not locked to a specific vendor/controller.
With RAID-1 (and not even block-striped RAID-0 or 10), several vendors don't "lock you in." Not only can you typically read the disk label on the "raw" disk, but there is support for reading volumes of different drives.
In fact, this is how LVM2+DM (DeviceMapper) is adding support for FRAID in kernel 2.6.
If a hardware failure occurs that takes out the controller but leaves at least one of the HDDs ok, I can take one software RAID HDD, stick it into another controller, and have a working system in very short order.
So can I, and I have done so when I didn't have a 3Ware Escalade or equivalent FRAID card around.
Hardware RAID frequently does not have this advantage.
That is an absolutely _false_ technical statement with regards to _several_ vendors. Please stop "blanket covering" all "Hardware RAID" with such absolutes.
When I've set up RAID, I did so with the RH installer, and have always picked RAID1.
I'm a huge fan of RAID-1 and RAID-10.
(RAID5 is a joke for SW RAID)
Agreed. The newer Opteron systems help as long as they have an excellent I/O design, but that loads much of the interconnect doing just I/O operations for the writes (let alone during rebuilds) -- loads that could be doing data services.
I've set up a number of RAID installs with "boot/root" and extensions using the Software RAID howto. (google it)
And I have as well. Unfortunately, the main concern is headless/remote recovery when a disk fails. The issue is installing the MBR and bootstrap so the system can boot from another device when the BIOS still maps the original, failed disk.
Until the LVM2+DM work supports more FRAID chips/cards to overcome the BIOS mapping issue (not likely until the FRAID vendors recognize and support the DM work), I still prefer a $100 3Ware Escalade.
Experimentally, I've set up a RAID array, removed one drive, booted, shut down, and then replaced it with the other.
As have I, on non-x86/non-Linux architectures as well as Linux. But if you have a headless/remote system and the first drive fails, that doesn't solve the BIOS mapping issue.
Both drives booted fine, so there doesn't appear to be any particular issue with grub.
As long as you have physical access to the system.
When done, I had to resync the drives (again, see the Software RAID howto)
I prefer autonomous operation. It's worth $100 IMHO.
The only time I ran into trouble is that when you set up a RAID array, you have to have all the partitions installed on the machine at setup time.
_Not_ true, even with software RAID!
If you aren't using LVM, then yes, you have to pre-partition. But even then, you can define new MD slices.
But if you are using LVM/LVM2 (whether LVM/LVM2 is atop an MD setup, or you create MD slices in LVM/LVM2 extents), you can dynamically create slices, filesystems, etc. without bringing down the box.
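(A rough example, assuming a volume group named vg0 built on top of /dev/md1 -- the names are just placeholders:)

    pvcreate /dev/md1               # turn the md device into an LVM physical volume
    vgcreate vg0 /dev/md1           # build a volume group on it
    lvcreate -L 10G -n data vg0     # carve out a new logical volume, online
    mkfs.ext3 /dev/vg0/data         # new filesystem, no downtime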
It seems you can't add active partitions after the fact.
I think you're confusing the difficulty of "resizing" MD slices with adding "active" partitions. Those are limitations of the legacy BIOS/DOS disk label more than of Linux MD, and LVM/LVM2 solves them nicely.
[ Just like LDM Disk Labels solve for Windows NT5+ (2000+) ]
Other than that, in 5 cases, it's been basically perfect for me, and I plan to deploy Centos 4.x/Software RAID/Boot-root again sometime next month.
As have I. But at the same time, I find that putting in a $100 3Ware card has saved my butt.
Like the time the first disk failed 1,000 miles away, and the BIOS was still mapping the primary disk which it couldn't boot from.
Since then, I have refused to put in a co-located box without a 3Ware Escalade 700x-2 or 800x-2 card. The system has to be able to boot without local modification.
Bryan,
If my viewpoint is limited, I have within my email defined (to the best of my ability) the limits of my viewpoint. Use as you see fit.
Take a big, fat, chill pill, and realize that you're amongst friends, eh? Where have I applied absolutes? Is it not true that hardware RAID "frequently" leaves you locked in? Not "ALWAYS" (which would be an "absolute") but frequently? "several vendors don't lock you in" doesn't sound much like "infrequent" to me.
And, if performance isn't a big issue, why bother with HW RAID? There are many circumstances where data integrity is important, but a few hours of downtime won't kill anybody.
I'm glad you like your 3ware card(s). But I've made stuff work, and work well, at 1/10 your price (where $100 covers the entire computer sans monitor) with software RAID.
Spend your $100 however you like. I offer my opinion, and I offer clear qualifications on the scope of my opinions. If you're running Yahoo, $100 is not even on the radar. But, if you're running a server for a 6-man company, $100 can be the difference between gaining and losing a contract.
So, get off your high horse, offer your endorsements of the 3ware cards to the rest of us, and relax already!
-Ben
PS: If my butt is on the line and it's located 1,000 miles away, I'm going to demand 24x7 "hot hands" at a high quality colo with qualified staff. (And I do, currently.) There are many things that can go wrong, only one of which is a HDD failure, and if a controller card is all you feel you can count on, may god have mercy on you and your clientele!
BEFORE THIS TURNS INTO A PISSING CONTEST ... BE NICE, BE NICE, BE NICE
-- Johnny Hughes CentOS 4 Developer ... and mailman admin for this list :)