[CentOS] Centos 4.2 and Boot/Root on RAID?

Wed Oct 26 09:48:02 UTC 2005
Johnny Hughes <mailing-lists at hughesjr.com>

BEFORE THIS TURNS INTO A PISSING CONTEST ... BE NICE, BE NICE, BE NICE

--
Johnny Hughes
CentOS 4 Developer ... and mailman admin for this list :)

On Wed, 2005-10-26 at 01:58 -0700, Benjamin Smith wrote:
> Bryan, 
> 
> If my viewpoint is limited, I have defined (to the best of my ability) the 
> limits of that viewpoint within my email. Use it as you see fit. 
> 
> Take a big, fat, chill pill, and realize that you're amongst friends, eh? 
> Where have I applied absolutes? Is it not true that hardware RAID 
> "frequently" leaves you locked in? Not "ALWAYS" (which would be an 
> "absolute") but frequently? "several vendors don't lock you in" doesn't sound 
> much like "infrequent" to me. 
> 
> And, if performance isn't a big issue, why bother with HW RAID? There are many 
> circumstances where data integrity is important, but a few hours of downtime 
> won't kill anybody. 
> 
> I'm glad you like your 3ware card(s). But I've made stuff work, and work well, 
> at 1/10th your price (where $100 includes the entire computer sans monitor) 
> with software RAID. 
> 
> Spend your $100 however you like. I offer my opinion, and I offer clear 
> qualifications on the scope of my opinions. If you're running Yahoo, $100 is 
> not even on the radar.  But, if you're running a server for a 6-man company, 
> $100 can be the difference between gaining and losing a contract. 
> 
> So, get off your high horse, offer your endorsements of the 3ware cards to the 
> rest of us, and relax already! 
> 
> -Ben 
> 
> PS: If my butt is on the line and it's located 1,000 miles away, I'm going to 
> demand 24x7 "hot hands" at a high quality colo with qualified staff. (and I 
> do currently) There are many things that can go wrong, only one of which is 
> an HDD failure, and if a controller card is all you feel you can count on, 
> may God have mercy on you and your clientele! 
> 
> On Tuesday 25 October 2005 17:38, Bryan J. Smith wrote:
> > [ I really dislike these discussions because it is often
> > opinions that are based on limited viewpoints.  I've used a
> > lot of software and hardware approaches over many different
> > platforms and many different systems, and what I repeatedly
> > see is absolutes applied when they are not applicable to many
> > vendors. ]
> > 
> > Benjamin Smith <lists at benjamindsmith.com> wrote:
> > > I've not yet tried Software RAID 1 with Centos 4.x but I've
> > > done so with Fedora Core 1 / X86-32 so I'd assume that my
> > > comments would apply. 
> > 
> > Just be wary of changes in MD and/or LVM/LVM2.
> > 
> > > I tend to prefer software RAID simply because then I'm not
> > > locked to a specific vendor/controller.
> > 
> > With RAID-1 (and not even block-striped RAID-0 or 10),
> > several vendors don't "lock you in."  Not only can you
> > typically read the disk label on the "raw" member disk, but
> > there is also support for reading the volume formats of
> > different vendors' drives.
> > 
> > In fact, this is how LVM2+DM (DeviceMapper) is adding support
> > for FRAID ("fake," firmware-assisted RAID) in kernel 2.6.
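> > 
> > The dmraid tool is the user-space side of that DM work; a
> > minimal sketch of using it (the tool is still young, and the
> > device names depend on your chipset's metadata):
> > 
> >   # dmraid -r    # list disks carrying vendor RAID metadata
> >   # dmraid -s    # show the RAID sets dmraid has discovered
> >   # dmraid -ay   # activate them as /dev/mapper/* devices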
> > 
> > > If a hardware failure occurs that takes out the controller
> > > but leaves at least one of the HDDs ok, I can take one
> > > software RAID HDD, stick it into another controller, and
> > > have a working system in very short order.
> > 
> > So can I, and I have done so when I didn't have a 3Ware
> > Escalade or equivalent FRAID card around.
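> > 
> > A minimal sketch of that recovery, assuming the surviving
> > member shows up as /dev/hdc1 on the rescue box:
> > 
> >   # mdadm --examine /dev/hdc1                  # check the MD superblock
> >   # mdadm --assemble --run /dev/md0 /dev/hdc1  # start the mirror degraded
> >   # mount /dev/md0 /mnt                        # and the data is back
> > 
> > The --run flag lets the mirror start with only one member.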
> > 
> > > Hardware RAID frequently does not have this advantage. 
> > 
> > That is an absolutely _false_ technical statement with
> > regards to _several_ vendors.  Please stop "blanket covering"
> > all "Hardware RAID" with such absolutes.
> > 
> > > When I've set up RAID, I did so with the RH installer, and
> > have always picked RAID1.
> > 
> > I'm a huge fan of RAID-1 and RAID-10.
> > 
> > > (RAID5 is a joke for SW RAID)
> > 
> > Agreed.  The newer Opteron systems help as long as they have
> > an excellent I/O design, but software RAID-5 still loads much
> > of the interconnect with I/O operations for the parity writes
> > (let alone during rebuilds) -- bandwidth that could be
> > serving data instead.
> > 
> > > I've set up a number of RAID installs with "boot/root" and
> > > extensions using the Software RAID howto. (google it) 
> > 
> > And I have as well.  Unfortunately, the main concern is
> > headless/remote recovery when a disk fails.  The issue is
> > installing the MBR and bootstrap so the box can still boot
> > from the other device while the BIOS continues to map the
> > original, now-failed, disk first.
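> > 
> > A minimal sketch of the workaround, assuming /dev/hda and
> > /dev/hdc are the two RAID-1 members with /boot on the first
> > partition of each, is to install the bootstrap on the second
> > disk as if it were the first:
> > 
> >   # grub
> >   grub> device (hd0) /dev/hdc
> >   grub> root (hd0,0)
> >   grub> setup (hd0)
> >   grub> quit
> > 
> > The "device" line is the trick: once the first disk dies, the
> > BIOS presents the survivor as (hd0), so that is where the
> > second copy of the bootstrap must expect to live.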
> > 
> > Until the LVM2+DM work supports more FRAID chips/cards to
> > overcome the BIOS mapping issue (not likely until the FRAID
> > vendors recognize and support the DM work), I still prefer a
> > $100 3Ware Escalade.
> > 
> > > Experimentally, I've set up a RAID array, removed one
> > > drive, booted, shutdown, and then replaced it with the
> > > other.
> > 
> > As have I, on non-x86/non-Linux architectures as well as
> > Linux.  But if you have a headless/remote system, and the
> > first drive fails, that doesn't solve the issue of the BIOS
> > mapping.
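> > 
> > The software equivalent of pulling the drive -- member names
> > assumed -- is to fail it out of the mirror and reboot:
> > 
> >   # mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
> > 
> > Useful for testing, but again only if you can reach the
> > console when the test goes wrong.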
> > 
> > > Both drives booted fine, so there doesn't appear to be
> > > any particular issue with grub.
> > 
> > As long as you have physical access to the system.
> > 
> > > When done, I had to resync the drives (again, see the
> > > Software RAID howto) 
> > 
> > I prefer autonomous operation.  It's worth $100 IMHO.
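> > 
> > For the record, the manual resync the howto describes is a
> > one-liner once the replacement disk is partitioned -- device
> > names assumed:
> > 
> >   # mdadm /dev/md0 --add /dev/hdc1   # hot-add the new member
> >   # cat /proc/mdstat                 # watch the mirror rebuild
> > 
> > The Escalade does the equivalent on its own, which is exactly
> > the autonomy the $100 buys.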
> > 
> > > The only time I ran into trouble is that when you set up a
> > > RAID array, you have to have all the partitions installed
> > > on the machine at setup time.
> > 
> > _Not_ true with even software RAID!
> > 
> > If you aren't using LVM, then yes, you have to pre-partition.
> > But even then, you can define new MD slices.
> > 
> > But if you are using LVM/LVM2 (whether LVM/LVM2 is atop of a
> > MD setup, or you create MD slices in LVM/LVM2 extents), you
> > can dynamically create slices, filesystems, etc... without
> > bringing down the box.
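> > 
> > A minimal sketch, assuming a volume group vg0 and spare
> > partitions /dev/hda5 and /dev/hdc5 on the two disks:
> > 
> >   # mdadm --create /dev/md2 --level=1 --raid-devices=2 \
> >       /dev/hda5 /dev/hdc5          # a new MD slice, created online
> >   # pvcreate /dev/md2              # turn it into an LVM physical volume
> >   # vgextend vg0 /dev/md2          # grow the volume group with it
> >   # lvcreate -L 5G -n newvol vg0   # carve out a new logical volume
> >   # mkfs.ext3 /dev/vg0/newvol      # filesystem, no reboot required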
> > 
> > > It seems you can't add active partitions after the fact.
> > 
> > I think you're mixing the fact that it is difficult to
> > "resize" MD slices with adding "active" partitions.  Those
> > are more limitations with the legacy BIOS/DOS disk label than
> > Linux MD, which LVM/LVM2 solves nicely.
> > 
> > [ Just like LDM Disk Labels solve for Windows NT5+ (2000+) ]
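> > 
> > The "resize" half is equally painless under LVM2.  A sketch,
> > assuming the ext2online utility CentOS 4 ships for growing a
> > mounted ext3 filesystem:
> > 
> >   # lvextend -L +2G /dev/vg0/newvol   # grow the logical volume
> >   # ext2online /dev/vg0/newvol        # grow ext3 in place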
> >  
> > > Other than that, in 5 cases, it's been basically perfect
> > > for me, and I plan to deploy Centos 4.x/Software
> > > RAID/Boot-root again sometime next month. 
> > 
> > As have I.  But at the same time, I find that putting in a
> > $100 3Ware card has saved my butt.
> > 
> > Like the time the first disk failed 1,000 miles away, and the
> > BIOS was still mapping the primary disk which it couldn't
> > boot from.
> > 
> > Since then, I have refused to put in a co-located box without
> > a 3Ware Escalade 700x-2 or 800x-2 card.  The system has to be
> > able to boot without local modification.
> > 
> > 
> > -- 
> > Bryan J. Smith                | Sent from Yahoo Mail
> > mailto:b.j.smith at ieee.org     |  (please excuse any
> > http://thebs413.blogspot.com/ |   missing headers)