[CentOS] raid 1 question

Fri Mar 22 15:34:44 UTC 2013
Dave Johansen <davejohansen at gmail.com>

On Fri, Mar 22, 2013 at 5:16 AM, SilverTip257 <silvertip257 at gmail.com>
wrote:
>
> > On Thu, Mar 21, 2013 at 10:04 PM, Dave Johansen
> > <davejohansen at gmail.com> wrote:
>
> > On Tue, Mar 12, 2013 at 5:04 PM, SilverTip257 <silvertip257 at gmail.com>
> > wrote:
> > >
> > > On Tue, Mar 12, 2013 at 1:28 PM, Dave Johansen
> > > <davejohansen at gmail.com> wrote:
> > >
> > > > On Fri, Mar 8, 2013 at 6:14 AM, SilverTip257
> > > > <silvertip257 at gmail.com>
> > > > wrote:
> > > > >
> > > > > On Thu, Mar 7, 2013 at 6:54 PM, Gerry Reno <greno at verizon.net>
> > > > > wrote:
> > > > > >
> > > > > > On 03/07/2013 06:52 PM, Les Mikesell wrote:
> > > > > > > On Thu, Mar 7, 2013 at 5:40 PM, John R Pierce
> > > > > > > <pierce at hogranch.com> wrote:
> > > > > > >> On 3/7/2013 3:35 PM, Gerry Reno wrote:
> > > > > > >>> Dave,  I've been using software raid with every type of RedHat
> > > > > > >>> distro RH/CentOS/Fedora for over 10 years without any serious
> > > > > > >>> difficulties.  I don't quite understand the logic in all these
> > > > > > >>> negative statements about software raid on that wiki page.  The
> > > > > > >>> worst I get into is I have to boot from a bootdisk if the MBR
> > > > > > >>> gets corrupted for any reason.  No big deal.  Just rerun grub.
> > > > > > >> have you been putting /boot on a mdraid?  that's what the
> > > > > > >> article is recommending against.
> > > > > > > I've put /boot on md raid1 on a lot of machines (always drives
> > > > > > > small enough to be MBR based) and never had any problem with the
> > > > > > > partition looking enough like a native one for grub to boot it.
> > > > > > > The worst thing
> > > > > >
> > > > >
> > > > > No problems here either - I have had /boot on software raid1 on
> > > > > quite a few systems past and present.
> > > > >
> > > > >
> > > > > > > I've seen about it is that some machines change their idea of bios
> > > > > >
> > > > >
> > > > > If I do have a drive fail, I can frequently hot-remove it and hot-add
> > > > > the replacement drive to get it resyncing without powering off.
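> > > > >
> > > > > For example, something along these lines (only a sketch; the md
> > > > > device and partition names are assumptions):
> > > > >
> > > > >   mdadm /dev/md0 --fail /dev/sdb1     # mark the dying member failed
> > > > >   mdadm /dev/md0 --remove /dev/sdb1   # pull it out of the array
> > > > >   # ...swap the drive and partition it to match the survivor...
> > > > >   mdadm /dev/md0 --add /dev/sdb1      # start the resync
> > > > >   cat /proc/mdstat                    # watch the rebuild progress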
> > > > >
> > > > >
> > > > > > > disk 0 and 1 when the first one fails, so your grub setup might
> > > > > > > be wrong even after you do it on the 2nd disk - and that would be
> > > > > > > the same with/without raid.  As long as you are prepared to boot
> > > > > > > from a rescue disk you can fix it easily anyway.
> > > > > > >
> > > > > > Good point, Les.  Rescue disks and bootdisks are key and critical
> > > > > > if you're going to use software raid.
> > > > > >
> > > > > >
> > > > > I think we could argue that rescue disks are a necessity regardless
> > > > > of whether one is using software raid or not.  :)
> > > >
> > > > Thanks for all of the helpful info, and now I have a follow-on
> > > > question. I have a Dell m6500 that I've been running as a RAID 1 using
> > > > the BIOS RAID on RHEL 5. The issue is that when you switch one of the
> > > > drives, the BIOS renames the RAID and then RHEL 5 doesn't recognize it
> > > > anymore. So here are my questions:
> > > >
> > > > 1) Has this issue of handling the renaming been resolved in RHEL 6?
> > > > (my guess is no)
> > > >
> > >
> > > I've not seen any weird bios naming issues.
> >
> > It's an Intel Software RAID and whenever either of the drives is
> > switched, the system won't boot at all because it says it can't find
> > the device. I end up having to boot to a rescue disk and then manually
> > edit the init image to get it to boot.
> >
> > >
> > >
> > > > 2) Would a software RAID be a better choice than using the BIOS
> > > > RAID?
> > > >
> > >
> > > It depends!  In another thread on this list someone said they prefer the
> > > reliable Linux toolset over the manufacturer tools/RAID controllers.
> > >
> > > In a way it comes down to what you can afford and what you are
> > > comfortable with.  And then there are chips on the hardware RAID
> > > controllers to which the RAID operations are offloaded.
> > >
> > >
> > > > 3) If a software RAID is the better choice, is there going to be an
> > > > impact on performance/stability/etc?
> > > >
> > >
> > > Supposedly hardware RAID performs better.  I've not done any tests to
> > > quantify this statement I've heard others make.
> > >
> > > Since it's software RAID, the OS will be using a few CPU cycles to
> > > handle the softraid.  But I doubt you'll miss those CPU cycles ... I
> > > haven't, and I have a mix of hardware raid and software raid systems.
> > > In the end it will come down to drive performance in terms of how many
> > > IOPS you can squeeze out of your drives.
> > >
> > > Make certain to set up checks for your array health and disk health
> > > (smartd).  If you use hardware raid, many controllers don't allow
> > > directly accessing the drives with smartctl ... you have to use a vendor
> > > binary/utility (or an open source one if available).
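> > >
> > > A rough sketch of what that can look like (the device name and the
> > > megaraid controller type below are assumptions; the -d value depends
> > > on your hardware):
> > >
> > >   smartctl -H /dev/sda                 # plain SATA / softraid member
> > >   smartctl -a -d megaraid,0 /dev/sda   # drive 0 behind an LSI controller
> > >
> > >   # in /etc/smartd.conf: monitor the drive and mail root on trouble
> > >   /dev/sda -a -d sat -m root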
> > >
> > > And then there's aligning your hardware RAID boundaries... [0] [1]  :)
> > >
> > > [0]
> > > http://www.mysqlperformanceblog.com/2011/06/09/aligning-io-on-a-hard-disk-raid-the-theory/
> > > [1]
> > > http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition/
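> > >
> > > For XFS on a striped array, the alignment those articles describe boils
> > > down to something like this (the device name, stripe unit, and width
> > > here are placeholders; use your controller's real chunk size and number
> > > of data disks):
> > >
> > >   mkfs.xfs -d su=64k,sw=2 /dev/sdb1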
> >
> > Thanks to everyone for all the info and help. It sounds like the best
> > solution is a software RAID with /boot being a non-RAID partition and
> > then being rsynced onto the second drive, so I'll give that a whirl.
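> >
> > Concretely, that would be something along the lines of (the mount point
> > for the second drive's boot partition is just an example):
> >
> >   rsync -a --delete /boot/ /mnt/boot2/
> >
> > run from cron or after each kernel update.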
> >
>
> Just a bit of emphasis here:
>
> I've had success with /boot being part of a RAID1 software array and
> installing GRUB on the MBR of both disks.  That way, if the main/first disk
> fails, the other disk still has GRUB installed and the system will boot if
> rebooted.
>
> RAID1 is viewed as single disks by GRUB.  That is, it boots off the _first
> drive_ -- it is not until the Linux kernel has loaded and mdadm has
> assembled the drives that you actually have a RAID1.  Since GRUB only boots
> off one drive, it is prudent to install GRUB on both disks when set up this
> way.
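>
> On CentOS 5/6 with GRUB legacy that looks roughly like this (the disk
> names, and the assumption that /boot is the first partition on each disk,
> are only an example):
>
>   grub-install /dev/sda
>   grub
>   grub> device (hd0) /dev/sdb
>   grub> root (hd0,0)
>   grub> setup (hd0)
>   grub> quit
>
> The device line tells the grub shell to treat the second disk as hd0, so
> setup writes a boot sector to its MBR as well.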
>
> Putting /boot on a RAID1 software array will save you from having to rsync
> /boot to the other drive, and from booting to a rescue CD to install GRUB
> on the new drive after the primary drive dies.  The above configuration is
> some work up front, but less hassle in the wake of a drive failure.
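>
> The up-front work is mainly creating the /boot array with metadata that
> GRUB legacy can read.  A sketch (the partition names are assumptions):
>
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
>         /dev/sda1 /dev/sdb1
>   mkfs.ext3 /dev/md0
>
> The 0.90 (or 1.0) superblock lives at the end of the partition, so GRUB
> legacy just sees an ordinary ext3 filesystem on each member.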
>
> Try it on a test system if you don't trust the configuration ... it will
> boot.  :)
> I have this configuration on fifteen or more systems (rough estimate) and
> some have been in service for years.

That does sound like a simpler solution in the longer term and I'm
more concerned with maintenance/use than with the difficulty of
setting it up, so I will give this a whirl.
Thanks,
Dave