[CentOS] Re: Anaconda doesn't support raid10

Scott Silva ssilva at sgvwater.com
Mon May 7 22:13:23 UTC 2007


Ruslan Sivak spake the following on 5/7/2007 2:43 PM:
> Scott Silva wrote:
>> Ruslan Sivak spake the following on 5/7/2007 1:44 PM:
>>  
>>> Toby Bluhm wrote:
>>>    
>>>> Ruslan Sivak wrote:
>>>>      
>>>>> Ross S. W. Walker wrote:
>>>>>        
>>>>>>> -----Original Message-----
>>>>>>> From: centos-bounces at centos.org
>>>>>>> [mailto:centos-bounces at centos.org] On
>>>>>>> Behalf Of Ruslan Sivak
>>>>>>> Sent: Monday, May 07, 2007 12:53 PM
>>>>>>> To: CentOS mailing list
>>>>>>> Subject: [CentOS] Anaconda doesn't support raid10
>>>>>>>
>>>>>>> So after troubleshooting this for about a week, I was finally able
>>>>>>> to create a raid10 device by installing the system, copying the md
>>>>>>> modules onto a floppy, and loading the raid10 module during the
>>>>>>> install.
>>>>>>> Now the problem is that I can't get it to show up in anaconda.  It
>>>>>>> detects the other arrays (raid0 and raid1) fine, but the raid10
>>>>>>> array won't show up.  Looking through the logs (Alt-F3), I see the
>>>>>>> following warning:
>>>>>>>
>>>>>>> WARNING: raid level RAID10 not supported, skipping md10.
>>>>>>>
>>>>>>> I'm starting to hate the installer more and more.  Why won't it let
>>>>>>> me install on this device, even though it's working perfectly from
>>>>>>> the shell?  Why am I the only one having this problem?  Is nobody
>>>>>>> out there using md-based raid10?
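For anyone else who wants to try that, the procedure above would look
something like this from the installer's second console (the floppy mount
point, module path, and device names are only examples, so adjust them for
your setup):

   # Alt-F2 gives you a shell during the install
   mkdir /tmp/floppy
   mount /dev/fd0 /tmp/floppy
   # load the raid10 module that was copied onto the floppy
   insmod /tmp/floppy/raid10.ko
   # then build the array by hand with mdadm
   mdadm --create /dev/md10 --level=10 --raid-devices=4 \
       /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3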
>>>>>> Most people install the OS on a 2-disk raid1, then create a separate
>>>>>> raid10 for data storage.
>>>>>>
>>>>>> Anaconda was never designed to create RAID5/RAID10 during install.
>>>>>>
>>>>>> -Ross
>>>>>>
>>>>>>             
>>>>> Whether or not it was designed to create raid5/raid10, it allows the
>>>>> creation of raid5 and raid6 arrays during install.  It doesn't, however,
>>>>> allow the use of raid10, even if the array is created in the shell
>>>>> outside of anaconda (or if you have an old installation on a raid10).
>>>>> I've just installed the system as follows:
>>>>>
>>>>> raid1 for /boot with 2 spares (200MB)
>>>>> raid0 for swap (1GB)
>>>>> raid6 for / (10GB)
>>>>>
>>>>> After installing, I was able to create a raid10 device, mount it
>>>>> successfully, and have it automounted via /etc/fstab.
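For the archives, the post-install steps for a data array like that would be
roughly the following (the md device, mount point, and filesystem type are
just examples):

   # put a filesystem on the new array and mount it
   mkfs.ext3 /dev/md10
   mkdir -p /data
   mount /dev/md10 /data

   # and an /etc/fstab line so it comes back on reboot
   /dev/md10   /data   ext3    defaults        1 2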
>>>>>
>>>>> Now to test what happens when a drive fails: I pulled out the first
>>>>> drive, and the box refused to boot.  Going into rescue mode, I was able
>>>>> to mount /boot, was not able to mount the swap device (as is to be
>>>>> expected, since it's raid0), and was also not able to mount / for some
>>>>> reason, which is a little surprising.
>>>>> I was able to mount the raid10 partition just fine.
>>>>> Maybe I messed up somewhere along the line.  I'll try again, but it's
>>>>> disheartening to see a raid6 array die after one drive failure, even if
>>>>> it was somehow my fault.
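Before writing the raid6 off, it would be worth checking from rescue mode
whether the array simply wasn't assembled in degraded mode; something along
these lines should tell you (the md device name is only an example):

   # see which arrays the rescue environment actually assembled
   cat /proc/mdstat
   # try to assemble the root array from the members that are left
   mdadm --assemble --scan
   # or force a specific array to start degraded
   mdadm --assemble --run /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3
   mdadm --detail /dev/md2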
>>>>> Also, assuming that the raid6 array could be recovered, what would I
>>>>> do with the swap partition?  Would I just recreate it from the space
>>>>> on the remaining drives, and would that be all I need to boot?
>>>>> Russ
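As for the swap question, recreating it by hand is the easy part; roughly
(the device names are only examples):

   # rebuild a raid0 for swap out of the surviving partitions
   mdadm --create /dev/md1 --level=0 --raid-devices=3 \
       /dev/sdb2 /dev/sdc2 /dev/sdd2
   mkswap /dev/md1
   swapon /dev/md1
   # and point the swap line in /etc/fstab at the new device

Whether that's all you need to boot also depends on grub having been
installed on one of the surviving drives.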
>>>>>
>>>>>
>>>>>         
>>>> Russ,
>>>>
>>>> Nothing here to help you (again - :) just looking down the road a
>>>> little. If you do get this thing working the way you want, will you be
>>>> able to trust it to stay that way?
>>>>
>>>>       
>>> Well, it's been my experience that in Linux, unlike Windows, it might
>>> take a while to get things the way you want, but once you do, you can
>>> pretty much trust them to stay that way.
>>> So yeah, this is what I'm looking to do here.  I want to set up a system
>>> that will survive one (or possibly two) drive failures.  I want to know
>>> what I need to do ahead of time, so that I can be confident in my setup
>>> and know what to do in case disaster strikes.
>>>
>>> Russ
>>>     
>> If you have the hardware, or the money, you can make a system pretty
>> durable.  But you get to a point where the gains aren't worth the cost.
>> You can get a system to 3 "9's" fairly easily, but the cost to get to
>> 4 "9's" is much higher.  If you want something better than 4 "9's", you
>> will have to look at clustering, because a single reboot in a month can
>> shoot down your numbers.
>>
>> If you want total reliability, you will need hot spares, a raid method
>> that rebuilds quickly, and regular backups.
>>
>>   
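To put numbers on that: three 9's (99.9%) allows roughly 8.8 hours of
downtime a year, while four 9's (99.99%) allows only about 53 minutes, so
one unplanned reboot plus a long fsck can eat the whole annual budget.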
> I'm not looking for total reliability.  I am building a low-budget
> file/backup server.  I would like it to be fairly reliable with good
> performance.  Basically, if 1 drive fails, I would like to still be up
> and running, even if it requires some slight reconfiguration (i.e.
> recreating the swap partition).
> If 2 drives fail, I would like to still be up and running, assuming I
> wasn't unlucky enough to have 2 drives fail in the same mirror set.
> If 3 drives fail, I'm pretty much SOL.
> The most important thing is that I can easily survive a single disk
> failure.
> Russ
I had 2 separate 2-drive failures on 2 brand new HP servers. I burned the
servers in over 2 weeks, running various drive-exercising duties like creating
and deleting files, and the drives waited to fail within an hour of each
other. That wasn't even enough time for the hot spare to rebuild. Then I lost
2 more drives in single-event failures. HP was great about sending new drives,
usually the next day, and stated that if one more drive failed from the
original set they would replace the whole lot. But that doesn't get your data
back, or keep the server going. We even ordered an extra drive per server, and
I stuck them in the hardware closet (after some burn-in time) just to be safe.

That also relates to my Adaptec raid controller nightmare, but I have been
sleeping easy since I moved to 3ware cards.

-- 

MailScanner is like deodorant...
You hope everybody uses it, and
you notice quickly if they don't!!!!



