[CentOS] looking for RAID 1+0 setup instructions?

Ross Walker rswwalker at gmail.com
Tue Sep 1 13:44:06 UTC 2009


On Aug 31, 2009, at 7:59 PM, Christopher Chan <christopher.chan at bradbury.edu.hk 
 > wrote:

> Ross Walker wrote:
>> On Aug 30, 2009, at 10:33 PM, Christopher Chan <christopher.chan at bradbury.edu.hk
>>> wrote:
>>
>>
>>>>>> How would one set up RAID 1+0 (i.e. 2x mirrored RAID1s and
>>>>>> then a RAID 0 on top of them) on, say, CentOS 4.6?
>>>>>>
>>>>>>
>>>>> Set up both RAID-1 arrays, then stripe them with LVM?
>>>>>
>>>>> http://www.redhat.com/magazine/009jul05/features/lvm2/
>>>>>
>>>>> Though I'd prefer to opt for a hardware RAID card. I think
>>>>> you said you had SATA disks; if that's the case, I would
>>>>> go for a 3ware.
>>>>>
>>>>> nate
>>>>>
>>>>>
>>>>>
>>>>>
>>>> Nate, this is what I was looking for  :)
>>>>
>>>> I'm going away for 2 weeks now, but will definitely give it a
>>>> shot as soon as I can.
>>>>
>>>>
>>>>
>>> I would NOT do that. You should let the md layer handle all
>>> things raid and let lvm do just volume management.
>>>
>>
>> You're under the assumption that they are two different systems.
>>
> You're under the assumption that they are not.

http://en.m.wikipedia.org/wiki/Device_mapper

If you want, I can forward LXR references to where MD and LVM hook
into the device mapper code, or LKML threads that talk about rewriting
MD and LVM for device mapper.


>> Md RAID and LVM are both interfaces to the device mapper system which
>> handles the LBA translation, duplication and parity calculation.
>>
>
> Are they? Since when were md and dm the same thing? dm was added
> after md had had a long presence in the linux kernel... like since
> linux 2.0.

Both MD RAID and LVM were rewritten to use the device mapper interface  
to mapped block devices back around the arrival of 2.6.
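
You can see the device mapper side of LVM on any box, for example
(purely illustrative; assumes an existing VG called vg0 with an LV
called root):

   # list dm devices, then dump the mapping table LVM created for the LV
   dmsetup ls
   dmsetup table vg0-root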

>> I have said it before, but I'll say it again, how much I wish md RAID
>> and LVM would merge to provide a single interface for creation of
>> volume groups that support different RAID levels.
>>
>>
> Good luck with that. If key Linux developers diss the zfs approach and
> vouch for the multi-layer approach, I do not ever see md and dm  
> merging.

I'm not talking about ZFS, and I'm not talking about merging in the
file system, just the RAID and logical volume management layers, which
would make designing installers and managing systems simpler.


>>
>>> To create a raid1+0 array, you first create the mirrors and then you
>>> create a striped array that consists of the mirror devices. There is
>>> another raid10 module that does its own thing with regard to
>>> 'raid10', is not supported by the installer, and does not
>>> necessarily behave like raid1+0.
>>>
>>
>> Problem is the install program doesn't support setting up RAID10 or
>> layered MD devices.
>>
> Oh? I have worked around it before even in the RH9 days. Just go into
> the shell (Hit F2), create what you want, go back to the installer.  
> Are
> you so sure that anaconda does not support creating layered md  
> devices?
> BTW, why are you talking about md devices now? I thought you said md  
> and
> dm are the same?

You know what, let me try just that today; I have a new install to do,
so I'll try pre-creating a RAID10 at install time and report back.
First I'll try layered MD devices, then I'll try creating a RAID10 md
device, and we'll see if it can even boot off them.
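
For reference, this is roughly what I plan to type in the install
shell (a sketch only; the partition names sda1-sdd1 and md numbers are
assumptions, adjust to taste):

   # layered approach: two RAID1 mirrors, then a RAID0 stripe on top
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
   mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

   # native approach: the single raid10 personality
   mdadm --create /dev/md3 --level=10 --raid-devices=4 \
       /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1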

>> I would definitely avoid layered MD devices as it's more complicated
>> to resolve disk failures.
>>
> Huh?
>
> I do not see what part of 'cat /proc/mdstat' will confuse you. It will
> always report which md device had a problem and it will report which
> device, be they md devices (rare) or disks.

Having a complex setup is always more error prone than a simpler one.
Always.
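
Granted, the mdadm side of replacing a failed member is simple enough;
roughly (illustrative only, assuming /dev/sda1 failed out of md0 and
/dev/sde1 is the replacement):

   cat /proc/mdstat                           # spot the degraded mirror
   mdadm --manage /dev/md0 --fail /dev/sda1   # if not already marked failed
   mdadm --manage /dev/md0 --remove /dev/sda1
   mdadm --manage /dev/md0 --add /dev/sde1    # resync starts on its own

It's the extra layer of indirection I'm wary of, not the individual
commands.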

>> In my tests an LV striped across two RAID1 devices gave the exact
>> same performance as a RAID10, but with the added benefit of being
>> able to create LVs with varying stripe segment sizes, which is great
>> for varying workloads.
>
>
> Now that is complicating things. Is the problem in the dm layer or in
> the md layer...yada, yada

Not really: have multiple software or hardware RAID1s, make a VG out
of them, then create LVs. You don't have to do anything special if you
don't need it, but the flexibility is there and simple to use when you
do. Try changing the segment size of an existing software or hardware
array once it's already set up.
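
Roughly like this (a sketch only; the VG name, LV names, sizes and
stripe sizes are made up):

   pvcreate /dev/md0 /dev/md1
   vgcreate vg0 /dev/md0 /dev/md1

   # stripe each LV across both mirrors, picking a segment size per workload
   lvcreate -n dblv -L 50G  -i 2 -I 64  vg0   # 64KB stripes for random IO
   lvcreate -n vmlv -L 100G -i 2 -I 256 vg0   # 256KB stripes for sequential IO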

You know, you really are an arrogant person who doesn't tolerate
anyone disagreeing with him. You are the embodiment of everything
people mean when they talk about the Linux community's elitist
attitude, and I wish you would make at least a small attempt to change
yours.

-Ross
  


