[CentOS] looking for RAID 1+0 setup instructions?

Tue Sep 1 15:09:48 UTC 2009
Chan Chung Hang Christopher <christopher.chan at bradbury.edu.hk>

>>>> I would NOT do that. You should let the md layer handle all things
>>>> raid
>>>> and let lvm do just volume management.
>>>>
>>>>         
>>> You're under the assumption that they are two different systems.
>>>
>>>       
>> You're under the assumption that they are not.
>>     
>
> http://en.m.wikipedia.org/wiki/Device_mapper
>
> If you want I can forward LXR references showing MD and LVM hooking into
> the device mapper code, or LKML references that talk about rewriting MD
> and LVM for device mapper.
>
>
>   
md can make use of dm to get devices for its use, but it certainly does 
not just ask dm to create a raid1 device. md does the actual raiding 
itself. Not dm.
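A quick illustration (device names here are just an example, assuming two 
spare partitions):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    cat /proc/mdstat     # the mirror shows up in md's own status file
    dmsetup ls           # ...and not in the device mapper's table listing

md builds and runs that mirror itself; no dm target is created for it.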


>>> Md RAID and LVM are both interfaces to the device mapper system which
>>> handles the LBA translation, duplication and parity calculation.
>>>
>>>       
>> Are they? Since when were md and dm the same thing? dm was added  
>> after md
>> had had a long presence in the linux kernel...like since linux 2.0
>>     
>
> Both MD RAID and LVM were rewritten to use the device mapper interface  
> to mapped block devices back around the arrival of 2.6.
>
>   
That does not equate to md and dm being the same thing. Like you say, 
they were rewritten 'TO USE' dm. Since when does using dm make them the 
same thing?


>>> I have said it before, but I'll say it again, how much I wish md RAID
>>> and LVM would merge to provide a single interface for creation of
>>> volume groups that support different RAID levels.
>>>
>>>
>>>       
>> Good luck with that. If key Linux developers diss the zfs approach and
>> vouch for the multi-layer approach, I do not ever see md and dm  
>> merging.
>>     
>
> I'm not talking ZFS, I'm not talking about merging the file system,  
> just the RAID and logical volume manager which could make designing  
> installers and managing systems simpler.
>
>   
Good luck taking Neil Brown out then. http://lwn.net/Articles/169142/
and http://lwn.net/Articles/169140/

Get rid of Neil Brown and md will disappear. I think.
>   
>>>> To create a raid1+0 array, you first create the mirrors and then you
>>>> create a striped array that consists of the mirror devices. There is
>>>> another raid10 module that does its own thing with regards to
>>>> 'raid10', that is not supported by the installer, and that does not
>>>> necessarily behave like raid1+0.
>>>>
>>>>         
>>> Problem is the install program doesn't support setting up RAID10 or
>>> layered MD devices.
>>>
>>>       
>> Oh? I have worked around it before even in the RH9 days. Just go into
>> the shell (Hit F2), create what you want, go back to the installer.  
>> Are
>> you so sure that anaconda does not support creating layered md  
>> devices?
>> BTW, why are you talking about md devices now? I thought you said md  
>> and
>> dm are the same?
>>     
>
> You know what, let me try just that today, I have a new install to do,  
> so I'll try pre-creating a RAID10 on install and report back. First  
> I'll try layered MD devices and then I'll try creating a RAID10 md  
> device and we'll see if it can even boot off them.
>
>   
Let me just point out that I never said you can boot off a raid1+0 
device. I only said that you can create a raid1+0 device at install 
time. /boot will have to be on a raid1 device. The raid1+0 device can be 
used for other filesystems, including root, or as a physical volume. 
Forget raid10, that module is not even available at install time with 
CentOS 4 IIRC. Not sure about CentOS 5.
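Roughly what I mean, with four example partitions (sda1/sdb1/sdc1/sdd1 
here are assumptions, use whatever you have) and /boot on its own raid1 
elsewhere:

    # two raid1 mirrors
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # a stripe across the two mirrors = raid1+0
    mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2

/dev/md3 can then hold root or become an lvm physical volume; /boot stays 
on a plain raid1 device.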


>>> I would definitely avoid layered MD devices as it's more complicated
>>> to resolve disk failures.
>>>
>>>       
>> Huh?
>>
>> I do not see what part of 'cat /proc/mdstat' will confuse you. It will
>> always report which md device had a problem and which constituent
>> device failed, be it an md device (rare) or a disk.
>>     
>
> Having a complex setup is always more error prone than a simpler one.  
> Always.
>
>   
-_-

Both are still multilayered...just different codepaths/tech. I do not 
see how lvm is simpler than md.
>>> In my tests an LVM striped across two RAID1 devices gave the exact
>>> same performance as a RAID10, but it gave the added benefit of
>>> creating LVs with varying stripe segment sizes which is great for
>>> varying workloads.
>>>       
>> Now that is complicating things. Is the problem in the dm layer or in
>> the md layer...yada, yada
>>     
>
> Not really: have multiple software or hardware RAID1s, make a VG out of  
> them, then create LVs. One doesn't have to do anything special if it  
> isn't needed, but it's there and simple to do if you need to. Try  
> changing the segment size of an existing software or hardware array  
> when it's already set up.
>   
Yeah, using lvm to stripe is certainly more convenient.
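Something like this, assuming /dev/md1 and /dev/md2 are the two raid1 
devices (names are just an example):

    pvcreate /dev/md1 /dev/md2
    vgcreate vg0 /dev/md1 /dev/md2
    # -i 2 stripes each LV across both mirrors, -I sets that LV's stripe size in KB
    lvcreate -n data -L 100G -i 2 -I 64  vg0
    lvcreate -n logs -L 20G  -i 2 -I 256 vg0

Each LV picks its own stripe size without touching the arrays underneath.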


> You know, you really are an arrogant person who doesn't tolerate  
> anyone disagreeing with them. You are the embodiment of everything  
> people talk about when they talk about the Linux community's elitist  
> attitude, and I wish you would make at least a small attempt to change  
> your attitude.

How have I been elitist? Did I tell you to get lost like elitists like to 
do? Did I snub you or something? Or is it that only you get to say I made 
assumptions, and I cannot say the same of you???