Thanks for all of the input... I finally came across a good article summarizing what I needed. It looks like I am going to try the f2 option and then do some testing vs the default n2 option. I am building the array as we speak, but it looks like building with the f2 option will take 24 hours vs 2 hours for the n2 option... this is on two 1 TB HDDs.<br>
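For anyone following along, here is a minimal sketch of the mdadm commands behind the two layouts being compared. The device names /dev/sdb and /dev/sdc are placeholders for your two drives; these commands need root and will destroy any existing data on the disks:

```shell
# Near layout (n2): copies of each chunk sit at roughly the same
# offset on adjacent devices -- this is the default raid10 layout
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 /dev/sdb /dev/sdc

# Far layout (f2): the second copy of each chunk lives in a "far"
# section of the other disk -- faster sequential reads, but a much
# longer initial sync (hence the 24h vs 2h build times above)
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the resync progress either way
cat /proc/mdstat
```

You can confirm which layout an existing array uses with `mdadm --detail /dev/md0` (look for the "Layout" line, e.g. "near=2" or "far=2").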
<br><div class="gmail_quote">On Sat, Sep 25, 2010 at 3:04 PM, Ross Walker <span dir="ltr"><<a href="mailto:rswwalker@gmail.com">rswwalker@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div class="im">On Sep 25, 2010, at 1:52 PM, Tom H <<a href="mailto:tomh0665@gmail.com">tomh0665@gmail.com</a>> wrote:<br>
<br>
> On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker <<a href="mailto:rswwalker@gmail.com">rswwalker@gmail.com</a>> wrote:<br>
>> On Sep 25, 2010, at 9:11 AM, Christopher Chan <<a href="mailto:christopher.chan@bradbury.edu.hk">christopher.chan@bradbury.edu.hk</a>> wrote:<br>
>>> Jacob Bresciani wrote:<br>
>>>> RAID10 requires at least 4 drives does it not?<br>
>>>><br>
>>>> Since it's a stripe set of mirrored disks, the smallest configuration I<br>
>>>> can see is 4 disks, 2 mirrored pairs striped.<br>
>>><br>
>>> He might be referring to what he can get from the mdraid10 (I know, Neil<br>
>>> Brown could have chosen a better name) which is not quite the same as<br>
>>> nested 1+0. Doing it the nested way, you need at least 4 drives. Using<br>
>>> mdraid10 is another story. Thanks Neil for muddying the waters!<br>
><br>
><br>
>> True, but if you work it out, mdraid10 with 2 drives = raid1; you would need 3<br>
>> drives to get the distributed-copy feature of Neil's mdraid10.<br>
><br>
> I had posted earlier (<br>
> <a href="http://lists.centos.org/pipermail/centos/2010-September/099473.html" target="_blank">http://lists.centos.org/pipermail/centos/2010-September/099473.html</a> )<br>
> that mdraid10 with two drives is basically raid1 but that it has some<br>
> mirroring options. In the "far layout" mirroring option (where,<br>
> according to WP, "all the drives are divided into f sections and all<br>
> the chunks are repeated in each section but offset by one device")<br>
> reads are faster than mdraid1 or vanilla mdraid10 on two drives.<br>
<br>
</div>If any two copies of the same chunk end up on the same drive, then redundancy is completely lost.<br>
<br>
Therefore, without losing redundancy, mdraid10 over two drives will have to be identical to raid1.<br>
<br>
Reads on a raid1 can be serviced by either side of the mirror; I believe the policy is hard-coded to round-robin. I don't know whether it is smart enough to distinguish a sequential pattern from a random one and service sequential reads from only one side.<br>
<div class="im"><br>
>> For true RAID10 support in Linux you create multiple mdraid1 physical<br>
>> volumes, create a LVM volume group out of them and create logical<br>
>> volumes that interleave between these physical volumes.<br>
><br>
> Vanilla mdraid10 with four drives is "true raid10".<br>
<br>
</div>Well, as you stated above, that depends on the near or far layout pattern. You can get the same performance as a raid10, or better under certain workloads, but it really isn't a true raid10 in the sense that it isn't a stripe set of raid1s; it's a distributed mirror set.<br>
<br>
Now don't get me wrong, I'm not saying it's not as good as a true raid10. In fact I believe it to be better, as it provides far more flexibility and is a much simpler implementation; it just isn't really a raid10, but something completely new.<br>
<font color="#888888"><br>
-Ross<br>
</font><div><div></div><div class="h5"><br>
_______________________________________________<br>
CentOS mailing list<br>
<a href="mailto:CentOS@centos.org">CentOS@centos.org</a><br>
<a href="http://lists.centos.org/mailman/listinfo/centos" target="_blank">http://lists.centos.org/mailman/listinfo/centos</a><br>
</div></div></blockquote></div><br>