[CentOS] Software RAID10 - which two disks can fail?

Tue Apr 8 06:49:41 UTC 2014
Christopher Chan <christopher.chan at bradbury.edu.hk>

On Tuesday, April 08, 2014 03:47 AM, Rafał Radecki wrote:
> As far as I know raid10 is ~ "a raid0 built on top of two raid1" (
> http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_1.2B0 - raid10). So I
> think that by default in my case:
No, Linux md raid10 is NOT a nested raid setup where you build a raid0 
on top of two raid1 arrays.

>
> /dev/sda6 and /dev/sdb6 form the first "raid1"
> /dev/sdd6 and /dev/sdc6 form the second "raid1"
>
> So is it so that if I fail/remove for example:
> - /dev/sdb6 and /dev/sdc6 (different "raid1's") - the raid10 will be
> usable/data will be ok?
> - /dev/sda6 and /dev/sdb6 (the same "raid1") - the raid10 will be not
> usable/data will be lost?
The man page for md, which has a section on "RAID10", describes the 
possibility of something that is absolutely impossible with a nested 
raid1+0 setup.

Excerpt: If, for example, an array is created with 5 devices and 2 
replicas, then space equivalent to 2.5 of the devices will be available, 
and every block will be stored on two different devices.
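For illustration only (the device names are hypothetical and the flags 
assume mdadm defaults), such an array could be created directly, which 
no nested raid1+0 arrangement can do:

  # 5 devices, 2 copies of every block, "near" layout
  mdadm --create /dev/md0 --level=10 --raid-devices=5 --layout=n2 \
      /dev/sd[abcde]6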

So contrary to this statement: "RAID10 provides a combination of RAID1 
and RAID0, and is sometimes known as RAID1+0.", Linux md raid10 is NOT 
raid1+0. It is something entirely new and different that is 
unfortunately called raid10, perhaps because it can produce a 
raid1+0-like layout as well as other layouts built on similar concepts.
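If you are not sure whether an existing array is md raid10 or a nested 
raid1+0, the standard tools will tell you (the md device name here is 
just a placeholder; the exact output varies):

  cat /proc/mdstat          # md raid10 appears as a single "raid10" array
  mdadm --detail /dev/md0   # a nested setup shows raid1 md devices as the
                            # members of a raid0 instead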


>
> I read in context of raid10 about replicas of data (2 by default) and the
> data layout (near/far/offset). I see in the output of mdadm -D the line
> "Layout : near=2, far=1" and am not sure which layout is exactly used and
> how it influences data layout/distribution in my case :|
>
> I would really appreciate a definite answer which partitions I can remove
> and which I cannot remove at the same time because I need to perform some
> disk maintenance tasks on this raid10 array. Thanks for all help!
>

If you want something that you can be sure about, do what I do: make two 
raid1 md devices and then use them to make a raid0 device. raid10 is 
something cooked up by Neil Brown, but it is not raid1+0. 
http://en.wikipedia.org/wiki/Linux_MD_RAID_10#LINUX-MD-RAID-10
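A minimal sketch of that nested approach, assuming four partitions like 
the ones mentioned above (the device names and md numbers are only 
placeholders, adjust them for your system):

  # two raid1 mirrors
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc6 /dev/sdd6
  # a raid0 striped across the two mirrors
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2

With that layout there is no guessing: you can lose one disk from each 
mirror, but never both disks of the same mirror.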