I have been reading a lot of material, trying to find out whether a two-drive RAID 10 setup is any better or worse than a normal RAID 1 setup. I have two 1 TB drives for my data and a separate system drive; I am only interested in doing RAID on my data.
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2 /dev/sdb1 /dev/sdc1
I have also read about the near and far layouts and was going to experiment with them, so I was wondering whether anyone had any insights for a two-drive setup. Thanks.
On 10-09-24 10:27 PM, Tom Bishop wrote:
I have been reading a lot of material, trying to find out whether a two-drive RAID 10 setup is any better or worse than a normal RAID 1 setup. I have two 1 TB drives for my data and a separate system drive; I am only interested in doing RAID on my data.
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2 /dev/sdb1 /dev/sdc1
I have also read about the near and far layouts and was going to experiment with them, so I was wondering whether anyone had any insights for a two-drive setup. Thanks.
RAID 10 requires 4 drives. First you would make two RAID 0 arrays, then create a third array that is RAID 1, using the two RAID 0 arrays as its devices.
With only two drives, your options are RAID 1 (mirroring - proper redundancy) or RAID 0 (striping only - lose one drive and you lose *all* data).
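For reference, a plain two-drive RAID 1 mirror would be created along these lines (a minimal sketch; the partition names are carried over from the original post and may differ on your system):

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1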
On Fri, Sep 24, 2010 at 7:50 PM, Digimer linux@alteeve.com wrote:
On 10-09-24 10:27 PM, Tom Bishop wrote:
I have been reading a lot of material, trying to find out whether a two-drive RAID 10 setup is any better or worse than a normal RAID 1 setup. I have two 1 TB drives for my data and a separate system drive; I am only interested in doing RAID on my data.
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2 /dev/sdb1 /dev/sdc1
I have also read about the near and far layouts and was going to experiment with them, so I was wondering whether anyone had any insights for a two-drive setup. Thanks.
RAID 10 requires 4 drives. First you would make two RAID 0 arrays, then create a third array that is RAID 1, using the two RAID 0 arrays as its devices.
This would be RAID 0+1: striped sets mirrored together. RAID 1+0 is mirrored disk sets striped together.
http://en.wikipedia.org/wiki/RAID#Nested_.28hybrid.29_RAID
With only two drives, your options are RAID 1 (mirroring - proper redundancy) or RAID 0 (striping only - lose one drive and you lose *all* data).
-- Digimer E-Mail: linux@alteeve.com AN!Whitepapers: http://alteeve.com Node Assassin: http://nodeassassin.org
On Fri, Sep 24, 2010 at 10:50 PM, Digimer linux@alteeve.com wrote:
On 10-09-24 10:27 PM, Tom Bishop wrote:
I have been reading a lot of material, trying to find out whether a two-drive RAID 10 setup is any better or worse than a normal RAID 1 setup. I have two 1 TB drives for my data and a separate system drive; I am only interested in doing RAID on my data.
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2 /dev/sdb1 /dev/sdc1
I have also read about the near and far layouts and was going to experiment with them, so I was wondering whether anyone had any insights for a two-drive setup. Thanks.
RAID 10 requires 4 drives. First you would make two RAID 0 arrays, then create a third array that is RAID 1, using the two RAID 0 arrays as its devices.
With only two drives, your options are RAID 1 (mirroring - proper redundancy) or RAID 0 (striping only - lose one drive and you lose *all* data).
mdraid does offer a 2-disk raid10 option but it is basically raid1 with some extra mirroring options: http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
You can specify the layout options with "--layout". From the man page:
<begin> The layout options for RAID10 are one of 'n', 'o' or 'f' followed by a small number. The default is 'n2'.
n signals 'near' copies. Multiple copies of one data block are at similar offsets in different devices.
o signals 'offset' copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, and are one chunk further down.
f signals 'far' copies (multiple copies have very different offsets). See md(4) for more detail about 'near' and 'far'.
The number is the number of copies of each datablock. 2 is normal, 3 can be useful. This number can be at most equal to the number of devices in the array. It does not need to divide evenly into that number (e.g. it is perfectly legal to have an 'n2' layout for an array with an odd number of devices). </end>
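To illustrate, the layout is chosen at creation time with --layout; a hedged sketch reusing the partition names from the original post (only one of these commands would actually be run, and f2 is the layout discussed later in this thread):

mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --layout=n2 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --layout=f2 --raid-devices=2 /dev/sdb1 /dev/sdc1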
On 09/24/2010 07:50 PM, Digimer wrote:
RAID 10 requires 4 drives. First you would make two RAID 0 arrays, then create a third array that is RAID 1, using the two RAID 0 arrays as its devices.
With only two drives, your options are RAID 1 (mirroring - proper redundancy) or RAID 0 (striping only - lose one drive and you lose *all* data).
That's 0+1 not 1+0.
And don't do it that way.
If you have a single drive failure with RAID 0+1 you've lost *all* of your redundancy - one more failure and you are dead. If you create two RAID 1 sets and then stripe them into a RAID 0, you get pretty much the same performance and space efficiency characteristics, but if you have a drive failure you still have partial redundancy. You could actually take a *second* drive failure as long as it was in the other RAID 1 pair. With 4 drives, RAID 0+1 can only survive 1 drive failure. With 4 drives in RAID 1+0 you can survive an average of 1.67 drive failures.
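As a sketch of that nested 1+0 construction with md (the device names /dev/sd[a-d]1 are placeholders, not taken from the original post): build two RAID 1 pairs first, then stripe them with a RAID 0 on top:

mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md3 --level=raid0 --raid-devices=2 /dev/md1 /dev/md2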
On 09/25/2010 01:06 PM, Benjamin Franz wrote:
If you have a single drive failure with RAID 0+1 you've lost *all* of your redundancy - one more failure and you are dead. [...]
Things get a bit 'grey' with mdraid10 and its extensions; have a look at http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10 for an overview.
- KB
And don't do it that way.
If you have a single drive failure with RAID 0+1 you've lost *all* of your redundancy - one more failure and you are dead. If you create two RAID 1 sets and then stripe them into a RAID 0, you get pretty much the same performance and space efficiency characteristics, but if you have a drive failure you still have partial redundancy. You could actually take a *second* drive failure as long as it was in the other RAID 1 pair. With 4 drives, RAID 0+1 can only survive 1 drive failure. With 4 drives in RAID 1+0 you can survive an average of 1.67 drive failures.
Indeed.
This article explains the odds of losing data with RAID 1+0 vs 0+1:
Why is RAID 1+0 better than RAID 0+1? http://www.aput.net/~jheiss/raid10/
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
On Fri, Sep 24, 2010 at 7:27 PM, Tom Bishop bishoptf@gmail.com wrote:
I have been reading a lot of material, trying to find out whether a two-drive RAID 10 setup is any better or worse than a normal RAID 1 setup. I have two 1 TB drives for my data and a separate system drive; I am only interested in doing RAID on my data.
So I set up my initial test like this:
mdadm -v --create /dev/md0 --chunk 1024 --level=raid10 --raid-devices=2 /dev/sdb1 /dev/sdc1
I have also read about the near and far layouts and was going to experiment with them, so I was wondering whether anyone had any insights for a two-drive setup. Thanks.
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
On Sep 25, 2010, at 9:11 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
True, but if you figure it out, mdraid10 with 2 drives = raid1; you would need 3 drives to get the distributed-copy feature of Neil's mdraid10.
Mdraid10 actually allows for a 3-drive raid10 set. It isn't raid10 per se, but a RAID level based on distributing copies of chunks around the spindles for redundancy.
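A hedged example of such a 3-drive set (placeholder device names; with the n2 layout, two copies of each chunk are spread across the three spindles):

mdadm --create /dev/md0 --level=raid10 --layout=n2 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1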
For true RAID10 support in Linux you create multiple mdraid1 physical volumes, create an LVM volume group out of them, and create logical volumes that interleave across these physical volumes.
This can give you the ability to extend an LVM RAID10 VG by adding RAID10 PVs to the VG. Unfortunately, there isn't a resilver feature in LVM, so you need to create a new LV to stripe it across all the members afterward; leave room in the VG to do that.
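A rough sketch of that mdraid1-plus-LVM approach, with placeholder device names, VG/LV names and sizes (the -i 2 option stripes the logical volume across the two mirrored PVs, -I sets the stripe size in KB):

mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md1 /dev/md2
vgcreate vg_data /dev/md1 /dev/md2
lvcreate -i 2 -I 64 -L 100G -n lv_data vg_data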
-Ross
On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 9:11 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
True, but if you figure it out, mdraid10 with 2 drives = raid1; you would need 3 drives to get the distributed-copy feature of Neil's mdraid10.
I had posted earlier ( http://lists.centos.org/pipermail/centos/2010-September/099473.html ) that mdraid10 with two drives is basically raid1 but that it has some mirroring options. In the "far layout" mirroring option (where, according to WP, "all the drives are divided into f sections and all the chunks are repeated in each section but offset by one device") reads are faster than mdraid1 or vanilla mdraid10 on two drives.
For true RAID10 support in Linux you create multiple mdraid1 physical volumes, create a LVM volume group out of them and create logical volumes that interleave between these physical volumes.
Vanilla mdraid10 with four drives is "true raid10".
On Sep 25, 2010, at 1:52 PM, Tom H tomh0665@gmail.com wrote:
On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 9:11 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
True, but if you figure it out, mdraid10 with 2 drives = raid1; you would need 3 drives to get the distributed-copy feature of Neil's mdraid10.
I had posted earlier ( http://lists.centos.org/pipermail/centos/2010-September/099473.html ) that mdraid10 with two drives is basically raid1 but that it has some mirroring options. In the "far layout" mirroring option (where, according to WP, "all the drives are divided into f sections and all the chunks are repeated in each section but offset by one device") reads are faster than mdraid1 or vanilla mdraid10 on two drives.
If you have any two copies of the same chunk on the same drive then redundancy is completely lost.
Therefore, without losing redundancy, mdraid10 over two drives will have to be identical to raid1.
Reads on a raid1 can be serviced by either side of the mirror, I believe the policy is hard coded to round robin. I don't know if it is smart enough to distinguish sequential pattern from random and only service sequential reads from one side or not.
For true RAID10 support in Linux you create multiple mdraid1 physical volumes, create a LVM volume group out of them and create logical volumes that interleave between these physical volumes.
Vanilla mdraid10 with four drives is "true raid10".
Well like you stated above that depends on the near or far layout pattern, you can get the same performance as a raid10 or better in certain workloads, but it really isn't a true raid10 in the sense that it isn't a stripe set of raid1s, but a distributed mirror set.
Now don't get me wrong I'm not saying it's not as good as a true raid10, in fact I believe it to be better as it provides way more flexibility and is a lot simpler of an implementation, but not really a raid10, but something completely new.
-Ross
Thanks for all of the input. I finally came across a good article summarizing what I needed; it looks like I am going to try the f2 option and then do some testing against the default n2 option. I am building the array as we speak, but it looks like building with the f2 option will take 24 hours vs 2 hours for the n2 option. This is on two 1 TB HDDs.
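If it helps, the initial sync progress and estimated finish time can be watched while the array builds, for example:

cat /proc/mdstat
watch -n 60 cat /proc/mdstat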
On Sat, Sep 25, 2010 at 3:04 PM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 1:52 PM, Tom H tomh0665@gmail.com wrote:
On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 9:11 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
True, but if you figure it out, mdraid10 with 2 drives = raid1; you would need 3 drives to get the distributed-copy feature of Neil's mdraid10.
I had posted earlier ( http://lists.centos.org/pipermail/centos/2010-September/099473.html ) that mdraid10 with two drives is basically raid1 but that it has some mirroring options. In the "far layout" mirroring option (where, according to WP, "all the drives are divided into f sections and all the chunks are repeated in each section but offset by one device") reads are faster than mdraid1 or vanilla mdraid10 on two drives.
If you have any two copies of the same chunk on the same drive then redundancy is completely lost.
Therefore, without losing redundancy, mdraid10 over two drives will have to be identical to raid1.
Reads on a raid1 can be serviced by either side of the mirror, I believe the policy is hard coded to round robin. I don't know if it is smart enough to distinguish sequential pattern from random and only service sequential reads from one side or not.
For true RAID10 support in Linux you create multiple mdraid1 physical volumes, create a LVM volume group out of them and create logical volumes that interleave between these physical volumes.
Vanilla mdraid10 with four drives is "true raid10".
Well like you stated above that depends on the near or far layout pattern, you can get the same performance as a raid10 or better in certain workloads, but it really isn't a true raid10 in the sense that it isn't a stripe set of raid1s, but a distributed mirror set.
Now don't get me wrong I'm not saying it's not as good as a true raid10, in fact I believe it to be better as it provides way more flexibility and is a lot simpler of an implementation, but not really a raid10, but something completely new.
-Ross
On Sat, Sep 25, 2010 at 4:04 PM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 1:52 PM, Tom H tomh0665@gmail.com wrote:
On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 9:11 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
True, but if you figure it out, mdraid10 with 2 drives = raid1; you would need 3 drives to get the distributed-copy feature of Neil's mdraid10.
I had posted earlier ( http://lists.centos.org/pipermail/centos/2010-September/099473.html ) that mdraid10 with two drives is basically raid1 but that it has some mirroring options. In the "far layout" mirroring option (where, according to WP, "all the drives are divided into f sections and all the chunks are repeated in each section but offset by one device") reads are faster than mdraid1 or vanilla mdraid10 on two drives.
If you have any two copies of the same chunk on the same drive then redundancy is completely lost.
Therefore, without losing redundancy, mdraid10 over two drives will have to be identical to raid1.
Reads on a raid1 can be serviced by either side of the mirror, I believe the policy is hard coded to round robin. I don't know if it is smart enough to distinguish sequential pattern from random and only service sequential reads from one side or not.
For true RAID10 support in Linux you create multiple mdraid1 physical volumes, create a LVM volume group out of them and create logical volumes that interleave between these physical volumes.
Vanilla mdraid10 with four drives is "true raid10".
Well like you stated above that depends on the near or far layout pattern, you can get the same performance as a raid10 or better in certain workloads, but it really isn't a true raid10 in the sense that it isn't a stripe set of raid1s, but a distributed mirror set.
Now don't get me wrong I'm not saying it's not as good as a true raid10, in fact I believe it to be better as it provides way more flexibility and is a lot simpler of an implementation, but not really a raid10, but something completely new.
You must've misunderstood me.
mdraid10 on two disks: it is raid1 but you have the option of mirroring, for example, cylinder 24 on disk 1 with cylinder 48 on disk 2; the Wikipedia article says that it makes reads faster (I don't understand why but that's a different story).
mdraid10 on four disks: it is true raid10 but you also have various "--layout=" options.
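For what it's worth, the layout actually in use on an existing array can be confirmed with:

mdadm --detail /dev/md0
cat /proc/mdstat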
Thanks, everyone, for the input. I have decided to go with the f2 option; however, the rebuild seems to be taking quite a long time, almost 24 hours. I have read that there are options for speeding this up but want to make sure they are OK to use; it has to do with setting the minimum speed limit:
# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max
On Sat, Sep 25, 2010 at 9:39 PM, Tom H tomh0665@gmail.com wrote:
On Sat, Sep 25, 2010 at 4:04 PM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 1:52 PM, Tom H tomh0665@gmail.com wrote:
On Sat, Sep 25, 2010 at 11:48 AM, Ross Walker rswwalker@gmail.com wrote:
On Sep 25, 2010, at 9:11 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Jacob Bresciani wrote:
RAID10 requires at least 4 drives, does it not?
Since it's a stripe set of mirrored disks, the smallest configuration I can see is 4 disks: 2 mirrored pairs striped.
He might be referring to what he can get from the mdraid10 (I know, Neil Brown could have chosen a better name), which is not quite the same as nested 1+0. Doing it the nested way, you need at least 4 drives. Using mdraid10 is another story. Thanks Neil for muddying the waters!
True, but if you figure it out, mdraid10 with 2 drives = raid1; you would need 3 drives to get the distributed-copy feature of Neil's mdraid10.
I had posted earlier ( http://lists.centos.org/pipermail/centos/2010-September/099473.html ) that mdraid10 with two drives is basically raid1 but that it has some mirroring options. In the "far layout" mirroring option (where, according to WP, "all the drives are divided into f sections and all the chunks are repeated in each section but offset by one device") reads are faster than mdraid1 or vanilla mdraid10 on two drives.
If you have any two copies of the same chunk on the same drive then redundancy is completely lost.
Therefore, without losing redundancy, mdraid10 over two drives will have to be identical to raid1.
Reads on a raid1 can be serviced by either side of the mirror, I believe the policy is hard coded to round robin. I don't know if it is smart enough to distinguish sequential pattern from random and only service sequential reads from one side or not.
For true RAID10 support in Linux you create multiple mdraid1 physical volumes, create a LVM volume group out of them and create logical volumes that interleave between these physical volumes.
Vanilla mdraid10 with four drives is "true raid10".
Well like you stated above that depends on the near or far layout pattern, you can get the same performance as a raid10 or better in certain workloads, but it really isn't a true raid10 in the sense that it isn't a stripe set of raid1s, but a distributed mirror set.
Now don't get me wrong I'm not saying it's not as good as a true raid10, in fact I believe it to be better as it provides way more flexibility and is a lot simpler of an implementation, but not really a raid10, but something completely new.
You must've misunderstood me.
mdraid10 on two disks: it is raid1 but you have the option of mirroring, for example, cylinder 24 on disk 1 with cylinder 48 on disk 2; the Wikipedia article says that it makes reads faster (I don't understand why but that's a different story).
mdraid10 on four disks: it is true raid10 but you also have various "--layout=" options.
From: Tom Bishop bishoptf@gmail.com
Thanks, everyone, for the input. I have decided to go with the f2 option; however, the rebuild seems to be taking quite a long time, almost 24 hours. I have read that there are options for speeding this up but want to make sure they are OK to use; it has to do with setting the minimum speed limit:
# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max
They just set how "aggressive" the reconstruction will be (depending on server load, etc.). If you set them low, the reconstruction will be slow but the server fast. If you set them high, the reconstruction will be fast but the server slow.
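For example, to favour a faster rebuild (the values are illustrative, in KB/s per device, and only last until reboot unless also added to /etc/sysctl.conf):

sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000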
JD
Mdraid10 actually allows for a 3-drive raid10 set. It isn't raid10 per se, but a RAID level based on distributing copies of chunks around the spindles for redundancy.
Isn't this what they call RAID 1e (RAID 1 Enhanced), which needs a minimum of 3 drives?
This seems to me a much better name for it than calling it "RAID 10"...
Miguel Medalha wrote:
Mdraid10 actually allows for a 3-drive raid10 set. It isn't raid10 per se, but a RAID level based on distributing copies of chunks around the spindles for redundancy.
Isn't this what they call RAID 1e (RAID 1 Enhanced), which needs a minimum of 3 drives?
This seems to me a much better name for it than calling it "RAID 10"...
Yes, it is RAID 1E. This is explicitly documented in the link that Karanbir provided: http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
Nataraj
On Sep 25, 2010, at 4:15 PM, Miguel Medalha miguelmedalha@sapo.pt wrote:
Mdraid10 actually allows for a 3-drive raid10 set. It isn't raid10 per se, but a RAID level based on distributing copies of chunks around the spindles for redundancy.
Isn't this what they call RAID 1e (RAID 1 Enhanced), which needs a minimum of 3 drives?
This seems to me a much better name for it than calling it "RAID 10"...
The raid1e type probably didn't exist when Neil Brown came up with the algorithm.
He should have patented it though...
Maybe he started out with the idea to create a raid10, but didn't want the complexity of managing sub-arrays so decided just to redistribute chunk copies instead and then it took off from there.
-Ross
The raid1e type probably didn't exist when Neil Brown came up with the algorithm.
You are probably right.
He should have patented it though...
Maybe...
Maybe he started out with the idea to create a raid10, but didn't want the complexity of managing sub-arrays so decided just to redistribute chunk copies instead and then it took off from there.
Yes. I didn't want to sound harsh to him. I am VERY grateful for his outstanding work.