Hi,
Can someone please assist me with some software RAID 1+0 setup instructions? I have searched the web, but couldn't find any. I found a lot of RAID 10 setup instructions, but they don't help me.
On 31/08/2009, at 1:18 AM, Rudi Ahlers wrote:
Hi Rudi, RAID 10 and RAID 1+0 are the same thing.
See here: http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29 or here: http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
On Sun, Aug 30, 2009 at 5:53 PM, Oliver Ransom <oliver@ransom.com.au> wrote:
Hi Oliver,
It's not the same thing :) Although they work and do the same, the installer CD & mdadm need to support it. The specific appliance that I want to install doesn't support RAID 10, so I need to install RAID 1 + RAID 0, i.e. set up 2x RAID 1 mirrors and then stripe them in RAID 0 - but once the first mirrors are set up, I can't stripe them.
I've seen people use LVM to add them to one volume, but the installer doesn't seem to like that either.
Rudi Ahlers wrote:
Hmmm... 'specific appliance'.
This doesn't sound like you are installing CentOS. This makes it *really* hard for us to help you since we have absolutely no idea what you are actually doing. ;)
A) What are you actually doing?
B) Do you have to have RAID10 during install or is it sufficient that you can build a data 'drive' after install?
On Sun, Aug 30, 2009 at 9:10 PM, Jerry Franz <jfranz@freerun.com> wrote:
heh, I was a bit hesitant to say what it is, but it's fine I guess - seeing as my previous post dealt with the matter of NAS devices :)
I'm installing Openfiler 2.3, which looks very, very similar to the CentOS installer. My reason for RAID 1+0 (like I said, it doesn't support RAID 10) is the higher level of redundancy & speed, and I would like to utilize the drives to their max.
I've set up /boot on sda1 & sdc1 (100MB each) on RAID 1, then configured sda2, sdb2, sdc2, sdd2 as 4GB swap each (no RAID), then the remainder on sda3, sdb3, sdc3 & sdd3 as 2x RAID 1 mirrors - i.e. sda3 & sdc3 on /dev/md2 & sdb3 & sdd3 on /dev/md3.
Yet, I can't create another RAID, i.e. /dev/md4 of md2 & md3 combined, nor can I tell the LVM setup to span md2 & md3 into the same volume. So, now I'm stuck.
Rudi Ahlers wrote:
Note that the 'speed' benefit of the raid0 layer only applies when you are working with files large enough to need to span drive cylinders _and_ have no concurrent operations that would rather have that other drive's head somewhere else.
And you probably really don't want to make the heads seek away from where your applications want them to do operating system trivia like logging all the time.
Although they work and do the same, the installer CD & mdadm need to support it. The specific appliance that I want to install doesn't support RAID 10, so I need to install RAID 1 + RAID 0, i.e. set up 2x RAID 1 mirrors and then stripe them in RAID 0 - but once the first mirrors are set up, I can't stripe them.
Aren't you using CentOS, then? You didn't tell us that... My answer was based on the assumption that you were talking about CentOS...
On Sun, Aug 30, 2009 at 9:15 PM, Miguel Medalha <miguelmedalha@sapo.pt> wrote:
No, I'm using Openfiler. To make a long story short, the CentOS 5.3 DVD that I have with me right now is corrupt, and the new one that I downloaded yesterday is corrupt as well; I think our ISP's cache is busted. So, I want to set up 1 of the 2 servers with Openfiler (as a proof of concept), and then the 2nd one with CentOS 5.3. But the idea is to download the missing / corrupt CentOS installer libs from another repository once the Openfiler server is up & running.
This page from openfiler.com clearly states the following:
"Openfiler supports RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10."
On Sun, Aug 30, 2009 at 9:44 PM, Miguel Medalha <miguelmedalha@sapo.pt> wrote:
http://www.openfiler.com/products/openfiler-architecture
Yet, I can't set up the RAID 10 they talk about; I only have RAID0, RAID1, RAID5 & RAID6 in the setup options, which is why I'm thinking of doing this another way.
Then maybe there is something wrong with your partitioning scheme or even physical layout. What drives do you have and how are they partitioned? How are they physically connected?
I beg your pardon, I didn't see your previous post with the above information.
At this point, I think you should undo the RAID 1 groups and revert to the 4 raw partitions marked as Linux Software Raid. Then try to create a RAID 10 group. If the guys from openfiler say that it is possible... I see no reasons to doubt them.
On Sun, Aug 30, 2009 at 9:57 PM, Miguel Medalha <miguelmedalha@sapo.pt> wrote:
There is no option to setup RAID 10.
But, let's get back to my previous request,
How would one set up RAID 1+0 (i.e. 2x mirrored RAID 1s and then a RAID 0 on top of them) on, say, CentOS 4.6?
There is no option to setup RAID 10 because you are trying to create an unsupported configuration. You should separate the OS from the data area.
Rudi Ahlers wrote:
Have you tried this the obvious way: using "mdadm create" for each step, giving the md devices created in the first step as the partitions for the RAID0 device?
But out of curiosity, why would you consider installing a CentOS 4.x now?
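For illustration, a minimal sketch of that two-step approach from a shell, with hypothetical device names that would of course have to match the actual layout:

  # create the two RAID 1 mirrors first
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdc3
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdd3
  # then stripe the two mirrors together as RAID 0
  mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/md2 /dev/md3
  # optionally record the arrays so they can be re-assembled later
  mdadm --detail --scan >> /etc/mdadm.conf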
On Sun, Aug 30, 2009 at 10:57 PM, Les Mikesell <lesmikesell@gmail.com> wrote:
Les, I have used SME Server 7.3 for a long time with great success; it runs on CentOS 4.7, so I would like to be able to do it with SME as well, since it doesn't support RAID 10.
I have tried mdadm in the shell, but was told RAID 10 wasn't supported. I forget the exact message, but it was something like "invalid RAID level". I could set RAID 5 & 6 though.
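For what it's worth, one quick way to see which RAID personalities the running kernel actually supports is shown below (whether the raid10 module is even present on the SME/CentOS 4 install kernel is an assumption, not something verified here):

  # the first line of /proc/mdstat lists the personalities the kernel knows about
  head -1 /proc/mdstat
  # try to load the raid10 personality if it was built as a module
  modprobe raid10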
Rudi Ahlers wrote:
How would one set up RAID 1+0 (i.e. 2x mirrored RAID 1s and then a RAID 0 on top of them) on, say, CentOS 4.6?
Setup both RAID-1 arrays then stripe them with LVM?
http://www.redhat.com/magazine/009jul05/features/lvm2/
Though I'd prefer to opt for a hardware RAID card; I think you said you had SATA disks, in which case I'd go for a 3ware.
nate
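For reference, a rough sketch of that approach, assuming the two mirrors already exist as /dev/md2 and /dev/md3 (names and sizes are only illustrative):

  # make the two mirrors LVM physical volumes and group them
  pvcreate /dev/md2 /dev/md3
  vgcreate vg_data /dev/md2 /dev/md3
  # create a logical volume striped across both mirrors (2 stripes, 64KB stripe size)
  lvcreate --name lv_data --size 200G --stripes 2 --stripesize 64 vg_data
  mkfs.ext3 /dev/vg_data/lv_data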
On Mon, Aug 31, 2009 at 3:12 AM, nate <centos@linuxpowered.net> wrote:
Nate, this is what I was looking for :)
I'm going away for 2 weeks now, but will definitely give it a shot as soon as I can.
Setup both RAID-1 arrays then stripe them with LVM?
I would NOT do that. You should let the md layer handle all things RAID and let lvm do just volume management.
To create a raid1+0 array, you first create the mirrors and then you create a striped array that consists of the mirror devices. There is another raid10 module that does its own thing with regards to 'raid10'; it is not supported by the installer and does not necessarily behave like raid1+0.
On Aug 30, 2009, at 10:33 PM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
You're under the assumption that they are two different systems.
Md RAID and LVM are both interfaces to the device mapper system which handles the LBA translation, duplication and parity calculation.
I have said it before, but I'll say it again, how much I wish md RAID and LVM would merge to provide a single interface for creation of volume groups that support different RAID levels.
To create a raid1+0 array, you first create the mirrors and then you create a striped array that consists of the mirror devices. There is another raid10 module that does its own thing with regards to 'raid10', is not supported by the installer and does not necessarily behave like raid1+0.
Problem is the install program doesn't support setting up RAID10 or layered MD devices.
I would definitely avoid layered MD devices as it's more complicated to resolve disk failures.
In my tests an LVM striped across two RAID1 devices gave the exact same performance as a RAID10, but it gave the added benefit of creating LVs with varying stripe segment sizes which is great for varying workloads.
-Ross
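To illustrate the per-LV stripe size point, once such a volume group exists something like the following is possible (names and sizes are made up):

  # small stripe segment for random-I/O workloads such as databases
  lvcreate --name lv_db --size 50G --stripes 2 --stripesize 16 vg_data
  # larger stripe segment for big sequential transfers
  lvcreate --name lv_media --size 150G --stripes 2 --stripesize 256 vg_data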
Ross Walker wrote:
I would NOT do that. You should let the md layer handle all things RAID and let lvm do just volume management.
You're under the assumption that they are two different systems.
You're under the assumption that they are not.
Md RAID and LVM are both interfaces to the device mapper system which handles the LBA translation, duplication and parity calculation.
Are they? Since when was md and dm the same thing? dm was added after md had had a long presence in the linux kernel...like since linux 2.0
I have said it before, but I'll say it again, how much I wish md RAID and LVM would merge to provide a single interface for creation of volume groups that support different RAID levels.
Good luck with that. If key Linux developers diss the zfs approach and vouch for the multi-layer approach, I do not ever see md and dm merging.
To create a raid1+0 array, you first create the mirrors and then you create a striped array that consists of the mirror devices. There is another raid10 module that does its own thing with regards to 'raid10', is not supported by the installer and does not necessarily behave like raid1+0.
Problem is the install program doesn't support setting up RAID10 or layered MD devices.
Oh? I have worked around it before even in the RH9 days. Just go into the shell (Hit F2), create what you want, go back to the installer. Are you so sure that anaconda does not support creating layered md devices? BTW, why are you talking about md devices now? I thought you said md and dm are the same?
I would definitely avoid layered MD devices as it's more complicated to resolve disk failures.
Huh?
I do not see what part of 'cat /proc/mdstat' will confuse you. It will always report which md device had a problem and it will report which device, be they md devices (rare) or disks.
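For illustration only (the device names and block counts below are made up), a layered setup with one failed member might report something like this, with the failed disk flagged directly:

  Personalities : [raid0] [raid1]
  md4 : active raid0 md3[1] md2[0]
        488282112 blocks 64k chunks
  md3 : active raid1 sdd3[1] sdb3[0](F)
        244141056 blocks [2/1] [_U]
  md2 : active raid1 sdc3[1] sda3[0]
        244141056 blocks [2/2] [UU]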
In my tests an LVM striped across two RAID1 devices gave the exact same performance as a RAID10, but it gave the added benefit of creating LVs with varying stripe segment sizes which is great for varying workloads.
Now that is complicating things. Is the problem in the dm layer or in the md layer...yada, yada
On Aug 31, 2009, at 7:59 PM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
You're under the assumption that they are not.
http://en.m.wikipedia.org/wiki/Device_mapper
If you want I can forward LXR references to MD and LVM into the device mapper code or LKML references that talk about rewriting MD and LVM for device mapper.
Md RAID and LVM are both interfaces to the device mapper system which handles the LBA translation, duplication and parity calculation.
Are they? Since when was md and dm the same thing? dm was added after md had had a long presence in the linux kernel...like since linux 2.0
Both MD RAID and LVM were rewritten to use the device mapper interface to mapped block devices back around the arrival of 2.6.
I have said it before, but I'll say it again, how much I wish md RAID and LVM would merge to provide a single interface for creation of volume groups that support different RAID levels.
Good luck with that. If key Linux developers diss the zfs approach and vouch for the multi-layer approach, I do not ever see md and dm merging.
I'm not talking ZFS, I'm not talking about merging the file system, just the RAID and logical volume manager which could make designing installers and managing systems simpler.
To create a raid1+0 array, you first create the mirrors and then you create a striped array that consists of the mirror devices. There is another raid10 module that does its own thing with regards to 'raid10', is not supported by the installer and does not necessarily behave like raid1+0.
Problem is the install program doesn't support setting up RAID10 or layered MD devices.
Oh? I have worked around it before even in the RH9 days. Just go into the shell (Hit F2), create what you want, go back to the installer. Are you so sure that anaconda does not support creating layered md devices? BTW, why are you talking about md devices now? I thought you said md and dm are the same?
You know what, let me try just that today, I have a new install to do, so I'll try pre-creating a RAID10 on install and report back. First I'll try layered MD devices and then I'll try creating a RAID10 md device and we'll see if it can even boot off them.
I would definitely avoid layered MD devices as it's more complicated to resolve disk failures.
Huh?
I do not see what part of 'cat /proc/mdstat' will confuse you. It will always report which md device had a problem and it will report which device, be they md devices (rare) or disks.
Having a complex setup is always more error prone to a simpler one. Always.
In my tests an LVM striped across two RAID1 devices gave the exact same performance as a RAID10, but it gave the added benefit of creating LVs with varying stripe segment sizes which is great for varying workloads.
Now that is complicating things. Is the problem in the dm layer or in the md layer...yada, yada
Not really: have multiple software or hardware RAID1s, make a VG out of them, then create LVs. One doesn't have to do anything special if it isn't needed, but it's there and simple to do if you need to. Try changing the segment size of an existing software or hardware array when it's already set up.
You know, you really are an arrogant person who doesn't tolerate anyone disagreeing with them. You are the embodiment of everything people talk about when they talk about the Linux community's elitist attitude, and I wish you would make at least a small attempt to change your attitude.
-Ross
http://en.m.wikipedia.org/wiki/Device_mapper
If you want I can forward LXR references to MD and LVM into the device mapper code or LKML references that talk about rewriting MD and LVM for device mapper.
md can make use of dm to get devices for its use but it certainly does not just ask dm to create a raid1 device. md does the actual raiding itself. Not dm.
Md RAID and LVM are both interfaces to the device mapper system which handles the LBA translation, duplication and parity calculation.
Are they? Since when was md and dm the same thing? dm was added after md had had a long presence in the linux kernel...like since linux 2.0
Both MD RAID and LVM were rewritten to use the device mapper interface to mapped block devices back around the arrival of 2.6.
That does not equate to md and dm being the same thing. Like you say, 'TO USE' dm. When did that mean they are the same thing?
I have said it before, but I'll say it again, how much I wish md RAID and LVM would merge to provide a single interface for creation of volume groups that support different RAID levels.
Good luck with that. If key Linux developers diss the zfs approach and vouch for the multi-layer approach, I do not ever see md and dm merging.
I'm not talking ZFS, I'm not talking about merging the file system, just the RAID and logical volume manager which could make designing installers and managing systems simpler.
Good luck taking Neil Brown out then. http://lwn.net/Articles/169142/ and http://lwn.net/Articles/169140/
Get rid of Neil Brown and md will disappear. I think.
You know what, let me try just that today, I have a new install to do, so I'll try pre-creating a RAID10 on install and report back. First I'll try layered MD devices and then I'll try creating a RAID10 md device and we'll see if it can even boot off them.
Let me just point out that I never said you can boot off a raid1+0 device. I only said that you can create a raid1+0 device at install time. /boot will have to be on a raid1 device. The raid1+0 device can be used for other filesystems including root or as a physical volume. Forget raid10, that module is not even available at install time with Centos 4 IIRC. Not sure about Centos 5.
Having a complex setup is always more error prone to a simpler one. Always.
-_-
Both are still multilayered...just different codepaths/tech. I do not see how lvm is simpler than md.
In my tests an LVM striped across two RAID1 devices gave the exact same performance as a RAID10, but it gave the added benefit of creating LVs with varying stripe segment sizes which is great for varying workloads.
Now that is complicating things. Is the problem in the dm layer or in the md layer...yada, yada
Not really, have multiple software or hardware RAID1s make a VG out of them, then create LVs. One doesn't have to do anything special if it isn't needed, but it's there and simple to do if you need to. Try changing the segment size of an existing software or hardware array when it's already setup.
Yeah, using lvm to stripe is certainly more convenient.
You know, you really are an arrogant person who doesn't tolerate anyone disagreeing with them. You are the embodiment of everything people talk about when they talk about the Linux community's elitist attitude, and I wish you would make at least a small attempt to change your attitude.
How have I been elitist? Did I tell you to get lost like elites like to do? Did I snub you or something? Only you can say that I made assumptions and not you? ???
On Tue, Sep 1, 2009 at 11:09 AM, Chan Chung Hang Christopher <christopher.chan@bradbury.edu.hk> wrote:
md can make use of dm to get devices for its use but it certainly does not just ask dm to create a raid1 device. md does the actually raiding itself. Not dm.
Actually I am going to eat crow on this.
While device mapper has support for "fake raid" devices and managing RAID under those, the actual kernel RAID modules are still handled under linux-raid. Though there is an ongoing effort to bring these into device mapper, it still isn't there yet.
That does not equate to md and dm being the same thing. Like you say, 'TO USE' dm. When did that mean they are the same thing?
As I stated above I was wrong here.
Good luck taking Neil Brown out then. http://lwn.net/Articles/169142/ and http://lwn.net/Articles/169140/
Get rid of Neil Brown and md will disappear. I think.
People change over time and if a convincing argument can be made why device mapper and linux raid should merge code then I'm sure Neil would reconsider his stance.
To create a raid1+0 array, you first create the mirrors and then you create a striped array that consists of the mirror devices. There is another raid10 module that does its own thing with regards to 'raid10', is not supported by the installer and does not necessarily behave like raid1+0.
Problem is the install program doesn't support setting up RAID10 or layered MD devices.
Oh? I have worked around it before even in the RH9 days. Just go into the shell (Hit F2), create what you want, go back to the installer. Are you so sure that anaconda does not support creating layered md devices?
I tested it and it doesn't work/isn't supported.
BTW, why are you talking about md devices now? I thought you said md and dm are the same?
You know what, let me try just that today, I have a new install to do, so I'll try pre-creating a RAID10 on install and report back. First I'll try layered MD devices and then I'll try creating a RAID10 md device and we'll see if it can even boot off them.
Let me just point out that I never said you can boot off a raid1+0 device. I only said that you can create a raid1+0 device at install time. /boot will have to be on a raid1 device. The raid1+0 device can be used for other filesystems including root or as a physical volume. Forget raid10, that module is not even available at install time with Centos 4 IIRC. Not sure about Centos 5.
My tests had a separate RAID1 for /boot to take the whole booting off of RAID10 out of the picture.
The problem I had with pre-creating the layered MD devices for anaconda is that, while I was able to do it (after a couple of reboots to get it to see the partitioning), anaconda didn't actually start the arrays until further into the install process, so it wasn't able to see the nested array, and starting the arrays manually didn't help because anaconda wouldn't re-scan the devices.
-_-
Both are still multilayered...just different codepaths/tech. I do not see how lvm is simpler than md.
Well, running 2 layers of MD is not as trivial to set up and maintain as 1 layer of MD and 1 layer of LVM.
Yeah, using lvm to stripe is certainly more convenient.
You know, you really are an arrogant person who doesn't tolerate anyone disagreeing with them. You are the embodiment of everything people talk about when they talk about the Linux community's elitist attitude, and I wish you would make at least a small attempt to change your attitude.
How have I been elitist? Did I tell you to get lost like elites like to do? Did I snub you or something? Only you can say that I made assumptions and not you? ???
Maybe I'm too thin-skinned, but the tone of your earlier posts came across as a little smug. Then again, it may just be me being too sensitive; your latest post didn't come across that way.
-Ross
On Tue, Sep 1, 2009 at 9:44 AM, Ross Walker <rswwalker@gmail.com> wrote:
You know what, let me try just that today, I have a new install to do, so I'll try pre-creating a RAID10 on install and report back. First I'll try layered MD devices and then I'll try creating a RAID10 md device and we'll see if it can even boot off them.
Ok, I verified that inside anaconda one cannot create layered MD RAID arrays because once one forms an array there is no choice to create a volume of type "Software RAID". The RAID choices are RAID0, RAID1, RAID5 or RAID6 during install, no RAID10.
I can create multiple RAID1s, though, of type "LVM Physical Volume" and then create a volume group composed of those. I can then create a root LV and a swap LV, though these will not be striped, because LVM doesn't default to striping PVs but to concatenating them. So in order to stripe these I'll need to leave enough free space to create striped versions, dump and restore from the old root to the new root, and then edit the fstab/grub, run mkinitrd and reboot. Not exactly convenient, but that's down to LVM's default policy of concatenating PVs instead of striping them... oh well.
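A rough sketch of that shuffle, with made-up VG/LV names and sizes, assuming enough free extents were left in the volume group:

  # carve a striped replacement for the root LV out of the free space
  lvcreate --name lv_root_striped --stripes 2 --stripesize 64 --size 20G VolGroup00
  mkfs.ext3 /dev/VolGroup00/lv_root_striped
  mkdir /mnt/newroot && mount /dev/VolGroup00/lv_root_striped /mnt/newroot
  # copy the old root across
  dump -0f - / | ( cd /mnt/newroot && restore -rf - )
  # then point /etc/fstab and grub.conf at the new LV and rebuild the initrd
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)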
As far as creating a RAID10 at the command prompt goes, the dm-raid10 kernel module is missing from the install image, so no luck directly creating a RAID10. After a couple of reboots I was able to create a layered setup, but anaconda didn't recognize it (either immediately, or after a reboot) to be able to perform an install on it, because it doesn't start the arrays on startup and so never finds the nested one.
So if it worked for you in RH9, it no longer works anymore.
Maybe because RH9 had a separate MD RAID implementation and not the device-mapper implementation.
-Ross
No surprise about raid10. I take it you tried this with Centos 5? Thanks for the testing. I cannot believe that it is no longer possible in Centos4/RHEL4 and later. I had done it too with Fedora Core 2 which is 2.6.4 based.
So if it worked for you in RH9, it no longer works anymore.
Maybe because RH9 had a separate MD RAID implementation and not the device-mapper implementation.
RH9 was 2.4 and had no fancy dm. BTW, md raid is still separate. But you already said that in your other post.
On Sun, Aug 30, 2009 at 9:53 PM, Miguel Medalha <miguelmedalha@sapo.pt> wrote:
Then maybe there is something wrong with your partitioning scheme or even physical layout. What drives do you have and how are they partitioned? How are they physically connected?
Sure, that's the first thing to check :)
I have 4x 250GB HDD's, and I partitioned them as follows:
/dev/sda1 - 100MB, type software RAID
/dev/sda2 - 4096MB, type swap
/dev/sda3 - 240GB, type software RAID
/dev/sdb1 - 100MB, type software RAID
/dev/sdb2 - 4096MB, type swap
/dev/sdb3 - 240GB, type software RAID
/dev/hdb1 - 100MB, type software RAID
/dev/hdb2 - 4096MB, type swap
/dev/hdb3 - 240GB, type software RAID
/dev/hdd1 - 100MB, type software RAID
/dev/hdd2 - 4096MB, type swap
/dev/hdd3 - 240GB, type software RAID
Now, from here I should have been able to create a RAID 10 device from sda3, sdb3, hdb3 & hdd3 - but RAID 10 isn't available.
This page from openfiler.com clearly states the following:
"Openfiler supports RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10."
The page even shows the dialog box to create a RAID 10 group!
On Sun, Aug 30, 2009 at 9:50 PM, Miguel Medalha <miguelmedalha@sapo.pt> wrote:
What I could gather is that that particular setup is only available once I have it installed, meaning I can't have the underlying OS itself running on RAID 10.
I don't know if this makes sense, but to get their RAID 10, I first need to install it, then use the web interface to set up RAID 10. Instead I would like to have Openfiler running on the RAID 10 setup as well.
Rudi Ahlers wrote:
You are probably better off if you don't mix the installed OS with the partitions you want to export, so you would use a small raid1 for the OS, then add everything else after it is up and running. I'd do it that way for other distributions too, even if I eventually want /home or /var on different raid partitions. It is easy enough to mount the new space under a temporary name, copy over the existing contents, rename the old directories and set up fstab to mount the new ones when you reboot.
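A sketch of that post-install shuffle, using /home as the example and illustrative device names (and assuming the running kernel has the raid10 personality; otherwise the layered mirrors-plus-stripe shown earlier works the same way):

  # build the data array once the OS is running from its own raid1
  mdadm --create /dev/md4 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
  mkfs.ext3 /dev/md4
  mkdir /mnt/new && mount /dev/md4 /mnt/new
  # copy the existing contents, then swap the directories over
  cp -a /home/. /mnt/new/
  umount /mnt/new
  mv /home /home.old && mkdir /home
  echo "/dev/md4  /home  ext3  defaults  1 2" >> /etc/fstab
  mount /home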
I don't know if this makes sense, but to get their RAID10, I first need to install it, then use the web interface to setup RAID 10. Instead I would like to have openfiler running on the RAID 10 setup as well.
I don't think that you have a significant gain by doing so. Aren't you being a little too rigid there?
It seems to me that it makes much more sense to install the OS in a separate partition, maybe on RAID1, and then create a RAID10 for your data with the help of the openfiler interface.
Can someone please assist me with some software RAID 1+0 setup instructions? I have searched the web, but couldn't find any. I found a lot of RAID 10 setup instructions, but they don't help me.
As Oliver Ransom replied to you, RAID 1+0 (not to be confused with RAID 0+1) is RAID 10. mdadm has direct support for RAID 10. I am using it on CentOS 5.3 and it works really well.
You might be interested in this article:
"Why is RAID 1+0 better than RAID 0+1?" http://aput.net/~jheiss/raid10/
Miguel Medalha wrote:
As Oliver Ransom replied to you, RAID 1+0 (not to be confused with RAID 0+1) is RAID 10. mdadm has direct support for RAID 10. I am using it on CentOS 5.3 and it works really well.
RAID 1+0 is NOT RAID 10. raid 1+0 is achieved using the combination of raid1 and raid0 personalities. Raid10 is a different animal and has its own personality. (personality as reported by 'cat /proc/mdstat' aka md modules)
raid10 was only introduced in 2.6.9 and Oliver's link clearly shows that it is 'Non-standard' or not raid1+0.
You might be interested in this article:
"Why is RAID 1+0 better than RAID 0+1?" http://aput.net/~jheiss/raid10/
The whole raid1+0 or raid0+1 argument was really only relevant in the days of PATA, when one disk dying on one channel might take out the other disk on the same channel or the controller. Now that we are using SATA, it is MOOT.
On 31/08/2009, at 1:11 PM, Christopher Chan wrote:
RAID 1+0 is NOT RAID 10. raid 1+0 is achieved using the combination of raid1 and raid0 personalities. Raid10 is a different animal and has its own personality. (personality as reported by 'cat /proc/mdstat' aka md modules)
raid10 was only introduced in 2.6.9 and Oliver's link clearly shows that it is 'Non-standard' or not raid1+0.
RAID 10 and 1+0 are referred to interchangeably in the Nested_RAID_levels article, "RAID 1+0, sometimes called RAID 1&0, or RAID 10".
I'm a bit confused now!
Oliver Ransom wrote:
RAID 10 and 1+0 are referred to interchangeably in the Nested_RAID_levels article, "RAID 1+0, sometimes called RAID 1&0, or RAID 10".
I'm a bit confused now!
When it comes to Linux software raid, the raid10 module does things differently from what you get from a combination of raid1 + raid0. When it comes to hardware raid, raid10 is very likely to mean raid1+0.
The problem is that the chum behind the raid10 module chose a name that was not at that time strictly defined, and people would use raid10 in their articles to mean raid1+0. The raid10 module can do something similar to what you get with raid1+0, but it can also do completely different things in the way it handles data. You can, for example, have three disks and get a 'raid10' array. You do not have to have an even number of disks to create a 'raid10' device with the raid10 module.
http://neil.brown.name/blog/20040827225440
BTW, the raid10 module does not have the same reputation as the raid1 and raid0 modules... even the raid5 module did not have a good reputation five years ago, around when raid10 was introduced, and the raid5 module had been around for quite some time by then.
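For example, the raid10 module will accept a three-disk array, something a plain stripe of mirrors cannot express (illustrative devices, default near-copies layout):

  mdadm --create /dev/md5 --level=10 --layout=n2 --raid-devices=3 /dev/sda4 /dev/sdb4 /dev/sdc4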
You might be interested in this article:
"Why is RAID 1+0 better than RAID 0+1?" http://aput.net/~jheiss/raid10/
The whole raid1+0 or raid0+1 argument was really only relevant in the days of pata when one disk dying on one channel might take out the other disk on the same channel or the controller. Now that we are using SATA, it is MOOT.
No, it is not moot. Have you read the article? It has nothing to do with PATA or SATA drives but with probabilities of failure under normal and degraded state.
"Mathematically, the difference is that the chance of system failure with two drive failures in a RAID 0+1 system with two sets of drives is (n/2)/(n - 1) where n is the total number of drives in the system. The chance of system failure in a RAID 1+0 system with two drives per mirror is 1/(n - 1). So, using the 8 drive systems shown in the diagrams, the chance that losing a second drive would bring down the RAID system is 4/7 with a RAID 0+1 system and 1/7 with a RAID 1+0 system."
"Another difference between the two RAID configurations is performance when the system is in a degraded state, i.e. after it has lost one or more drives but has not lost the right combination of drives to completely fail."
"RAID 1+0 is still more secure."
Miguel Medalha wrote:
No, it is not moot. Have you read the article? It has nothing to do with PATA or SATA drives but with probabilities of failure under normal and degraded state.
"Mathematically, the difference is that the chance of system failure with two drive failures in a RAID 0+1 system with two sets of drives is (n/2)/(n - 1) where n is the total number of drives in the system. The chance of system failure in a RAID 1+0 system with two drives per mirror is 1/(n - 1). So, using the 8 drive systems shown in the diagrams, the chance that losing a second drive would bring down the RAID system is 4/7 with a RAID 0+1 system and 1/7 with a RAID 1+0 system."
Oh sorry, I never argued about eight-drive systems years ago (didn't have them then, too poor), and there is no argument about raid1+0 being the way to do it beyond four drives. It is too obvious that striping three drives and then mirroring them is more risky than making three mirrors and then striping them. Any argument back then about whether one should do raid0+1 was really limited to those who had four-drive systems and never thought beyond four drives.
So it is really moot unless one ignores the obvious or fails to think.
"Another difference between the two RAID configurations is performance when the system is in a degraded state, i.e. after it has lost one or more drives but has not lost the right combination of drives to completely fail."
"RAID 1+0 is still more secure."
Hear, hear. Man, I should leave the 90s back there.
Chan Chung Hang Christopher wrote:
But note that drive capacity has gone up too, often eliminating the need for many-disk arrays. For example, you can go up to 2TB on a single drive, so a simple RAID1 mirror may be all you need, and if you can arrange the mount points to match the use pattern you may get better performance out of several separate raid1 partitions where the heads can seek independently instead of essentially tying them all together in a single array. A many-disk array may do better on artificial benchmarks accessing one big file, but that's not what most computers actually do - and raid1 has the advantages of not slowing down when a member fails and you can recover the data from any single drive.
On Aug 31, 2009, at 11:13 AM, Les Mikesell lesmikesell@gmail.com wrote:
Ah, the larger capacity is double-edged, as it also increases the chance of a double failure in a single-parity RAID because the resilver time is substantially increased. So if doing parity on large drives you should really go for RAID6, which unfortunately means you need 5 drives instead of 3 to get the same economy as a RAID5.
-Ross
Les Mikesell wrote:
My storage array does that, sort of, to the extreme: per drive there are hundreds of mini RAID arrays, which help stripe the load better, distribute parity better (no dedicated parity disks), and rebuild from failures faster (a 750GB disk can be rebuilt in 3 hours with no impact to system performance).
Just ran a calculation on the numbers: there are 80,007 mini RAID arrays (composed of 256MB logical disks) on my system with 200 750GB drives, so about 400 RAID arrays per disk on average.
makes me a happy camper
nate
But note that drive capacity has gone up too, often eliminating the need for many-disk arrays. For example, you can go up to 2TB on a single drive, so a simple RAID1 mirror may be all you need, and if you can arrange the mount points to match the use pattern you may get better performance out of several separate raid1 partitions where the heads can seek independently instead of essentially tying them all together in a
That is assuming a multi-platter disk and that you can actually partition things in such a way that different heads get to exclusively handle partitions most of the time.
single array. A many-disk array may do better on artificial benchmarks accessing one big file, but that's not what most computers actually do - and raid1 has the advantages of not slowing down when a member fails and you can recover the data from any single drive.
We are still talking about raid1+0 here...no raid5 or what not you know.
Rudi Ahlers wrote:
Hi,
Can someone please assist me with some software RAID 1+0 setup instructions? I have searched the web, but couldn't find any. I found a lot of RAID 10 setup instructions, but they don't help me.
Hi,
what is your purpose with this Level? I'm asking because I had done some raid level tests some time ago with hardware raid controllers/systems from 3ware, sun and iscsi-raidsystems and some softwareraid setups.
I haven't seen significant performance differences using different levels. (I'm not comparing the controllers.)
Regards,
Götz
what is your purpose with this Level? I'm asking because I had done some raid level tests some time ago with hardware raid controllers/systems from 3ware, sun and iscsi-raidsystems and some softwareraid setups.
The purpose of using RAID 10 is, of course, performance coupled with full redundancy.
I haven't seen significant performance differences using different levels. (I'm not comparing the controllers.)
Did you test RAID 10 also?
I am using software RAID 10 with CentOS 5.3 and I surely did see a *huge* performance increase with RAID 10 compared to RAID 1 only.
Miguel Medalha wrote:
what is your purpose with this Level? I'm asking because I had done some raid level tests some time ago with hardware raid controllers/systems from 3ware, sun and iscsi-raidsystems and some softwareraid setups.
The purpose of using RAID 10 is, of course, performance coupled with full redundancy.
I haven't seen significant performance differences using different levels. (I'm not comparing the controllers.)
Did you test RAID 10 also?
I am using software RAID 10 with CentOS 5.3 and I surely did see a *huge* performance increase with RAID 10 compared to RAID 1 only.
I tested 1+0 vs. 0+1 vs. 5 on the 3ware, the Sun built-in RAID system, and using software RAID on another (older) server.
On the iSCSI systems and the 3ware controller I tested RAID 5 vs. 6 as well.
Maybe I should have mentioned that.
My conclusion on my systems was:
Level 5 is the best concerning performance/capacity.
Regards,
Götz
On Aug 30, 2009, at 1:02 PM, Götz Reinicke <goetz.reinicke@filmakademie.de> wrote:
Level 5 is the best concerning performance/capacity.
Then you were only testing sequential I/O, and that only gets you so far. For random I/O nothing beats RAID10.
-Ross
Götz Reinicke wrote:
Level 5 is the best concerning performance/capacity.
That's very strange, considering what the drives physically have to do for raid 5, especially on small writes. Did you have a very large number of drives in the array?
2009/8/30 Götz Reinicke goetz.reinicke@filmakademie.de:
what is your purpose with this Level? I'm asking because I had done some raid level tests some time ago with hardware raid controllers/systems from 3ware, sun and iscsi-raidsystems and some softwareraid setups.
With the limited resources (the motherboard supports 2x SATA & 4x IDE, but I don't want to use more than 2 IDE HDDs) and a small form factor chassis, I would rather sacrifice space (460GB usable on 4x 250GB HDDs) and run RAID 10 for higher reliability & speed than run RAID 5 and have a higher risk of complete data loss.
RAID 10 can survive two drive failures (as long as they are not in the same mirror), while RAID 5 can survive only one.