So after troubleshooting this for about a week, I was finally able to create a RAID10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install.
Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the raid10 array won't show up. Looking through the logs (Alt-F3), I see the following warning:
WARNING: raid level RAID10 not supported, skipping md10.
I'm starting to hate the installer more and more. Why won't it let me install on this device, even though it works perfectly from the shell? Why am I the only one having this problem? Is nobody out there using md-based RAID10?
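For reference, this is roughly what I ran from the installer shell once the module was loaded (the partition names are just examples from my layout):

    modprobe raid10
    mdadm --create /dev/md10 --level=10 --raid-devices=4 \
          /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
    cat /proc/mdstat    # the array shows up and resyncs fine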
Russ
Most people install the OS on a 2 disk raid1, then create a separate raid10 for data storage.
Anaconda was never designed to create RAID5/RAID10 during install.
-Ross
Whether or not it was designed for it, anaconda does allow creating raid5 and raid6 arrays during install. It doesn't, however, allow the use of raid10, even if the array was created in the shell outside of anaconda (or if you have an old installation on a raid10).
I've just installed the system as follows:

raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
After installing, I was able to create a raid10 device, mount it successfully, and have it mounted automatically at boot via /etc/fstab.
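For the record, this is more or less all it took after the install (the mount point and filesystem type are just what I happened to pick):

    mdadm --detail --scan >> /etc/mdadm.conf   # so the array is assembled at boot
    mkfs.ext3 /dev/md10
    mkdir /data

plus a line in /etc/fstab:

    /dev/md10   /data   ext3   defaults   1 2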
Now to test what happens when a drive fails. I pulled out the first drive, and the box refuses to boot. Going into rescue mode, I was able to mount /boot, but not the swap device (as expected, since it's a raid0), and for some reason I also could not mount /, which is a little surprising.
I was able to mount the raid10 partition just fine.
Maybe I messed up somewhere along the line. I'll try again, but it's disheartening to see a raid6 array die after one drive failure, even if it was somehow my fault.
Also, assuming the raid6 array could be recovered, what would I do with the swap partition? Would I just recreate it from the space on the leftover drives, and would that be all I need to boot?
Russ
Toby Bluhm wrote:
Russ,
Nothing here to help you (again - :) just looking down the road a little. If you do get this thing working the way you want, will you be able to trust it to stay that way?
Well, it's been my experience that in Linux, unlike Windows, it might take a while to get things the way you want, but once you do, you can pretty much trust them to stay that way.
So yes, that's what I'm looking to do here. I want to set up a system that will survive one (or possibly two) drive failures. I want to know what I need to do ahead of time, so that I can be confident in my setup and know what to do if disaster strikes.
Russ
Scott Silva wrote:
If you have the hardware, or the money, you can make a system pretty durable. But you get to a point where the gains aren't worth the cost. You can get a system to three 9's fairly easily, but the cost of getting to four 9's is much higher. If you want something better than four 9's, you will have to look at clustering, because a single reboot in a month can shoot down your numbers.
If you want total reliability, you will need hot spares and a raid method that builds quickly, and you will need regular backups.
I'm not looking for total reliability. I am building a low-budget file/backup server, and I would like it to be fairly reliable with good performance. Basically, if one drive fails, I would like to still be up and running, even if that requires slight reconfiguration (i.e. recreating the swap partition).
If two drives fail, I would like to still be up and running, assuming I wasn't unlucky enough to have both fail in the same mirror set.
If three drives fail, I'm pretty much SOL.
The most important thing is that I can easily survive a single disk failure.
Russ
Ruslan Sivak spake the following on 5/7/2007 2:43 PM:
The most important thing is that I can easily survive a single disk failure.
I had two separate two-drive failures on two brand new HP servers. I burned the servers in over two weeks, running various drive-exercising jobs like creating and deleting files, and they waited to fail within an hour of each other. Not even enough time for the hot spare to rebuild. Then I lost two more drives in single-event failures. HP was great about sending new drives, usually the next day, and stated that if one more drive failed from the original set they would replace the whole lot. But that doesn't get your data back, or keep the server going. We even ordered an extra drive per server, and I stuck them in the hardware closet (after some burn-in time) just to be safe.
That also relates to my Adaptec raid controller nightmare, but I have been sleeping easy since I moved to 3ware.
Ruslan Sivak wrote:
I'm not looking for total reliability. I am building a low budget file/backup server. I would like it to be fairly reliable with good performance. Basically if 1 drive fails, I would like to still be up and running, even if it requires slight reconfigurations (ie recreating the swap partition).
I like to keep things simple-minded and not fight with anaconda. During the install, put /boot, swap, and / on your first 2 drives as RAID1. After that works the way you want, build whatever layout you want with the rest of your space, and either move your /home contents and mount point over or mount it somewhere else (a rough sketch is below). A nice feature of this approach is that you can upgrade to pretty much any other version/distro by building a new set of system disks and swapping them in, keeping your data intact. I also like to use disks in swappable carriers and to keep a spare chassis around. That way you can use it for testing things and developing your next version, but if your production motherboard fails you can just move the drives over and keep going.
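As a rough sketch of that second step (the raid level, partition names and mount point here are only one example of "whatever layout you want"):

    mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sd[abcd]3
    mkfs.ext3 /dev/md3
    mkdir /mnt/newhome
    mount /dev/md3 /mnt/newhome
    cp -a /home/. /mnt/newhome/       # copy the existing contents over
    umount /mnt/newhome
    # then point /home at /dev/md3 in /etc/fstab and remount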
If 2 drives fail, I would like to still be able to be up and running assuming I wasn't unlucky enough to have 2 drives fail in the same mirror set. If 3 drives fail, I'm pretty much SOL. The most important thing is that I can easily survive a single disk failure.
If you can deal with the space constraints of partitions that match single-disk sizes by mounting them in appropriate places, it's hard to beat RAID1. If everything fries except one drive, you can still recover the data that was on it - plus it gives you natural boundaries for backups, which you shouldn't ignore just because you have raid.
Les Mikesell wrote:
During the install, put /boot, swap, and / on your first 2 drives as RAID1.
I have 4 500GB drives. Seems kind of a waste to put just /boot swap and / on the first 2 drives.
If you can deal with the space constraints of partitions that match single disk sizes by mounting them in appropriate places it's hard to beat RAID1. If everything fries except one drive you can still recover the data that was on it - plus it gives you natural boundaries for backups which you shouldn't ignore just because you have raid.
Unfortunately this is my backup server, and also file server. While I may move the file server part out to another box in the future, for now it's going to be serving two roles. I would like to be able to depend on it.
In the future I might set up a backup of this server to be on Amazon's S3. Is there a linux program that interfaces with it?
Russ
Ruslan Sivak wrote:
I have 4 500GB drives. Seems kind of a waste to put just /boot swap and / on the first 2 drives.
I typically use 36 gig SCSI disks for the system. You can use that much or even less for the first 3 partitions where you install, then add a 4th partition on the same pair of drives.
Unfortunately this is my backup server, and also file server. While I may move the file server part out to another box in the future, for now it's going to be serving two roles. I would like to be able to depend on it.
You are living very dangerously there. RAID can protect you from one of the more likely failures, but nowhere near all of them - and some will kill all the data in the box in one step.
In the future I might set up a backup of this server to be on Amazon's S3. Is there a linux program that interfaces with it? Russ
I'd toss two of the drives in some desktop linux box and run backuppc on it - and get an external drive to periodically make an offsite copy. If your data compresses well you could use drives about half the size for backuppc.
I have 4 500GB drives. Seems kind of a waste to put just /boot swap and / on the first 2 drives. Unfortunately this is my backup server, and also file server. While I may move the file server part out to another box in the future, for now it's going to be serving two roles. I would like to be able to depend on it. In the future I might set up a backup of this server to be on Amazon's S3. Is there a linux program that interfaces with it?
If this is a file server too, may I suggest that you keep system stuff separate from your data, like Les says he does? Did you have to get really old stuff for your add-on 'raid' controller? A newer and less quirky si3124-based controller can't be much more expensive than what you have, given that you have a four-port card.
Please put /boot and /tmp (maybe 512MB/1GB each) on their own mirrored partitions, then make two nice big mirrors from the rest and use lvm to stripe across them. That way you don't have to create a lot of partitions and juggle usage across four big partitions to get your 'raid10' array. anaconda supports this kind of configuration easily, and you can also create the volumes for swap, /, whatever_you_fancy_partition, maybe /opt, /usr/local, /home with anaconda.
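Roughly like this from the command line (partition names, sizes and the vg0 name are only examples; anaconda's RAID/LVM screens can do the equivalent):

    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
    pvcreate /dev/md2 /dev/md3
    vgcreate vg0 /dev/md2 /dev/md3
    # -i 2 stripes each logical volume across both mirrors, which is what
    # gives you the raid10-like layout
    lvcreate -i 2 -I 64 -L 10G -n root vg0
    lvcreate -i 2 -I 64 -L 1G  -n swap vg0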
I'm not sure what you mean by "get real old stuff for your controller". The controller is brand new, although the PC is a few years old. The controller is si3114-based. What's so quirky about it vs the si3124?
Here's the way I plan to set things up. Please let me know if this is worse than what you suggest.
4 partitions per drive:
1st partition - 200MB
2nd partition - 250MB
3rd partition - 5GB
4th partition - 745GB

md0: raid1 with 2 spares - 1st partition of all drives - /boot
md1: raid0 - 2nd partition of all drives - swap
md2: raid6 - 3rd partition of all drives - /

After install, create:
md10: raid10 - 4th partition of all drives - /data
What advantages, if any, would lvm have over this set up?
Russ
Sent wirelessly via BlackBerry from T-Mobile.
On Tue, May 08, 2007 at 02:26:46PM +0000, Russ enlightened us:
md1: raid0 - 2nd partition of all drives - swap
I would not make swap a RAID 0. Ever. It is fairly rare for systems to actually use swap, so it doesn't need to be *that* fast. And if you lose a drive, the system might chug along nicely until it touches swap, at which point I imagine the machine would crap all over itself. I would make those partitions RAID 1, or perhaps a pair of RAID 1 md's with equal priority.
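Something like this is what I have in mind (partition names are just examples):

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
    mkswap /dev/md1 && mkswap /dev/md2

and give both the same priority in /etc/fstab so the kernel stripes across them:

    /dev/md1   swap   swap   pri=1   0 0
    /dev/md2   swap   swap   pri=1   0 0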
Matt
Russ wrote:
Md0 raid 1 with 2 spares - 1st partition of all drives - /boot
Suit yourself. I personally do not see the point of 2 spares when your system uses raid6.
Md1 raid0 - 2nd partition of all drives - swap
This is a no-no.
Md2 raid6 - 3rd partition of all drives - /
Overkill. Why have so many different raid levels running? Besides, raid5 has been iffy, and I wonder how stable raid6 is.
After install create Md10 raid10 - 4th partition of all drives - /data
What advantages, if any, would lvm have over this set up?
How about flexible filesystem resizing? If you did it the way I suggested (512MB /boot, 512MB /tmp), you have something like 960GB of space to carve up any way you like. You also get lvm snapshots, which you won't get with raid alone, and this is supposed to be a backup server too.
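For example, something along these lines gives you a consistent point-in-time copy of a busy volume (the volume group and LV names here are made up):

    lvcreate --snapshot --size 5G --name backup-snap /dev/VolGroup00/backup
    mount -o ro /dev/VolGroup00/backup-snap /mnt/snap
    # ... take the copy from /mnt/snap ...
    umount /mnt/snap
    lvremove -f /dev/VolGroup00/backup-snap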
Feizhou wrote:
Russ wrote:
Md2 raid6 - 3rd partition of all drives - /
Overkill. Why have so many different raid running? Besides, raid5 has been iffy and I wonder how stable raid6 is.
I think you're right. I've lost the raid every time I pulled out the boot drive.
After install create Md10 raid10 - 4th partition of all drives - /data
What advantages, if any, would lvm have over this set up?
How about flexible filesystem resizing? If you did it the way I suggested: 512MB /boot, 512MB /tmp, you have like 960GB of space to carve anyway you like. You also get lvm snapshots which you won't get with raid seeing that this is supposed to be a backup server too.
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot: raid1, 200MB, all 4 drives, no spares (I guess this makes 4 copies of the data?)
swap: two 250MB raid1 arrays over the 4 drives (2 drives each)
rest of space in 2 raid1 arrays, with lvm on top of the 2 raid1 arrays:
/        10GB on lvm
/data    50GB on lvm
/backup  250GB on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
I will pull out a drive tomorrow and see how resilient this is. Does this sound like a good solution?
Russ
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
You have four disks which will be paired into two pairs. If one pair goes, everything goes. Might as well use one pair for /boot and the other for /tmp.
2 250mb raid1 arrays over the 4 drives (2 drives each ) for swap
You can use a logical volume for swap. This is really not necessary.
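i.e. just carve it out of the volume group (the VolGroup00 name is only an example):

    lvcreate -L 1G -n swap VolGroup00
    mkswap /dev/VolGroup00/swap
    swapon /dev/VolGroup00/swap

with a matching line in /etc/fstab:

    /dev/VolGroup00/swap   swap   swap   defaults   0 0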
rest of space in 2 raid 1 arrays lvm on top of the 2 raid1 arrays / 10gb on lvm /data 50gb on lvm /backup 250gb on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
and snapshots.
I will pull out a drive tommorow and see how resilient this is. Does this sound like a good solution?
Actually, installing Open Solaris (nexenta distro -> www.gnusolaris.org) and using zfs would be much better and less of an administrative headache :D.
/me runs for cover.
Feizhou wrote:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
You have four disks which will be paired into two pairs. If one pair goes, everything goes. Might as well use one pair for /boot and the other for /tmp.
I'm not quite sure I understand. This is raid1, not raid10. While I'm not sure exactly how raid1 works with 4 drives, I'm assuming everything is a copy of a copy of a copy... so how would 2 drives going out kill the whole raid1 device?
Russ
Ruslan Sivak wrote:
I'm not quite sure I understand? This is raid1, not raid10. While I'm not sure exactly how raid1 works with 4 drives, I'm assuming everything is a copy of a copy of a copy... So how would 2 drives going out kill the whole raid1 device?
NOT everything is raid1 now is it? Your data/system is on raid10 RIGHT?
Well, apparently not. I decided to go with 2 raid1's with LVM striped on top of them. The previous comment was specifically about /boot on raid1. I have tested it, and it worked flawlessly even with 2 drives taken out. All my arrays held up. Of course, if I had taken out one of the wrong drives, I would've lost my data, but I think the boot partition would've been OK.
Russ
Well apparently not. I decided to go with 2 raid1's with LVM striped on top of it. The previous comment was specifically about /boot on raid1.
The boot partition on its own is useless, as the only things you will find there are the kernel images and initrd image files.
I have tested it, and it worked flawlessly even with 2 drives taken out. All my arrays held up. Of course if I took out one of the wrong drives, I would've lost my data, but I think the book partition would've been ok.
So there really is no point in making /boot survive the loss of one of the other raid1 arrays.
On Thursday, 10 May 2007, Feizhou wrote:
So there really is no point in making /boot survive the loss of one of of the other raid1 arrays.
Really depends on what you have on your boot partition. ;-)
Andreas Micklei wrote:
Really depends on what you have on your boot partition. ;-)
You mean in the initrd images :D
But seriously, when you need a /boot partition, it is only as useful as the rest of the system.
Feizhou wrote:
But seriously, when you need a /boot partition, it is only as useful as the rest of the system.
This is true. In this particular case, I wanted the boot partition to survive if the data disks survived. It's kind of annoying to have the data disks survive, but to have to rebuild the system because the / or /boot partitions didn't.
With 500GB drives, an extra 200MB partition hardly matters, while it could save me a lot of headaches should one or two drives fail (assuming it's the right drives, and the data partition is still there).
Russ
Ruslan Sivak wrote:
I wanted the boot partition to survive if the data disks survived.
I didn't think you were duplicating / across all 4 drives in the layout you proposed; thus the questions about putting /boot there. If your /boot doesn't work, you can always boot the install CD in rescue mode to fix it, but there's not much you can do about a missing /.
Les Mikesell wrote:
I didn't think you were duplicating / across all 4 in the layout you proposed. Thus the questions about putting /boot there. If your /boot doesn't work you can always boot the install cd in rescue mode to fix it but there's not much you can do about a missing /.
Originally I had /boot on 2 drives as raid1, with 2 more drives as hot spares. Then I realized that this makes no sense if you can just set up a raid1 with 4 drives (where each drive is a copy of the others). I didn't really know whether it was supported, but it looks like it works.
Russ
Ruslan Sivak wrote:
Originally I had boot on 2 drives raid1, with 2 more drives being hotspares. Then I realized that this makes no sense, if you can just set up a raid1 with 4 drives (where each drive is a copy of each other). Didn't really know if it was supported, but looks like it works.
I don't think there is a limit. I have one set built with two 250 gig internal drives plus one member specified as 'missing'. I periodically connect a matching external FireWire drive, add it to the array, and let it sync. When it is finished, I unmount the partition just long enough to fail the drive and remove it, and then rotate it offsite. This is for a backuppc archive, which has millions of hardlinks from its pooling scheme that would make it difficult to copy with normal file-oriented methods.
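The mechanics are roughly the following (device names are made up; the external drive shows up as /dev/sde here):

    # the set is created with a permanently "missing" third member
    mdadm --create /dev/md5 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 missing
    # when the external drive is connected, add it and let it sync
    mdadm /dev/md5 --add /dev/sde1
    cat /proc/mdstat    # wait for the resync to finish
    # unmount briefly, then fail and remove the member before taking it offsite
    mdadm /dev/md5 --fail /dev/sde1 --remove /dev/sde1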
Originally I had boot on 2 drives raid1, with 2 more drives being hotspares. Then I realized that this makes no sense, if you can just set up a raid1 with 4 drives (where each drive is a copy of each other). Didn't really know if it was supported, but looks like it works.
The entire idea of using all four disks for /boot makes no sense if two disks belonging to the same mirror under the lvm go down. Please stop this nonsense about surviving everything to no benefit. You could have three disks fail and still have a working /boot, but for what?
I think the idea of the 4-partition raid1 was more: what else is he going to do with the 200MB at the beginning of each disk, which he has anyway because of partition symmetry across drives?
It makes sense to just dup the partition setup from one disk to the others, and now, with grub and a working /boot on each disk, the order of the drives is no longer important: he can take all 4 out, play 4-disk monty, slap them back in, and the system should come up without a problem.
-Ross
Hi
On Fri 11-May-2007 at 09:38:22AM -0400, Ross S. W. Walker wrote:
Makes sense to just dup the partition setup from one to the other and now with grub and a working /boot on each disk the order of the drives is no longer important, he can take all 4 out, play 4 disk monty, slap them back in and the system should come up without a problem.
FWIW this is what I did with the last server I built, which had 4x500GB drives: a RAID1 /boot across all 4 drives. The trick is to edit your grub.conf so that you can boot off any drive and to run grub-install on all 4 drives. You also have to remember to manually edit grub.conf after each kernel upgrade to add the entries for the 3 extra disks:
title CentOS (2.6.18-8.1.1.el5xen) Disk 0
        root (hd0,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img
title CentOS (2.6.18-8.1.1.el5xen) Disk 1
        root (hd1,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img
title CentOS (2.6.18-8.1.1.el5xen) Disk 2
        root (hd2,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img
title CentOS (2.6.18-8.1.1.el5xen) Disk 3
        root (hd3,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img
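i.e. something along the lines of (device names will depend on the box):

    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install $d; done

If grub-install complains about the device map, the same thing can be done from the grub shell with device/root/setup for each disk.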
If I had read this thread before I set up this machine I'd have used RAID 6 for the rest of the space, but I used RAID 5 with a hot spare, with LVM on top of that.
Before the machine was moved to the colo I tried pulling disks out while it was running and this worked without a problem, which was nice :-)
Chris
Chris,
I didn't have to install grub on any of the other volumes, and the server seemed to do well after drive failure (I pulled out drives 1 and 3). In my opinion, neither raid5 nor raid6 makes sense with 4 drives, as you will get the same amount of space with raid10 but much better performance and availability (although raid6 is supposed to withstand 2 drive failures, in my tests it has not done well, and neither has raid5). As you might've read in the thread, if you want to put the / volume on the raid10 it won't be possible during the install, but you can set up 2 raid1 volumes and do an LVM stripe across them, which should yield comparable performance.
Russ
I didn't have to install grub on any of the other volumes, and the server seemed to do well after drive failure (I pulled out drives 1 and 3).
I think if you set up the 4-disk raid1 at install time, grub gets installed on each drive as part of the install process.
You'd only need to do it yourself if you put in a new disk to replace a failed one; then just run grub-install on it.
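Something like this, assuming the replacement shows up as /dev/sdd, the other disks survived, and the md names follow the earlier example layout:

    sfdisk -d /dev/sda | sfdisk /dev/sdd   # copy the partition table from a good disk
    mdadm /dev/md0 --add /dev/sdd1         # re-add the new partitions to their arrays
    mdadm /dev/md2 --add /dev/sdd3
    grub-install /dev/sdd                  # and put the boot loader back on it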
-Ross
Ross S. W. Walker wrote:
I think if you set up the 4 disk raid1 at boot, grub gets installed on each as part of the install process.
You'd only need to do it yourself if you put in a new disk to replace a failed one; then just run grub-install on it.
-Ross
Why is this exactly? Does it have something to do with the boot sector or something? Doesn't the raid1 take care of mirroring everything on that partition (with the exception of the boot sector of the drive)?
Russ
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Friday, May 11, 2007 3:34 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Ross S. W. Walker wrote:
I think if you set up the 4 disk raid1 at boot, grub gets installed on each as part of the install process.
You'd only need to do it yourself if you put in a new disk to replace a failed one; then just run grub-install on it.
-Ross
Why is this exactly? Does it have something to do with the boot sector or something? Doesn't the raid1 take care of mirroring everything on that partition (with the exception of the boot sector of the drive)?
Exactly, grub is installed on sector 0, the boot sector, and that sector isn't touched when a new drive is partitioned and thus not part of the mirrored data.
You may also have to mark the /boot partitions on each drive as bootable too, but grub may just ignore that flag... I can't remember.
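(In other words, after swapping in a replacement disk, something like the following; /dev/sdb and the partition number are illustrative:)

grub-install /dev/sdb           # rewrite the boot sector, which the mirror doesn't cover
parted /dev/sdb set 1 boot on   # optionally mark the /boot partition bootable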
-Ross
Ross S. W. Walker wrote:
The entire use all four disks for /boot makes no sense if two disks belonging to the same mirror for the lvm go down. Please stop this nonsense about surviving everything to no benefit. You can have three disks fail and still have a working /boot. For what?
I think the idea of the 4 partition raid1 was more of, what else is he going to do with the 200MB at the beginning of each disk which he has because of partition symmetry across drives?
Personally I think he is being overly paranoid about more than 2 disks failing at once since the odds are really slim that will happen, and not paranoid enough about all 4 disks failing because most things that would cause 2 to die (or be erased/corrupted/whatever) will kill all 4 of them.
Makes sense to just dup the partition setup from one to the other and now with grub and a working /boot on each disk the order of the drives is no longer important, he can take all 4 out, play 4 disk monty, slap them back in and the system should come up without a problem.
One thing that might prove useful later is to leave the space for a duplicate system (/) partition as a raid1 on the 3rd and 4th drives that you don't use yet. Then when you want to upgrade to the next OS rev or a different distribution, install on this unused partition and configure grub to dual-boot. If you have any problem you can have the old version back in the time it takes to reboot.
Feizhou wrote:
Originally I had boot on 2 drives raid1, with 2 more drives being hotspares. Then I realized that this makes no sense, if you can just set up a raid1 with 4 drives (where each drive is a copy of each other). Didn't really know if it was supported, but looks like it works.
The entire use all four disks for /boot makes no sense if two disks belonging to the same mirror for the lvm go down. Please stop this nonsense about surviving everything to no benefit. You can have three disks fail and still have a working /boot. For what?
Yes, if two disks belonging to the same mirror go down, I lose my data. But if two disks which are not from the same mirror go down, I would like to be able to boot up without any problems. And as someone else mentioned, what else am I going to do with those 200mb? I'd rather maintain symmetry and not have to worry which disks /boot is on, as I'll know that if the data drive survives, then I won't have any problems with the boot drive either.
Russ
Yes, if two disks belonging to the same mirror go down, I lose my data. But if two disks which are not from the same mirror go down, I would like to be able to boot up without any problems. And as someone else mentioned, what else am I going to do with those 200mb? I'd rather maintain symmetry and not have to worry which disks /boot is on, as I'll know that if the data drive survives, then I won't have any problems with the boot drive either.
so, is your / on a 4 spindle mirror too?
N-way mirrors require every write to be done N times.
John R Pierce wrote:
Yes, if two disks belonging to the same mirror go down, I lose my data. But if two disks which are not from the same mirror go down, I would like to be able to boot up without any problems. And as someone else mentioned, what else am I going to do with those 200mb? I'd rather maintain symmetry and not have to worry which disks /boot is on, as I'll know that if the data drive survives, then I won't have any problems with the boot drive either.
so, is your / on a 4 spindle mirror too?
N-way mirrors require every write to be done N times.
No, / is on 2 raid1 arrays which are striped using LVM. And the N times write penalty doesn't really apply to boot, since it rarely gets written to.
Russ
Ruslan Sivak wrote:
John R Pierce wrote:
Yes, if two disks belonging to the same mirror go down, I lose my data. But if two disks which are not from the same mirror go down, I would like to be able to boot up without any problems. And as someone else mentioned, what else am I going to do with those 200mb? I'd rather maintain symmetry and not have to worry which disks /boot is on, as I'll know that if the data drive survives, then I won't have any problems with the boot drive either.
so, is your / on a 4 spindle mirror too?
N-way mirrors require every write to be done N times.
No, / is on 2 raid1 arrays which are striped using LVM. And the N times write penalty doesn't really apply to boot, since it rarely gets written to.
so, if the LVM is built from the 2 raid1 sets md1 and md2, and both drives of md1 are down, it's down. Ditto if both drives of md2 are down. So why not just put the /boot on the same spindles as md1?
btw, LVM isn't really striping, it's more like globbing. Or scattering.
John R Pierce wrote:
Ruslan Sivak wrote:
John R Pierce wrote:
Yes, if two disks belonging to the same mirror go down, I lose my data. But if two disks which are not from the same mirror go down, I would like to be able to boot up without any problems. And as someone else mentioned, what else am I going to do with those 200mb? I'd rather maintain symmetry and not have to worry which disks /boot is on, as I'll know that if the data drive survives, then I won't have any problems with the boot drive either.
so, is your / on a 4 spindle mirror too?
N-way mirrors require every write to be done N times.
No, / is on 2 raid1 arrays which are striped using LVM. And the N times write penalty doesn't really apply to boot, since it rarely gets written to.
so, if the LVM is built from the 2 raid1 sets md1 and md2, and both drives of md1 are down, it's down. Ditto if both drives of md2 are down. So why not just put the /boot on the same spindles as md1?
btw, LVM isn't really striping, it's more like globbing. Or scattering.
Well the reason not to put /boot on the same spindles as md1 has already been mentioned a few times. Basically flexibility.
What do you mean LVM doesn't really do striping? What does globbing mean? Does it mean there is no performance difference between striping LVM and just concatenating 2 raid1's?
Russ
Well the reason not to put /boot on the same spindles as md1 has already been mentioned a few times. Basically flexibility. What do you mean LVM doesn't really do striping? What does globbing mean? Does it mean there is no performance difference between striping LVM and just concatenating 2 raid1's?
conventional stripesets tend to use a stripe size around 32k or 128k bytes. LVM tends to use a fairly large PE size, often 32MB. LVM -is- striping these PE's, if you told it to do that, but it's striping with this very large chunk size, which means far fewer individual disk operations will utilize both logical drives. If you have lots of concurrent disk accesses, this may not matter.
John R Pierce wrote:
Well the reason not to put /boot on the same spindles as md1 has already been mentioned a few times. Basically flexibility. What do you mean LVM doesn't really do striping? What does globbing mean? Does it mean there is no performance difference between striping LVM and just concatenating 2 raid1's?
conventional stripesets tend to use a stripe size around 32k or 128k bytes. LVM tends to use a fairly large PE size, often 32MB. LVM -is- striping these PE's, if you told it to do that, but it's striping with this very large chunk size, which means far fewer individual disk operations will utilize both logical drives. If you have lots of concurrent disk accesses, this may not matter.
The PE size could be changed to a lower number. Would this negatively affect LVM somehow?
Russ
Ruslan Sivak wrote:
John R Pierce wrote:
Well the reason not to put /boot on the same spindles as md1 has already been mentioned a few times. Basically flexibility. What do you mean LVM doesn't really do striping? What does globbing mean? Does it mean there is no performance difference between striping LVM and just concatenating 2 raid1's?
conventional stripesets tend to use a stripe size around 32k or 128k bytes. LVM tends to use a fairly large PE size, often 32MB. LVM -is- striping these PE's, if you told it to do that, but it's striping with this very large chunk size, which means far fewer individual disk operations will utilize both logical drives. If you have lots of concurrent disk accesses, this may not matter.
The PE size could be changed to a lower number. Would this negatively affect LVM somehow?
the way LVM works, LV's -can- be built from any random set of PE's, so it has to use mapping tables (this is so you can grow LVs onto new space and so forth). Smaller PE means more PEs, means bigger mapping tables. I'm not sure I could quantify the significance of that by the seat of my pants.
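(For what it's worth, the extent size is chosen when the volume group is created and can be inspected afterwards; the names below are illustrative:)

vgdisplay VolGroup00 | grep 'PE Size'          # show the current extent size
vgcreate -s 32M VolGroup00 /dev/md1 /dev/md2   # create a VG with 32MB extents rather than the default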
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of John R Pierce Sent: Friday, May 11, 2007 5:57 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Well the reason not to put /boot on the same spindles as md1 has already been mentioned a few times. Basically flexibility. What do you mean LVM doesn't really do striping? What does globbing mean? Does it mean there is no performance difference between striping LVM and just concatenating 2 raid1's?
conventional stripesets tend to use a stripe size around 32k or 128k bytes. LVM tends to use a fairly large PE size, often 32MB. LVM -is- striping these PE's, if you told it to do that, but it's striping with this very large chunk size, which means far fewer individual disk operations will utilize both logical drives. If you have lots of concurrent disk accesses, this may not matter.
LVM interleaving uses a 64K chunk by default and writes these chunks across the extents on the physical volumes in a round-robin fashion. The chunk size can be changed with the -I option and supports 4K to 1M.
Check out lvcreate and the -i option.
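(A quick illustration of those two options; the sizes and names here are made up:)

lvcreate -L 50G -i 2 -n Data VolGroup00          # stripe the LV across 2 PVs with the default 64K chunk
lvcreate -L 50G -i 2 -I 256 -n Data2 VolGroup00  # same, but with a 256K chunk via -I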
-Ross
Yes, if two disks belonging to the same mirror go down, I lose my data. But if two disks which are not from the same mirror go down, I would like to be able to boot up without any problems. And as someone else mentioned, what else am I going to do with those 200mb? I'd rather maintain symmetry and not have to worry which disks /boot is on, as I'll know that if the data drive survives, then I won't have any problems with the boot drive either.
ext2 /tmp?
Anyhow, suit yourself :)
Feizhou wrote:
Originally I had boot on 2 drives raid1, with 2 more drives being hotspares. Then I realized that this makes no sense, if you can just set up a raid1 with 4 drives (where each drive is a copy of each other). Didn't really know if it was supported, but looks like it works.
The entire use all four disks for /boot makes no sense if two disks belonging to the same mirror for the lvm go down. Please stop this nonsense about surviving everything to no benefit. You can have three disks fail and still have a working /boot. For what?
Feizhou,
After thinking about it for a little bit, I see your point. If one of the arrays fails, the data drive is gone, and there is no point keeping boot after that. This means it suffices to put /boot on the first raid1 array. However, as was mentioned elsewhere in this thread, by mirroring the boot partition on all 4 drives, I don't have to worry about which order the drives are plugged in, and I can preserve disk symmetry. Since the cost of the drive is about $0.25 a GB, I think wasting the $0.10 is worth the flexibility and peace of mind.
Russ
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Friday, May 11, 2007 3:32 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Feizhou wrote:
Originally I had boot on 2 drives raid1, with 2 more drives being hotspares. Then I realized that this makes no sense, if you can just set up a raid1 with 4 drives (where each drive is a copy of each other). Didn't really know if it was supported, but looks like it works.
The entire use all four disks for /boot makes no sense if two disks belonging to the same mirror for the lvm go down. Please stop this nonsense about surviving everything to no benefit. You can have three disks fail and still have a working /boot. For what?
Feizhou,
After thinking about it for a little bit, I see your point. If one of the arrays fails, the data drive is gone, and there is no point keeping boot after that. This means it suffices to put /boot on the first raid1 array. However, as was mentioned elsewhere in this thread, by mirroring the boot partition on all 4 drives, I don't have to worry about which order the drives are plugged in, and I can preserve disk symmetry. Since the cost of the drive is about $0.25 a GB, I think wasting the $0.10 is worth the flexibility and peace of mind.
Don't worry about it, the 4 partition raid1 makes perfect sense in your setup not for the reason you gave, but for:
1) simplicity
2) symmetry
3) flexibility
4) low to no overhead (99.99% read partitions)
It provides no downside and a lot of added bonuses.
So how is the performance on the striped LVM LVs?
Do you see the 120MB/s throughput?
Random I/O should also be as good as your drives allow.
-Ross
Ruslan Sivak wrote:
Feizhou wrote:
Andreas Micklei wrote:
On Thursday, 10 May 2007, Feizhou wrote:
So there really is no point in making /boot survive the loss of one of the other raid1 arrays.
Really depends on what you have on your boot partition. ;-)
You mean in the initrd images :D
But seriously, when you need a /boot partition, it is only as useful as the rest of the system.
This is true. In this particular case, I wanted the boot partition to survive if the data disks survived. Kinda annoying to have the data disks survive, but have to rebuild the system because the / or the /boot partitions didn't survive. With 500GB drives, an extra 200MB partition hardly matters, while it will save me a lot of headaches should one or two drives fail (assuming it's the right drives, and the data partition is still there).
Your minimum data and system disks are two. One from each mirror. So if you put /boot on either pair, you will always have a working /boot and working data/system access. There is absolutely no point in having /boot on all four disks. You do not get any benefit at all.
Ruslan Sivak wrote:
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
What's the point of putting this on more than 2 drives?
/ 10gb on lvm /data 50gb on lvm /backup 250gb on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
I will pull out a drive tomorrow and see how resilient this is. Does this sound like a good solution?
It is versatile, if you don't know where the additional space will be needed but don't think mounting it as separate partitions in subdirectories will be handy. I forgot to mention the other reason I like straight RAID1 installs - you can easily clone a machine with all of its current software by pulling a drive, booting it in a new machine and rebuilding the raids on both.
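(A sketch of the rebuild half of that on either box, assuming the surviving drive is /dev/sda, the blank disk comes up as /dev/sdb and the arrays are md0/md1; all names are illustrative:)

sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the partition layout onto the blank disk
mdadm /dev/md0 --add /dev/sdb1         # add its partitions back so each array resyncs
mdadm /dev/md1 --add /dev/sdb2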
On Wednesday, 9 May 2007, Les Mikesell wrote:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
What's the point of putting this on more than 2 drives?
What's the point in not putting it on all available drives?
I have a similar setup here on one machine and I like the fact that each disc is partitioned in exactly the same way and I could use each disc for booting if I swapped them around. The added redundancy is a nice bonus.
Sure it wastes some space. But hey, what are a few megabytes when you have hundreds of gigabytes on the data partition. It's just not worth introducing additional complexity just to gain 400MB, IMHO.
Les Mikesell wrote:
Ruslan Sivak wrote:
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
What's the point of putting this on more than 2 drives?
Well for one thing, if 2 drives fail and it doesn't get a chance to rebuild, then I still have 2 good drives. Another thing is if a drive fails and the spare is in the wrong location, and the spare becomes the boot drive, it won't be able to boot, but if all 4 drives are copies of each other, then everything is well and good.
/ 10gb on lvm /data 50gb on lvm /backup 250gb on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
I will pull out a drive tomorrow and see how resilient this is. Does this sound like a good solution?
It is versatile, if you don't know where the additional space will be needed but don't think mounting it as separate partitions in subdirectories will be handy. I forgot to mention the other reason I like straight RAID1 installs - you can easily clone a machine with all of its current software by pulling a drive, booting it in a new machine and rebuilding the raids on both.
I don't see why I can't pull 2 drives out of this install (like 1 and 3), put them into another machine and let it rebuild itself.
Russ
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Wednesday, May 09, 2007 10:36 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Les Mikesell wrote:
Ruslan Sivak wrote:
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
What's the point of putting this on more than 2 drives?
Well for one thing, if 2 drives fail and it doesn't get a chance to rebuild, then I still have 2 good drives. Another thing is if a drive fails and the spare is in the wrong location, and the spare becomes the boot drive, it won't be able to boot, but if all 4 drives are copies of each other, then everything is well and good.
/ 10gb on lvm /data 50gb on lvm /backup 250gb on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
I will pull out a drive tomorrow and see how resilient this is. Does this sound like a good solution?
It is versatile, if you don't know where the additional space will be needed but don't think mounting it as separate partitions in subdirectories will be handy. I forgot to mention the other reason I like straight RAID1 installs - you can easily clone a machine with all of its current software by pulling a drive, booting it in a new machine and rebuilding the raids on both.
I don't see why I can't pull 2 drives out of this install (like 1 and 3), put them into another machine and let it rebuild itself.
I figured out how to create interleaved LVs on the install, it is a little PITA though.
Start the install, create all your RAIDs and VGs and LVs as before, and move on to the next step; once they have been committed and formatted (which happens before the package installation), reboot and start over.
Then when you get to the partitioning section, select "Custom" and go back to that screen.
Then jump to tty2 (or the serial console) and start the shell, then go into lvm by typing 'lvm'.
Within lvm you will need to remove the existing LVs with:
lvremove <VGname>/<LVname>
for each LV created, swap, root, etc.
Then re-create the LVs with:
lvcreate -L <size in MB>M -i 2 -n <LVname> <VGname>
Do it for each LV; the -i 2 says interleave (stripe) it across 2 PVs.
Once that's done you can then hit the <Back> button and then go back in to 'Custom' to have it refresh the setup. Just choose to format each in their types and move along.
Remember to choose 'Custom Layout' each time so you don't fubar your hard work!
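(As a concrete illustration of the lvremove/lvcreate step above, assuming the installer created a VG named VolGroup00 with a root LV LogVol00 and a swap LV LogVol01; the names and sizes are made up:)

lvremove VolGroup00/LogVol00
lvremove VolGroup00/LogVol01
lvcreate -L 10240M -i 2 -n LogVol00 VolGroup00   # 10GB root, striped across both PVs
lvcreate -L 2048M -i 2 -n LogVol01 VolGroup00    # 2GB swap, striped the same way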
-Ross
Ross,
Thanks for doing the research. Would I not be able to do the same thing by booting into rescue mode or something and pre-creating the volumes?
Also, have you noticed a difference between standard and interleaved setups? In my crude tests with hdparm -t, my numbers go from about 50 on the raid1 to about 70 on the raid10.
Also interesting to note that with 4 drives in a hardware raid10 on another box, I was getting upwards of 200MB/s with similar drives (750GB Seagate SATA vs the 500GB WD SATA that I'm using now).
Why such a big discrepancy? Is it possible I'm hitting the limits of the PCI bus?
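(For anyone repeating the test: hdparm -t only does a buffered sequential read from the device, so it is a rough indicator at best; the md names are illustrative:)

hdparm -t /dev/md0   # e.g. the plain raid1
hdparm -t /dev/md1   # e.g. the striped/raid10 device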
Russ Sent wirelessly via BlackBerry from T-Mobile.
-----Original Message----- From: "Ross S. W. Walker" rwalker@medallion.com Date: Wed, 9 May 2007 11:40:42 To:"CentOS mailing list" centos@centos.org Subject: RE: [CentOS] Re: Anaconda doesn't support raid10
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Wednesday, May 09, 2007 10:36 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Les Mikesell wrote:
Ruslan Sivak wrote:
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
What's the point of putting this on more than 2 drives?
Well for one thing, if 2 drives fail and it doesn't get a chance to rebuild, then I still have 2 good drives. Another thing is if a drive fails and the spare is in the wrong location, and the spare becomes the boot drive, it won't be able to boot, but if all 4 drives are copies of each other, then everything is well and good.
/ 10gb on lvm /data 50gb on lvm /backup 250gb on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
I will pull out a drive tomorrow and see how resilient this is. Does this sound like a good solution?
It is versatile, if you don't know where the additional space will be needed but don't think mounting it as separate partitions in subdirectories will be handy. I forgot to mention the other reason I like straight RAID1 installs - you can easily clone a machine with all of its current software by pulling a drive, booting it in a new machine and rebuilding the raids on both.
I don't see why I can't pull 2 drives out of this install (like 1 and 3), put them into another machine and let it rebuild itself.
I figured out how to create interleaved LVs on the install, it is a little PITA though.
Start the install, create all your RAIDs and VGs and LVs as before, and move on to the next step; once they have been committed and formatted (which happens before the package installation), reboot and start over.
Then when you get to the partitioning section, select "Custom" and go back to that screen.
Then jump to tty2 (or the serial console) and start the shell, then go into lvm by typing 'lvm'.
Within lvm you will need to remove the existing LVs with:
lvremove <VGname>/<LVname>
for each LV created, swap, root, etc.
Then re-create the LVs with:
lvcreate -L <size in MB>M -i 2 -n <LVname> <VGname>
Do it for each LV; the -i 2 says interleave (stripe) it across 2 PVs.
Once that's done you can then hit the <Back> button and then go back in to 'Custom' to have it refresh the setup. Just choose to format each in their types and move along.
Remember to choose 'Custom Layout' each time so you don't fubar your hard work!
-Ross
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
You could do the same thing in rescue mode, but this is a destructive operation, which means it'll wipe the current contents of those LVs into the river Styx, so you will need to re-install the OS anyway. So why not do it during the install?
It will be simpler and more flexible to create your swap area in an LV too, rather than a separate partition, and you will also see improvement from the interleaving in swap (if for God's sake you need to see it!).
You're not hitting the limits of your PCI bus yet. You are most likely suffering from a poor SATA controller; not all SATA controllers are the same, and newer, better ones do NCQ (native command queuing) to SATA drives that support it, which allows overlapping IO commands to be issued.
Also SATA drives can vary greatly in performance from one model to another from a given manufacturer, so look at the drive specs on the manufacturer's web site.
For sequential IO look at sustained data transfer rate.
For random IO look at the total read/write seek time.
SATA drives typically do 60-70MB/s; interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Cheers,
-Ross
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Russ Sent: Wednesday, May 09, 2007 11:51 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Ross,
Thanks for doing the research. Would I not be able to do the same thing by booting into rescue mode or something and pre-creating the volumes?
Also, have you noticed a difference between standard and interleaved setups? In my crude tests with hdparm -t, my numbers go from about 50 on the raid1 to about 70 on the raid10.
Also interesting to note that with 4 drives in a hardware raid10 on another box, I was getting upwards of 200MB/s with similar drives (750GB Seagate SATA vs the 500GB WD SATA that I'm using now).
Why such a big discrepancy? Is it possible I'm hitting the limits of the PCI bus?
Russ Sent wirelessly via BlackBerry from T-Mobile.
-----Original Message----- From: "Ross S. W. Walker" rwalker@medallion.com Date: Wed, 9 May 2007 11:40:42 To:"CentOS mailing list" centos@centos.org Subject: RE: [CentOS] Re: Anaconda doesn't support raid10
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Wednesday, May 09, 2007 10:36 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Les Mikesell wrote:
Ruslan Sivak wrote:
Yea, I think for these reasons I will use lvm. I have set up a system as follows:
/boot raid 1 200mb 4 drives no spares (I guess this makes 4 copies of the data?)
What's the point of putting this on more than 2 drives?
Well for one thing, if 2 drives fail and it doesn't get a chance to rebuild, then I still have 2 good drives. Another thing is if a drive fails and the spare is in the wrong location, and the spare becomes the boot drive, it won't be able to boot, but if all 4 drives are copies of each other, then everything is well and good.
/ 10gb on lvm /data 50gb on lvm /backup 250gb on lvm
rest of space left free to allow for resizing and adding of partitions with lvm
I will pull out a drive tomorrow and see how resilient this is. Does this sound like a good solution?
It is versatile, if you don't know where the additional space will be needed but don't think mounting it as separate partitions in subdirectories will be handy. I forgot to mention the other reason I like straight RAID1 installs - you can easily clone a machine with all of its current software by pulling a drive, booting it in a new machine and rebuilding the raids on both.
I don't see why I can't pull 2 drives out of this install (like 1 and 3), put them into another machine and let it rebuild itself.
I figured out how to create interleaved LVs on the install, it is a little PITA though.
Start the install, create all your RAIDs and VGs and LVs as before, and move on to the next step; once they have been committed and formatted (which happens before the package installation), reboot and start over.
Then when you get to the partitioning section, select "Custom" and go back to that screen.
Then jump to tty2 (or the serial console) and start the shell, then go into lvm by typing 'lvm'.
Within lvm you will need to remove the existing LVs with:
lvremove <VGname>/<LVname>
for each LV created, swap, root, etc.
Then re-create the LVs with:
lvcreate -L <size in MB>M -i 2 -n <LVname> <VGname>
Do it for each LV; the -i 2 says interleave (stripe) it across 2 PVs.
Once that's done you can then hit the <Back> button and then go back in to 'Custom' to have it refresh the setup. Just choose to format each in their types and move along.
Remember to choose 'Custom Layout' each time so you don't fubar your hard work!
-Ross
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
SATA drives typically do 60-70MBs, interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Feizhou wrote:
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
He was probably hinting at me for top posting. Unfortunately, sometimes I write from the blackberry, which only allows top posting. Take it up with RIM.
SATA drives typically do 60-70MBs, interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
Russ
Ruslan Sivak wrote:
Feizhou wrote:
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
He was probably hinting at me for top posting. Unfortunately, sometimes I write from the blackberry, which only allows top posting. Take it up with RIM.
Hence the smiley.
SATA drives typically do 60-70MBs, interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
sequential will be better than SCSI due to the packing on those platters which make up for the lack in rpm. NCQ should even up the random ability of SATA disks versus SCSI drives but that support has only become available lately on Linux and you also need the right hardware (besides the right disks).
On Thursday, 10 May 2007, Feizhou wrote:
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
sequential will be better than SCSI due to the packing on those platters which make up for the lack in rpm. NCQ should even up the random ability of SATA disks versus SCSI drives but that support has only become available lately on Linux and you also need the right hardware (besides the right disks).
SAS and SCSI really have their place when you need random access with lots of IOs per second, i.e. Fileserver, Database Server. We upgraded our Fileserver (NFS, Samba) from SATA SW Raid to SCSI HW Raid and the difference is HUGE. On the old system a single user doing a large file copy could bring the system almost to a halt. On the new system you do not even notice if one user does a similar operation. However plugging one of the same SCSI discs into your average PC will not give you much advantage.
There is also a line of SATA discs that aim for the low-end server market, the WD-Raptors. They spin at 10,000 rpm and give much better random access performance than normal SATA drives. The price point is very attractive compared to SCSI and SAS. Great alternative for a tight budget.
Here is my favorite site for comparing drives. Has nice background articles too: http://www.storagereview.com/
regards, Andreas Micklei
Andreas Micklei wrote:
On Thursday, 10 May 2007, Feizhou wrote:
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
sequential will be better than SCSI due to the packing on those platters which make up for the lack in rpm. NCQ should even up the random ability of SATA disks versus SCSI drives but that support has only become available lately on Linux and you also need the right hardware (besides the right disks).
SAS and SCSI really have their place when you need random access with lots of IOs per second, i.e. Fileserver, Database Server. We upgraded our Fileserver (NFS, Samba) from SATA SW Raid to SCSI HW Raid and the difference is HUGE. On the old system a single user doing a large file copy could bring the system almost to a halt. On the new system you do not even notice if one user does a similar operation. However plugging one of the same SCSI discs into your average PC will not give you much advantage.
There is also a line of SATA discs that aim for the low-end server market, the WD-Raptors. They spin at 10,000 rpm and give much better random access performance than normal SATA drives. The price point is very attractive compared to SCSI and SAS. Great alternative for a tight budget.
Here is my favorite site for comparing drives. Has nice background articles too: http://www.storagereview.com/
I've always wanted a dollars to dollars comparison instead of comparing single components, and I've always thought that a bunch of RAM could make up for slow disks in a lot of situations. Has anyone done any sort of tests that would confirm whether a typical user would get better performance from spending that several hundred dollar premium for scsi on additional ram instead? Obviously this will depend to a certain extent on the applications and how much having additional cache can help it, but unless you are continuously writing new data, most things can live in cache - especially for machines that run continuously.
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Les Mikesell Sent: Thursday, May 10, 2007 9:41 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Andreas Micklei wrote:
On Thursday, 10 May 2007, Feizhou wrote:
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
sequential will be better than SCSI due to the packing on those platters which make up for the lack in rpm. NCQ should even up the random ability of SATA disks versus SCSI drives but that support has only become available lately on Linux and you also need the right hardware (besides the right disks).
SAS and SCSI really have their place when you need random access with lots of IOs per second, i.e. Fileserver, Database Server. We upgraded our Fileserver (NFS, Samba) from SATA SW Raid to SCSI HW Raid and the difference is HUGE. On the old system a single user doing a large file copy could bring the system almost to a halt. On the new system you do not even notice if one user does a similar operation. However plugging one of the same SCSI discs into your average PC will not give you much advantage.
There is also a line of SATA discs that aim for the low-end server market, the WD-Raptors. They spin at 10,000 rpm and give much better random access performance than normal SATA drives. The price point is very attractive compared to SCSI and SAS. Great alternative for a tight budget.
Here is my favorite site for comparing drives. Has nice background articles.
I've always wanted a dollars to dollars comparison instead of comparing single components, and I've always thought that a bunch of RAM could make up for slow disks in a lot of situations. Has anyone done any sort of tests that would confirm whether a typical user would get better performance from spending that several hundred dollar premium for scsi on additional ram instead? Obviously this will depend to a certain extent on the applications and how much having additional cache can help it, but unless you are continuously writing new data, most things can live in cache - especially for machines that run continuously.
RAM will never make up for it, because users are always accessing files that are just outside of cache in size, especially if you have a lot of files open, and if the disks are slow then the cache will struggle to keep up.
Always strive to get the best quality for the dollar even if quality costs more, because poor performance always makes IT skills look bad.
Better to scale down a project and use quality components than to use lesser quality components and end up with a solution that can't perform.
SATA is good for its size: data warehousing, document imaging, etc.
SCSI/SAS is good for its performance: transactional systems, huge multi-user file access, latency-sensitive data.
-Ross
Ross S. W. Walker wrote:
I've always wanted a dollars to dollars comparison instead of comparing single components, and I've always thought that a bunch of RAM could make up for slow disks in a lot of situations. Has anyone done any sort of tests that would confirm whether a typical user would get better performance from spending that several hundred dollar premium for scsi on additional ram instead? Obviously this will depend to a certain extent on the applications and how much having additional cache can help it, but unless you are continuously writing new data, most things can live in cache - especially for machines that run continuously.
RAM will never make up for it, because users are always accessing files that are just outside of cache in size, especially if you have a lot of files open, and if the disks are slow then the cache will struggle to keep up.
I'm not convinced 'never' is the right answer here although you are of course right that cache can't solve all problems. Most of the speed issues I see on general purpose machines are really from head contention where a hundred different applications and/or users each want the head to be in a different place at the same time and end up waiting for each other's seeks. If some large percent of those requests can resolve from cache you speed up all the others. It's a hard thing to benchmark in ways that match real world use, though.
Always strive to get the best quality for the dollar even if quality costs more, because poor performance always makes IT skills look bad.
This isn't really a quality issue, it's about the tradeoff between a drive system that does somewhat better at actually handling lots of concurrent seek requests vs. cache to avoid the need to do many of those seeks at all. For the cases where the cache works, it will be hundreds of times faster - where it doesn't, the slower drive might be tens of times slower.
Better to scale down a project and use quality components than to use lesser quality components and end up with a solution that can't perform.
If you have an unlimited budget, you'd get both a scsi disk system with a lot of independent heads _and_ load the box with RAM. If you don't, you may have to choose one or the other.
SATA is good for its size: data warehousing, document imaging, etc.
SCSI/SAS is good for its performance: transactional systems, huge multi-user file access, latency-sensitive data.
No argument there, but the biggest difference is in how well they deal with concurrent seek requests. If you have to live with SATA due to the cost difference, letting the OS have some RAM for its intelligent cache mechanisms will sometimes help. I just wish there were some benchmark test values that would help predict how much.
Feizhou wrote:
Ruslan Sivak wrote:
Feizhou wrote:
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
He was probably hinting at me for top posting. Unfortunately, sometimes I write from the blackberry, which only allows top posting. Take it up with RIM.
Hence the smiley.
I know you meant it in a joking way. I'm kinda pissed at RIM though for not letting me reply properly on my blackberry.
SATA drives typically do 60-70MBs, interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
sequential will be better than SCSI due to the packing on those platters which make up for the lack in rpm. NCQ should even up the random ability of SATA disks versus SCSI drives but that support has only become available lately on Linux and you also need the right hardware (besides the right disks).
How would I know if my system is using NCQ? I think my drives and card should support it.
Russ
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Thursday, May 10, 2007 2:39 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Feizhou wrote:
Ruslan Sivak wrote:
Feizhou wrote:
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
He was probably hinting at me for top posting. Unfortunately, sometimes I write from the blackberry, which only allows top posting. Take it up with RIM.
Hence the smiley.
I know you meant it in a joking way. I'm kinda pissed at RIM though for not letting me reply properly on my blackberry.
I understand I often use my BB too and get a lot of flak for top-posting.
F@ck em
SATA drives typically do 60-70MBs, interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a dell PE 2950 of 750GB SATA's vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting a better speed off the SATAs.
sequential will be better than SCSI due to the packing on those platters which make up for the lack in rpm. NCQ should even up the random ability of SATA disks versus SCSI drives but that support has only become available lately on Linux and you also need the right hardware (besides the right disks).
How would I know if my system is using NCQ? I think my drives and card should support it.
Get the controller make/model and look it up, then get the drives make/model and look it up. If they both support NCQ then you should be good.
While you're there, get the sustained data transfer rate, total read seek time (includes rotational latency) and total write seek time.
-Ross
Ross S. W. Walker wrote:
How would I know if my system is using NCQ? I think my drives and card should support it.
Get the controller make/model and look it up, then get the drives make/model and look it up. If they both support NCQ then you should be good.
While you're there, get the sustained data transfer rate, total read seek time (includes rotational latency) and total write seek time.
-Ross
I can get all that data, but can I actually test it somehow? Does linux know anything about NCQ, or is everything abstracted to the controller?
Russ
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Thursday, May 10, 2007 3:44 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Ross S. W. Walker wrote:
How would I know if my system is using NCQ? I think my drives and card should support it.
Get the controller make/model and look it up, then get the drives make/model and look it up. If they both support NCQ then you should be good.
While you're there, get the sustained data transfer rate, total read seek time (includes rotational latency) and total write seek time.
-Ross
I can get all that data, but can I actually test it somehow? Does linux know anything about NCQ, or is everything abstracted to the controller?
Good question, not knowing the answer I did a quick google and this came to the top:
http://linux-ata.org/driver-status.html
another good one,
http://blog.kovyrin.net/2006/08/11/turn-on-ncq-on-ich-linux/
Looks like support wasn't added until 2.6.18 and isn't widely supported until 2.6.19 and 2.6.20.
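(As a rough check from a running system with libata NCQ support; sda is an illustrative device name and the sysfs path assumes the disk is driven by libata:)

dmesg | grep -i ncq                     # libata prints the negotiated NCQ depth per port
cat /sys/block/sda/device/queue_depth   # a value greater than 1 generally means queuing is active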
-Ross
Ross S. W. Walker wrote:
I can get all that data, but can I actually test it somehow? Does linux know anything about NCQ, or is everything abstracted to the controller?
Good question, not knowing the answer I did a quick google and this came to the top:
http://linux-ata.org/driver-status.html
another good one,
http://blog.kovyrin.net/2006/08/11/turn-on-ncq-on-ich-linux/
Looks like support wasn't added until 2.6.18 and isn't widely supported until 2.6.19 and 2.6.20.
-Ross
Ross,
Thank you for the links. Looks like my controller doesn't support NCQ :-(. I have the SIL 3114 based card. Doesn't look like there are any cheap alternatives on the PCI bus, but I think I can live with the performance of this system.
Russ
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Thursday, May 10, 2007 4:31 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Ross S. W. Walker wrote:
I can get all that data, but can I actually test it somehow? Does linux know anything about NCQ, or is everything abstracted to the controller?
Good question, not knowing the answer I did a quick google and this came to the top:
http://linux-ata.org/driver-status.html
another good one,
http://blog.kovyrin.net/2006/08/11/turn-on-ncq-on-ich-linux/
Looks like support wasn't added until 2.6.18 and isn't widely supported until 2.6.19 and 2.6.20.
-Ross
Ross,
Thank you for the links. Looks like my controller doesn't support NCQ :-(. I have the SIL 3114 based card. Doesn't look like there are any cheap alternatives on the PCI bus, but I think I can live with the performance of this system.
How did it go creating the interleaved LVs?
-Ross
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Thursday, May 10, 2007 4:31 PM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Ross S. W. Walker wrote:
I can get all that data, but can I actually test it somehow? Does linux know anything about NCQ, or is everything abstracted to the controller?
Good question, not knowing the answer I did a quick google and this came to the top:
http://linux-ata.org/driver-status.html
another good one,
http://blog.kovyrin.net/2006/08/11/turn-on-ncq-on-ich-linux/
Looks like support wasn't added until 2.6.18 and isn't widely supported until 2.6.19 and 2.6.20.
-Ross
Ross,
Thank you for the links. Looks like my controller doesn't support NCQ :-(. I have the SIL 3114 based card. Doesn't look like there are any cheap alternatives on the PCI bus, but I think I can live with the performance of this system.
How did it go creating the interleaved LVs?
-Ross
It worked... I think.
I already had the LVM partitions set up, but when I booted up into the install and went to the shell, I couldn't see them (although the installer saw them). I had to do raidstart on all my md devices, and then scan by doing something like vgscan and lvscan, and then the devices showed up. So I deleted them and re-added them, per your instructions, and when I printed out the config, it looks like they were striping. I was then able to install on it. So I don't think the reboot step is necessary; you just need to precreate the config manually in the shell first.
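(Roughly the sequence described; the commands below are illustrative, and on a newer tool set mdadm --assemble does what raidstart used to:)

mdadm --assemble --scan   # bring up the existing md arrays from their superblocks
lvm vgscan                # make LVM re-read the PVs sitting on top of them
lvm vgchange -ay          # activate the volume group(s)
lvm lvscan                # the LVs should now be visible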
Thanks for your help.
Russ
Thank you for the links. Looks like my controller doesn't support NCQ :-(. I have the SIL 3114 based card. Doesn't look like there are any cheap alternatives on the PCI bus, but I think I can live with the performance of this system.
How much did your si3114 cost? You can get a si3124 card for under 100USD.
Feizhou wrote:
Thank you for the links. Looks like my controller doesn't support NCQ :-(. I have the SIL 3114 based card. Doesn't look like there are any cheap alternatives on the PCI bus, but I think I can live with the performance of this system.
How much did your si3114 cost? You can get a si3124 card for under 100USD.
I think it was around $25 shipped.
Ruslan Sivak wrote:
Thank you for the links. Looks like my controller doesn't support NCQ :-(. I have the SIL 3114 based card. Doesn't look like there are any cheap alternatives on the PCI bus, but I think I can live with the performance of this system.
Russ
Promise SATA300 TX4 ~ $60 @ newegg
Fresh install of CentOS 5 x86_64:
sata_promise 0000:01:0a.0: version 1.04
ACPI: PCI Interrupt Link [APC2] enabled at IRQ 17
GSI 20 sharing vector 0x3A and IRQ 20
ACPI: PCI Interrupt 0000:01:0a.0[A] -> Link [APC2] -> GSI 17 (level, low) -> IRQ 58
ata5: SATA max UDMA/133 cmd 0xFFFFC20000020200 ctl 0xFFFFC20000020238 bmdma 0x0 irq 58
ata6: SATA max UDMA/133 cmd 0xFFFFC20000020280 ctl 0xFFFFC200000202B8 bmdma 0x0 irq 58
ata7: SATA max UDMA/133 cmd 0xFFFFC20000020300 ctl 0xFFFFC20000020338 bmdma 0x0 irq 58
ata8: SATA max UDMA/133 cmd 0xFFFFC20000020380 ctl 0xFFFFC200000203B8 bmdma 0x0 irq 58
scsi4 : sata_promise
ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata5.00: ATA-8, max UDMA7, 976773168 sectors: LBA48 NCQ (depth 0/32)
ata5.00: configured for UDMA/133
scsi5 : sata_promise
ata6: SATA link down (SStatus 0 SControl 300)
scsi6 : sata_promise
ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata7.00: ATA-8, max UDMA7, 976773168 sectors: LBA48 NCQ (depth 0/32)
ata7.00: configured for UDMA/133
scsi7 : sata_promise
ata8: SATA link down (SStatus 0 SControl 300)
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Thursday, May 10, 2007 1:43 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Feizhou wrote:
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
He was probably hinting at me for top posting. Unfortunately, sometimes I write from the blackberry, which only allows top posting. Take it up with RIM.
SATA drives typically do 60-70MB/s; interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Probably not, but is SATA really much worse than SCSI or SAS? I did some testing on a Dell PE 2950 of 750GB SATAs vs SAS and SCSI drives, and the SATA drives seem to be faster, at least at first glance. I don't have good numbers from the SCSI tests, but at least for sequential, I'm getting better speed off the SATAs.
SATA and SCSI/SAS should give comparable single work-load sequential numbers, but most SCSI/SAS have better seek times so random IO will be better on those. Also SCSI/SAS support tagged command queuing, which allows multiple overlapping IOs so you will tend to see better mixed workload performance compared to SATA (multi-user environment).
Having said that some of the new SATA models that support NCQ, the SATA version of TCQ, and that use some of the same SCSI/SAS onboard processing (Western Digital Raptors) can approach or equal SCSI/SAS mixed load performance, but their price also approaches or equals SCSI/SAS and their spindle speeds still do not top 10K so random will not be as good.
-Ross
SATA and SCSI/SAS should give comparable single work-load sequential numbers, but most SCSI/SAS have better seek times so random IO will be better on those. Also SCSI/SAS support tagged command queuing, which allows multiple overlapping IOs so you will tend to see better mixed workload performance compared to SATA (multi-user environment).
Having said that some of the new SATA models that support NCQ, the SATA version of TCQ, and that use some of the same SCSI/SAS onboard processing (Western Digital Raptors) can approach or equal SCSI/SAS mixed load performance, but their price also approaches or equals SCSI/SAS and their spindle speeds still do not top 10K so random will not be as good.
Nope. SATA drives with NCQ support are in the same price range as non-NCQ drives, if not the same price, which is far less costly than SCSI/SAS drives. Also, the incredible areal density needed to pack 200GB-750GB onto the platters makes up for the lower RPM, especially at the higher capacities.
A 73GB 15K RPM SCSI/SAS drive is in the same price range as a 750GB 7.5K RPM NCQ capable SATA drive. Ten times the density at half the speed. I reckon it would give the SCSI drive a run for its money in regard to random access times.
The SCSI drive:
Spindle Speed: 15000 rpm
Average latency: 2.0 msec
Random read seek time: 3.50 msec
Random write seek time: 4.0 msec

The SATA drive:
Spindle Speed: 7200 rpm
Native Command Queuing: Yes
Average latency: 4.16 msec
Random read seek time: <8.5 msec
Random write seek time: <10.0 msec
Maximum interface transfer rate: 300 Mbytes/sec
Without NCQ enabled, it will take twice as long as the scsi drive. With NCQ enabled, the game changes.
Compare to a 10K scsi drive:
Spindle Speed: 10,000 rpm
Sustained data transfer rate: 80 Mbytes/sec
Average latency: 3.0 msec
Random read seek time: 4.9 msec
Random write seek time: 5.4 msec
Maximum interface transfer rate: 320 Mbytes/sec
NCQ SATA is almost a no-brainer.
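(A quick sanity check on the latency figures quoted above: average rotational latency is just half a revolution, i.e. 30000/RPM milliseconds, which matches the vendor numbers:

30000 / 15000 rpm = 2.0 ms
30000 / 10000 rpm = 3.0 ms
30000 / 7200 rpm  = 4.17 ms

The random seek times, which depend on the actuator rather than the spindle, are where the 15K drives keep their real edge.)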
NCQ SATA is almost a no-brainer.
Feizhou,
It is only a no brainer if you have no brains right?
And I am quite certain that you do have some brains.
7200 rpm consumer sata drives are not designed for the same load and duty cycle as a 15k scsi unit...
Plus, in general, you would need a special aftermarket controller and then again, what about hot swapping etc etc
- rh
-- Abba Communications Spokane, WA www.abbacomm.net
Abba Communications - www.abbacomm.net wrote:
NCQ SATA is almost a no-brainer.
Feizhou,
It is only a no brainer if you have no brains right?
And I am quite certain that you do have some brains.
7200 rpm consumer sata drives are not designed for the same load and duty cycle as a 15k scsi unit...
ROTFL. I am sorry but 'consumer' 7200 RPM PATA/SATA drives can very well do 24/7 for years. Did you read Google's report? There is NO DIFFERENCE in terms of endurance between SCSI/PATA/SATA drives.
I worked for an outfit that handled over 40 million mailboxes and tackled around 200 million smtp transactions daily. My responsibilities were the mail delivery system, with over 30 boxes under my direct administration. I did not get more trouble/failures from PATA or SATA drive boxes than SCSI boxes.
Plus, in general, you would need a special aftermarket controller and then again, what about hot swapping etc etc
Where have you been in the last few years? SATA has been hot-swappable since its conception, and almost any SATA controller can do hot swap. Support for hot swapping SCSI under Linux had long been a hack job for admins, and only recently has that changed. SATA hot-plug support is in CentOS 5 for AHCI-compliant chipsets and Silicon Image chipsets, which covers quite a few motherboards with onboard SATA.
The problem is not my mush but yours. Yours is stuck in the last decade.
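For what it's worth, on a libata/AHCI setup a hot-added disk can usually be picked up without a reboot by rescanning the SCSI host it hangs off. A rough sketch (the host number is only an example):

# list the SCSI hosts the SATA ports are registered as
ls /sys/class/scsi_host/

# rescan one of them so the newly plugged drive is probed
echo "- - -" > /sys/class/scsi_host/host0/scan

# the new disk should then show up in the kernel log
dmesg | tail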
On Friday, 11 May 2007, Feizhou wrote:
SATA and SCSI/SAS should give comparable single work-load sequential numbers, but most SCSI/SAS have better seek times so random IO will be better on those. Also SCSI/SAS support tagged command queuing, which allows multiple overlapping IOs so you will tend to see better mixed workload performance compared to SATA (multi-user environment).
Having said that some of the new SATA models that support NCQ, the SATA version of TCQ, and that use some of the same SCSI/SAS onboard processing (Western Digital Raptors) can approach or equal SCSI/SAS mixed load performance, but their price also approaches or equals SCSI/SAS and their spindle speeds still do not top 10K so random will not be as good.
Nope. SATA drives with NCQ support are in the same price range as non-NCQ drives, if not the same price, which is far less costly than SCSI/SAS drives. Also, the incredible areal density needed to pack 200GB-750GB onto the platters makes up for the lower RPM, especially at the higher capacities.
A 73GB 15K RPM SCSI/SAS drive is in the same price range as a 750GB 7.5K RPM NCQ capable SATA drive. Ten times the density at half the speed. I reckon it would give the SCSI drive a run for its money in regard to random access times.
The SCSI drive:
Spindle Speed: 15000 rpm
Average latency: 2.0 msec
Random read seek time: 3.50 msec
Random write seek time: 4.0 msec

The SATA drive:
Spindle Speed: 7200 rpm
Native Command Queuing: Yes
Average latency: 4.16 msec
Random read seek time: <8.5 msec
Random write seek time: <10.0 msec
Maximum interface transfer rate: 300 Mbytes/sec
Without NCQ enabled, it will take twice as long as the scsi drive. With NCQ enabled, the game changes.
Compare to a 10K scsi drive:
Spindle Speed: 10,000 rpm
Sustained data transfer rate: 80 Mbytes/sec
Average latency: 3.0 msec
Random read seek time: 4.9 msec
Random write seek time: 5.4 msec
Maximum interface transfer rate: 320 Mbytes/sec
NCQ SATA is almost a no-brainer.
Sorry, but you are comparing apples and oranges.
Sure the SATA drives look good on paper, and sure they perform well in applications without lots of parallel I/Os and seeks across the whole disk. For a more useful comparison regarding file servers or database servers, look into the performance database of storagereview.com:
http://www.storagereview.com/comparison.html
Select for example "IOMeter File Server - 16 I/O". The top drives are all 15k SCSI or SAS, followed by 10K SCSI or SAS, followed by the Raptors and then the rest of the SATA drives.
Sure the SATA drives are still acceptable for a wide range of server applications, especially since you can use lots and lots because of their attractive price point.
But still... apples and oranges.
regards, Andreas Micklei
Sorry, but you are comparing apples and oranges.
Sure the SATA drives look good on paper, and sure they perform well in applications without lots of parallel I/Os and seeks across the whole disk. For a more useful comparison regarding file servers or database servers, look into the performance database of storagereview.com:
Thanks.
http://www.storagereview.com/comparison.html
Select for example "IOMeter File Server - 16 I/O". The top drives are all 15k SCSI or SAS, followed by 10K SCSI or SAS, followed by the Raptors and then the rest of the SATA drives.
So I see. A 2.5" drive (higher density platters? probably they all use the same density, just smaller platters, come to think of it...) running at the highest speed takes the crown. Current SATA versus the older 15k RPM SCSI generation then :D
Sure the SATA drives are still acceptable for a wide range of server applications, especially since you can use lots and lots because of their attractive price point.
Yeah.
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Feizhou Sent: Thursday, May 10, 2007 1:22 AM To: CentOS mailing list Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Ross S. W. Walker wrote:
Hey look at me! I'm top-posting!!! Nanny-nanny-poo-poo
Come get me Trolls!
Please do not top post. :)
I deserved that!
SATA drives typically do 60-70MB/s; interleaved you should see 120-140MB/s on sequential. Random IO on SATA usually sucks too badly to even talk about...
Eh? It cannot be worse than PATA drives now can it?
Yes, true, PATA is definitely on the low end, but RLL/ESDI can probably come in even under those... if you can even get those to work under current kernels...
-Ross
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Monday, May 07, 2007 4:00 PM To: CentOS mailing list Subject: Re: [CentOS] Anaconda doesn't support raid10
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Monday, May 07, 2007 12:53 PM To: CentOS mailing list Subject: [CentOS] Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was
finally able to
create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install.
Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the
raid10 array
won't show up. Looking through the logs (Alt-F3), I see the following warning:
WARNING: raid level RAID10 not supported, skipping md10.
I'm starting to hate the installer more and more. Why won't it let me install on this device, even though it's working perfectly
from the
shell? Why am I the only one having this problem? Is nobody out there using md based raid10?
Most people install the OS on a 2 disk raid1, then create a separate raid10 for data storage.
Anaconda was never designed to create RAID5/RAID10 during install.
-Ross
Whether or not it was designed to create a Raid5/raid10, it allows the creating of raid5 and raid6 during install. It doesn't, however, allow the use of raid10 even if it's created in the shell outside of anaconda (or if you have an old installation on a raid10).
I've just installed the system as follows
Raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
after installing, I was able to create a raid10 device and successfully mount and automount by using /etc/fstab
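In command form, that post-install raid10 setup would look roughly like the sketch below. The thread doesn't give the exact partitions, filesystem or mount point, so those are placeholders:

# build the raid10 array out of one partition per disk
mdadm --create /dev/md10 --level=10 --raid-devices=4 \
    /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4

# filesystem, mount point, and an fstab entry so it mounts at boot
mkfs.ext3 /dev/md10
mkdir -p /data
mount /dev/md10 /data
echo "/dev/md10  /data  ext3  defaults  1 2" >> /etc/fstab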
Now to test what happens when a drive fails. I pulled out the first drive - Box refuses to boot. Going into rescue mode, I was able to mount /boot, was not able to mount the swap drive (as to be expected, as it's a raid0), was also not able to mount the / for some reason, which is a little surprising.
I was able to mount the raid10 partition just fine.
Maybe I messed up somewhere along the line. I'll try again, but it's disheartening to see that a raid6 array would die after one drive failure, even if it was somehow my fault.
Also, assuming that the raid6 array could be recovered, what would I do with the swap partition? Would I just recreate it from the space on the remaining drives, and would that be all I need to boot?
Ok, my bad: raid5/6 can be created during install even if the OS can't boot from it.
I guess raid10 is the red headed stepchild of anaconda...
I suggest this:
/dev/md0 raid1, 128MB partition, all 4 drives, for /boot
/dev/md1 raid1, rest of drive space, first 2 drives, for lvm
/dev/md2 raid1, rest of drive space, second 2 drives, for lvm

lvm volgroup CentOS, comprised of /dev/md1 and /dev/md2
logical vol1, root, interleave 2, mount /, 16GB
logical vol2, swap, interleave 2, swapfs, 4GB
This will provide the same performance and fail-over as a raid10.
If you remove the first disk and boot make sure BIOS is set to boot off of disk2!
-Ross
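In rough command form (disk names and sizes are only placeholders, and this is a sketch rather than a tested recipe), the layout Ross suggests above would be:

# four-way mirror for /boot so any single surviving disk can boot
mdadm --create /dev/md0 --level=1 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# two mirrored pairs out of the remaining space
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

# one volume group across both mirrors, LVs striped over them (interleave 2)
pvcreate /dev/md1 /dev/md2
vgcreate CentOS /dev/md1 /dev/md2
lvcreate -n root -L 16G -i 2 CentOS
lvcreate -n swap -L 4G -i 2 CentOS
mkfs.ext3 /dev/CentOS/root
mkswap /dev/CentOS/swap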
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Monday, May 07, 2007 4:00 PM To: CentOS mailing list Subject: Re: [CentOS] Anaconda doesn't support raid10
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak Sent: Monday, May 07, 2007 12:53 PM To: CentOS mailing list Subject: [CentOS] Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was
finally able to
create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install.
Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the
raid10 array
won't show up. Looking through the logs (Alt-F3), I see the following warning:
WARNING: raid level RAID10 not supported, skipping md10.
I'm starting to hate the installer more and more. Why won't it let me install on this device, even though it's working perfectly
from the
shell? Why am I the only one having this problem? Is nobody out there using md based raid10?
Most people install the OS on a 2 disk raid1, then create a separate raid10 for data storage.
Anaconda was never designed to create RAID5/RAID10 during install.
-Ross
Whether or not it was designed to create a Raid5/raid10, it allows the creating of raid5 and raid6 during install. It doesn't, however, allow the use of raid10 even if it's created in the shell outside of anaconda (or if you have an old installation on a raid10).
I've just installed the system as follows
Raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
after installing, I was able to create a raid10 device and successfully mount and automount by using /etc/fstab
Now to test what happens when a drive fails. I pulled out the first drive - Box refuses to boot. Going into rescue mode, I was able to mount /boot, was not able to mount the swap drive (as to be expected, as it's a raid0), was also not able to mount the / for some reason, which is a little surprising.
I was able to mount the raid10 partition just fine.
Maybe I messed up somewhere along the line. I'll try again, but it's disheartening to see that a raid6 array would die after one drive failure, even if it was somehow my fault.
Also, assuming that the raid6 array could be recovered, what would I do with the swap partition? Would I just recreate it from the space on the remaining drives, and would that be all I need to boot?
Ok, my bad: raid5/6 can be created during install even if the OS can't boot from it.
I guess raid10 is the red headed stepchild of anaconda...
I suggest this:
/dev/md0 raid1, 128MB partition, all 4 drives, for /boot
/dev/md1 raid1, rest of drive space, first 2 drives, for lvm
/dev/md2 raid1, rest of drive space, second 2 drives, for lvm

lvm volgroup CentOS, comprised of /dev/md1 and /dev/md2
logical vol1, root, interleave 2, mount /, 16GB
logical vol2, swap, interleave 2, swapfs, 4GB
This will provide the same performance and fail-over as a raid10.
If you remove the first disk and boot make sure BIOS is set to boot off of disk2!
-Ross
I don't seem to be able to control the interleave through anaconda. Is this something that can be done post install?
Also I'm not very comfortable using LVM yet. Just getting used to md.
Russ
On 5/7/07, Ruslan Sivak rsivak@istandfor.com wrote:
Ross S. W. Walker wrote:
Whether or not it was designed to create a Raid5/raid10, it allows the creating of raid5 and raid6 during install. It doesn't, however, allow the use of raid10 even if it's created in the shell outside of anaconda (or if you have an old installation on a raid10).
My guess is that software Raid10 is still in the "eats its children" stage. Most of the time I found it reliable, but I remember the kernel developers having lots of race-condition problems at times. In the limited-memory install environment that is anaconda, these might creep up more often, which would explain why it is only supported by hand.
Stephen John Smoogen wrote:
On 5/7/07, Ruslan Sivak rsivak@istandfor.com wrote:
Ross S. W. Walker wrote:
Whether or not it was designed to create a Raid5/raid10, it allows the creating of raid5 and raid6 during install. It doesn't, however, allow the use of raid10 even if it's created in the shell outside of anaconda (or if you have an old installation on a raid10).
My guess is that software Raid10 is still in the "eats its children" stage. Most of the time I found it reliable, but I remember the kernel developers having lots of race-condition problems at times. In the limited-memory install environment that is anaconda, these might creep up more often, which would explain why it is only supported by hand.
Well it's not even supported by hand. Anaconda flat out refuses to let you use it. Even when I create the device by hand (be it by loading the raid10 module manually and doing mdadm, or by making 2 raid1's and putting a raid0 on top of them), anaconda just flat out refuses to see the final device.
Russ
Ruslan Sivak spake the following on 5/7/2007 2:22 PM:
Stephen John Smoogen wrote:
On 5/7/07, Ruslan Sivak rsivak@istandfor.com wrote:
Ross S. W. Walker wrote:
Whether or not it was designed to create a Raid5/raid10, it allows the creating of raid5 and raid6 during install. It doesn't, however, allow the use of raid10 even if it's created in the shell outside of anaconda (or if you have an old installation on a raid10).
My guess is that software Raid10 is still in the "eats its children" stage. Most of the time I found it reliable, but I remember the kernel developers having lots of race-condition problems at times. In the limited-memory install environment that is anaconda, these might creep up more often, which would explain why it is only supported by hand.
Well it's not even supported by hand. Anaconda flat out refuses to let you use it. Even when I create the device by hand (be it by loading the raid10 module manually and doing mdadm, or by making 2 raid1's and putting a raid0 on top of them), anaconda just flat out refuses to see the final device.
Russ
I don't think anaconda will support it or raid 6 for the foreseeable future. You have to throw up an upstream complaint, and I doubt you would see anything till Fedora Core 8 or so. RedHat's stance seems to be that if you need that kind of reliability, buy hardware raid that does raid 10
Scott Silva wrote:
Ruslan Sivak spake the following on 5/7/2007 2:22 PM:
Stephen John Smoogen wrote:
On 5/7/07, Ruslan Sivak rsivak@istandfor.com wrote:
Ross S. W. Walker wrote:
Whether or not it was designed to create a Raid5/raid10, it allows the creating of raid5 and raid6 during install. It doesn't, however, allow the use of raid10 even if it's created in the shell outside of anaconda (or if you have an old installation on a raid10).
My guess is that software Raid10 is still in the "eats its children" stage. Most of the time I found it reliable, but I remember the kernel developers having lots of race-condition problems at times. In the limited-memory install environment that is anaconda, these might creep up more often, which would explain why it is only supported by hand.
Well it's not even supported by hand. Anaconda flat out refuses to let you use it. Even when I create the device by hand (be it by loading the raid10 module manually and doing mdadm, or by making 2 raid1's and putting a raid0 on top of them), anaconda just flat out refuses to see the final device.
Russ
I don't think anaconda will support it or raid 6 for the foreseeable future. You have to throw up an upstream complaint, and I doubt you would see anything till Fedora Core 8 or so. RedHat's stance seems to be that if you need that kind of reliability, buy hardware raid that does raid 10
Anaconda does now support raid6. The only reason I didn't go with it is that I heard that performance would be very bad on it.
Russ
On Monday, 7 May 2007, Ruslan Sivak wrote:
I've just installed the system as follows
Raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
NEVER EVER use raid0 for swap if you want reliability. If one drive fails, the virtual memory gets corrupted and the machine will crash horribly (tm). Besides, creating separate swap partitions on different physical discs will give you the same kind of performance, so using striping on a swap partition is kind of useless for gaining performance.
I suggest using raid-1 or raid-6 for swap, so the machine can stay up if one drive fails.
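A minimal sketch of mirrored swap along those lines, done post-install (the partition names are only examples):

# raid1 across two partitions, then use the md device as swap
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md3
swapon /dev/md3

# matching /etc/fstab entry:
# /dev/md3  swap  swap  defaults  0 0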
Andreas Micklei wrote:
On Monday, 7 May 2007, Ruslan Sivak wrote:
I've just installed the system as follows
Raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
NEVER EVER use raid0 for swap if you want reliability. If one drive fails, the virtual memory gets corrupted and the machine will crash horribly (tm). Besides, creating separate swap partitions on different physical discs will give you the same kind of performance, so using striping on a swap partition is kind of useless for gaining performance.
I suggest using raid-1 or raid-6 for swap, so the machine can stay up if one drive fails.
Interesting thing... I built the following setup:

/boot on raid1
swap on raid0
/ on raid6
/data on 2 lvm raid1's.

I shut down and plucked out one of the drives (the 3rd one, I believe). Booted back up, and everything was fine. Even swap (I think). I rebooted, put in the old drive, hot-added the partitions and everything rebuilt beautifully. (Again, not sure about swap.)

I decided to run one more test. I plucked out the first (boot) drive. Upon reboot, I got greeted by GRUB all over the screen. Upon booting into rescue mode, it couldn't find any partitions. I was able to mount /boot, and it let me recreate the raid1 partitions, but no luck with raid6. This is the second time this has happened. Am I doing something wrong? It seems that when I pluck out the first drive, the drive letters shift (since sda is missing, sdb becomes sda, sdc becomes sdb and sdd becomes sdc).

What's the proper repair method for a raid6 in this case? Or should I just avoid raid6 and put / on an LVM of 2 raid1's? Any way to set up interleaving (although testing raid1 vs raid10 with hdparm -t gives only a marginal performance improvement)?
Russ
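One likely culprit for the "GRUB all over the screen" symptom is that grub's stage1 only lives in the first disk's MBR, so nothing bootable remains once sda is pulled. A sketch of installing grub legacy (the CentOS 4/5 grub shell) onto another member of the /boot mirror; device names are examples, and when sda is removed the BIOS presents the surviving disk as hd0:

# from the installed system or rescue mode, for each extra disk in the
# /boot mirror (here /dev/sdb), run the grub shell and point it at that disk
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Repeat with /dev/sdc and /dev/sdd so any surviving member can boot the box.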
On Tuesday, 8 May 2007, Ruslan Sivak wrote:
Andreas Micklei wrote:
On Monday, 7 May 2007, Ruslan Sivak wrote:
I've just installed the system as follows
Raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
NEVER EVER use raid0 for swap if you want reliability. If one drive fails, the virtual memory gets corrupted and the machine will crash horribly (tm). Besides, creating separate swap partitions on different physical discs will give you the same kind of performance, so using striping on a swap partition is kind of useless for gaining performance.
I suggest using raid-1 or raid-6 for swap, so the machine can stay up if one drive fails.
Interesting thing... I built the following setup:

/boot on raid1
swap on raid0
/ on raid6
/data on 2 lvm raid1's.
Again:
http://tldp.org/HOWTO/Software-RAID-HOWTO-2.html#ss2.3
I shut down and plucked out one of the drives (the 3rd one, I believe). Booted back up, and everything was fine. Even swap (I think). I rebooted, put in the old drive, hot-added the partitions and everything rebuilt beautifully. (Again, not sure about swap.)
Swap probably was not used at this time, or else your machine would have crashed. RAID-0 does not degrade when you pull out one disc; it simply fails. So the effect when swap is in use is the same as a RAM module going bad.
I decided to run one more test. I plucked out the first (boot) drive. Upon reboot, I got greeted by GRUB all over the screen. Upon booting into rescue mode, it couldn't find any partitions. I was able to mount /boot, and it let me recreate the raid1 partitions, but no luck with raid6. This is the second time this has happened. Am I doing something wrong? It seems that when I pluck out the first drive, the drive letters shift (since sda is missing, sdb becomes sda, sdc becomes sdb and sdd becomes sdc).

What's the proper repair method for a raid6 in this case? Or should I just avoid raid6 and put / on an LVM of 2 raid1's? Any way to set up interleaving (although testing raid1 vs raid10 with hdparm -t gives only a marginal performance improvement)?
I haven't played with software RAID-6 and only use software RAID-5 on one machine currently (RAID-1 for boot). I am also not very familiar with LVM, so I can't be of much help, I fear. However, I find the Linux Software RAID HOWTO a very valuable resource, although it is a few years old:
http://tldp.org/HOWTO/Software-RAID-HOWTO.html
regards, Andreas Micklei
Ruslan Sivak wrote:
Interesting thing... I built the following setup:

/boot on raid1
swap on raid0
/ on raid6
/data on 2 lvm raid1's.

I shut down and plucked out one of the drives (the 3rd one, I believe). Booted back up, and everything was fine. Even swap (I think). I rebooted, put in the old drive, hot-added the partitions and everything rebuilt beautifully. (Again, not sure about swap.)
well, if your drive has the courtesy to die while your computer is shut down, then you know it will boot. but if it fails while the system is running, I can pretty much guarantee you'll get a kernel panic.

of course, a raid0 with a missing member simply doesn't exist, so undoubtedly, if you'd checked, you'd have found your system was running without any swap at all.

unclear if your raid0 rebuilt itself; the mdadm --misc --detail /dev/mdX command will show the status.
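Both checks are quick from the running system or rescue mode (the md device name here is just an example):

# one-line summary of every array, including any resync in progress
cat /proc/mdstat

# full detail for a particular array: state, active/failed/spare members
mdadm --misc --detail /dev/md1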
Ruslan Sivak wrote:
Interesting thing... I built the following setup:

/boot on raid1
swap on raid0
Swap on raid1 has a chance of working through a drive failure. Raid0 doesn't.
/ on raid6
Does the installer do that?
/data on 2 lvm raid1's.
If you are going to use LVM you don't have to match your partitions across all 4 drives. Put /boot, swap, / on raid1 on the 1st 2 drives with another raid1 for the rest of the space. Then make a raid1 using partitions that fill your 3rd and 4th drive and combine the two large raid1's in LVM. That leaves it so you can expand if you want to add more drives.
I shut down and plucked out one of the drives (the 3rd one, I believe). Booted back up, and everything was fine. Even swap (I think). I rebooted, put in the old drive, hot-added the partitions and everything rebuilt beautifully. (Again, not sure about swap.)

I decided to run one more test. I plucked out the first (boot) drive. Upon reboot, I got greeted by GRUB all over the screen. Upon booting into rescue mode, it couldn't find any partitions. I was able to mount /boot, and it let me recreate the raid1 partitions, but no luck with raid6. This is the second time this has happened. Am I doing something wrong? It seems that when I pluck out the first drive, the drive letters shift (since sda is missing, sdb becomes sda, sdc becomes sdb and sdd becomes sdc).
The only thing that should care about this is grub. Everything else should autodetect.
What's the proper repair method for a raid6 in this case? Or should I just avoid raid6 and put / on an LVM of 2 raid1's?
I'd put / on one raid1 with no LVM. And personally, I'd do the same with the rest of the space and deal with the extra partition by mounting it somewhere. LVM avoids the need for that, but at the expense of no longer being able to recover data from any single drive.
Les Mikesell wrote:
Ruslan Sivak wrote:
Interesting thing... I built the following setup:

/boot on raid1
swap on raid0
Swap on raid1 has a chance of working through a drive failure. Raid0 doesn't.
/ on raid6
Does the installer do that?
/data on 2 lvm raid1's.
If you are going to use LVM you don't have to match your partitions across all 4 drives. Put /boot, swap, / on raid1 on the 1st 2 drives with another raid1 for the rest of the space. Then make a raid1 using partitions that fill your 3rd and 4th drive and combine the two large raid1's in LVM. That leaves it so you can expand if you want to add more drives.
I shut down and plucked out one of the drives (the 3rd one, I believe). Booted back up, and everything was fine. Even swap (I think). I rebooted, put in the old drive, hot-added the partitions and everything rebuilt beautifully. (Again, not sure about swap.)

I decided to run one more test. I plucked out the first (boot) drive. Upon reboot, I got greeted by GRUB all over the screen. Upon booting into rescue mode, it couldn't find any partitions. I was able to mount /boot, and it let me recreate the raid1 partitions, but no luck with raid6. This is the second time this has happened. Am I doing something wrong? It seems that when I pluck out the first drive, the drive letters shift (since sda is missing, sdb becomes sda, sdc becomes sdb and sdd becomes sdc).
The only thing that should care about this is grub. Everything else should autodetect.
What's the proper repair method for a raid6 in this case? Or should I just avoid raid6 and put / on an LVM of 2 raid1's?
I'd put / on one raid1 with no LVM.
Call that raid1 #0.
Put the rest of disks 1 & 2 into raid1 #1, put all of disks 3 & 4 into raid1 #2, and put raid1 #1 & raid1 #2 into LVM. So your LVM size should be something less than (2 * disk) - OS, everything mirrored.
And personally, I'd do the same with the rest of the space and deal with the extra partition by mounting it somewhere. LVM avoids the need for that, but at the expense of no longer being able to recover data from any single drive.