I have a general question about software RAID performance. When using two IDE disks in RAID 1: is there much of a difference performance-wise if I put both on one IDE port as master and slave, or should I put them on separate ports? I remember from the old days that it was recommended to put the CD and hard disk on different ports to maximize throughput, and since RAID 1 theoretically writes to both disks at the same time, I guess putting them on different ports might be an advantage. Is it, or is it negligible?
Kai
on 10/18/2007 7:57 AM Kai Schaetzl spake the following:
> I have a general question about software RAID performance. When using two IDE disks in RAID 1: is there much of a difference performance-wise if I put both on one IDE port as master and slave, or should I put them on separate ports? I remember from the old days that it was recommended to put the CD and hard disk on different ports to maximize throughput, and since RAID 1 theoretically writes to both disks at the same time, I guess putting them on different ports might be an advantage. Is it, or is it negligible?
> Kai
The speed issue is not the problem. The problem is that if one disk dies while the system is running, it usually locks up the IDE channel until a reboot, so the other drive would also be locked. That sort of kills one of the reasons for RAID. You can pick up PCI IDE cards cheaply and put the drives on their own ports.
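For reference, handling a failed mirror member with mdadm looks roughly like this; the array name /dev/md0 and the member device names are only assumptions for illustration:

    cat /proc/mdstat                    # a failed mirror member shows up as [U_]
    mdadm --detail /dev/md0             # per-member state (active, faulty, removed)
    mdadm /dev/md0 --remove /dev/hdb1   # drop the member once it is marked faulty
    # power down, swap the disk, partition it like its partner, then:
    mdadm /dev/md0 --add /dev/hdb1      # re-add; the mirror resyncs in the background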
Scott Silva wrote on Thu, 18 Oct 2007 12:51:38 -0700:
> The speed issue is not the problem. The problem is that if one disk dies while the system is running, it usually locks up the IDE channel until a reboot, so the other drive would also be locked.
Ah, I see, that makes sense.
> That sort of kills one of the reasons for RAID. You can pick up PCI IDE cards cheaply and put the drives on their own ports.
Well, there are two ports on the board. But room in the 1U boxes is scarce, so I started out by just adding another disk and refrained from dragging another IDE cable through the holes inside the box. So I already have the two drives in this master/slave setup. The documentation says that mdadm.conf holds all the information, but it doesn't specify any hard disk devices. Are the UUIDs really all it needs to reassemble the array? So it doesn't matter (for RAID) that a drive moves from hdb to hdc?
Kai
on 10/18/2007 3:54 PM Kai Schaetzl spake the following:
> Scott Silva wrote on Thu, 18 Oct 2007 12:51:38 -0700:
>> The speed issue is not the problem. The problem is that if one disk dies while the system is running, it usually locks up the IDE channel until a reboot, so the other drive would also be locked.
> Ah, I see, that makes sense.
>> That sort of kills one of the reasons for RAID. You can pick up PCI IDE cards cheaply and put the drives on their own ports.
> Well, there are two ports on the board. But room in the 1U boxes is scarce, so I started out by just adding another disk and refrained from dragging another IDE cable through the holes inside the box. So I already have the two drives in this master/slave setup. The documentation says that mdadm.conf holds all the information, but it doesn't specify any hard disk devices. Are the UUIDs really all it needs to reassemble the array? So it doesn't matter (for RAID) that a drive moves from hdb to hdc?
> Kai
As long as the partitions are type Linux raid autodetect (fd, I believe), it should pick them up. Having a CD-ROM on the channel shouldn't be a problem as long as the drive supports UDMA, which I think modern drives do. And losing the CD along with the failed drive is not a big deal.
You still need to reboot to replace the failed drive, but the system should run until you get there. Think about also putting your swap on RAID if the box swaps occasionally.
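To the UUID question: yes, mdadm identifies members by the UUID stored in each partition's RAID superblock, so a drive moving from hdb to hdc is harmless. A minimal sketch of populating mdadm.conf (the UUID shown here is made up):

    echo 'DEVICE partitions' > /etc/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm.conf
    # resulting line, roughly:
    # ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b8b4567:327b23c6:643c9869:66334873

With 'DEVICE partitions', mdadm scans every partition it can see and matches members by UUID, regardless of device name. The fd partition type additionally lets the kernel autodetect and start the arrays at boot.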
Scott Silva wrote on Thu, 18 Oct 2007 16:05:09 -0700:
> As long as the partitions are type Linux raid autodetect (fd, I believe), it should pick them up.
Yeah, it wasn't a problem at all.
> Having a CD-ROM on the channel shouldn't be a problem as long as the drive supports UDMA, which I think modern drives do. And losing the CD along with the failed drive is not a big deal.
No CDROM, anyway :-)
> You still need to reboot to replace the failed drive, but the system should run until you get there. Think about also putting your swap on RAID if the box swaps occasionally.
Yeah, I thought about that before partitioning and first wanted to exclude swap. Obviously it wouldn't be a problem if a disk goes away at boot, but it could be if it dies in the middle of operation and a program urgently needs data that is swapped out there. I'm using a simple scheme now: there are only two partitions, one holds /boot and the other holds LVM with /, /tmp, /var and swap.
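For concreteness, a sketch of how such a layout might be built from scratch; the device names, volume group name, and sizes below are invented for illustration, not taken from the thread:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1   # small mirror for /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2   # mirror used as LVM PV
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 8G -n root vg0
    lvcreate -L 4G -n var  vg0
    lvcreate -L 1G -n tmp  vg0
    lvcreate -L 1G -n swap vg0
    mkswap /dev/vg0/swap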
I also installed grub on both drives. Since I couldn't find my old notes on how to do that, I googled for it again and found that the page with the information no longer exists. A *lot* of sites point to it, so this is a real pain. I finally found it via archive.org. I wonder if the content shouldn't be included as original text in the wiki, with the FAQ pointing there instead? (You can't link directly to the page at archive.org, and one can't be sure it will stay available there.) http://www.centos.org/modules/smartfaq/faq.php?faqid=47 (the last link)
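The recipe the missing page described is presumably the usual GRUB-legacy one: temporarily map (hd0) to the second disk so its boot sector is written as if it were the first drive. A sketch, assuming /boot is the first partition on both hda and hdb (adjust device names to your setup):

    grub
    grub> device (hd0) /dev/hda
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> device (hd0) /dev/hdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit

That way, if the first disk dies, the surviving disk still has a boot sector that can find /boot on its own first partition.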
Kai
on 10/22/2007 7:32 AM Kai Schaetzl spake the following:
> Scott Silva wrote on Thu, 18 Oct 2007 16:05:09 -0700:
>> As long as the partitions are type Linux raid autodetect (fd, I believe), it should pick them up.
> Yeah, it wasn't a problem at all.
>> Having a CD-ROM on the channel shouldn't be a problem as long as the drive supports UDMA, which I think modern drives do. And losing the CD along with the failed drive is not a big deal.
> No CDROM, anyway :-)
>> You still need to reboot to replace the failed drive, but the system should run until you get there. Think about also putting your swap on RAID if the box swaps occasionally.
> Yeah, I thought about that before partitioning and first wanted to exclude swap. Obviously it wouldn't be a problem if a disk goes away at boot, but it could be if it dies in the middle of operation and a program urgently needs data that is swapped out there. I'm using a simple scheme now: there are only two partitions, one holds /boot and the other holds LVM with /, /tmp, /var and swap.
If your swap is on LVM, and the LVM volume group is on a RAID 1 array, then effectively your swap is on RAID, so no worries.
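A quick way to convince yourself of that chain (the names below are just examples):

    swapon -s          # swap device, e.g. /dev/mapper/vg0-swap
    pvs                # the physical volume should be the md device, e.g. /dev/md1
    cat /proc/mdstat   # both mirror members present: [UU]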
> I also installed grub on both drives. Since I couldn't find my old notes on how to do that, I googled for it again and found that the page with the information no longer exists. A *lot* of sites point to it, so this is a real pain. I finally found it via archive.org. I wonder if the content shouldn't be included as original text in the wiki, with the FAQ pointing there instead? (You can't link directly to the page at archive.org, and one can't be sure it will stay available there.) http://www.centos.org/modules/smartfaq/faq.php?faqid=47 (the last link)
> Kai
Scott Silva wrote on Mon, 22 Oct 2007 10:10:58 -0700:
> If your swap is on LVM, and the LVM volume group is on a RAID 1 array, then effectively your swap is on RAID, so no worries.
Yeah, it just works fine.
Kai