Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
Dave spake the following on 3/27/2007 3:13 PM:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
I have used software raid many times, and still use it, although on a more limited basis. It is a very mature technology, and IMHO it is better than all of the fakeraid controllers I have ever seen.
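[For anyone starting from scratch, here is a minimal sketch of building a two-disk RAID-1 with mdadm; the device names (/dev/sda1, /dev/sdb1, /dev/md0) and the ext3 choice are only examples, so adjust for your own layout:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# cat /proc/mdstat                          # watch the initial resync
# mkfs.ext3 /dev/md0
# mdadm --detail --scan >> /etc/mdadm.conf  # record the array so it assembles at boot

Create the partitions first (type fd, "Linux raid autodetect", if you want the old-style autodetection at boot).]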
Scott Silva wrote:
Dave spake the following on 3/27/2007 3:13 PM:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
I have used software raid many times, and still use it, although on a more limited basis. It is a very mature technology, and IMHO it is better than all of the fakeraid controllers I have ever seen.
Been running software raid-5 (knoppix) on a workgroup server for over a year. Had a drive fail a couple weeks ago, everything kept right on going.
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
that's easy. SATA. one device per channel. you can get 8 ports or more of SATA on a PCI-X or PCI-Express x4 card. parallel IDE is dead.
except, I've never heard of any such limitation, other than the potential performance bottleneck.
John R Pierce wrote:
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
that's easy. SATA. one device per channel. you can get 8 ports or more of SATA on a PCI-X or PCI-Express x4 card. parallel IDE is dead.
except, I've never heard of any such limitation, other than the potential performance bottleneck.
One drive going down on an IDE channel can take out the whole channel due to the controller being confused or whatnot. This is why you get RAID (real and fake) boards with 4 or more channels on which to connect a single IDE drive.
tblader spake the following on 3/29/2007 6:16 AM:
Scott Silva wrote:
Dave spake the following on 3/27/2007 3:13 PM:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
I have used software raid many times, and still use it, although on a more limited basis. It is a very mature technology, and IMHO it is better than all of the fakeraid controllers I have ever seen.
Been running software raid-5 (knoppix) on a workgroup server for over a year. Had a drive fail a couple weeks ago, everything kept right on going.
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
You can run 2 drives on each controller, especially with raid 1, but you lose some of the benefits. With mirroring, you could have the primary slave mirror the secondary master and so on. That way if the primary slave goes down and takes the primary master with it by locking up the channel, your array will still function somewhat. Not the best way, but it will work. Don't do raid5 this way.
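[A sketch of that cross-channel pairing with mdadm, assuming hda/hdb are the primary master/slave and hdc/hdd the secondary master/slave; partition names are only illustrative:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdd1   # primary master + secondary slave
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdb1   # secondary master + primary slave

If one channel locks up, each mirror loses only one member instead of one mirror losing both.]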
On 3/29/07, tblader tblader@flambeau.com wrote:
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
Can you please confirm the same or refer to some documentation indicating that ? I have recently added two HDDs as the slave devices on Primary and Secondary IDE channels configured as software raid devices where the two master devices on these IDE channels were already being used for software raid. Now I have 4 devices on two IDE channels with partition pairs configured as RAID-1 devices. So far all the raid devices are working fine though I do expect a deterioration in performance because of 4 devices on two channels. I would like to reconfirm if "single ide device on each ide channel" is true for software raid. Comments please.
Thanks,
Hi,
Software RAID doesn't care about primary or secondary, as you found out. You might care.
The risk is how one of the two drives on a single IDE channel might fail. For example, let's say the interface on a drive fails and electrically shorts the signals on the cable. This would prevent the other working drive from communicating on the channel as well. This has never happened to me and it is completely up to your comfort level where the redundancy ends.
Thanks, Jim
Manish Kathuria wrote:
On 3/29/07, tblader tblader@flambeau.com wrote:
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
Can you please confirm the same or refer to some documentation indicating that ? I have recently added two HDDs as the slave devices on Primary and Secondary IDE channels configured as software raid devices where the two master devices on these IDE channels were already being used for software raid. Now I have 4 devices on two IDE channels with partition pairs configured as RAID-1 devices. So far all the raid devices are working fine though I do expect a deterioration in performance because of 4 devices on two channels. I would like to reconfirm if "single ide device on each ide channel" is true for software raid. Comments please.
Thanks,
Manish Kathuria wrote:
On 3/29/07, tblader tblader@flambeau.com wrote:
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
Can you please confirm the same or refer to some documentation indicating that ? I have recently added two HDDs as the slave devices on Primary and Secondary IDE channels configured as software raid devices where the two master devices on these IDE channels were already being used for software raid. Now I have 4 devices on two IDE channels with partition pairs configured as RAID-1 devices. So far all the raid devices are working fine though I do expect a deterioration in performance because of 4 devices on two channels. I would like to reconfirm if "single ide device on each ide channel" is true for software raid. Comments please.
It is just a performance issue, at least on hardware that works as intended. IDE controllers can only issue one command at a time so you can't even overlap waiting for the seeks to complete on two drives on the same cable - but it won't break anything.
Manish Kathuria spake the following on 3/29/2007 10:29 AM:
On 3/29/07, tblader tblader@flambeau.com wrote:
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
Can you please confirm the same or refer to some documentation indicating that ? I have recently added two HDDs as the slave devices on Primary and Secondary IDE channels configured as software raid devices where the two master devices on these IDE channels were already being used for software raid. Now I have 4 devices on two IDE channels with partition pairs configured as RAID-1 devices. So far all the raid devices are working fine though I do expect a deterioration in performance because of 4 devices on two channels. I would like to reconfirm if "single ide device on each ide channel" is true for software raid. Comments please.
Thanks,
It isn't so much a problem of performance, but if one drive on a channel dies, it usually will lock the whole channel and stop the other drive on that channel. If you swapped the secondary master and slave, you could lessen the problems you might have. That way, if the secondary master died, taking the secondary slave down with it until you remove the bad drive, you would still have both arrays running (degraded, but running).
Manish Kathuria spake the following on 3/29/2007 10:29 AM:
On 3/29/07, tblader tblader@flambeau.com wrote:
One drawback is software raid wants a single ide device on each ide channel. This means buying an ide card (if you want more than two devices) which doesn't always play well with the rest of the system (4+ ide devices). Some mainboards might have more than two ide channels, but I just opted for the cheap ide card.
Can you please confirm the same or refer to some documentation indicating that ? I have recently added two HDDs as the slave devices on Primary and Secondary IDE channels configured as software raid devices where the two master devices on these IDE channels were already being used for software raid. Now I have 4 devices on two IDE channels with partition pairs configured as RAID-1 devices. So far all the raid devices are working fine though I do expect a deterioration in performance because of 4 devices on two channels. I would like to reconfirm if "single ide device on each ide channel" is true for software raid. Comments please.
Thanks,
http://tldp.org/HOWTO/Software-RAID-HOWTO-4.html
http://www.mail-archive.com/linux-raid@vger.kernel.org/msg07408.html
Hi,
Software RAID works well for me. I have three CentOS systems running low-cost IDE RAID-1 configurations. I have had a drive fail in a RAID-1 configuration and things ran along just fine until I got the spare replaced. I didn't realize a drive had failed until I read my email mid-morning and there was a message from the monitor. :-)
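[Presumably that mail came from mdadm's monitor mode; a sketch of wiring it up on CentOS (the address is obviously an example, and I believe the stock init script is called mdmonitor). Put a line like this in /etc/mdadm.conf:

MAILADDR root@example.com

then either enable the shipped service:

# chkconfig mdmonitor on
# service mdmonitor start

or run the monitor by hand, letting it pick up MAILADDR from the config:

# mdadm --monitor --scan --daemonise

and it will mail you on Fail/DegradedArray events.]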
There are a lot of tutorials and help out there on this topic for you.
Thanks, Jim
Dave wrote:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
Dave wrote:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
I've been using software RAID with Linux on and off since 1998-ish and it has *never* bitten me in the behind. That said, the 3Ware controllers are so cheap these days I just find it easier to buy one of those and let the card do all the heavy lifting so I don't have to worry about all the software RAIDisms.
Cheers,
Dave wrote:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
i can swear by Linux software RAID 100%... although the majority of my experience is with RAID-1 mirrors... (and an occasional RAID-5 set...) it has saved my systems several times (at least a dozen) in the past decade... and in that time (using it on many systems all along) i have not lost any data or a single system due to disk failure... (it's always been super easy to replace the drive and return the RAID to fully operational status)
in fact, in many ways i prefer it to hardware RAID... as the software RAID monitoring facility is always quick and accurate in alerting you to any problems...
in general, i will often choose software RAID on server system disks over many onboard RAID alternatives... and while usually we invest in a hardware RAID array for data... i have to mention... efficient monitoring and status reporting on the large hardware arrays is ALWAYS more of a pain than the default handling i enjoy on the software RAIDed system disks...
B. Karhan simon@pop.psu.edu PRI/SSRI Unix Administrator
I've been using software RAID with Linux on and off since 1998-ish and it has *never* bitten me in the behind. That said, the 3Ware controllers are so cheap these days I just find it easier to buy one of those and let the card do all the heavy lifting so I don't have to worry about all the software RAIDisms. Cheers,
yes, Linux software raid for raid1, raid0, raid10 is pretty safe.
I would not bet on linux software raid raid5 though. I must say I have never used it but I have heard plenty of horror stories about raid5 with Linux software raid code. Maybe things have changed now.
Hello, My thanks to everyone for many helpful replies on this topic. Right now this box will be ide raid1 mirroring; i've got two drives in it, although at some point, probably within the next six months or so, i'm going to expand with another drive. I do not have experience with hardware raid cards, and although i've heard plenty of good things about them, financially they're just not in my budget. I've implemented software raid1 on other Unix systems before, and that has saved me from some hard times; in fact one box is currently running off the slave drive, the master drive died and it's waiting for me to replace it. So i quite like software raid and wanted to implement it on this new CentOS box. Thanks again. Dave.
as soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for $150 or less now.
Dave wrote:
Hello, My thanks to everyone for many helpful replies on this topic. Right now this box will be ide raid1 mirroring; i've got two drives in it, although at some point, probably within the next six months or so, i'm going to expand with another drive. I do not have experience with hardware raid cards, and although i've heard plenty of good things about them, financially they're just not in my budget. I've implemented software raid1 on other Unix systems before, and that has saved me from some hard times; in fact one box is currently running off the slave drive, the master drive died and it's waiting for me to replace it. So i quite like software raid and wanted to implement it on this new CentOS box. Thanks again. Dave.
William Warren wrote:
as soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for $150 or less now.
don't know if it is still the case but ext3 + 3ware raid5 = ultra slow. you'd want to use XFS or some other filesystem...
Feizhou spake the following on 3/29/2007 8:42 AM:
William Warren wrote:
as soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for $150 or less now.
don't know if it is still the case but ext3 + 3ware raid5 = ultra slow. you'd want to use XFS or some other filesystem...
I just put in some 9550SX's to replace some very crappy Adaptec 2810SA's and they seem to scream. Maybe the Adaptecs were even worse.
Feizhou wrote:
William Warren wrote:
as soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for $150 or less now.
don't know if it is still the case but ext3 + 3ware raid5 = ultra slow. you'd want to use XFS or some other filesystem...
I keep hearing about this "slowness" thing. Here is a bonnie++ run on one of my spare machines. This particular machine is also running 4 copies of Folding@Home in the background during the test. The array being tested is on a 3Ware 9550 card with 8 500gig barracudas. The filesystem is a standard ext3 filesystem with zero tweaks. It doesn't appear that slow to me, but perhaps I'm just easy to please.
[ritz@localhost data]$ df
Filesystem                        1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00    73575592    2868764   66909020   5% /
/dev/hda1                            101086      16078      79789  17% /boot
/dev/sda1                        3844873244  561019948 3088544960  16% /data
tmpfs                               1035584          0    1035584   0% /dev/shm
[ritz@localhost data]$ whereis bonnie++
bonnie++: /usr/sbin/bonnie++ /usr/share/man/man8/bonnie++.8.gz
[ritz@localhost data]$ /usr/sbin/bonnie++ -f
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.locald 4G           203807  96 169178  65           392171  63 611.2   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
localhost.localdomain,4G,,,203807,96,169178,65,,,392171,63,611.2,2,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
On Thu, 29 Mar 2007 at 12:17pm, chrism@imntv.com wrote
I keep hearing about this "slowness" thing. Here is a bonnie++ run on one of my spare machines. This particular machine is also running 4 copies of Folding@Home in the background during the test. The array being tested is on a 3Ware 9550 card with 8 500gig barracudas. The filesystem is a standard ext3 filesystem with zero tweaks. It doesn't appear that slow to me, but perhaps I'm just easy to please.
The problem was particularly pronounced on the 75xx/85xx series cards (which were not exactly RAID5 speed demons). XFS blew away ext3 on hardware RAID5 arrays on those cards. I don't have any benchmarks handy on 95xx series cards, but I vaguely recall that XFS still performed better, though not by anywhere near the same margin.
Joshua Baker-LePain wrote:
On Thu, 29 Mar 2007 at 12:17pm, chrism@imntv.com wrote
I keep hearing about this "slowness" thing. Here is a bonnie++ run on one of my spare machines. This particular machine is also running 4 copies of Folding@Home in the background during the test. The array being tested is on a 3Ware 9550 card with 8 500gig barracudas. The filesystem is a standard ext3 filesystem with zero tweaks. It doesn't appear that slow to me, but perhaps I'm just easy to please.
The problem was particularly pronounced on the 75xx/85xx series cards (which were not exactly RAID5 speed demons). XFS blew away ext3 on hardware RAID5 arrays on those cards. I don't have any benchmarks handy on 95xx series cards, but I vaguely recall that XFS still performed better, though not by anywhere near the same margin.
The 75xx and 85xx boards suffered from a lack of onboard memory, which meant that they had to wait for the disks to be done before returning A-OKAY. I am talking about RAID5 arrays, as raid1, raid0 and raid10 arrays on these boards were okay. XFS should still wipe the floor with ext3 on the 95xx series on raid5 arrays due to problems in the kernel code between ext3 and the 3ware driver.
chrism@imntv.com wrote:
Feizhou wrote:
William Warren wrote:
as soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for $150 or less now.
don't know if it is still the case but ext3 + 3ware raid5 = ultra slow. you'd want to use XFS or some other filesystem...
argh...i missed the raid5 on 3ware part.
I keep hearing about this "slowness" thing. Here is a bonnie++ run on one of my spare machines. This particular machine is also running 4 copies of Folding@Home in the background during the test. The array being tested is on a 3Ware 9550 card with 8 500gig barracudas. The filesystem is a standard ext3 filesystem with zero tweaks. It doesn't appear that slow to me, but perhaps I'm just easy to please.
<shrug> You are free to not believe or to believe. Maybe the latest Centos 4 kernel has a fix or something, but ext3 + 3ware raid5 = slow while XFS would fly. Are your 8 500GB drives configured in raid5 or raid10? If it is raid10, then yes, it will fly. Switch to raid5, however, and you will experience a slowdown. This is not limited to 3ware boards lower than the 95xx, which suffer from a lack of onboard memory to make things fly.
Feizhou wrote:
chrism@imntv.com wrote:
Feizhou wrote:
William Warren wrote:
as soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for $150 or less now.
don't know if it is still the case but ext3 + 3ware raid5 = ultra slow. you'd want to use XFS or some other filesystem...
argh...i missed the raid5 on 3ware part.
Ah. No, I am using RAID0 since this is just scratch space for editing large uncompressed video files.
Disks are so cheap and high enough capacity now that I don't even bother with RAID5 anymore. I still don't think (even if I was using RAID5) that I would stray from the "vendor supported" ext3 filesystem.
Cheers,
argh...i missed the raid5 on 3ware part.
Ah. No, I am using RAID0 since this is just scratch space for editing large uncompressed video files.
Man a 3ware for RAID0? :D
I cannot wait for port multiplier support in libata in Centos 5. An elcheapo AHCI or whatever SATA controller + Linux software raid will own 3ware cards on a price/performance basis :D
Disks are so cheap and high enough capacity now that I don't even bother with RAID5 anymore. I still don't think (even if I was using RAID5) that I would stray from the "vendor supported" ext3 filesystem.
Not when you need that fileserver to serve. I'd go with JFS; XFS would be too risky even if its code were not iffy. ext3 on 3ware RAID5 arrays is that bad. IIRC XFS was somewhere like 4-5 times faster than ext3, and no, i am not talking about 3ware 75xx/85xx boards. I am talking about 3ware 95xx boards with onboard RAM.
Feizhou wrote:
argh...i missed the raid5 on 3ware part.
Ah. No, I am using RAID0 since this is just scratch space for editing large uncompressed video files.
Man a 3ware for RAID0? :D
I cannot wait for port multiplier support in libata in Centos 5. An elcheapo AHCI or whatever SATA controller + Linux software raid will own 3ware cards on a price/performance basis :D
An 8-port 9550SX card is only US$500. Why on earth would I mess around with software RAID when that card is going to make things a plug and play proposition? It just isn't worth the hassle.
Disks are so cheap and high enough capacity now that I don't even bother with RAID5 anymore. I still don't think (even if I was using RAID5) that I would stray from the "vendor supported" ext3 filesystem.
Not when you need that fileserver to serve. I'd go with JFS; XFS would be too risky even if its code were not iffy. ext3 on 3ware RAID5 arrays is that bad. IIRC XFS was somewhere like 4-5 times faster than ext3, and no, i am not talking about 3ware 75xx/85xx boards. I am talking about 3ware 95xx boards with onboard RAM.
If you need to serve files in an enterprise situation and you *cannot* afford a $500 RAID card, then I think you've got more pressing issues. :)
Cheers,
chrism@imntv.com wrote:
An 8-port 9550SX card is only US$500. Why on earth would I mess around with software RAID when that card is going to make things a plug and play proposition? It just isn't worth the hassle.
What hassle? You have to define the layout somewhere. Using disk druid during the install or an mdadm command later isn't any more hassle than any other way you might do it.
Not when you need that fileserver to serve. I'd go with JFS; XFS would be too risky even if its code were not iffy. ext3 on 3ware RAID5 arrays is that bad. IIRC XFS was somewhere like 4-5 times faster than ext3, and no, i am not talking about 3ware 75xx/85xx boards. I am talking about 3ware 95xx boards with onboard RAM.
If you need to serve files in an enterprise situation and you *cannot* afford a $500 RAID card, then I think you've got more pressing issues. :)
But what's the point? Unless you are running raid 5, which is pretty pointless itself with cheap disk space, the controller isn't really doing any work for you. For sizes where it is practical I happen to like software RAID1 which will let you recover data from any single drive on a machine with any controller that can access it.
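[As a small illustration of that portability (device names are just examples): if a box dies you can move one member of the mirror to any machine and either assemble it degraded,

# mdadm --assemble --run /dev/md0 /dev/sdb1
# mount /dev/md0 /mnt/recovery

or, because the old 0.90 superblock sits at the end of the partition, often just mount the member read-only as a plain filesystem:

# mount -o ro /dev/sdb1 /mnt/recovery]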
On Thu, 29 Mar 2007 at 12:42pm, Les Mikesell wrote
chrism@imntv.com wrote:
If you need to serve files in an enterprise situation and you *cannot* afford a $500 RAID card, then I think you've got more pressing issues. :)
But what's the point? Unless you are running raid 5, which is pretty pointless itself with cheap disk space, the controller isn't really doing any
You know, the whole "disk is cheap, so why use RAID5?" argument just doesn't wash with me. Sure, disk *is* cheap. But some of us need every GB we can get for our money (well, given I'm spending grant money, it's actually *your* money too (if you live in the US)).
To demonstrate, let's look at a 24 drive system (3ware has a 24 port 9650 board). Newegg has 500GB WD RE2 drives for $160. So for $3840 in drives I can get:
a) 6TB RAID10 => $0.64/GB
or
b) 10.5TB RAID6 w/ hot spare => $0.37/GB
Umm, I'll take 75% more space for the same money, TYVM.
Joshua Baker-LePain wrote:
You know, the whole "disk is cheap, so why use RAID5?" argument just doesn't wash with me. Sure, disk *is* cheap. But some of us need every GB we can get for our money (well, given I'm spending grant money, it's actually *your* money too (if you live in the US)).
To demonstrate, let's look at a 24 drive system (3ware has a 24 port 9650 board). Newegg has 500GB WD RE2 drives for $160. So for $3840 in drives I can get:
a) 6TB RAID10 => $0.64/GB
or
b) 10.5TB RAID6 w/ hot spare => $0.37/GB
Umm, I'll take 75% more space for the same money, TYVM.
c) 12TB RAID0 w/no redundancy => $0.32/GB
When my scratch data increases in importance, I'll have to investigate that new fangled RAID 6 thang. :) Does RAID6 suffer from this performance degradation bogey man when used with ext3? Isn't RAID6 just RAID5 with a redundant parity stripe across the drives?
Cheers,
chrism@imntv.com wrote:
Joshua Baker-LePain wrote:
You know, the whole "disk is cheap, so why use RAID5?" argument just doesn't wash with me. Sure, disk *is* cheap. But some of us need every GB we can get for our money (well, given I'm spending grant money, it's actually *your* money too (if you live in the US)).
To demonstrate, let's look at a 24 drive system (3ware has a 24 port 9650 board). Newegg has 500GB WD RE2 drives for $160. So for $3840 in drives I can get:
a) 6TB RAID10 => $0.64/GB
or
b) 10.5TB RAID6 w/ hot spare => $0.37/GB
Umm, I'll take 75% more space for the same money, TYVM.
did those prices factor in the drive bay infrastructure for 24 drives with cabling, redundant power supplies, etc?
c) 12TB RAID0 w/no redundancy => $0.32/GB
When my scratch data increases in importance, I'll have to investigate that new fangled RAID 6 thang. :) Does RAID6 suffer from this performance degradation bogey man when used with ext3? Isn't RAID6 just RAID5 with a redundant parity stripe across the drives?
btw, I would NOT build a 20-something raid5/6 set. the rebuild times would be massively slow, opening a large window for double drive failure. Before you say 'nah, would never happen', check out phpbb.com, they lost their web server and forums to a double failure last month, and yes, they had a hotspare so the rebuild started immediately.
The large SAN vendors usually don't recommend building raid5 sets larger than 6-8 disks, and will stripe or concatenate multiple of those on the typical SAN with 100s of spindles. Myself, I'll stick with RAID10 for anything critical.
On Thu, 29 Mar 2007 at 12:13pm, John R Pierce wrote
chrism@imntv.com wrote:
Joshua Baker-LePain wrote:
You know, the whole "disk is cheap, so why use RAID5?" argument just doesn't wash with me. Sure, disk *is* cheap. But some of us need every GB we can get for our money (well, given I'm spending grant money, it's actually *your* money too (if you live in the US)).
To demonstrate, let's look at a 24 drive system (3ware has a 24 port 9650 board). Newegg has 500GB WD RE2 drives for $160. So for $3840 in drives I can get:
a) 6TB RAID10 => $0.64/GB
or
b) 10.5TB RAID6 w/ hot spare => $0.37/GB
Umm, I'll take 75% more space for the same money, TYVM.
did those prices factor in the drive bay infrastructure for 24 drives with cabling, redundant power supplies, etc?
Given 3840=160*24, no. ;) But those prices would be the same however you configure the drives.
btw, I would NOT build a 20-something raid5/6 set. the rebuild times would be massively slow, opening a large window for double drive failure. Before you say 'nah, would never happen', check out phpbb.com, they lost their web server and forums to a double failure last month, and yes, they had a hotspare so the rebuild started immediately.
The large SAN vendors usually don't recommend building raid5 sets larger than 6-8 disks, and will stripe or concatenate multiple of those on the typical SAN with 100s of spindles. Myself, I'll stick with RAID10 for anything critical.
Would that I had the money to and still get the space I need. Even doing 2 12 disk RAID6 sets (each with a hot spare) gets you 9TB which is 50% more space for the same money as RAID10.
To try to cut this debate short (hah!), this all boils down to a simple CBA. For me, I need massive amounts of fairly reliable, fairly fast space at as good a price as I can manage. RAID5/6 on systems backed up to tape (oops, I seem to be crossing threads) fulfills those requirements. For me. YMMV. No guarantees. Offer not valid in NV or NY. Do not taunt Happy Fun Ball(TM).
Joshua Baker-LePain wrote:
The large SAN vendors usually don't recommend building raid5 sets larger than 6-8 disks, and will stripe or concatenate multiple of those on the typical SAN with 100s of spindles. Myself, I'll stick with RAID10 for anything critical.
Would that I had the money to and still get the space I need. Even doing 2 12 disk RAID6 sets (each with a hot spare) gets you 9TB which is 50% more space for the same money as RAID10.
You are omitting the cost of the raid controller here. For your 12+ ports you don't have much other choice except a dedicated network device. For the size that normal people need - or that you might use on other machines, you might have the option of running raid10 (or 0 + LVM) in software on the motherboard ports plus some dumb $20/port cards and buying several extra drives.
On Thu, 29 Mar 2007 at 3:50pm, Les Mikesell wrote
Joshua Baker-LePain wrote:
The large SAN vendors usually don't recommend building raid5 sets larger than 6-8 disks, and will stripe or concatenate multiple of those on the typical SAN with 100s of spindles. Myself, I'll stick with RAID10 for anything critical.
Would that I had the money to and still get the space I need. Even doing 2 12 disk RAID6 sets (each with a hot spare) gets you 9TB which is 50% more space for the same money as RAID10.
You are omitting the cost of the raid controller here. For your 12+ ports you don't have much other choice except a dedicated network device. For the size
What do you mean by a dedicated network device -- do you mean a NAS? Not true. See, e.g., http://www.siliconmechanics.com/i10219/amd-storage-server.php.
that normal people need - or that you might use on other machines, you might
Are you implying I'm not normal? ;)
have the option of running raid10 (or 0 + LVM) in software on the motherboard ports plus some dumb $20/port cards and buying several extra drives.
On that note, what cheap add-in SATA controllers have folks had good luck with? I haven't tried *too* hard, but the couple I've tried were far less than stable.
There is one advantage of hardware RAID that hasn't been mentioned yet, and that's hot-swap. Last time I tried, software RAID fell over when a HDD suddenly disappeared.
Joshua Baker-LePain wrote:
There is one advantage of hardware RAID that hasn't been mentioned yet, and that's hot-swap. Last time I tried, software RAID fell over when a HDD suddenly disappeared.
That is my understanding too. Perhaps I'm just lazy, but I like being able to just plug all the drives into the controller and have everything just work. I am not saying software RAID is bad, but I just don't have the time to fiddle with it anymore when a cheap and stable hardware RAID card is readily available and is well supported with "out of the box" kernels.
Cheers,
Joshua Baker-LePain wrote:
Would that I had the money to and still get the space I need. Even doing 2 12 disk RAID6 sets (each with a hot spare) gets you 9TB which is 50% more space for the same money as RAID10.
You are omitting the cost of the raid controller here. For your 12+ ports you don't have much other choice except a dedicated network device. For the size
What do you mean by a dedicated network device -- do you mean a NAS? Not true. See, e.g., http://www.siliconmechanics.com/i10219/amd-storage-server.php.
that normal people need - or that you might use on other machines, you might
Are you implying I'm not normal? ;)
9 TB in one box seems a little extreme. My storage is a little more distributed.
have the option of running raid10 (or 0 + LVM) in software on the motherboard ports plus some dumb $20/port cards and buying several extra drives.
On that note, what cheap add-in SATA controllers have folks had good luck with? I haven't tried *too* hard, but the couple I've tried were far less than stable.
I have a 4-port promise SATA card that seems OK but I haven't really used it that much.
There is one advantage of hardware RAID that hasn't been mentioned yet, and that's hot-swap. Last time I tried, software RAID fell over when a HDD suddenly disappeared.
I've had very good luck with hot-swap scsi drives with software raid. One box last booted in mid-2003 has had a couple of drives replaced and re-synced with no interruption in service. (An IBM netfinity running RH 7.3 if anyone cares).
With parallel IDE a drive failure has a fair chance of locking up the controller and any other drive on the same cable but SATA shouldn't have that problem.
As a counterpoint, with software raid you can mirror to about anything of the same size, not just something in the same box from the same vendor. As an example, I have one server with a pair of internal IDE drives mirrored with a third 'missing' member and can connect an external firewire drive, sync it in, then remove it without shutting down. You could also mirror to iscsi partitions on another machine if you wanted.
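[A sketch of that three-member trick, with the third slot left empty; all device names here are examples:

# mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/hda1 /dev/hdc1 missing

When the external disk shows up (say as /dev/sda), sync it in and later drop it again:

# mdadm /dev/md0 --add /dev/sda1
# cat /proc/mdstat                          # wait for the resync to finish
# mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1]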
On Thu, 29 Mar 2007 17:49:02 -0500, Les Mikesell wrote:
On that note, what cheap add-in SATA controllers have folks had good luck with? I haven't tried *too* hard, but the couple I've tried were far less than stable.
I have a 4-port promise SATA card that seems OK but I haven't really used it that much.
Same here. I installed a Promise 4-port SATA300 TX4 in an older machine that does not have on board SATA and set up software RAID1 with 2 500GB drives. The other 2 ports are for future expansion.
Akemi
On that note, what cheap add-in SATA controllers have folks had good luck with? I haven't tried *too* hard, but the couple I've tried were far less than stable.
I have recently got a 4 eSATA port card based on the si3124 chipset. Unfortunately there is no support yet on the driver side for port multipliers so I have only used single drive eSATA cases. Seamless. However, this is not on Centos...at least not yet.
The one I got looks almost like the one below save for the sticker on the chipset.
http://www.firewire-1394.com/esata-sata-ii-4-port-raid.htm
There is one advantage of hardware RAID that hasn't been mentioned yet, and that's hot-swap. Last time I tried, software RAID fell over when a HDD suddenly disappeared.
Did you just pull the thing out? I believe there was supposed to be some mark down procedure before you actually pull the disk out.
Besides, it sounds weird...isn't the disk disappearing when it dies more or less the same?
I will be testing yanking out a drive on my box. I will let you know how it goes.
Feizhou spake the following on 3/29/2007 11:24 PM:
On that note, what cheap add-in SATA controllers have folks had good luck with? I haven't tried *too* hard, but the couple I've tried were far less than stable.
I have recently got a 4 eSATA port card based on the si3124 chipset. Unfortunately there is no support yet on the driver side for port multipliers so I have only used single drive eSATA cases. Seamless. However, this is not on Centos...at least not yet.
The one I got looks almost like the one below save for the sticker on the chipset.
http://www.firewire-1394.com/esata-sata-ii-4-port-raid.htm
There is one advantage of hardware RAID that hasn't been mentioned yet, and that's hot-swap. Last time I tried, software RAID fell over when a HDD suddenly disappeared.
Did you just pull the thing out? I believe there was supposed to be some mark down procedure before you actually pull the disk out.
Besides, it sounds weird...isn't the disk disappearing when it dies more or less the same?
I will be testing yanking out a drive on my box. I will let you know how it goes.
It is the breaking of electrical connections that it trips on. Even if a drive fails, it is still connected, and the system can ignore it. AFAIR you can't hot swap a regular SCSI drive either.
It is the breaking of electrical connections that it trips on. Even if a drive fails, it is still connected, and the system can ignore it. AFAIR you can't hot swap a regular SCSI drive either.
you can hot swap a SCSI drive that's on a SCA (80 pin) connector as that connector is specifically designed for it. IDEALLY, there's a scsi backplane processor chip on the SCSI backplane, often SCSI device 15, which notifies the host (or raid controller) of the drive changes, but that's not absolutely necessary.
SCA and SATA connectors intentionally have shorter signal and power, and longer ground pins so when they are hotplugged the signals are sequenced properly.
There is one advantage of hardware RAID that hasn't been mentioned yet, and that's hot-swap. Last time I tried, software RAID fell over when a HDD suddenly disappeared.
Did you just pull the thing out? I believe there was supposed to be some mark down procedure before you actually pull the disk out.
Besides, it sounds weird...isn't the disk disappearing when it dies more or less the same?
I will be testing yanking out a drive on my box. I will let you know how it goes.
It is the breaking of electrical connections that it trips on. Even if a drive fails, it is still connected, and the system can ignore it. AFAIR you can't hot swap a regular SCSI drive either.
I don't try yanking non hot-swap drives. I watched a guy try to connect an IDE cable while the box was on and on his lap. He got an electric shock. 8-|
Anyway, a pull-the-power-plug test and a pull-the-eSATA-cable test both resulted in the expected degradation of the mirror. Plugging the power or eSATA cable back in and then bringing the disk back online worked as expected. OpenSolaris zpool mirror.
No echo > /proc special knowledge needed either.
Feizhou wrote:
I don't try yanking non hot-swap drives. I watched a guy try to connect an IDE cable while the box was on and on his lap. He got an electric shock. 8-|
Aggh pooo!
An electric shock from 12 volts?
More likely it was static, and nothing in particular to do with what he was doing (apart from handling an earthed metal object).
John Summerfield wrote:
Feizhou wrote:
I don't try yanking non hot-swap drives. I watched a guy try to connect an IDE cable while the box was on and on his lap. He got an electric shock. 8-|
Aggh pooo!
An electric shock from 12 volts?
More likely it was static, and nothing in particular to do with what he was doing (apart from handling an earthed metal object).
I dunno, I think it had to do with his er aim :D
The best part was he muttered that did not happen the last time :D
On 3/31/07, Scott Silva ssilva@sgvwater.com wrote:
It is the breaking of electrical connections that it trips on. Even if a drive fails, it is still connected, and the system can ignore it. AFAIR you can't hot swap a regular SCSI drive either.
Does this mean I cannot hotswap a SATA drive in a software RAID array even if I have it in a hotswap bay with a SATA hotswap backplane?
Cen Tos wrote:
On 3/31/07, Scott Silva ssilva@sgvwater.com wrote:
It is the breaking of electrical connections that it trips on. Even if a drive fails, it is still connected, and the system can ignore it. AFAIR you can't hot swap a regular SCSI drive either.
Does this mean I cannot hotswap a SATA drive in a software RAID array even if I have it in a hotswap bay with a SATA hotswap backplane?
Software raid has mdadm commands to fail/remove/add partitions, so it is up to the hardware to be able to recognize the new working drive. There are scsi-specific commands to remove and re-detect drives, and firewire/usb normally do it automatically (but not always the way you expect). I'm not sure how SATA fits into this picture. If the service is so critical that you can't run on the single mirror until a convenient time to swap with a reboot, you might want a hot spare so you don't have to worry about getting a new drive recognized with the machine running. Even if you didn't dedicate it to an array, if the spare drive is in the machine you can easily partition it to match and add it with a few commands.
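[A sketch of the mdadm side of that, including pre-seeding a spare so a rebuild starts without any plugging; the names are examples:

# mdadm /dev/md0 --add /dev/sdc1            # an extra member beyond raid-devices becomes a hot spare
# mdadm /dev/md0 --fail /dev/sda1           # mark a dying member failed; md rebuilds onto the spare
# mdadm /dev/md0 --remove /dev/sda1         # detach it so it can be physically pulled later
# cat /proc/mdstat                          # watch the rebuild]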
if the hardware can hotswap then you should be ok if i am reading what les said correctly.
Les Mikesell wrote:
Cen Tos wrote:
On 3/31/07, Scott Silva ssilva@sgvwater.com wrote:
It is the breaking of electrical connections that it trips on. Even if a drive fails, it is still connected, and the system can ignore it. AFAIR you can't hot swap a regular SCSI drive either.
Does this mean I cannot hotswap a SATA drive in a software RAID array even if I have it in a hotswap bay with a SATA hotswap backplane?
Software raid has mdadm commands to fail/remove/add partitions, so it is up to the hardware to be able to recognize the new working drive. There are scsi-specific commands to remove and re-detect drives, and firewire/usb normally do it automatically (but not always the way you expect). I'm not sure how SATA fits into this picture. If the service is so critical that you can't run on the single mirror until a convenient time to swap with a reboot, you might want a hot spare so you don't have to worry about getting a new drive recognized with the machine running. Even if you didn't dedicate it to an array, if the spare drive is in the machine you can easily partition it to match and add it with a few commands.
On Sunday 01 April 2007 05:15 am, William Warren wrote:
if the hardware can hotswap then you should be ok if i am reading what les said correctly.
From time to time (not now) I have a machine on my testbed I can test this on.
Should I break the software RAID first? I'd think so.
Do I have to remake the RAID automatically? I'd think so.
Can I physically damage either drive?
I'd think not.
So I'm not sure I understand the problem as long as the (sata) drive is in a hotswap bay.
Anything I'm missing?
Jeff
Jeff Lasman wrote:
if the hardware can hotswap then you should be ok if i am reading what les said correctly.
From time to time (not now) I have a machine on my testbed I can test this on.
Should I break the software RAID first? I'd think so.
No, you'd expect a failure to automatically kick the drive out of the raid and go on about its business.
Do I have to remake the RAID automatically? I'd think so.
Yes, you have to 'mdadm --add ...' to make the replacement sync back up.
Can I physically damage either drive?
I'd think not.
So I'm not sure I understand the problem as long as the (sata) drive is in a hotswap bay.
Anything I'm missing?
The question is whether the kernel will notice the new drive when you add it. It might be possible to swap in an exact match and get away with the setup detected at boot time, but that doesn't sound very healthy. Can you hotswap a new SATA drive that wasn't present at boot time and have the kernel notice the new drive device and its partitions? If the partitions are recognized, mdadm will be able to add them.
Les Mikesell spake the following on 4/1/2007 11:37 AM:
Jeff Lasman wrote:
if the hardware can hotswap then you should be ok if i am reading what les said correctly.
From time to time (not now) I have a machine on my testbed I can test this on.
Should I break the software RAID first? I'd think so.
No, you'd expect a failure to automatically kick the drive out of the raid and go on about its business.
Do I have to remake the RAID automatically? I'd think so.
Yes, you have to 'mdadm --add ...' to make the replacement sync back up.
Can I physically damage either drive?
I'd think not.
So I'm not sure I understand the problem as long as the (sata) drive is in a hotswap bay.
Anything I'm missing?
The question is whether the kernel will notice the new drive when you add it. It might be possible to swap in an exact match and get away with the setup detected at boot time, but that doesn't sound very healthy. Can you hotswap a new SATA drive that wasn't present at boot time and have the kernel notice the new drive device and its partitions? If the partitions are recognized, mdadm will be able to add them.
I think you would need a hot-swap capable controller in addition to the hot-swap drive cage. Some controllers can be made to re-scan the bus, some can't. But a card that is designed to support hot swap will have this feature.
of course if you need a card to do this then you don't need MD raid(unless the card is a FRAID card)
Scott Silva wrote:
Les Mikesell spake the following on 4/1/2007 11:37 AM:
Jeff Lasman wrote:
if the hardware can hotswap then you should be ok if i am reading what les said correctly.
From time to time (not now) I have a machine on my testbed I can test this on.
Should I break the software RAID first? I'd think so.
No, you'd expect a failure to automatically kick the drive out of the raid and go on about its business.
Do I have to remake the RAID automatically? I'd think so.
Yes, you have to 'mdadm --add ...' to make the replacement sync back up.
Can I physically damage either drive?
I'd think not.
So I'm not sure I understand the problem as long as the (sata) drive is in a hotswap bay.
Anything I'm missing?
The question is whether the kernel will notice the new drive when you add it. It might be possible to swap in an exact match and get away with the setup detected at boot time, but that doesn't sound very healthy. Can you hotswap a new SATA drive that wasn't present at boot time and have the kernel notice the new drive device and its partitions? If the partitions are recognized, mdadm will be able to add them.
I think you would need a hot-swap capable controller in addition to the hot-swap drive cage. Some controllers can be made to re-scan the bus, some can't. But a card that is designed to support hot swap will have this feature.
William Warren spake the following on 4/2/2007 5:16 PM:
of course if you need a card to do this then you don't need MD raid(unless the card is a FRAID card)
Scott Silva wrote:
Les Mikesell spake the following on 4/1/2007 11:37 AM:
Jeff Lasman wrote:
if the hardware can hotswap then you should be ok if i am reading what les said correctly.
From time to time (not now) I have a machine on my testbed I can test this on.
Should I break the software RAID first? I'd think so.
No, you'd expect a failure to automatically kick the drive out of the raid and go on about its business.
Do I have to remake the RAID automatically? I'd think so.
Yes, you have to 'mdadm --add ...' to make the replacement sync back up.
Can I physically damage either drive?
I'd think not.
So I'm not sure I understand the problem as long as the (sata) drive is in a hotswap bay.
Anything I'm missing?
The question is whether the kernel will notice the new drive when you add it. It might be possible to swap in an exact match and get away with the setup detected at boot time, but that doesn't sound very healthy. Can you hotswap a new SATA drive that wasn't present at boot time and have the kernel notice the new drive device and its partitions? If the partitions are recognized, mdadm will be able to add them.
I think you would need a hot-swap capable controller in addition to the hot-swap drive cage. Some controllers can be made to re-scan the bus, some can't. But a card that is designed to support hot swap will have this feature.
I think that is the whole point. Hotswap may work with some devices on mdraid, but if you have the hardware that will hotswap, you probably have a real raid card and won't need mdraid anyway. And if you have a system that is so critical that you can't reboot to change a failed drive, you shouldn't be using mdraid.
I think that is the whole point. Hotswap may work with some devices on mdraid, but if you have the hardware that will hotswap, you probably have a real raid card and won't need mdraid anyway. And if you have a system that is so critical that you can't reboot to change a failed drive, you shouldn't be using mdraid.
SATA drives and controllers are designed to be hotswappable (except the very first generation controllers that only provided PATA compatibility mode for SATA drives) and the controllers are not necessarily hardware raid ones.
On Sunday 01 April 2007 11:37 am, Les Mikesell wrote:
The question is whether the kernel will notice the new drive when you add it. It might be possible to swap in an exact match and get away with the setup detected at boot time, but that doesn't sound very healthy. Can you hotswap a new SATA drive that wasn't present at boot time and have the kernel notice the new drive device and its partitions?
Thinking more about this ...
I most likely wouldn't have a drive "ready" as all our servers don't have the same configuration.
If the partitions are recognized, mdadm will be able to add them.
Could I create partitions on a new drive (fdisk) and then do either a mount or remount? And then start rebuilding the RAID?
Jeff
Jeff Lasman wrote:
On Sunday 01 April 2007 11:37 am, Les Mikesell wrote:
The question is whether the kernel will notice the new drive when you add it. It might be possible to swap in an exact match and get away with the setup detected at boot time, but that doesn't sound very healthy. Can you hotswap a new SATA drive that wasn't present at boot time and have the kernel notice the new drive device and its partitions?
Thinking more about this ...
I most likely wouldn't have a drive "ready" as all our servers don't have the same configuration.
If the partitions are recognized, mdadm will be able to add them.
Could I create partitions on a new drive (fdisk) and then do either a mount or remount? And then start rebuilding the RAID?
Yes, but there is still the issue of whether the drive will be recognized at all when hotplugged. Once the drive is assigned a device name by the kernel you will be able to fdisk it. You don't need to mount it because the md device will be mounted, not the disk partitions.
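[So a replacement sequence might look something like this, assuming sda holds the surviving half of the mirror and sdb is the new blank disk:

# sfdisk -d /dev/sda | sfdisk /dev/sdb      # copy the partition table from the good disk
# mdadm /dev/md0 --add /dev/sdb1            # repeat for each md device / partition pair
# cat /proc/mdstat                          # the resync starts on its own]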
On Tuesday 03 April 2007 05:58 am, Les Mikesell wrote:
Yes, but there is still the issue of whether the drive will be recognized at all when hotplugged. Once the drive is assigned a device name by the kernel you will be able to fdisk it. You don't need to mount it because the md device will be mounted, not the disk partitions.
Thanks, Les.
I see where I'm going to have to try this some day soon. All our systems with hotswap bays are in service; I'll try the experiment on the next one.
Jeff
Jeff Lasman spake the following on 4/3/2007 7:40 AM:
On Tuesday 03 April 2007 05:58 am, Les Mikesell wrote:
Yes, but there is still the issue of whether the drive will be recognized at all when hotplugged. Once the drive is assigned a device name by the kernel you will be able to fdisk it. You don't need to mount it because the md device will be mounted, not the disk partitions.
Thanks, Les.
I see where I'm going to have to try this some day soon. All our systems with hotswap bays are in service; I'll try the experiment on the next one.
Jeff
It seems unusual to have hotswap cages and not have a hotswap capable controller. I think SAS is hot-swappable, but I really don't think IDE or SATA is, and SCSI might be if it is newer.
On Tuesday 03 April 2007 08:14 am, Scott Silva wrote:
It seems unusual to have hotswap cages and not have a hotswap capable controller.
Which is why I want to do some testing.
I think SAS is hot-swappable, but I really don't think IDE or SATA is, and SCSI might be if it is newer.
Definitely SATA. Definitely hotswap.
http://www.serversdirect.com/config.asp?config_id=SDR-5015M-T+B
We've been using this configuration for some time; some of the details may have changed.
According to linuxmafia:
http://linuxmafia.com/faq/Hardware/sata.html#intel-ich7
It's fake RAID and we've always set it up as software RAID.
(Maybe we shouldn't be?)
Jeff
Jeff Lasman spake the following on 4/3/2007 8:42 AM:
On Tuesday 03 April 2007 08:14 am, Scott Silva wrote:
It seems unusual to have hotswap cages and not have a hotswap capable controller.
Which is why I want to do some testing.
I think SAS is hot-swappable, but I really don't think IDE or SATA is, and SCSI might be if it is newer.
Definitely SATA. Definitely hotswap.
http://www.serversdirect.com/config.asp?config_id=SDR-5015M-T+B
We've been using this configuration for some time; some of the details may have changed.
According to linuxmafia:
http://linuxmafia.com/faq/Hardware/sata.html#intel-ich7
It's fake RAID and we've always set it up as software RAID.
(Maybe we shouldn't be?)
Jeff
You are better off with software raid unless you buy those systems with the add-in 3ware card. Looking around the net, the only software raid solution that I see that says it supports hot-swap and auto-reconstruction is EVMS (evms.sourceforge.net). But RedHat (and CentOS) don't ship with EVMS.
I couldn't find anything to say whether the linux MD code does or doesn't support hotswap.
Scott Silva wrote:
You are better off with software raid unless you buy those systems with the add-in 3ware card. Looking around the net, the only software raid solution that I see that says it supports hot-swap and auto-reconstruction is EVMS (evms.sourceforge.net). But RedHat (and CentOS) don't ship with EVMS.
I couldn't find anything to say whether the linux MD code does or doesn't support hotswap.
The MD code talks to linux partition devices and doesn't need to know anything about hotswaping itself. A hotswap cage just means that the power and data connections are made in an order that won't hurt the hardware, leaving the question of whether the kernel will notice the new disk device when you plug it in. If it does, and either assigns devices to existing partitions or you fdisk it, then you'll be able to add it into a software array with mdadm.
What I have some success with (and very limited sample) are the commands of the form:
# echo "scsi add-single-device <h> <b> <t> <l>" > /proc/scsi/scsi
See Section 8.3 (proc interface) of the SCSI HowTO doc for details and more information ..... (http://tldp.org/HOWTO/SCSI-2.4-HOWTO/mlproc.html)
It works for SATA disk (which use the SCSI subsystem) in the few cases I've had to deal with ...
YMMV ...
Rich
Feizhou wrote:
Richard Karhuse wrote:
What I have some success with (and very limited sample) are the commands of the form:
# echo "scsi add-single-device <h> <b> <t> <l>" > /proc/scsi/scsi
Ah, the magic Linux scsi hotswap command.
I thought that was disabled in standard kernel drivers in kernel 2.6 due to a kernel design decision that /proc wasn't suitable for this sort of thing, and that /sys should be used instead. I had problems with a fiberchannel SAN system not being able to 'discover' new LUNs with the stock qlogic drivers in 2.6 (rhel4) even though this works fine in 2.4 (rhel3)
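[For what it's worth, on 2.6 the same sort of thing is supposed to go through sysfs instead; assuming the kernel and driver expose these hooks, something like:

# echo "- - -" > /sys/class/scsi_host/host0/scan    # rescan host0 (wildcards for channel/target/lun)

and to tell the kernel a device is going away before you pull it:

# echo 1 > /sys/block/sda/device/delete

The host number and device name are examples; whether a given RHEL4 kernel honours them is another question.]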
Jeff Lasman spake the following on 4/3/2007 8:42 AM:
On Tuesday 03 April 2007 08:14 am, Scott Silva wrote:
It seems unusual to have hotswap cages and not have a hotswap capable controller.
Which is why I want to do some testing.
I think SAS is hot-swappable, but I really don't think IDE or SATA is, and SCSI might be if it is newer.
Definitely SATA. Definitely hotswap.
http://www.serversdirect.com/config.asp?config_id=SDR-5015M-T+B
We've been using this configuration for some time; some of the details may have changed.
According to linuxmafia:
http://linuxmafia.com/faq/Hardware/sata.html#intel-ich7
It's fake RAID and we've always set it up as software RAID.
(Maybe we shouldn't be?)
Jeff
Looking at the manual for that system, the drives are hot-swappable - even with the onboard controller, but I am not sure if the linux software raid supports hotswap. I guess you will have to test it.
Looking at the manual for that system, the drives are hot-swappable - even with the onboard controller, but I am not sure if the linux software raid supports hotswap. I guess you will have to test it.
Linux software raid cannot, due to partitioning issues, or rather its use of partitions for raid devices. The disk may have to be partitioned first before it can be added to the array, whereas hardware raid cards use the whole disk, do not have to worry about partitioning, and can therefore make use of the disk immediately without any squabbles.
Feizhou wrote:
Looking at the manual for that system, the drives are hot-swappable - even with the onboard controller, but I am not sure if the linux software raid supports hotswap. I guess you will have to test it.
Linux software raid cannot, due to partitioning issues, or rather its use of partitions for raid devices. The disk may have to be partitioned first before it can be added to the array, whereas hardware raid cards use the whole disk, do not have to worry about partitioning, and can therefore make use of the disk immediately without any squabbles.
You wouldn't necessarily want the md driver to automatically write on top of a new drive you added - unless you already had it included as a hot spare.
Jeff Lasman wrote:
On Tuesday 03 April 2007 05:58 am, Les Mikesell wrote:
Yes, but there is still the issue of whether the drive will be recognized at all when hotplugged. Once the drive is assigned a device name by the kernel you will be able to fdisk it. You don't need to mount it because the md device will be mounted, not the disk partitions.
Thanks, Les.
I see where I'm going to have to try this some day soon. All our systems with hotswap bays are in service; I'll try the experiment on the next one.
Usually the 'dmesg' command will show newly detected devices. I'm just not sure how it works with SATA.
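As a sketch of that verification step (assuming the surviving member is /dev/sda and the new disk comes up as /dev/sdb; both names hypothetical):

  dmesg | tail                           # the newly detected disk and its device name should appear here
  cat /proc/partitions                   # confirm the kernel created the block device
  sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the partition layout from a surviving array member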
chrism@imntv.com wrote:
Feizhou wrote:
Argh... I missed the raid5 on 3ware part.
Ah. No, I am using RAID0 since this is just scratch space for editing large uncompressed video files.
Man a 3ware for RAID0? :D
I cannot wait for port multiplier support in libata in CentOS 5. An el cheapo AHCI (or whatever) SATA controller + Linux software raid will own 3ware cards on a price/performance basis :D
An 8-port 9550SX card is only US$500. Why on earth would I mess around with software RAID when that card is going to make things a plug and play proposition? It just isn't worth the hassle.
Yes, a 3ware card + large case will always be cheaper than trying to add external storage via eSATA. I forgot the environment :P
As for hassle, I guess it is only a hassle on Linux...
Disks are so cheap and high enough capacity now that I don't even bother with RAID5 anymore. I still don't think (even if I was using RAID5) that I would stray from the "vendor supported" ext3 filesystem.
Not when you need that fileserver to serve. I'd go with JFS if XFS is too risky for you, even though the code is not that iffy; ext3 on 3ware RAID5 arrays is that bad. IIRC XFS was somewhere around 4-5 times faster than ext3, and no, I am not talking about 3ware 75xx/85xx boards. I am talking about 3ware 95xx boards with onboard RAM.
If you need to serve files in an enterprise situation and you *cannot* afford a $500 RAID card, then I think you've got more pressing issues. :)
Er... I was referring to not using ext3 on a US$500 RAID card that it does not gel with.
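For what it's worth, if you do put XFS on a hardware RAID5 array, aligning it to the array geometry looks roughly like this (a sketch only; the device name is hypothetical, and a 64k stripe unit with a 4-disk RAID5, i.e. 3 data disks, is assumed):

  mkfs.xfs -d su=64k,sw=3 /dev/sdb1   # match the filesystem stripe unit/width to the array
  mount -o noatime /dev/sdb1 /data    # noatime cuts needless metadata writes on a busy fileserver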
On Thu, March 29, 2007 11:42 am, Feizhou wrote:
William Warren wrote:
As soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for 150 or less now.
Don't know if it is still the case, but ext3 + 3ware raid5 = ultra slow. You'd want to use XFS or some other filesystem...
Has anyone tried ZFS Filesystem for FUSE/Linux? http://www.wizy.org/wiki/ZFS_on_FUSE I use it on Sun boxes and it's just awesome, especially when combined with zones.
On Thu, 29 Mar 2007 19:46:05 -0400 (EDT), Paul wrote:
Don't know if it is still the case, but ext3 + 3ware raid5 = ultra slow. You'd want to use XFS or some other filesystem...
Has anyone tried ZFS Filesystem for FUSE/Linux? http://www.wizy.org/wiki/ZFS_on_FUSE I use it on Sun boxes and it's just awesome, especially when combined with zones.
It's still beta (at least when used with FUSE). I doubt many people running CentOS will be interested in subjecting critical servers to a filesystem that's still under development.
Miark
Paul wrote:
On Thu, March 29, 2007 11:42 am, Feizhou wrote:
William Warren wrote:
As soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for 150 or less now.
Don't know if it is still the case, but ext3 + 3ware raid5 = ultra slow. You'd want to use XFS or some other filesystem...
Has anyone tried ZFS Filesystem for FUSE/Linux? http://www.wizy.org/wiki/ZFS_on_FUSE I use it on Sun boxes and it's just awesome, especially when combined with zones.
...why that and not just install (Open) Solaris?
The mistake folks make is running md raid on top of a 3ware; that's slow. Dump md raid if you are on something like a 3ware.
Feizhou wrote:
William Warren wrote:
As soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for 150 or less now.
Don't know if it is still the case, but ext3 + 3ware raid5 = ultra slow. You'd want to use XFS or some other filesystem...
William Warren spake the following on 3/29/2007 4:58 AM:
As soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for 150 or less now.
150 what? Euros, maybe. I don't see 3ware raid5-capable cards here for less than $350 US.
$280.00 for the 7500 @ Newegg.
Scott Silva wrote on Thursday, March 29, 2007 9:07 AM (Re: software raid):
William Warren spake the following on 3/29/2007 4:58 AM:
As soon as you go to software raid 5 you'll take a good performance hit. Head for the hardware raid cards then. You can get the 3ware ide cards for 150 or less now.
150 what? Euros, maybe. I don't see 3ware raid5-capable cards here for less than $350 US.
Dave,
I've been running softraid on my personal server for over a year now. My current config is five 250GB SATA drives.
One catch I ran into was online growth of the array. I needed to compile a later kernel version as well as the mdadm tool. So for that reason, I wouldn't recommend soft-raid5 in a production environment. However, I've had no problems so far, and have expanded my storage array several times now. (LVM2 makes it super easy)
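For reference, the growth sequence looks roughly like this (a sketch only; /dev/md0, /dev/sdf1, and vg0/data are hypothetical names, and as noted the raid5 reshape needs a reasonably recent kernel and mdadm):

  mdadm /dev/md0 --add /dev/sdf1           # add the new partition as a spare
  mdadm --grow /dev/md0 --raid-devices=6   # reshape the raid5 across six members
  pvresize /dev/md0                        # let LVM2 see the extra space
  lvextend -l +100%FREE /dev/vg0/data      # grow the logical volume into it
  resize2fs /dev/vg0/data                  # grow the ext3 filesystem (ext2online on older releases)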
I have soft-raid1 running on 'untouched' production servers, without issue.
Gordon
On 3/27/07, Dave dmehler26@woh.rr.com wrote:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
Dave wrote:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
I'm going to be setting up a machine at home, for keeping backup copies of my data & software. For various reasons (work being one) the "server" is going to be a dual-boot W2K/Linux machine, and I'll have MacOSX, W2K, and Linux clients accessing this over the network. I'll probably just fire up NFS for this. I have a 60GB drive, and 2 80GB drives for it. I'll be adding in a Rosewill RC-200 PCI ATA/133 card for the 80GB drives. It says it does RAID, but that is probably one of those fake-RAID deals. Anybody use this card? Since it's likely not real RAID, my plan is to have one 80GB drive formatted as ext3, then use rsync daily to back that up to the other 80GB drive. Then I can use something like EXT2 IFS (from fs-driver.org) in W2K if/when I need to access the backup drives. I thought about a software RAID1 in Linux ext3 format, but the EXT2IFS driver probably wouldn't be able to read that, right? Then, to make things interesting, I have an external 300GB drive with NTFS format. This contains the portable copy of the data store. :) Not too much of an issue, since Linux can read NTFS fine. Or am I wrong here? Should I flip this to Ext3, considering my other thoughts on this setup?
David A. Woyciesjes spake the following on 3/29/2007 6:29 AM:
Dave wrote:
Hello, I've got a 4.4 box that i'd like to implement software raid on. Does anyone have any experiences with this? Thanks. Dave.
I'm going to be setting up a machine at home, for keeping backup
copies of my data & software. For various reasons (work being one) the "server" is going to be a dual-boot W2K/Linux machine, and I'll have MacOSX, W2K, and Linux clients accessing this over the network. I'll probably just fire up NFS for this. I have a 60GB drive, and 2 80GB drives for it. I'll be adding in a Rosewill RC-200 PCI ATA/133 card for the 80GB drives. It says it does RAID, but that is probably one of those fake-RAID deals. Anybody use this card? Since it's likely not real RAID, my plan is to have one 80GB drive formatted as ext3, then use rsync daily to back that up to the other 80GB drive. Then I can use something like EXT2 IFS (from fs-driver.org) in W2K if/when I need to access the backup drives. I thought about a software RAID1 in Linux ext3 format, but the EXT2IFS driver probably wouldn't be able to read that, right? Then, to make things interesting, I have an external 300GB drive with NTFS format. This contains the portable copy of the data store. :) Not too much of an issue, since Linux can read NTFS fine. Or am I wrong here? Should I flip this to Ext3, considering my other thoughts on this setup?
The best common denominator would be fat32 on the external. Linux, Windows, and I think even the Macs can read and write to it. The biggest limit to fat32 is the maximum of 2 gig file sizes. Have you thought about just looking for an old PII PC in a garage sale and just making it a server? You could use something as simple as Freenas and make it a network storage point.
Scott Silva wrote:
David A. Woyciesjes spake the following on 3/29/2007 6:29 AM:
I'm going to be setting up a machine at home, for keeping backup
copies of my data & software... the "server" is going to be a dual boot W2K/Linux machine, and I'll have MacOSX, W2K, and Linux clients accessing this over the network... ... I have a 60GB drive, and 2 80GB drives for it... ...I have an external 300GB drive with NTFS format...
The best common denominator would be fat32 on the external. Linux, Windows, and I think even the Macs can read and write to it. The biggest limit to fat32 is the maximum of 2 gig file sizes. Have you thought about just looking for an old PII PC in a garage sale and just making it a server? You could use something as simple as Freenas and make it a network storage point.
Thought about it, and discarded it. IIRC, there is a ~32GB partition limit for FAT32. Or at least WinXP won't create them bigger than that. Considering the files I'll be storing, I don't want to deal with 3+ different partitions on the external drive. :) Freenas sounds interesting. I'll have to have a look at it. Part of the current (not definitive) plan is that I would be using this as a workstation too. Keep the number of machines to a minimum. Rest of the basic idea:
1. All workstations could back up to either the external drive, or the server.
2. The server will sync with the external drive, at least once a week, using either unison (Linux) or SyncBack (Windows).
3. The server will rsync the main 80GB drive to the backup 80GB drive nightly.
Yes, I know that I could get myself into trouble with the 300GB external drive, and only 80GB of backup space on the server. I will probably change the external drive to an 80GB NTFS partition, so I can't accidentally overfill the available space. I suppose then I could create an 80GB Ext3 partition, and some others for further backup & testing...
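As a sketch of the nightly rsync in step 3 above (assuming the two drives are mounted at /mnt/main and /mnt/backup; both mount points hypothetical):

  rsync -a --delete /mnt/main/ /mnt/backup/   # mirror the main drive; --delete keeps the copies identical

  # e.g. in /etc/crontab, run it at 02:30 every night and keep a log
  30 2 * * * root rsync -a --delete /mnt/main/ /mnt/backup/ >> /var/log/backup-rsync.log 2>&1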
On Thu, Mar 29, 2007 at 01:43:46PM -0400, David A. Woyciesjes enlightened us:
Scott Silva wrote:
David A. Woyciesjes spake the following on 3/29/2007 6:29 AM:
I'm going to be setting up a machine at home, for keeping backup copies of my data & software... the "server" is going to be a dual boot W2K/Linux machine, and I'll have MacOSX, W2K, and Linux clients accessing this over the network... ... I have a 60GB drive, and 2 80GB drives for it... ...I have an external 300GB drive with NTFS format...
The best common denominator would be fat32 on the external. Linux, Windows, and I think even the Macs can read and write to it. The biggest limit to fat32 is the maximum of 2 gig file sizes. Have you thought about just looking for an old PII PC in a garage sale and just making it a server? You could use something as simple as Freenas and make it a network storage point.
Thought about it, and discarded it. IIRC, there is a ~32GB partition limit for FAT32. Or at least WinXP won't create them bigger than that. Considering the files I'll be storing, I don't want to deal with 3+ different partitions on the external drive. :)
Time for a number check:
http://en.wikipedia.org/wiki/FAT32 claims a 4GB file size limit and an 8TB partition size limit. The 32GB partition limit is a WinXP-ism to make people use NTFS.
Matt
http://en.wikipedia.org/wiki/FAT32 claims a 4GB file size limit and an 8TB partition size limit. The 32GB partition limit is a WinXP-ism to make people use NTFS.
A 300 GB FAT32 would have either an obscenely large FAT table or an obscenely large cluster size. If you used 4k clusters, each FAT would be 300 megabytes; this has to be sequentially scanned to calculate free space, and it has to be scanned to find free blocks for file and directory allocations. If you used 32k clusters, this would be reduced to about 37 megabytes for the FAT, but then even the tiniest files would waste 32k bytes.
FAT also has no support for file ownership or access rights. It has no journaling, so any abnormal events such as unexpected/sudden reboots WILL result in lost free space (orphaned files/fragments), AND it's prone to crosslinking, which is very hard to repair. FAT was designed for floppy disks and hard disks that were a few megabytes back in the early 80s. It has no way of grouping cluster allocations together, so it has a very strong tendency to extreme fragmentation, and as the FAT tables are quite large on a filesystem this size, it requires frequent extra seeks to locate the next block. 4GB is an absolute limit on the size of a single file (so, no DVD ISO images, no large TARs, etc). Directories are sequentially scanned only, so large directories that spill over a few clusters become excruciatingly slow to even open files from.
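If you did end up with FAT32 on the external drive anyway, creating it on Linux with a chosen cluster size is a one-liner (a sketch only; /dev/sdb1 is hypothetical, and -s counts 512-byte sectors per cluster, so 8 = 4k clusters, 64 = 32k clusters):

  mkfs.vfat -F 32 -s 64 /dev/sdb1   # FAT32 with 32k clusters, i.e. the small-FAT / wasted-tail tradeoff above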
John R Pierce wrote:
http://en.wikipedia.org/wiki/FAT32 claims a 4GB file size limit and an 8TB partition size limit. The 32GB partition limit is a WinXP-ism to make people use NTFS.
A 300 GB FAT32 would have either an obscenely large FAT table or an obscenely large cluster size. If you used 4k clusters, each FAT would be 300 megabytes; this has to be sequentially scanned to calculate free space, and it has to be scanned to find free blocks for file and directory allocations. If you used 32k clusters, this would be reduced to about 37 megabytes for the FAT, but then even the tiniest files would waste 32k bytes.
FAT also has no support for file ownership or access rights. It has no journaling, so any abnormal events such as unexpected/sudden reboots WILL result in lost free space (orphaned files/fragments), AND it's prone to crosslinking, which is very hard to repair. FAT was designed for floppy disks and hard disks that were a few megabytes back in the early 80s. It has no way of grouping cluster allocations together, so it has a very strong tendency to extreme fragmentation, and as the FAT tables are quite large on a filesystem this size, it requires frequent extra seeks to locate the next block. 4GB is an absolute limit on the size of a single file (so, no DVD ISO images, no large TARs, etc). Directories are sequentially scanned only, so large directories that spill over a few clusters become excruciatingly slow to even open files from.
All good information. I'm probably going to keep Ext3 on the 2 80GB server drives, with Ext2IFS loaded on the W2K install of the server. Probably have an 80GB FAT32 partition on the external drive. I suppose I could then also create 2 other 80GB partitions on it, NTFS and Ext3... Anybody confused yet?
Basically, I don't want to lose anything to a drive crash...
Matt Hyclak wrote:
On Thu, Mar 29, 2007 at 01:43:46PM -0400, David A. Woyciesjes enlightened us:
Scott Silva wrote:
The best common denominator would be fat32 on the external. Linux, Windows, and I think even the Macs can read and write to it. The biggest limit to fat32 is the maximum of 2 gig file sizes...
Thought about it, and discarded it. IIRC, there is a ~32GB partition limit for FAT32. Or at least WinXP won't create them bigger than that. Considering the files I'll be storing, I don't want to deal with 3+ different partitions on the external drive. :)
Time for a number check:
http://en.wikipedia.org/wiki/FAT32 claims a 4GB file size limit and an 8TB partition size limit. The 32GB partition limit is a WinXP-ism to make people use NTFS.
I stand corrected. And this time I don't mind being told I'm wrong...;-) And if you're like me (don't trust Wikipedia as far as you can throw it) here's the info from MS directly... http://support.microsoft.com/kb/184006/en-us http://support.microsoft.com/kb/314463/EN-US/
I think I may just use FAT32 then... :) And yes, I am aware of the fact that FAT32 does nothing with user permissions...