Hi,
I'm considering setting up my CentOS desktop machine for RAID 1. I read a lot of good info at this site: http://linuxmafia.com/faq/Hardware/sata.html#intel-vitesse about the differences between fakeraid and real RAID cards.
The hardware I plan on installing this RAID card into is an Intel DP35DP motherboard with an Intel E4500 dual-core processor, and I have two Maxtor 500GB SATA hard drives.
Can anyone recommend a good "real RAID" card for my Linux box? What I'm looking for is a RAID controller card that works out of the box: without having to load any drivers onto my CentOS 5.1 box, the hardware RAID card would automatically mirror my hard drive onto the second backup drive and do all the work for me.
Do such cards exist? If so, which models/manufacturers do you recommend? Any experiences/info/insights on hardware RAID cards, good or bad, on CentOS boxes would be appreciated.
Therese Trudeau wrote:
Do such cards exist? If so, which models/manufacturers do you recommend? Any experiences/info/insights on hardware RAID cards, good or bad, on CentOS boxes would be appreciated.
3Ware 8000-series cards are probably the most compatible, going back at least 3 years. The 9000-series cards are faster/better, and CentOS 5.1 should have full support for them.
For me, in SATA RAID cards it's 3ware or nothing. I've been using them for more than 8 years now.
I picked up an extra 8006-2 (2-port RAID) a couple weeks ago for about $120 as a spare for my home system, which has an 8006-2.
nate
So these cards are just plug and play? Just plug them in, no software or drivers required, and all mirroring is managed by firmware built into the RAID card itself?
Therese Trudeau wrote:
So these cards are just plug and play? Just plug them in, no software or drivers required, and all mirroring is managed by firmware built into the RAID card itself?
Drivers are required for all storage adapters (RAID or not). 3Ware handles RAID in hardware, not in software; it has a BIOS which you'd typically use to configure the array, you can boot off of the array, etc.
3Ware also offers a management tool for Linux (CLI and/or web based) which allows for monitoring and controlling the adapter's configuration settings.
3Ware has had their Linux drivers in the kernel for at least... 8 years now? Maybe longer. So any Linux distro should have no trouble detecting the card. The latest 9650 cards are pretty new and use a new driver, which may or may not be supported; CentOS 5.1 should work with it fine though (support for CentOS 4 was added almost a year ago, I think with 4.5).
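For example, once the card is in, you can confirm the kernel saw it and poke at the array with 3ware's CLI tool (a sketch only - assuming the card enumerates as controller c0, and that tw_cli is installed from 3ware's site):

    # check that the 3ware driver loaded and found the card
    lsmod | grep 3w
    dmesg | grep -i 3ware
    # show controller, unit and drive status via 3ware's CLI
    tw_cli /c0 show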
They also support hot swap, provided the interface to the disk supports it (typically a hot-swap backplane).
nate
Great, thanks for that info Nate. I just checked out their web site; it looks like the 9500S-4LP would suit my needs for a desktop machine.
I've been leery about desktop RAID cards because a few years ago I bought an Adaptec 1210SA RAID card which supposedly does RAID 1. I never could get the darn thing to work in my old Windows machine, and years later found out it is really a fakeraid card. It's been collecting dust ever since; I may as well throw it out. The drivers it required never worked with W2K.
But the CentOS server I use has an Adaptec SCSI RAID controller in it. I guess on the high end for SCSI RAID, Adaptec is known for good RAID cards, but the one I bought sure did nothing for me on my desktop.
on 3-9-2008 5:36 PM Therese Trudeau spake the following:
But the CentOS server I use has an Adaptec SCSI RAID controller in it. I guess on the high end for SCSI RAID, Adaptec is known for good RAID cards, but the one I bought sure did nothing for me on my desktop.
Adaptec makes both true HW raid and re-sells fakeraid cards. I guess they wanted a piece of both pies. But 3ware only makes HW raid cards AFAIK.
How well do you think the Adaptec SATA RAID cards stack up against the Areca and 3ware RAID cards? I'm going to buy two RAID cards over the weekend.
on 3-14-2008 5:37 AM Therese Trudeau spake the following:
How well do you think the Adaptec SATA RAID cards stack up against the Areca and 3ware RAID cards? I'm going to buy two RAID cards over the weekend.
I had 2 Adaptec 28220SA's (actually I still have them) and they were the biggest pieces of cr@p I've had. They regularly trashed drives and hard locked. After replacing the cards with 3ware 9550's, no problems. Maybe it was a problem between the SATA drives and the card, but who has time to fight that. Hopefully they are better now. I might try them again in a Windows box, but their Linux support wasn't very good.
Most of the things in this email from me are personal opinion, but I do spend a fair bit of time with this sort of thing these days.
Therese Trudeau wrote:
3Ware 8000-series cards are probably the most compatible, going back at least 3 years. The 9000-series cards are faster/better, and CentOS 5.1 should have full support for them.
I wouldn't bother with a 3ware 8000 or a 3ware 9000 card these days; if you really do want to get 3ware, get at least a 9650. Anything less than a 9550 should be considered only if you get a really good deal off eBay. And remember that battery backup unit.
For me, in SATA RAID cards it's 3ware or nothing. I've been using them for more than 8 years now.
I used to think the same for a long time, till I started using Areca RAID cards. Now I rate 3ware well behind Areca on performance, reliability and ease of use. If you are doing RAID-5 or RAID-6 the performance difference is quite noticeable (I've just recently switched my desktop from a 3ware 9650 to an Areca 1220, and got a near 8% improvement in write performance, and 12% on read - RAID-5, 5 spindles).
So these cards are just plug and play? Just plug them in, no software or drivers required, and all mirroring is managed by firmware built into the RAID card itself?
Drivers for both 3ware and Areca are included in the CentOS-5.1 kernels.
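You can verify that on a stock install with modinfo, assuming the usual module names (3w-xxxx covers the 8000 series, 3w-9xxx the 9000 series, arcmsr the Arecas):

    modinfo 3w-9xxx | head -3
    modinfo arcmsr | head -3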
Btw, you might want to keep an eye on some of the not-that-expensive HighPoint RocketRAID cards; they have some fairly decent stuff coming out these days. The issue with them, however, is that the drivers have only recently gone into the mainline upstream kernel - and their userland tools are not quite there yet. But if you need something for 2 to 5 drives, they are an option worth considering (they do have drivers for CentOS 4 and CentOS 5 on their website).
I wouldn't bother with a 3ware 8000 or a 3ware 9000 card these days; if you really do want to get 3ware, get at least a 9650. Anything less than a 9550 should be considered only if you get a really good deal off eBay. And remember that battery backup unit.
I'm just really looking for a RAID card that will do RAID 1 with four drive capacity, i.e., a master drive with the OS and applications installed and mirrored, and a slave drive for data (photos, graphic design, video, etc.) also mirrored. What would a battery built into a RAID card do for me?
I used to think the same for a long time, till I started using Areca RAID cards. Now I rate 3ware well behind Areca on performance, reliability and ease of use. If you are doing RAID-5 or RAID-6 the performance difference is quite noticeable (I've just recently switched my desktop from a 3ware 9650 to an Areca 1220, and got a near 8% improvement in write performance, and 12% on read - RAID-5, 5 spindles).
So you recommend Areca - good, thanks, I'll check them out too. How are they for RAID 1?
Drivers for both 3ware and Areca are included in the CentOS-5.1 kernels.
Btw, you might want to keep an eye on some of the not-that-expensive HighPoint RocketRAID cards; they have some fairly decent stuff coming out these days. The issue with them, however, is that the drivers have only recently gone into the mainline upstream kernel - and their userland tools are not quite there yet. But if you need something for 2 to 5 drives, they are an option worth considering (they do have drivers for CentOS 4 and CentOS 5 on their website).
Will check them out too. Thanks Karanbir.
Therese Trudeau wrote:
I'm just really looking for a RAID card that will do RAID 1 with four drive capacity, i.e., a master drive with the OS and applications installed and mirrored, and a slave drive for data (photos, graphic design, video, etc.) also mirrored. What would a battery built into a RAID card do for me?
The whole point of a BBU is that you can turn on write-back caching - and get a fair win in write performance on regular tasks.
Also, make sure whatever RAID hardware you decide to invest in supports multiple RAID sets (that's what you seem to want - and not all RAID cards do that) - both 3ware and Areca do support this.
Are you considering using this as a backup system and not doing any off-machine backups? If so, consider the possibility of actually losing the RAID card itself: ideally you want the RAID metadata sitting on the disks rather than on the RAID card, so you can replace the card and be back in action. Again, not all cards support this out of the box.
So you recommend Areca - good, thanks, I'll check them out too. How are they for RAID 1?
To be honest, it's been a very long time since I used a RAID-1 setup, and I am not sure I'd bother with it now. If you have 4 drives, you might as well RAID-5 them. You still get the ability to lose 1 drive at a time, and have a hot spare, while ending up with the same storage capacity.
Btw, if you have 4 drives, make sure they are as similar in specifications as possible - however, try to get different batch numbers / production runs. Drives that were made in the same batch, stored and stocked under the exact same conditions, shipped out together, and used and put into production together have a very, very high probability of also failing together :D
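If you want to check what you actually received, smartctl from the smartmontools package prints the model, serial and firmware for each drive (assuming the drives show up as /dev/sda and /dev/sdb on your system):

    smartctl -i /dev/sda | egrep 'Model|Serial|Firmware'
    smartctl -i /dev/sdb | egrep 'Model|Serial|Firmware'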
The whole point of a BBU is that you can turn on write-back caching - and get a fair win in write performance on regular tasks.
Pardon my ignorance, but what are write-back caching and a BBU?
Also, make sure whatever RAID hardware you decide to invest in supports multiple RAID sets (that's what you seem to want - and not all RAID cards do that) - both 3ware and Areca do support this.
By multiple RAID sets, do you mean a RAID card that can do either RAID 0, RAID 1, RAID 5, etc.?
Are you considering using this as a backup system and not doing any off-machine backups? If so, consider the possibility of actually losing the RAID card itself: ideally you want the RAID metadata sitting on the disks rather than on the RAID card, so you can replace the card and be back in action. Again, not all cards support this out of the box.
Yes, this is just to back up my hard drives on my desktop machine, no remote backups. Do both the 3ware and the Areca store metadata on disk?
To be honest, it's been a very long time since I used a RAID-1 setup, and I am not sure I'd bother with it now. If you have 4 drives, you might as well RAID-5 them. You still get the ability to lose 1 drive at a time, and have a hot spare, while ending up with the same storage capacity.
Again, pardon my ignorance, but what is a hot spare? A blank drive connected in the RAID 5 setup that can be written to in case one of the other 3 drives fails?
Btw, if you have 4 drives, make sure they are as similar in specifications as possible - however, try to get different batch numbers / production runs. Drives that were made in the same batch, stored and stocked under the exact same conditions, shipped out together, and used and put into production together have a very, very high probability of also failing together :D
OK, will do that for sure when I set this up.
Therese Trudeau wrote:
The whole point of a BBU is that you can turn on write-back caching - and get a fair win in write performance on regular tasks.
Pardon my ignorance, but what are write-back caching and a BBU?
Write-back caching means the card will cache writes in its onboard storage and let the OS continue immediately...
...this is only 'safe' if the card has a 'battery backup unit' to protect the cache during power failures, so that the cached write data can be written to the disks when the power resumes. Some RAID cards even allow you to remove the battery, still attached to the cache, along with the disks, and install them in a different but similar machine in case of a total server failure; this is a feature on many HP SmartArray cards.
A battery-backed write-back cache can hugely speed up random writes, such as those from a relational database server.
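On the 3ware cards, toggling this is a one-liner from the CLI (a sketch, assuming controller c0 and unit u0; without a BBU you are accepting the power-failure risk described above):

    tw_cli /c0/u0 set cache=on    # enable write caching on the unit
    tw_cli /c0/u0 show            # verify the setting took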
Again, pardon my ignorance, but what is a hot spare? A blank drive connected in the RAID 5 setup that can be written to in case one of the other 3 drives fails?
Exactly. A hot spare sits unused until one of the RAID members fails; then it's used to replace the failed drive by remirroring or restriping the parity. Once this is finished and the original failed drive is replaced, the replacement can become the new hot spare.
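For illustration, Linux software RAID does the same thing, and it's a cheap way to play with the concept before buying hardware (device names here are examples only):

    # 3 active disks plus 1 hot spare in a RAID-5 array
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # after a member fails, watch the array rebuild onto the spare
    cat /proc/mdstat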
Write-back caching means the card will cache writes in its onboard storage and let the OS continue immediately...
...this is only 'safe' if the card has a 'battery backup unit' to protect the cache during power failures, so that the cached write data can be written to the disks when the power resumes. Some RAID cards even allow you to remove the battery, still attached to the cache, along with the disks, and install them in a different but similar machine in case of a total server failure; this is a feature on many HP SmartArray cards.
A battery-backed write-back cache can hugely speed up random writes, such as those from a relational database server.
Ah, that makes total sense now, thanks. Do the 3ware and the Areca cards allow you to remove battery/cache/disks and install them into a similar motherboard? Also, when you say remove battery and cache, do you mean remove the entire RAID card with battery attached as a complete assembly, with the accompanying drives, and slap them all onto a new motherboard?
Exactly. A hot spare sits unused until one of the RAID members fails; then it's used to replace the failed drive by remirroring or restriping the parity. Once this is finished and the original failed drive is replaced, the replacement can become the new hot spare.
So if I understand correctly, RAID 5 is three active drives and one blank drive connected to a RAID 5 card, and if one of the three active drives fails, the fourth empty drive is automatically written to? If correct, what happens if the drive that fails loses all its data before the blank drive has a chance to grab it?
Therese Trudeau wrote:
Ah, that makes total sense now, thanks. Do the 3ware and the Areca cards allow you to remove battery/cache/disks and install them into a similar motherboard? Also, when you say remove battery and cache, do you mean remove the entire RAID card with battery attached as a complete assembly, with the accompanying drives, and slap them all onto a new motherboard?
I /think/ with the 3ware you remove and swap the whole card, along with the drives.
On many server grade systems, such as the HP DL380 series with onboard SmartArray, the cache RAM module and battery are separate detachable components. In the DL380 they are actually two pieces with a cord between them. You unclip and remove the battery from the chassis without messing with the wire, then you pull the cache module out of its special slot. These can then be installed in another HP SmartArray, along with the drives from the original system, and when that new DL380 powers up, the RAID controller will verify the drives and flush its cache, ensuring data integrity, then boot up your environment.
So if I understand correctly, RAID 5 is three active drives and one blank drive connected to a RAID 5 card, and if one of the three active drives fails, the fourth empty drive is automatically written to? If correct, what happens if the drive that fails loses all its data before the blank drive has a chance to grab it?
With a 3 drive RAID 5, you write two drives' worth of data across the 3... every third 'block' is a 'parity block', calculated by bitwise exclusive or (XOR) of the other two blocks. On a 3 drive RAID-5, this parity block alternates across all three drives...
    drive:    0     1     2
            =================
    data      0     1     0x1
    blocks    2x3   2     3
              4     4x5   5
              6     7     6x7
              8x9   8     9
              .....
Each of those 'blocks' is something like 32K bytes - 64 x 512-byte sectors (this is the stride of the RAID, configured when you create the RAID). The ones that are just numbers are your data blocks, while 0x1 is (block_0 XOR block_1), i.e. the parity block for that stripe.
If any one drive /dies/ abruptly with no warning, you can still read all the data from the remaining drives: the missing drive is the XOR of the other drives, so the controller can reconstruct it on the fly, and you will continue operating in a degraded performance mode.
If you have a spare drive, or when you replace the failed drive, the RAID controller begins a rebuild, where it reads ALL the blocks of the working drives, XORs them together, and writes the result to the spare/new drive. When it's done, things revert to normal full-performance, redundant operation. RAID controllers can do this while the logical volume is still in use and online; many let you set the priority of the rebuild to lower its performance impact.
You can extend this to a reasonable number of drives; for instance, 5 drives might look like...
    drive:    0    1    2    3    4
            =======================
    data      0    1    2    3    P
    blocks    P    4    5    6    7
              8    P    9    10   11
              ....
where the P's are XOR's of /all/ the other blocks on the same line. p0 = b0 X b1 X b2 X b3. p1 = b4 X b5 X b6 X b7, etc.
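You can convince yourself the recovery math works with nothing but shell arithmetic (a toy example - a real controller does this per stripe block):

    d0=0xA5; d1=0x3C                # two data blocks on drives 0 and 1
    p=$(( d0 ^ d1 ))                # the parity block on drive 2
    # drive 1 dies; rebuild its block from the survivors:
    printf 'recovered: %#x\n' $(( d0 ^ p ))   # prints 0x3c - the lost d1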
There's tons of material online explaining this stuff far better than a CentOS list can: http://en.wikipedia.org/wiki/RAID
Ah great, I'll check out the URL, thanks.
One thing: an earlier poster recommended RAID 5 instead of RAID 1. I guess if one only has 2 drives, RAID 1 is the way to go, but with 4 drives he said go with RAID 5 over RAID 1. Isn't RAID 1 mirroring a better solution for a 4 drive array, or am I missing something here?
On Mon, Mar 10, 2008 at 12:17:29AM -0400, Therese Trudeau enlightened us:
You'll get 4 copies of the same data if you do RAID 1 across 4 drives. Your best bets for 4 drives are RAID 5, RAID 6 or RAID 10. Each has its advantages and disadvantages, so you'll have to read up on them and decide which is best for your use.
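To make the trade-offs concrete with four 500GB drives (usable capacity before filesystem overhead):

    RAID 1 (4-way mirror):       500GB usable, survives any 3 drive failures
    RAID 10 (striped mirrors):  1000GB usable, survives 1 failure per mirror pair
    RAID 5 (3 data + 1 parity): 1500GB usable, survives any 1 drive failure
    RAID 6 (2 data + 2 parity): 1000GB usable, survives any 2 drive failures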
Matt
Therese Trudeau wrote:
One thing: an earlier poster recommended RAID 5 instead of RAID 1. I guess if one only has 2 drives, RAID 1 is the way to go, but with 4 drives he said go with RAID 5 over RAID 1. Isn't RAID 1 mirroring a better solution for a 4 drive array, or am I missing something here?
Depends on your needs. RAID 1 is certainly faster, and RAID 1+0 faster still, but it requires more disks/space.
Going back to my preferred array vendor, 3PAR: their software/hardware provides the ability to do online RAID conversions to/from RAID 0, 1+0, and 5+0 (3+1 parity to 8+1 parity) with no impact to the server. It also provides the ability to run multiple RAID levels on the same physical disks, because the RAID is made up of portions of the disks (each disk split into 256MB chunks) rather than the full physical disks themselves. Really flexible/powerful/fast. I love it! On my array I run RAID 1+0 on the outer regions of the disks (~7% faster than the inner regions), and RAID 5+0 (8+1) on the inner regions of the disks.
Of course, this sort of technology isn't priced for the desktop, unless you're doing something like desktop consolidation with virtualization or remote application hosting/thin clients.
nate
nate wrote:
I just had to put in my $0.02 here:
The key requirement here is technology for the desktop.
Since this is going to be a single user box I really do not recommend mucking around with hardware RAID at all.
You have 4 disks: create a software RAID mirror out of disks 1 and 2, and another software mirror out of disks 3 and 4.
Make a VG out of the first mirror, call it "CentOS" and put the OS and all your applications there, including "home".
Make a VG out of the second mirror, call it "Work" and put all your work material there.
Then you can always upgrade the OS disks without touching your work disks.
You can export the work VG, move the disks over to a new machine and work there.
Create snapshots of your work disks, back up your work onto your OS disks, etc.
A couple of 80 or 160GB disks for the CentOS VG, a couple of 320GB disks for the Work VG, and you're good to go.
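In rough commands, that's something like this (a sketch only - device names and sizes are examples, and it assumes the disks are already partitioned):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
    pvcreate /dev/md0 /dev/md1
    vgcreate CentOS /dev/md0
    vgcreate Work /dev/md1
    lvcreate -L 20G -n root CentOS
    lvcreate -L 250G -n work Work
    # later, to move the Work disks to another machine:
    vgchange -an Work && vgexport Work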
If you want any advice on setting up LVM and LVs for different file systems, I, and everyone on the list, would be happy to give their opinions.
This is the easiest, most flexible, and fastest way to get a well-performing redundant desktop setup.
-Ross
Karanbir Singh wrote:
The whole point of a BBU is that you can turn on write-back caching - and get a fair win in write performance on regular tasks.
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
I've never noticed any data loss from write cache being enabled on a 3ware card that didn't have a BBU. Not to say that a BBU is not a good idea in any case. Of the ~400 3ware cards I've used (90% of them being 8006-2), only a handful have had a BBU. But all of them were backed either by a basic UPS or a full UPS with generator backup. I haven't run a computer without a UPS since 1996; it's second nature these days, I don't even remember to recommend them anymore.
Of course I may have had data loss, but in the past 8 years it's not been anything that I've personally noticed.
I'm not sure if all 3Ware models support BBU or not.
For anything really serious I prefer bigger dedicated storage systems; my array of choice these days comes from 3PAR (entry level pricing ~$90k). It really makes my job easy and stress free.
nate
nate wrote:
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
A UPS isn't going to help in cases where something breaks on the inner side (PSU fail, mobo fail, RAM blows up, etc.).
But I am with Ross on this one - if it's just a desktop machine, it might be well worth doing a bit of RTFM, setting up an md-raid/LVM volume, and being done with it.
On 3/10/08, nate centos@linuxpowered.net wrote:
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
Robert Arkiletian wrote:
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
How about a simple kernel panic? A kernel panic will hose your file system with write-back caching on and no BBU.
-Ross
Robert Arkiletian wrote:
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
Redundant power supplies connected to redundant UPSes. I've seen more UPS failures than I've ever had failed PSUs on proper server grade hardware.
This might be getting a bit elaborate for a desktop machine. I really want RAID because I'm tired of hard drive crashes every couple of years, and of having to start from scratch, spending a week setting up new drives, getting my design software back online and trying to recover data.
What do you think of alternative backup systems, such as a tape backup with bare metal restore software? I'd go that route instead if I could find a solution which would allow me to restore to different hardware, i.e. if my motherboard dies and I need to buy a different brand or model MB. I know Storix backup software has this capability - I use Storix on my Linux server with RAID 1. At home I have one Linux and one Windows desktop machine.
Therese Trudeau wrote:
What do you think of alternative backup systems, such as a tape backup with bare metal restore software?
RAID is no substitute for backup; RAID is strictly for maintaining 24/7 uptime in the face of hardware failures, which is total overkill for your desktop.
Skip RAID entirely... Instead, get some external USB drives. On the Linux machine, use 'dump' or 'tar' or whatever in a script to make backups; on the Windows machine, get and use Acronis DiskImage, which has a bare metal restore from a bootable CD-R you can build.
Build the Windows system so the C: 'system' drive is only about 30-40GB, plenty big enough for the OS plus all your mainstream applications (Adobe, etc.), and use a D: drive for /all/ your data, including your user account profile. This way the bare metal restore only has to restore said C:, and you can use incremental, datafile-oriented backup techniques for the D: 'data' drive.
Do much the same with Linux: a modest / and a separate /home.
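A trivial nightly script along those lines might look like this (a sketch; it assumes the USB drive is mounted at /mnt/backup):

    #!/bin/sh
    # level-0 dump of the root filesystem; -u records it in /etc/dumpdates
    dump -0uf /mnt/backup/root-$(date +%Y%m%d).dump /
    # or the tar equivalent for /home
    tar -czpf /mnt/backup/home-$(date +%Y%m%d).tar.gz /home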
OK, this sounds great. My only questions here are: with Acronis DiskImage (or another vendor's product), if my motherboard/graphics card combo fails, could I migrate to a different model motherboard/graphics card combo? Also, can one Acronis DiskImage package (or another vendor's) do bare metal restore on both Linux boxes and Windows boxes, so I could use the same bare metal restore package on both machines?
John R Pierce wrote:
Redundant power supplies connected to redundant UPSes. I've seen more UPS failures than I've ever had failed PSUs on proper server grade hardware.
Exactly - you can expect a UPS to need new batteries every few years, but I had one machine with a 4+ year uptime (and only because I was able to move power cords, change UPSes, and swap its mirrored hard drives on the fly). The machine is still running (RH 7.3), but it's less critical now, and I was too lazy to drag along a UPS when I had to move it to a different location.
Robert Arkiletian wrote:
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
That is true; buy high quality stuff up front for fewer problems down the road. Not a sure bet, but a better one. In the half dozen systems I've been running at home for the past several years, none of them have suffered a hardware failure of any kind (fortunately). I've been running PC Power and Cooling power supplies for about 9 years now, really high quality PSUs (the last one I bought was about 4 years ago, so I can't speak for their quality now).
I've had 15 power supplies fail across about 600 systems in the past 8 years at my various jobs. Probably 150+ disk failures during that same time on the same systems. And maybe 3 RAID card failures (all of which were caught before the system was put into use).
In 2005, when I purchased about 200 Cyclades managed power strips, I had at least 10 of those fail, which was scary. Their QA at the time was pretty poor (I toured their facility in early 2006); they claimed I was one of only two customers having problems with their power strips (and I had fewer than 100 of those PDUs in use at the time, so a ~10% failure rate). They've since been bought by Avocent, and they're probably well on their way to outsourcing their manufacturing, which they said would improve quality since it would force them to write better specs for testing and such.
So a BBU is certainly a nice thing to have, but at least in my experience it isn't absolutely critical.
Of course, for absolutely critical things I don't use server-based RAID anyway. Multiple redundant controllers and multiple redundant paths (to both the disks and the hosts) are the way to go (assuming your application(s) aren't built to run on something like a distributed file system). I've seen that some of the latest HP servers have dual ported SAS disks, which sounds pretty neat. I assume they still only have one controller, though.
My main storage array has a built-in battery as well; it's pretty cool in that if the power goes out, it keeps the controller operational long enough to dump the contents of the cache (8GB) to an internal IDE disk, then powers off. That eliminates the need to maintain the battery during an extended (several day) outage. And of course the cache is mirrored between two controller nodes, and no writes are committed to disk before the write is processed by both nodes. If one node fails, the cache is disabled on the remaining node until the other node recovers.
nate
That is true; buy high quality stuff up front for fewer problems down the road. Not a sure bet, but a better one. In the half dozen systems I've been running at home for the past several years, none of them have suffered a hardware failure of any kind (fortunately). I've been running PC Power and Cooling power supplies for about 9 years now, really high quality PSUs (the last one I bought was about 4 years ago, so I can't speak for their quality now).
So for a top quality power supply for a mission critical desktop machine, which brand(s) would you recommend? One of the towers I have is a Thermaltake Xaser 3 with lots of room, and I just bought a new Antec Sonata III tower with a 500 watt PS.
So a BBU is certainly a nice thing to have, but at least in my experience it isn't absolutely critical.
Then for a mission critical desktop machine, if you had to make a choice, would you go with a good quality UPS and/or redundant power supplies, or a BBU instead?
Of course, for absolutely critical things I don't use server-based RAID anyway. Multiple redundant controllers and multiple redundant paths (to both the disks and the hosts) are the way to go (assuming your application(s) aren't built to run on something like a distributed file system). I've seen that some of the latest HP servers have dual ported SAS disks, which sounds pretty neat. I assume they still only have one controller, though.
As an alternative to RAID 1 for a mission critical desktop machine at home, what would you recommend? Maybe a bare metal restore solution able to restore to different hardware (i.e. if a motherboard dies and a drive crashes due to a power spike or some catastrophe, I'm screwed if I can't find the exact same make/model)?
on 3-14-2008 6:31 AM Therese Trudeau spake the following:
As an alternative to RAID 1 for a mission critical desktop machine at home, what would you recommend? Maybe a bare metal restore solution able to restore to different hardware (i.e. if a motherboard dies and a drive crashes due to a power spike or some catastrophe, I'm screwed if I can't find the exact same make/model)?
Explain your definition of a mission critical desktop. Does the entire enterprise stop functioning if this desktop stops? I am THE tech support for my company, but my desktop could die right now, and although I would be heartbroken and a little peeved, I could just fire up my lappy and get back to work in a few minutes. I usually have 2 desktops running, just in case I need to put out fires while my main desktop is doing the Windows reboot dance.
If my Linux machine stops functioning it's not as bad as the Windows box going offline, but it still takes a day or two to get things back online on the Linux box, with all the software I installed on it.
If the Windows machine stops functioning, then yes, it's a pain; it's at least two days by the time I get back up and running, because much of my work is graphic design and that's where all my Adobe stuff is loaded. It takes a long time to get the OS reinstalled, then grab my data and reinstall many, many software applications, etc.
Because I am a one person company, I just don't have time to spend days getting a machine back online, and it's happened more than once. An hour or two to get things running again, however, would not harm my work flow that much.
Therese Trudeau wrote:
If you want to keep things simple, I'd recommend getting an external drive or two and burning a copy of clonezilla-live from http://clonezilla.sourceforge.net/clonezilla-live/. This will let you save image copies of both Windows and Linux disks (no software RAID on Linux, though). Since it allows network access to the image storage, you could even store the Windows image on the Linux box and vice versa, but an external USB drive is probably handier, especially now that you can get the laptop-form versions that don't need external power in large capacities. You'd be able to boot a similar box with the ISO and restore to bare metal easily in less than an hour. The images are compressed and only save the used portion of the disk, so you can keep a few around and do before/after images when making major changes, in case you decide to roll back something that would otherwise be hard to undo.
I'd do this for the system drive and repeat the image copy only after updates. Then I'd put all of my own work on a separate partition (probably a software RAID1 mounted as /home on the Linux box and Samba-shared to Windows) and periodically rsync the contents to an external USB/firewire drive. Depending on the value of this work, I might have multiple external drives that I'd rotate offsite.
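The rsync part is a one-liner you can put in cron (assuming the external drive mounts at /media/backup):

    # -a preserves ownership/permissions/times; --delete mirrors removals too
    rsync -aH --delete /home/ /media/backup/home/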
Wow, I could have saved all that typing... Great ideas, Les. You were the one that introduced me to Clonezilla (unknowingly) some time ago in another thread. I had been using a Hiren's boot disk, but Clonezilla is better... Dennis
on 3-14-2008 9:43 AM Therese Trudeau spake the following:
If the windows machine stops functioning, then yes it's a pain, it's at least two days by the time I get back up and running because much of my work is graphic design and that's where all my adobe stuff is loaded on, and it takes a long time to get the OS re instlled, then grabbing my data, and re installing many many software applications etc.
Because I am a one-person company I just don't have time to spend days getting a machine back online, and it's happened more than once. An hour or two to get things running again, however, would not harm my work flow that much.
I believe the Adobe products will let you have them installed on another machine as long as you only use one at a time. It might be worth it to just break down and get a cheaper system as a backup, with everything installed; if your main machine goes down, just power up the backup machine. It might be a little slower, but you can still function and get work done while you fix/replace the main machine. $1000 US for a capable machine will seem like a sting at first, but you can justify it the first time you need it. And an occasional boot to keep the OS and virus scanners current will give you a chance to make sure it is still functioning.
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
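For instance, on a 3ware card you can flip the write cache from Linux with 3ware's tw_cli tool - a sketch from memory, assuming controller 0 / unit 0, so check your own numbering first:

  tw_cli info                  # list the controllers the tool can see
  tw_cli /c0 show              # show the units on controller 0
  tw_cli /c0/u0 set cache=on   # enable write-back caching on unit 0

The graceful-shutdown half is then a job for something like apcupsd or NUT watching the UPS over its serial/USB cable.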
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
That's a very good point, never thought of that. Actually this RAID 1 setup I'm planning is for my desktop machine; problem is it's not built like a server, so there is no traditional slide-in bay for a second PS as many 1U and 2U rack servers have. Unless there is some specialty product available that somehow fits into a tower case.
Could you recommend a redundant PS for a desktop machine (if they exist)?
Therese Trudeau wrote:
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
That's a very good point, never thought of that. Actually this RAID 1 setup I'm planning is for my desktop machine; problem is it's not built like a server, so there is no traditional slide-in bay for a second PS as many 1U and 2U rack servers have. Unless there is some specialty product available that somehow fits into a tower case.
Could you recommend a redundant PS for a desktop machine (if they exist)?
The whole system needs to be designed for dual supplies. You can't just plop down two power supplies in parallel without some circuitry that attempts to monitor & balance them out.
I'm curious - why does your desktop need so much redundancy?
Toby Bluhm wrote:
Therese Trudeau wrote:
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
That's a very good point, never thought of that. Actually this RAID 1 setup I'm planning is for my desktop machine; problem is it's not built like a server, so there is no traditional slide-in bay for a second PS as many 1U and 2U rack servers have. Unless there is some specialty product available that somehow fits into a tower case. Could you recommend a redundant PS for a desktop machine (if they exist)?
The whole system needs to be designed for dual supplies. You can't just plop down two power supplies in parallel without some circuitry that attempts to monitor & balance them out.
I'm curious - why does your desktop need so much redundancy?
Just for fun, the first hit on a google for "redundant atx power supply"
http://www.directron.com/tc400r8.html
Seems you can just plop one into your std atx chassis . . .
Just for fun, the first hit on a google for "redundant atx power supply"
http://www.directron.com/tc400r8.html
Seems you can just plop one into your std atx chassis . . .
I have never understood how something with a single feed can be termed 'redundant'.
Just for fun, the first hit on a google for "redundant atx power supply"
http://www.directron.com/tc400r8.html
Seems you can just plop one into your std atx chassis . . .
I have never understood how something with a single feed can be termed 'redundant'.
Yeah, that PS appears to have only one outlet (unless I'm not seeing it in the photo); most redundant PSes have separate outlets for a Y power cable, one for each supply. Guess it's not that redundant.
Yeah, that PS appears to have only one outlet (unless I'm not seeing it in the photo); most redundant PSes have separate outlets for a Y power cable, one for each supply. Guess it's not that redundant.
Yes - although I would never use a Y cable - dual PSUs need two feeds from separate PDUs.
Tom Brown wrote:
Yeah, that PS appears to have only one outlet (unless I'm not seeing it in the photo); most redundant PSes have separate outlets for a Y power cable, one for each supply. Guess it's not that redundant.
Yes - although I would never use a Y cable - dual PSUs need two feeds from separate PDUs.
Unless you have another source of AC power or want to use two UPSes, it's not important.
Tom Brown wrote:
Just for fun, the first hit on a google for "redundant atx power supply"
http://www.directron.com/tc400r8.html
Seems you can just plop one into your std atx chassis . . .
I have never understood how something with a single feed can be termed 'redundant'.
Yes, that doesn't make sense, even aside from the fact that the connected UPS is about as likely to fail as the PS itself. One of the big values of having dual power supplies with two power cords is that you can move the plugs from one outlet to another while it is still running (e.g. to replace the UPS, move to a new location while connected to a small UPS, or just to move the cord to a different outlet).
You can turn on write-back caching if you have a UPS as well (provided your UPS is wired into your system for a graceful shutdown).
Hopefully you have a redundant PS unit. Having a UPS is not going to help if your PS fails.
That's a very good point, never thought of that. Actually this RAID 1 setup I'm planning is for my desktop machine; problem is it's not built like a server, so there is no traditional slide-in bay for a second PS as many 1U and 2U rack servers have. Unless there is some specialty product available that somehow fits into a tower case. Could you recommend a redundant PS for a desktop machine (if they exist)?
The whole system needs to be designed for dual supplies. You can't just plop down two power supplies in parallel without some circuitry that attempts to monitor & balance them out.
I'm curious - why does your desktop need so much redundancy?
Just for fun, the first hit on a google for "redundant atx power supply"
http://www.directron.com/tc400r8.html
Seems you can just plop one into your std atx chassis . . .
Hey, thanks, that's pretty cool, I'll check it out!
That's a very good point, never thought of that. Actually this RAID 1 setup I'm planning is for my desktop machine; problem is it's not built like a server, so there is no traditional slide-in bay for a second PS as many 1U and 2U rack servers have. Unless there is some specialty product available that somehow fits into a tower case.
Could you recommend a redundant PS for a desktop machine (if they exist)?
The whole system needs to be designed for dual supplies. You can't just plop down two power supplies in parallel without some circuitry that attempts to monitor & balance them out.
Yes, I realize that, thanks; I just wondered if there was some new product combo out there for existing towers, i.e. dual power supplies with controller boards. From your comment I assume there is not.
I'd be willing to migrate all of my hardware (motherboard, video card, etc.) to a new case if I could find one that includes a controller card for the power supplies, or one that comes complete with such.
I'm curious - why does your desktop need so much redundancy?
Because I use the desktop machines about ten hours a day. I work out of home doing graphic design, web design, uploading files to the server, managing the server, etc. The home desktop machines are just as mission critical as the server I upload to is. Maybe more so, because if there is a problem server side, I need remote access to it 24/7.
On Mar 9, 2008, at 2:28 PM, Therese Trudeau wrote:
Hi,
I'm considering setting up my Centos Desktop machine for RAID 1. I read a lot of good info at this site: http://linuxmafia.com/faq/Hardware/sata.html#intel-vitesse about differences in fakeraid and real raid cards.
Discontinued chipset but works fine:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816110002
Nice price! $35 for 5-SATA-drive support; a model with fewer ports is a $21 card.
No drivers needed - you run the RAID from the card's BIOS, and it shows up as an IDE volume to linux. See the NewEgg comments for some tips. Depending on the speed you need, it could be just great.
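(Assuming it really does present the mirror as one plain disk, you can sanity-check that from CentOS after boot - the /dev/hda name here is just an example, yours may differ:

  dmesg | grep -i hd     # the array should show up as a single IDE drive
  fdisk -l /dev/hda      # one disk, sized like a single member drive

and from there you partition and format it like any ordinary drive.)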
You need Windows to update the firmware. Supports a handful of RAID types, but not RAID 5. True hardware RAID though.
All the firmwares, manuals, utils are at: http://www.soft-port.dk/
B
I'm considering setting up my Centos Desktop machine for RAID 1. I read a lot of good info at this site: http://linuxmafia.com/faq/Hardware/sata.html#intel-vitesse about differences in fakeraid and real raid cards.
Discontinued chipset but works fine:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816110002
Nice price! $35 for 5-SATA-drive support; a model with fewer ports is a $21 card.
No drivers needed - you run the RAID from the card's BIOS, and it shows up as an IDE volume to linux. See the NewEgg comments for some tips. Depending on the speed you need, it could be just great.
You need Windows to update the firmware. Supports a handful of RAID types, but not RAID 5. True hardware RAID though.
All the firmwares, manuals, utils are at: http://www.soft-port.dk/
Hey, thanks much, I'll check it out!
One question on this card - does it write the raid metadata onto the disks rather than storing it on the card itself?
I ask because Karanbir (in this thread) recommends getting that kind. If it does, maybe I'll buy two or three of these in case one card fails (since they are discontinued).
Once you get a handle on what you are after, check back with me; I have a bunch of RAID controllers I picked up from a systems dealer who went out of business. Some LSIs, some ICPs (ICP says they are an Adaptec company), and a couple of other off brands... May be able to save you a few bucks; all are new, OEM style...
john
Therese Trudeau wrote:
Hi,
I'm considering setting up my Centos Desktop machine for RAID 1. I read a lot of good info at this site:http://linuxmafia.com/faq/Hardware/sata.html#intel-vitesse about differences in fakeraid and real raid cards.
The hardware I plan on installing this RAID card into is an Intel DP35DP motherboard with the Intel E4500 dual core processor, and I have two Maxtor 500 gig SATA hard drives.
Can anyone recommend a good “real raid” card for my Linux? What I am looking for is to plug in a RAID controller card out of the box, and without having to load any drivers onto my Centos 5.1 box, have the Real hardware RAID card automatically do all the work, mirror my hard drive onto the second backup drive and do all the work for me.
Do such cards exist? If so which model /manufacturers do you recommend? Any experiences/info/insights on hardware RAID cards good or bad on centos boxes would be appreciated.