Hi,
I am looking into purchasing a new server. This server will be mission-critical.
I have read and somewhat understood the theories behind RAIDs 0, 1, 5, 10 & JBOD. However, I would like to get some feedback from those who have experience in implementing and recovering from an HDD failure using RAID.
Hardware specs include:-
Dual Xeon 3.2 GHz, 2 GB RAM
I would like to implement hardware RAID but am unsure as to which would be most suitable for my needs. Any advice is appreciated.
Option 1 - RAID 5 (3 hdd's) + 1 hot spare
Option 2 - RAID 10 (4 hdd's) + 1 "cold" spare (in the shelf)
Questions I have :-
1) When should RAID 5 be implemented?
2) When should RAID 10 be implemented?
3) Is RAID 5 with a hot spare safer than RAID 10 with a "cold" spare?
4) Is it possible to configure RAID 10 to have a hot spare?
5) Should one of the HDDs fail, a hot spare will kick in immediately and begin rebuilding. As I am planning to put in 300 GB HDDs, how long would this take on RAID 5 vs. RAID 10?
6) Will there be a degradation in performance for users on the system (RAID 5 vs. RAID 10)?
7) What are the disadvantages of using RAID 5 vs. RAID 10?
Thanks in advance for answering my questions.
Best Regards, Andrew
I'm sure many will answer the questions, so for a different perspective: Make sure you buy drives from a number of manufacturers, or get ones from different production batches - I was once on a customer's site where two 'brand-x' drives in a RAID 5 array went bad within minutes of each other due to a spindle bearing defect and this took down the array.
On Mon, 2005-07-25 at 12:26 +0100, Nigel Kendrick wrote:
I'm sure many will answer the questions, so for a different perspective: Make sure you buy drives from a number of manufacturers, or get ones from different production batches - I was once on a customer's site where two 'brand-x' drives in a RAID 5 array went bad within minutes of each other due to a spindle bearing defect and this took down the array.
Not a bad suggestion. BTW, were they commodity disks?
On Mon, 2005-07-25 at 17:57 +0800, Andrew Vong wrote:
Hi, I am looking into purchasing a new server. This server will be mission-critical.
What application(s)? That's the biggie.
I have read and somewhat understood the theories behind RAIDs 0, 1, 5, 10 & JBOD. However, I would like to get some feedback from those who have experience in implementing and recovering from an HDD failure using RAID.
On a 3Ware card (no spare):
- Pull out bad drive
- Put in new drive
- Tell 3DM2 to rebuild array
- Done.
On a 3Ware card (spare):
- Automagically rebuilds array from designated spare
- Pull out bad drive
- Put in new drive
- Tell 3DM2 about new spare
- Done.
Hardware specs include:- Dual Xeon 3.2 GHz, 2 GB RAM
What chipset? What mainboard? What is your I/O configuration?
I would like to implement hardware RAID but am unsure as to which would be most suitable for my needs. Any advice is appreciated.
Option 1 - RAID 5 (3 hdd's) + 1 hot spare
Option 2 - RAID 10 (4 hdd's) + 1 "cold" spare (in the shelf)
Questions I have :-
- When should RAID 5 be implemented?
- When you want maximum storage capacity per disk (the more disks, the better)
- When you have largely read-only data
- When much of your read-only data is in a database
- When you have buffering RAID hardware
RAID-5 acts like RAID-0 during reads. But during writes, RAID-5 has to compute and commit parity, so it can get bogged down and requires a lot of buffer.
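For anyone who wants to see why those writes are expensive, here is a minimal Python sketch of RAID-5 parity -- a toy model with made-up block contents, not how any particular controller lays data out:

```python
# Toy model of a RAID-5 stripe: N-1 data blocks plus one parity block.
# Parity is the byte-wise XOR of the data blocks, so any single lost
# block can be rebuilt from the rest.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A 4-disk stripe: three data blocks and one parity block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# Full-stripe reads behave like RAID-0: just read the data blocks.
assert d0 + d1 + d2 == b"AAAABBBBCCCC"

# A small write to one block is a read-modify-write:
#   read old data + old parity, compute new parity, write new data + new parity
# -- four disk I/Os for what the host sees as a single write.
new_d1 = b"XXXX"
new_parity = xor_blocks(parity, d1, new_d1)   # cancel old d1, fold in new d1
assert new_parity == xor_blocks(d0, new_d1, d2)

# Recovery: if the disk holding d2 dies, rebuild it from the survivors.
assert xor_blocks(d0, new_d1, new_parity) == d2
print("parity math checks out")
```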
RAID-4 is better when you have large block/file writes/reads, and is used by some vendors (e.g., NetApp filers, especially for NFS).
RAID-3 (and NetCell's "RAID-XL") is better for desktops.
- When should RAID 10 be implemented?
- When you want maximum write performance
- When you have lots of independent reads
- When you have non-blocking RAID and disk hardware (ASIC+SRAM, ATA)
RAID-10 acts like two independent RAID-0 volumes when reading. During writes, it's much faster than RAID-5 in many cases, especially for system, swap, etc. For file and large data servers, RAID-10 kicks RAID-5's butt in most applications.
For a public/external web server, where I/O is limited to well below LAN speeds, RAID-10 loses much of its advantage over RAID-5, because disk access/throughput is far less important.
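To put rough numbers on the trade-off, here is a back-of-the-envelope sketch assuming 4 x 300 GB drives (the drive size comes from the original question; the per-write I/O counts are the textbook small-write costs and ignore controller caching):

```python
# Back-of-the-envelope RAID-5 vs RAID-10 comparison for 4 x 300 GB drives.
# Disk I/Os per host write are the textbook small-write costs; a good
# buffering controller can hide some of this, a cheap one cannot.

DRIVE_GB = 300
DRIVES = 4

def raid5(n_drives: int, drive_gb: int) -> dict:
    return {
        "usable_gb": (n_drives - 1) * drive_gb,  # one drive's worth of parity
        "ios_per_small_write": 4,                # read data+parity, write data+parity
        "survives": "any single drive failure",
    }

def raid10(n_drives: int, drive_gb: int) -> dict:
    return {
        "usable_gb": n_drives // 2 * drive_gb,   # half the space goes to mirrors
        "ios_per_small_write": 2,                # write both halves of the mirror
        "survives": "one drive per mirrored pair",
    }

print("RAID-5 :", raid5(DRIVES, DRIVE_GB))   # 900 GB usable, 4 I/Os per small write
print("RAID-10:", raid10(DRIVES, DRIVE_GB))  # 600 GB usable, 2 I/Os per small write
```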
- Is RAID 5 with a hot spare safer than RAID 10 with a "cold" spare?
Of course, because most hardware RAID cards will automatically rebuild on the spare drive for you.
- Is it possible to configure RAID 10 to have a hot spare?
Of course! Just get enough channels on your hardware RAID card to do so. E.g., get at least an 8-channel card, and use 6 drives for RAID-10, leaving 2 channels for spares.
When cost is an issue, I typically do a "near-hot spare" where I get a 4-channel SATA card, an Enlight 5-bay case, and I have the "cold spare" already in a can, so it's simply a matter of plugging it into a "hot" bay.
- Should one of the HDDs fail, a hot spare will kick in immediately and begin rebuilding.
Yes.
As I am planning to put in 300 GB HDDs,
For "Mission Critical" servers, I'd almost push you towards the 73GB, 10000rpm WD Raptor SATA drives. They have are "enterprise class" and roll of the same line as Hitachi's U320 drives, so their vibration and other attributes are 3-8x better than typical, commodity drives.
Otherwise, I continue to be a big fan of Seagate for commodity drives, with their 5-year warranties. They can offer that warranty because their new crop of materials can take 60C operating environments for longer durations (although they clearly don't recommend 24x7 operation).
how long would this take on a RAID 5 vs. RAID 10?
Depends on the RAID card. A RAID-1[0] rebuild is just a direct copy of a disk. RAID-5 actually writes less data, but reads far more than RAID-1[0] -- from X-1 disks -- so it can take much longer. If your RAID card doesn't buffer RAID-5 well (e.g., 3Ware Escalade 7000/8000, basically pre-9000 series), then it can take a long time.
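A rough way to ballpark it -- a hedged sketch, where the 50 MB/s sustained rebuild rate is an assumed placeholder, not a measured figure:

```python
# Rough rebuild-time ballpark.  A RAID-1/10 rebuild is a straight copy of
# one drive; a RAID-5 rebuild reads every surviving drive and writes the
# replacement, so the controller (and bus) moves far more data.
# The 50 MB/s sustained figure is a placeholder, not a measurement.

DRIVE_GB = 300
SUSTAINED_MB_S = 50          # assumed per-drive sustained rate during rebuild

def hours(total_gb: float, mb_per_s: float) -> float:
    return total_gb * 1024 / mb_per_s / 3600

# RAID-1[0]: copy one 300 GB drive onto its new mirror partner.
raid10_hours = hours(DRIVE_GB, SUSTAINED_MB_S)

# RAID-5 (4 drives): read 3 surviving drives, XOR, write 1 replacement.
# If the controller streams all drives in parallel it still takes at least
# one full drive's worth of time, but a card that buffers poorly ends up
# bus- or CPU-bound and can take several times longer.
raid5_data_moved_gb = DRIVE_GB * 3 + DRIVE_GB
raid5_hours_worst = hours(raid5_data_moved_gb, SUSTAINED_MB_S)

print(f"RAID-10 rebuild: ~{raid10_hours:.1f} h")
print(f"RAID-5 rebuild : ~{raid10_hours:.1f} h best case, "
      f"~{raid5_hours_worst:.1f} h if the controller serializes the I/O")
```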
- Will there be a degradation in performance for users on the system
(RAID 5 vs. RAID 10)?
Yes. The only time I haven't seen a card take a massive performance hit during a RAID rebuild is on the NetCell products with their RAID-XL. But it's only for desktops (definitely not a design for servers).
You want to minimize rebuild time, period.
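Part of the reason is the degraded-read cost: while a RAID-5 array is degraded or rebuilding, any read that lands on the missing disk has to be reconstructed from every surviving disk, whereas a degraded RAID-10 just reads the surviving mirror. A toy illustration (disk counts are only examples):

```python
# Disk reads needed to service one host read that hits the failed drive.
# Reads that land on surviving drives are unaffected in both cases.
# RAID-10: the surviving mirror serves it -- still one disk read.
# RAID-5 : the block must be rebuilt by reading the whole stripe from
#          every surviving drive and XOR-ing it together.

def degraded_read_cost(level: str, n_drives: int) -> int:
    if level == "raid10":
        return 1                  # read the surviving copy
    if level == "raid5":
        return n_drives - 1       # read all survivors to reconstruct
    raise ValueError(level)

for n in (4, 8, 12):
    print(f"{n:2d} drives: RAID-10 degraded read = "
          f"{degraded_read_cost('raid10', n)} I/O, "
          f"RAID-5 degraded read = {degraded_read_cost('raid5', n)} I/Os")
```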
- What are the disadvantages of using RAID 5 vs. RAID 10?
Write performance, especially on a direct, block device like [S]ATA. Unless you are building a web server where all you'll be doing is reading 99.9% of the time, I highly recommend against RAID-5.
And even when building a web server, I still recommend the "system" drive be RAID-10. E.g., with an 8-channel controller, consider:
- 4-disc RAID-10 System
- 3-disc RAID-5 Data
- 1-disc Hot Spare (which can be used for _either_ ;-)
With a 12-channel controller, make the RAID-5 data volume 7-discs.
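For a sense of what those layouts yield, a quick capacity sketch, assuming the 300 GB drives from the original question:

```python
# Usable capacity for the suggested layouts, assuming 300 GB drives
# (the drive size comes from the original question, not the reply).

DRIVE_GB = 300

def layout(system_raid10: int, data_raid5: int, spares: int) -> str:
    system_gb = system_raid10 // 2 * DRIVE_GB   # RAID-10: half is mirror copies
    data_gb = (data_raid5 - 1) * DRIVE_GB       # RAID-5: one drive of parity
    total = system_raid10 + data_raid5 + spares
    return (f"{total:2d} channels: {system_gb} GB system (RAID-10), "
            f"{data_gb} GB data (RAID-5), {spares} hot spare")

print(layout(4, 3, 1))    #  8 channels:  600 GB system,  600 GB data
print(layout(4, 7, 1))    # 12 channels:  600 GB system, 1800 GB data
```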
Thanks in advance for answering my questions.
[ I assume you want to minimize costs -- e.g., 4 drives. But I'll include a full list of options regardless. ]
- Quality SATA RAID Cards ...
3Ware Escalade 8506-4LP/8/12 PCI64 (RAID-10 only, RAID-5 slow): [ 64-bit@66MHz (0.5GBps), 64-bit ASIC, 2-4MB SRAM ] http://www.3ware.com/products/serial_ata8000.asp
3Ware Escalade 9500S-4LP/8[MI]/12[MI]: [ 64-bit@66MHz (0.5GBps), 64-bit ASIC, 2-4MB SRAM, 128+MB DRAM ] http://www.3ware.com/products/serial_ata9000.asp
LSI Logic MegaRAID SATA 300-8X (8-channel SATA): [ 64-bit@66-133MHz (0.5-1.0GBps), XScale IOP331, 128MB DRAM ] http://www.lsilogic.com/products/megaraid/sata_300_8x.html
- Quality U320 SCSI RAID Cards ...
LSI Logic MegaRAID SCSI 320-2X 2-channel U320 SCSI (PCI-X) [ 64-bit@66-133MHz (0.5-1.0GBps), XScale IOP321, 128+MB DRAM ] http://www.lsilogic.com/products/megaraid/scsi_320_2x.html
LSI Logic MegaRAID SCSI 320-2E 2-channel U320 SCSI (PCIe) [ 1/8-bit@2.5GHz (0.25/2.0GBps), XScale IOP332, 128+MB DRAM ] http://www.lsilogic.com/products/megaraid/megaraid_320_2e.html
LSI Logic MegaRAID SCSI 320-4X 4-channel** U320 SCSI (PCI-X) [ 64-bit@66-133MHz (0.5-1.0GBps), XScale IOP321, 128+MB DRAM ] http://www.lsilogic.com/products/megaraid/scsi_320_4x.html
**NOTE: If you're thinking about 12 drives, avoid putting more than 3-4 drives on a single SCSI channel.
- Drives
Going with "commodity" disks isn't always recommended. "Commodity" disk capacities are typically the 40, 60, 80, 120, 160, 200, 250, 300, 320 and 400GB capacities.
The more "enterprise" disks are capacities of 36, 73, 146GB. They are designed for 24x7 environments, with greatly reduced vibration and other superior quality.
NOTE: You _can_ get "enterprise" disks in SATA interfaces. E.g., the Hitachi 10000rpm 36GB and 73GB U320 SCSI products are sold by Western Digital as its "Raptor" SATA products. The price difference is ~$350 (U320 SCSI) to ~$200 (SATA) for the 73GB capacity.
- Enclosures
If you have 3-6U of internal space, you can fit 5-10 1" drives (5 in each 3U). I like the Enlight EN-8721 which comes in both U320 SCSI SCA and SATA (which is SCA-like hot-plug) flavors: http://twe.enlightcorp.com/products/server/detel.php?serial=42
The SATA version runs about $150, the U320 runs just over $200.
For external enclosures, SATA options are only now appearing. The 1m cable limitation of SATA is a major factor (something that will be solved by Serial Attached SCSI, SAS, which uses the same physical interface as SATA for maximum compatibility).
If you're thinking more like 12 SCSI drives (with the 320-4X), the Enlight SE-301 is Intel SSI certified: http://twe.enlightcorp.com/products/server/detel.php?serial=41
On Mon, 2005-07-25 at 06:49 -0500, Bryan J. Smith wrote:
And even when building a web server, I still recommend the "system" drive be RAID-10. E.g., with an 8-channel controller, consider:
- 4-disc RAID-10 System
- 3-disc RAID-5 Data
- 1-disc Hot Spare (which can be used for _either_ ;-)
With a 12-channel controller, make the RAID-5 data volume 7-discs.
Many times I'll actually just use RAID-1 for the system on an 8-channel card. E.g.,
8-channel: 2-system (RAID-1), 5-data (RAID-5), 1-spare (either)
12-channel: 4-system (RAID-10), 7-data (RAID-5), 1-spare (either)
Otherwise, if the hardware is local to you, a 3Ware Escalade card can page you on a failure. So if cost is an issue, get a 3Ware Escalade 8506-4LP (sub-$300), one EN-8721 enclosure ($150), and put in 5 drives, knowing that one drive is not connected (but is "ready to plug" in its can).