Arch = x86_64 CentOS-6.4
We have a cold server with 32 GB RAM and 8 x 3 TB SATA drives mounted in hotswap cells. The intended purpose of this system is as an ERP application and DBMS host. The ERP application will likely eventually have web access but at the moment only dedicated client applications can connect to it.
I am researching how to best set this system up for use as a production host employing RAID. I have read the (minimal) documentation respecting RAID on the RedHat site and have found and read a few online guides. Naturally, in my ignorance I have a bunch of questions to ask and I probably have a bunch more that I should but do not know enough yet to ask.
From what I have read it appears that the system disk must use RAID 1 if it uses RAID at all. Is this the case? If so, is there any benefit to be had by taking two of the 8 drives (6 TB) solely to hold the OS and boot partition? Should these two drives be pulled and replaced with two smaller ones, or should we bother with RAID for the boot disk at all?
Given that one or two drive bays will be given over to the OS, what should be the configuration of the remaining six? It appears from what I have read that RAID 5 is the only viable option. It also appears that the amount of storage available on a RAID5 array with N members is (N-1)/N of the raw capacity. I also read that as the number of members increases, both latency and the risk of data loss increase. As the amount of disk space we have in this unit (24 TB) is greater than the total storage of all our existing hosts, it appears that a RAID5 array of 5 units would leave at least one hot spare in the chassis, and two if the OS is put on one disk.
Alternatively, the thought comes to mind that we could do a RAID1 with two RAID5 arrays, each of which has 3 drives. Whether one would actually want to do that seems to me a bit questionable, but it seems to be at least possible.
Comments, suggestions, caveats?
Regards,
James B. Byrne wrote:
> Arch = x86_64 CentOS-6.4
>
> We have a cold server with 32 GB RAM and 8 x 3 TB SATA drives mounted in
> hotswap cells. The intended purpose of this system is as an ERP
> application and DBMS host. The ERP application will likely eventually
> have web access but at the moment only dedicated client applications can
> connect to it.
> I am researching how to best set this system up for use as a production
> host employing RAID. I have read the (minimal) documentation respecting
> RAID on the RedHat site and have found and read a few online guides.
> Naturally, in my ignorance I have a bunch of questions to ask and I
> probably have a bunch more that I should but do not know enough yet to
> ask.
> From what I have read it appears that the system disk must use RAID 1 if
> it uses RAID at all. Is this the case?
Why? Where did you get that idea? On all of our large RAIDs, we're using RAID 6 - remember, with RAID 1, you have half the space of the physical drives.
> If so, is there any benefit to be had by taking two of the 8 drives
> (6 TB) solely to hold the OS and boot partition?
Are you doing Linux software RAID, or do you have a hardware RAID controller?
Me, I'd partition it up... lessee, the 3 TB drive with which I'm about to replace one of our users' root drives has 1G /boot, 2G swap, 500G /, and a fourth partition with the rest of the space.
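For concreteness, that layout would be something like this with parted on a GPT disk (just a sketch; /dev/sdX and the exact boundaries are placeholders, adjust to taste):

    # GPT label (needed for a 3 TB disk), then the four partitions
    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart boot ext4 1MiB 1GiB
    parted -s /dev/sdX mkpart swap linux-swap 1GiB 3GiB
    parted -s /dev/sdX mkpart root ext4 3GiB 503GiB
    parted -s /dev/sdX mkpart data ext4 503GiB 100%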
> Given that one or two drive bays will be given over to the OS what
> should be the configuration of the remaining six? It appears from what
> I have read that RAID 5 is the only viable option.
Sounds like old news to me. As I said, *all* of our large RAIDs are running RAID 6.
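If you go the software raid route, that's a one-liner with mdadm (a sketch, not our exact config; device names hypothetical):

    # 6-drive RAID 6 plus one hot spare, out of seven devices
    mdadm --create /dev/md1 --level=6 --raid-devices=6 --spare-devices=1 /dev/sd[b-h]

which nets you (6-2) x 3 TB = 12 TB usable, and the spare kicks in automatically when a drive fails.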
As the amount of disk space we have in this unit (24Tb) is greater than
Note that I'd make it 14TB and 10TB, or perhaps 500G, maybe RAID 1, for a total of 1TB, then 14TB and 9TB.
the total storage of all our existing hosts it appears that a RAID5
array of 5
units would leave at least one hot spare in the chassis and two if the OS is put on one disk.
Yeah, a hot spare's a good idea.

<snip>

mark
On Thu, Nov 14, 2013 at 12:23 PM, James B. Byrne <byrnejb@harte-lyne.ca> wrote:
> Arch = x86_64 CentOS-6.4
>
> We have a cold server with 32 GB RAM and 8 x 3 TB SATA drives mounted in
> hotswap cells. The intended purpose of this system is as an ERP
> application and DBMS host. The ERP application will likely eventually
> have web access but at the moment only dedicated client applications can
> connect to it.
>
> I am researching how to best set this system up for use as a production
> host employing RAID. I have read the (minimal) documentation respecting
> RAID on the RedHat site and have found and read a few online guides.
> Naturally, in my ignorance I have a bunch of questions to ask and I
> probably have a bunch more that I should but do not know enough yet to
> ask.
Are you going to use hardware or software raid?
> From what I have read it appears that the system disk must use RAID 1 if
> it uses RAID at all. Is this the case? If so, is there any benefit to be
> had by taking two of the 8 drives (6 TB) solely to hold the OS and boot
> partition? Should these two drives be pulled and replaced with two
> smaller ones or should we bother with RAID for the boot disk at all?
>
> Given that one or two drive bays will be given over to the OS what
> should be the configuration of the remaining six? It appears from what
> I have read that RAID 5 is the only viable option.
What about RAID10?
I've read that running a database server on raid5 isn't recommended, but raid1 or raid10 is recommended.
> It also appears that the amount of storage available on a RAID5 array
> with N members is (N-1)/N. I also read that as the number of members
> increases both latency and the risk of data loss increase. As the
> amount of disk space we have in this unit (24 TB) is greater than the
> total storage of all our existing hosts it appears that a RAID5 array
> of 5 units would leave at least one hot spare in the chassis and two if
> the OS is put on one disk.
Space efficiency is less than that of raid5: rather than (N-1)/N of the raw capacity usable with raid5, you get 1/2 with raid10.
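To put numbers on that, using the six remaining 3 TB drives: raid5 yields (6-1) x 3 = 15 TB usable, raid6 yields (6-2) x 3 = 12 TB, and raid10 yields 6 x 3 / 2 = 9 TB.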
> Alternatively, the thought comes to mind that we could do a RAID1 with
> two RAID5 arrays each of which has 3 drives. Whether one would actually
> want to do that seems to me a bit questionable but it seems to be at
> least possible.
You're suggesting a raid5+1, or raid51: http://en.wikipedia.org/wiki/Nested_RAID_levels
I wouldn't suggest nesting software raid if you can avoid it, due to the complexity. There are reasons to create a raid1 array out of two hardware arrays, but I'd avoid doing so.
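Just to illustrate what that nesting would look like with Linux software raid (a sketch only, hypothetical device names; again, not a recommendation):

    # two 3-drive raid5 sets
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
    # then mirror the two raid5 arrays (raid5+1)
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1

Note you'd get only 6 TB usable out of the 18 TB in those six drives, and still have raid5 rebuild behavior underneath.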
On Thu, Nov 14, 2013 at 1:04 PM, SilverTip257 <silvertip257@gmail.com> wrote:
> On Thu, Nov 14, 2013 at 12:23 PM, James B. Byrne <byrnejb@harte-lyne.ca> wrote:
>> Arch = x86_64 CentOS-6.4
>>
>> We have a cold server with 32 GB RAM and 8 x 3 TB SATA drives mounted
>> in hotswap cells. The intended purpose of this system is as an ERP
>> application and DBMS host. The ERP application will likely eventually
>> have web access but at the moment only dedicated client applications
>> can connect to it.
>>
>> I am researching how to best set this system up for use as a production
>> host employing RAID. I have read the (minimal) documentation respecting
>> RAID on the RedHat site and have found and read a few online guides.
>> Naturally, in my ignorance I have a bunch of questions to ask and I
>> probably have a bunch more that I should but do not know enough yet to
>> ask.
> Are you going to use hardware or software raid?
Butting in: I know people who would argue for either solution, and that is without even calling in the zfs crowd... ;)
>> From what I have read it appears that the system disk must use RAID 1
>> if it uses RAID at all. Is this the case? If so, is there any benefit
>> to be had by taking two of the 8 drives (6 TB) solely to hold the OS
>> and boot partition? Should these two drives be pulled and replaced
>> with two smaller ones or should we bother with RAID for the boot disk
>> at all?
Servers I have built usually have 2 x 40 GB SSDs in raid1 for the OS, and then SSDs or spinny disks for the data itself in some raid setup.
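That OS mirror is about as simple as software raid gets (a sketch; device and partition names hypothetical):

    # mirror two small disks/partitions for the OS, then put a filesystem on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0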
>> Given that one or two drive bays will be given over to the OS what
>> should be the configuration of the remaining six? It appears from what
>> I have read that RAID 5 is the only viable option.
> What about RAID10?
> I've read that running a database server on raid5 isn't recommended,
> but raid1 or raid10 is recommended.
I do agree that for the number of drives he has, raid10 seems to be the way to go. That said, what about raid6?
>> It also appears that the amount of storage available on a RAID5 array
>> with N members is (N-1)/N. I also read that as the number of members
>> increases both latency and the risk of data loss increase. As the
>> amount of disk space we have in this unit (24 TB) is greater than the
>> total storage of all our existing hosts it appears that a RAID5 array
>> of 5 units would leave at least one hot spare in the chassis and two
>> if the OS is put on one disk.
> Space efficiency is less than that of raid5: rather than (N-1)/N of the
> raw capacity usable with raid5, you get 1/2 with raid10.
But it would be faster. And disks are cheap.
On 11/14/2013 12:52 PM, Mauricio Tavares wrote:
> I do agree that for the number of drives he has, raid10 seems to be the
> way to go. That said, what about raid6?
Every small random write has to read/modify/write 3 drives on raid6. raid6 rebuilds are painfully slow, and raid6 write performance when a drive is down is awful.
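To spell out that first point: a write smaller than a full stripe has to read the old data block plus the old P and Q parity blocks, recompute both parities, and write all three back - six I/Os for a single logical write.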
From: James B. Byrne <byrnejb@harte-lyne.ca>
> We have a cold server with 32 GB RAM and 8 x 3 TB SATA drives mounted in
> hotswap cells. The intended purpose of this system is as an ERP
> application and DBMS host. The ERP application will likely eventually
> have web access but at the moment only dedicated client applications can
> connect to it.
>
> I am researching how to best set this system up for use as a production
> host employing RAID. I have read the (minimal) documentation respecting
> RAID on the RedHat site and have found and read a few online guides.
> Naturally, in my ignorance I have a bunch of questions to ask and I
> probably have a bunch more that I should but do not know enough yet to
> ask.
> From what I have read it appears that the system disk must use RAID 1 if
> it uses RAID at all. Is this the case? If so, is there any benefit to be
> had by taking two of the 8 drives (6 TB) solely to hold the OS and boot
> partition? Should these two drives be pulled and replaced with two
> smaller ones or should we bother with RAID for the boot disk at all?
> Given that one or two drive bays will be given over to the OS what
> should be the configuration of the remaining six? It appears from what
> I have read that RAID 5 is the only viable option. It also appears that
> the amount of storage available on a RAID5 array with N members is
> (N-1)/N. I also read that as the number of members increases both
> latency and the risk of data loss increase. As the amount of disk space
> we have in this unit (24 TB) is greater than the total storage of all
> our existing hosts it appears that a RAID5 array of 5 units would leave
> at least one hot spare in the chassis and two if the OS is put on one
> disk.
> Alternatively, the thought comes to mind that we could do a RAID1 with
> two RAID5 arrays each of which has 3 drives. Whether one would actually
> want to do that seems to me a bit questionable but it seems to be at
> least possible.
For some storage servers, we used the following cards with 2 small drives for the system: http://www.sybausa.com/productInfo.php?iid=1134
But do you really need 10+ TB for your ERP + DBMS? I'd just go with a RAID10 for the DBs...
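For what it's worth, a 6-drive raid10 with mdadm is just (a sketch; device names hypothetical):

    # 6 drives as striped mirrors = 9 TB usable from 3 TB disks
    mdadm --create /dev/md1 --level=10 --raid-devices=6 /dev/sd[b-g]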
JD