Guys,
I was handed a cheap OEM server with a 120 GB SSD and 10 x 4 TB SATA disks for the backup data, to build a backup server out of. It's built around an Asus Z87-A that unfortunately seems to have problems with anything Linux.
Anyway, BackupPC is my preferred backup solution, so I went ahead and tried to install another favourite, CentOS 6.4 - and failed.
The RAID controller is a Highpoint RocketRAID 2740, and its driver is supposed to be loaded prior to starting Anaconda by way of Ctrl-Alt-F2, at which point Anaconda freezes.
I've got as far as installing Fedora 19 and having it see all the hard drives, but it refuses to create any partition bigger than approx. 16 TB with ext4.
I've never had to deal with RAID arrays this big before and am a bit stumped.
Any hints as to where to start reading up, as well as on how to proceed (another motherboard, ditto RAID controller?), would be greatly appreciated.
Thanks.
On 04.11.2013 12:44, Sorin Srbu wrote:
> Anyway, BackupPC is my preferred backup solution, so I went ahead and tried to install another favourite, CentOS 6.4 - and failed.
> The RAID controller is a Highpoint RocketRAID 2740, and its driver is supposed to be loaded prior to starting Anaconda by way of Ctrl-Alt-F2, at which point Anaconda freezes.
Hi Sorin,
Please check this page; if you have the driver from the manufacturer, it shows you how to load it: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/...
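In short, loading a driver update disk on the EL6 installer boils down to a boot option; the image URL below is made up for illustration:

```shell
# At the installer boot prompt, append "dd" so Anaconda asks for a
# driver update disk before it probes storage:
#
#   boot: linux dd
#
# Or point it directly at a driver update image over the network
# (hypothetical URL):
#
#   boot: linux dd=http://192.168.1.10/rr2740-dud.img
```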
I've come so far as installing Fedora 19 and having it see all the hard-drives, but it refuses to create any partition bigger than approx. 16 TB with ext4.
Yes, Red Hat puts in this artificial limit. They say they do not support ext4 volumes larger than that and recommend XFS instead, which is what I recommend as well.
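Making an XFS filesystem on a big array is then a one-liner; the device name below is hypothetical, so adjust for whatever the controller exports:

```shell
DEV=/dev/sdb1                 # hypothetical device exported by the RAID controller

mkfs.xfs $DEV                 # no 16 TB ceiling, unlike ext4 on EL6
mkdir -p /srv/backuppc
mount -o inode64 $DEV /srv/backuppc   # inode64: let inodes live anywhere on a large fs
```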
> I've never had to deal with RAID arrays this big before and am a bit stumped.
> Any hints as to where to start reading up, as well as on how to proceed (another motherboard, ditto RAID controller?), would be greatly appreciated.
Just a thought - I maintain a desktop-oriented CentOS remix and have an ISO with the kernel from elrepo.org (kernel-ml): http://li.nux.ro/download/ISO/Stella6.4_x86_64.1_kernel-ml.iso It's not tested much, but the kernel might be new enough to support the RAID card. If you can install it, you could keep using it; "changing" it to CentOS is trivial.
HTH Lucian
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Nux!
Sent: 4 November 2013 14:02
To: CentOS mailing list
Subject: Re: [CentOS] [OT] Building a new backup server
> Please check this page; if you have the driver from the manufacturer, it shows you how to load it: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/...
That doesn't look quite like how the CentOS installer does it. Anyway, that was what I was trying to accomplish with Ctrl-Alt-F2 when Anaconda froze on me.
>> I've got as far as installing Fedora 19 and having it see all the hard drives, but it refuses to create any partition bigger than approx. 16 TB with ext4.
> Yes, Red Hat puts in this artificial limit. They say they do not support ext4 volumes larger than that and recommend XFS instead, which is what I recommend as well.
What about the guideline of 1 GB of RAM per TB of disk space for XFS, in order to be able to run an fsck? I don't think I can fit that much RAM (40 GB) on this particular motherboard.
> Just a thought - I maintain a desktop-oriented CentOS remix and have an ISO with the kernel from elrepo.org (kernel-ml): http://li.nux.ro/download/ISO/Stella6.4_x86_64.1_kernel-ml.iso It's not tested much, but the kernel might be new enough to support the RAID card. If you can install it, you could keep using it; "changing" it to CentOS is trivial.
That's a thought - plan B. Thanks!
-- //Sorin
On 2013-11-04, Sorin Srbu <Sorin.Srbu@orgfarm.uu.se> wrote:
> What about the guideline of 1 GB of RAM per TB of disk space for XFS, in order to be able to run an fsck? I don't think I can fit that much RAM (40 GB) on this particular motherboard.
That is somewhat of a guideline; I have done an fsck on largish filesystems with much less than 1 GB of RAM per 1 TB of storage. I believe I have done an xfs_repair on a 25 TB filesystem with only 4 GB of memory (you pretty much need -P to do this). It is true that you want to throw as much memory as you can at these filesystems, but if you can't, then you can't. Especially for your application (a backup server), more RAM is helpful but not essential.
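The memory-saving knobs look roughly like this; the device name is hypothetical, and the filesystem must be unmounted first:

```shell
DEV=/dev/sdb1        # hypothetical array device

xfs_repair -n $DEV           # dry run: report problems without changing anything
xfs_repair -P $DEV           # -P disables inode/directory prefetching to cut memory use
xfs_repair -P -m 3072 $DEV   # -m additionally caps memory usage, in MB (here ~3 GB)
```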
--keith
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Keith Keller
Sent: 4 November 2013 20:19
To: centos@centos.org
Subject: Re: [CentOS] [OT] Building a new backup server
>> What about the guideline of 1 GB of RAM per TB of disk space for XFS, in order to be able to run an fsck? I don't think I can fit that much RAM (40 GB) on this particular motherboard.
> That is somewhat of a guideline; I have done an fsck on largish filesystems with much less than 1 GB of RAM per 1 TB of storage. I believe I have done an xfs_repair on a 25 TB filesystem with only 4 GB of memory (you pretty much need -P to do this). It is true that you want to throw as much memory as you can at these filesystems, but if you can't, then you can't. Especially for your application (a backup server), more RAM is helpful but not essential.
Excellent, thanks!
-- //Sorin
On 11/4/2013 4:44 AM, Sorin Srbu wrote:
> I've got as far as installing Fedora 19 and having it see all the hard drives, but it refuses to create any partition bigger than approx. 16 TB with ext4.
> I've never had to deal with RAID arrays this big before and am a bit stumped.
Use XFS for large file systems.
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of John R Pierce
Sent: 4 November 2013 18:08
To: centos@centos.org
Subject: Re: [CentOS] [OT] Building a new backup server
> On 11/4/2013 4:44 AM, Sorin Srbu wrote:
>> I've got as far as installing Fedora 19 and having it see all the hard drives, but it refuses to create any partition bigger than approx. 16 TB with ext4.
>> I've never had to deal with RAID arrays this big before and am a bit stumped.
> Use XFS for large file systems.
Looking into it. Thanks.
-- //Sorin