Hello again,
Been an interesting day.
I'm attempting to use parted to create a partition on a 28TB volume, which consists of 16x2TB drives configured in a RAID 5 + spare, so the total unformatted size is 28TB to the OS.
However upon entering parted, and making a gpt label, print reports back as follows;
Model: Areca ARC-1680-VOL#000 (scsi)
Disk /dev/sdc: 2199GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Why is the disk reporting ~2TB?
Should I be using another partitioning tool for such a large volume?
- aurf
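That 2199GB figure is worth a second look: it matches the 2 TiB ceiling of 32-bit LBA addressing with 512-byte sectors almost exactly, which suggests the controller or driver may be exporting a truncated size rather than parted misreading it. A quick check of the arithmetic:

```python
# Sanity check: is 2199GB the 32-bit LBA ceiling?
# With 512-byte sectors, a 32-bit sector count tops out at:
max_bytes = 2**32 * 512
print(max_bytes)            # 2199023255552 bytes
print(max_bytes / 10**9)    # ~2199 GB, matching parted's report
```

If the number lines up this neatly, the cap is almost certainly somewhere below parted (controller volume settings or driver), not in the partitioning tool.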
What filesystem are you planning to use? I am hoping for XFS on such a large volume.
aurfalien@gmail.com 1/11/2011 4:41 PM >>>
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
On Tue, 11 Jan 2011, aurfalien@gmail.com wrote:
I don't know the answer to your parted question, but let me be the first of many to express horror at the idea of using RAID-5 for such a large volume with so many spindles, even with a hot spare. The rebuild times are probably going to be days, and the chance of a second spindle failure in that time is high enough to make it dangerous. Use RAID-6 at least.
Steve
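Steve's worry can be made concrete with a back-of-envelope estimate. Rebuilding the failed drive means reading the 14 surviving data drives in full; at the commonly quoted consumer-drive unrecoverable-read-error (URE) rate of 1 per 10^14 bits, the odds of hitting at least one URE mid-rebuild are high. The URE rate and the independence assumption are both rough assumptions, not measurements from this array:

```python
import math

# Rough odds of an unrecoverable read error (URE) during a RAID 5 rebuild.
# Assumptions (not from the thread): 14 surviving 2TB drives are read in
# full, drives have the often-quoted consumer URE rate of 1 per 1e14 bits,
# and errors are independent.
bits_read = 14 * 2e12 * 8                              # ~2.24e14 bits
p_clean = math.exp(bits_read * math.log1p(-1e-14))     # P(no URE at all)
print(f"P(at least one URE) ~ {1 - p_clean:.0%}")      # ~89% with these numbers
```

Even if the real URE rate is an order of magnitude better, the exposure during a multi-day rebuild is exactly why RAID 6 is the usual recommendation at this scale.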
On Jan 11, 2011, at 2:06 PM, Steve Thompson wrote:
Ok RAID 6 it is.
How would anyone proceed with partitioning such a large volume?
- aurf
On Jan 11, 2011, at 2:56 PM, compdoc wrote:
mklabel gpt
then use zfs and zpool commands. Lots of good info on google.
Well, I did that and it still shows 2199GB.
Any ideas why, or am I hung up on benign errors?
Whenever I do this in parted;
mkpart primary 0 26T
I get;
Error: The location 26000G is outside of the device /dev/sdc
My Raid set up is Raid 6 + spare giving me 26TB raw.
If I do;
mkpart 0 3T
it works and the partition is 3TB.
This is a hardware based Areca RAID. I didn't feel the need to load any Areca drivers as CentOS supports this out of the box.
Any ideas?
- aurf
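Before blaming parted, it's worth confirming what size the kernel itself sees for the device, since parted can only work with what the block layer reports. The kernel exposes this in sysfs, always in 512-byte units. A small sketch (the helper name is made up; /dev/sdc is this thread's device):

```python
def device_bytes(size_path, sector_bytes=512):
    """Return a block device's capacity in bytes from its sysfs 'size' file.

    The kernel's 'size' file holds the device length in 512-byte sectors,
    regardless of the device's logical sector size.
    """
    with open(size_path) as f:
        return int(f.read().strip()) * sector_bytes

# On the machine from the thread, one would check:
# print(device_bytes("/sys/block/sdc/size"))
# If this also prints ~2.2e12, the truncation happens below parted
# (controller firmware or driver), not in parted itself.
```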
On January 11, 2011 03:16:23 pm aurfalien@gmail.com wrote:
Maybe it can only make 16TB partitions? Dunno, never had a drive quite that big. I know it works up to 10TB.
How are you planning to backup a 30TB filesystem, btw?
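The 16TB figure mentioned above may come not from parted but from the filesystem layer: ext3 (and ext4 without 64-bit support) addresses blocks with 32-bit numbers, so with 4 KiB blocks a filesystem tops out at 16 TiB, while GPT partitions themselves can go far larger. The arithmetic:

```python
# One common source of a "16TB" ceiling: 32-bit block numbers in
# ext3 / 32-bit ext4, combined with the usual 4 KiB block size.
fs_max = 2**32 * 4096
print(fs_max)            # 17592186044416 bytes
print(fs_max / 2**40)    # 16.0 TiB
```

XFS, which was suggested earlier in the thread, does not have this particular limit.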
On Jan 11, 2011, at 3:39 PM, Alan Hodgson wrote:
I rsync it to a few large arrays.
The production array will consist of a few dirs which will be individual NFS exports.
These dirs are, by themselves, already small enough to back up to other 10TB arrays.
- aurf
I think it's better to let parted decide how big the partition can be:
mkpart primary 0 -1
That should create a partition without a fs type (no ext3, etc.), starting at zero and using all available space.
If you have one of the Advanced Format hard drives being sold these days, they say you can have performance problems if you start at zero, so you should start at one:
mkpart primary 1 -1
It'd be interesting to know if you see a performance hit either way...
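The "start at 1, not 0" advice is about alignment: Advanced Format drives use 4 KiB physical sectors behind 512-byte logical ones, and a partition whose start doesn't land on a physical-sector boundary forces read-modify-write cycles. A quick divisibility check (a sketch; the sector sizes are the common AF case, not something from this thread's hardware):

```python
def start_is_aligned(start_sector, logical=512, physical=4096):
    """True if a partition starting at this logical sector lands on a
    physical-sector boundary (no read-modify-write straddling)."""
    return (start_sector * logical) % physical == 0

print(start_is_aligned(63))      # False: classic DOS offset, misaligned on AF
print(start_is_aligned(2048))    # True: the 1 MiB start modern tools default to
```

With parted's default units, "mkpart primary 1 -1" starts the partition at 1 MB, which satisfies this check.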
On Jan 11, 2011, at 2:06 PM, Steve Thompson wrote:
Hi Steve.
I went with Raid 6 + spare.
I'll force a failure (that is, once I get past my parted issue) and let you know rebuild times.
- aurf
On 01/11/11 5:34 PM, aurfalien@gmail.com wrote:
I'll force a failure (that is, once I get past my parted issue) and let you know rebuild times.
I wouldn't do that if read/write performance is important during the possibly multi-day rebuild. A degraded RAID 6 can be really, really slow until it's fully rebuilt.
I personally prefer RAID 10 for just about everything, except maybe a bulk nearline store; those will be RAID 5 or 6 with no more than 6-8 disks per raidset. If I have more spindles, I do RAID 5+0 or 6+0 (e.g., stripe two RAID 5 or RAID 6 sets). Disks are cheap; time is money.
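The trade-off can be put in numbers for the 16x2TB chassis from this thread. A rough usable-capacity sketch (assumes 2TB per disk, the hot spare excluded from the array, and the layouts named above; the helper is illustrative, not anyone's actual tooling):

```python
def usable_tb(total_disks, layout, spare=1, disk_tb=2):
    """Rough usable capacity for an array, ignoring formatting overhead."""
    n = total_disks - spare          # disks actually in the array
    if layout == "raid5":
        data = n - 1                 # one parity disk's worth
    elif layout == "raid6":
        data = n - 2                 # two parity disks' worth
    elif layout == "raid10":
        data = n // 2                # everything mirrored
    elif layout == "raid60":
        data = n - 4                 # two striped RAID 6 sets
    else:
        raise ValueError(layout)
    return data * disk_tb

print(usable_tb(16, "raid5"))             # 28 TB -- the original plan
print(usable_tb(16, "raid6"))             # 26 TB -- matches "26TB raw" above
print(usable_tb(16, "raid10", spare=0))   # 16 TB
print(usable_tb(16, "raid60", spare=0))   # 24 TB -- two 8-disk RAID 6 sets
```

The 28TB and 26TB outputs match the figures quoted earlier in the thread; the gap between RAID 6 and RAID 10 is the capacity price of the extra rebuild safety and speed.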