Hey folks,
I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it, connected to a Sunfire x2250 running CentOS 5.8 (64-bit).
I used 'arcconf' to create a big RAID60 out of them (see below).
But then when I mount it, it is way too small. It should be about 20TB:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             186G   60M  176G   1% /mnt/J4400-1
Here is how I created it:
./arcconf create 1 logicaldrive name J4400-1-RAID60 max 60 0 0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0 13 0 14 0 15 0 16 0 17 0 18 0 19 0 20 0 21 0 22 0 23 noprompt
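(If I'm reading the arcconf usage right, the form is: create <controller#> logicaldrive name <name> <size> <raid-level> <channel# drive#> ... So "max 60" means maximum size at RAID level 60, followed by the 24 channel/drive pairs.)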
[root@solexa1 StorMan]# ./arcconf getconfig 1 ld
Controllers found: 1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device number 0
   Logical device name        : J4400-1-RAID60
   RAID level                 : 60
   Status of logical device   : Impacted
   Size                       : 19066880 MB
   Stripe-unit size           : 256 KB
   Read-cache mode            : Enabled
   Write-cache mode           : Enabled (write-back)
   Write-cache setting        : Enabled (write-back) when protected by battery
   Partitioned                : Yes
   Protected by Hot-Spare     : No
   Bootable                   : Yes
   Failed stripes             : No
   --------------------------------------------------------
   Logical device segment information
   --------------------------------------------------------
   Group 0, Segment 0         : Present (0,0)  9QJ3ZAYQ
   Group 0, Segment 1         : Present (0,1)  9QJ3ZP3Y
   Group 0, Segment 2         : Present (0,2)  9QJ3X7GR
   Group 0, Segment 3         : Present (0,3)  9QJ3XJQW
   Group 0, Segment 4         : Present (0,4)  9QJ3TPK2
   Group 0, Segment 5         : Present (0,5)  9QJ40PHP
   Group 0, Segment 6         : Present (0,6)  GTE002PBHJEDBE
   Group 0, Segment 7         : Present (0,7)  9QJ3ZHE0
   Group 0, Segment 8         : Present (0,8)  9QJ3Z053
   Group 0, Segment 9         : Present (0,9)  9QJ3ZEX6
   Group 0, Segment 10        : Present (0,10) 9QJ33XGG
   Group 0, Segment 11        : Present (0,11) 9QJ3X88X
   Group 1, Segment 0         : Present (0,12) 9QJ3YLR2
   Group 1, Segment 1         : Present (0,13) GTE002PBHHNVZE
   Group 1, Segment 2         : Present (0,14) 9QJ3ZGM2
   Group 1, Segment 3         : Present (0,15) GTE002PBGP9VZE
   Group 1, Segment 4         : Present (0,16) 9QJ3ZB4X
   Group 1, Segment 5         : Present (0,17) 9QJ3ZAE0
   Group 1, Segment 6         : Present (0,18) 9QJ3Y8C8
   Group 1, Segment 7         : Present (0,19) GTE002PBH30GKE
   Group 1, Segment 8         : Present (0,20) GTE002PAKXKDPE
   Group 1, Segment 9         : Present (0,21) 9QJ3VXEL
   Group 1, Segment 10        : Present (0,22) 9QJ3W4W6
   Group 1, Segment 11        : Present (0,23) 9QJ3TPGR
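That reported size is about what I should expect, as a rough sanity check (my own arithmetic, assuming the RAID60 is laid out as two 12-drive RAID6 groups, each losing 2 drives to parity, with ~931 GiB usable per "1T" drive):

  echo "2 * (12 - 2) * 931" | bc    # 18620 GiB, which matches 19066880 MB / 1024

My "about 20TB" was decimal terabytes; 20 x 10^12 bytes is only ~18.2 TiB.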
Make 1 big partition:
sfdisk /dev/sdb <<EOF
,,L
EOF
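(The ",,L" input line is sfdisk shorthand for: default start, default size (the rest of the disk), and partition type L, which sfdisk accepts as an alias for 83/Linux.)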
[root@solexa1 StorMan]# sfdisk -l /dev/sdb
Disk /dev/sdb: 2430685 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
   Device Boot Start     End    #cyls    #blocks   Id  System
/dev/sdb1          0+  24540-  24541-  197124430   83  Linux
/dev/sdb2          0       -       0          0     0  Empty
/dev/sdb3          0       -       0          0     0  Empty
/dev/sdb4          0       -       0          0     0  Empty
And then make an ext4 filesystem on that:
[root@solexa1 StorMan]# mke4fs /dev/sdb1
mke4fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
12320768 inodes, 49281107 blocks
2464055 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1504 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune4fs -c or -i to override.
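(For example, "tune4fs -c 0 -i 0 /dev/sdb1" should disable both the mount-count and the interval checks; tune4fs is just the e4fsprogs spelling of tune2fs here.)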
Damn, should have checked the archives first (I had been looking at CentOS and RHEL docs but no luck).
Looks like 16TB is the limit?
On 05/23/2012 01:25 PM, Alan McKay wrote:
Damn, should have checked the archives first (I had been looking at CentOS and RHEL docs but no luck).
Looks like 16TB is the limit?
http://wiki.centos.org/About/Product
Indeed 16TB is the ext3 limit.
On May 23, 2012, at 2:25 PM, Alan McKay alan.mckay@gmail.com wrote:
Damn, should have checked the archives first (I had been looking at CentOS and RHEL docs but no luck).
Looks like 16TB is the limit?
That's the limit in ext3; I believe xfs's limit is in the PBs.
-Ross
From: Alan McKay alan.mckay@gmail.com
I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it connected ...
But then when I mount it, it is way too small. It should be about 20TB:
Your partitions look weird. Did you use GPT...? The old msdos/MBR partition table is limited to 2TB.
JD
On 5/23/2012 12:23 PM, Alan McKay wrote:
And then make an ext4 filesystem on that:
ext4 is limited to 16 TB.
Use xfs instead. I explain how here:
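Something along these lines should work as a minimal sketch (my own commands, not the contents of that link; assumes your kernel has XFS support and that you skip partitioning entirely):

  yum install xfsprogs     # provides mkfs.xfs; CentOS 5 may also need kmod-xfs or the centosplus kernel for the XFS module
  mkfs.xfs /dev/sdb        # XFS directly on the whole logical drive, no partition table
  mount /dev/sdb /mnt/J4400-1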
On Wednesday 23 May 2012 14.23.31 Alan McKay wrote:
Hey folks,
...
I used 'arcconf' to create a big RAID60 out of (see below).
But then when I mount it, it is way too small. It should be about 20TB:
...
/dev/sdb1 186G 60M 176G 1% /mnt/J4400-1
...
Here is how I created it:
./arcconf create 1 logicaldrive name J4400-1-RAID60 max 60 0 0 0 1 0 2
...
Make 1 big partition:
sfdisk /dev/sdb <<EOF
,,L
EOF
This is the problem; the various filesystem issues are irrelevant. sfdisk only writes "the old" msdos-type partition table, and that does not support >2T devices. It is unfortunate that it lacks proper error checking and warnings...
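You can actually see the wrap-around in the numbers: a rough check (my arithmetic, ignoring cylinder rounding) of the 19066880 MB logical drive expressed in 512-byte sectors, taken modulo 2^32:

  echo "19066880 * 2048 % 2^32 / 2048" | bc    # 192512 MiB, i.e. ~188 GiB

which is just about the 186G that df reported.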
You should do one of:
1) don't use partitioning (mkfs directly on /dev/sdb)
2) use LVM (pvcreate /dev/sdb ...)
3) use a GPT type partition table (parted /dev/sdb or similar; see the sketch below)
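For option 3, something like this should do (a minimal sketch; assumes /dev/sdb is the Adaptec logical drive and a parted new enough to accept percentage units; note it rewrites the partition table):

  parted -s /dev/sdb mklabel gpt               # replace the msdos label with GPT
  parted -s /dev/sdb mkpart primary 0% 100%    # one partition spanning the device

Option 1 is simpler still: skip the label and mkfs straight onto /dev/sdb.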
After this you'll still have to tackle the current 16T limit for ext4 and other filesystem-related oddities...
/Peter
Greetings,
On Wed, May 23, 2012 at 11:53 PM, Alan McKay alan.mckay@gmail.com wrote:
Hey folks,
I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it, connected to a Sunfire x2250 running CentOS 5.8 (64-bit).
Apart from XFS, you could perhaps also think about using GFS.