Yes, I did try to boot off a 4.5TB device. Now, the individual partitions are:
/      1GB      /dev/sda2  ext3
/boot  250MB    /dev/sda1  ext2  (marked as the primary partition)
swap   1024MB   /dev/sda3  swap
/data  4.449TB  /dev/sda4  ext3
All 9 drives are RAID5'd together via the 3Ware card. Tomorrow, I'll don my earplugs and create a small 1TB RAID device to install CentOS, then RAID5 the remaining 7 drives for all the data. I'll recreate the partitions above in the 1TB array, then use LVM to add the remaining 3.5TB into the LVs I need. I could also read up on array carving; that's new to me.
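If the data partitions live under LVM, growing into the second array later is mostly mechanical. A rough sketch, assuming hypothetical names (volume group datavg, data LV datalv, and the new array showing up as /dev/sdc):

```shell
# Add the new array to the existing volume group (names are hypothetical)
pvcreate /dev/sdc
vgextend datavg /dev/sdc

# Grow the LV into the new space, then grow the filesystem to match.
lvextend -L +3T /dev/datavg/datalv   # size is illustrative; check
                                     # vgdisplay for actual free extents
resize2fs /dev/datavg/datalv         # ext3 may need to be unmounted, or
                                     # use ext2online, depending on kernel
```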
Thanks for all the help on this, I really thought I was doing something silly with the BIOS or some other setting. Didn't realize I would be running into limits of the OS.
Now, if anyone knows how to tune this thing for best write performance, that would be really cool. This server is part of a tapeless backup system using rsync. There's a good howto in the Linux Hacks book; we started from that, and it's grown into a fully automated system that does hourly, daily and weekly backups. Plus, it sends everything offsite to a colo location... in case people were wondering why so much disk space.
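For anyone curious, the core of that kind of rsync rotation can be sketched in one command; the host and paths here are hypothetical:

```shell
# Pull today's snapshot, hard-linking unchanged files against
# yesterday's snapshot so each extra copy is nearly free on disk.
rsync -a --delete \
      --link-dest=/backups/daily.1 \
      backupuser@fileserver:/data/ \
      /backups/daily.0/
```

The hard links are why hourly/daily/weekly rotations stay affordable: only changed files consume new space.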
Mark
Quoting Mark Schoonover <schoon@amgt.com>:
> All 9 drives are RAID5'd together via the 3Ware card. Tomorrow, I'll don my earplugs and create a small 1TB RAID device to install CentOS, then RAID5 the remaining 7 drives for all the data. I'll recreate the partitions above in the 1TB array, then use LVM to add the remaining 3.5TB into the LVs I need. I could also read up on array carving; that's new to me.
I don't have any 95xx cards around, but here are some suggestions anyhow on what I'd try if I had one. I'd try array carving first. That way you can keep the OS on a separate LUN from the rest of your data, so it's kind of separated at the hardware level. As I said before, you don't need to recompile the kernel; just tell scsi_mod how many LUNs it needs to scan. For example, create LUN 0 to be 1 or 2 gigs and LUN 1 to be the rest of the space. Install the OS on LUN 0 (the only one it will see by default), then use the max_luns option to gain access to LUN 1.
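The max_luns step can be sketched like this, assuming a 2.6-kernel CentOS install where scsi_mod is built as a module (paths and kernel version are whatever your install uses):

```shell
# Tell the SCSI midlayer to probe beyond LUN 0 on each target.
echo "options scsi_mod max_luns=2" >> /etc/modprobe.conf

# scsi_mod is typically loaded from the initrd, so rebuild it so the
# option takes effect at boot:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

# (On a 2.4 kernel the equivalent was the max_scsi_luns boot parameter.)
```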
A 9-drive RAID-5 will give you 500 gigs more disk space than you would have by creating a 2-drive RAID-1 and a 7-drive RAID-5. So I think it is worth a try.
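The arithmetic behind that, assuming nine 500GB drives (9 x 500GB = 4.5TB raw):

```python
DRIVE_GB = 500

# One 9-drive RAID-5: one drive's worth of capacity goes to parity.
raid5_9 = (9 - 1) * DRIVE_GB          # 4000 GB usable

# Alternative: 2-drive RAID-1 (half usable) plus 7-drive RAID-5.
raid1_2 = DRIVE_GB                    # 500 GB usable
raid5_7 = (7 - 1) * DRIVE_GB          # 3000 GB usable

print(raid5_9 - (raid1_2 + raid5_7))  # -> 500
```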
Even with array carving, I'd still use LVM. It might look like overkill to have a volume group with a single physical volume and a single logical volume on it, but it will make things more manageable and expandable in the future. It is trivial to set up, and the overhead should be small.
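That minimal one-PV setup might look like this; the device and names here (data LUN as /dev/sdb, volume group datavg, LV datalv) are hypothetical:

```shell
# Turn the data LUN into an LVM physical volume, build a one-PV
# volume group, and carve a single logical volume out of it.
pvcreate /dev/sdb
vgcreate datavg /dev/sdb
lvcreate -L 3400G -n datalv datavg   # size is illustrative; check
                                     # vgdisplay for actual free extents
mkfs.ext3 /dev/datavg/datalv
```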