On 06/15/2014 04:23 AM, Warren Young wrote:
On Jun 12, 2014, at 11:27 AM, Warren Young <warren@etr-usa.com> wrote:
[*] The absolute XFS filesystem size limit is about 8 million terabytes, which requires about 500 cubic meters of the densest HDDs available today.
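Quick sanity check on that footnote, in Python. The drive dimensions and the 6 TB top capacity are my assumptions about what went into the estimate, not figures from the post:

    # Rough check of the "500 cubic meters" figure.  Assumptions (mine):
    # a standard 3.5" drive is about 146 x 101.6 x 26.1 mm, and 6 TB is
    # the densest commodity capacity shipping today.
    drive_volume_m3 = 0.146 * 0.1016 * 0.0261   # ~3.9e-4 m^3 per drive
    xfs_limit_tb = 8_000_000                    # ~8 EiB, rounded
    tb_per_drive = 6

    drives = xfs_limit_tb / tb_per_drive        # ~1.33 million drives
    print(f"{drives * drive_volume_m3:.0f} m^3")  # prints ~516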
I’ve been wondering what 500 TB looks like, so I worked it out: 500 TB usable plus 20% for redundancy is 600 TB raw, a mere 100 x 6 TB disks.
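In Python, using only the numbers above:

    usable_tb = 500                       # the supported XFS limit
    redundancy = 0.20                     # extra raw capacity for parity/mirrors
    tb_per_drive = 6

    raw_tb = usable_tb * (1 + redundancy)   # 600 TB raw
    drives = raw_tb / tb_per_drive          # 100 drives
    print(f"{raw_tb:.0f} TB raw across {drives:.0f} drives")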
Viewed that way, 500 TB looks a little on the low side. You can get a 9U server chassis[*] with its face almost covered by 50 hot-swap 3.5 inch drive trays. That puts us only one drive-size doubling away from a max-size array in a single server.
Even if we assume SAS drives, we’re still only about 3 doublings away from filling that 9U chassis with a 500 TB array. RHEL 7 will be in Production 1 support for another 5 years, enough time for those 3 doublings.
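Here’s the same arithmetic for the doublings, as a sketch. The 1.5 TB SAS starting capacity is an assumption on my part; the nearline 6 TB figure is from above:

    import math

    bays = 50
    target_raw_tb = 500 * 1.2               # 600 TB raw, as computed above
    per_drive_tb = target_raw_tb / bays     # 12 TB needed per drive

    def doublings(start_tb):
        # smallest n with start_tb * 2**n >= per_drive_tb
        return max(0, math.ceil(math.log2(per_drive_tb / start_tb)))

    print(doublings(6))     # nearline SATA today: 1 doubling
    print(doublings(1.5))   # assumed SAS today:   3 doublings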
I assume we’re climbing out of the doubling doldrums brought on by the Thailand floods by now. Even if not, we’ve got another *10* years before RHEL 7 leaves Production 3 support.
Apparently Red Hat picked this number by doing similar projections, and set it fairly conservatively.
What this means is that some of us will be DIYing petabyte-scale arrays in a single commodity chassis by the time RHEL 8 ships. I’m not talking about high-dollar SAN or Big Iron stuff here; we’ll be making them from commodity parts you can buy off NewEgg without a special order. Wow.
Aside from some corporation...or a home business where expansion is expected, I don't think I would attempt this. But I'm sure there are those who actually need to do something like this to ensure their site remains stable, reliable, and robust. I can only imagine the nightmares that would begin for me trying to get this all up and running.
EGO II