[CentOS] really large file systems with centos

Devin Reade gdr at gno.org
Thu Jul 14 15:20:48 UTC 2011

Two thoughts:

1.  Others have already inquired as to your motivation to move away from
    ZFS/Solaris.  If it is just the hardware & licensing aspect, you
    might want to consider ZFS on FreeBSD.  (I understand that unlike
    the Linux ZFS implementation, the FreeBSD one is in-kernel.)

2.  If you really want to move away from ZFS, one
    possibility is to use glusterfs, which is a lockless distributed
    filesystem.  Based on the glusterfs architecture, you scale out
    horizontally over time; instead of buying a single server with
    massive capacity, you buy smaller servers and add more as your
    space requirements exceed your current capacity.  You also decide
    over how many nodes you want your data to be mirrored.  Think
    about it as a RAID0/RAID1/RAID10 solution spread over machines
    rather than just disks.  It uses FUSE over native filesystems,
    so if you decide to back it out, you turn off glusterfs and you
    still have your data on the native filesystems.
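    To make that concrete, here's a rough sketch of the scale-out
    setup using the glusterfs CLI (hostnames, volume name, and brick
    paths below are placeholders, not anything from your environment):

    # On one node, add the other server(s) to the trusted pool
    gluster peer probe server2

    # Create a volume mirrored across two nodes ("replica 2");
    # each brick is just a directory on that node's native filesystem
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1

    gluster volume start myvol

    # Later, when you outgrow the capacity: add another mirrored
    # pair of bricks and rebalance existing data across them
    gluster volume add-brick myvol \
        server3:/export/brick1 server4:/export/brick1
    gluster volume rebalance myvol start

    The "replica 2" count is where you decide how many nodes mirror
    each file; the add-brick step is the horizontal scale-out.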

    From the client perspective the server cluster looks like a single
    logical entity, either over NFS or the native client software.
    (The native client is configured with info on all the server nodes,
    the NFS client depends on round-robin DNS to connect to *some* node
    of the cluster.)
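    Mounting looks something like this (again, hypothetical names;
    gluster.example.com would be the round-robin DNS record covering
    the cluster nodes):

    # Native FUSE client: name any one node; the client fetches the
    # volume definition from it and learns the full server topology
    mount -t glusterfs server1:/myvol /mnt/gluster

    # NFS client: round-robin DNS connects you to *some* node
    # (gluster's built-in NFS server speaks NFSv3)
    mount -t nfs -o vers=3 gluster.example.com:/myvol /mnt/gluster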


    Caveat:  I've only used glusterfs in one small deployment in
    a mirrored-between-two-nodes configuration.  Glusterfs doesn't
    have as many miles on it as ZFS or the other more common filesystems.
    I've not run into any serious hiccoughs, but put in a test cluster
    first and try it out.  Commodity hardware is just fine for such
    a test cluster.
