[CentOS-virt] Can I use direct attached storage as a shared filesystem in Xen

Tue Feb 9 15:13:56 UTC 2010
Rich <rhdyes at gmail.com>

I want to make the 10-terabyte RAID an XFS filesystem and then share the
drive with all 4 of the VMs.  3 of the servers will be Samba servers and
one will be my Lotus Notes server.  I want to make the filesystem /data,
and then each of the servers will use specific subdirectories.  I have it
set up as block devices now, but I want the flexibility of having the
whole 10 terabytes available to all 4 servers.
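
If I go the NFS route that was suggested, I'm guessing it would look
something like the following (the 192.168.122.0/24 subnet and the
"dom0" hostname are just placeholders for whatever my bridge actually
uses):

    # On the dom0 (or whichever machine owns the RAID), with the array
    # already formatted with mkfs.xfs and mounted at /data:
    echo '/data 192.168.122.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra
    service nfs start

    # On each of the four guests:
    mkdir -p /data
    echo 'dom0:/data  /data  nfs  rw,hard,intr  0 0' >> /etc/fstab
    mount /data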

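And if OCFS2 is as easy to set up as Christopher says below, I'm
guessing the bare minimum is something like this (the node names, IPs,
and the shared /dev/xvdb device are placeholders, and every guest would
need that same block device exported to it in its Xen config):

    # /etc/ocfs2/cluster.conf, identical on all four guests.  Only two
    # nodes are shown; the other two follow the same pattern.  Note the
    # attribute lines under cluster:/node: must be indented with a tab.
    cluster:
            node_count = 4
            name = datacluster

    node:
            ip_port = 7777
            ip_address = 192.168.122.11
            number = 0
            name = samba1
            cluster = datacluster

    node:
            ip_port = 7777
            ip_address = 192.168.122.12
            number = 1
            name = samba2
            cluster = datacluster

    # On every guest: bring the cluster stack up.
    service o2cb configure
    service o2cb online datacluster

    # Once, from any single guest: format the shared device with 4 node slots.
    mkfs.ocfs2 -N 4 -L data /dev/xvdb

    # On every guest:
    mount -t ocfs2 /dev/xvdb /data
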
On Tue, Feb 9, 2010 at 1:28 AM, Christopher G. Stach II <cgs at ldsys.net> wrote:

> ----- "Adam Adamou" <adam0x54 at gmail.com> wrote:
>
> > Either NFS or OCFS2. NFS is the easiest route. OCFS2 will give you a
> > clustered filesystem.
>
> Except NFS doesn't follow normal filesystem semantics and you can end up
> with corrupt data without knowing it, and it, along with CIFS, will give you
> a free shitload of network overhead to go along with your possibly corrupt
> data. OCFS2 or GFS are the only practical choices if you want it to behave
> like a typical filesystem and not have to worry about catering to it or
> rewriting software and/or reeducating developers, and OCFS2 is extremely
> easy to set up.
>
> The original question didn't specify much about the requirements, though. A
> single shared filesystem? Read-write or read-only? No filesystem at all?
> Without that information, I would at first recommend not sharing. It can be
> a lot of trouble, it's usually not required, and it severely complicates
> life when things fail.
>
> Well, there is always XenFS... :/
>
> --
> Christopher G. Stach II
> http://ldsys.net/~cgs/