On 1/26/2011 4:55 PM, Dr. Ed Morbius wrote:
> The specific solution is 'umount -fl <dir|device>'.
>
> The general solution's a little stickier.
>
> I'd suggest the automount route as well (you're only open to NFS issues
> while the filesystem is mounted), but you then have to maintain
> automount maps and run the risk of issues with the automounter (I've
> seen large production environments in which the OOM killer would
> arbitrarily select processes to kill ....).
>
> Monitoring of client and server NFS processes helps. If it's the filer
> heads which are failing, and need warrants it, look into HA failover
> options.
>
> Soft mounts, as mentioned, won't hang processes, but may result in data
> loss. This is most critical in database operations (where atomicity is
> assumed and generally assured by the DBMS). If the issue is one of
> re-running a backup job, and you can get a clear failure, risk would be
> generally mitigated.

Actually, since the original question involved access to backups, I should
have given my usual answer, which is that BackupPC is the thing to use for
backups. It provides a web interface for restores: you pick the historical
version you want and either tell it to put it back on the original host or
download a tarball through the browser. Very nice for self-serve access.
It does want to map complete hosts to owners that have permission to
access them, but with a little work you can make different areas of a
shared system look like separate hosts.

-- 
Les Mikesell
lesmikesell at gmail.com
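
A couple of quick sketches to make the above concrete. First, the soft
mount and forced/lazy unmount being discussed; the server name, export
path, and timeout values here are only illustrative:

    # /etc/fstab: a soft NFS mount returns I/O errors instead of hanging
    # forever when the server goes away (at the risk of data loss)
    filer1:/export/backups  /mnt/backups  nfs  soft,timeo=100,retrans=3  0 0

    # force plus lazy unmount of an already-hung NFS mount
    umount -fl /mnt/backups

Second, a rough idea of how BackupPC can present different areas of one
shared machine as separate "hosts" for per-owner access; the host names,
user names, and paths are made up, and this assumes the rsync transfer
method:

    # /etc/BackupPC/hosts
    # host          dhcp    user
    shared-home     0       alice
    shared-www      0       bob

    # /etc/BackupPC/pc/shared-home.pl -- both entries point at the same box
    $Conf{ClientNameAlias} = 'bigserver.example.com';
    $Conf{XferMethod}      = 'rsync';
    $Conf{RsyncShareName}  = ['/home'];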