Hi All,
How do I unmount an NFS share when the NFS server is unavailable?
I tried "umount /bck" but it "hangs" indefinitely, and "umount -f /bck" tells me the mount is busy and I can't unmount it:
root@saturn:[~]$ umount -f /bck
umount2: Device or resource busy
umount: /bck: device is busy
umount2: Device or resource busy
umount: /bck: device is busy
This non-working NFS share is causing problems on the server, and I need to unmount it until the NFS server (a faulty NAS) is repaired.
Hi,
On 1/26/11 5:23 PM, Rudi Ahlers wrote:
Hi All,
How do I unmount an NFS share when the NFS server is unavailable?
I tried "umount /bck" but it "hangs" indefinitely, and "umount -f /bck" tells me the mount is busy and I can't unmount it:
Try:
umount -f -l /bck
HTH,
On Wed, Jan 26, 2011 at 10:32 AM, Edo ml2edwin@gmail.com wrote:
Hi,
On 1/26/11 5:23 PM, Rudi Ahlers wrote:
Hi All,
How do I unmount an NFS share when the NFS server is unavailable?
I tried "umount /bck" but it "hangs" indefinitely, and "umount -f /bck" tells me the mount is busy and I can't unmount it:
Try:
umount -f -l /bck
HTH,
--
Thanx, that worked :)
How does one mount an NFS share so as to avoid system timeouts when the remote NFS server is offline?
Rudi Ahlers ha scritto:
On Wed, Jan 26, 2011 at 10:32 AM, Edo ml2edwin@gmail.com wrote:
How does one mount an NFS share so as to avoid system timeouts when the remote NFS server is offline?
I would use a different approach: use autofs, then the share is mounted "on the fly" only when needed, and unmounted after a while of not using it anymore. Is this fine with your environment?
Regards Lorenzo
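For anyone trying Lorenzo's suggestion, a minimal autofs setup along those lines might look like the following; the hostname, export path, and timeout are placeholders, not values from this thread:

```
# /etc/auto.master -- direct-map entry (paths are examples)
/-      /etc/auto.direct    --timeout=60

# /etc/auto.direct -- mount /bck on demand from the NAS
/bck    -fstype=nfs,soft,intr    nas.example.com:/export/backups
```

With a 60-second timeout, the share is unmounted shortly after the last access, so a dead server only affects processes that actually touch /bck.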
On Wed, Jan 26, 2011 at 12:41 PM, Lorenzo Quatrini lorenzo.quatrini@gmail.com wrote:
Rudi Ahlers ha scritto:
On Wed, Jan 26, 2011 at 10:32 AM, Edo ml2edwin@gmail.com wrote:
How does one mount an NFS share so as to avoid system timeouts when the remote NFS server is offline?
I would use a different approach: use autofs, then the share is mounted "on the fly" only when needed, and unmounted after a while of not using it anymore. Is this fine with your environment?
That won't really work. The NFS clients run cPanel and we need a way for end-users to have full access to their backups all the time. We used to run backup over FTP, but then when a client wanted to restore data one of the techs first had to download it from the backup server and then let the client restore it. So I'm trying to cut down on unnecessary support tasks.
On Wed, 26 Jan 2011, Rudi Ahlers wrote:
That won't really work. The NFS clients run cPanel and we need a way for end-users to have full access to their backups all the time. We used to run backup over FTP, but then when a client wanted to restore data one of the techs first had to download it from the backup server and then let the client restore it. So I'm trying to cut down on unnecessary support tasks.
Double-check that autofs isn't what you want, as I suspect you're wrong in discounting it. With autofs, a user is free to access files in a currently unmounted NFS path, as autofs will mount it dynamically as required, and it generally copes a lot better with outages than static NFS mounts do.
jh
On 1/26/11 5:35 AM, Rudi Ahlers wrote:
On Wed, Jan 26, 2011 at 12:41 PM, Lorenzo Quatrini lorenzo.quatrini@gmail.com wrote:
Rudi Ahlers ha scritto:
On Wed, Jan 26, 2011 at 10:32 AM, Edo ml2edwin@gmail.com wrote:
How does one mount an NFS share so as to avoid system timeouts when the remote NFS server is offline?
I would use a different approach: use autofs, then the share is mounted "on the fly" only when needed, and unmounted after a while of not using it anymore. Is this fine with your environment?
That won't really work. The NFS clients run cPanel and we need a way for end-users to have full access to their backups all the time. We used to run backup over FTP, but then when a client wanted to restore data one of the techs first had to download it from the backup server and then let the client restore it. So I'm trying to cut down on unnecessary support tasks.
I don't see why the automounter wouldn't work for this, but you can mount with the soft,bg options to keep from hanging.
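As a sketch, the soft,bg variant could look like this in /etc/fstab; the server name and export path are placeholders:

```
# soft: give up and return an I/O error instead of retrying forever
# bg:   if the mount fails at boot, retry it in the background
nas.example.com:/export/backups  /bck  nfs  soft,bg,timeo=30,retrans=2  0 0
```

timeo is in tenths of a second; with retrans=2 the client gives up after a few retries instead of hanging indefinitely.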
On Wed, 26 Jan 2011, Les Mikesell wrote:
That won't really work. The NFS clients run cPanel and we need a way for end-users to have full access to their backups all the time. We used to run backup over FTP, but then when a client wanted to restore data one of the techs first had to download it from the backup server and then let the client restore it. So I'm trying to cut down on unnecessary support tasks.
I don't see why the automounter wouldn't work for this, but you can mount with the soft,bg options to keep from hanging.
You need to be completely sure that 100% of your apps know how to handle I/O errors before using soft mounts.
Errors in hard-mounted NFS filesystems will produce hanging applications, which are admittedly a pain, but the apps will stop issuing I/O calls until the filesystem returns. An app can never be fooled into thinking a write or read operation succeeded when it didn't.
Soft-mounted filesystems, however, return error codes that applications can (and most often do) ignore, resulting in all sorts of file corruption.
On Wed, Jan 26, 2011 at 4:10 PM, Paul Heinlein heinlein@madboa.com wrote:
On Wed, 26 Jan 2011, Les Mikesell wrote:
That won't really work. The NFS clients run cPanel and we need a way for end-users to have full access to their backups all the time. We used to run backup over FTP, but then when a client wanted to restore data one of the techs first had to download it from the backup server and then let the client restore it. So I'm trying to cut down on unnecessary support tasks.
I don't see why the automounter wouldn't work for this, but you can mount with the soft,bg options to keep from hanging.
You need to be completely sure that 100% of your apps know how to handle I/O errors before using soft mounts.
Errors in hard-mounted NFS filesystems will produce hanging applications, which are admittedly a pain, but the apps will stop issuing I/O calls until the filesystem returns. An app can never be fooled into thinking a write or read operation succeeded when it didn't.
Soft-mounted filesystems, however, return error codes that applications can (and most often do) ignore, resulting in all sorts of file corruption.
-- Paul Heinlein <> heinlein@madboa.com <> http://www.madboa.com/
The problem I'm getting is that the NFS mount is for backups only, so if it's offline then no backups can be made, which I can live with for the time being while it's being brought online again.
But the problem I sit with is that other regular operations on the local disk "hang", so to say, until I manually unmount the NFS mount. How do I get local operations to continue while the NFS mount is faulty but not yet unmounted?
Hi,
On Jan 26, 2011, at 5:50 PM, Rudi Ahlers Rudi@SoftDux.com wrote:
On Wed, Jan 26, 2011 at 10:32 AM, Edo ml2edwin@gmail.com wrote:
Hi,
On 1/26/11 5:23 PM, Rudi Ahlers wrote:
Hi All,
How do I unmount an NFS share when the NFS server is unavailable?
I tried "umount /bck" but it "hangs" indefinitely, and "umount -f /bck" tells me the mount is busy and I can't unmount it:
Try:
umount -f -l /bck
HTH,
--
Thanx, that worked :)
How does one mount an NFS share so as to avoid system timeouts when the remote NFS server is offline?
Mount? Or unmount?
If unmount, then, just create a simple script that will ping the server and then run the above command if it doesn’t respond.
HTH,
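The ping-then-unmount script suggested above could be sketched like this; the hostname, mountpoint, and timings are placeholders, and the probe is injectable so the logic can be exercised without a real NFS server:

```shell
#!/bin/sh
# Watchdog sketch: if the NFS server stops answering pings,
# lazy-unmount the share so local processes stop blocking on it.

# Return 0 if the host answers one ping within 2 seconds.
server_up() {
    ping -c 1 -W 2 "$1" >/dev/null 2>&1
}

# Lazy-unmount the share if the server is unreachable.
# $1 = NFS server, $2 = mountpoint, $3 = optional probe command.
maybe_detach() {
    host="$1"; mnt="$2"; probe="${3:-server_up}"
    if "$probe" "$host"; then
        echo "server $host reachable; leaving $mnt mounted"
    else
        echo "server $host down; running: umount -f -l $mnt"
        umount -f -l "$mnt" 2>/dev/null || true
    fi
}
```

Run it from cron every minute or so as root, e.g. `maybe_detach nas.example.com /bck`.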
on 10:23 Wed 26 Jan, Rudi Ahlers (Rudi@SoftDux.com) wrote:
Hi All,
How do I unmount an NFS share when the NFS server is unavailable?
I tried "umount /bck" but it "hangs" indefinitely, and "umount -f /bck" tells me the mount is busy and I can't unmount it:
root@saturn:[~]$ umount -f /bck
umount2: Device or resource busy
umount: /bck: device is busy
umount2: Device or resource busy
umount: /bck: device is busy
This non-working NFS share is causing problems on the server and I need to unmount it until such a time when the NFS server (faulty NAS) is repaired.
The specific solution is 'umount -fl <dir|device>'.
The general solution's a little stickier.
I'd suggest the automount route as well (you're only open to NFS issues while the filesystem is mounted), but you then have to maintain automount maps and run the risk of issues with the automounter (I've seen large production environments in which the OOM killer would arbitrarily select processes to kill ....).
Monitoring of client and server NFS processes helps. If it's the filer heads which are failing, and the need warrants it, look into HA failover options.
Soft mounts, as mentioned, won't hang processes, but may result in data loss. This is most critical in database operations (where atomicity is assumed and generally assured by the DBMS). If the issue is one of re-running a backup job, and you can get a clear failure, the risk is generally mitigated.
On 1/26/2011 4:55 PM, Dr. Ed Morbius wrote:
The specific solution is 'umount -fl <dir|device>'.
The general solution's a little stickier.
I'd suggest the automount route as well (you're only open to NFS issues while the filesystem is mounted), but you then have to maintain automount maps and run the risk of issues with the automounter (I've seen large production environments in which the OOM killer would arbitrarily select processes to kill ....).
Monitoring of client and server NFS processes helps. If it's the filer heads which are failing, and the need warrants it, look into HA failover options.
Soft mounts, as mentioned, won't hang processes, but may result in data loss. This is most critical in database operations (where atomicity is assumed and generally assured by the DBMS). If the issue is one of re-running a backup job, and you can get a clear failure, the risk is generally mitigated.
Actually, since the original question involved access to backups, I should have given my usual answer which is that backuppc is the thing to use for backups and it provides a web interface for restores (you pick the historical version you want and either tell it to put it back to the original host or you can download a tarball through the browser). Very nice for self-serve access. It does want to map complete hosts to owners that have permission to access them, but with a little work you can make different areas of a shared system look like separate hosts.
On Thu, Jan 27, 2011 at 1:26 AM, Les Mikesell lesmikesell@gmail.com wrote:
On 1/26/2011 4:55 PM, Dr. Ed Morbius wrote:
The specific solution is 'umount -fl <dir|device>'.
The general solution's a little stickier.
I'd suggest the automount route as well (you're only open to NFS issues while the filesystem is mounted), but you then have to maintain automount maps and run the risk of issues with the automounter (I've seen large production environments in which the OOM killer would arbitrarily select processes to kill ....).
Monitoring of client and server NFS processes helps. If it's the filer heads which are failing, and the need warrants it, look into HA failover options.
Soft mounts, as mentioned, won't hang processes, but may result in data loss. This is most critical in database operations (where atomicity is assumed and generally assured by the DBMS). If the issue is one of re-running a backup job, and you can get a clear failure, the risk is generally mitigated.
Actually, since the original question involved access to backups, I should have given my usual answer which is that backuppc is the thing to use for backups and it provides a web interface for restores (you pick the historical version you want and either tell it to put it back to the original host or you can download a tarball through the browser). Very nice for self-serve access. It does want to map complete hosts to owners that have permission to access them, but with a little work you can make different areas of a shared system look like separate hosts.
-- Les Mikesell lesmikesell@gmail.com
BackupPC doesn't integrate with cPanel.
On Thu, Jan 27, 2011 at 9:05 AM, John R Pierce pierce@hogranch.com wrote:
On 01/26/11 10:57 PM, Rudi Ahlers wrote:
BackupPC doesn't integrate with cPanel.
cpanel is pure crap.
And you are any better?
On Wed, 2011-01-26 at 23:05 -0800, John R Pierce wrote:
cpanel is pure crap.
It is a ghastly and frustrating nightmare. Command line, even for a Linux beginner like me, is far superior. It is amazing that people pay lots of money to use it.
Always Learning wrote:
On Wed, 2011-01-26 at 23:05 -0800, John R Pierce wrote:
cpanel is pure crap.
It is a ghastly and frustrating nightmare. Command line, even for a Linux beginner like me, is far superior. It is amazing that people pay lots of money to use it.
It may be crap, but a) I haven't seen any ISPs that offer shell access for the better part of a decade, at least, and b) consider the enTHUsiastic folks who build so many websites who have no clue about computers or security, and get the cooties if they were to see a command line.
*shrug* I live with it from my hosting provider. But then, I do everything on my own system (CentOS, of course), and hardly do more with cPanel than I would/could with Ye Olde Ftp.
mark
On Thu, 2011-01-27 at 10:05 -0500, m.roth@5-cent.us wrote:
On Wed, 2011-01-26 at 23:05 -0800, John R Pierce wrote:
cpanel is pure crap.
It may be crap, but a) I haven't seen any ISPs that offer shell access for the better part of a decade, at least, and b) consider the enTHUsiastic folks who build so many websites who have no clue about computers or security, and get the cooties if they were to see a command line.
*shrug* I live with it from my hosting provider. But then, I do everything on my own system (CentOS, of course), and hardly do more with cPanel than I would/could with Ye Olde Ftp.
I moved to VPSs and got root access and a choice. Top of the list was CentOS, so I chose it. I have been happy ever since. CentOS evokes cherished memories of 'real computing' in different countries. For me, M$ Windoze and cPanel are unpleasant memories. Perhaps they are suitable for those lacking good computer skills, but I really don't want that crap, especially at my non-young age. I want quality and a professional operating system. CentOS gives it to me.
On Thu, Jan 27, 2011 at 10:05:35AM -0500, m.roth@5-cent.us wrote:
It may be crap, but a) I haven't seen any ISPs that offer shell access for the better part of a decade, at least, and b) consider the enTHUsiastic
www.panix.com - Your $HOME away from home.
Of course many people who want shell access just get their own VMs now (eg linode, Panix v-colo).
Stephen Harris wrote:
On Thu, Jan 27, 2011 at 10:05:35AM -0500, m.roth@5-cent.us wrote:
It may be crap, but a) I haven't seen any ISPs that offer shell access for the better part of a decade, at least, and b) consider the enTHUsiastic
www.panix.com - Your $HOME away from home.
Of course many people who want shell access just get their own VMs now (eg linode, Panix v-colo).
*shrug*. I've got paid-up hosting with bluehost/hostmonster. It's cheap, I've had very few problems, and it's not like I've got a big, high traffic site.
mark "and I do everything on my own system, anyway"
On 1/27/11 12:57 AM, Rudi Ahlers wrote:
Actually, since the original question involved access to backups, I should have given my usual answer which is that backuppc is the thing to use for backups and it provides a web interface for restores (you pick the historical version you want and either tell it to put it back to the original host or you can download a tarball through the browser). Very nice for self-serve access. It does want to map complete hosts to owners that have permission to access them, but with a little work you can make different areas of a shared system look like separate hosts.
BackupPC doesn't integrate with cPanel.
Why does it have to integrate? It runs on a different machine. Can't you make a remote apache authenticate the same way as a cpanel user would to access its web interface?
On Thu, Jan 27, 2011 at 3:00 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 1/27/11 12:57 AM, Rudi Ahlers wrote:
Actually, since the original question involved access to backups, I should have given my usual answer which is that backuppc is the thing to use for backups and it provides a web interface for restores (you pick the historical version you want and either tell it to put it back to the original host or you can download a tarball through the browser). Very nice for self-serve access. It does want to map complete hosts to owners that have permission to access them, but with a little work you can make different areas of a shared system look like separate hosts.
BackupPC doesn't integrate with cPanel.
Why does it have to integrate? It runs on a different machine. Can't you make a remote apache authenticate the same way as a cpanel user would to access its web interface?
-- Les Mikesell
Sorry, I should have explained. cPanel is a web based control panel which allows end users to control every aspect of their domain (Web, stats, mail, files, databases, logs, DNS, etc) including backups.
It currently backs up everything over FTP, and works fairly well but when a user wants to restore a broken website one of our techs needs to download the backup from the FTP server, to the cPanel server and then restore it on the client's behalf.
Thus, mounting the NFS share basically added enough storage to the cPanel server to do the backups "locally", and then the users can restore the backups themselves by logging into cPanel, i.e. all the necessary security checks are performed automatically.
But if we use something like BackupPC, then each user will need to be created on the BackupPC server (which will be a nightmare), and he then has to download the backup to his own PC first (some sites are several GB, into the tens of GB), which then means the backup will take ages to restore.
With cPanel, everything happens on the server directly so it's very quick.
Rudi Ahlers wrote:
On Thu, Jan 27, 2011 at 3:00 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 1/27/11 12:57 AM, Rudi Ahlers wrote:
Actually, since the original question involved access to backups, I should have given my usual answer which is that backuppc is the thing
<snip>
It currently backs up everything over FTP, and works fairly well but when a user wants to restore a broken website one of our techs needs to download the backup from the FTP server, to the cPanel server and then restore it on the client's behalf.
Thus, mounting the NFS share basically added enough storage to the cPanel server to do the backups "locally", and then the users can restore the backups themselves by logging into cPanel, i.e. all the necessary security checks are performed automatically.
<snip> Well, I wouldn't be running FTP anyway, but may I offer an alternative? How 'bout either rsync or scp: keep the users' backups in their own directories, set up ssh keys, and then give them a canned script to run, so that a) they say, AUGH! Website bad! Gotta restore!, b) they go to cPanel, to the, what's it called, system maintenance page, where they're offered an icon that brings up a page allowing them to select one or more directories, or the whole site, and c) clicking a <restore> button rsyncs or sftp's it over, from the backup directory that's owned by them to their site, with no passwords needed?
mark "ftp bad, *so* 1980's/early '90s, when the 'Net was a better place"
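Mark's canned-script idea could be wired up roughly like this; the /backups layout and public_html target are assumptions for illustration, not actual cPanel paths:

```shell
# Hypothetical helper: build (but don't run) the rsync command a
# "restore" button could issue for a given user and site directory.
build_restore_cmd() {
    user="$1"; dir="$2"
    backup_root="/backups"                 # assumed per-user backup tree
    site_root="/home/$user/public_html"    # assumed docroot layout
    echo "rsync -a --delete $backup_root/$user/$dir/ $site_root/$dir/"
}

build_restore_cmd alice blog
# -> rsync -a --delete /backups/alice/blog/ /home/alice/public_html/blog/
```

With ssh keys in place, the generated command could run unattended from a cPanel hook, with no passwords needed.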
On 1/27/2011 7:30 AM, Rudi Ahlers wrote:
BackupPC doesn't integrate with cPanel.
Why does it have to integrate? It runs on a different machine. Can't you make a remote apache authenticate the same way as a cpanel user would to access its web interface?
Sorry, I should have explained. cPanel is a web based control panel which allows end users to control every aspect of their domain (Web, stats, mail, files, databases, logs, DNS, etc) including backups.
It currently backs up everything over FTP, and works fairly well but when a user wants to restore a broken website one of our techs needs to download the backup from the FTP server, to the cPanel server and then restore it on the client's behalf.
Thus, mounting the NFS share basically added enough storage to the cPanel server to do the backups "locally", and then the users can restore the backups themselves by logging into cPanel, i.e. all the necessary security checks are performed automatically.
If you are going this route, the obvious thing would be to make the automounter mount the user's copy into his own space when/if he accesses it and unmount the rest of the time.
But, If we use something like backupPC, then each user will need to be created on the BackupPC server (which will be a nightmare)
It's not that complicated. You only need an authentication method that would set apache's REMOTE_USER which probably already exists on the server and wouldn't be hard to copy elsewhere in whatever way it works now - or you can run the server locally with nfs-mounted storage.
and he then has to download the backup to his own PC first (some sites are several GB's, into the 10's of GB's), which then means the backup will take ages to restore.
No, downloading from the browser is an option, but the server can also put files back directly over the same transport that was used for the backup. The only issue that might be a problem would be controlling where each user could restore to. Typically each target host has an 'owner' and access to the web side is limited to the hosts you own - and you can map subdirectory targets to look like separate hosts. But when you restore, the commands run as the backuppc user which would typically have full root ssh access to the whole target host. There's probably some way to work around this - maybe using the ftp transport and controlling where the logins can go.
Anyway the big advantage of backuppc is that all identical files are pooled so you can keep a much longer history on line.
On Wed, 26 Jan 2011, Dr. Ed Morbius wrote:
I'd suggest the automount route as well (you're only open to NFS issues while the filesystem is mounted), but you then have to maintain automount maps and run the risk of issues with the automounter (I've seen large production environments in which the OOM killer would arbitrarily select processes to kill ....).
Once you're into an OOM state, you're screwed anyway. Is turning off overcommit a sane option these days or not?
jh
on 07:54 Thu 27 Jan, John Hodrien (J.H.Hodrien@leeds.ac.uk) wrote:
On Wed, 26 Jan 2011, Dr. Ed Morbius wrote:
I'd suggest the automount route as well (you're only open to NFS issues while the filesystem is mounted), but you then have to maintain automount maps and run the risk of issues with the automounter (I've seen large production environments in which the OOM killer would arbitrarily select processes to kill ....).
Once you're into an OOM state, you're screwed anyway. Is turning off overcommit a sane option these days or not?
Our suggested fix was to dramatically reduce overcommit, or disable it. I don't recall what was ultimately decided.
Frankly, bouncing the box would generally be better than letting it get in some weird wedge state (and was what we usually ended up doing in this instance anyway). Environment was a distributed batch-process server farm. Engineers were disciplined to either improve memory management or request host resources appropriately.
Now, if you were to run monit, out of init, and restart critical services as they failed, you might get around some of the borkage, but yeah, generally, what OOM is trying to tell you is that you're Doing It Wrong[tm].
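For reference, overcommit behaviour is controlled by two sysctls; a strict-accounting sketch (the values are illustrative, so measure before deploying):

```
# /etc/sysctl.conf
vm.overcommit_memory = 2    # 2 = strict accounting: refuse allocations past the commit limit
vm.overcommit_ratio  = 80   # commit limit = swap + 80% of physical RAM
```

With mode 2, malloc fails up front instead of the OOM killer picking victims later, which trades occasional allocation failures for predictability.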