On Tuesday, May 19, 2020 1:36:03 AM CEST Warren Young wrote:
> On May 18, 2020, at 5:13 AM, hw <hw at gc-24.de> wrote:
> > Is there a better alternative for mounting remote file systems over
> > unreliable connections?
>
> I don't have a good answer for you, because if you'd asked me without all
> this backstory whether NFS or SSHFS is more tolerant of bad connections,
> I'd have told you SSHFS.

That's what I thought. Should I file a bug report? Sshfs is clearly
intended to reconnect automatically when mounted like that (a sketch of
such a mount is at the end of this message), and it doesn't do that.

> NFS comes out of the "Unix lab" world, where all of the computers are
> hard-wired to nearby servers. It gets really annoyed when packet loss
> starts happening, and since it's down in the kernel, that can mean the
> whole box locks up until NFS gets happy again.

It's intended to do that, which is fine. Sshfs is intended to do the
same, and both are supposed to reconnect when the connection is back.
So far, sshfs has failed to do that to the extent that it is unusable,
while NFS with autofs hasn't caused any issues yet; the testing
continues. NFS is also a lot faster, even though I used compression
with sshfs. (A sketch of the autofs setup is at the end of this message
as well.)

> NFS is that way on purpose: it's often used to provide critical file
> service (e.g. root-on-NFS) so if file I/O stops happening it *must*
> block and wait out the failure, else all I/O dependent on NFS starts
> failing.
>
> Some of this affects SSHFS as well. To some extent, the solution to
> the broader problem is "Dropbox" et al. That is, a solution that was
> designed around the idea that connectivity might not be constant.

Well, I need the file system accessible like a file system, not a
workflow that involves storing files somewhere else, downloading them
somewhere else again, or manually syncing some files between servers
and clients once in a while. How am I supposed to work remotely when I
don't have access to the files involved?

> This is also why DVCSes like Git have become popular.

Are you sure that's the reason?
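
P.S. To make the sshfs point concrete, this is roughly the kind of
mount I mean. It is a sketch, not my exact command: the user, host,
paths, and interval values are placeholders, but reconnect,
ServerAliveInterval, and ServerAliveCountMax are the documented options
for getting automatic reconnection:

  # remount automatically; let ssh detect a dead peer after ~45s
  sshfs -o reconnect \
        -o ServerAliveInterval=15 \
        -o ServerAliveCountMax=3 \
        -C user@server:/export/data /mnt/data

With the ServerAlive options set, ssh gives up on a dead connection
after interval times count seconds, and with reconnect, sshfs is
supposed to re-establish the session instead of hanging forever. That
is exactly what I'm not seeing in practice.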
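
For comparison, a minimal sketch of an autofs-managed NFS mount of the
kind I'm testing; again, the server name, paths, and timeout are
placeholders, not my actual configuration:

  # /etc/auto.master
  /mnt/remote  /etc/auto.remote  --timeout=60

  # /etc/auto.remote
  data  -fstype=nfs4  server:/export/data

autofs mounts server:/export/data on /mnt/remote/data on first access
and unmounts it again after 60 idle seconds, so a dead link only hurts
while the share is actually in use.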