Looks like just starting the nfs service turns on V2, 3, and 4 (based on reading the script, reading the man pages, and looking at the ports using netstat -l).
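Another way to double-check which versions the server is offering, for anyone reproducing this (rpcinfo queries the server's portmapper; the output here is illustrative and trimmed to the nfs lines):

$ rpcinfo -p host03 | grep nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   tcp   2049  nfs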
However, mounting with -t nfs works, while -t nfs4 fails.
I don't believe this is a firewall issue, internal IPs are fully open to each other according to an early rule in iptables.
$ sudo mount host03:/home/ddb /mnt/ddb -t nfs4 -o rw,hard,intr
mount: permission denied
but
$ sudo mount host03:/home/ddb /mnt/ddb -t nfs -o rw,hard,intr
$
I'm not sure I especially care about NFS V4 (this is to share the programmers' home directories, so it's easy to work on any of the production systems; it won't be particularly high-load or any particularly strange usage pattern), but I care about understanding things at least.
Both systems are running CentOS 4.6.
Any ideas?
David Dyer-Bennet wrote:
I'm not sure I especially care about NFS V4 (this is to share the programmers' home directories, so it's easy to work on any of the production systems; it won't be particularly high-load or any particularly strange usage pattern), but I care about understanding things at least.
Both systems are running CentOS 4.6.
Any ideas?
Try it entirely as root instead of with sudo? Also see this; it may have useful info: http://www.citi.umich.edu/projects/nfsv4/linux/using-nfsv4.html
I wouldn't bother with NFSv4 myself; it looks too complicated.
nate
Ok, I don't have the original post in my email, so I am replying by cutting and pasting from the archives list web page.
Looks like just starting the nfs service turns on V2, 3, and 4 (based on
reading the script, reading the man pages, and looking at the ports using netstat -l).
That behavior is set in the /etc/sysconfig/nfs file
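For example, something like the following (a sketch only; the exact variable names differ between releases, so check what your init scripts actually read):

MOUNTD_NFS_V2="no"
RPCNFSDARGS="-N 2"

The first keeps rpc.mountd from advertising v2, and -N 2 (i.e. --no-nfs-version 2) tells rpc.nfsd not to serve it.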
I don't believe this is a firewall issue, internal IPs are fully open to
each other according to an early rule in iptables.
It may not be a firewall issue, but NFS does use a different port: 2049.
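You can confirm the server is listening there with something like this (output abbreviated):

$ netstat -ln | grep 2049
tcp        0      0 0.0.0.0:2049        0.0.0.0:*           LISTEN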
You got yourself a configuration issue! So, this is what I did:
On the server, in /etc/sysconfig/nfs, be sure you set SECURE_NFS="no" until you are ready to take on Kerberos authentication. While you are there you can change which versions of NFS get mounted. I haven't had to change anything else in that file.
Next, on both the server and client, go into /etc/idmapd.conf and be sure to set "Domain =" to your domain name, and also set:
Nobody-User = nobody
Nobody-Group = nobody
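Putting those together, the relevant parts of /etc/idmapd.conf end up looking something like this (example.com is a placeholder for your own domain):

[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody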
Now for the /etc/exports file
Let's say you keep everything in a /export directory. In there you have a home/ and a data/ directory. Well, the exports file should look something like:
/export       192.168.0.*(rw,fsid=0,no_subtree_check,insecure,sync)
/export/home  192.168.0.*(rw,no_subtree_check,insecure,sync)
/export/data  192.168.0.*(ro,no_subtree_check,insecure,sync)
Notice that the flags are different. Note the fsid=0 flag? Well, that defines /export as the "root" NFS directory, so you do not need to include "/export" in the fstab or the mount string when mounting. There can be more than one fsid flag as long as the numbers are unique, but only fsid=0 sets the root directory. Other numbers allow different Kerberos setups, or so I understand.
Remember to restart NFS on the server!
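That is:

# service nfs restart

or, if you only touched /etc/exports, re-exporting is enough:

# exportfs -ra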
Now to finish with the client, be sure you did the /etc/idmapd.conf on the client or you will get all sorts of strange results!
Edit the fstab file
If you want to mount just /export on the server to /mnt/nfs on the client the fstab entry would look like:
server.dom:/ /mnt/nfs nfs4 rw,soft,intr,proto=tcp,port=2049 0 0
Notice there is NO /export. That is because of the fsid=0 flag. If you included /export, the mount would be denied.
To mount the two directories:
server.dom:/home /home nfs4 rw,soft,intr,proto=tcp,port=2049 0 0
server.dom:/data /mnt/data nfs4 rw,soft,intr,proto=tcp,port=2049 0 0
Again, no /export.
On Tue, July 22, 2008 18:39, MJT wrote:
Ok, I don't have the original post in my email, so I am replying by cutting and pasting from the archives list web page.
Thank you!
Looks like just starting the nfs service turns on V2, 3, and 4 (based on
reading the script, reading the man pages, and looking at the ports using netstat -l).
That behavior is set in the /etc/sysconfig/nfs file
Which is empty by default in CentOS 4.6. In fact, nonexistent.
I don't believe this is a firewall issue, internal IPs are fully open to
each other according to an early rule in iptables.
It may not be a firewall issue, but NFS does use a different port: 2049.
Yes, I know that (in fact, it conflicted with a local use, too, so I'm running it on 22049 currently; that worked by setting RPCNFSDARGS in /etc/sysconfig/nfs to include -p 22049).
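Concretely, the sysconfig file I created contains just:

RPCNFSDARGS="-p 22049"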
You got yourself a configuration issue! So, this is what I did:
Gosh, really? :-)
On the server, in /etc/sysconfig/nfs, be sure you set SECURE_NFS="no" until you are ready to take on Kerberos authentication. While you are there you can change which versions of NFS get mounted. I haven't had to change anything else in that file.
I don't believe SECURE_NFS does anything; at least, it's not mentioned in /etc/init.d/nfs anywhere, and it's not in the nfsd man page.
Next, on both the server and client, go into /etc/idmapd.conf and be sure to set "Domain =" to your domain name, and also set:
Nobody-User = nobody
Nobody-Group = nobody
Those are already set in the CentOS 4.6 idmapd.conf file.
Domain is set to "localdomain", though. Does it work if everybody agrees, or does it have to be "right" in some broader sense? I don't know what it uses this for.
It's complicated here by the fact that internally our DNS likes to use "example.local" instead of "example.com" (I'm obfuscating the name of my employer). So I guess domain should probably be "example.local", since "host.example.local" is what you look up to get the right internal IP for all our hosts?
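So presumably both ends would get something like this in the [General] section of idmapd.conf (example.local being the obfuscated internal name, as above):

Domain = example.local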
Now for the /etc/exports file
Let's say you keep everything in a /export directory. In there you have a home/ and a data/ directory. Well, the exports file should look something like:
/export       192.168.0.*(rw,fsid=0,no_subtree_check,insecure,sync)
/export/home  192.168.0.*(rw,no_subtree_check,insecure,sync)
/export/data  192.168.0.*(ro,no_subtree_check,insecure,sync)
Notice that the flags are different. Note the fsid=0 flag? Well, that defines /export as the "root" NFS directory, so you do not need to include "/export" in the fstab or the mount string when mounting. There can be more than one fsid flag as long as the numbers are unique, but only fsid=0 sets the root directory. Other numbers allow different Kerberos setups, or so I understand.
I'd read about fsid=0, but hadn't picked up from what I read that it hides that level. Thanks!
I'm getting the impression that /etc/exports is used by NFS V4 and earlier versions, and in conflicting ways. Is that true? Or are there at least semi-clever ways to make one that works for everything?
Remember to restart NFS on the server!
This is one place where command history has been very handy for me.
Now to finish with the client, be sure you did the /etc/idmapd.conf on the client or you will get all sorts of strange results!
Edit the fstab file
If you want to mount just /export on the server to /mnt/nfs on the client the fstab entry would look like:
server.dom:/ /mnt/nfs nfs4 rw,soft,intr,proto=tcp,port=2049 0 0
Never would have occurred to me to specify the default port there! But since I'm using a non-default port, I have the port= parameter in place.
Notice there is NO /export. That is because of the fsid=0 flag. If you included /export, the mount would be denied.
To mount the two directories:
server.dom:/home /home nfs4 rw,soft,intr,proto=tcp,port=2049 0 0
server.dom:/data /mnt/data nfs4 rw,soft,intr,proto=tcp,port=2049 0 0
Again, no /export.
Well, I definitely understand a couple of things better than when we started. Thank you very much!
It is not, however, working. Is that likely to be the "domain=" setting, given what I said above?
I'll try constructing a "standard" /export and set things up more exactly that way and see if anything changes.
But the errors I'm getting tend to be like:
[ddb@host00 ~]$ sudo mount host01:/ddb /mnt/ddb -t nfs4 -o rw,hard,intr,proto=tcp,port=22049
mount: mount to NFS server 'host01' failed: System Error: Connection refused.
Hmm; I'm currently exporting
/home host00(rw,no_subtree_check,sync,fsid=0)
and /home/ddb is under that. Do I have to separately export /home/ddb, given that it's really there and not a link? (My main application is exporting users' home directories to be shared among all the Linux boxes, so setting up an extra hierarchy /export didn't seem to gain me anything.)
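For what it's worth, exportfs on the server shows what's actually being exported; the output looks roughly like this (exportfs fills in defaults, so the exact option list may vary):

$ sudo exportfs -v
/home           host00(rw,sync,no_subtree_check,fsid=0)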
Well, I definitely understand a couple of things better than when we started. Thank you very much!
It is not, however, working. Is that likely to be the "domain=" setting, given what I said above?
The "domain" in NFSv4-speak has nothing to do with DNS. It _can_ be you DNS-domainname but it can be anything as long as client and server agrees. If they disagree you can still mount, but all files will be owned by Nobody-User and Nobody-Group if I remember correctly.
I'll try constructing a "standard" /export and set things up more exactly that way and see if anything changes.
But the errors I'm getting tend to be like:
[ddb@host00 ~]$ sudo mount host01:/ddb /mnt/ddb -t nfs4 -o rw,hard,intr,proto=tcp,port=22049
mount: mount to NFS server 'host01' failed: System Error: Connection refused.
Shields up, Scotty!
Looks like a firewall issue to me. Do you allow incoming traffic to port 22049/TCP?
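A quick way to test that from the client, independent of NFS itself:

$ telnet host01 22049

If that is refused too, nothing is reachable on the port (no listener, or a firewall in the way); if it connects, the problem is further up the stack.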
Can you mount over NFSv3?
/jens
On Wed, July 23, 2008 14:03, Jens Larsson wrote:
Well, I definitely understand a couple of things better than when we started. Thank you very much!
It is not, however, working. Is that likely to be the "domain=" setting, given what I said above?
The "domain" in NFSv4-speak has nothing to do with DNS. It _can_ be you DNS-domainname but it can be anything as long as client and server agrees. If they disagree you can still mount, but all files will be owned by Nobody-User and Nobody-Group if I remember correctly.
Thanks. Then that's not the problem. (And the CentOS RPMs have it set a way that will work, which is good.)
But the errors I'm getting tend to be like:
[ddb@host00 ~]$ sudo mount host01:/ddb /mnt/ddb -t nfs4 -o rw,hard,intr,proto=tcp,port=22049
mount: mount to NFS server 'host01' failed: System Error: Connection refused.
Shields up, Scotty!
Looks like a firewall issue to me. Do you allow incoming traffic to port 22049/TCP?
As I said in the message you're responding to, all connections from internal IPs are allowed.
Can you mount over NFSv3?
Yes. And I said that in the message you're responding to also.
On Wednesday 23 July 2008 9:55:57 am David Dyer-Bennet wrote:
change which versions of NFS get mounted. I haven't had to change anything else in that file.
I don't believe SECURE_NFS does anything; at least, it's not mentioned in /etc/init.d/nfs anywhere, and it's not in the nfsd man page.
It is in the /etc/sysconfig/nfs file, so it does not necessarily need to be in /etc/init.d/nfs. It is supposed to handle authentication, and you are having authentication problems, right?
I do not have your version of CentOS running, but while "SECURE_NFS" is not listed in /etc/init.d/nfs, it IS in /etc/init.d/rpcgssd and /etc/init.d/rpcsvcgssd in CentOS 5.2. I'm betting that it is somewhat the same on your system.
From Red Hat documentation:
https://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/en-US/Reference...
"pc.svcgssd — This process is used by the NFS server to perform user authentication and is started only when SECURE_NFS=yes is set in the /etc/sysconfig/nfs file."
"rpc.gssd — This process is used by the NFS server to perform user authentication and is started only when SECURE_NFS=yes is set in the /etc/sysconfig/nfs file."
Notice that both of these talk about authentication, which is the problem you are having, right?
On Wed, July 23, 2008 14:17, MJT wrote:
On Wednesday 23 July 2008 9:55:57 am David Dyer-Bennet wrote:
change which versions of NFS get mounted. I haven't had to change anything else in that file.
I don't believe SECURE_NFS does anything; at least, it's not mentioned in /etc/init.d/nfs anywhere, and it's not in the nfsd man page.
It is in the /etc/sysconfig/nfs file, so it does not necessarily need to be in /etc/init.d/nfs. It is supposed to handle authentication, and you are having authentication problems, right?
In the CentOS RPM, there is no /etc/sysconfig/nfs file (though the init.d/nfs script checks for one and reads it if present).
By "no reference in the init script", I'm pointing out that nothing would be different if that variable were set that I could find. The init script doesn't do anything based on it, and it's not mentioned in the nfsd man page as being used there either.
I do not have your version of CentOS running, but while "SECURE_NFS" is not listed in /etc/init.d/nfs, it IS in /etc/init.d/rpcgssd and /etc/init.d/rpcsvcgssd in CentOS 5.2. I'm betting that it is somewhat the same on your system.
From Red Hat documentation:
https://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/en-US/Reference...
"pc.svcgssd â This process is used by the NFS server to perform user authentication and is started only when SECURE_NFS=yes is set in the /etc/sysconfig/nfs file."
"rpc.gssd â This process is used by the NFS server to perform user authentication and is started only when SECURE_NFS=yes is set in the /etc/sysconfig/nfs file."
Notice that both of these talk about authentication, which is the problem you are having, right?
Yes, and they say that it doesn't happen unless that variable is set in the sysconfig file. That variable is not set in the sysconfig file.
I've gotten several useful pointers about the exact export syntax. I'll add the explicit SECURE_NFS=no to the sysconfig file rather than depending on the default, test the export syntax, and try whatever else I've forgotten (I've got notes, and I saved the email). Tomorrow, it looks like, though; a meeting is about to burn the rest of my day.
Thanks again for your suggestions!