On Tue, July 22, 2008 18:39, MJT wrote:

> Ok, I don't have the original post in my email so I am replying via a
> reply, cutting and pasting from the archives list web page.

Thank you!

>> Looks like just starting the nfs service turns on V2, 3, and 4 (based on
>> reading the script, reading the man pages, and looking at the ports using
>> netstat -l).
>
> That behavior is set in the /etc/sysconfig/nfs file

Which is empty by default in CentOS 4.6. In fact, nonexistent.

>> I don't believe this is a firewall issue; internal IPs are fully open to
>> each other according to an early rule in iptables.
>
> It may not be a firewall issue, but NFS does use a different port, port
> "2049".

Yes, I know that (in fact, it conflicted with a local use too, so I'm
running it on 22049 currently; that worked by setting RPCNFSDARGS in
/etc/sysconfig/nfs to include -p 22049).

> You got yourself a configuration issue! So, this is what I did:

Gosh, really? :-)

> On the server, in /etc/sysconfig/nfs be sure you set: SECURE_NFS="no"
> until you are ready to take on kerberos authentication. While you are
> there you can change which versions of NFS get mounted. I haven't had to
> change anything else in that file.

I don't believe SECURE_NFS does anything; at least, it's not mentioned
anywhere in /etc/init.d/nfs, and it's not in the nfsd man page.

> Next, on both the server and client, go into /etc/idmapd.conf and be sure
> to set your "Domain =" to your domain name, and also set:
>
> Nobody-User = nobody
> Nobody-Group = nobody

Those are already set in the CentOS 4.6 idmapd.conf file. Domain is set to
"localdomain", though. Does it work if everybody agrees, or does it have
to be "right" in some broader sense? I don't know what it uses this for.
It's complicated here by the fact that internally our DNS likes to use
"example.local" instead of "example.com" (I'm obfuscating the name of my
employer). So I guess Domain should probably be "example.local", since
"host.example.local" is what you look up to get the right internal IP for
all our hosts?

> Now for the /etc/exports file.
>
> Let's say you keep everything in a /export directory. In there you have a
> home/ and a data/ directory... Well, the exports file should look
> something like:
>
> /export      192.168.0.*(rw,fsid=0,no_subtree_check,insecure,sync)
> /export/home 192.168.0.*(rw,no_subtree_check,insecure,sync)
> /export/data 192.168.0.*(ro,no_subtree_check,insecure,sync)
>
> Notice that the flags are different. Note the fsid=0 flag? Well, that
> defines /export as the "root" NFS directory, so you do not need to
> include "/export" in the fstab or the mount string when mounting. There
> can be more than one fsid flag as long as the numbers are unique, but
> only fsid=0 sets the root directory. Other numbers allow different
> kerberos setups, or so I understand.

I'd read about fsid=0, but hadn't gotten from what I read that it hides
that level. Thanks!

I'm getting the impression that /etc/exports is used by NFS V4 and earlier
versions, and in conflicting ways. Is that true? Or are there at least
semi-clever ways to make one that works for everything?

> Remember to restart NFS on the server!

This is one place where command history has been very handy for me.

> Now to finish with the client, be sure you did the /etc/idmapd.conf on
> the client or you will get all sorts of strange results!
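For what it's worth, here is roughly what the relevant server-side pieces
look like on host01 at the moment; the non-standard port, the paths, and
the Domain value are from my setup as described above, so treat this as a
sketch of my situation rather than a known-good configuration:

  /etc/sysconfig/nfs:
      RPCNFSDARGS="-p 22049"

  /etc/idmapd.conf (Domain is my guess at the "right" value):
      [General]
      Domain = example.local
      [Mapping]
      Nobody-User = nobody
      Nobody-Group = nobody

  /etc/exports:
      /home host00(rw,no_subtree_check,sync,fsid=0)

  followed by:  service nfs restart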
> Edit the fstab file.
>
> If you want to mount just /export on the server to /mnt/nfs on the
> client, the fstab entry would look like:
>
> server.dom:/  /mnt/nfs  nfs4  rw,soft,intr,proto=tcp,port=2049  0 0

Never would have occurred to me to specify the default port there! But
since I'm using a non-default port, I have the port= parameter in place.

> Notice there is NO /export. That is because of the fsid=0 flag. If you
> included the /export it would deny the mount.
>
> To mount the two directories:
>
> server.dom:/home  /home      nfs4  rw,soft,intr,proto=tcp,port=2049  0 0
> server.dom:/data  /mnt/data  nfs4  rw,soft,intr,proto=tcp,port=2049  0 0
>
> Again, no /export.

Well, I definitely understand a couple of things better than when we
started. Thank you very much!

It is not, however, working. Is that likely to be the "Domain =" setting,
given what I said above? I'll try constructing a "standard" /export and
set things up more exactly that way and see if anything changes. But the
errors I'm getting tend to be like:

[ddb@host00 ~]$ sudo mount host01:/ddb /mnt/ddb -t nfs4 -o rw,hard,intr,proto=tcp,port=22049
mount: mount to NFS server 'host01' failed: System Error: Connection refused.

Hmm; I'm currently exporting

/home host00(rw,no_subtree_check,sync,fsid=0)

and /home/ddb is under that. Do I have to separately export /home/ddb,
given that it's really there and not a link? (My main application is
exporting users' home directories to be shared among all the Linux boxes,
so setting up an extra /export hierarchy didn't seem to gain me anything.)

-- 
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
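P.S. For anyone following along, here are the checks I'm planning to run
next; this is guesswork about where the "Connection refused" comes from
(i.e. whether nfsd on host01 is really listening on the non-standard
port), not a known fix:

  # On host01: is anything listening on the new nfsd port?
  netstat -lnt | grep 22049

  # From host00: what RPC services does host01 still advertise?
  rpcinfo -p host01

  # Then retry the mount with the same options as before.
  sudo mount -t nfs4 -o rw,hard,intr,proto=tcp,port=22049 host01:/ddb /mnt/ddb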