Hey guys,
My NFS server has been working really well for a long time now. Both client and server run CentOS 7.2.
However, when I recently had to remount one of my home directories on an NFS client, running mount -a started failing with this error:
mount.nfs: an incorrect mount option was specified
This is the corresponding line I have in my fstab file on the client:
nfs1.example.com:/var/nfs/home /home nfs rw 0 0
I get the same error if I try to run the mount command explicitly:
mount -t nfs nfs1.example.com:/var/nfs/home /home
mount.nfs: an incorrect mount option was specified
This is the verbose output of that same command:
mount -vvv -t nfs nfs1.example.com:/var/nfs/home /home
mount.nfs: timeout set for Sun Oct 2 23:17:03 2016
mount.nfs: trying text-based options 'vers=4,addr=162.xx.xx.xx.xx,clientaddr=107.xxx.xx.xx'
mount.nfs: mount(2): Invalid argument
mount.nfs: an incorrect mount option was specified
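One thing I plan to try next, since the failure happens during the vers=4 attempt: pinning the NFS protocol version on the mount, to see whether v4 negotiation is the real problem (just a diagnostic test on my hostnames/paths, not a fix):

```shell
# Force NFSv3 to see whether the v4 negotiation is what's failing:
mount -t nfs -o nfsvers=3 nfs1.example.com:/var/nfs/home /home

# Or explicitly request v4 instead of letting it auto-negotiate:
mount -t nfs -o nfsvers=4 nfs1.example.com:/var/nfs/home /home
```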
This is the entry I have in my /etc/exports file on the NFS server:
/var/nfs/home web2.jokefire.com(rw,sync,no_root_squash,no_all_squash)
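For what it's worth, whenever I change /etc/exports I reload the export table like this on the NFS server (so stale exports shouldn't be the issue):

```shell
# Re-read /etc/exports and apply any changes without restarting nfs-server
exportfs -ra

# Show what is actually being exported, with effective options
exportfs -v
```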
I get the same error whether the firewall is up or down (I only bring it down for brief moments, purely for testing).
With the firewall down (again, only briefly), this is what showmount -e returns:
[root@web2:~] #showmount -e nfs1.example.com
Export list for nfs1.example.com:
/var/nfs/varnish varnish1.example.com
/var/nfs/es es3.example.com,es2.example.com,logs.example.com
/var/nfs/www web2.example.com,puppet.example.com,ops3.example.com, ops2.example.com,web1.example.com
/var/nfs/home ansible.example.com,chef.example.com,logs3.example.com, logs2.example.com,logs1.example.com,ops.example.com,lb1.example.com, ldap1.example.com,web2.example.com,web1.lyricgem.com,nginx1.example.com, salt.example.com,puppet.example.com,nfs1.example.com,db4.example.com, db3.example.com,db2.example.com,db1.example.com,varnish2.example.com, varnish1.example.com,es3.example.com,es2.example.com,es1.example.com, repo.example.com,ops3.example.com,ops2.example.com,solr1.example.com, time1.example.com,mcollective.example.com,logs.example.com, hadoop04.example.com,hadoop03.example.com,hadoop02.example.com, hadoop01.example.com,monitor3.example.com,monitor2.example.com, monitor1.example.com,web1.example.com,activemq1.example.com
With the firewall up on the NFS server (as it is at all times apart from that short test), I get this instead:
showmount -e nfs1.example.com
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
This is a list of ports I have open on the NFS server:
[root@nfs1:~] #firewall-cmd --list-all
public (default, active)
interfaces: eth0
sources:
services: dhcpv6-client ssh
ports: 2719/tcp 9102/tcp 52926/tcp 111/tcp 25/tcp 875/tcp 54302/tcp 46666/tcp 20048/tcp 2692/tcp 55982/tcp 2049/tcp 17123/tcp 42955/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="xx.xx.xx.x/32" port port="5666" protocol="tcp" accept
So I have two problems I need to solve:
1) How do I open the firewall ports on the NFS server (which runs firewalld) so that clients can reach it?
2) Why am I getting the error saying that "an incorrect mount option was specified"?
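For the firewall question, my best guess is to use firewalld's predefined service definitions rather than raw port numbers, something like the following on the NFS server. Does this look like the right approach?

```shell
# firewalld ships service definitions covering the NFS daemons,
# which is cleaner than maintaining a list of numbered ports:
firewall-cmd --permanent --add-service=nfs       # nfsd, TCP 2049
firewall-cmd --permanent --add-service=rpc-bind  # portmapper, port 111
firewall-cmd --permanent --add-service=mountd    # mountd, port 20048
firewall-cmd --reload

# Verify the services are active in the zone:
firewall-cmd --list-services
```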
Thanks,
Tim
On Sun, Oct 02, 2016 at 11:42:41PM -0400, Tim Dunphy wrote:
This is the corresponding line I have in my fstab file on the client:
nfs1.example.com:/var/nfs/home /home nfs rw 0 0
I get the same error if I try to run the mount command explicitly:
IIRC, for mount.nfs, the "r" option is read only while the "w" option is read+write. They may be mutually exclusive.
jl
_______________________________________________
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
On 2016-10-03, Jon LaBadie <jcu@labadie.us> wrote:
IIRC, for mount.nfs, the "r" option is read only while the "w" option is read+write. They may be mutually exclusive.
I don't believe this is accurate. ro and rw are mutually exclusive, but there is no "w" option. (Which doesn't help the OP, unfortunately, but at least he knows.)
For the OP, you should check the system logs on both the client and the server. There may be clues as to what the real error is; sometimes mount.nfs reports a misleading error when the actual problem is somewhere else.
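For example, something along these lines (service/unit names vary a bit between setups; these should match the CentOS 7 defaults):

```shell
# On the client, right after a failed mount attempt:
dmesg | tail -n 20              # kernel-side NFS errors
tail -n 50 /var/log/messages    # syslog, CentOS default location

# On the server, check the NFS-related services:
journalctl -u nfs-server -u rpcbind --since "10 min ago"
```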
--keith
On Mon, Oct 03, 2016 at 12:27:10PM -0700, Keith Keller wrote:
On 2016-10-03, Jon LaBadie <jcu@labadie.us> wrote:
IIRC, for mount.nfs, the "r" option is read only while the "w" option is read+write. They may be mutually exclusive.
I don't believe this is accurate. ro and rw are mutually exclusive, but there is no "w" option. (Which doesn't help the OP, unfortunately, but at least he knows.)
Ahh, just checked. I was remembering the mount.nfs options, not fstab. Thanks for the correction.