Hi,
I have a server running under CentOS 5.8, and I appear to be in a situation in which the NFS file server is not recognizing the available space on a particular disk (actually a hardware RAID-6 array of 13 2 TB disks). If I try to write to the disk I get the following error message:
[root@nas-0-1 mseas-data-0-1]# touch dum
touch: cannot touch `dum': No space left on device
However, if I check the available space, there seems to be plenty
[root@nas-0-1 mseas-data-0-1]# df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1              21T   20T  784G  97% /mseas-data-0-1
[root@nas-0-1 mseas-data-0-1]# df -i .
Filesystem            Inodes   IUsed      IFree IUse% Mounted on
/dev/sdb1         3290047552 4391552 3285656000    1% /mseas-data-0-1
I don't know if the following is relevant but the disk in question is served as one of 3 bricks in a gluster namespace.
Based on the test with touch, which is happening directly at the NFS level, this seems to be an NFS rather than gluster issue. I couldn't find any file in /var/log which had a time that corresponded to the failed touch test and I didn't see anything in dmesg. We have tried rebooting this system. What else should we look at and/or try to resolve or debug this issue?
Thanks.
Pat
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley@mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301
On Tue, Feb 4, 2014 at 1:03 PM, Pat Haley phaley@mit.edu wrote:
Maybe you're hitting the allocation of reserved blocks for root? With your disk usage of 97% I'd think that could be the case.
You didn't say what file system you're using for that 21TB array, so we (this list) won't be of too much help without knowing that.
tune2fs [0] is your friend:
- use it to determine if there are reserved blocks
- use it to adjust the settings
[0] https://wiki.archlinux.org/index.php/ext4#Remove_reserved_blocks
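A quick sketch of that check (device name taken from the thread; note these commands apply to ext3/ext4 only, since tune2fs cannot manage XFS):

```shell
# Inspect and clear the root reservation on an ext3/ext4 filesystem:
#   tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
#   tune2fs -m 0 /dev/sdb1    # hand the reserved blocks back to ordinary users
#
# Why the theory fits the numbers: the default root reservation is 5% of
# the filesystem, and 5% of a ~21 TB array is larger than the 784 GB
# that df reports as available.
total_gb=21504                        # ~21 TB expressed in GB (approximate)
reserved_gb=$(( total_gb * 5 / 100 ))
echo "$reserved_gb"                   # 1075 -- more than the 784 GB free
```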
If you have a non-root shell account on that box, can you write to that array from the NFS host? ( Take NFS out of the equation. )
Hi:
On Tue, Feb 4, 2014 at 1:03 PM, Pat Haley phaley@mit.edu wrote:
Maybe you're hitting the allocation of reserved blocks for root? With your disk usage of 97% I'd think that could be the case.
You didn't say what file system you're using for that 21TB array, so we (this list) won't be of too much help without knowing that.
It's an XFS file system. The fstab line for this array is:
/dev/sdb1 /mseas-data-0-1 xfs defaults 1 0
tune2fs [0] is your friend
If I read the man pages correctly, tune2fs will not work for XFS. From xfs_info I get the following for mseas-data-0-1:
[root@nas-0-1 mseas-data-0-1]# xfs_info .
meta-data=/dev/sdb1              isize=256    agcount=32, agsize=167846667 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=5371093344, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
Unfortunately, I don't know how to interpret this, or whether it gives any information relevant to the question at hand.
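One cross-check that can be read off that output: the data section's total size is blocks times bsize, which should be in the same ballpark as what df reports (free-space fragmentation can be inspected with `xfs_db -r -c 'freesp -s' /dev/sdb1`, run against the unmounted or read-only device). A small sketch with the numbers copied from the xfs_info output above:

```shell
# data size = blocks * bsize, from the xfs_info output above
bytes=$(( 5371093344 * 4096 ))
echo $(( bytes / 1024 / 1024 / 1024 / 1024 ))   # filesystem data size in TiB
```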
If you have a non-root shell account on that box, can you write to that array from the NFS host? ( Take NFS out of the equation. )
Unfortunately we only have a root account on that box.
Thanks.
Pat
change mount to nobarrier,inode64,delaylog
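A sketch of what that change looks like (device and mount point taken from earlier in the thread; delaylog may not be recognized by older kernels):

```shell
# One-off, without a reboot (run as root):
#   mount -o remount,nobarrier,inode64 /mseas-data-0-1
# Persistent: replace "defaults" in the /etc/fstab entry with the options.
opts="nobarrier,inode64"
printf '%s\n' "/dev/sdb1 /mseas-data-0-1 xfs $opts 1 0"
```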
Hi James,
My system did not recognize the delaylog option, but when I mounted with nobarrier,inode64 things worked and I was able to write to the array!
Thanks!
Pat
Pat Haley wrote:
My system did not recognize the delaylog option, but when I mounted with nobarrier,inode64 things worked and I was able to write to the array!
a) please don't top post.

b) a couple of things: first, is your system on a UPS? barrier is *supposed* to help make transactions atomic, so that files/dbs are not left in an undefined state in case of a power blip/outage. Second, nobarrier will *VERY* much speed up your NFS writes. We finally started moving people's home directories from 5.x to 6.x when we found that: with 5.x, uncompressing and untarring a 25MB file that expanded to about 105MB on an NFS-mounted drive took about 30 sec, while on 6.x it ran around 7 MINUTES. Then we found nobarrier... and it went back down to pretty much what it had been on 5.x.
mark
On Thu, 2014-02-06 at 11:26 -0500, m.roth@5-cent.us wrote:
and please DO NOT forget to eliminate all the unwanted text :-)
Thanks,
CUT OUT TEXT NOT RELEVANT.
Many thanks.
The actual reason this worked is the inode64 option and nothing else. Please note that this removes backward compatibility with some (very old) systems. I hope you understand what inode64 does; if not, the following URL describes it:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/...
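The arithmetic behind that can be sketched roughly (assuming 256-byte inodes, matching the isize=256 in the xfs_info output earlier in the thread): without inode64, XFS only allocates inodes whose 32-bit inode numbers can encode their on-disk location, which confines new inodes to roughly the first terabyte of the filesystem. Once that region fills with data, file creation fails with ENOSPC even though hundreds of GB remain free elsewhere.

```shell
# 2^32 possible inode numbers * 256 bytes per inode = addressable region
limit_bytes=$(( 4294967296 * 256 ))
echo $(( limit_bytes / 1099511627776 ))   # region size in TiB: the first 1 TiB
```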
The nobarrier option disables write barriers, so writes are faster. The *only* reason I suggested it is that you said you had a RAID card, which I *assumed* was battery-backed. If your RAID controller is not battery-backed, I would ask that you remove this option, since without a UPS or a battery-backed controller it is more dangerous. In either case, I also recommend using the deadline disk I/O scheduler, which makes either option safer.

If you have a battery-backed RAID controller, deadline will have no real performance hit unless there are REALLY slow disks behind it, or other issues with the equipment; XFS does some pretty extensive caching of its own.
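For reference, the active scheduler for a disk shows up bracketed in sysfs, and switching it is a single write (device name assumed from the thread):

```shell
# View:   cat /sys/block/sdb/queue/scheduler   -> e.g. "noop anticipatory deadline [cfq]"
# Switch: echo deadline > /sys/block/sdb/queue/scheduler
#         (not persistent; add elevator=deadline to the kernel line in grub.conf)
# Extracting the bracketed current entry from a typical default:
sample='noop anticipatory deadline [cfq]'
current=$(printf '%s' "$sample" | sed 's/.*\[\(.*\)\].*/\1/')
echo "$current"   # cfq
```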
Sorry about delaylog; that's a CentOS 6 option that is unavailable in 5.8, but if you move to CentOS 6 it's an option you will want to check out for additional performance.
There are various other tweaks you can use to increase performance, such as those surrounding the file system caches. CentOS 6 includes a tool called tuned-adm, which has various predefined profiles (throughput-performance, virtual-host, virtual-guest, enterprise-storage, and others) that you can assign to a machine. You may wish to look at the tunables those profiles set and apply them manually to your CentOS 5 machine where applicable.
Glad to see this helped though! :)
On 2014-02-06, James A. Peltier jpeltier@sfu.ca wrote:
The actual reason that this worked is the inode64 option and nothing else. Please note that this will remove backward compatibility with some older systems (very old). I hope you understand what it is that inode64 does, if not the following URL describes it
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/...
Here is another URL, from the XFS folks:
http://www.xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for...
Be sure to read the previous FAQ entry if you are exporting NFS.
--keith