I began an update of one of our servers via yum and, coincidentally or not, I have been getting the following logged into the message file since:
messages:Jan 7 15:55:51 inet07 kernel: post_create: setxattr failed, rc=28 (dev=dm-0 ino=280175)
Now, this tells me that dev dm-0 is out of space, but what is dm-0?
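For reference, rc=28 is the kernel errno ENOSPC ("No space left on device"). With the kernel headers installed you can confirm the mapping (the header path below is a typical one and may differ on your system):

# grep -w ENOSPC /usr/include/asm-generic/errno-base.h
#define ENOSPC          28      /* No space left on device */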
So, can anyone tell me what is happening and why?
On Mon, 2008-01-07 at 16:22 -0500, James B. Byrne wrote:
> I began an update of one of our servers via yum and, coincidentally or not, I have been getting the following logged into the message file since:
> messages:Jan 7 15:55:51 inet07 kernel: post_create: setxattr failed, rc=28 (dev=dm-0 ino=280175)
> Now, this tells me that dev dm-0 is out of space, but what is dm-0?
On my system,
[root@centos01 ~]# ls -l /dev/mapper
total 0
brw-rw---- 1 root disk 253,  0 Dec 29 10:07 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  3 Dec 29 10:08 VolGroup01-Home01
brw-rw---- 1 root disk 253,  2 Dec 29 10:08 VolGroupAA-lvol1
brw-rw---- 1 root disk 253,  1 Dec 29 10:08 VolGroupAE-LogVolTemp
crw------- 1 root root  10, 63 Dec 29 10:07 control
[root@centos01 ~]# ls -l /dev/dm*
brw-r----- 1 root root 253, 0 Dec 29 10:07 /dev/dm-0
brw-r----- 1 root root 253, 1 Dec 29 10:08 /dev/dm-1
brw-r----- 1 root root 253, 2 Dec 29 10:08 /dev/dm-2
brw-r----- 1 root root 253, 3 Dec 29 10:08 /dev/dm-3
Should be very similar on yours.
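If eyeballing the major/minor pairs gets tedious, dmsetup (from the device-mapper package) prints the name-to-number mapping directly; on the box above I would expect something like:

# dmsetup ls
VolGroup00-LogVol00     (253, 0)
VolGroupAE-LogVolTemp   (253, 1)
VolGroupAA-lvol1        (253, 2)
VolGroup01-Home01       (253, 3)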
> So, can anyone tell me what is happening and why?
NP here. Do a df or du on the equivalent LVM(?) item.
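For example, assuming the LV in question is mounted at / (adjust to match your fstab):

# mount | grep VolGroup00-LogVol00   # find where the LV is mounted
# df -h /                            # free space on that mount point
# du -sh /var/cache/yum              # yum's cache is a common space hog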
<snip>
HTH
On Mon, 2008-01-07 at 16:59 -0500, William L. Maltby wrote:
> On Mon, 2008-01-07 at 16:22 -0500, James B. Byrne wrote:
> <snip>
> On my system,
> <snip>
> Should be very similar on yours.
> > So, can anyone tell me what is happening and why?
> NP here. Do a df or du on the equivalent LVM(?) item.
s/LVM/mounted LVM/
For the raw device, "stat --filesystem <yours>" should be the ticket?
<snip>
After finishing the update I ran yum clean and then rebooted the server. On startup the following were logged:
Jan 7 16:43:39 inet07 kernel: EXT3-fs: INFO: recovery required on readonly filesystem.
Jan 7 16:43:39 inet07 kernel: EXT3-fs: write access will be enabled during recovery.
Jan 7 16:43:39 inet07 kernel: kjournald starting.  Commit interval 5 seconds
Jan 7 16:43:39 inet07 kernel: EXT3-fs: dm-0: orphan cleanup on readonly fs
Jan 7 16:43:40 inet07 kernel: EXT3-fs: dm-0: 1 orphan inode deleted
Jan 7 16:43:40 inet07 kernel: EXT3-fs: recovery complete.
Jan 7 16:43:40 inet07 kernel: EXT3-fs: mounted filesystem with ordered data mode.
I would appreciate it very much if somebody could enlighten me as to what happened and why. Is this indicative of a hardware failure?
Regards,
# ls -l /dev/mapper /dev/dm*
brw-r----- 1 root root 253, 0 Jan  7 16:42 /dev/dm-0
brw-r----- 1 root root 253, 1 Jan  7 16:42 /dev/dm-1
brw-r----- 1 root root 253, 2 Jan  7 16:42 /dev/dm-2
brw-r----- 1 root root 253, 3 Jan  7 16:42 /dev/dm-3
brw-r----- 1 root root 253, 4 Jan  7 16:42 /dev/dm-4
brw-r----- 1 root root 253, 5 Jan  7 16:42 /dev/dm-5
brw-r----- 1 root root 253, 6 Jan  7 16:42 /dev/dm-6

/dev/mapper:
total 0
crw------- 1 root root  10, 63 Jan  7 16:42 control
brw-rw---- 1 root disk 253,  0 Jan  7 16:42 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  2 Jan  7 16:42 VolGroup00-LogVol01
brw-rw---- 1 root disk 253,  1 Jan  7 16:42 VolGroup00-LogVol02
brw-rw---- 1 root disk 253,  3 Jan  7 16:42 VolGroup00-lv--IMAP
brw-rw---- 1 root disk 253,  6 Jan  7 16:42 VolGroup00-lv--IMAP--2
brw-rw---- 1 root disk 253,  4 Jan  7 16:42 VolGroup00-lv--MailMan
brw-rw---- 1 root disk 253,  5 Jan  7 16:42 VolGroup00-lv--webfax
I infer that dm-0 ===> VolGroup00-LogVol00 and that VolGroup00-LogVol00 ===> /
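The first mapping can be confirmed from the major:minor pair; both nodes are 253, 0 (lines taken from the listing above):

# ls -l /dev/dm-0 /dev/mapper/VolGroup00-LogVol00
brw-r----- 1 root root 253, 0 Jan  7 16:42 /dev/dm-0
brw-rw---- 1 root disk 253, 0 Jan  7 16:42 /dev/mapper/VolGroup00-LogVol00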
so df / gives
# df /
Filesystem                      1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   8256952   6677880   1159644  86% /
I am guessing that the yum update caused the file system to fill, precipitating this problem. Is this the probable cause?
On Mon, 2008-01-07 at 17:22 -0500, James B. Byrne wrote:
> # ls -l /dev/mapper /dev/dm*
> <snip>
>
> I infer that dm-0 ===> VolGroup00-LogVol00 and that VolGroup00-LogVol00 ===> /
>
> so df / gives
>
> # df /
> Filesystem                      1K-blocks      Used Available Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00   8256952   6677880   1159644  86% /
>
> I am guessing that the yum update caused the file system to fill, precipitating this problem. Is this the probable cause?
With approx. 1.19 GB available, by itself I don't think so. There are also the tmpfs file systems underneath /dev, where those LV nodes live. A stat on those will show a different set of numbers. Maybe one of these filled?
# stat --filesystem /dev/mapper/VolGroup00-LogVol00
  File: "/dev/mapper/VolGroup00-LogVol00"
    ID: 0        Namelen: 255     Type: tmpfs
Blocks: Total: 194473    Free: 194415    Available: 194415    Size: 4096
Inodes: Total: 194473    Free: 194075

# stat --filesystem /dev/mapper/VolGroup01-Home01
  File: "/dev/mapper/VolGroup01-Home01"
    ID: 0        Namelen: 255     Type: tmpfs
Blocks: Total: 194473    Free: 194415    Available: 194415    Size: 4096
Inodes: Total: 194473    Free: 194075

# df -H /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  19G   11G  7.4G  59% /

I *guess* that when doing the update, there were some components that were in use and could not be *truly* deleted. While the update was being done, in addition to the temporary high-water marks reached while transactions were processed, rpms shuffled, etc., there was probably additional space not yet released by some component that had been "replaced" but could not yet be deleted.
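One caveat on reading those numbers: pointed at a /dev/mapper node, stat --filesystem reports the tmpfs that /dev itself lives on (udev mounts one there), not the ext3 file system inside the LV. To see the LV's own figures, stat the mount point instead:

# stat --filesystem /    # the ext3 fs that lives on VolGroup00-LogVol00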
There is also the possibility that your inodes were used up. Since I set my file systems up with 4K blocks, I have fewer inodes than the default.
Do "df -i" on mounted FSs.
I *suspect*, relative to your orphaned inode, that this same underlying situation was the cause: some component that couldn't be released was still active when the file system had to be unmounted. It should not recur unless it really is some other problem.
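If you want to rule out the hardware angle, the kernel log and SMART are the first things I'd check (this assumes smartmontools is installed and that sda is the disk under your PV; adjust to suit):

# grep -i "i/o error" /var/log/messages
# smartctl -H /dev/sda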
# stat --filesystem /dev/mapper/VolGroup00-LogVol00
  File: "/dev/mapper/VolGroup00-LogVol00"
    ID: 0        Namelen: 255     Type: tmpfs
Blocks: Total: 258475    Free: 258421    Available: 258421    Size: 4096
Inodes: Total: 224103    Free: 223740
It seems that I have plenty of free inodes now at least.
If it is of interest, I did an uptime just prior to shutdown and the server had been up for 101 days. I did not run any of the utilities discussed in this thread prior to the reboot, so I cannot say whether there was any evidence of resource starvation that might relate to uptime.
Thank you very much for your assistance. My postings are a little disjointed because I subscribe to the digest and have to use the archives for questions demanding more immediate responses than once a day.