Hey folks,
We have Munin set up for longer-term performance monitoring, and it has been extremely useful to us for what it does. However, it is hard-coded to poll systems at 5-minute intervals, which of course is not much use when you have to dig into something in more detail.
What we typically do for load tests where we need more detail is run this command:
/usr/lib64/sa/sadc -d -I -F 2 /var/log/foo/bar
Which of course logs data every 2 seconds.
We now have a problem with our PostgreSQL server and want to set something up to record sadc data more frequently than cron's 1-minute floor. Two seconds is probably a bit much, though; we are thinking more like 5 or 10 seconds. So the above command would be ideal.
Except ... what if it fills up the disk?
I looked at the man page and do not see any obvious way to get it to write out to a file of a given size, and just keep overwriting the oldest data in that file. That way we could pre-allocate a big file of a given size, and be able to store the last X minutes of sadc data. So when the PG system crashes again (sigh), we can review the data for the last X minutes before the crash.
One solution I could think of is some kind of filesystem that implements this sort of circular file.
Any ideas? Any other obvious solution I am missing?
I can think of ways to do the next best thing in a script, e.g. alternate between two files, switching back and forth programmatically.
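For what it's worth, the two-file alternation could be scripted roughly like this. The paths, the 10-second interval, and the one-hour window per file are made-up values, and it assumes sysstat's sadc takes the usual `interval count outfile` arguments:

```shell
#!/bin/sh
# Sketch of the "alternate between two files" workaround.
# Paths, interval and window size are assumptions, not real values.
SADC=/usr/lib64/sa/sadc
DIR=/var/log/foo
INTERVAL=10        # seconds between samples
COUNT=360          # 360 samples x 10 s = one hour per file

record_window() {
    # Record one window into $1: drop the stale copy first so each
    # file never holds more than one window of history.
    rm -f "$1"
    "$SADC" -d -I -F "$INTERVAL" "$COUNT" "$1" || sleep 1
}

# Run as "ring.sh start"; each pass overwrites whichever file is
# now the older of the two, so at most ~2 hours is ever on disk.
if [ "${1:-}" = start ]; then
    while :; do
        record_window "$DIR/bar-A"
        record_window "$DIR/bar-B"
    done
fi
```

Reading it back would just be the normal `sar -f` against whichever of the two files covers the crash window.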
thanks, -Alan
Alan McKay wrote:
One solution I could think of is some kind of filesystem that implements this sort of circular file.
You could create one: use dd to create an empty file of the size you want, format it with mke2fs, and mount it via loopback.
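Something along these lines — the 64 MB size and the paths are just examples, and the mount step needs root:

```shell
#!/bin/sh
# Example only: a small loopback filesystem to cap how much sadc
# can ever write.  Size and paths are arbitrary.
IMG=/tmp/sadc-ring.img
MNT=/tmp/sadc-ring

dd if=/dev/zero of="$IMG" bs=1M count=64 2>/dev/null  # pre-allocate the image
mke2fs -F -q "$IMG" || echo "mke2fs not available here"  # -F: target is a plain file
mkdir -p "$MNT"
mount -o loop "$IMG" "$MNT" || echo "mount -o loop needs root"
```

Then point sadc's output file somewhere under the mount point, and the worst case is ENOSPC on that little filesystem rather than on the real one.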
Any ideas? Any other obvious solution I am missing?
I have a custom setup that queries sar once a minute, mainly for CPU usage, and truncates its log files daily; the results are queried remotely and stored in RRD files. It wouldn't be hard to configure it to query more often: even though cron only kicks off once a minute, you could have sar run every minute and log many times during that minute.
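e.g. a one-minute cron job whose script fills its whole minute with sub-minute samples; the interval and log path below are made up:

```shell
#!/bin/sh
# Example crontab entry (runs every minute):
#   * * * * *  root  /usr/local/bin/sar-fine.sh
# The script then samples several times within its minute, so
# cron's one-minute floor stops mattering.
INTERVAL=10
SAMPLES=$(( 60 / INTERVAL ))     # 6 samples cover the full minute
LOG=/tmp/sar-fine.log            # example path only

if command -v sar >/dev/null 2>&1; then
    sar -u "$INTERVAL" "$SAMPLES" >> "$LOG"   # CPU samples, 10 s apart
else
    echo "sysstat's sar not installed on this box" >&2
fi
```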
nate
You could create one: use dd to create an empty file of the size you want, format it with mke2fs, and mount it via loopback.
Would it automatically do the loop thing though? i.e. overwrite the oldest data in the file?
Alan McKay wrote:
You could create one: use dd to create an empty file of the size you want, format it with mke2fs, and mount it via loopback.
Would it automatically do the loop thing though? i.e. overwrite the oldest data in the file?
No, but you could be sure you'd never fill your normal filesystem up.
If you want to automatically overwrite the oldest file, come up with a naming/numbering scheme for the files, iterate through them, and then repeat.
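A minimal rotation scheme could derive the slot number from the clock, so each numbered file is reused on a fixed cycle. The directory, the 4-slot count, and the one-hour window are all invented for illustration:

```shell
#!/bin/sh
# Rotate through SLOTS numbered files, overwriting the oldest:
# the slot number comes from the clock, so slot N is reused every
# SLOTS*WINDOW seconds.  Names and sizes are examples only.
DIR=/var/log/foo
SLOTS=4
WINDOW=3600    # seconds covered by each slot (one hour)

slot_for() {
    # Map an epoch timestamp to a slot number 0..SLOTS-1
    echo $(( ($1 / WINDOW) % SLOTS ))
}

slot=$(slot_for "$(date +%s)")
out="$DIR/bar-$slot"
rm -f "$out"   # this slot held the oldest window; overwrite it
# then record one window into it, e.g.:
#   /usr/lib64/sa/sadc -d -I -F 10 360 "$out"
```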
nate
If you want to automatically overwrite the oldest file, come up with a naming/numbering scheme for the files, iterate through them, and then repeat.
Yes, that was my intended workaround from the beginning, but I'm hoping someone knows of a way to do it without all that. We shall see.
Thanks for your input.
There is a tool in BSDland called fifolog. It does exactly what you want, but is BSD-only to my knowledge. Perhaps you can port it; I can't imagine it would be that hard.
You do need to read the data back with the fifolog tools, though; it does not store data as plain text.
-geoff
---------------------------------
Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org/
http://german-way.com/blog/