I'm looking for suggestions as to a good general method of remote-logging services such as nginx or anything else which doesn't support syslog natively.
I'm aware that there's an nginx patch, and we're evaluating this. It may be the way we fly.
However, there are other tools which may not have a patch, but for which remote logging would be useful, so I'm wondering if there's a general solution (even something as naive as tailing local logs and firing these off on a regular basis).
I've heard rumors of a Perl script used for apache logs.
Also, that rsyslog possibly supports logging from local files to a remote syslog server. I'm RTFMing on that.
Thanks in advance.
On Thursday, March 24, 2011 04:23:38 pm Dr. Ed Morbius wrote:
I'm looking for suggestions as to a good general method of remote-logging services such as nginx or anything else which doesn't support syslog natively.
logger
It's part of util-linux, and should be on every CentOS box, unless something is bad wrong....
It can take its stdin and syslog it at any loglevel and facility, and can do so over any socket.
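For instance (a rough, untested sketch -- the log path, tag, and facility are just placeholders):

tail -F /var/log/nginx/access.log | logger -t nginx-access -p local5.info

Pair that with a syslog daemon configured to forward local5 to your central loghost and you have remote logging without touching the application.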
on 16:35 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 24, 2011 04:23:38 pm Dr. Ed Morbius wrote:
I'm looking for suggestions as to a good general method of remote-logging services such as nginx or anything else which doesn't support syslog natively.
logger
I'm familiar with it.
It's part of util-linux, and should be on every CentOS box, unless something is bad wrong....
It can take its stdin and syslog it at any loglevel and facility, and can do so over any socket.
So: as part of a robust production system solution, how would I, say, avoid retransmitting old log data?
Named FIFO pipes come to mind.
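Something along these lines, perhaps (rough sketch; path, tag, and facility are made up):

# point nginx's access_log at a FIFO instead of a regular file
mkfifo --mode=0640 /var/log/nginx/access.pipe
chown nginx:root /var/log/nginx/access.pipe
# drain the FIFO into syslog; re-open it if the writer closes it (e.g. nginx reload)
while true; do
    logger -t nginx-access -p local5.info < /var/log/nginx/access.pipe
done &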
On Thursday, March 24, 2011 04:44:00 pm Dr. Ed Morbius wrote:
on 16:35 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 24, 2011 04:23:38 pm Dr. Ed Morbius wrote:
I'm looking for suggestions as to a good general method of remote-logging services such as nginx or anything else which doesn't support syslog natively.
logger
I'm familiar with it.
Have you tried it? Prior to PostgreSQL supporting syslog I used it to pipe PostgreSQL output to syslog. Worked fine.
So: as part of a robust production system solution, how would I, say, avoid retransmitting old log data?
Timestamps, good NTP setup, and log deduplication. Better to have retransmitted than to never have transmitted at all.
Or, in the specific case of nginx, use the syslog patch from Marlon de Boer.
But nginx is not in the CentOS repos that I can see; logger is, however, and the general usage of logger in the CentOS context would be on-topic.
on 17:14 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 24, 2011 04:44:00 pm Dr. Ed Morbius wrote:
on 16:35 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 24, 2011 04:23:38 pm Dr. Ed Morbius wrote:
I'm looking for suggestions as to a good general method of remote-logging services such as nginx or anything else which doesn't support syslog natively.
logger
I'm familiar with it.
Have you tried it? Prior to PostgreSQL supporting syslog I used it to pipe PostgreSQL output to syslog. Worked fine.
I haven't, looking at it.
So: as part of a robust production system solution, how would I, say, avoid retransmitting old log data?
Timestamps, good NTP setup, and log deduplication. Better to have retransmitted than to never have transmitted at all.
OK. Any pointers on configuration are greatly appreciated. Docs, etc.
Or, in the specific case of nginx, use the syslog patch from Marlon de Boer.
Yeah, we're aware of that (I mentioned this in my first post to the thread).
But nginx is not in the CentOS repos that I can see; logger is, however, and the general usage of logger in the CentOS context would be on-topic.
We've got a locally-compiled version of nginx, so patching isn't out of the question. Just looking at all our options.
Thanks.
On Thursday, March 24, 2011 05:37:41 pm Dr. Ed Morbius wrote:
on 17:14 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
Prior to PostgreSQL supporting syslog I used [logger] to pipe PostgreSQL output to syslog. Worked fine.
I haven't, looking at it.
It is one option that is definitely in vanilla CentOS.
OK. Any pointers on configuration are greatly appreciated. Docs, etc.
Whew. Large scale remote syslog operation is a large subject; I've never had anything large-enough scale to need more than logwatch or site-grown scripts to do processing. The biggest thing to do is set up NTP and have three reference time sources (three so that if one is wrong you know which one). Otherwise, log correlation is impossible.
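Something as simple as this in /etc/ntp.conf covers the three-sources part (the pool hostnames are the stock CentOS ones; otherwise an untested sketch):

driftfile /var/lib/ntp/drift
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst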
Yeah, we're aware of that (I mentioned this in my first post to the thread).
Yep, that you did.
We've got a locally-compiled version of nginx, so patching isn't out of the question. Just looking at all our options.
While CentOS doesn't provide nginx itself, it does provide tools for dealing with logs; I saw several things doing a 'yum list | grep log' (I know there's easier ways of doing that; that's just the way I prefer to go about it). Also try grepping a yum list for 'watch' as I remember some logwatching stuff.....
on 17:50 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 24, 2011 05:37:41 pm Dr. Ed Morbius wrote:
on 17:14 Thu 24 Mar, Lamar Owen (lowen@pari.edu) wrote:
Prior to PostgreSQL supporting syslog I used [logger] to pipe PostgreSQL output to syslog. Worked fine.
I haven't, looking at it.
It is one option that is definitely in vanilla CentOS.
Quite.
OK. Any pointers on configuration are greatly appreciated. Docs, etc.
Whew. Large scale remote syslog operation is a large subject; I've never had anything large-enough scale to need more than logwatch or site-grown scripts to do processing. The biggest thing to do is set up NTP and have three reference time sources (three so that if one is wrong you know which one). Otherwise, log correlation is impossible.
It is. There've been a few advances in sysadmin practice since the Nemeth books were first produced, and while there are some titles dealing with portions of this, codifying practices in docs would be a wonderful thing. I've considered (and been approached regarding) tackling at least parts of this myself.
Useful logging is definitely part of this.
Yeah, we're aware of that (I mentioned this in my first post to the thread).
Yep, that you did.
We've got a locally-compiled version of nginx, so patching isn't out of the question. Just looking at all our options.
While CentOS doesn't provide nginx itself, it does provide tools for dealing with logs; I saw several things doing a 'yum list | grep log' (I know there's easier ways of doing that; that's just the way I prefer to go about it). Also try grepping a yum list for 'watch' as I remember some logwatching stuff.....
Right, and the general solution also generalizes to other tools. Postgresql (which we aren't using currently) also has its own log handler (a small frustration of mine with the database).
And I turned up the rsyslogd feature:
http://www.rsyslog.com/doc/imfile.html Text File Input Module
Module Name: imfile
Author: Rainer Gerhards rgerhards@adiscon.com
Description:
Provides the ability to convert any standard text file into a syslog message. A standard text file is a file consisting of printable characters with lines being delimited by LF.
The file is read line-by-line and any line read is passed to rsyslog's rule engine. The rule engine applies filter conditions and selects which actions need to be carried out.
As new lines are written they are taken from the file and processed. Please note that this happens based on a polling interval and not immediately. The file monitor supports file rotation. To fully work, rsyslogd must be running while the file is rotated. Then, any remaining lines from the old file are read and processed and, when that is done, the new file is processed from the beginning. If rsyslogd is stopped during rotation, the new file is read, but any not-yet-reported lines from the previous file can no longer be obtained.
When rsyslogd is stopped while monitoring a text file, it records the last processed location and continues to work from there upon restart. So no data is lost during a restart (except, as noted above, if the file is rotated just in this very moment).
Currently, the file must have a fixed name and location (directory). It is planned to add support for dynamically generating file names in the future.
Multiple files may be monitored by specifying $InputRunFileMonitor multiple times.
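Here's roughly what that looks like in rsyslog.conf, going by the docs -- untested, and the file name, tag, facility, state file, and loghost are all placeholders of mine:

$ModLoad imfile

# watch the nginx access log and inject each new line as a syslog message
$InputFileName /var/log/nginx/access.log
$InputFileTag nginx-access:
$InputFileStateFile stat-nginx-access
$InputFileSeverity info
$InputFileFacility local5
$InputRunFileMonitor

# ship everything arriving on local5 to the central loghost over TCP
local5.* @@loghost.example.com:514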
On Thursday, March 24, 2011 06:52:24 pm Dr. Ed Morbius wrote:
Right, and the general solution also generalizes to other tools. Postgresql (which we aren't using currently) also has its own log handler (a small frustration of mine with the database).
PostgreSQL has had syslog support since version 7.x, with programmable facility information in /var/lib/pgsql/data/postgresql.conf. It's commented out by default; looking at a C4 server that has 7.4.30:

#syslog = 0                    # range 0-2; 0=stdout; 1=both; 2=syslog
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
(I don't have syslogging enabled for that box for PostgreSQL)
Sometimes it's still nice to see the stdout and stderr, though. And I don't recall when or if remote support was added; 7.4 was the last version I actively maintained the RPMs for, and the 8.x databases I have running aren't using syslog.
on 09:08 Fri 25 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 24, 2011 06:52:24 pm Dr. Ed Morbius wrote:
Right, and the general solution also generalizes to other tools. Postgresql (which we aren't using currently) also has its own log handler (a small frustration of mine with the database).
PostgreSQL has had syslog support since version 7.x, with programmable facility information in /var/lib/pgsql/data/postgresql.conf. It's commented out by default; looking at a C4 server that has 7.4.30:

#syslog = 0                    # range 0-2; 0=stdout; 1=both; 2=syslog
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
Good to know.
(I don't have syslogging enabled for that box for PostgreSQL)
Sometimes it's still nice to see the stdout and stderr, though.
Of course it is. Most daemon / service utilities have the ability to run non-detached, in debug mode. And you can always hunt down file descriptors and nasty stuff like that, but devs of such abominations should be hauled out and shot. Or bribed with beer until they do provide the requisite foreground + stdout/stderr functionality.
And I don't recall when or if remote support was added; 7.4 was the last version I actively maintained the RPMs for, and the 8.x databases I have running aren't using syslog.
If there's syslog support, rsyslog or syslogng can handle the remote aspect.
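In rsyslog the forwarding leg itself is a one-liner (loghost name is a placeholder):

# /etc/rsyslog.conf -- @@ forwards over TCP, a single @ over UDP
*.* @@loghost.example.com:514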
Greetings,
On 3/25/11, Dr. Ed Morbius dredmorbius@gmail.com wrote:
I'm looking for suggestions as to a good general method of remote-logging services such as nginx or anything else which doesn't support syslog natively.
Why not use a fine-tuned Apache instance for the webserver instead?
Just a thought.
</ducking the bricks>
Regards,
Rajagopal
Hi!
I'm using the following method for remote logging to catch logs from many servers. Nginx writes its logs into a FIFO, which is created via the nginx init script:
cat /etc/sysconfig/nginx
...
# syslog-ng support for nginx
if [ ! -p /var/log/nginx/access.log ]; then
    /bin/rm -f /var/log/nginx/access.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/access.log
fi
if [ ! -p /var/log/nginx/error.log ]; then
    /bin/rm -f /var/log/nginx/error.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/error.log
fi
/bin/chown nginx:root /var/log/nginx/access.log /var/log/nginx/error.log
Nginx just writes to the FIFO as if it were a regular file. Nginx has non-blocking log output, so if nobody reads the FIFO, nginx doesn't stall on log writes.
On the other side, the pipe is read by syslog-ng.
cat /etc/syslog-ng/syslog-ng.conf
...
source s_nginx_20 { fifo ("/var/log/nginx/access.log" log_prefix("nginx-access-log: ")); };
source s_nginx_21 { fifo ("/var/log/nginx/error.log" log_prefix("nginx-error-log: ")); };
...
destination d_remote { tcp("remote.example.com", port(514)); };
...
# nginx
filter f_nginx_20 { match("nginx-access-log: "); };
filter f_nginx_21 { match("nginx-error-log: "); };
...
# nginx
log { source(s_nginx_20); filter(f_nginx_20); destination(d_remote); };
log { source(s_nginx_21); filter(f_nginx_21); destination(d_remote); };
To avoid syslog-ng problems on startup (e.g. if the FIFO does not exist), the following solution is used:

cat /etc/sysconfig/syslog-ng
...
# syslog-ng support for nginx
if [ ! -p /var/log/nginx/access.log ]; then
    /bin/rm -f /var/log/nginx/access.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/access.log
fi
if [ ! -p /var/log/nginx/error.log ]; then
    /bin/rm -f /var/log/nginx/error.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/error.log
fi
/bin/chown nginx:root /var/log/nginx/access.log /var/log/nginx/error.log
On the remote side (remote.example.com):

cat /etc/syslog-ng/syslog-ng.conf
...
source s_net {
    udp(ip(0.0.0.0) port(514));
    tcp(ip(0.0.0.0) port(514) keep-alive(yes) max-connections(128));
};
...
filter f_nginx_20 { match("nginx-access-log: "); };
filter f_nginx_21 { match("nginx-error-log: "); };
...
destination d_nginx_20 { file("/var/log/nginx/access.log"); };
destination d_nginx_21 { file("/var/log/nginx/error.log"); };
...
log { source(s_sys); filter(f_nginx_20); destination(d_nginx_20); };
log { source(s_sys); filter(f_nginx_21); destination(d_nginx_21); };
In the same way I collect logs from 20-30 servers onto 1 server, approx. 300GB of gzipped logs per day.
First: thanks very much for spelling this out, Ilyas. This was along the lines of what I'd been considering. You addressed a number of concerns I had (e.g.: non-blocking output) which is really helpful.
on 08:39 Fri 25 Mar, Ilyas -- (umask00@gmail.com) wrote:
Hi!
I'm using the following method for remote logging to catch logs from many servers. Nginx writes its logs into a FIFO, which is created via the nginx init script:
cat /etc/sysconfig/nginx
...
# syslog-ng support for nginx
if [ ! -p /var/log/nginx/access.log ]; then
    /bin/rm -f /var/log/nginx/access.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/access.log
fi
if [ ! -p /var/log/nginx/error.log ]; then
    /bin/rm -f /var/log/nginx/error.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/error.log
fi
/bin/chown nginx:root /var/log/nginx/access.log /var/log/nginx/error.log
Nginx just writes to the FIFO as if it were a regular file. Nginx has non-blocking log output, so if nobody reads the FIFO, nginx doesn't stall on log writes.
Bingo.
On the other side, the pipe is read by syslog-ng.

cat /etc/syslog-ng/syslog-ng.conf
...
source s_nginx_20 { fifo ("/var/log/nginx/access.log" log_prefix("nginx-access-log: ")); };
source s_nginx_21 { fifo ("/var/log/nginx/error.log" log_prefix("nginx-error-log: ")); };
...
destination d_remote { tcp("remote.example.com", port(514)); };
...
# nginx
filter f_nginx_20 { match("nginx-access-log: "); };
filter f_nginx_21 { match("nginx-error-log: "); };
...
# nginx
log { source(s_nginx_20); filter(f_nginx_20); destination(d_remote); };
log { source(s_nginx_21); filter(f_nginx_21); destination(d_remote); };
Nice.
To avoid syslog-ng problems on startup (e.g. if the FIFO does not exist), the following solution is used:

cat /etc/sysconfig/syslog-ng
...
# syslog-ng support for nginx
if [ ! -p /var/log/nginx/access.log ]; then
    /bin/rm -f /var/log/nginx/access.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/access.log
fi
if [ ! -p /var/log/nginx/error.log ]; then
    /bin/rm -f /var/log/nginx/error.log
    /usr/bin/mkfifo --mode=0640 /var/log/nginx/error.log
fi
/bin/chown nginx:root /var/log/nginx/access.log /var/log/nginx/error.log
On the remote side (remote.example.com):

cat /etc/syslog-ng/syslog-ng.conf
...
source s_net {
    udp(ip(0.0.0.0) port(514));
    tcp(ip(0.0.0.0) port(514) keep-alive(yes) max-connections(128));
};
...
filter f_nginx_20 { match("nginx-access-log: "); };
filter f_nginx_21 { match("nginx-error-log: "); };
...
destination d_nginx_20 { file("/var/log/nginx/access.log"); };
destination d_nginx_21 { file("/var/log/nginx/error.log"); };
...
log { source(s_sys); filter(f_nginx_20); destination(d_nginx_20); };
log { source(s_sys); filter(f_nginx_21); destination(d_nginx_21); };
In the same way I collect logs from 20-30 servers onto 1 server, approx. 300GB of gzipped logs per day.
Great. That also answers the scaling question. We're comfortably under that scale for now.
Very, very helpful post, thanks again.
Hi!
Also note that:

1. logrotate won't rotate FIFOs/pipes if the `notifempty' option is enabled in the logrotate profiles.

2. Enable buffering in syslog-ng.conf (here is the whole list of options from my config):

options {
    sync (128);
    time_reopen (10);
    log_fifo_size (16384);
    chain_hostnames (yes);
    use_dns (no);
    use_fqdn (yes);
    create_dirs (yes);
    keep_hostname (yes);
    dir_perm(0755);
    perm(0644);
    dir_owner(root);
    dir_group(root);
    owner(root);
    group(root);
    log_msg_size(16384);
};

3. Don't worry about blocking output in some services. If syslog-ng listens on the FIFO locally (on the same server/VPS as the daemon whose logs we want to handle), any output will be buffered by syslog-ng (with a few limits in the free version of syslog-ng). The main idea is that the side reading the FIFO is a locally running syslog-ng.

4. I have used and am using the open-source version of syslog-ng and have no problems with load; syslog-ng handles heavy load very well.
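For reference, a logrotate profile roughly like this (paths are just an example) leaves the FIFOs alone precisely because of notifempty -- a FIFO normally stats as empty:

/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
}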
on 20:19 Fri 25 Mar, Ilyas -- (umask00@gmail.com) wrote:
Hi!
Also note that:
1. logrotate won't rotate FIFOs/pipes if the `notifempty' option is enabled in the logrotate profiles.

2. Enable buffering in syslog-ng.conf (here is the whole list of options from my config):

options {
    sync (128);
    time_reopen (10);
    log_fifo_size (16384);
    chain_hostnames (yes);
    use_dns (no);
    use_fqdn (yes);
    create_dirs (yes);
    keep_hostname (yes);
    dir_perm(0755);
    perm(0644);
    dir_owner(root);
    dir_group(root);
    owner(root);
    group(root);
    log_msg_size(16384);
};

3. Don't worry about blocking output in some services. If syslog-ng listens on the FIFO locally (on the same server/VPS as the daemon whose logs we want to handle), any output will be buffered by syslog-ng (with a few limits in the free version of syslog-ng). The main idea is that the side reading the FIFO is a locally running syslog-ng.
My concern with buffering / blocking output has more to do with some critical service saying "wups, no more serving until I can flush my log buffers" than it does with losing a few lines of logging periodically (though that should also be minimized).
4. I have used and am using the open-source version of syslog-ng and have no problems with load; syslog-ng handles heavy load very well.
<...>
On 3/25/2011 2:53 PM, Dr. Ed Morbius wrote:
My concern with buffering / blocking output has more to do with some critical service saying "wups, no more serving until I can flush my log buffers" than it does with losing a few lines of logging periodically (though that should also be minimized).
Does this have to be centralized in realtime? I gather up the logs from a bunch of web servers with rsync over ssh after the application does its own log rollover. The log files end up with timestamped names so it doesn't matter if the rsync misses a day or two. That way you've got the whole file system as a queue...
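The pull itself is nothing fancy -- roughly this from cron on the collection box (hostnames and paths are made up for the example):

for h in web01 web02 web03; do
    rsync -az -e ssh "$h:/var/log/app/access-*.log.gz" "/srv/logs/$h/"
done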
on 15:28 Fri 25 Mar, Les Mikesell (lesmikesell@gmail.com) wrote:
On 3/25/2011 2:53 PM, Dr. Ed Morbius wrote:
My concern with buffering / blocking output has more to do with some critical service saying "wups, no more serving until I can flush my log buffers" than it does with losing a few lines of logging periodically (though that should also be minimized).
Does this have to be centralized in realtime?
It'd be nice / helpful / useful.
We're at the point now where syncing daily will exceed local storage allocations soon with projected growth rates.
We could do more frequent log rotation / distribution, but given the role and volume, real-time (or very close to it) updates would be preferred, and workfactor is largely orthogonal.
If we need to queue, we could always have rsyslog (we're using it, not syslog-ng) write locally and rotate those frequently. There's still the risk of a hiccup between nginx and rsyslog, but we can keep an eye on that via monit.
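e.g. something along these lines in monit (sketch only; the pidfile, service name, and threshold are assumptions on my part):

# restart the local rsyslog if it dies
check process rsyslogd with pidfile /var/run/syslogd.pid
    start program = "/sbin/service rsyslog start"
    stop  program = "/sbin/service rsyslog stop"

# alert if the locally-spooled access log stops being written to
check file nginx_access with path /var/log/nginx/access.log
    if timestamp > 15 minutes then alert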
On 3/25/2011 3:42 PM, Dr. Ed Morbius wrote:
My concern with buffering / blocking output has more to do with some critical service saying "wups, no more serving until I can flush my log buffers" than it does with losing a few lines of logging periodically (though that should also be minimized).
Does this have to be centralized in realtime?
It'd be nice / helpful / useful.
We're at the point now where syncing daily will exceed local storage allocations soon with projected growth rates.
We could do more frequent log rotation / distribution, but given the role and volume, real-time (or very close to it) updates would be preferred, and workfactor is largely orthogonal.
If we need to queue, we could always have rsyslog (we're using it, not syslog-ng) write locally and rotate those frequently. There's still the risk of a hiccup between nginx and rsyslog, but we can keep an eye on that via monit.
Aren't you building in a single point of failure if you use a central syslog receiver - or do you have some sort of failover there? My servers are distributed over different data centers and I wouldn't want to depend on constant connectivity.