Greetings,
How does one monitor if a site is being accessed using a browser?
IOW, I just want to know if a user has launched a session through Firefox.
I basically want to know if a user has tried to access the webserver but was unable to reach it, and log such instances.
I am using cron and curl to separately monitor the link.
Any clues?
CentOS 5.2/Gnome/Firefox 3.0.16
Regards
Rajagopal
On Fri, Jan 29, 2010 at 3:12 AM, Rajagopal Swaminathan raju.rajsand@gmail.com wrote:
Greetings,
How does one monitor if a site is being accessed using a browser?
IOW, I just want to know if a user has launched a session through Firefox.
I basically want to know if a user has tried to access the webserver but was unable to reach it, and log such instances.
I am using cron and curl to separately monitor the link.
Any clues?
It is possible using the auditd subsysted. You'd need to define a rule to match that user and firefox. I don't have the exact syntax, but the rule I use for root in audit.rules is:
-a entry,always -S open -S close -S read -S write -S link -S unlink -S chmod -S chown -S execve -F uid=root -k root_activity
If you do a man on auditctl it will show the options. You could, for example, write a rule that fires whenever a particular user launches firefox, etc..
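For instance -- a sketch only, assuming Firefox lives at /usr/bin/firefox and auditd is running -- an execute watch on the binary would record every launch:

-w /usr/bin/firefox -p x -k firefox_launch

You could then pull the matching records with ausearch -k firefox_launch.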
Greetings,
Thanks a lot
On Fri, Jan 29, 2010 at 4:13 PM, Kwan Lowe kwan.lowe@gmail.com wrote:
On Fri, Jan 29, 2010 at 3:12 AM, Rajagopal Swaminathan raju.rajsand@gmail.com wrote:
It is possible using the auditd subsysted. You'd need to define a rule
I presume you meant subsystem..
-a entry,always -S open -S close -S read -S write -S link -S unlink -S chmod -S chown -S execve -F uid=root -k root_activity
I will look into that..
Regards,
Rajagopal
On Fri, Jan 29, 2010 at 12:12 AM, Rajagopal Swaminathan raju.rajsand@gmail.com wrote:
Greetings,
How does one monitor if a site is being accessed using a browser?
IOW, I just want to know if a user has launched a session through Firefox.
I basically want to know if a user has tried to access the webserver but was unable to reach it, and log such instances.
I am using cron and curl to separately monitor the link.
Any clues?
CentOS 5.2/Gnome/Firefox 3.0.16
It's clear what it is you're trying to do, but if you're running Apache, turn on
CustomLog "logs/access_log" combined
The default is
CustomLog "logs/access_log" common
It will log not only the browser type but also the OS in the access_log file.
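For reference, the stock httpd.conf defines the combined format like this -- the trailing %{User-Agent}i field is what carries the browser and OS string:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined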
For errors accessing files, see the error_log.
If the client can't reach the site, then it should be clear the server won't be able to log the attempt.
Greetings,
Thanks for the reply.
On Sat, Jan 30, 2010 at 12:58 AM, Agile Aspect agile.aspect@gmail.com wrote:
It's clear what it is you're trying to do, but If you're running Apache, turn on
I am not running Apache, and it may not be feasible as the client machines are not endowed with enough resources.
If the client can't reach the site, then it should be clear the server won't be able to log the attempt.
In fact, this is exactly the condition I wanted to capture as the unavailability window.
FWIW, I am approaching this with tcpdump
tcpdump -s 0 -A -i eth0 -n -q -tttt '(dst host <mumble> and dst port 80) and tcp[13] == 2'
Basically checking for the SYN flag in the outgoing traffic.
But it is generating too much data for my purposes.
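One way to cut the volume, as a sketch: drop the -s 0 -A payload dump so tcpdump prints a single summary line per SYN instead of the full packet contents:

tcpdump -i eth0 -n -q -tttt 'dst host <mumble> and dst port 80 and tcp[13] == 2'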
Another approach I have in mind is running a proxy and logging the outgoing connections -- will that be resource hungry? I've never tried Squid. Is this the correct way?
Ideas?
Regards
Rajagopal
Rajagopal Swaminathan wrote:
Greetings,
Thanks for the reply.
On Sat, Jan 30, 2010 at 12:58 AM, Agile Aspect agile.aspect@gmail.com wrote:
It's clear what it is you're trying to do, but if you're running Apache, turn on
I am not running Apache, and it may not be feasible as the client machines are not endowed with enough resources.
If the client can't reach the site, then it should be clear the server won't be able to log the attempt.
In fact, this is exactly the condition I wanted to capture as the unavailability window.
FWIW, I am approaching this with tcpdump
tcpdump -s 0 -A -i eth0 -n -q -tttt '(dst host <mumble> and dst port 80) and tcp[13] == 2'
Basically checking for the SYN flag in the outgoing traffic.
But it is generating too much data for my purposes.
Another approach I have in mind is running a proxy and logging the outgoing connections -- will that be resource hungry? I've never tried Squid.
Depending on the nature of the content and the number of users, running a squid with caching enabled can be a resource win - and it will give you the log you want as long as the browser(s) are configured to use it.
Les Mikesell wrote:
Depending on the nature of the content and the number of users, running a squid with caching enabled can be a resource win - and it will give you the log you want as long as the browser(s) are configured to use it.
if you have control over the internet gateway, you can force -all- web traffic to transparently be routed to the squid proxy, and then process the squid access and error logs, perhaps with a perl script (perl really rocks for this sort of thing).
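As a sketch of what that usually looks like on the gateway -- assuming eth1 is the LAN-facing interface and Squid listens on its default port 3128:

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128

With Squid 2.6 (the CentOS 5 version), the http_port line also needs the 'transparent' option for intercepted traffic to be handled correctly.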
Greetings,
On Sat, Jan 30, 2010 at 11:28 AM, John R Pierce pierce@hogranch.com wrote:
if you have control over the internet gateway, you can force -all- web traffic to transparently be routed to the squid proxy, and then process the squid access and error logs, perhaps with a perl script (perl really rocks for this sort of thing).
Thanks John for the reply and suggestion.
It seems increasingly certain that I will have to set up a proxy.
Regards
Rajagopal
Greetings,
Thanks for the reply.
On Sat, Jan 30, 2010 at 11:25 AM, Les Mikesell lesmikesell@gmail.com wrote:
Depending on the nature of the content and the number of users, running a squid with caching enabled can be a resource win - and it will give you the log you want as long as the browser(s) are configured to use it.
IOW, two programs, Firefox and the Squid proxy, running on every such box: CentOS desktops running in GUI mode.
Will 512MB RAM be sufficient for what you are suggesting?
I have over 300 such desktops distributed geographically, with unpredictable connectivity, and each of them runs just one browser-based online application and some cron scripts for monitoring and logging simple details.
Changing the h/w configuration is nearly impossible now..
Or is there another lightweight solution?
Thanks again Les,
Regards
Rajagopal
Rajagopal Swaminathan wrote:
Greetings,
Thanks for the reply.
On Sat, Jan 30, 2010 at 11:25 AM, Les Mikesell lesmikesell@gmail.com wrote:
Depending on the nature of the content and the number of users, running a squid with caching enabled can be a resource win - and it will give you the log you want as long as the browser(s) are configured to use it.
IOW, two programs, Firefox and the Squid proxy, running on every such box: CentOS desktops running in GUI mode.
Will 512MB RAM be sufficient for what you are suggesting?
I have over 300 such desktops distributed geographically, with unpredictable connectivity, and each of them runs just one browser-based online application and some cron scripts for monitoring and logging simple details.
Changing the h/w configuration is nearly impossible now..
Or is there another lightweight solution?
There is not much point in having squid cache for a single user, since browsers also do their own caching. The normal configuration is to install one behind each internet connection, shared by many users at the same location, where the cache saves the time to fetch files viewed by more than one browser. However, you can turn off the cache or make it very small and still get the logging and a point of control. Squid is fairly efficient, but I don't know if that is the best approach.
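A minimal squid.conf sketch along those lines -- assuming Squid 2.6 as shipped with CentOS 5, and keeping the stock acl/http_access rules -- with caching effectively disabled but logging kept:

http_port 3128
cache deny all
cache_mem 8 MB
access_log /var/log/squid/access.log squid

Each Firefox would then be pointed at localhost:3128 as its HTTP proxy.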
Greetings,
On Sat, Jan 30, 2010 at 9:12 PM, Les Mikesell lesmikesell@gmail.com wrote:
Rajagopal Swaminathan wrote:
Greetings,
There is not much point in having squid cache for a single user since browsers also do their own caching.
Now, is there a way to configure the browser to report a "connection timeout" in a predictable way to some place in the filesystem that I can watch using the inotify/auditd mechanism?
What you had written earlier echoed my thoughts.
Thanks again Les for your kind reply
Regards,
Rajagopal
PS: If only I could share a few beers [ | "a lot of hard alcohol" (as this problem IMHO demands) ] with all you wonderful guys and gals on the greatest list on this earth...
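As far as I know, Firefox 3 does not write connection failures to any predictable file, so a sketch that stays close to the cron+curl approach already mentioned may be the nearest thing -- the log path and the 5-second timeout here are arbitrary assumptions:

curl -s -o /dev/null --connect-timeout 5 http://<mumble>/ || \
    echo "`date '+%F %T'` connect failed" >> /var/log/link-check.log

Run from cron, each failure then lands as one line in a single file that inotify can watch.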
Rajagopal Swaminathan wrote:
On Sat, Jan 30, 2010 at 12:58 AM, Agile Aspect agile.aspect@gmail.com wrote:
If the client can't reach the site, then it should be clear the server won't be able to log the attempt.
In fact, this is exactly the condition I wanted to capture as the unavailability window.
FWIW, I am approaching this with tcpdump
tcpdump -s 0 -A -i eth0 -n -q -tttt '(dst host <mumble> and dst port 80) and tcp[13] == 2'
Basically checking for the SYN flag in the outgoing traffic.
But it is generating too much data for my purposes.
If you have X11 installed, use Wireshark to capture the data. If you don't, save the captured data into a file, then copy it to another computer where you can use Wireshark. Set the view filter for the specific IP addresses you are looking for. From above, it would be
"ip.addr eq <mumble>"
The view filter I used yesterday to examine one connection at work was
"ip.addr eq 10.3.1.66 and ip.addr eq 10.3.1.96"
Remove the flags condition from the capture (tcp[13]) as it won't make any difference until the SYN packets get through and then it will only get in the way of seeing what happens next.
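To capture to a file for later inspection in Wireshark, a sketch -- the output path is an arbitrary choice:

tcpdump -s 0 -w /tmp/port80.pcap -i eth0 -n 'dst host <mumble> and dst port 80'

The resulting pcap file can be copied off and opened directly in Wireshark on another machine.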
Bob McConnell N2SPP
Greetings,
On Sat, Jan 30, 2010 at 7:25 PM, Bob McConnell rmcconne@lightlink.com wrote:
Rajagopal Swaminathan wrote:
On Sat, Jan 30, 2010 at 12:58 AM, Agile Aspect agile.aspect@gmail.com wrote:
Gee thanks for your kind reply,
If you have X11 installed, use Wireshark to capture the data.
This thought did cross my mind. But then the Level 1 guys/gals maintaining that don't even know English. It adds another level of complexity. If only I could filter the hex output of tcpdump.
Secondly, I am not interested in knowing what the response is. I am more interested in capturing the traffic that traverses the LAN in a remote village with horrible Internet connectivity. Once they are able to connect, I can track them on the server anyway.
Remove the flags condition from the capture (tcp[13]) as it won't make any difference until the SYN packets get through and then it will only
I am interested in knowing the attempts to connect rather than the (un)successful connects.
get in the way of seeing what happens next.
This I can track anyway once connected.
Hope I have made myself clear.
Thanks a lot for the reply
Regards,
Rajagopal
On Fri, Jan 29, 2010 at 10:12 AM, Rajagopal Swaminathan raju.rajsand@gmail.com wrote:
Greetings,
How does one monitor if a site is being accessed using a browser?
IOW, I just want to know if a user has launched a session through Firefox.
I basically want to know if a user has tried to access the webserver but was unable to reach it, and log such instances.
I am using cron and curl to separately monitor the link.
Any clues?
CentOS 5.2/Gnome/Firefox 3.0.16
Regards
Rajagopal
Why not try to do this with iptables in combination with inotify? From man iptables:
--uid-owner userid: Matches if the packet was created by a process with the given effective user id.
--gid-owner groupid: Matches if the packet was created by a process with the given effective group id.
--cmd-owner name: Matches if the packet was created by a process with the given command name. (This option is present only if iptables was compiled under a kernel supporting this feature.)
Then add an iptables rule with: -j LOG --log-level 4 --log-prefix "some_prefix "
Secondly, you can watch connection states with: -m state --state NEW (a SYN packet was sent) and -m state --state ESTABLISHED,RELATED (you are getting a response).
Then, in syslog.conf: kern.warning /var/log/iptables.log
You can then watch the /var/log/iptables.log file with inotify and do whatever you need.
Also, I would watch not only traffic to external TCP port 80 but also TCP/UDP port 53.
The browser may not even try to load the page if it cannot resolve the DNS name, or the entered domain does not exist.
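Put together, a sketch -- the uid 500 and the log prefixes are assumptions, and the owner match only works in the OUTPUT chain:

iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -m owner --uid-owner 500 -j LOG --log-level 4 --log-prefix "http_syn: "
iptables -A OUTPUT -p udp --dport 53 -m state --state NEW -m owner --uid-owner 500 -j LOG --log-level 4 --log-prefix "dns_query: "

and in /etc/syslog.conf:

kern.warning    /var/log/iptables.log

Each new outgoing HTTP connection attempt (and each new DNS query) from that user then produces one timestamped line in /var/log/iptables.log for inotify to pick up.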