On Fri, Jul 09, 2021 at 08:14:06AM -0400, mario juliano grande-balletta wrote:
> WAKE UP!
<sarcasm>Whew, I needed a wake-up call! I was falling asleep at my keyboard!</sarcasm>
In all seriousness, I think forwarding the audit logs will work; if you just want to track when users execute a program, you'll need to add an audit rule. I believe we had something like this in /etc/audit/rules.d/:
-a exit,always -F arch=b64 -F euid>1000 -S execve
-a exit,always -F arch=b32 -F euid>1000 -S execve
This captured all execve() syscalls from users with an effective UID greater than 1000 (so as not to audit system processes).
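Before you forward anything, you can sanity-check the rule locally with the stock audit userspace tools, something like this (the UID is just an example):

  # reload the rules from /etc/audit/rules.d/ and confirm they're active
  augenrules --load
  auditctl -l

  # interpreted execve events for one user (UID 1001 is just an example)
  ausearch -sc execve -ue 1001 -i

  # quick per-executable summary
  aureport -x --summary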
We didn't actually send it to a remote auditd server, though, because it was so chatty and we had a lot of users and workstations. We had an Elasticsearch cluster, so we sent the audit logs there directly, with Logstash and then Beaver (https://python-beaver.readthedocs.io/en/latest/). We did it that way because we had redundant ingesters and a cluster of ES servers, so logs were less likely to be dropped.
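Off the top of my head, a minimal Logstash pipeline for that looks something like the following; this is a rough sketch, not our exact config, and the hostnames are placeholders:

  input {
    file {
      path => "/var/log/audit/audit.log"
    }
  }
  filter {
    # audit records are mostly key=value, so kv gives you queryable fields
    kv { }
  }
  output {
    elasticsearch {
      hosts => ["es-ingest-1:9200", "es-ingest-2:9200"]
    }
  }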
Then we had some simple frontends for the ES cluster so we could quickly pull up which processes a user ran on which system. (The Kibana interface is nice, but it's overkill for a query that simple.) Along with collecting OS statistics like load, memory use, etc., we could track what users ran and how many resources they used.
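To give you an idea, a query like that against the default logstash-* indices is pretty small; roughly this, where the host, UID, and field names are just examples and depend on how your filter parses the records:

  # example only: adjust field names to match your parsed records
  curl -s 'http://es-ingest-1:9200/logstash-*/_search?pretty' \
    -H 'Content-Type: application/json' -d '{
      "query": { "bool": { "must": [
        { "match": { "host": "workstation42" } },
        { "match": { "auid": "1001" } }
      ] } },
      "sort": [ { "@timestamp": "desc" } ],
      "size": 20
    }'

Wrap that in whatever tiny web frontend you like and you have the "who ran what, where" view.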
Of course, at this job we dropped all of that and switched to CrowdStrike Falcon, a commercial security tool that does largely the same thing but with a proprietary LSM.