hello,
I want to protect the history file from being deleted by every user except 'root'. Is that possible? On my server, many users can log in as root remotely over SSH, so I cannot trace who did what. So I decided to create a separate account for each user and have them use 'sudo'; then I can trace which user typed which command and what he did. However, even with separate accounts, users can still easily delete their own history.
What should I do? I assume this is a common problem, and that there is a graceful solution, as I am not experienced in server management. Any suggestions for keeping an audit trail of which user did what, one that cannot be deleted by anyone except root?
Thanks!
Use remote logging to a second machine which only you have access to.
http://www.linuxjournal.com/content/creating-centralized-syslog-server
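For reference, the client-side half of such a setup is only a couple of lines of rsyslog configuration (a sketch; the address below is a placeholder, and the article above covers the server side):

```
# /etc/rsyslog.conf fragment on each monitored machine - forward all
# messages to the central log host. "192.168.1.10" is a placeholder.
*.*    @192.168.1.10:514    # single @ = UDP; use @@ for TCP
```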
Harold
8/8/2012 12:56 PM, Heng Su wrote:
I want to protect the history file from being deleted by every user except 'root'. [snip]
Greetings,
On Wed, Aug 8, 2012 at 10:26 PM, Heng Su ste.suheng@gmail.com wrote:
For my server, many users can log in as root remotely through ssh, so I cannot trace who did what. [snip]
Perhaps you can look at inotify: put .bash_history on its watch list and then rsync the changes to a remote host.
Haven't tried it, though.
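A rough, untested sketch of the idea: inotifywait (from the inotify-tools package) is the proper tool for the watch, but the portable stand-in below just polls the file's mtime, and the rsync step is left as a comment with a made-up host name.

```shell
#!/bin/sh
# Poll a history file's mtime and count changes; a real setup would use
# inotifywait and rsync the file to a remote host instead of just counting.
watch_file=${1:-$HOME/.bash_history}
last=""
changed=0
check() {
    now=$(stat -c %Y "$watch_file" 2>/dev/null || echo missing)
    if [ "$now" != "$last" ]; then
        last=$now
        changed=$((changed + 1))
        # rsync -a "$watch_file" loghost:/backup/histories/  # "loghost" is made up
    fi
}

# demo against a temporary file
tmp=$(mktemp)
watch_file=$tmp
check                        # first look counts as a change
sleep 1
echo 'some command' >> "$tmp"
check                        # mtime moved, so this counts too
echo "detected $changed change(s)"
rm -f "$tmp"
```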
HTH
Heng Su wrote:
I want to protect the history file from being deleted by every user except 'root'. [snip]
So, you've got someone inside, who's doing nasty, or stupid, things?
The most obnoxious, stupid idea I've had to deal with was a few years ago, when the company I was subcontracting for put something in the .profile to log every. single. command. a developer issued....
However, since you've set up sudo for them, their commands should *also* be in /var/log/secure. Of course, what you need is a script to grab that, and attach to it which user had sudo'd.
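Such a script could start as a small awk filter over the sudo lines (a sketch run here against two invented sample lines; the real format in /var/log/secure can differ between releases):

```shell
# Extract "user command" pairs from sudo's log lines. The sample lines in
# the here-document are made up for illustration.
out=$(awk '/sudo:/ && /COMMAND=/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "sudo:") user = $(i + 1)
        if ($i ~ /^COMMAND=/) { sub(/^COMMAND=/, "", $i); print user, $i }
    }
}' <<'EOF'
Aug  8 13:01:02 host sudo: alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/bin/cp app.jar /srv/deploy/
Aug  8 13:05:40 host sudo: bob : TTY=pts/1 ; PWD=/home/bob ; USER=root ; COMMAND=/bin/rm /srv/deploy/old.jar
EOF
)
echo "$out"
```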
Hmmm, as I type that, I just got to thinking: do they need all root privileges, or do specific users only need certain commands? If so, it's easy enough to limit what commands they're allowed to run under sudo - man sudoers.
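A sudoers fragment along those lines might look like this (a sketch only - the user group, command list, and path are invented; see man sudoers, and always edit with visudo):

```
# /etc/sudoers fragment (edit via visudo). Names and paths are examples.
Cmnd_Alias DEPLOY = /bin/cp, /bin/mv, /bin/rm /srv/jboss/deploy/*
%developers ALL = (jboss) DEPLOY    # may run only these commands, as 'jboss'
```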
mark
Hi mark,
Great! I think what you mentioned is exactly what I want. Basically, I want to trace which user did something wrong on the server.
I tried the link Harold provided and found it a good way to protect the log files; however, what I want to know is which user typed which command.
/var/log/secure is what I want - thank you so much.
I cannot limit the sudo commands, though; they need common ones like 'cp'.
For instance: a small team of 4 developers deploys code files to this server, and someone - say, a new guy - overwrites the wrong file. I need to trace that and inform him tactfully.
Thanks.
On 08/09/2012 01:42 AM, m.roth@5-cent.us wrote:
[snip]
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Greetings,
On Wed, Aug 8, 2012 at 11:32 PM, Heng Su ste.suheng@gmail.com wrote:
this server, however, someone let say new guy overwrite wrong file. I need to trace on it and inform him carefully.
SCMs like SVN, git etc. are exactly for such events.
You are taking backups, aren't you?
On 08/09/2012 02:14 AM, Rajagopal Swaminathan wrote:
[snip]
SCMs like SVN, git etc. are exactly for such events.
You are taking backups, aren't you?
Yeah, I know about backups, but they are only for getting the server running properly again after an incident; they don't tell you who did something wrong. The normal flow is to get code from the SCM repository or a CI server, but you know how messy things can get in a small company (my current company, lol ^_^). Sometimes you have to update only one file of the project.
On Wed, Aug 8, 2012 at 1:23 PM, Heng Su ste.suheng@gmail.com wrote:
The normal flow is to get code from the SCM repository or a CI server, but you know how messy things can get in a small company (my current company, lol ^_^). Sometimes you have to update only one file of the project.
Why does it need root permissions to update this file? It doesn't cost anything to add a user to own your application's resources.
On 08/09/2012 02:46 AM, Les Mikesell wrote:
[snip]
Why does it need root permissions to update this file? It doesn't cost anything to add a user to own your application's resources.
OK, assume there is a JBoss application server running as user 'jboss' on a PRD server, and we have 4 developers who need to update a jar file on that server. They all log in as the same user 'jboss' to update files, so how can I tell which of them did the thing that brought the server down, when they share one account?
I didn't know how to handle this - I am a shoddy server admin - so I use root to maintain the application server, and I created 4 individual accounts for the developers. If they want to copy, move, or otherwise operate on the deploy folders or files, they use sudo. Now I get all the commands they ran in /var/log/secure. ^_^
Heng Su wrote:
[snip]
OK, assume there is a JBoss application server running as user 'jboss' on a PRD server, and we have 4 developers who need to update a jar file on that server. They all log in as the same user 'jboss' to update files, so how can I tell which of them did the thing that brought the server down, when they share one account?
Now I have a picture of your problem.
<flame, but not to you, Heng Su> VCS's that let multiple people check the same object out at the same time.... You're *exactly* back where you were before people were using VCSs. </flame> It sounds like the use of the VCS is no different than saving them in a backup directory, which is *not* how it should be used.
Set up a real version control system. Configure it so that they *must* check out with a lock, so no one else can edit it. Extract to test, and test the damn thing. Then label it. Then, when they agree it's ok, you, the admin, get to install it, NOT THE DEVELOPERS!!!!! AND you extract it by label (or whatever the VCS calls it) to production directly from the VCS. You're guaranteed that the wrong file won't be moved to production.
Doing it that way, it's *very* easy to roll back (another thing VCSs are for).
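The thread's VCSs are PVCS and SVN, but the "extract by label" step is easy to sketch with git tags (a toy demonstration; the repo contents, tag name, and identity below are all invented, and only git itself is assumed):

```shell
#!/bin/sh
set -e
# Make a throwaway repo, tag a release, and extract exactly that tag into a
# stand-in "production" directory - nothing is copied into place by hand.
work=$(mktemp -d)
prod=$(mktemp -d)
cd "$work"
git init -q .
git config user.email admin@example.com   # placeholder identity
git config user.name "Release Admin"
echo 'version=1.0' > app.conf
git add app.conf
git commit -qm 'release candidate'
git tag REL_1_0                           # the "label"
git archive REL_1_0 | tar -x -C "$prod"   # deploy the labeled tree, only that
cat "$prod/app.conf"
```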
And don't let them do *anything* with production: that's your job. Right now, start logging every time something wrong goes into production. If what I read between the lines is the case, it should take a week or so to collect a number of problems; then dump it on your manager, tell them this is a problem, here's the evidence, and here's the answer (as above, with you as admin, as the gatekeeper to production).
mark, many years as developer, 7 years with PVCS as config mgr, and plenty of years as sysadmin
On Wed, Aug 8, 2012 at 2:56 PM, m.roth@5-cent.us wrote:
<flame, but not to you, Heng Su> VCS's that let multiple people check the same object out at the same time.... You're *exactly* back where you were before people were using VCSs.
</flame>
Errr, what? No sensible VCS forces you to wait for someone else to finish their portion of the work.
Extract to test, and test the damn thing. Then label it. Then, when they agree it's ok, you, the admin, get to install it, NOT THE DEVELOPERS!!!!! AND you extract it by label (or whatever the VCS calls it) to production directly from the VCS. You're guaranteed that the wrong file won't be moved to production.
That part is true enough, although it is not so much who does the work, it is following the procedure. If you are going to be picky about who does what, there should really be a QA person involved that makes the actual decision about what version should be running in production in between the developers making changes and the operators doing the installs.
Les Mikesell wrote:
On Wed, Aug 8, 2012 at 2:56 PM, m.roth@5-cent.us wrote:
<flame, but not to you, Heng Su> VCS's that let multiple people check the same object out at the same time.... You're *exactly* back where you were before people were using VCSs.
</flame>
Errr, what? No sensible VCS forces you to wait for someone else to finish their portion of the work.
You're wrong. I've worked in small and large teams, and *ALWAYS* we checked out with locks. If two people need to work on one file, then either they need to work together on one copy, and check it back in together, or the file needs to be split into more than one, so that one person can work on each. This is the way it was at a medium sized environmental company I worked at (that was working on ISO 9000), and it was the way it was at a Baby Bell I worked at, and it was the way it was when I worked on the City of Chicago 911 system.
I have vehemently been against the fad of the last half a dozen or so years, with multiple people checking out and working on the same file. I've seen hours or days of a developer's work wiped out, when a team lead hacked some quick fixes, then merged the file back in.
Extract to test, and test the damn thing. Then label it. Then, when they agree it's ok, you, the admin, get to install it, NOT THE DEVELOPERS!!!!! AND you extract it by label (or whatever the VCS calls it) to production directly from the VCS. You're guaranteed that the wrong file won't be moved to production.
That part is true enough, although it is not so much who does the work, it is following the procedure. If you are going to be picky about who does what, there should really be a QA person involved that makes the actual decision about what version should be running in production in between the developers making changes and the operators doing the installs.
I haven't had q/a move to prod; that was always the prod admin's job, after q/a was done, and had promoted it to prod.
mark
On Wed, Aug 8, 2012 at 4:03 PM, m.roth@5-cent.us wrote:
Errr, what? No sensible VCS forces you to wait for someone else to finish their portion of the work.
You're wrong. I've worked in small and large teams, and *ALWAYS* we checked out with locks. If two people need to work on one file, then either they need to work together on one copy, and check it back in together, or the file needs to be split into more than one, so that one person can work on each.
If you want to force your team to wait for your change, fine - and sometimes it is even a good idea, but the tool should not make that decision for you.
I have vehemently been against the fad of the last half a dozen or so years, with multiple people checking out and working on the same file. I've seen hours or days of a developer's work wiped out, when a team lead hacked some quick fixes, then merged the file back in.
You can't do that without knowing it. If the user ignores the other changes in a conflict or doesn't resolve them correctly, blame the user just like you would if he typed that in as part of his own changes.
That part is true enough, although it is not so much who does the work, it is following the procedure. If you are going to be picky about who does what, there should really be a QA person involved that makes the actual decision about what version should be running in production in between the developers making changes and the operators doing the installs.
I haven't had q/a move to prod; that was always the prod admin's job, after q/a was done, and had promoted it to prod.
OK, both QA and operations should agree - QA as to whether a version can be released and operations as to when it happens.
Les Mikesell wrote:
On Wed, Aug 8, 2012 at 4:03 PM, m.roth@5-cent.us wrote:
Errr, what? No sensible VCS forces you to wait for someone else to finish their portion of the work.
You're wrong. I've worked in small and large teams, and *ALWAYS* we checked out with locks. If two people need to work on one file, then either they need to work together on one copy, and check it back in together, or the file needs to be split into more than one, so that one person can work on each.
If you want to force your team to wait for your change, fine - and sometimes it is even a good idea, but the tool should not make that decision for you.
Yes, I do want to force them to wait for what one person's working on - it's not like everyone else isn't working on *other* things. And each piece should be independent - changing an interface, that is, the parameters a function (sorry, the "messages" a method) expects, is always a big deal.
I have vehemently been against the fad of the last half a dozen or so years, with multiple people checking out and working on the same file. I've seen hours or days of a developer's work wiped out, when a team lead hacked some quick fixes, then merged the file back in.
You can't do that without knowing it. If the user ignores the other changes in a conflict or doesn't resolve them correctly, blame the user just like you would if he typed that in as part of his own changes.
Yes... and one of the main points of a correctly configured VCS is explicitly to prevent one person from screwing up others' work.
That part is true enough, although it is not so much who does the work, it is following the procedure. If you are going to be picky about who does what, there should really be a QA person involved that makes the actual decision about what version should be running in production in between the developers making changes and the operators doing the installs.
I haven't had q/a move to prod; that was always the prod admin's job, after q/a was done, and had promoted it to prod.
OK, both QA and operations should agree - QA as to whether a version can be released and operations as to when it happens.
Absolutely, though in a small shop, that tends to be developers and admin. Not that many places, unfortunately, have one or more folks who are only q/a.
mark
On Wed, Aug 8, 2012 at 4:33 PM, m.roth@5-cent.us wrote:
If you want to force your team to wait for your change, fine - and sometimes it is even a good idea, but the tool should not make that decision for you.
Yes, I do want to force them to wait for what one person's working on - it's not like everyone else isn't working on *other* things. And each piece should be independent - changing an interface, that is, the parameters a function (sorry, the "messages" a method) expects, is always a big deal.
Interface/protocol changes aren't particularly tied to a single file or even a single project. If you are going to make changes that affect other things either everyone else needs to know what to expect or you need to be working on a branch that is kept isolated until everything else matches. It doesn't really matter if the file was locked when you make that change or not.
OK, both QA and operations should agree - QA as to whether a version can be released and operations as to when it happens.
Absolutely, though in a small shop, that tends to be developers and admin. Not that many places, unfortunately, have one or more folks who are only q/a.
Or worse, the developer may also change hats and be the admin... But developers should be doing new, experimental things and admins should insist on testing before going to production.
Am 08.08.2012 23:03, schrieb m.roth@5-cent.us:
[snip]
I have vehemently been against the fad of the last half a dozen or so years, with multiple people checking out and working on the same file. I've seen hours or days of a developer's work wiped out, when a team lead hacked some quick fixes, then merged the file back in.
It seems you are vehemently against the development model the Linux kernel is thriving on. Or perhaps you just never had a chance to look at git.
T.
On Wed, Aug 8, 2012 at 2:07 PM, Heng Su ste.suheng@gmail.com wrote:
[snip]
OK, assume there is a JBoss application server running as user 'jboss' on a PRD server, and we have 4 developers who need to update a jar file on that server.
A small team that is supposed to know what they are doing is a somewhat different picture than I had before.
They all log in as the same user 'jboss' to update files on the server; how can I tell who did the thing that brought the server down, when they share one account?
Still, your first thought should be about how to prevent the problem rather than how to blame the right person. You basically need to ensure that the only way to get the jar file into place is to copy it from a known/tested version using a predictable deployment script, where the script tests usability and has a way to back out the change. For a couple of jar files this might be as simple as a script wrapping rsync, with ssh keys set up for each potential user (from their staging/test box), or you might commit to a subversion repository and update production from there. For anything more complicated you probably want a jenkins CI server testing things (trivial to set up) and one of its deployment plugins.
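A minimal sketch of such a predictable deployment script, shown as a local copy for illustration (the directory layout, jar name, and log file are made up; a real one would wrap rsync/ssh as Les describes):

```shell
#!/bin/sh
set -e
# deploy FILE: copy an artifact into the deploy directory, keep a .bak copy
# for rollback, and append an audit line: who deployed what, and when.
deploy() {
    jar=$1
    base=$(basename "$jar")
    mkdir -p "$DEST"
    if [ -f "$DEST/$base" ]; then
        cp -p "$DEST/$base" "$DEST/$base.bak"   # back-out copy for rollback
    fi
    cp "$jar" "$DEST/$base"
    echo "$(date '+%F %T') $(id -un) deployed $base" >> "$LOG"   # audit trail
}

# demo against temporary directories
tmp=$(mktemp -d)
DEST=$tmp/deploy
LOG=$tmp/deploy.log
echo 'demo jar' > "$tmp/app.jar"
deploy "$tmp/app.jar"
cat "$LOG"
```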
On Wed, Aug 8, 2012 at 11:56 AM, Heng Su ste.suheng@gmail.com wrote:
[snip]
What should I do? I assume this is a common problem.
No, it is not a common situation. Normally you should not let anyone you don't trust become root. For fairly obvious reasons...
I think there is a graceful solution, as I am not experienced in server management. Any suggestions for keeping an audit trail of which user did what, one that cannot be deleted by anyone except root?
First, why do so many users need the root password? If they are developers testing things, give them their own VM to break. If they are doing a few routine things, make them log in as themselves and use restricted sudo commands (i.e., don't permit 'sudo su -'). In any case, backups are your friend. Keep copies of anything you might need, updated with frequent rsyncs from a different, more restricted machine - including the log files you want to track.
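On the restricted backup machine, the log-pulling part can be as small as one crontab line (a sketch; "prodserver" and the paths are placeholders, and ssh keys are assumed to be set up):

```
# crontab fragment on the backup box: pull the audit logs every 15 minutes.
*/15 * * * *  rsync -az -e ssh root@prodserver:/var/log/secure* /backup/prodserver/logs/
```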
On 08/09/2012 01:54 AM, Les Mikesell wrote:
[snip]
No, it is not a common situation. Normally you should not let anyone you don't trust become root. For fairly obvious reasons...
Let's say you want a low price to set up multiple application servers, and you outsource the setup of different servers to different people over the internet. You have to give them root rights; you may not even know which command restrictions to impose, since you are not an expert, so you just give them full permission. I think this scenario happens in small companies that don't have the manpower to do it themselves.
[snip]
First, why do so many users need the root password? If they are developers testing things, give them their own VM to break. If they are doing a few routine things, make them log in as themselves and use restricted sudo commands (i.e., don't permit 'sudo su -'). In any case, backups are your friend. Keep copies of anything you might need, updated with frequent rsyncs from a different, more restricted machine - including the log files you want to track.
The previous scenario applies here too: different developers update code on the server for the reason above. You cannot restrict common commands such as 'cp', because I want to know which user overwrote the wrong file. Even with only two users, I still need to know which one did it.
Thanks for your suggestions.
On Wed, Aug 8, 2012 at 1:13 PM, Heng Su ste.suheng@gmail.com wrote:
No, it is not a common situation. Normally you should not let anyone you don't trust become root. For fairly obvious reasons...
Let's say you want a low price to set up multiple application servers, and you outsource the setup of different servers to different people over the internet. You have to give them root rights; you may not even know which command restrictions to impose, since you are not an expert, so you just give them full permission. I think this scenario happens in small companies that don't have the manpower to do it themselves.
Yes, outsourcing happens, but what is your role in this picture? If someone else is managing the machine let them do their job and take the blame if it breaks. If someone else is managing an application, normally that application should run as a non-root user and should not need root access for most configuration/update tasks. If you need to be able to fix it regardless of what happens, take frequent backups.
The previous scenario applies here too: different developers update code on the server for the reason above.
If the production server matters, developers should be working on a test/staging server with some sort of automated updates pushed to production.
You cannot restrict common commands such as 'cp', because I want to know which user overwrote the wrong file. Even with only two users, I still need to know which one did it.
In that sort of environment I would try to split the services onto separate VMs or smaller servers, each managed by one person or team that is responsible for fixing it if anything breaks. Knowing who did something wrong really isn't going to be that much help in making things work again, especially for the parts managed by someone else that just happen to be on a shared server.
On Wed, 8 Aug 2012 21:00:59 +0300 Mihamina Rakotomandimby mihamina@rktmb.org wrote:
Use sudo.
Weak! Real fascists use sudosh!
Rui
ps: I'm sure there are some fascists who are more fascist so feel free to point out even better options ;)
On Wed, Aug 8, 2012 at 12:56 PM, Heng Su ste.suheng@gmail.com wrote:
[snip]
Capturing history files is error-prone and a very bad way to approach this problem. You should instead look into using process accounting, provided by the psacct package. You can read about it here: http://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounti...
❧ Brian Mathis
On 08/08/2012 11:34 AM, Brian Mathis wrote:
Capturing history files is error-prone and a very bad way to approach this problem. You should instead look into using process accounting, provided by the psacct package. You can read about it here: http://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounti...
+1
bash_history is not a log for the admin, it's a convenience for the user. Users who want to hide their tracks can unset HISTFILE or switch to a different shell. Process accounting is the only solution that's even remotely reliable.
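For example, any user can switch history off in their own session - no root needed, no trace left (harmless to try in a scratch shell):

```shell
# Turn off bash history for the current session.
unset HISTFILE      # nothing is written to ~/.bash_history at logout
HISTSIZE=0          # nothing is kept in memory either
export HISTSIZE
echo "HISTFILE is now '${HISTFILE:-unset}'"
```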