On Wed, Dec 24, 2008 at 09:43:19AM -0800, Bill Campbell wrote:
> On Wed, Dec 24, 2008, jkinz(a)kinz.org wrote:
> >Top posting to ask a question regarding the article below:
> > Summary: Enable ssh to allow login from any random point on
> > the internet
>
> I always have my laptop with me,
An excellent strategy, Bill. I use it myself, but I explicitly excluded
it in my question. Why? Because there are lots of scenarios in the world
where people won't be able to use their laptop or netbook and will have
to fall back on using someone else's equipment.
Two examples:
You are visiting the Otis Public Library in Norwich CT. They have Linux
based public workstations (w/Internet access).
(http://www.otislibrarynorwich.org/index.htm)
Or you are a consultant visiting a corporate client who doesn't allow
"outside equipment" to be used on their network, so they maintain
specific machines for "guests" to use. (Hint, "DOD" )
(I have run into both of these. :-) )
Example three - a TSA attendant "accidentally" drops your
laptop... in front of a forklift... (Merry Christmas!)
All your ideas are good ones to which I would add using port knocking
(not perfect at all but adds an additional small barrier)
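For the curious, a single-port knock can be sketched with the iptables "recent" module; the knock port (1234) and timeout below are made-up example values, not a recommendation:

```shell
# A SYN to the secret port 1234 marks the source address (and is dropped)...
iptables -A INPUT -p tcp --dport 1234 -m recent --name KNOCK --set -j DROP
# ...then only marked sources may reach sshd, and only for the next 30 seconds.
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

Someone cold-scanning port 22 just sees it closed; this is obscurity, not authentication, hence only a small extra barrier.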
The best technique I have used is to put up an https web page
that presents the person desiring entry with a
challenge<->response dialog generated from a specific one-time-use
pad of C/R key pairs. That way, each session requires a unique
response to enable it. This is awkward but helps keep unwanted
visitors out. This would be a variation on your SSL webmin
suggestion.
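As a minimal sketch of that pad idea (the pad file name and the pairs are invented for illustration; the real thing would sit behind the https page):

```shell
# pad.txt holds one challenge/response pair per line; a pair is
# destroyed after a single use, so every session needs a fresh response.
pad=pad.txt
printf 'alpha 7f3k\nbravo 9q2z\n' > "$pad"

# The https page would show the first unused challenge...
challenge=$(head -n1 "$pad" | cut -d' ' -f1)

# ...and check the visitor's answer, burning the pair either way.
check_response() {
    expected=$(head -n1 "$pad" | cut -d' ' -f2)
    sed -i '1d' "$pad"              # one-time use: remove the pair
    [ "$1" = "$expected" ]
}
```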
Unfortunately, the worst-case scenario (a compromised machine
that does key logging), which you pointed out, will always be a
potential problem.
So when on the road, perhaps we should restrict doing
online banking to just the cell phone.. :-) hmm.......
> accept only authorized_keys, (b) allow access from any IP, and (c) use
> fail2ban to limit the number of log entries from failed attempts to access
> the systems. All logins to our customer sites are then initiated from
> inside our network once I have established the initial connection from the
> remote location so those connections can be much more restrictive if
> necessary.
>
> One possibility would be to have a machine configured to allow password
> access from the world which one could log into, then execute ssh-agent, and
> ssh-add (with a strong pass phrase) on that machine to get access to other
> systems on your network.
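Bill's gateway idea in concrete form - a sketch using a throwaway, passphrase-less demo key where your real key (and its strong passphrase) would go:

```shell
# On the password-reachable gateway box: start an agent for this session.
eval "$(ssh-agent -s)" > /dev/null
# Generate a disposable demo key so the example is self-contained;
# in real use you would ssh-add your existing key instead.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t ed25519 -N '' -C demo -f /tmp/demo_key
ssh-add /tmp/demo_key 2>/dev/null
ssh-add -l        # the agent now answers key challenges for onward hops
# ssh internal.example.com   # no further passphrase typing needed
```

`internal.example.com` is a placeholder for a host inside your own network.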
>
> If there is some reason that an ssh cannot be established, usually it's
> possible to connect with OpenVPN, which works nicely behind NAT firewalls
> and does not require kernel hacking on CentOS as things like PPTP do.
>
> You make the job much more difficult when asking that you be able to get in
> from any old machine you might find in public space. Other than the fact
> that the owners of these machines generally don't allow people to install
> software on them, I would be very reluctant to do anything on them that
> involved secure logins as who knows what key capture or other spyware is
> running on them.
>
> One may be able to access your systems using webmin or its usermin module
> over an SSL connection, and webmin has a terminal interface allowing one to
> get a connection to systems. If I remember correctly, this does require
> Java(tm) on the connecting machine, and that webmin be configured to permit
> use of the terminal module.
>
> I much prefer to restrict webmin and usermin access though, as I have seen
> far too many systems cracked through it because it only has username/password
> authentication, and too many times users' passwords are easily cracked.
> Once somebody is logged into usermin, for instance, they may have access to
> tools such as the chfn (change finger information) command which at one
> time on SuSE systems allowed them to change their uid to ``0'' and gain
> root access to the system.
>
> In summary, I would be extremely reluctant to allow access from public
> machines where there is no assurance how much malware is running on top of
> the Microsoft virus, Windows. It's very easy to revoke authorized_keys or
> OpenVPN access for a lost or stolen laptop. Allowing password access by
> any means opens up a large can of worms.
>
> ...
> Bill
> --
> INTERNET: bill(a)celestial.com Bill Campbell; Celestial Software LLC
> URL: http://www.celestial.com/ PO Box 820; 6641 E. Mercer Way
> Voice: (206) 236-1676 Mercer Island, WA 98040-0820
> Fax: (206) 232-9186
>
> If the government can take a man's money without his consent, there is no
> limit to the additional tyranny it may practise upon him; for, with his
> money, it can hire soldiers to stand over him, keep him in subjection,
> plunder him at discretion, and kill him if he resists.
> Lysander Spooner, 1852
> _______________________________________________
> CentOS mailing list
> CentOS(a)centos.org
> http://lists.centos.org/mailman/listinfo/centos
--
On Tue, 2005-08-02 at 11:47 -0400, James B. Byrne wrote:
> I am attempting to build from a src.rpm (knowing very little about
> rpm at all) and the spec file notes that the architecture should be
> set on the command line:
>
> > # platform defines - set one below or define the build_xxx on the
> command line
>
> Now. My question is this, how does one do this using rpmbuild?
> The syntax rpmbuild --target centos4 package.src.rpm seemingly has
> no effect.
Should be something like:
$ rpmbuild --rebuild --target=i386-centos4-linux package_name.src.rpm
^^^^
Could possibly be i586, i686, x86_64, ...
or alternatively (you'll end up here anyway if the build fails):
$ rpm -ivh package.src.rpm
$ cd rpmbuild/package_name # assuming the setup shown below
$ rpmbuild -ba --target=i386-centos4-linux package_name.spec
Edit the spec and/or debug and repeat rpmbuild as necessary until a state
of joy is achieved. :-)
Another bit of unsolicited advice - setting up an end-user build
environment is a prudent approach. Some badly-behaved spec files have
been known to clobber system files when building as root. I build as a
user and put the resulting RPMS in a local repo so they can be installed
with yum. Here's my well-tested setup (as stolen from Mike Harris)...
[prs@wx1 ~]$ cat .rpmmacros
# Custom RPM macros configuration file for building RPM packages
# as a non-root user.
#
# Author: Mike A. Harris <mharris(a)redhat.com>
#
# This is a copy of my own personal RPM configuration which I use
# on my workstation for building and testing packages for Red Hat Linux.
# There are many different possibilities on how to configure RPM, so
# feel free to tweak however you desire. Make sure to create any
# directories that are referenced prior to using. RPM will automatically
# create some of them if missing, but not all of them. Which ones it
# auto-creates is only known by the extraterrestrial aliens that have
# created RPM.
#
# For ANY help with anything related to RPM development, packaging,
# or customization, please join the Red Hat RPM mailing list by sending
# an email message to: rpm-list-request(a)redhat.com with the word
# "subscribe" in the Subject: line.
#
# Any suggestions/comments for improvements to this setup are appreciated.
# %_topdir defines the top directory to be used for RPM building purposes.
# By default it is the ROOT of the buildsystem.
%_topdir %(echo $HOME)/rpmbuild
# %_sourcedir is where the source code tarballs, patches, etc. will be
# placed after you do an "rpm -ivh somepackage.1.0-1.src.rpm"
%_sourcedir %{_topdir}/%{name}-%{version}
# %_specdir is where the specfile gets placed when installing a src.rpm. I
# prefer the specfile to be in the same directory as the source tarballs, etc.
%_specdir %{_sourcedir}
# %_tmppath is where temporary scripts are placed during the RPM build
# process as well as the %_buildroot where %install normally dumps files
# prior to packaging up the final binary RPM's.
%_tmppath %{_topdir}/tmp
# %_builddir is where source code tarballs are decompressed, and patches then
# applied when building an RPM package
%_builddir %{_topdir}/BUILD
# %_buildroot is where files get placed during the %install section of spec
# file processing prior to final packaging into rpms. This is oddly named
# and probably should have been called "%_installroot" back when it was
# initially added to RPM. Alas, it was not. ;o)
%_buildroot %{_tmppath}/%{name}-%{version}-root
# %_rpmdir is where binary RPM packages are put after being built.
%_rpmdir %{_topdir}/RPMS
# %_srcrpmdir is where src.rpm packages are put after being built.
%_srcrpmdir %{_topdir}/SRPMS
# %_rpmfilename defines the naming convention of the produced RPM packages,
# and should not be modified. It is listed here because I am overriding
# RPM's default behaviour of dropping binary RPM's each in their own
# separate subdirectories. I hate that. Grrr.
%_rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm
# Customized tags for local builds
# %packager is the info that will appear in the "Packager:" field in the
# RPM header on built packages. By default I have it read your username
# and hostname. This should be customized appropriately.
# %packager Joe Blow <joe(a)blow.com>
%packager %(echo ${USER}@)%(hostname)
%distribution LaRC/RTD/ESB Build
# GNU GPG config below
#%_signature gpg
#%_gpg_name Joe Blow <joeblow(a)somewhere.com>
#%_gpg_path %(echo $HOME)/.gnupg
# The following will try to create any missing directories required above
# (Not implemented yet)
# Change default RPM query format to show ARCH
%_query_all_fmt %%{name}-%%{version}-%%{release}.%%{arch}
[prs@wx1 ~]$ cat .rpmrc
include: /usr/lib/rpm/rpmrc
macrofiles: /usr/lib/rpm/macros:/usr/lib/rpm/%{_target}/macros:/etc/rpm/macros.specspo:/etc/rpm/macros:/etc/rpm/%{_target}/macros:~/.rpmmacros
[prs@wx1 ~]$ ll rpmbuild/
total 24
drwxr-xr-x 3 prs users 4096 Jul 29 17:52 BUILD
-rw-r--r-- 1 prs users 4491 Sep 9 2001 README
drwxr-xr-x 2 prs users 4096 Jul 29 17:55 RPMS
drwxr-xr-x 2 prs users 4096 Jul 29 17:55 SRPMS
drwxr-xr-x 2 prs users 4096 Jul 29 17:55 tmp
[prs@wx1 ~]$ cat rpmbuild/README
Building RPM packages as a non-root user.
by Mike A. Harris <mharris(a)redhat.com>
This document will teach you how to build RPM packages without
being logged in as the root user, which is unfortunately the
default in any new system installation.
Building packages logged in as "root" is a very very dangerous
thing to do. Many things can go wrong, and can actually destroy
files located throughout your system. Typos, and other errors
in RPM spec files, makefiles, etc. often produce unexpected results,
and that can lead to "very bad things(tm)" happening during
development.
Software development processes should never modify the build
environment they are using, nor overwrite installed system software.
As such, building RPM packages should never ever be done logged in
as root, but instead should be done by being logged into a regular
user account.
To start off with, you will need 3 things. The first is a custom
RPM config file (.rpmrc) in your home directory, and the second is
a custom RPM macrofile (.rpmmacros) also in your homedir. These
two files are included in this package. The .rpmrc should be used
as is, unless you really really really know what you are doing. The
.rpmmacros file can be tweaked to taste, but is useable by default.
These two files are copies of the exact configuration I use when
building packages for Red Hat Linux. Different developers each
have their own preferences however, so I urge everyone to play with
the config to find something that suits their needs.
These two files together tweak the locations of where RPM looks
for files when it is building packages, and where it places files
and scripts during building and packaging.
The third thing you need is to create the directory structure
for building RPM packages. These dirs are completely configurable
via the .rpmmacros file, but RPM in general does not create them
for you if they do not exist. So if you change the dirs, make
absolutely sure that the new directories exist before testing.
My default supplied config uses a top level directory in your home
account "~/rpmbuild". All other directories are subdirectories
of the ~/rpmbuild directory.
SRPMS - where src.rpms get placed
RPMS - where all binary and noarch rpm packages get placed
BUILD - where software gets decompressed, patched and built
tmp - where the buildroot is and %install dumps files
I personally find the SOURCES and SPECS directories used by RPM in
a default installation of Red Hat Linux irritating, as I had to
jump back and forth between two directories all the time, and I
found it very inefficient to work with. As such I have merged the
two into a single directory. This makes my life so much easier it
isn't even funny. ;o)
The other thing that annoys me about the default systemwide
RPM build environment is that if you have many many source
RPM packages installed, the SOURCES and SPECS directories become
huge steaming piles of random files, where it is impossible to
know which file came from which src.rpm package. Also, it is
possible for 2 src.rpms to contain files with the exact same
filename that are nevertheless different files. Ouch.
My solution was to have the sources for each package installed
into its own private per-package-version directory. So when
you install a package, e.g. pine-4.33-15.src.rpm, it will be
placed into the directory ~/rpmbuild/pine-4.33.
This keeps each source package separate, and makes life a
highway.
I also hated RPM putting binary packages into RPMS/i386, RPMS/i686,
RPMS/alpha, RPMS/noarch, etc. GRRR!! So.. Now, instead of that,
ALL architectures are put into one single RPMS dir. Keep It Simple
Stupid! ;o)
I hope this RPM configuration is useful to other developers, and
makes life easier for you all as well. I am very much interested
in any feedback anyone may have from using these files, suggestions
for enhancement, etc.
Feel free to email me your suggestions for improvements. If you
have any problems with the configurations, I regret that I may
not be able to give individual assistance with configuring RPM or
tweaking these configurations as I am a very busy person usually.
Fortunately, Red Hat maintains a mailing list just for this
purpose, where you can ask for professional level help on RPM.
Please join the RPM mailing list if you have any questions about
customizing RPM, etc. To subscribe, send a message containing
the word "subscribe" to: rpm-list-request(a)redhat.com
Happy hacking!
TTYL
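As a footnote to the local-repo remark near the top (build as a user, then install the results with yum): the client side can be a single stanza. Every name and URL below is an invented example, and the repodata under the baseurl would be generated with createrepo after copying fresh packages out of ~/rpmbuild/RPMS.

```
# /etc/yum.repos.d/local-builds.repo  (example names)
[local-builds]
name=Locally built RPMS
baseurl=http://wx1.example.com/localrepo/
enabled=1
gpgcheck=0
```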
On 10/20/2011 10:22 AM, Müfit Eribol wrote:
> On 19.10.2011 21:07, Ljubomir Ljubojevic wrote:
>> On 10/19/2011 06:34 PM, Müfit Eribol wrote:
>>> Hi,
>>>
>>> My host and guest are CentOS 6. The guest is going to be a web server in
>>> production. I am trying to resize (extend) the base partition of my
>>> guest. I could of course start the installation of the CentOS 6 guest
>>> all over again with a larger image size. However, just for the sake of
>>> better understanding I am trying to sort this out now, so as not to end
>>> up in a dead end after some years.
>>>
>>> 1. I created a guest CentOS 6 with 12G total disk (on a iscsi drive). No
>>> Desktop, just for terminal use. No LVM, just a simple basic partitioning.
>>> 2. Later I wanted to increase the size of total image to 200G.
>>> 3. I managed to resize the image to 200G on my iscsi drive. So, there is
>>> 188G unallocated/unformatted volume within the guest image.
>>>
>>> Now, the hardest part. I have to resize the partition. I have been
>>> trying to find a way to do that. A search on Google showed that GParted
>>> is tool to do that. I had to install all Desktop and X as Gparted is a
>>> GUI tool. Installed vncserver. Then, I found out that GParted can not
>>> resize the live guest. So, I downloaded GParted Live CD.
>>>
>>> Now, the questions:
>>>
>>> 1. If it was a physical machine I would boot from the CD. If I can boot
>>> it from host CDROM but then how should I operate on a specific guest?
>>> What is the easiest way to access GUI of the guest if I boot from Live CD.
>>> 2. I am wondering if a simple LVM route at the beginning would be
>>> preferred. Changing size of the iscsi volume on my NAS is easy. I
>>> thought there was no need for more complication, so went with basic
>>> /boot / and swap partitions. Is resizing partitions for LVM easier than
>>> basic partitioning (without LVM)?
>>> 3. Is there a specific tool in KVM suit which performs resizing
>>> partition within the image? Or as I prefer command line tools, is there
>>> a way to achieve resizing without any graphical tool like GParted? With
>>> GParted I had to install all the X and Gnome files, vncserver which
>>> otherwise I don't need.
>>>
>>> I would appreciate any information/hint/experience.
>>>
>>> All the best.
>> Hi.
>>
>> My view is:
>>
>> a) Use LVM so you can manipulate the size of partition(s). Resizing ext4
>> partitions is a horrible job, long and dangerous.
>>
>> b) You can mount an ISO image file of any CD via the guest's virtual CD;
>> no need to mess with physical CD/DVD drives. There is System Rescue CD,
>> the CentOS LiveCD (I have a 5.3 one with mdadm raid support and a bunch
>> of tools; I will soon be making a 6.1 version) or Hiren's Boot CD -
>> Parted. The root partition needs an offline resize, since extX
>> partitions can not be mounted at the time of the resizing.
>>
>> c) All text-based resize tools require higher knowledge and/or
>> experience, like alignment to sectors and similar mumbo-jumbo. When you
>> need to make it happen on a production server without experimentation,
>> and you have done it only once, 3 years ago, it IS mumbo-jumbo.
>>
>> d) As far as I know, KVM can not mount virtual hard drives, so messing
>> with them is not an option, unless you use a "raw" partition on the Host
>> (still haven't tried it).
>>
>
> It is good to know at the very beginning that LVM is the way to go. So,
> I am reinstalling the server with LVM. It is good to know about it so early.
>
> Just for learning, could you please provide some more info about booting
> up the LiveCD ISO image (uploaded to the host) to work on a guest? How
> is the command line?
>
> Thank you for your kind help.
>
Sorry, I do not use the command line for KVM. I use my Desktop to connect
to the server's KVM domain:
Virtual Machine Manager -> File -> Add New Connection -> Fill in:
Hypervisor: QEMU/KVM; Connect to remote host; Method: SSH; username +
password; Hostname: xxx
And you should have full access to your server's KVM domain.
But even if you need to use command line, I am sure you will be able to
find it by googling for "kvm linux boot from cd command line".
Also check out CentOS-virt mailing list Archive (on this same mailing
list server).
http://www.linux-kvm.org is official site for KVM.
--
Ljubomir Ljubojevic
(Love is in the Air)
PL Computers
Serbia, Europe
Google is the Mother, Google is the Father, and traceroute is your
trusty Spiderman...
StarOS, Mikrotik and CentOS/RHEL/Linux consultant
On 10/05/2013 03:49 AM, Marios Zindilis wrote:
> This intrigued me enough to fire up two VMs in VirtualBox, one Win7, one
> CentOS, with a bridged network, and copy your Samba configuration, using
> it as you pasted it in the original message.
>
> There are some questionable options in your smb.conf. For example, there is
> an "interfaces" option, which is meant to limit the network interfaces on
> which Samba listens, but then there is "bind interfaces only = no" which
> negates the former. Also there is "browsable = yes" in the [homes] share
> definition, which I think makes one user's home directory visible to other
> users, which is not usually what you want. That said, your configuration
> should still work fine.
>
> In my case, to get it to work, I had to do "smbpasswd -a admin" and give
> admin a samba password, which made it possible for user admin to browse his
> own share _on_the_localhost_ (on CentOS machine).
>
> To be able to browse it from Windows, either:
>
> 1. You need to also be logged in as "admin" in Windows 7 (worked for me
> when I logged in as "admin" on Win7) or,
>
> 2. You need to create a user mapping, by adding a line in the [Global]
> section of /etc/samba/smb.conf reading "username map =
> /etc/samba/username.map", then editing "/etc/samba/username.map" and adding
> one line in it with "SambaUsername = WindowsUsername". For example, the
> line "admin = marios" inside /etc/samba/username.map worked for me while
> logged in to Win7 as "marios" (not as admin any more).
>
> I hope the above are useful.
>
>
> On Sat, Oct 5, 2013 at 7:54 AM, Chris Weisiger <cweisiger(a)bellsouth.net>wrote:
>
>> When I set it to share I don't need a password... it's configured like an
>> anonymous file server. But I can tune the settings in the actual share
>> section of the conf file.
>>
>> -----Original Message-----
>> From: John R Pierce
>> Sent: Friday, October 04, 2013 11:43 PM
>> To: centos(a)centos.org
>> Subject: Re: [CentOS] Samba problem
>>
>> On 10/4/2013 9:27 PM, Chris Weisiger wrote:
>>> You can set "security = share"
>>>
>>> I had mine set to see the user share but I changed my setup
>> are share passwords even supported anymore? that was the default mode
>> for windows 3.x and 95-98 sharing: each share could have two passwords,
>> one for read-only and one for write, and there was no concept of a user.
>>
>> what I've always found works adequately is to create an smbpasswd entry
>> for each windows user, with the same password as they log onto their
>> desktop. then windows will just autoconnect. if you have unix clients,
>> use nfs, not smb!!
>>
>> what works *best* is to have active directory or another ldap+kerberos
>> implementation, and have all your windows systems joined to the domain
>> and users logging onto domain accounts. THEN you share to the domain
>> accounts and it's all good.
>>
>> windows 7 and newer default to requiring more strict encryption and
>> authentication, which older systems may not provide by default.
>>
>>
>> --
>> john r pierce 37N 122W
>> somewhere on the middle of the left coast
>>
>
>
I thank you and everyone who replied to my Samba problem.
What worked for me is to have a Windows user with the same name as the
user on the CentOS computer and set a Samba password to be the same on
both machines. What confused me is that many times I log on to a service
on another computer and only have to know the name and password of the
computer I am logging in to.
My real home network consists of my wife's Windows computer and multiple
Linux desktops. I back up my computers using rsync. For the Windows
computer I looked at Cygwin, which has the rsync program, but decided
instead to map a drive letter on her computer to a Samba share. She
then could use the Windows backup program and back up to the Samba
share. Afaik, the Windows backup program mirrors the selected files on
the backup device. She has no need of restoring files prior to a
certain date.
On 5/25/07, Les Mikesell <lesmikesell(a)gmail.com> wrote:
> But that's a per-box per-repo specific change, with per-distro, per-repo
> variations for the end users to figure out for themselves. And then it
> won't fail over at all if the url you pick goes away. Aren't computers
> supposed to work for you instead of the other way around?
Sigh!!!!! Why do people insist on doing things the hard way, and insist
on doing it over and over? As explained in my previous post,
I have a local repo and use pxeboot with a kickstart file. There is
one kickstart per distro, per version. The kickstart files are
identical except for the URL to the distro/arch-specific repo and
minor diffs for arch-specific stuff. In the %post section of
kickstart I have a ton of stuff automated, and one of the things is
modifying the location of my repo in the
/etc/yum.repos.d/CentOS-Base.repo file. If people have a ton of RPM-
based servers (RHEL, CentOS, WBEL, Fedora, etc.) that they manage and
regularly image/reimage, it makes sense to check out how much
kickstart and pxebooting can benefit you. We have 200+ servers and
50+ VMs and growing, and it makes life so much simpler. I can image up
a server in about 10 minutes and it will be totally customized the way
I want it, with all the latest updates. We've also added to our repo
the ability to pxeboot-install VMware ESX and Windows 2k3. I haven't
checked it out yet, but I've heard Cobbler takes this one step further
and makes it very very simple. Not sure if Cobbler will handle the
VMware and Windows though. A little bit of time upfront will save you a
ton of time later. Here's an example totally custom kickstart file
below; search the net for pxeboot or pxeinstall and kickstart, it will
save you so much time.
-matt
[root@install kickstart]# cat centos-4.4-i386.ks.cfg
install
text
reboot
url --url http://install.mydomain.com/install/centos/4.4/os/i386/
lang en_US.UTF-8
langsupport --default=en_US.UTF-8 en_US.UTF-8
keyboard us
#xconfig --card "ATI Radeon 7000" --videoram 8192 --hsync 31.5-37.9 --vsync 50-70 --resolution 800x600 --depth 16 --defaultdesktop gnome
network --device eth0 --bootproto dhcp
rootpw --iscrypted mypw
firewall --disabled
selinux --disabled
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
autopart
zerombr yes #windows mbr removal
clearpart --all --drives=sda #s/linux/all (windows)
part /boot --fstype ext3 --size=200 --asprimary
part swap --size=1000 --grow --maxsize=2000 --ondisk=sda
part pv.01 --size=1024 --grow --ondisk=sda
volgroup vg.01 pv.01
logvol / --fstype ext3 --size=1024 --vgname=vg.01 --name=rootvol --grow
logvol /opt --fstype ext3 --size=2048 --vgname=vg.01 --name=junkvol
#part / --fstype ext3 --size=1024 --grow --ondisk=sda
%packages
@ compat-arch-development
@ editors
@ emacs
@ mysql
@ admin-tools
@ development-tools
@ text-internet
@ compat-arch-support
lvm2
kernel-smp-devel
kernel-smp
e2fsprogs
screen
sysstat
net-snmp
%post
CMDLINE="`cat /proc/cmdline`"
HOSTNAME=`expr "$CMDLINE" : '.*hostname=\([^ ]*\).*'`  # grab hostname from pxeboot
export CMDLINE HOSTNAME
set > /root/ks.env
if [ "$HOSTNAME" ]
then
hostname $HOSTNAME
fi
# use these next few lines if you have a RHN account
#rpm --import /usr/share/rhn/RPM-GPG-KEY
#rhnreg_ks --force --profilename ${HOSTNAME:-"tmp-host-name.mydomain.com"} --username myrhnuser --password myrhnpasswd
#up2date --nox --force --update -v
# use these lines if you're using CentOS
wget -O /etc/yum.repos.d/CentOS-Base.repo http://install.mydomain.com/install/kickstart/CentOS-Base.repo  # get my custom CentOS-Base.repo file
rpm --import http://install.mydomain.com/install/CentOS/RPM-GPG-KEY-CentOS-4
yum -y update #why not update to the latest packages right after install
# look I'm setting custom resolv.conf info
cat > /etc/resolv.conf <<EOF
domain mydomain.com
nameserver x.x.x.x
nameserver x.x.x.x
EOF
chkconfig ntpd on
chkconfig cups off
chkconfig cups-config-daemon off
chkconfig sendmail off
# set machine to boot at init3
perl -i -pe 's/id\:5\:initdefault\:/id\:3\:initdefault\:/g' /etc/inittab
ed /etc/sysconfig/i18n <<EOF
%s/en_US\.UTF-8/en_US/g
w
EOF
ed /etc/mail/sendmail.mc <<DONE
%s/.*MASQUERADE_AS.*/MASQUERADE_AS(\`mydomain.com\')dnl/
w
q
DONE
make -C /etc/mail
umount /opt
lvremove -f /dev/vg.01/junkvol
ed /etc/fstab <<EOF
%s/\/dev\/vg.01\/junkvol/\#\/dev\/vg.01\/junkvol/g
w
EOF
makewhatis
cat >> /etc/bashrc <<EOF
alias dir='ls -lasF'
EOF
cat >> /etc/profile <<EOF
EDITOR=vi
SVN_EDITOR=vi
export EDITOR SVN_EDITOR
EOF
# install a custom snmpd.conf
wget -O /etc/snmp/snmpd.conf http://install.mydomain.com/install/snmp/snmpd.conf
chkconfig snmpd on
# create a directory for software installs
mkdir /opt/install
Hi Niki,
The principle to work by here is 'least required access'. There are two
functional types of users we care about: the one executing the PHP
code (probably apache or php-fpm), and admins like yourself with
FTP/shell access. Upstream WordPress documents the application's write
requirements at
https://codex.wordpress.org/Hardening_WordPress#File_Permissions -
read it to know where the web server will expect write access, but
don't follow the instructions - especially the numbers for chmod - by
rote!
On Sat, Dec 2, 2017 at 3:30 AM, Nicolas Kovacs <info(a)microlinux.fr> wrote:
>
> Hi,
>
> Until a few months ago, when I had to setup a web server under CentOS, I
> assigned (I'm not sure about the correct english verb for "chown"ing)
> all the web pages to the apache user and group. To give you an example,
> let's say I have a static website under /var/www/myserver on a CentOS
> server running Apache. Then I would configure permissions for the web
> content like this:
>
> # chown -R apache:apache /var/www/myserver
> # find /var/www/myserver -type d -exec chmod 0750 {} \;
> # find /var/www/myserver -type f -exec chmod 0640 {} \;
>
> Some time ago a fellow sysadmin (Remi Collet on the fr.centos.org forum)
> pointed out that this is malpractice in terms of security, and that the
> stuff under /var/www should *not* be owned by the user/group running the
> webserver.
Right, this gives Apache write access over *everything*. That means
that Apache could potentially change your site code. Many attack
vectors rely on changing wordpress files or creating new files, so
this should not be possible.
> Which means that for the static website above, I could have
> something like this, for example:
>
> # chown -R microlinux:microlinux /var/www/myserver
> # find /var/www/myserver -type d -exec chmod 0755 {} \;
> # find /var/www/myserver -type f -exec chmod 0644 {} \;
>
> Or even this:
>
> # chown -R nobody:nobody /var/www/myserver
> # find /var/www/myserver -type d -exec chmod 0755 {} \;
> # find /var/www/myserver -type f -exec chmod 0644 {} \;
I don't like the convention of creating an arbitrarily named user to
own website files. Nicolas is logging in and working on the server;
make a user for yourself, e.g. nkovacs, to do your work. Shared hosting
companies tend to follow the "one FTP user named after the website" or
"one shell user named after the customer" model and expect their
customers to share a single login account, but if you have root access
to the server there's no reason to restrict yourself this way. It also
leads to a situation where a group of folks who need to work on the
site will share the single login account, making it impossible to
answer questions like "who changed this file" or "who is logged in
right now". If any kind of compliance is a concern, generic/anonymous
login is a no-go. If compliance is not a concern, there's still no real
benefit to making up usernames for yourself on a production system
that are not your own name, and sharing credentials is still bad
practice in principle.
>
> Now I'm hosting quite a few Wordpress sites on various CentOS servers.
> Some stuff in Wordpress has to be writable by Apache. If I want to keep
> stuff as secure as possible, here's the permissions I have to define.
>
> # cd /var/www
> # chown -R microlinux:microlinux wordpress-site/
> # find wordpress-site/ -type d -exec chmod 0755 {} \;
> # find wordpress-site/ -type f -exec chmod 0644 {} \;
> # cd wordpress-site/html
> # chown -R microlinux:apache wp-content/
> # find wp-content/ -type d -exec chmod 0775 {} \;
> # find wp-content/ -type f -exec chmod 0664 {} \;
>
> As far as I know, this is the most secure setup for Wordpress as far as
> permissions are concerned.
Wordpress plugins are in wp-content. Allowing a wordpress plugin to
be compromised is functionally equivalent to allowing the core code to
be compromised, so we do not want Apache to write plugin code.
`wp-content/uploads` is the only *stock* directory I'm aware of that
Wordpress *requires* write access to. Some plugins might have
additional directories they write to; this should be documented for
each such plugin.
With an application like Wordpress, Apache only needs to create files
for things like images uploaded for posts. It should never be allowed
to write in a directory where PHP files are. Conversely, any
directory where it *can* write should not be used for PHP code. You
can block that with the snippet below, again from upstream wordpress:
<Directory "/var/www/myserver/wp-content/uploads">
# Kill PHP Execution
<Files ~ "\.ph(?:p[345]?|t|tml)$">
deny from all
</Files>
</Directory>
You might notice that I used a <Directory> block where the page I
linked to does not. The upstream example has you drop the <Files>
block into a .htaccess file; in that context, the directory scope is
implied by the location of the .htaccess file itself. That's a
convenient way to adjust Apache configuration if you do not have
privileged shell access, but it also means the .htaccess file is read
and interpreted anew for *every request*. You *do* have privileged
shell access, so you can put the directives into the config file
that's read once, when the server loads, and avoid the recurring I/O
expense.
> The problem is, I can't use automatic updates
> anymore. Whenever Wordpress releases a new version, I have to set
> permissions temporarily like this:
>
> # chown -R apache:apache /var/www/wordpress-site
>
> Then I can launch the update from within the Wordpress dashboard. And
> once the update is complete, I have to redefine sane permissions as
> above. Which is quite a bit tedious if you have two dozen Wordpress
> sites to manage, even if you have little scripts to define the permissions.
>
> So I'm finally coming to my question. How problematic is it really to
> have the apache user and group owning the stuff under /var/www? I admit
> I followed the users' advice out of respect for his competence. But as
> far as I know, sometimes you get security advice where the resulting
> hassle far outweighs the real benefits.
>
> Any suggestions?
>
> Cheers,
>
> Niki
>
> --
You're describing Wordpress's direct filesystem access method. There
should be an option to choose another method. You can install the
ssh2 PHP module and use the SSH access method, or the FTP access
method. This way, when you update, you enter your Linux user
credentials into Wordpress's update form and it works because
'nkovacs' owns those files, or is a member of the owning group. If you
choose FTP, do not allow public access, because FTP transmits
credentials in the clear. Admins with real FTP clients can use sftp,
which comes with the openssh server for free. With the SSH or FTP
update methods, Apache *never* gets write access to PHP files
directly; it acts as an [s]ftp client to do the update work.
When 'nkovacs' creates a new file, it will be owned by nkovacs:nkovacs
and inherit octal permissions from the parent directory and umask.
That might break some of our assumptions about the role played by
group membership, so make the group ownership inherited by setting the
setgid bit: `chmod -R g+s /var/www/mysite/`. Now, new files will
inherit group membership from the parent directory instead of from the
creating user.
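The inheritance is easy to verify in a scratch directory before
touching the live tree; this is just a sketch, with the scratch path
made up for the demonstration:

```shell
#!/bin/sh
# Demonstrate setgid group inheritance in a scratch directory.
set -e
demo=$(mktemp -d)            # stand-in for /var/www/mysite
chmod g+s "$demo"            # set the setgid bit on the directory
touch "$demo/newfile"        # create a file the way an admin would
# The new file's group matches the directory's group, not necessarily
# the creating user's primary group:
dir_gid=$(stat -c %g "$demo")
file_gid=$(stat -c %g "$demo/newfile")
[ "$dir_gid" = "$file_gid" ] && echo "group inherited from directory"
rm -rf "$demo"
```

The same check works on the real tree with any file a co-admin creates.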
If 'nkovacs' is the only admin who will ever update wordpress or work
with the site in a shell or FTP session, you can put 'apache' in the
group owner slot and set the octal permissions on directories to
restrict it appropriately - e.g. 2755 everywhere except
`wp-content/uploads`, which could be 2775.
If others will also perform those tasks, then you need a user group
for them, e.g. 'microsites_devs' or 'mysite_admins'. We still want
sticky groups, but we can't use the group slot to give Apache
permissions anymore. Those image uploads are coming from the public
internet, so it's not *too* much of a stretch to use the 'other' slot
to allow apache writes via a 2777 mode - but this violates the 'least
privilege' principle, because any user on the system, not just apache,
can then tamper with the files. A judicious application of file ACLs
fits this use case better, e.g.
setfacl -m u:apache:rwX -R /var/www/mysite/wp-content/uploads
setfacl -m d:u:apache:rwX -R /var/www/mysite/wp-content/uploads
The capital 'X' grants execute only on directories (and on files that
are already executable), in contrast to 'x', which would make every
file executable. The first invocation applies to existing files, and
the 'd:' in the second sets the 'default' ACL inherited by new files.
Also, don't forget that /var/www by default has the SELinux context
httpd_sys_content_t, which will not allow writes regardless of octal
permissions or ACLs. Wrap up by changing the context of the
directories you've determined should be writable:
semanage fcontext -a -t httpd_sys_rw_content_t
"/var/www/mysite/wp-content/uploads(/.*)?"
restorecon -R /var/www/mysite/
TL;DR my process is:
- Make a list of real humans that need to work on the site
- Assume the web server user should have at least read access on all
files in the site documentroot, or we'd put them somewhere else.
- Make a list of directories (uploads, cache, session files, etc) the
web server must have write access to.
- Use various permissions utilities to make sure humans and web server
can do their assigned work and nothing more.
The first three steps are basically requirements gathering; for best
results, don't skip ahead to applying permissions changes until you've
established what permissions are needed.
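For what it's worth, here is a sketch of the kind of "little script"
Niki mentioned, applying the baseline modes and then loosening only
the writable directories. Every path and mode is illustrative, and the
chown step is omitted so it can be tried as a non-root user against a
scratch tree:

```shell
#!/bin/sh
# Reset baseline permissions on a site tree: setgid 2755 dirs / 0644
# files, then loosen only the directories the web server must write
# to. SITE and WRITABLE are illustrative; adjust for your layout.
set -e
SITE=${SITE:-$(mktemp -d)}               # e.g. /var/www/mysite
WRITABLE="wp-content/uploads"            # dirs Apache may write to
mkdir -p "$SITE/$WRITABLE"               # build a scratch tree to test
touch "$SITE/index.php" "$SITE/$WRITABLE/image.jpg"
find "$SITE" -type d -exec chmod 2755 {} +   # setgid keeps group sticky
find "$SITE" -type f -exec chmod 0644 {} +
find "$SITE/$WRITABLE" -type d -exec chmod 2775 {} +
find "$SITE/$WRITABLE" -type f -exec chmod 0664 {} +
stat -c '%a %n' "$SITE/index.php" "$SITE/$WRITABLE"
```

In production you would add the chown and setfacl steps from above and
drop the scratch-tree setup.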
HTH,
-- Pete
On 2019-04-18 23:05, Kaushal Shriyan wrote:
> On Tue, Apr 16, 2019 at 10:30 PM Kaushal Shriyan <kaushalshriyan(a)gmail.com>
> wrote:
>
> > Hi,
> >
> > Is there a way to measure network bandwidth per process in CentOS Linux
> > release 7.6.1810 (Core) using any utility? I was reading about nethogs but
> > it does not have the option to run it in daemon mode so that we can take a
> > look at historical data to figure out the process which was consuming high
> > network bandwidth instead it is a good tool for Live monitoring.
> >
> > Please suggest. Thanks in Advance.
> >
> > Best Regards,
> >
> > Kaushal
Hi Kaushal,
You might take this as a starting point. Please read carefully and
test on a non-critical system before implementing, in other words,
USE AT YOUR OWN RISK. TL;DR: use iptables UID/GID matching to count
packets.
Best regards,
--
Charles Polisher
#!/bin/bash
# Report on network usage by user, using iptables
# C. Polisher 2013-06-03 Based on a post by H. LaDerrick on NANOG-L
# Which network interface to use
IFACE=eth0
# Sampling interval for recording traffic (integer seconds)
INTERVAL=10
# Number of intervals to collect stats for.
COUNT=10
# For verbose execution set value to 1; 2 is very verbose
DEBUG=0
# --- no changes should be required below this line
if [ $UID -ne 0 ] ; then
echo "Must run as root"
exit 1
fi
if [ $COUNT -lt 1 ] ; then
echo "Count must be a positive integer from 1 to $((2**32-1))"
exit 1
fi
# FIXME Magic constant 1000 should be based on some fact from the OS;
# OS users could possibly be of interest too (all users).
userlist=`awk -F':' '{if ($3 >= 1000)print $1;}' /etc/passwd | sort | tr '\n' ' '`
OFILE=`mktemp -t "nettrack.XXXXXX"` || exit 1
approvedelete=0
#
# Delete pre-existing rules from pre-existing chains
#
ruleindex=0
for i in $userlist;
do
for j in tcp udp icmp ;
do
ruleindex=$(( ruleindex + 1 ))
RULENAME="${i}_${j}"
## preflight=`iptables --list-rules --line-numbers | fgrep "$RULENAME"`
preflight=`iptables -L --line-numbers | fgrep "$RULENAME"`
[ $DEBUG -gt 0 ] && echo "Username: $i Proto: $j Rulename: $RULENAME Ruleindex:$ruleindex"
[ $DEBUG -gt 1 ] && echo '-------->>>>>>>>>>>----------'
[ $DEBUG -gt 1 ] && echo $preflight
[ $DEBUG -gt 1 ] && echo '--------<<<<<<<<<<<----------'
if [[ ( "X$preflight" != X ) && ( $approvedelete -eq 0 ) ]] ; then
read -p "Found pre-existing ruleset(s). Delete (Y/N)." m
if [ "X$m" != XY ] ; then
echo "User abort."
exit 1
else
approvedelete=1
fi
fi
if [[ ( "X$preflight" != X ) && ( $approvedelete -eq 1 ) ]] ; then
[ $DEBUG -gt 0 ] && echo "iptables --delete OUTPUT 1 ($RULENAME) ($ruleindex)"
iptables --delete OUTPUT 1
fi
done
done
#
# Delete pre-existing chains
#
for i in $userlist;
do
for j in tcp udp ;
do
RULENAME="${i}_${j}"
preflight=`iptables -S | fgrep "$RULENAME"`
if [[ ( "X$preflight" != X ) || ( $approvedelete -eq 1 ) ]] ; then
[ $DEBUG -gt 0 ] && echo "iptables --delete-chain $RULENAME"
iptables --delete-chain "$RULENAME"
fi
done
done
#
# Instantiate a rule for each user/protocol combination
#
chaincreated=0
for j in tcp udp ;
do
for i in $userlist;
do
RULENAME="${i}_${j}"
[ $DEBUG -gt 0 ] && echo "iptables -N $RULENAME"
iptables -N "$RULENAME"
[ $DEBUG -gt 0 ] && echo iptables -I OUTPUT -m owner -o ${IFACE} -p $j --uid-owner ${i} -j "$RULENAME"
iptables -I OUTPUT -m owner -o ${IFACE} -p $j --uid-owner ${i} -j "$RULENAME"
done
done
# TODO: prompt for display on stdout; prompt for save results in $OFILE
echo "You may wish to tail the report file $OFILE"
echo "Date Time Packets Bytes User Proto Iface" > $OFILE
for i in `seq 1 $COUNT` ;
do
echo "Sleeping for $INTERVAL seconds on cycle $i of $COUNT" > /dev/stderr
sleep $INTERVAL
# Parameter --zero zeroes the rules counters after outputting them
iptables -n -v --exact --zero -L 2>&1 \
| egrep -v '^$|^Chain|^ pkts|^Zeroing' \
| egrep '[_]tcp|[_]udp' \
| awk '
{if ( $1 > 0 ) printf("%s %d %d %s %s %s\n",strftime("%F %H:%M:%S%z"),$1,$2,$3,$4,$7);}
' \
| sort -nr >> $OFILE
done
exit 0
# Re: Security Guideance
#
# From: LaDerrick H.
# Date: Tue Feb 23 15:49:32 2010
# List-archive: http://mailman.nanog.org/mailman/nanog
# List-id: North American Network Operators Group <nanog.nanog.org>
#
# Paul Stewart wrote:
# > We have a strange series of events going on in the past while.... Brief
# > history here, looking for input from the community - especially some of
# > the security folks on here.
# >
# > We provide web hosting services - one of our hosting boxes was found a
# > while back with root kits installed, un patched software and lots of
# > other "goodies". With some staff changes in place (don't think I need
# > to elaborate on that) we are trying to clean up several issues including
# > this particular server. A new server was provisioned, patched, and
# > deployed. User data was moved over and now the same issue is coming
# > back....
# > The problem is that a user on this box appears to be launching high
# > traffic DOS attacks from it towards other sites. These are UDP based
# > floods that move around from time to time - most of these attacks only
# > last a few minutes.
#
# Counting outbound udp bytes and packets can help spot anomalies.
# Something like this would help but may be unwieldy if you have thousands
# of users on a single box:
#
# WANIF=eth0
# userlist="userA userB user..."
# for i in ${userlist}
# do
# iptables -N ${i}_UDP
# iptables -I OUTPUT -m owner -o ${WANIF} -p udp --uid-owner ${i} -j ${i}_UDP
# done
#
# Then look at counters with:
# iptables -nvL OUTPUT | grep _UDP | sort.......
#
#
# I wouldn't leave this in place full-time for thousands of accounts
# though without attempting to measure the impact on network performance.
#
# > I've done tcpdumps within seconds of the attack starting and to date
# > been unable to find the source of this attack (we know the server,
# > just not sure which customer it is on the server that's been
# > compromised). Several hours of scanning for php, cgi, pl type files
# > have been wasted and come up nowhere...
# >
# > It's been suggested to dump IDS in front of this box and I know I'll
# > get some feedback positive and negative in that aspect.
# >
# > What tools/practices do others use to resolve this issue? It's a
# > Centos 5.4 box running latest Plesk control panel.
# >
# > Typically we have found it easy to track down the offending script or
# > program - this time hasn't been easy at all...
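Once the script above has collected a few cycles, the report file can
be summarized per chain with awk. The sample data below is made up;
the column layout matches the header line the script writes:

```shell
#!/bin/sh
# Sum the Bytes column per user_proto chain from a nettrack report.
# The sample report stands in for the $OFILE the script writes.
cat > /tmp/nettrack.sample <<'EOF'
Date Time Packets Bytes User Proto Iface
2013-06-03 12:00:10+0000 10 2048 alice_tcp tcp eth0
2013-06-03 12:00:20+0000 5 512 alice_tcp tcp eth0
2013-06-03 12:00:20+0000 7 9000 bob_udp udp eth0
EOF
# Skip the header, key on the chain name (column 5), sum bytes (col 4)
awk 'NR > 1 { bytes[$5] += $4 }
     END { for (u in bytes) printf "%s %d\n", u, bytes[u] }' \
    /tmp/nettrack.sample | sort
```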
by GRIFFO Consultoria & Treinamentos - Rodrigo Griffo
Good afternoon everyone,
I'm running CentOS 5.7 32-bit with squid 2.6 and sarg 2.3.1.1, and I'm
getting this error message:
sarg -f /etc/sarg/sarg.conf
SARG: Records in file: 78847, reading: 100,00%
SARG: (grepday) Fontname
/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf not found
I've already installed this font, I've placed it at the path it says
it can't find, I've even put it in sarg's fonts folder, and nothing.
My sarg conf follows:
===========================
# sarg.conf
#
# TAG: access_log file
# Where is the access.log file
# sarg -l file
#
#access_log /usr/local/squid/var/logs/access.log
access_log /var/log/squid/access.log
# TAG: graphs yes|no
# Use graphics where is possible.
# graph_days_bytes_bar_color blue|green|yellow|orange|brown|red
#
#graphs yes
#graph_days_bytes_bar_color orange
# TAG: graph_font
# The full path to the TTF font file to use to create the graphs.
# It is required if graphs is set to yes.
#
graph_font /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf
# TAG: title
# Specify the title for the html page.
#
title "Squid User Access Reports"
# TAG: font_face
# Specify the font for the html page.
#
font_face Tahoma,Verdana,Arial
# TAG: header_color
# Specify the header color
#
#header_color darkblue
# TAG: header_bgcolor
# Specify the header bgcolor
#
#header_bgcolor blanchedalmond
# TAG: font_size
# Specify the text font size
#
#font_size 9px
# TAG: header_font_size
# Specify the header font size
#
#header_font_size 9px
# TAG: title_font_size
# Specify the title font size
#
#title_font_size 11px
# TAG: background_color
# Html page background color
#
# background_color white
# TAG: text_color
# Html page text color
#
#text_color #000000
# TAG: text_bgcolor
# Html page text background color
#
#text_bgcolor lavender
# TAG: title_color
# Html page title color
#
#title_color green
# TAG: logo_image
# Html page logo.
#
#logo_image none
# TAG: logo_text
# Html page logo text.
#
#logo_text ""
# TAG: logo_text_color
# Html page logo text color.
#
#logo_text_color #000000
# TAG: logo_image_size
# Html page logo image size.
# width height
#
#image_size 80 45
# TAG: background_image
# Html page background image
#
#background_image none
# TAG: password
# User password file used by Squid authentication scheme
# If used, generate reports just for those users.
#
#password none
# TAG: temporary_dir
# Temporary directory name for work files
# sarg -w dir
#
#temporary_dir /tmp
# TAG: output_dir
# The reports will be saved in that directory
# sarg -o dir
#
#output_dir /var/www/html/squid-reports
output_dir /var/www/sarg/ONE-SHOT
# TAG: output_email
# Email address to send the reports to. If you use this tag, no html
# reports will be generated.
# sarg -e email
#
#output_email none
# TAG: resolve_ip yes/no
# Convert ip address to dns name
# sarg -n
#resolve_ip no
resolve_ip yes
# TAG: user_ip yes/no
# Use IP address instead of userid in reports.
# sarg -p
#user_ip no
# TAG: topuser_sort_field field normal/reverse
# Sort field for the Topuser Report.
# Allowed fields: USER CONNECT BYTES TIME
#
#topuser_sort_field BYTES reverse
# TAG: user_sort_field field normal/reverse
# Sort field for the User Report.
# Allowed fields: SITE CONNECT BYTES TIME
#
#user_sort_field BYTES reverse
# TAG: exclude_users file
# users within the file will be excluded from reports.
# you can use indexonly to have only index.html file.
#
#exclude_users none
# TAG: exclude_hosts file
# Hosts, domains or subnets will be excluded from reports.
#
# Eg.: 192.168.10.10 - exclude ip address only
# 192.168.10.0/24 - exclude full C class
# s1.acme.foo - exclude hostname only
# *.acme.foo - exclude full domain name
#
#exclude_hosts none
# TAG: useragent_log file
# useragent.log file path to generate the useragent report.
#
#useragent_log none
# TAG: date_format
# Date format in reports: e (European=dd/mm/yy), u (American=mm/dd/yy),
# w (Weekly=yy.ww)
#
#date_format u
# TAG: per_user_limit file MB
# Saves userid to file if download exceeds n MB.
# This option allows you to disable user access if the user exceeds a
# download limit.
#
#per_user_limit none
# TAG: lastlog n
# How many report files must be kept in the reports directory.
# The oldest report file will be automatically removed.
# 0 - no limit.
#
#lastlog 0
# TAG: remove_temp_files yes
# Remove temporary files: geral, usuarios, top, periodo from the root
# report directory.
#
#remove_temp_files yes
# TAG: index yes|no|only
# Generate the main index.html.
# only - generate only the main index.html
#
index yes
#
#index_tree file
# TAG: overwrite_report yes|no
# yes - if the report date already exists then it will be overwritten.
# no - if the report date already exists then it will be renamed to
#  filename.n, filename.n+1
#
#overwrite_report no
# TAG: records_without_userid ignore|ip|everybody
# What can I do with records without a user id (no authentication) in
# the access.log file?
#
# ignore - This record will be ignored.
# ip - Use ip address instead. (default)
# everybody - Use "everybody" instead.
#
#records_without_userid ip
# TAG: use_comma no|yes
# Use a comma instead of a point in reports.
# Eg.: use_comma yes => 23,450,110
# use_comma no => 23.450.110
#
#use_comma no
# TAG: mail_utility
# Mail command to use to send reports via SMTP. Sarg calls it like this:
# mail_utility -s "SARG report, date" "output_email" <"mail_content"
#
# Therefore, it is possible to add more arguments to the command by
# specifying them here.
#
# If you need to, you can use a shell script to process the content of
# /dev/stdin (/dev/stdin is the mail_content passed by sarg to the
# script) and call whatever command you like. It is not limited to
# mailing the report via SMTP.
#
# Don't forget to quote the command if necessary (i.e. if the path
# contains characters that must be quoted).
#
#mail_utility mailx
mail_utility mail
# TAG: topsites_num n
# How many sites in topsites report.
#
#topsites_num 100
# TAG: topsites_sort_order CONNECT|BYTES A|D
# Sort for topsites report, where A=Ascendent, D=Descendent
#
#topsites_sort_order CONNECT D
# TAG: index_sort_order A/D
# Sort for index.html, where A=Ascendent, D=Descendent
#
#index_sort_order D
# TAG: exclude_codes file
# Ignore records with these codes. Eg.: NONE/400
# Write one code per line. Lines starting with a # are ignored.
# Only records whose code matches one of the lines exactly are
# rejected. The comparison is not case sensitive.
#
#exclude_codes /usr/local/sarg/exclude_codes
# TAG: replace_index string
# Replace "index.html" in the main index file with this string
# If null "index.html" is used
#replace_index <?php echo str_replace(".", "_", $REMOTE_ADDR); echo ".html"; ?>
# TAG: max_elapsed milliseconds
# If the elapsed time recorded in the log is greater than max_elapsed,
# use 0 for the elapsed time.
# Use 0 for no checking
#
#max_elapsed 28800000
# 8 Hours
# TAG: report_type type
# What kind of reports to generate.
# topusers - users, sites, times, bytes, connects, links to accessed sites, etc
# topsites - site, connect and bytes report
# sites_users - users and sites report
# users_sites - accessed sites by the user report
# date_time - bytes used per day and hour report
# denied - denied sites with full URL report
# auth_failures - authentication failures report
# site_user_time_date - sites, dates, times and bytes report
# downloads - downloads per user report
#
# Eg.: report_type topsites denied
#
#report_type topusers topsites sites_users users_sites date_time denied auth_failures site_user_time_date downloads
# TAG: usertab filename
# You can change the "userid" or the "ip address" to be a real user
# name on the reports. If resolve_ip is active, the ip address is
# resolved before being looked up in this file. That is, if you want
# to map the ip address, be sure to set resolve_ip to no or the
# resolved name will be looked up in the file instead of the ip
# address. Note that it can be used to resolve any ip address known to
# the dns and then map the unresolved ip addresses to a name found in
# the usertab file.
# Table syntax:
# userid name or ip address name
# Eg:
# SirIsaac Isaac Newton
# vinci Leonardo da Vinci
# 192.168.10.1 Karol Wojtyla
#
# Each line must be terminated with '\n'
# If usertab has the value "ldap" (case-insensitive), user names will
# be taken from the LDAP server. This is the method for retrieving
# usernames from Active Directory.
#
#usertab none
# TAG: LDAPHost hostname
# FQDN or IP address of host with LDAP service or AD DC
# default is '127.0.0.1'
#LDAPHost 127.0.0.1
# TAG: LDAPPort port
# LDAP service port number
# default is '389'
#LDAPPort 389
# TAG: LDAPBindDN CN=username,OU=group,DC=mydomain,DC=com
# DN of the LDAP user who is authorized to read user names from the
# LDAP base
# default is empty line
#LDAPBindDN cn=proxy,dc=mydomain,dc=local
# TAG: LDAPBindPW secret
# Password of the DN who is authorized to read user names from the
# LDAP base
# default is empty line
#LDAPBindPW secret
# TAG: LDAPBaseSearch OU=users,DC=mydomain,DC=com
# LDAP search base
# default is empty line
#LDAPBaseSearch ou=users,dc=mydomain,dc=l
.
.
.
(from here down it's all defaults)
=========================
Does anyone know where this "beloved" error is hiding?
--
Thanks in advance
RODRIGO GRIFFO
IT Manager
27-9999-5733
http://rodrigogriffo.blogspot.com
MSN: rodrigogriffo(a)gmail.com
SKYPE: rodrigogriffo
WINDOWS AND LINUX SERVERS
Squid-Samba-Iptables
Site blocking
Monitoring and reporting
Wireless networks - WPA2
Maintenance contracts
Hi everyone,
I'm replying to myself to help anyone else who happens to get the polkit
timeouts. Our CentOS7 machines are joined to our Active Directory domain
and use AD for authentication and account lookups (Using the SSSD AD
provider). We're NOT using FreeIPA. The polkit timeouts were caused by sssd
taking too long to respond to user information lookups for users that were
in Active Directory.
The solution was to set "enumerate = False" in /etc/sssd/sssd.conf and
restart the sssd service or reboot the machine. If "enumerate" is not
present in sssd.conf, then it defaults to False.
In addition to the polkit hangs, we were also experiencing the following
problems, which went away or improved after the sssd.conf change was made:
- Running "id $USERNAME" was taking many seconds when looking up users
in Active Directory
- Logins were taking a while (5+ seconds) or would just hang
- Unlocking a machine from the screensaver would sometimes fail.
- General system sluggishness.
- High system CPU/load with no obvious culprits according to "top"
- The sssd_be process was often taking 5% or more of a CPU.
The problems were more prevalent on our big time-sharing systems (64
cores/512GB RAM), that have multiple (15+) simultaneous users running
large memory or CPU interactive jobs. The problems also hit some of our
single-user workstations, but the most-affected users were still our
compute-heavy research users.
For others' reference, I'm also running the sssd cache on a tmpfs
filesystem. Here is my sanitized sssd.conf file:
> [sssd]
> config_file_version = 2
> services = nss, pam
> domains = subdomain.example.com
>
> [domain/subdomain.example.com]
> ad_domain = subdomain.example.com
> krb5_realm = subdomain.example.com
> realmd_tags = manages-system joined-with-samba
> cache_credentials = True
> id_provider = ad
> auth_provider = ad
> chpass_provider = ad
>
> enumerate = False
> access_provider = ad
> krb5_store_password_if_offline = True
> ldap_id_mapping = False
> use_fully_qualified_names = False
> krb5_renewable_lifetime = 60d
> krb5_lifetime = 60d
> krb5_renew_interval = 600s
>
Sincerely,
Jason
---------------------------------------------------------------------------
Jason Edgecombe | Linux Administrator
UNC Charlotte | The William States Lee College of Engineering
9201 University City Blvd. | Charlotte, NC 28223-0001
Phone: 704-687-1943
jwedgeco(a)uncc.edu | http://engr.uncc.edu | Facebook
---------------------------------------------------------------------------
If you are not the intended recipient of this transmission or a person
responsible for delivering it to the intended recipient, any disclosure,
copying, distribution, or other use of any of the information in this
transmission is strictly prohibited. If you have received this transmission
in error, please notify me immediately by reply e-mail or by telephone at
704-687-1943. Thank you.
On Fri, Mar 10, 2017 at 1:01 PM, Edgecombe, Jason <jwedgeco(a)uncc.edu> wrote:
> Hi everyone,
>
> We seem to be having issues on multiple CentOS 7.3 machines. The problem
> seems to revolve around polkitd. At some random time, polkitd seems to stop
> responding on my systems. Along with this, there might be hundreds of
> defunct pkla-check-authorization processes. If I reboot, then things are
> fine for a while.
>
> I don't see any activity in the unabridged journal to suggest anything
> that might be triggering polkitd. The puppet run finished 5 minutes before
> polkitd lost its head.
>
> polkit version is polkit-0.112-11.el7_3.x86_64
>
> Any help is appreciated.
>
> Thanks,
> Jason
>
> Here is some condensed output from the "journalctl -u polkit" command:
> Mar 09 04:02:14 myhost systemd[1]: Starting Authorization Manager...
> Mar 09 04:02:14 myhost polkitd[1018]: Started polkitd version 0.112
> Mar 09 04:02:14 myhost polkitd[1018]: Loading rules from directory
> /etc/polkit-1/rules.d
> Mar 09 04:02:14 myhost polkitd[1018]: Loading rules from directory
> /usr/share/polkit-1/rules.d
> Mar 09 04:02:14 myhost polkitd[1018]: Finished loading, compiling and
> executing 7 rules
> Mar 09 04:02:14 myhost systemd[1]: Started Authorization Manager.
> Mar 09 04:02:14 myhost polkitd[1018]: Acquired the name
> org.freedesktop.PolicyKit1 on the system bus
> Mar 09 04:02:53 myhost polkitd[1018]: Registered Authentication Agent for
> unix-session:c1 (system bus name :1.41 [gnome-shell --mode=gdm], object
> path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
> Mar 09 04:08:25 myhost polkitd[1018]: Reloading rules
> Mar 09 04:08:25 myhost polkitd[1018]: Collecting garbage unconditionally...
> Mar 09 04:08:25 myhost polkitd[1018]: Loading rules from directory
> /etc/polkit-1/rules.d
> Mar 09 04:08:25 myhost polkitd[1018]: Loading rules from directory
> /usr/share/polkit-1/rules.d
> Mar 09 04:08:25 myhost polkitd[1018]: Finished loading, compiling and
> executing 8 rules
> Mar 09 04:08:25 myhost polkitd[1018]: Reloading rules
> Mar 09 04:08:25 myhost polkitd[1018]: Collecting garbage unconditionally...
> Mar 09 04:08:25 myhost polkitd[1018]: Loading rules from directory
> /etc/polkit-1/rules.d
> Mar 09 04:08:25 myhost polkitd[1018]: Loading rules from directory
> /usr/share/polkit-1/rules.d
> Mar 09 04:08:25 myhost polkitd[1018]: Finished loading, compiling and
> executing 8 rules
> Mar 09 04:08:53 myhost polkitd[1018]: Reloading rules
> ... (snipped more rules loading)
> Mar 09 04:10:39 myhost polkitd[1018]: Collecting garbage unconditionally...
> Mar 09 04:10:39 myhost polkitd[1018]: Loading rules from directory
> /etc/polkit-1/rules.d
> Mar 09 04:10:39 myhost polkitd[1018]: Loading rules from directory
> /usr/share/polkit-1/rules.d
> Mar 09 04:10:39 myhost polkitd[1018]: Finished loading, compiling and
> executing 7 rules
> Mar 09 16:59:42 myhost polkitd[1018]: /etc/polkit-1/rules.d/49-polkit-pkla-compat.rules:21:
> Error: Error spawning helper: Timed out after 10 seconds (g-io-error-quark,
> 24)
> Mar 09 16:59:42 myhost polkitd[1018]: Error evaluating authorization rules
> Mar 10 04:13:34 myhost polkitd[1018]: /etc/polkit-1/rules.d/49-polkit-pkla-compat.rules:21:
> Error: Error spawning helper: Timed out after 10 seconds (g-io-error-quark,
> 24)
> Mar 10 04:13:34 myhost polkitd[1018]: Error evaluating authorization rules
> Mar 10 04:14:32 myhost polkitd[1018]: /etc/polkit-1/rules.d/49-polkit-pkla-compat.rules:21:
> Error: Error spawning helper: Timed out after 10 seconds (g-io-error-quark,
> 24)
> ... (snipped more lines about error evaluating rules)...
>
>
On 1/24/2013 3:27 PM, James Freer wrote:
> On Wed, Jan 23, 2013 at 10:15 AM, Leon Fauster
> <leonfauster(a)googlemail.com> wrote:
>> Am 22.01.2013 um 23:03 schrieb James Freer <jessejazza3.uk(a)gmail.com>:
>>> On Tue, Jan 22, 2013 at 9:15 PM, Johnny Hughes <johnny(a)centos.org> wrote:
>>>> On 01/22/2013 03:01 PM, James Freer wrote:
>>>>> I've just installed v6.3 as a desktop (from
>>>>> Centos-6.3-i386-LiveCD.iso) and to get the hang of the Centos approach
>>>>> and then hope to move on to a server. I've been using linux *buntu for
>>>>> 5 years.
>>>>>
>>>>> Hope i don't sound like a nit but i've got a little confused with the
>>>>> repos. Hoping someone would be kind enough just to clarify. This
>>>>> installation is for stability whilst installing the latest versions
>>>>> available.
>>>>>
>>>>>
>>>>> a] What happens if i select these? I assume that Centos6 is correct so
>>>>> why these?
>>>>>
>>>>> In software sources i have checked;
>>>>> Centos6-base, Centos6-extras, Centos6-updates
>>>>>
>>>>> unchecked;
>>>>> Centos6-contrib, Centos6-media, Centos6-plus
>>>>> i did check these three and media wouldn't, and plus wanted to suggest
>>>>> installation kernel 2.6 [gulp]. Each unchecked without installing.
>>>>>
>>>>> Then we also have the same for 6.0, 6.1, 6.2?
>>>>>
>>>>>
>>>>> b] So which repos to use as i'm unsure where apps are i want to use
>>>>> like efax (ok that's in), yumex, pyrenamer, llgal, abiword. at present
>>>>> i'm not sure which repos to use other than the one's i've mentioned.
>>>>>
>>>>> I then started to look at the repos
>>>>> http://wiki.centos.org/AdditionalResources/Repositories
>>>>>
>>>>> Centos extras and plus -- are these the same as above?
>>>>> CS/GFS, Centos testing, Fast track, debuginfo, contrib, CR - would
>>>>> seem not applicable to me.
>>>>>
>>>>> 3rd party repos
>>>>> rpmforge (DAG), and EPEL seem likely
>>>>> other repos seem likely to replace core packages which wouldn't be applicable.
>>>>> Google RPM gets a "use with care". I used the google browser on *buntu
>>>>> from their website with no problems so i was wondering what the
>>>>> problem would be and whether to use this repoo or the website. The
>>>>> fact that a maintainer has made this repo suggests a reason rather
>>>>> than just convenience.
>>>>>
>>>>>
>>>>> c] Could one just use pkgs.org to install from?
>>>>>
>>>>> My google "centos abiword" threw up yet another repo 'puias' and this
>>>>> has the latest Abiword version 2.8.6.
>>>>> http://pkgs.org/centos-6-rhel-6/puias-i386/abiword-2.8.6-3.puias6.i686.rpm.…
>>>>> I could just install from pkgs.org/centos-6 it would seem? Point is
>>>>> the repos are maintained by an expert rather than folk just
>>>>> downloading.
>>>>>
>>>>> I was then confused further by this
>>>>> http://miles.isb.bj.edu.cn/2012/03/09/how-to-install-abiword-on-centos-6-x6…
>>>>> One needs to add epal and rpmforge just for abiword? Surely the
>>>>> dependencies would be in the same repo
>>>>>
>>>>>
>>>>> I'm sure all is quite simple really but i'd be grateful for the
>>>>> guidance before i mess up an installation.
>>>> puias is another rebuild of RHEL code (like CentOS .. but a different
>>>> project) ... they also just changed their name to Springdale Linux:
>>>>
>>>> http://springdale.math.ias.edu/
>>>>
>>>> WRT recommended repos (that also explains what extras is, centosplus is,
>>>> etc):
>>>>
>>>> http://wiki.centos.org/AdditionalResources/Repositories
>>>>
>>>> Basically, for almost everything I need, I can get it from CentOS, EPEL,
>>>> and maybe RPMForge.
>>>>
>>>> As far as the distro is concerned, CentOS-6 is the distro and the point
>>>> releases are basically just point in time freezes to generate new
>>>> install media. You will always get to the latest version of CentOS-6
>>>> (currently 6.3 and updates to that) by doing "yum update"
>>>>
>>>> CR is a way to get point-release updates out about a week faster: we
>>>> release the RPMs into CR when they are ready, then we make ISOs out
>>>> of them and release the next point version. It usually takes 5-10
>>>> days to make the ISOs, test them and seed them.
>>> puias... sorry, I overlooked that. Clear enough now. I'll use the
>>> EPEL and rpmforge repos and see how I get on.
>
> Hi folks
>
>> I do not recommend using these two repos simultaneously (or use priorities!).
>> LF
> hmmm - well, what is one supposed to do? I've got EPEL installed fine,
> but that doesn't have abiword, pyrenamer and some of the other fairly
> standard apps. I turned to CentOS after Fedora proved to be a bit
> dodgy. CentOS, along with Debian, is supposed to be one of the TWO
> main community distros. If rpmforge shouldn't be installed (as you've
> advised), it seems I need to look at another distro. The advice is to
> stay with a distro's own packages and only use another repo if one
> really has to. I can only assume that CentOS really is for server use
> rather than the desktop... I was just hoping to use it as a desktop
> before moving on to the server route.
>
> james
> _______________________________________________
> CentOS mailing list
> CentOS(a)centos.org
> http://lists.centos.org/mailman/listinfo/centos
I use some things from both EPEL and rpmforge on many systems. However,
I have used the yum configs to enable just what I need from one repo and
exclude everything else. Then, if I'm worried that the other repo also
carries a package I'm pulling in from the first, I exclude it in that
repo's conf file. You should read up on the power of yum, where you can
set priorities, do exclusions and such.
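
As a concrete sketch (assuming CentOS 6, the yum-plugin-priorities
package, and typical repo file names; the includepkgs list is just an
illustration, adjust it to the packages you actually want):

```shell
# Install the priorities plugin so repos can be ranked
yum install yum-plugin-priorities

# In /etc/yum.repos.d/epel.repo, give EPEL precedence:
#   [epel]
#   ...
#   priority=10

# In /etc/yum.repos.d/rpmforge.repo, rank it lower and only allow
# specific packages through, excluding everything else:
#   [rpmforge]
#   ...
#   priority=20
#   includepkgs=abiword* pyrenamer*
```

Lower priority numbers win, so a package available in both repos will be
taken from EPEL, and rpmforge can only supply what you explicitly list.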
Yes, you can get into trouble if you add two repos without any control.
Take something like clamav: one repo might set it up with the username
clam while the other uses clamav. As the updates come down, it suddenly
dies and you have to work out that the logs are owned by the wrong user.
This is just one example of the many things that can go wrong with
mixed repos.
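
When something like that breaks, it helps to check which repo an
installed package actually came from and which users its files expect
(a sketch; clamav is just the example from above):

```shell
# Which repo did the installed package come from?
yum info clamav | grep -i 'from repo'

# Which users do its packaged files belong to?
rpm -q --queryformat '[%{FILEUSERNAME} %{FILENAMES}\n]' clamav | sort -u
```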
--
John Hinton
877-777-1407 ext 502
http://www.ew3d.com
Comprehensive Online Solutions