Hi,
Trying to follow the recipe at
http://wiki.centos.org/HowTos/Cacti_on_CentOS_4.x
Which has a bit of an update for 5.x, but no joy.
Anyone know what this from Cacti should suggest?
Data Query Debug Information
+ Running data query [9].
+ Found type = '6 '[script query].
+ Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
+ XML file parsed ok.
+ Executing script for list of indexes '/usr/bin/php -q /var/www/cacti/scripts/ss_host_cpu.php 127.0.0.1 1 2:161:500:1:10:public:::MD5::DES: index'
+ Executing script query '/usr/bin/php -q /var/www/cacti/scripts/ss_host_cpu.php 127.0.0.1 1 2:161:500:1:10:public:::MD5::DES: query index'
+ Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
+ Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
+ Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
The problem is that this returns no information to Cacti for any of the SNMP queries.
My whole goal here is to get something working to graph CPU core use on a handful of systems.
Or is there a better tool than Cacti (or a better-documented one - hopefully in the form of a simple recipe rather than many haphazard, often out-of-date pages of RTFM)? Cacti admits to serious security flaws; not that this will go on the public net, but I'd be happier running something safer nonetheless.
Thanks, Whit
On Mon, Jun 14, 2010 at 4:29 PM, Whit Blauvelt whit@transpect.com wrote:
Hi,
Trying to follow the recipe at
http://wiki.centos.org/HowTos/Cacti_on_CentOS_4.x
Which has a bit of an update for 5.x, but no joy.
Anyone know what this from Cacti should suggest?
Data Query Debug Information
- Running data query [9].
- Found type = '6 '[script query].
- Found data query XML file at
'/var/www/cacti/resource/script_server/host_cpu.xml'
- XML file parsed ok.
- Executing script for list of indexes '/usr/bin/php -q
/var/www/cacti/scripts/ss_host_cpu.php 127.0.0.1 1 2:161:500:1:10:public:::MD5::DES: index'
- Executing script query '/usr/bin/php -q
/var/www/cacti/scripts/ss_host_cpu.php 127.0.0.1 1 2:161:500:1:10:public:::MD5::DES: query index'
- Found data query XML file at
'/var/www/cacti/resource/script_server/host_cpu.xml'
- Found data query XML file at
'/var/www/cacti/resource/script_server/host_cpu.xml'
- Found data query XML file at
'/var/www/cacti/resource/script_server/host_cpu.xml'
The problem is that this returns no information to Cacti for any of the SNMP queries.
My whole goal here is to get something working to graph CPU core use on a handful of systems.
Or is there a better tool than Cacti (or a better-documented one - hopefully in the form of a simple recipe rather than many haphazard, often out-of-date pages of RTFM)? Cacti admits to serious security flaws; not that this will go on the public net, but I'd be happier running something safer nonetheless.
Thanks, Whit
I don't have an exact answer for you but you may find this tutorial useful.
http://docs.cslabs.clarkson.edu/wiki/Install_Cacti_on_CentOS_5
You may find this information useful as well, even if it's specific to the environment it's used in.
http://docs.cslabs.clarkson.edu/wiki/Monitor_a_Remote_System_with_Nagios/SNMP

HTH, Matt
-- Mathew S. McCarrell Clarkson University '10
mccarrms@gmail.com mccarrms@clarkson.edu 1-518-314-9214
On Mon, Jun 14, 2010 at 04:38:17PM -0400, Mathew S. McCarrell wrote:
I don't have an exact answer for you but you may find this tutorial useful. http://docs.cslabs.clarkson.edu/wiki/Install_Cacti_on_CentOS_5
Thanks. That summarizes nicely the steps I've taken. It's a bit better put together than the several sets of instructions I was working from. The steps it shows are exactly what I ended up doing, though.
You may find this information useful as well, even if it's specific to the environment it's used in. http://docs.cslabs.clarkson.edu/wiki/Monitor_a_Remote_System_with_Nagios/SNM...
Should be useful when I extend our Nagios monitoring to include SNMP data. We're using Nagios extensively, but it doesn't seem suited to the sort of load graphing we need for our CPU cores - or if it is, it's a side of Nagios I'm unfamiliar with (which could be; it's nicely extensible).
Whit
On Mon, Jun 14, 2010 at 5:58 PM, Whit Blauvelt whit@transpect.com wrote:
On Mon, Jun 14, 2010 at 04:38:17PM -0400, Mathew S. McCarrell wrote:
I don't have an exact answer for you but you may find this tutorial useful. http://docs.cslabs.clarkson.edu/wiki/Install_Cacti_on_CentOS_5
Thanks. That summarizes nicely the steps I've taken. It's a bit better put together than the several sets of instructions I was working from. The steps it shows are exactly what I ended up doing, though.
You may find this information useful as well, even if it's specific to
the
environment it's used in.
http://docs.cslabs.clarkson.edu/wiki/Monitor_a_Remote_System_with_Nagios/SNM...
Should be useful when I extend our Nagios monitoring to include SNMP data. We're using Nagios extensively, but it doesn't seem suited to the sort of load graphing we need for our CPU cores - or if it is, it's a side of Nagios I'm unfamiliar with (which could be; it's nicely extensible).
I guess I was thinking originally that you'd find the snmp configuration more useful than anything really related to Nagios on that page. Nagios itself doesn't do graphing but some addons can utilize Nagios data to produce graphs. If you can successfully use snmpwalk on the remote system, then at least you know snmp isn't the problem and you might not find that article very useful.
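For what it's worth, that sanity check might look like this from the Cacti host (the community string and target address are placeholders; hrProcessorLoad is part of the standard HOST-RESOURCES-MIB shipped with net-snmp):

```shell
# Bypass Cacti and query snmpd directly; expect one row per CPU.
snmpwalk -v 2c -c public 127.0.0.1 HOST-RESOURCES-MIB::hrProcessorLoad

# If the MIB name doesn't resolve, the numeric OID works too:
snmpwalk -v 2c -c public 127.0.0.1 .1.3.6.1.2.1.25.3.3.1.2
```

If the walk returns rows, snmp is fine and the problem is on the Cacti side; if it times out, check that snmpd is running and that the community string matches.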
Matt
-- Mathew S. McCarrell Clarkson University '10
mccarrms@gmail.com mccarrms@clarkson.edu 1-518-314-9214
From: Whit Blauvelt whit@transpect.com
Should be useful when I extend our Nagios monitoring to include SNMP data. We're using Nagios extensively, but it doesn't seem suited to the sort of load graphing we need for our CPU cores - or if it is, it's a side of Nagios I'm unfamiliar with (which could be; it's nicely extensible).
You could just use PNP and a custom script... http://docs.pnp4nagios.org/
JD
On Tue, Jun 15, 2010 at 03:31:42AM -0700, John Doe wrote:
You could just use PNP and a custom script... http://docs.pnp4nagios.org/
PNP looks like a great project - when it matures. Someday it will be the obvious answer: once someone writes a Nagios plugin that captures per-core CPU load rather than just the generic system load, and the PNP crew enhances their documentation with some working examples - or I learn German so I can dive into the PNP forum (yes, they'll accept questions in English, but the existing answers there mostly aren't).
I'd probably need to write that Nagios plugin myself, since the standard use for Nagios is spotting systems in danger, not checking how work distributes over CPU cores for performance tuning. Looks like it would need to read data from /proc/stat. Once fed into Nagios, PNP could get it into rrd, and from there out to graphs. Or would it make better sense to feed the data directly to rrd and skip the > Nagios > PNP handoffs?
For watching CPU cores in real time, a bunch of ssh sessions to htop gives us a nice visual. The immediate question is how to get that data graphed over time - with a view where a 16-core system isn't 16 separate graphs but one summary graph. It does look like that's been solved for Cacti, if I can solve my Cacti problem and ignore its security problems.
Thanks, Whit
On 6/15/2010 9:04 AM, Whit Blauvelt wrote:
On Tue, Jun 15, 2010 at 03:31:42AM -0700, John Doe wrote:
You could just use PNP and a custom script... http://docs.pnp4nagios.org/
PNP looks like a great project - when it matures. Someday it will be the obvious answer: once someone writes a Nagios plugin that captures per-core CPU load rather than just the generic system load, and the PNP crew enhances their documentation with some working examples - or I learn German so I can dive into the PNP forum (yes, they'll accept questions in English, but the existing answers there mostly aren't).
I'd probably need to write that Nagios plugin myself, since the standard use for Nagios is spotting systems in danger, not checking how work distributes over CPU cores for performance tuning. Looks like it would need to read data from /proc/stat. Once fed into Nagios, PNP could get it into rrd, and from there out to graphs. Or would it make better sense to feed the data directly to rrd and skip the > Nagios > PNP handoffs?
For watching CPU cores in real time, a bunch of ssh sessions to htop gives us a nice visual. The immediate question is how to get that data graphed over time - with a view where a 16-core system isn't 16 separate graphs but one summary graph. It does look like that's been solved for Cacti, if I can solve my Cacti problem and ignore its security problems.
If snmp reports the data you want, you should be able to get either cacti or opennms to graph it. With opennms, if it isn't handled by default, that would involve describing the data collection, storage, and graphing steps in some xml config files. And by the way, opennms can monitor nagios clients - and jmx, wmi and a few other data sources too.
From: Whit Blauvelt whit@transpect.com
the PNP crew enhances their documentation with some working examples, or I learn German
I was able to make some plugins without too many problems (even discovered Perl in the process)...
http://nagiosplug.sourceforge.net/developer-guidelines.html#PLUGOUTPUT
Just copy/paste an existing plugin and adapt it. The main thing to remember is that the plugin returns on STDOUT:
SERVICE STATUS: Information text|'label1'=value[UOM];[warn];[crit];[min];[max] 'label2'=value[UOM];[warn];[crit];[min];[max] ...
Examples:
$ check_memory
MEMORY OK - RAM Used: 37% (1127MB/3041MB), SWAP Used: 0% (0MB/2932MB)|RAM=1127MB;2736;2888;0;3041 SWAP=0MB;2902;2932;0;2932
$ check_netstat
NETSTAT OK - [LOC] = 24/0/0/8, [LAN] = 15/0/1/44, [WAN] = 0/0/0/0 (ES/SY/FW/TW)|LOC_ES=24;250;1000;0 LOC_SY=0;250;500;0 LOC_FW=0;150;300;0 LOC_TW=8;1500;3000;0 LAN_ES=15;250;1000;0 LAN_SY=0;250;500;0 LAN_FW=1;150;300;0 LAN_TW=44;1500;3000;0 WAN_ES=0;250;1000;0 WAN_SY=0;250;500;0 WAN_FW=0;150;300;0 WAN_TW=0;1500;3000;0
etc...
Then PNP will "automatically" plot these values... but yes, if you have n values, you will get n graphs...
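That STDOUT contract can be sketched in a few lines of Python - everything here (labels, thresholds, the check itself) is invented for illustration, modeled on the check_memory example:

```python
# A sketch only: label names, thresholds and the "check" are made up.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3  # Nagios exit codes

def perfdata(label, value, uom="", warn="", crit="", lo="", hi=""):
    # One perfdata item: 'label'=value[UOM];[warn];[crit];[min];[max]
    return "'%s'=%s%s;%s;%s;%s;%s" % (label, value, uom, warn, crit, lo, hi)

def plugin_line(service, status, text, items):
    # Full plugin line: SERVICE STATUS - text|item1 item2 ...
    return "%s %s - %s|%s" % (service, status, text, " ".join(items))

# Modeled on the check_memory example above:
print(plugin_line("MEMORY", "OK", "RAM Used: 37%",
                  [perfdata("RAM", 1127, "MB", 2736, 2888, 0, 3041)]))
```

A real plugin would measure something, pick OK/WARNING/CRITICAL from its thresholds, and exit with the matching code.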
someone writes a Nagios plugin that captures per-core CPU load
/proc/stat gives: cpuX <user_ticks> <nice_ticks> <system_ticks> <idle_ticks> <iowait_ticks> <irq_ticks> <softirq_ticks> <steal_ticks>
What I did for disk perf, for example, is save the stats plus a timestamp in a file, then compute (new_stats-old_stats)/(new_ts-old_ts)... But in some tricky cases (like /proc/net/dev), watch out for value rollovers and "bumps".
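That save-then-diff method, applied to the per-core lines of /proc/stat, might look like this in Python (a sketch; the two samples below are invented, and a real plugin would persist the previous sample plus timestamp in a file as described):

```python
def parse_stat(text):
    """Map each cpuN line of /proc/stat to (total_ticks, idle_ticks)."""
    cpus = {}
    for line in text.splitlines():
        fields = line.split()
        # skip the aggregate "cpu" line; keep cpu0, cpu1, ...
        if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
            ticks = [int(x) for x in fields[1:]]
            cpus[fields[0]] = (sum(ticks), ticks[3])  # idle is the 4th counter
    return cpus

def busy_percent(old, new):
    """Per-core busy % between two samples, skipping rollovers/bumps."""
    pct = {}
    for cpu, (new_total, new_idle) in new.items():
        old_total, old_idle = old.get(cpu, (0, 0))
        d_total, d_idle = new_total - old_total, new_idle - old_idle
        if d_total <= 0 or d_idle < 0:  # counter rolled over: skip this round
            continue
        pct[cpu] = 100.0 * (d_total - d_idle) / d_total
    return pct

# Two invented samples, 100 ticks apart; cpu0 was 80% busy in between:
old = parse_stat("cpu0 100 0 50 850 0 0 0 0")
new = parse_stat("cpu0 160 0 70 870 0 0 0 0")
print(busy_percent(old, new))
```

On a live system you would read /proc/stat twice with a sleep between, or keep the previous sample in a state file between plugin runs.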
JD
On Wed, Jun 16, 2010 at 02:28:51AM -0700, John Doe wrote:
I was able to make some plugins without too many problems (even discovered Perl in the process)...
Agreed, it's easy enough to write Nagios plugins. I've done that too.
Then PNP will "automatically" plot these values... but yes, if you have n values, you will get n graphs...
someone writes a Nagios plugin that captures per-core CPU load
/proc/stat gives: cpuX <user_ticks> <nice_ticks> <system_ticks> <idle_ticks> <iowait_ticks> <irq_ticks> <softirq_ticks> <steal_ticks>
There's a Python script that claims to turn that into percentages here: http://ubuntuforums.org/showthread.php?t=148781 - so adapting that into a Nagios plugin (which Python is also fine for) could do it.
So PNP is just automagical? Aside from graphing each core separately rather than as a combined graph, it would do what I'm looking for without much special configuration? I got no sense of how to tie PNP in from its sparse docs.
In other news, what I've learned is that Cacti _used to be able_ to do per-core CPU graphing, but the latest versions aren't compatible with the existing XML files for it - and no solution is on offer in their forum thread on this.
Munin has nice out-of-the-box graphs for other stuff, but not per-core CPU load.
Ganglia has per-core CPU graphing. There are RPMs in the Fedora repository, but at least for x86_64 there are a bunch of unmet dependencies (on CentOS, anyhow). There are also older RPMs on Ganglia's own site. But the per-core CPU stuff is more recent, so I'll be building it from the tarball to test it.
Thanks, Whit
Whit Blauvelt wrote:
On Wed, Jun 16, 2010 at 02:28:51AM -0700, John Doe wrote:
I was able to make some plugins without too many problems (even discovered Perl in the process)...
Agreed, it's easy enough to write Nagios plugins. I've done that too.
Then PNP will "automatically" plot these values... but yes, if you have n values, you will get n graphs...
someone writes a Nagios plugin that captures per-core CPU load
/proc/stat gives: cpuX <user_ticks> <nice_ticks> <system_ticks> <idle_ticks> <iowait_ticks> <irq_ticks> <softirq_ticks> <steal_ticks>
There's a Python script that claims to turn that into percentages here: http://ubuntuforums.org/showthread.php?t=148781 - so adapting that into a Nagios plugin (which Python is also fine for) could do it.
So PNP is just automagical? Aside from graphing each core separately rather than as a combined graph, it would do what I'm looking for without much special configuration? I got no sense of how to tie PNP in from its sparse docs.
In other news, what I've learned is that Cacti _used to be able_ to do per-core CPU graphing, but the latest versions aren't compatible with the existing XML files for it - and no solution is on offer in their forum thread on this.
Munin has nice out-of-the-box graphs for other stuff, but not per-core CPU load.
Ganglia has per-core CPU graphing. There are RPMs in the Fedora repository, but at least for x86_64 there are a bunch of unmet dependencies (on CentOS, anyhow). There are also older RPMs on Ganglia's own site. But the per-core CPU stuff is more recent, so I'll be building it from the tarball to test it.
If you have firewalling to protect from security issues, why not just run an older version of cacti?
On Wed, Jun 16, 2010 at 08:01:26AM -0500, Les Mikesell wrote:
If you have firewalling to protect from security issues, why not just run an older version of cacti?
Sensible suggestion. One, it's not obvious where to find an older version. Two, hours of attempting to get Cacti to work have left me underimpressed with the whole project. Three, we have good external firewalling, and we're a small enough shop not to worry about malicious employees. But an employee could pick up a virus on their Windows box via some new drive-by zero-day exploit, and some viruses probe the LAN with requests to check whether known-vulnerable web apps exist there (ahem, this has happened to us, and I've seen the probes). While we could tighten internal firewall rules more, the bottom line is that running known-insecure web apps on a LAN isn't a brilliant idea, even if I did, a few messages back, indicate a willingness to make that compromise.
Whit
On 6/16/2010 8:44 AM, Whit Blauvelt wrote:
On Wed, Jun 16, 2010 at 08:01:26AM -0500, Les Mikesell wrote:
If you have firewalling to protect from security issues, why not just run an older version of cacti?
Sensible suggestion. One, it's not obvious where to find an older version.
It's on sourceforge... If you expand the 'all files' list you can go back to 0.5 here: http://sourceforge.net/projects/cacti/files/ and you should be able to grab any revision you want with a subversion client, or browse them here: http://svn.cacti.net/viewvc/.
If you want old Centos RPMs, try http://vault.centos.org.
Two, hours of attempting to get cacti to work have led me to be underimpressed with the whole project.
That's odd because other than the usual php version issues I've always considered cacti to be the easiest of the graphing tools to get working - but I haven't tried the most recent versions.
Three, we have good external firewalling, and are a small enough shop not to worry about malicious employees. But if an employee manages to get a virus on their Windows box due to some new drive-by zero-day exploit, some viruses probe the LAN with requests to check if known-vulnerable web apps exist there (ahem, this has happened to us, and I've seen the probes). While we could tighten internal firewall rules more, bottom line is running known-insecure web apps on a LAN isn't a brilliant idea, even if I did a few messages back indicate a willingness to make that compromise.
If you are willing to hack some ugly-looking xml files that specify the oids and time intervals you can probably make opennms work for you - and you might find its other features (thresholding, notifications, etc.) useful too.
Two, hours of attempting to get cacti to work have led me to be underimpressed with the whole project.
That's odd because other than the usual php version issues I've always considered cacti to be the easiest of the graphing tools to get working
- but I haven't tried the most recent versions.
Last I looked at Cacti, the hack to get some plugin support didn't work for me and I didn't have the patience to waste time with it, dropped it.
Munin never had zooming graphs, and needed a cgi to prevent obscene load in anything other than a trivial environment, dropped it.
I have Nagios and PNP and it works well. Since your first reco to me about OpenNMS I have been intrigued, it looks like a very nice project and is very active. Ironically I do almost all my Nagios monitoring via snmp and where I can't normally use snmp, I create extends...
If you are willing to hack some ugly-looking xml files that specify the oids and time intervals you can probably make opennms work for you - and you might find its other features (thresholding, notifications, etc.) useful too.
Yeah, I also want to take the time to learn this package, it does look very powerful.
Whit, if you are starting from scratch, I would second the reco to invest the time in OpenNMS and just learn something solid from day one.
jlc
There is always ZenOSS. I would definitely take a look at it. Very active, very powerful, nice interface, SNMP/SSH/WMI-based monitoring, etc.
jb
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Joseph L. Casale
Sent: Wednesday, June 16, 2010 10:30 AM
To: 'CentOS mailing list'
Subject: Re: [CentOS] Cacti/snmp question
Two, hours of attempting to get cacti to work have led me to be underimpressed with the whole project.
That's odd because other than the usual php version issues I've always considered cacti to be the easiest of the graphing tools to get working
- but I haven't tried the most recent versions.
Last I looked at Cacti, the hack to get some plugin support didn't work for me and I didn't have the patience to waste time with it, dropped it.
Munin never had zooming graphs, and needed a cgi to prevent obscene load in anything other than a trivial environment, dropped it.
I have Nagios and PNP and it works well. Since your first reco to me about OpenNMS I have been intrigued, it looks like a very nice project and is very active. Ironically I do almost all my Nagios monitoring via snmp and where I can't normally use snmp, I create extends...
If you are willing to hack some ugly-looking xml files that specify the
oids and time intervals you can probably make opennms work for you -
and
you might find its other features (thresholding, notifications, etc.) useful too.
Yeah, I also want to take the time to learn this package, it does look very powerful.
Whit, if you are starting from scratch, I would second the reco to invest the time in OpenNMS and just learn something solid from day one.
jlc
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
On 6/16/2010 10:30 AM, Joseph L. Casale wrote:
Two, hours of attempting to get cacti to work have led me to be underimpressed with the whole project.
That's odd because other than the usual php version issues I've always considered cacti to be the easiest of the graphing tools to get working
- but I haven't tried the most recent versions.
Last I looked at Cacti, the hack to get some plugin support didn't work for me and I didn't have the patience to waste time with it, dropped it.
Perhaps - I meant things that were available via snmp where it is basically a matter of adding the device name/ip and community string. One thing that is particularly nice about cacti is that there is a data export link associated with each graph if you want to do more detailed analysis of the samples with some other program. Opennms has a way to get avg/min/max of values for a specified time span but it is cumbersome if you want fine-grained samples.
I have Nagios and PNP and it works well. Since your first reco to me about OpenNMS I have been intrigued, it looks like a very nice project and is very active. Ironically I do almost all my Nagios monitoring via snmp and where I can't normally use snmp, I create extends...
If you are willing to hack some ugly-looking xml files that specify the oids and time intervals you can probably make opennms work for you - and you might find its other features (thresholding, notifications, etc.) useful too.
Yeah, I also want to take the time to learn this package, it does look very powerful.
It would probably be a good time to start since they just released the 1.8 version. It comes up basically working if you just give it IP ranges to discover so you don't have to learn much to get started. You do need decent hardware if you expect to collect a lot of graph data though.
From: Whit Blauvelt whit@transpect.com
Then PNP will "automaticaly" plot these values... but yes, if you have n values, you will get n graphs...
So PNP is just automagical?
Let me rephrase: once properly set up, it will "automagically" plot any new data... ^_^
In fact, I had a look at my notes and they are a little longer and more complex than what I remembered... See http://docs.pnp4nagios.org/pnp-0.6/modes - choose your data collection mode and then follow the install pages... Basically: tell Nagios to dump perf data, and tell the npcd daemon to work on it... I went with "Bulk Mode with NPCD", but the new version has Bulk Mode with npcdmod: "This scenario includes npcdmod.o, an NEB module. This module reduces the configuration of the 'Bulk Mode with NPCD' to a mere two lines in nagios.cfg." So maybe it will be easier... never tried it.
And for your security problem: here we password-protect (htaccess) our web application directories (nagios, cacti...)...
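From my own notes, and hedged accordingly (the file paths are examples, and the perfdata template line especially should be checked against the PNP install pages), the "Bulk Mode with NPCD" side of nagios.cfg amounts to something like:

```
# nagios.cfg - dump perfdata to a spool file for npcd to pick up
process_performance_data=1
service_perfdata_file=/var/nagios/service-perfdata
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file
# plus a matching process-service-perfdata-file command definition that
# moves the spool file into npcd's spool directory
```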
JD
From: Whit Blauvelt
Should be useful when I extend our Nagios monitoring to include SNMP data. We're using Nagios extensively, but it doesn't seem suited to the sort of load graphing we need for our CPU cores - or if it is, it's a side of Nagios I'm unfamiliar with (which could be; it's nicely extensible).
Take a look at ganglia - http://ganglia.sourceforge.net/
This may do what you need.
On Tue, Jun 15, 2010 at 04:07:57PM +0100, Simon Billis wrote:
Take a look at ganglia - http://ganglia.sourceforge.net/
This may do what you need.
It's what I've ended up going with. (Munin also looked promising - if I could get the syntax right to modify its CPU test for individual cores, which looks quite possible; I just didn't achieve it yet.)
A few notes on Ganglia 3.1.7 build/install:
- best compiled from source; there are big dependency problems with the available RPMs
- dependencies to satisfy before compilation include (among others): apr-devel, libconfuse, libconfuse-devel, expat-devel, pcre-devel - for libconfuse I went to dag/rpmforge
- the "make install" stage doesn't fully install, despite a required --sysconfdir flag being used. In particular, "gmond -t > gmond.conf" will provide the missing file to add to your config dir. The ganglia-3.1.7/gmond/modules/conf.d contents should be copied to /etc/ganglia/conf.d. Then a line with "include ('/etc/ganglia/conf.d/*.conf')" should be added to gmond.conf. And the man pages (in "mans", plus one for gmond.conf in "gmond") may be copied to /usr/share/man/man1 and man5 as appropriate. Also, the init files for gmond and gmetad need to be copied to init.d - but at least this, unlike the other hand-installation requirements, is documented.
- Beyond that, it's good to change the cluster "name =" in gmond.conf to something appropriate before you start to run. You only need gmetad compiled on the system that runs the web reporting front end (and it takes a configure flag to do that). On other systems, just rsyncing over the /etc/ganglia contents will handle configuration just fine (assuming this is a single cluster). The web pages merely require copying to someplace in your PHP-capable server's space.
- The multi-core CPU graphing module - the main functionality I was after - requires some uncommenting in its conf file to get it going. The PCRE section is enough to uncomment, with pcre installed on your system.
It's pretty simple once the dependencies are installed, and the "make install" deficiencies are worked around. It gives a _lot_ of graphs (probably too many, but studying them over time will tell).
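Condensing those workaround steps into commands (paths assume --sysconfdir=/etc/ganglia and a ganglia-3.1.7 source tree; exact file names may differ on your build):

```shell
# run from the ganglia-3.1.7 source tree, after "make install"
gmond -t > /etc/ganglia/gmond.conf        # generate the missing config
mkdir -p /etc/ganglia/conf.d
cp gmond/modules/conf.d/*.conf /etc/ganglia/conf.d/
echo "include ('/etc/ganglia/conf.d/*.conf')" >> /etc/ganglia/gmond.conf
# then copy the init files for gmond/gmetad into /etc/init.d as documented
```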
Whit
On 17/06/2010 23:20, Whit Blauvelt wrote:
- best compiled from source, there are big dependency problems with the available rpms
I find that very hard to believe - to the extent that I don't believe you at all. Or did you mean to say that it's not easy to locate a well-done rpm set for ganglia?
I've never used ganglia in anger, but I know lots and lots of people who do - it's the most-used trending tool in the HPC world.
Also, one thing you didn't mention is that it's exceptionally insecure out of the box, by design. It's meant to be easy to get going, and it offloads security to site and network policy, since most implementations run on isolated management networks nowhere near the internet. So if you are using it in a situation where you care about who can connect to your agents and what data is seen over the wires, start by spending a few hours securing your install.
- KB
On Fri, Jun 18, 2010 at 12:37:11AM +0100, Karanbir Singh wrote:
On 17/06/2010 23:20, Whit Blauvelt wrote:
- best compiled from source, there are big dependency problems with the available rpms
I find that very hard to believe - to the extent that I don't believe you at all. Or did you mean to say that its not easy to locate a well done rpm set for ganglia ?
I should care what you believe? Stay ignorant, if you like. If not, take a CentOS system, add the EPEL repository for ganglia, try "yum install ganglia", and prepare to see all sorts of package conflicts. Plus it's not the current ganglia anyway. Better to build from tar.
I've never used ganglia in anger, but know lots and lots of people who do - its the most used trending tool in the hpc world.
What the heck do you mean, "used ganglia in anger"? That's just incoherent. I'm happy with it. It's working nicely now. But the "make install" scripting is buggy, so I posted what I've learned about working around that.
Also, one thing you did'nt mention is that its exceptionally insecure out of the box, by design. Its meant to be easy to get going and offloads security to site and network policy since most implementations run on isolated management networks no where near the internet. So if you are using it in a situation where you care about who can connect to our agents and what data is seen over the wires - start by spending a few hours securing your install.
I didn't say my notes were a full article on it! My implementation is, as you suggest, far from the internet. I'll be happy to discuss firewalling and network segmentation if those questions come up.
Regards, Whit
On Thu, Jun 17, 2010 at 08:09:11PM -0400, Whit Blauvelt wrote:
I should care what you believe? Stay ignorant, if you like. If not, take a CentOS system, add the EPEL repository for ganglia, try "yum install ganglia", and prepare to see all sorts of package conflicts. Plus it's not the current ganglia anyway. Better to build from tar.
Is this vitriol really necessary? I installed ganglia; not a single conflict.
If you want shiny and new, why not do it properly and build rpms?
John
From: John R. Dennison jrd@gerdesas.com
On Thu, Jun 17, 2010 at 08:09:11PM -0400, Whit Blauvelt wrote:
I should care what you believe?
Is this vitriol really necessary?
I think it is just a reaction to the "I don't believe you at all", which some people would take as "you are a liar"... That's the problem with internet communications: the sender says things he would not say face to face, and the recipient does not know the "mood" of the sender. Emoticons cannot completely solve it... :/
JD
On Thu, Jun 17, 2010 at 08:22:35PM -0500, John R. Dennison wrote:
Is this vitriol really necessary? I installed ganglia; not a single conflict.
Why yes, John, it is. The fine man said outright he didn't believe my honest account, accusing me of making something up when I was only giving the facts. He was calling me a liar. He preferred to see my account as a lie so as not to surrender his faith that Ganglia is a pure and perfect project. Attitudes like that are dangerous in computing, since they lead to bugs not being fixed.
If you want shiny and new, why not do it properly and build rpms?
You installed without a conflict, good. Perhaps you were installing on a 32-bit system rather than a 64-bit? Perhaps your system didn't have some of the packages already installed for other functionality that mine did? All I can say is that, for my system, yum saw version conflicts that were blockers.
As for "properly," there are, as Larry Wall says, many ways to do it. It is up to each project, as their first task, IMHO, to see to it that ./configure, make, make install works for their package, with proper, documented flags, on standard Linux distros. Ganglia - a fine and valuable project on the whole - has a broken "make install." But it can be worked around. Finding workarounds is often a sysadmin's job. Sharing those workarounds with the community is often how free software stays ahead of the proprietary stuff.
On the whole, this list is professional. I like that. But look, "./configure, make, make install" is _always_ a proper option. Any serious business will have need of building an occasional program with different flags than the distro's default, whatever the distro. I often end up building a few core applications that way, as do many other sysadmins in serious business settings. If you don't need to, that's fine. Some businesses can wear off-the-rack clothes. Others need tailored garments.
Regards, Whit
Whit Blauvelt wrote:
On Thu, Jun 17, 2010 at 08:22:35PM -0500, John R. Dennison wrote:
<snip>
If you want shiny and new, why not do it properly and build rpms?
<long snip>
On the whole, this list is professional. I like that. But look, "./configure, make, make install" is _always_ a proper option. Any serious business will have need of building an occasional program with different flags than the distro's default, whatever the distro. I often end up building a few core applications that way, as do many other sysadmins in serious business settings. If you don't need to, that's fine. Some businesses can wear off-the-rack clothes. Others need tailored garments.
You didn't get it, Whit: John was not saying "stick to what's in the distro [or trusted 3rd-party repos]". He was suggesting building your own RPMs when needed. This allows you to use whatever version, build flags, options, etc., just like your configure-make-make-install solution. But it has many advantages, including easier housekeeping and dependency management, deploying to many systems, pushing new versions, etc.
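For example (hedged - this assumes the tarball ships a usable .spec file, which ganglia's does; output paths vary by distro release):

```shell
# build binary RPMs straight from the upstream tarball
rpmbuild -ta ganglia-3.1.7.tar.gz
# then install the results, e.g.:
# rpm -Uvh /usr/src/redhat/RPMS/x86_64/ganglia-*.rpm
```

That way rpm/yum still tracks the files, and removing or upgrading the custom build later is one command instead of a hunt through /usr/local.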
Whit Blauvelt wrote:
You installed without a conflict, good. Perhaps you were installing on a 32-bit system rather than a 64-bit? Perhaps your system didn't have some of the packages already installed for other functionality that mine did? All I can say is that, for my system, yum saw version conflicts that were blockers.
That doesn't make any sense. Yum pulls whatever it needs from the configured repos if you don't have them. If yum sees conflicts on your system it is because you installed packages from somewhere other than the base and epel repos and thus shouldn't be blaming the package or packager.
As for "properly," there are, as Larry Wall says, many ways to do it.
Yes, but none of them involve setting up unexpected conflicts with the base or epel repository packages.
It is up to each project, as their first task, IMHO, to see to it that ./configure, make, make install works for their package, with proper, documented flags, on standard Linux distros. Ganglia - a fine and valuable project on the whole - has a broken "make install."
Did you mean to say it didn't run on your system? Or that you didn't apply the changes in the rpm spec file before expecting it to work?
On the whole, this list is professional. I like that. But look, "./configure, make, make install" is _always_ a proper option.
If you are careful to keep the results in /usr/local or /opt, maybe. Otherwise you'll likely overwrite something that should be managed. And call things broken that are your own fault.
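The conventional way to keep such a build out of rpm's territory is a prefix under /usr/local, sketched here with a hypothetical tarball:

```shell
# Build from source but confine the install to /usr/local,
# so nothing rpm-managed gets overwritten.
tar xzf example-1.0.tar.gz      # hypothetical upstream tarball
cd example-1.0
./configure --prefix=/usr/local
make
make install   # files land under /usr/local/{bin,lib,share,...}
```

Without an explicit --prefix, many upstream default installs go to /usr or even /usr/local depending on the project, which is how tarball installs end up shadowing or clobbering packaged files.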
On Fri, Jun 18, 2010 at 08:14:02AM -0400, Whit Blauvelt wrote:
On Thu, Jun 17, 2010 at 08:22:35PM -0500, John R. Dennison wrote:
Is this vitriol really necessary? I installed ganglia; not a single conflict.
Why yes, John, it is. The fine man said outright he didn't believe my honest account, accusing me of making something up when I was only giving the facts. He was calling me a liar. He preferred to see my account as a lie so as not to surrender his faith that Ganglia is a pure and perfect project.
there is a big difference in saying you don't believe a person's information and calling them a liar. You may just be saying you perceive things differently or maybe that the person doesn't understand about which he is speaking. He may be entirely truthful and still not be believed.
////jerry
Attitudes like that are dangerous in computing, since they lead to bugs not being fixed.
If you want shiny and new, why not do it properly and build rpms?
You installed without a conflict, good. Perhaps you were installing on a 32-bit system rather than a 64-bit? Perhaps your system didn't have some of the packages already installed for other functionality that mine did? All I can say is that, for my system, yum saw version conflicts that were blockers.
As for "properly," there are, as Larry Wall says, many ways to do it. It is up to each project, as their first task, IMHO, to see to it that ./configure, make, make install works for their package, with proper, documented flags, on standard Linux distros. Ganglia - a fine and valuable project on the whole - has a broken "make install." But it can be worked around. Finding workarounds is often a sysadmins job. Sharing those workarounds with the community is often how free software stays ahead of the proprietary stuff.
On the whole, this list is professional. I like that. But look, "./configure, make, make install" is _always_ a proper option. Any serious business will have need of building an occasional program with different flags than the distro's default, whatever the distro. I often end up building a few core applications that way, as do many other sysadmins in serious business settings. If you don't need to, that's fine. Some businesses can wear off-the-rack clothes. Others need tailored garments.
Regards, Whit _______________________________________________ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
On 6/18/2010 8:20 AM, Jerry McAllister wrote:
On Fri, Jun 18, 2010 at 08:14:02AM -0400, Whit Blauvelt wrote:
On Thu, Jun 17, 2010 at 08:22:35PM -0500, John R. Dennison wrote:
Is this vitriol really necessary? I installed ganglia; not a single conflict.
Why yes, John, it is. The fine man said outright he didn't believe my honest account, accusing me of making something up when I was only giving the facts. He was calling me a liar. He preferred to see my account as a lie so as not to surrender his faith that Ganglia is a pure and perfect project.
there is a big difference in saying you don't believe a person's information and calling them a liar. You may just be saying you perceive things differently or maybe that the person doesn't understand about which he is speaking. He may be entirely truthful and still not be believed.
And there's a gray area where what the person says is technically true regarding his observations but then he places blame on others for a situation he created himself. The part not to be believed is the incorrect conclusion, especially when you can easily disprove it yourself - so it's not a lie, it is a mistake.
On Fri, Jun 18, 2010 at 08:14:02AM -0400, Whit Blauvelt wrote:
Why yes, John, it is. The fine man said outright he didn't believe my honest account, accusing me of making something up when I was only giving the facts. He was calling me a liar. He preferred to see my account as a lie so as not to surrender his faith that Ganglia is a pure and perfect project. Attitudes like that are dangerous in computing, since they lead to bugs not being fixed.
While KBS could have very well chosen his wording differently he did not call you a liar. That is the interpretation you choose to apply to what he said. If someone tells me that it's going to rain, and I see nothing but blue skies on the horizon and tell them I don't believe them I am not calling them a liar; I am, however, telling them that they are wrong. By this account I find your use of "ignorant" rude and uncalled for.
You installed without a conflict, good. Perhaps you were installing on a 32-bit system rather than a 64-bit? Perhaps your system didn't have some of the packages already installed for other functionality that mine did? All I can say is that, for my system, yum saw version conflicts that were blockers.
Yep, 32-bit. As you didn't point out whether you attempted the 32-bit or 64-bit version I grabbed a test box at random and it happened to be 32-bit.
As far as conflicts go I will say again, I didn't have any. And without further evidence from you there's no way to determine why you are reporting alleged conflicts, nor what those conflicts may be. If there are conflicts it is much more likely that they stem from self-installs or poorly chosen 3rd party repos than from EPEL.
As for "properly," there are, as Larry Wall says, many ways to do it. It is up to each project, as their first task, IMHO, to see to it that ./configure, make, make install works for their package, with proper, documented flags, on standard Linux distros. Ganglia - a fine and valuable project on the whole - has a broken "make install." But it can be worked around. Finding workarounds is often a sysadmins job. Sharing those workarounds with the community is often how free software stays ahead of the proprietary stuff.
This doesn't carry much weight with me when we are talking about an enterprise distro unless the problems are discovered in the process of building SRPMs.
By the way, did you report this issue upstream and offer them the workarounds in the form of patches?
On the whole, this list is professional. I like that. But look, "./configure, make, make install" is _always_ a proper option. Any serious
No, it's not.
business will have need of building an occasional program with different flags than the distro's default, whatever the distro. I often end up
And those needs are best met by rolling SRPMs. Heck, you could even give back to the community and make them available for others to make use of.
building a few core applications that way, as do many other sysadmins in serious business settings. If you don't need to, that's fine. Some businesses can wear off-the-rack clothes. Others need tailored garments.
I don't dispute this at all; it's very true and will remain true.
My argument is that building native tarballs and then installing them is *not* the way to go when you are working with a package managed system such as CentOS; take the additional time and make SRPMs that can be properly integrated into the package system. The benefits from such can not be understated and are *well* worth your time. You're not new to the industry so I'm a little confused as to why you don't see this.
John
John R. Dennison wrote:
On the whole, this list is professional. I like that. But look, "./configure, make, make install" is _always_ a proper option. Any serious
No, it's not.
indeed, doing exactly this could very well lead to the conflicts he reported when he tried to install ganglia from EPEL.
On 18/06/2010 01:09, Whit Blauvelt wrote:
I should care what you believe? Stay ignorant, if you like. If not, take a CentOS system, add the EPEL repository for ganglia, try "yum install ganglia", and prepare to see all sorts of package conflicts. Plus it's not the current ganglia anyway. Better to build from tar.
A wise man once said that till such time as a bug report is filed or evidence to that effect is shown, an issue is just a figment of imagination or a user-induced issue they are too embarrassed to admit to. Given that you made a wide sweeping statement that there were no usable rpms for ganglia - I still think you don't think you what you are talking about. I know of, and have used in the past, at least 2 different sets of rpms that worked just fine. On the same platform as you mention, one of those rpm sets having roots in what you hosted at EPEL / Fedora now.
To reconfirm the situation, I've just done a fresh C5 install, rolled in epel and installed ganglia with no issues at all. If you want, I can post a copy of the vm image used to run this test, the ks.cfg used, and the puppet manifest that did the deployment.
What the heck do you mean, "used ganglia in anger"? That's just incoherent.
Its an often used term in the admin / infrastructure circles to indicate if a person has used a technology or app in conditions that would have stressed it or used a near complete feature set.
- KB
On 19/06/2010 02:02, Karanbir Singh wrote:
ganglia - I still think you don't think you what you are talking about.
s/.*/ganglia - I still think you are confused about the issue./
I blame too much mongodb in one day for crazy language skilz :! ( or in my case, lack of )
- KB
On Thu, Jun 17, 2010 at 06:20:03PM -0400, Whit Blauvelt wrote:
- best compiled from source, there are big dependency problems with the available rpms
Very few packages are ever best compiled from source on an enterprise distro.
What, specifically, is wrong with the 3.0.7 in EPEL?
John
On Thu, 17 Jun 2010 at 6:51pm, John R. Dennison wrote
On Thu, Jun 17, 2010 at 06:20:03PM -0400, Whit Blauvelt wrote:
- best compiled from source, there are big dependency problems with the available rpms
Very few packages are ever best compiled from source on an enterprise distro.
What, specifically, is wrong with the 3.0.7 in EPEL?
Well, if you have more than 4TB of RAM in your grid, the memory graph wraps. :) Other than that, though, it works wonderfully.
That being said, it's trivial to recompile the F13 RPM for 3.1.2 for centos-5.
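For anyone following along, that rebuild is roughly the following (the exact src.rpm filename here is a guess and may differ; any missing BuildRequires surface as errors at the start of the rebuild and can be installed with yum):

```shell
# Rebuild the Fedora 13 source rpm against CentOS 5 libraries.
rpmbuild --rebuild ganglia-3.1.2-1.fc13.src.rpm

# Install whatever sub-packages were produced.
rpm -Uvh ~/rpmbuild/RPMS/$(uname -i)/ganglia-*.rpm
```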
On Thu, Jun 17, 2010 at 08:01:02PM -0400, Joshua Baker-LePain wrote:
That being said, it's trivial to recompile the F13 RPM for 3.1.2 for centos-5.
And that would be the proper route to go instead of building from native source :)
John
On Thu, Jun 17, 2010 at 08:10:29PM -0500, John R. Dennison wrote:
On Thu, Jun 17, 2010 at 08:01:02PM -0400, Joshua Baker-LePain wrote:
That being said, it's trivial to recompile the F13 RPM for 3.1.2 for centos-5.
And that would be the proper route to go instead of building from native source :)
To get 3.1.7? Disregarding that, I should jump through the hoops of recompiling a F13 RPM rather than just compile from the tar? Why? Every extra stage like that introduces the chance of incidental errors, of stuff that doesn't translate precisely through that stage. I'm not doubting it generally can work, just doubting that there's anything "proper" about it. Generally native source is the gold standard. The farther upstream you go, the better the quality gets, the more bugs are fixed, and the more control you have over how and where the stuff installs on your systems.
There can be an argument that for some stuff that passes through RHEL the extra attention adds some quality control (ignoring the counterexample of the long history of RH mangling kernels; they seem to have gotten better about that lately), but stuff in EPEL? Really?
I'm not talking Linux from Scratch here - although I respect that project immensely. I appreciate a solid distro as a foundation. CentOS is. But claims that any distro is so perfect and complete that it's "improper" to custom compile a few apps on its foundation - from the "native" source (with all the connotations that "natives" are scary and primitive) - should not be well received if we want to continue to have open platforms.
Best, Whit
Whit Blauvelt wrote:
On Thu, Jun 17, 2010 at 08:10:29PM -0500, John R. Dennison wrote:
On Thu, Jun 17, 2010 at 08:01:02PM -0400, Joshua Baker-LePain wrote:
That being said, it's trivial to recompile the F13 RPM for 3.1.2 for centos-5.
And that would be the proper route to go instead of building from native source :)
To get 3.1.7? Disregarding that, I should jump through the hoops of recompiling a F13 RPM rather than just compile from the tar? Why?
Because rpm tracks all the files installed from packages, and yum understands the dependencies. You've clearly broken that on your system. And you probably have no idea how to verify that your tarball-installed files are still the same ones you installed or how to remove all of them cleanly.
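To make that concrete, here is the kind of bookkeeping rpm gives you for free (using ganglia's gmond as the example; the exact sub-package names may vary by repo):

```shell
# Which package owns this file?
rpm -qf /usr/sbin/gmond

# Verify installed files against the rpm database
# (size, checksum, permissions, owner, mtime, ...).
rpm -V ganglia-gmond

# List everything the package installed.
rpm -ql ganglia-gmond

# Remove it cleanly, with dependency checks.
rpm -e ganglia-gmond
```

None of this is available for files dropped in by a bare "make install".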
Every extra stage like that introduces the chance of incidental errors, of stuff that doesn't translate precisely through that stage. I'm not doubting it generally can work, just doubting that there's anything "proper" about it. Generally native source is the gold standard. The farther upstream you go, the better the quality gets, the more bugs are fixed, and the more control you have over how and where the stuff installs on your systems.
There's always a tradeoff between new code introducing new bugs and fixing old ones. Fedora takes a different position in that tradeoff than RHEL/Centos and sometimes that's what you want for certain applications. And if the src RPM will rebuild painlessly you get the advantage of rpm management for next to no extra work. Plus you know someone else has at least run the code a time or two, something you don't know about the straight upstream source.
There can be an argument that for some stuff that passes through RHEL the extra attention adds some quality control (ignoring the counterexample of the long history of RH mangling kernels; they seem to have gotten better about that lately), but stuff in EPEL? Really?
One of EPEL's goals is to not overwrite or conflict with any base rpms. They aren't perfect, their idea of 'base' doesn't include centos extras, and their guidelines keep out some things you probably want, but in general they are pretty good and it is a very valuable thing to be able to install any of their packages without worrying about conflicts. Other 3rd party repos don't make the same effort or intentionally update existing system libraries to meet their own goals.
I'm not talking Linux from Scratch here - although I respect that project immensely. I appreciate a solid distro as a foundation. CentOS is. But claims that any distro is so perfect and complete that it's "improper" to custom compile a few apps on its foundation - from the "native" source (with all the connotations that "natives" are scary and primitive) - should not be well received if we want to continue to have open platforms.
You need to think of rpm as a database with integrity rules - because that's what it is. And think about what happens if you randomly scribble stuff in a database ignoring its rules - because that's what you are doing. There are times you need to do some experimental things, but they should be kept out of the system area or you lose the advantage that package management tools provide. Or you should build your own rpms to incorporate the files into the system properly.
On Fri, Jun 18, 2010 at 08:25:56AM -0400, Whit Blauvelt wrote:
To get 3.1.7? Disregarding that, I should jump through the hoops of recompiling a F13 RPM rather than just compile from the tar? Why? Every extra stage like that introduces the chance of incidental errors, of stuff that doesn't translate precisely through that stage. I'm not doubting it generally can work, just doubting that there's anything "proper" about it. Generally native source is the gold standard. The farther upstream you go, the better the quality gets, the more bugs are fixed, and the more control you have over how and where the stuff installs on your systems.
You really believe this? If so, why do you bother with CentOS, or any package managed distro? Native builds are *never* the way to go, but I quite refuse to waste my time pointing out the many drawbacks of such compared to taking a few moments to properly - yes, *properly* - make SRPMs and rebuilding *those* on the target platforms.
The "gold standard" is that procedure, not building source kits that can, and *will* walk all over the rest of your system. Just because it may not have happened yet is nothing but pure luck.
There can be an argument that for some stuff that passes through RHEL the extra attention adds some quality control (ignoring the counterexample of the long history of RH mangling kernels; they seem to have gotten better about that lately), but stuff in EPEL? Really?
Some quality control? Really? I can see this discussion is going nowhere and you have your mind made up.
John
On Thu, Jun 17, 2010 at 06:51:52PM -0500, John R. Dennison wrote:
Very few packages are ever best compiled from source on an enterprise distro.
What, specifically, is wrong with the 3.0.7 in EPEL?
Um, that "yum install ganglia" produces a long list of package conflicts on a current CentOS system? Or that only 3.1.7 has a fully working multicpu module, plus a number of significant bug fixes?
If there were a good CentOS build of 3.1.7 I'd happily use it. But getting stuff from EPEL, which is essentially Redhat testing, is as silly as mixing stuff from Debian testing into Debian stable, as far as enterprise systems go. On the other hand, I've run a number of enterprise systems on Gentoo. I'm sure the compiling of everything from source there gives you absolute horrors. But those systems treated me well for years. Now I'm in a mixed Ubuntu/CentOS environment, and I stay with distro packages ... until I don't. When there's a specific program that I need compiled with different options or whatever, well, I've been a Linux sysadmin since '93. I kind of know what I'm doing.
What's with you kids these days? Compiling something from tar isn't going to blow things up. At least it's never bitten me, in 17 years.
Best, Whit
On Thu, Jun 17, 2010 at 08:21:00PM -0400, Whit Blauvelt wrote:
Um, that "yum install ganglia" produces a long list of package conflicts on a current CentOS system? Or that only 3.1.7 has a fully working multicpu module, plus a number of significant bug fixes?
I just tried a ganglia install from EPEL; absolutely no issues at all. Perhaps if you'd bother to actually document these conflicts one of us might be able to help. That is if we're still willing.
I can't speak to your claims of 3.1.7 having bug fixes and the multicpu issue; but I saw no conflicts with EPEL's 3.0.7.
If there were a good CentOS build of 3.1.7 I'd happily use it. But getting stuff from EPEL, which is essentially Redhat testing, is as silly as mixing
Uh, you've confused EPEL and Fedora apparently.
stuff from Debian testing into Debian stable, as far as enterprise systems go. On the other hand, I've run a number of enterprise systems on Gentoo. I'm sure the compiling of everything from source there gives you absolute
Gentoo is fine for a toy os. Claiming Gentoo is "enterprise" is just silly.
horrors. But those systems treated me well for years. Now I'm in a mixed Ubuntu/CentOS environment, and I stay with distro packages ... until I don't. When there's a specific program that I need compiled with different options or whatever, well, I've been a Linux sysadmin since '93. I kind of know what I'm doing.
If you say so.
What's with you kids these days? Compiling something from tar isn't going to blow things up. At least it's never bitten me, in 17 years.
Kids? Heh. 17 years? Heh. You're a youngster. Let me know when you've got 25+ years in the industry and then I might be impressed :)
John
On Thu, Jun 17, 2010 at 08:19:46PM -0500, John R. Dennison wrote:
I just tried a ganglia install from EPEL; absolutely no issues at all. Perhaps if you'd bother to actually document these conflicts one of us might be able to help. That is if we're still willing.
Now you're threatening to expel me from the community? For posting notes on workarounds to get a useful package to work? What's this about? Ganglia's working fine for me.
I can't speak to your claims of 3.1.7 having bug fixes and the multicpu issue; but I saw no conflicts with EPEL's 3.0.7.
My claims? The project's own documents describe this stuff. You saw no conflicts? Great. Not every bug shows up on every box. You believe one instance of not seeing a bug means no one else will? That's Microsoft-style quality control.
If there were a good CentOS build of 3.1.7 I'd happily use it. But getting stuff from EPEL, which is essentially Redhat testing, is as silly as mixing
Uh, you've confused EPEL and Fedora apparently.
Sorry. If that's confusion, I got it from instructions (several sets of them) out on the web for installing Ganglia from EPEL, which referred to it as a Fedora repository.
Gentoo is fine for a toy os. Claiming Gentoo is "enterprise" is just silly.
No point in including a long list of serious enterprises which run on Gentoo. You're in fanboi mode and it's not your team. Fine.
Kids? Heh. 17 years? Heh. You're a youngster. Let me know when you've got 25+ years in the industry and then I might be impressed :)
I said I've been a Linux sysadmin since '93. I've been in the industry since '82. Thanks for mistaking me for a youngster though!
Whit
On Thu, Jun 17, 2010 at 08:19:46PM -0500, John R. Dennison wrote:
If there were a good CentOS build of 3.1.7 I'd happily use it. But getting stuff from EPEL, which is essentially Redhat testing, is as silly as mixing
Uh, you've confused EPEL and Fedora apparently.
Hey John,
https://fedoraproject.org/wiki/EPEL:
"Extra Packages for Enterprise Linux (EPEL) is a volunteer-based community effort from the Fedora project to create a repository of high-quality add-on ..."
Enough said.
Whit
On 6/18/2010 9:01 AM, Whit Blauvelt wrote:
If there were a good CentOS build of 3.1.7 I'd happily use it. But getting stuff from EPEL, which is essentially Redhat testing, is as silly as mixing
Uh, you've confused EPEL and Fedora apparently.
Hey John,
https://fedoraproject.org/wiki/EPEL:
"Extra Packages for Enterprise Linux (EPEL) is a volunteer-based community effort from the Fedora project to create a repository of high-quality add-on ..."
Enough said.
Apparently not, since you don't seem to understand the purpose of the project, the relationship to the sponsor organization, or the value of high-quality, well maintained packages. Or even the value of having machines where for spans of many years, all you ever have to do is "yum update" and the right thing will happen to every installed application.
On Fri, Jun 18, 2010 at 10:01:38AM -0400, Whit Blauvelt wrote:
"Extra Packages for Enterprise Linux (EPEL) is a volunteer-based community effort from the Fedora project to create a repository of high-quality add-on ..."
Enough said.
Apparently not, as that bears no indication of it being a test base as your initial claim stated.
John
On Fri, Jun 18, 2010 at 08:41:26AM -0400, Whit Blauvelt wrote:
Now you're threatening to expel me from the community? For posting notes on workarounds to get a useful package to work? What's this about? Ganglia's working fine for me.
I'm honored that you think I have that much sway in this community that I would be able to expel you from it. The reality, however, is quite different. I don't speak for the project, nor do I speak for the community as a whole; I have enough difficulty speaking for myself.
My issue was with your building from native source via the standard three-step; it's wrong to do so in an rpm-managed distro.
My claims? The project's own documents describe this stuff. You saw no conflicts? Great. Not every bug shows up on every box. You believe one instance of not seeing a bug means no on else will? That's Microsoft-style quality control.
Yes, *claims*. You've provided no evidence except your claims that it didn't work. And please understand that I said it worked *for me* and that *I* didn't see a conflict. I never said it wasn't an issue for others. Had I noticed a problem I'd also have taken the time to document such to the parties responsible, including this mailing list.
Sorry. If that's confusion, I got it from instructions (several sets of them) out on the web for installing Ganglia from EPEL, which referred to it as a Fedora repository.
Yep, confusion. You do, I hope, realize that EL and the offspring of EL including CentOS are based on Fedora? This makes Fedora the test base for future EL cuts. EPEL is just a 3rd party repo providing (mostly) Fedora kit rebuilt for EL use in CentOS, SL, etc.
I said I've been a Linux sysadmin since '93. I've been in the industry since '82. Thanks for mistaking me for a youngster though!
That's nice. With an illustrious background such as yours I'd expect less argument over the merits of SRPMs vs native builds and a better understanding of EPEL's role.
John
John R. Dennison wrote:
On Fri, Jun 18, 2010 at 08:41:26AM -0400, Whit Blauvelt wrote:
<snip>
My issue was with your building from native source via the standard three-step; it's wrong to do so in an rpm-managed distro.
Up until now, I had to build the gspca driver separately, every time I upgraded those servers with the cameras attached. I also *always* have to do something - mostly reinstall - when I upgrade the boxes, mostly older, with nvidia drivers. (And let's not talk about the newest upgrade to FC 13, which has none....)
Even such a large install as CentOS/RHEL can't cover all hardware.
mark
On Fri, Jun 18, 2010 at 03:15:41PM -0400, m.roth@5-cent.us wrote:
Up until now, I had to build the gspca driver separately, every time I upgraded those servers with the cameras attached. I also *always* have to do something - mostly reinstall - when I upgrade the boxes, mostly older, with nvidia drivers. (And let's not talk about the newest upgrade to FC 13, which has none....)
And what is the problem with the dkms-gspca stuff at rpmforge?
Even such a large install as CentOS/RHEL can't cover all hardware.
Nor should it have to. There exist vetted 3rd-party repos that provide support for much that EL does not.
John
Am 14.06.2010 22:29, schrieb Whit Blauvelt:
The problem is that's returning no information to Cacti to any of the SNMP queries.
My whole goal here is to get something working to graph CPU core use on a handful of systems.
Or is there a better tool than Cacti (or better documented - hopefully in the form of a simple recipe rather than many haphazard - often out-of-date - pages of RTFM)? Cacti admits to serious security flaws, not that this'll go on the public net, but I'd be happier to run something safer nonetheless.
You can try Munin for this.
regards,
Detlef
On Mon, Jun 14, 2010 at 10:41:06PM +0200, Detlef Peeters wrote:
You can try Munin for this.
Thanks. Hadn't looked at that. A lively project in current development - always good. Can't find anything about whether it can specifically graph separate CPU core use - guess I'll have to install it and see if that's built in. Nothing in their plugins collection directed at that task, but if it's built in....
Whit
On 6/14/2010 3:29 PM, Whit Blauvelt wrote:
Hi,
Trying to follow the recipe at
http://wiki.centos.org/HowTos/Cacti_on_CentOS_4.x
Which has a bit of an update for 5.x, but no joy.
Anyone know what this from Cacti should suggest?
Data Query Debug Information
- Running data query [9].
- Found type = '6 '[script query].
- Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
- XML file parsed ok.
- Executing script for list of indexes '/usr/bin/php -q /var/www/cacti/scripts/ss_host_cpu.php 127.0.0.1 1 2:161:500:1:10:public:::MD5::DES: index'
- Executing script query '/usr/bin/php -q /var/www/cacti/scripts/ss_host_cpu.php 127.0.0.1 1 2:161:500:1:10:public:::MD5::DES: query index'
- Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
- Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
- Found data query XML file at '/var/www/cacti/resource/script_server/host_cpu.xml'
The problem is that's returning no information to Cacti to any of the SNMP queries.
My whole goal here is to get something working to graph CPU core use on a handful of systems.
Or is there a better tool than Cacti (or better documented - hopefully in the form of a simple recipe rather than many haphazard - often out-of-date - pages of RTFM)? Cacti admits to serious security flaws, not that this'll go on the public net, but I'd be happier to run something safer nonetheless.
I happen to like OpenNMS (http://www.opennms.org) but it is considerably more complicated than cacti to set up. And I think your SNMP server setup is the real problem. Do you get a response with snmpwalk using the same community name?
-- Les Mikesell lesmikesell@gmail.com
On Mon, Jun 14, 2010 at 03:55:10PM -0500, Les Mikesell wrote:
I happen to like OpenNMS (http://www.opennms.org) but it is considerably more complicated than cacti to set up.
Thanks. I don't mind complicated if the documentation is clear. Cacti is in that fuzzy area where it's not quite simple, and the docs aren't quite clear (at least not to my learning style). It looks like OpenNMS is mostly in the same space as Nagios, which we're already happy with and have no motivation to replace. Would there be a stripped-down usage to just give us the per-core CPU usage graphs which are what we currently need (and have no notion how to add to Nagios, if it can even be done)? Does OpenNMS already have a per-core CPU usage graphing capability?
And I think your SNMP server setup is the real problem. Do you get a response with snmpwalk using the same community name?
Yes, snmpwalk gives a good response. (Although to confuse things, the CentOS man page for snmpwalk is years out of date and doesn't present the current syntax - still, it has a current built-in help page.)
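Cacti's host_cpu script query reads the HOST-RESOURCES-MIB, so the per-core data can be checked by hand with something like this (v2c and community 'public' assumed, to match the debug output quoted earlier):

```shell
# One hrProcessorLoad row per core; value is the average % load
# over the last minute.
snmpwalk -v 2c -c public 127.0.0.1 HOST-RESOURCES-MIB::hrProcessorLoad

# Numeric OID form, in case the MIB files aren't installed:
snmpwalk -v 2c -c public 127.0.0.1 .1.3.6.1.2.1.25.3.3.1.2
```

If the walk returns one row per core, the SNMP side is fine and the problem is on Cacti's side; if it returns nothing, the stock snmpd configuration is probably restricting the view to the system subtree (com2sec/view settings in /etc/snmp/snmpd.conf).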
Whit
OpenNMS Supports Multi Core CPUs
http://demo.observernms.org/device/6/health/processors/
-klank
On 6/14/2010 3:20 PM, Whit Blauvelt wrote:
On Mon, Jun 14, 2010 at 03:55:10PM -0500, Les Mikesell wrote:
I happen to like OpenNMS (http://www.opennms.org) but it is considerably more complicated than cacti to set up.
Thanks. I don't mind complicated if the documentation is clear. Cacti is in that fuzzy area where it's not quite simple, and the docs aren't quite clear (at least not to my learning style). It looks like OpenNMS is mostly in the same space as Nagios, which we're already happy with and have no motivation to replace. Would there be a stripped-down usage to just give us the per-core CPU usage graphs which are what we currently need (and have no notion how to add to Nagios, if it can even be done)? Does OpenNMS already have a per-core CPU usage graphing capability?
Oops.
It's http://www.observernms.org/ that supports Multi-Core CPUs.
Don't think it will help you out since you are using nagios anyways.
As a side note, in the past I have searched for SNMP polling and graphing of multi-core CPUs, but was never able to make anything work. Let the list know if you find something that works.
-klank
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
On 6/14/2010 5:20 PM, Whit Blauvelt wrote:
On Mon, Jun 14, 2010 at 03:55:10PM -0500, Les Mikesell wrote:
I happen to like OpenNMS (http://www.opennms.org) but it is considerably more complicated than cacti to set up.
Thanks. I don't mind complicated if the documentation is clear. Cacti is in that fuzzy area where it's not quite simple, and the docs aren't quite clear (at least not to my learning style). It looks like OpenNMS is mostly in the same space as Nagios, which we're already happy with and have no motivation to replace.
The big difference is that OpenNMS typically needs no agent or per-host configuration because it works with SNMP and auto-discovery of most services - and it handles routers/switches as well as hosts. It's actually not that hard to get started if you want to try it, since you can use their yum repository and they just had a new stable release.
Would there be a stripped-down usage to just give us the per-core CPU usage graphs, which are what we currently need (and have no notion how to add to Nagios, if it can even be done)? Does OpenNMS already have per-core CPU usage graphing capability?
I'm not sure of the details of how this works. With the default setup I get a single CPU usage graph on linux targets and windows targets may show none or one per CPU. I think it is up to what the snmp agent returns.
And I think your SNMP server setup is the real problem. Do you get a response with snmpwalk using the same community name?
Yes, snmpwalk gives a good response. (Although to confuse things, the CentOS man page for snmpwalk is years out of date and doesn't present the current syntax - still, it has a current built-in help page.)
Does 'good' mean many pages of output if you don't specify an oid?
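One possibility worth checking here (an assumption on my part, not a confirmed diagnosis of your setup): the stock CentOS /etc/snmp/snmpd.conf restricts the default community to the system subtree, so snmpwalk of that subtree looks healthy while Cacti's host_cpu query, which reads HOST-RESOURCES-MIB, gets nothing back. A minimal sketch of a snmpd.conf granting wider read-only access, using "public" and 127.0.0.1 purely as placeholders:

```
# Sketch of /etc/snmp/snmpd.conf: read-only access to the full MIB tree
# for community "public", queries allowed from localhost only.
# The stock CentOS config instead limits the view to the system subtree,
# which can make snmpwalk look fine while host_cpu queries return nothing.
rocommunity public 127.0.0.1
```

After restarting snmpd, a walk of HOST-RESOURCES-MIB::hrProcessorLoad should return one INTEGER row per core if the agent exposes that table; if it still returns nothing, the problem is on the agent side rather than in Cacti.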
On Mon, Jun 14, 2010 at 05:46:16PM -0500, Les Mikesell wrote:
The big difference is that OpenNMS typically needs no agent or per-host configuration because it works with SNMP and auto-discovery of most services - and it handles routers/switches as well as hosts.
Getting off my topic, but we're using custom Nagios plugins for stuff at levels that can't be read via SNMP - tests that by their nature need to run locally and report back. Presumably there's some way to do that in OpenNMS too? Anyway, that being the standard Nagios way, it fits that need well.
I'm not sure of the details of how this works. With the default setup I get a single CPU usage graph on linux targets and windows targets may show none or one per CPU. I think it is up to what the snmp agent returns.
I'm assuming the Cacti XML files to get multi-core graphs are SNMP-based ... if that's right, then standard Linux Net-SNMP should allow that.
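For what it's worth, the per-core numbers Net-SNMP exposes live in HOST-RESOURCES-MIB::hrProcessorLoad, one row per core. A minimal sketch of pulling those rows apart, in case anyone wants to graph them outside Cacti - the sample lines below are made-up illustrations of the usual snmpwalk output format, not captured from a real host:

```python
import re

# Illustrative snmpwalk output: one hrProcessorLoad row per CPU core.
# The index numbers and load values are invented for this example.
sample = """\
HOST-RESOURCES-MIB::hrProcessorLoad.196608 = INTEGER: 12
HOST-RESOURCES-MIB::hrProcessorLoad.196609 = INTEGER: 87
"""

def parse_processor_load(text):
    """Return {row_index: load_percent} for each hrProcessorLoad line."""
    loads = {}
    for m in re.finditer(r"hrProcessorLoad\.(\d+) = INTEGER: (\d+)", text):
        loads[int(m.group(1))] = int(m.group(2))
    return loads

print(parse_processor_load(sample))  # one entry per core
```

From there, feeding each core's value into rrdtool (which is what Cacti uses underneath anyway) would give the per-core graphs without the rest of the framework.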
Yes, snmpwalk gives a good response. (Although to confuse things, the CentOS man page for snmpwalk is years out of date and doesn't present the current syntax - still, it has a current built-in help page.)
Does 'good' mean many pages of output if you don't specify an oid?
Yes. Is that a good 'good'?
Whit