Hi.
I have had an enquiry from the Network and Security guy. He wants to know why CentOS 5.5 / RHEL 5 is using a very old version of BIND ("bind-chroot-9.3.6-4.P1.el5_5.3") when the latest release, which has many security fixes, is 9.7.3. I understand that it's to maintain a known stable platform by not introducing new elements, etc. Is there an official explanation or document that I can direct him to?
Thanks
Greg Machin Systems Administrator - Linux Infrastructure Group, Information Services
On Wed, Feb 23, 2011 at 9:08 PM, Machin, Greg Greg.Machin@openpolytechnic.ac.nz wrote:
Hi.
He wants to know why CentOS 5.5 / RHEL 5 is using a very old version of BIND "bind-chroot-9.3.6-4.P1.el5_5.3" [...]
The "bind97" packages are in RHEL 5.6. Red Hat publishes such major component upgrades as separate packages, so people using the older version keep getting updates, while those who want the major upgrade are free to install it and get separate support.
Our faithful CentOS maintainers have not yet completed their publication of CentOS 5.6. I'm sure they'd appreciate your help doing so, although I've had some difficulty reverse engineering enough of their build structure to parallel their work.
On 02/24/2011 02:24 AM, Nico Kadel-Garcia wrote:
I have had an enquiry from the Network and Security guy. He wants to know why CentOS 5.5 /RHEL 5 is using a very old version of bind
The "bind97" packages are in RHEL 5.6.
... and available in c5-testing, pending the CentOS 5.6 release; so if you want it now, get it early - that's a good place to grab it from.
Also, if you do use the package from c5-testing, make sure to feed comments back to the centos-devel list so they can be incorporated into the CentOS 5.6 Release Notes;
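For anyone grabbing it from c5-testing in the meantime, a minimal sketch of how that usually looks. The repo id, baseurl and package names below are assumptions based on how c5-testing was commonly set up, not something verified against current mirrors; check the wiki before relying on them.

```shell
# Hypothetical c5-testing repo definition; verify the baseurl and GPG key
# on the CentOS wiki before using it in production.
cat > /etc/yum.repos.d/c5-testing.repo <<'EOF'
[c5-testing]
name=CentOS-5 Testing
baseurl=http://dev.centos.org/centos/$releasever/testing/$basearch/
enabled=0
gpgcheck=1
EOF

# Pull the parallel bind97 packages only from the testing repo:
yum --enablerepo=c5-testing install bind97 bind97-chroot
```

Leaving `enabled=0` and opting in per-command keeps the testing repo from pulling anything else in by accident.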
- KB
On Thu, 2011-02-24 at 15:08 +1300, Machin, Greg wrote:
He wants to know why CentOS 5.5 / RHEL 5 is using a very old version of BIND "bind-chroot-9.3.6-4.P1.el5_5.3" [...]
It is my understanding that the security issue affects neither the Red Hat version of BIND nor the CentOS derivative for operating system releases 4 and 5.
This subject was mentioned here with some passion in the last 48 hours but I don't keep copies.
Please suggest to your "guy" he needs to do some Googling to find recent emails from this mailing list and other sources which may provide further information.
On 02/24/2011 01:08 PM, Machin, Greg wrote:
Hi.
He wants to know why CentOS 5.5 / RHEL 5 is using a very old version of BIND "bind-chroot-9.3.6-4.P1.el5_5.3" [...]
Hi Greg
Probably an idea to point your N&S guy at the RH 'backporting' page - https://access.redhat.com/security/updates/backporting/?sc_cid=3093
Basically, the version is kept the same to minimise impact on users, whilst bugfixes and security errata from future versions are 'backported' to the version that ships with the relevant RHEL version.
Also worthwhile pointing them at the BIND CVE in the Redhat Bugzilla, which advises on the impact on the RHEL versions - https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2011-0414
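The package name in the original question already separates the two pieces of information: the upstream base version stays at 9.3.6 for the life of the release, while the release field after it advances with every backported fix. A small sketch of pulling that string apart:

```shell
# Split the RPM name-version-release string from the question into its parts.
nvr="bind-chroot-9.3.6-4.P1.el5_5.3"
release="${nvr##*-}"   # everything after the last dash: the patch level
rest="${nvr%-*}"
version="${rest##*-}"  # the upstream base version
name="${rest%-*}"
echo "$name $version $release"
```

On the box itself, `rpm -q --changelog bind-chroot | grep CVE` should then list the individual security fixes that were backported into that release field, which is usually what an auditor actually wants to see.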
Regards
Steve
On Feb 23, 2011, at 9:08 PM, "Machin, Greg" Greg.Machin@openpolytechnic.ac.nz wrote:
Hi.
He wants to know why CentOS 5.5 / RHEL 5 is using a very old version of BIND "bind-chroot-9.3.6-4.P1.el5_5.3" [...]
Please check out:
https://access.redhat.com/security/updates/backporting/?sc_cid=3093
RHEL maintains application binary interfaces during the lifetime of their releases. Only for applications that can no longer feasibly be maintained through backporting (e.g. Firefox) do they update the version mid-release.
A lot of people don't understand the backporting way of maintaining a stable platform across a release; it took me a while to appreciate it.
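A sketch of how to demonstrate that to someone auditing by version number, assuming an RPM-based host (the package name is the one from the thread; the script degrades to a note elsewhere):

```shell
# Backported fixes are recorded in the package changelog, not in the version.
if command -v rpm >/dev/null 2>&1 && rpm -q bind >/dev/null 2>&1; then
    msg="CVE mentions in the bind changelog: $(rpm -q --changelog bind | grep -ci cve)"
else
    msg="bind RPM not installed on this host"
fi
echo "$msg"
```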
-Ross
Thank you all for helping to clarify this.
Thanks
Greg Machin Systems Administrator - Linux Infrastructure Group, Information Services
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ross Walker Sent: Thursday, 24 February 2011 3:51 p.m. To: CentOS mailing list Cc: centos@centos.org Subject: Re: [CentOS] current bind version
<snip>
On 02/23/11 6:08 PM, Machin, Greg wrote:
Hi.
He wants to know why CentOS 5.5 / RHEL 5 is using a very old version of BIND "bind-chroot-9.3.6-4.P1.el5_5.3" [...]
to put it bluntly, your security guy is pretty much worthless as such if he thinks security is audited by checking version numbers.
sadly, this is too common.
On Feb 23, 2011, at 10:23 PM, John R Pierce pierce@hogranch.com wrote:
<snip>
to put it bluntly, your security guy is pretty much worthless as such if he thinks security is audited by checking version numbers.
sadly, this is too common.
Let's face it, most auditors these days are just accountants with Information Systems Management textbooks.
The ridiculously high level of regulation has created a demand for auditors that can no longer be filled by competent, IT-skilled auditors.
Oh well these are the days.
-Ross
On Wed, Feb 23, 2011 at 10:45 PM, Ross Walker rswwalker@gmail.com wrote:
Let's face it, most auditors these days are just accountants with Information Systems Management textbooks.
Or former sysadmins who didn't make it in the "management track" but still wanted to be able to lord it over others...
On Wed, Feb 23, 2011 at 10:23 PM, John R Pierce pierce@hogranch.com wrote:
<snip>
to put it bluntly, your security guy is pretty much worthless as such if he thinks security is audited by checking version numbers.
sadly, this is too common.
No, it's actually useful. Backporting is painful, expensive, and often unreliable, and can leave unpublished zero-day exploits in the wild. An old version number also indicates feature incompatibility with other tools that rely on the new features.
I went through this last week with OpenSSH version 5.x (not currently available for RHEL or CentOS 5 except as third-party software), and bash. It turns out that OpenSSH 5.x doesn't read your .bashrc for non-login sessions, while OpenSSH 4.x did. RHEL 6 addressed this for normal use by updating bash so it behaves more like people expect, but I had users very upset that the new OpenSSH with the new features did not pick up the PATH settings from their .bashrc.
On 02/24/2011 07:12 AM, Nico Kadel-Garcia wrote:
<snip>
No, it's actually useful. Backporting is painful, expensive, and often unreliable, and can leave unpublished zero-day exploits in the wild. An old version number also indicates feature incompatibility with other tools that rely on the new features.
The above may or may not be true (I think Red Hat does a very good job of addressing security and stability with backporting) ... BUT ... if you do not like backports, then RHEL (and, since we rebuild those sources, CentOS) is not the distribution you want to be using. Backporting is what Red Hat does to fix most security issues. If you have a philosophical problem with backporting (many people do; that is their prerogative) then some other enterprise Linux version would be a much better choice.
I am not saying this to be a smart a$$ or be negative ... just saying that other enterprise distributions exist that provide long-term stability without backports ... Ubuntu LTS is a free example. They also integrate all their system libraries and audit their software for security compliance.
I went through this last week with OpenSSH version 5.x (not currently available for RHEL or CentOS 5 except by third party provided software), and bash. [...]
I would think that using an enterprise distribution of Linux, where several hundred developers are testing the integration, would serve you better than building your own openssh, your own bind, your own "everything else" and trying to bolt it onto the backport model that Red Hat uses to keep your stuff secure.
On Thu, Feb 24, 2011 at 9:31 AM, Johnny Hughes johnny@centos.org wrote:
<snip>
I would think that using an enterprise distribution of Linux where several hundreds of developers are testing the integration would serve you better than building your own openssh [...]
Nice try. It was a commercially provided OpenSSH distribution, sold for RHEL users, with thousands of users. (I'll send you vendor name privately, if you're curious.)
I agree it gets into serious pain: this is one of the many reasons I try to dissuade people from inserting their own components, built directly from source and not under RPM.
On Feb 24, 2011, at 9:31 AM, Johnny Hughes johnny@centos.org wrote:
... just saying that other enterprise distributions exist that provide long-term stability without backports ... Ubuntu LTS is a free example. [...]
I think the primary driving factor for Red Hat to employ the backport method is to maintain a stable ABI across a release, and the primary reason for that is third-party application support.
Red Hat wants to provide a platform on which commercial vendors can develop their wares, such that they can say it supports RHEL 5 or 6 and it will actually run on said platform without loss of functionality or stability.
I doubt the same can be said about Ubuntu LTS or even SLES, where a change in a library can result in a third-party application either not working or working with limited functionality.
-Ross
On 02/24/2011 05:43 PM, Ross Walker wrote:
On Feb 24, 2011, at 9:31 AM, Johnny Hughes <johnny@centos.org> wrote:
<snip>
That is absolutely true and I agree with you 100% ... I like the constant ABI across the release and the backport model, otherwise I would be building "something else".
But I also know that there are people who think backporting is the "Devil".
I was only trying to provide sane advice for those people ... I think it is much safer (and more stable) to use Ubuntu than to try to build your own latest bind and your own latest ssh and your own latest apache and your own latest php and "other stuff" and then bolt that into CentOS.
If you start breaking the constant ABI and introducing lots of new shared libs, etc., then you are totally negating the only two things (ABI and stability) that make CentOS an enterprise OS. You are likely better off using Fedora than trying to replace massive parts of CentOS with newer stuff.
Now ... I have done some custom things myself (like rolling Samba 3.4.x into C4 and C5 for Windows 7 PDC support, and CentOS 5 LDAP into CentOS 4, so I could add new C5 machines as domain controllers in new offices, with some older C4 machines as domain controllers in the old offices, without having to replace the older OSes).
So, with limited changes, it is possible.
On 2/24/11 7:37 PM, Johnny Hughes wrote:
<snip>
Can someone remind me why VMware Server 2.x broke with a RHEL/CentOS 5.x glibc update? I switched back to 1.x, which I like better anyway, but if the reason for putting up with oldness is to keep that from happening, it didn't work.
On Thu, Feb 24, 2011 at 08:04:08PM -0600, Les Mikesell wrote:
Can someone remind me why VMware server 2.x broke with a RHEL/CentOS 5.x glibc update? I switched back to 1.x which I like better anyway, but if the reason for putting up with oldness is to keep that from happening, it didn't work.
You may want to try VMware Player if you (like almost everyone else) preferred 1.x to 2.x. The later versions of Player are more like 1.x, allowing you to install an operating system from an ISO or whatever, and they work quite well with 64-bit CentOS.
I have always had issues with VMware Server and compiling kernel modules; it normally ended up costing a couple of days' effort. I have also found 2.x more resource-intensive than 1.x. Rather use ESXi 4.1 and get up and running quickly. If your hardware is not on the supported list, there are other lists of tested hardware where people have it running on "unsupported" hardware.
Player is not a solution if the Virtual machine needs to be running 24/7. It's more suited to testing and demo use.
Greg Machin Systems Administrator - Linux Infrastructure Group, Information Services
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Scott Robbins Sent: Friday, 25 February 2011 3:14 p.m. To: CentOS mailing list Subject: [CentOS] VMware (was Re: current bind version)
<snip>
On Fri, Feb 25, 2011 at 03:44:32PM +1300, Machin, Greg wrote:
<snip of good information>
Rather use ESXi 4.1 and get up and running quickly. If your hardware is not on the supported list there are other lists of tested hardware where people have it running on "Unsupported" hardware.
Player is not a solution if the Virtual machine needs to be running 24/7. It's more suited to testing and demo use.
Agreed - I haven't really found Server, however, to be all that great for 24/7, so I assumed (and we know what happens when one assumes) that it was being used for testing. For any sort of production use, ESXi 4.1 is quite good.
On 2/24/11 8:56 PM, Scott Robbins wrote:
On Fri, Feb 25, 2011 at 03:44:32PM +1300, Machin, Greg wrote:
<snip>
Agreed--I haven't really found server, however, to be all that great for 24/7, so I assumed (and we know what happens when one assumes), that it was being used for testing. For any sort of production use, ESXi 4.1 is quite good.
Player isn't good for most of my usage because most of the time I don't want the console display at all - I just connect to the guests remotely with freenx/ssh/vnc when necessary. And I have Server 1.x setups that have run for years with no attention or downtime. I agree that ESXi is better, but it wasn't free when I built the VMs and I'm running some native Centos stuff on the host along with several guests.
Anyway, my point was that the fabled library ABI stability of RHEL turned out not to work for VMware Server 2.0. But CentOS did come through with bug-for-bug compatibility as promised, causing the same crashing behavior after the same minor-rev update.
<snip>
Player isn't good for most of my usage because most of the time I don't want the console display at all - I just connect to the guests remotely with freenx/ssh/vnc when necessary. And I have Server 1.x setups that have run for years with no attention or downtime. I agree that ESXi is better, but it wasn't free when I built the VMs and I'm running some native Centos stuff on the host along with several guests.
Anyway, my point was that the fabled library ABI stability of RHEL turned out not to work for VMware Server 2.0. But CentOS did come through with bug-for-bug compatibility as promised, causing the same crashing behavior after the same minor-rev update.
Simple solution really: bring up an ESXi box and use VMware's free converter tool to convert the old VMs to ESXi (in most cases while they are running). It is a pretty seamless changeover, and ESXi is far better from a supportability and performance standpoint.
On Thu, 2011-02-24 at 22:47 -0600, Les Mikesell wrote:
Player isn't good for most of my usage because most of the time I don't want the console display at all - I just connect to the guests remotely with freenx/ssh/vnc when necessary. And I have Server 1.x setups that have run for years with no attention or downtime. I agree that ESXi is better, but it wasn't free when I built the VMs and I'm running some native Centos stuff on the host along with several guests.
Anyway, my point was that the fabled library ABI stability of RHEL turned out not to work for VMware Server 2.0. But CentOS did come through with bug-for-bug compatibility as promised, causing the same crashing behavior after the same minor-rev update.
I went through this a while back both at work and at home. At work I converted the whole shebang from VMware Server 2.0 over to KVM. At home I went with ESXi. Both were fairly painless to do, though with ESXi you need a Windows box to manage it. Eventually, I'll probably convert the home machine to KVM. Maybe. OTOH, I like not having a boot drive (other than the SD card) on the box.
Hmm...
(thinking aloud) Is anyone doing KVM on a box from a USB stick or SD card? Saves a disk, and that's what VMware is doing with ESXi...
-I
On 02/24/2011 10:47 PM, Les Mikesell wrote:
<snip>
Anyway, my point was that the fabled library ABI stability of RHEL turned out not to work for VMware Server 2.0. But CentOS did come through with bug-for-bug compatibility as promised, causing the same crashing behavior after the same minor-rev update.
The ABI is not for things like VMware when they screw up their updates ... it is for custom third-party software that you have spent $1,000,000.00 having developed, which will stop working when the ABI changes.
In the case of VMware, they support RHEL, Fedora, Ubuntu, SuSE, etc. out of the box, and they made a mistake with their RH compile.
The first (custom software breaking on an ABI change) is a far bigger issue than the second (a vendor botching one compile).
On 2/25/11 4:48 AM, Johnny Hughes wrote:
Anyway, my point was that the fabled library ABI stability of RHEL turned out not to work for VMware Server 2.0. But CentOS did come through with bug-for-bug compatibility as promised, causing the same crashing behavior after the same minor-rev update.
The ABI is not for things like VMWare when they screw up their updates
This was not a VMWare update. It was a glibc update - and the breakage was dramatic, not just the slow memory leak someone else mentioned.
... it is for custom 3rd party software that you have spent $1,000,000.00 having developed that will stop working when the ABI changes.
Can you elaborate on that? I thought ABI stability was a yes-or-no kind of question. What's different about VMware, other than RH selling a similar product?
In the case of VMWare, they support RHEL, Fedora, Ubuntu, SuSE, etc. out of the box and they made a mistake with their RH compile.
It seemed really odd - because at least the next few VMware updates didn't fix it, and they aren't exactly new at this game. The multi-distribution support mostly comes from including a bazillion libraries in the package, since the distributions refuse to standardize enough to count on them - but that doesn't include glibc... The workaround of switching to ESXi or Server 1.x was straightforward enough, so I don't need more advice about that, but the situation seemed like more than a simple mistake. Does anyone have inside information on why it happened and wasn't fixed immediately by one side or the other?
On 25/02/11 14:52, Les Mikesell wrote:
On 2/25/11 4:48 AM, Johnny Hughes wrote:
Anyway, my point was that the fabled library ABI stability of RHEL turned out not to work for VMware Server 2.0. But CentOS did come through with bug-for-bug compatibility as promised, causing the same crashing behavior after the same minor-rev update.
The ABI is not for things like VMWare when they screw up their updates
This was not a VMWare update. It was a glibc update - and the breakage was dramatic, not just the slow memory leak someone else mentioned.
I don't know this case specifically. But generally speaking, there are cases where an application is built depending on a bug in a library to work properly. When that bug gets fixed in the library, the application breaks.
ABI stability doesn't ensure that all applications will work forever. It only assures that the application binary interface doesn't change: that the arguments passed to library functions do not change, that functions do not disappear, lose or gain arguments, and that the return types of functions do not change. It does not guarantee that the behaviour of the functions doesn't change, if the behaviour was wrong to start with.
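A toy illustration of that distinction, interface versus behaviour, using two shell functions as stand-ins for two builds of the same library routine (the names are invented for the sketch):

```shell
# Same "interface" (name, one argument, prints a line), different behaviour:
# an ABI guarantee only promises the first part stays stable.
greet_v1() { printf 'HELLO %s\n' "$1"; }   # original, arguably buggy output
greet_v2() { printf 'Hello %s\n' "$1"; }   # bugfix release: same interface,
                                           # but callers that depended on the
                                           # old output will break
greet_v1 world
greet_v2 world
```

Any caller that parsed the exact output of the first version keeps linking and running against the second, yet still misbehaves, which is the kind of breakage an ABI promise cannot prevent.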
kind regards,
David Sommerseth
On Feb 25, 2011, at 5:48 AM, Johnny Hughes johnny@centos.org wrote:
<snip>
Also, VMware could have made their module load across kernel updates without a recompile if they had set their kernel module up to support kABI (kernel ABI) tracking, but they didn't.
-Ross
On 2/25/2011 8:36 AM, Ross Walker wrote:
Also, VMware could have made their module load across kernel updates without recompile if they had set their kernel module up to support KABI (kernel ABI) tracking, but they didn't.
That was the other strange thing. RHEL 5 was never a 'supported' platform, so a stable module wasn't included. I sometimes see conspiracies where they might not exist, but the whole thing felt like running a Lotus product under a Microsoft OS back in the day, when every update would "accidentally" break the competitor's product (all the way through SP6 to NT). And even before the absolute breakage, it was very difficult to stabilize the guest clock.
The net effect here was that our internal developers, who lean towards Windows anyway but will tolerate Linux if it works, just gave up. The Windows-hosted version of Server 2.x didn't have those problems.
On Friday, February 25, 2011 11:04:23 am Les Mikesell wrote:
RHEL5 was never a 'supported' platform, so a stable module wasn't included.
According to VMware's documentation, RHEL5 was and is a fully supported platform for VMware Server 2.0 (see page 26 of the current 'VMware Server User's Guide' available at vmware.com for confirmation). The binary modules are found, for the x86_64 distribution, in vmware-server-distrib/lib/modules/binary/bld-2.6.18-8.el5-x86_64smp-RHEL5/
VMware Workstation has no issues with the glibc update; VMware is just not properly supporting VMware Server. It has nothing to do with Red Hat (Ubuntu is also listed as a supported OS, yet when you do the glibc update that matches the one that causes the issues on RHEL, the same thing happens there). VMware would prefer you run ESX or ESXi instead of the 'ye olde' GSX product, now known as VMware Server.
VMware Workstation has no issues with the glibc update; VMware is just not properly supporting VMware Server. It has nothing to do with Red Hat (Ubuntu is also listed as a supported OS, yet when you do the glibc update that matches the one that causes the issues on RHEL, the same thing happens there). VMware would prefer you run ESX or ESXi instead of the 'ye olde' GSX product, now known as VMware Server.
VMware Server is a dead product and has been dead for many years; VMware refocused all of its efforts on ESXi/ESX back in 2008.
-David
On Fri, Feb 25, 2011 at 9:11 AM, David Brian Chait dchait@invenda.com wrote:
VMware Workstation has no issues with the glibc update; VMware is just not properly supporting VMware Server. It has nothing to do with Red Hat (Ubuntu is also listed as a supported OS, yet when you do the glibc update that matches the one that causes the issues on RHEL, the same thing happens there). VMware would prefer you run ESX or ESXi instead of the 'ye olde' GSX product, now known as VMware Server.
VMware Server is a dead product and has been dead for many years; VMware refocused all of its efforts on ESXi/ESX back in 2008.
... in the server world, maybe. VMware Workstation is actively maintained and, as Lamar said, it has no issues with the glibc update.
Akemi
On 2/25/2011 11:24 AM, Akemi Yagi wrote:
On Fri, Feb 25, 2011 at 9:11 AM, David Brian Chait dchait@invenda.com wrote:
VMware Workstation has no issues with the glibc update; VMware is just not properly supporting VMware Server. It has nothing to do with Red Hat (Ubuntu is also listed as a supported OS, yet when you do the glibc update that matches the one that causes the issues on RHEL, the same thing happens there). VMware would prefer you run ESX or ESXi instead of the 'ye olde' GSX product, now known as VMware Server.
VMware Server is a dead product and has been dead for many years; VMware refocused all of its efforts on ESXi/ESX back in 2008.
... in the server world, maybe. VMware Workstation is actively maintained and, as Lamar said, it has no issues with the glibc update.
And we didn't see any similar problems with VMware Server hosted on Windows.
On 02/25/11 8:04 AM, Les Mikesell wrote:
Windows-hosted version of Server 2.x didn't have those problems.
I found all versions of VMware Server 2.0.x to be unstable under load on multiple different platforms and essentially unusable. That was when I switched those systems over to VBox.
On 25/02/2011 1:13 PM, Scott Robbins wrote:
On Thu, Feb 24, 2011 at 08:04:08PM -0600, Les Mikesell wrote:
Can someone remind me why VMware server 2.x broke with a RHEL/CentOS 5.x glibc update? I switched back to 1.x which I like better anyway, but if the reason for putting up with oldness is to keep that from happening, it didn't work.
You may want to try VMware Player if you (like almost everyone else) preferred 1.x to 2.x. The later versions of Player are more like 1.x, allowing you to install an operating system from ISO or whatever, and they work quite well with 64-bit CentOS.
I have begun switching all my hosts that lack hardware virtualization, and so can't run ESXi, over to VirtualBox.
With the addition of an init.d script it works well as a headless virtual host, and the VirtualBox command-line support is far superior to VMware Server's. With the help of Puppet I have automated the entire host install and configuration, guest VM creation, and guest install and configuration.
VirtualBox was also far easier to wrap Puppet around than VMware Server was.
Ben
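Ben's headless setup above boils down to a handful of VBoxManage calls. A minimal sketch, with the VM name, OS type, and bridge NIC purely illustrative, and guarded so it is harmless on a machine without VirtualBox:

```shell
# Create, configure, and boot a headless VirtualBox guest from the CLI.
# Only acts where VirtualBox is actually installed.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage createvm --name c5guest --ostype RedHat_64 --register
    VBoxManage modifyvm c5guest --memory 1024 --nic1 bridged --bridgeadapter1 eth0
    VBoxHeadless --startvm c5guest &   # runs with no console display at all
else
    echo "VirtualBox not installed; commands shown for illustration only"
fi
```

An init.d wrapper then just iterates over `VBoxManage list runningvms` to stop guests cleanly at shutdown, which is the piece that Puppet can template per host.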
On 02/24/11 9:18 PM, Ben wrote:
I have begun switching all my hosts that lack hardware virtualization, and so can't run ESXi, over to VirtualBox.
ESXi only needs hardware virtualization support for 64-bit guest VMs. As long as you can live with 32-bit VMs, you're good with older CPUs. I have it running a dozen or more VMs on a quad Opteron 850 system (4 x single-core 2.4GHz).
On 25/02/2011 4:51 PM, John R Pierce wrote:
On 02/24/11 9:18 PM, Ben wrote:
I have begun switching all my hosts that lack hardware virtualization, and so can't run ESXi, over to VirtualBox.
ESXi only needs hardware virtualization support for 64-bit guest VMs. As long as you can live with 32-bit VMs, you're good with older CPUs. I have it running a dozen or more VMs on a quad Opteron 850 system (4 x single-core 2.4GHz).
Thanks, I did not know that. I could've sworn I had tested it on some old IBM x306. Will have to take a look into that.
I still like that automation that I get with CentOS, puppet and VirtualBox.
Ben
Thanks, I did not know that. I could've sworn I had tested it on some old IBM x306. Will have to take a look into that.
I still like that automation that I get with CentOS, puppet and VirtualBox.
Ben
I think you need to download VI3 rather than 4.1 to get 32-bit support, but it does work. I have it in production on some older hardware and it has not let me down yet.
On Thu, Feb 24, 2011 at 10:36:28PM -0800, David Brian Chait wrote:
I think you need to download VI3 rather than 4.1 to get 32-bit support, but it does work. I have it in production on some older hardware and it has not let me down yet.
I believe David is correct. We had some old machines we were going to use with 4.1, but they were 32-bit and so we weren't able to use them.
On 2/25/11 7:33 AM, Scott Robbins wrote:
On Thu, Feb 24, 2011 at 10:36:28PM -0800, David Brian Chait wrote:
I think you need to download VI3 rather than 4.1 to get 32-bit support, but it does work. I have it in production on some older hardware and it has not let me down yet.
I believe David is correct. We had some old machines we were going to use with 4.1, but they were 32-bit and so we weren't able to use them.
There are CPUs that are 64-bit (with the lm flag) but don't have hardware virtualization (VT). I'm not sure whether those can run ESXi, but with Server/Player they can run 32-bit guests only. CPUs with the VT option (which in some cases can be enabled/disabled in the BIOS) can run 64-bit guests under VMware Server/Player even if the host runs a 32-bit OS.
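Both capabilities can be checked directly from the flags line of /proc/cpuinfo on any Linux host. A sketch, using a hard-coded sample flags string so the matching logic is visible (on a real host you would read it as shown in the comment):

```shell
# On a real host:  flags="$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
# Sample flags string for illustration: 64-bit capable (lm) with Intel VT (vmx).
flags="fpu vme pse tsc msr lm constant_tsc vmx"

case " $flags " in
    *" lm "*) echo "64-bit capable (lm present)" ;;
    *)        echo "32-bit only" ;;
esac
case " $flags " in
    *" vmx "*|*" svm "*) echo "hardware virtualization (VT-x/AMD-V) present" ;;
    *)                   echo "no hardware virtualization" ;;
esac
```

The vmx flag is Intel's VT-x, svm is AMD-V; either one is what ESXi and the 64-bit-guest cases above actually depend on.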
On Feb 25, 2011, at 9:01 AM, Les Mikesell lesmikesell@gmail.com wrote:
On 2/25/11 7:33 AM, Scott Robbins wrote:
On Thu, Feb 24, 2011 at 10:36:28PM -0800, David Brian Chait wrote:
I think you need to download the VI3 rather than 4.1 to use 32 bit support, but it does work. I have it in production on some older hardware and it has not let me down yet.
I believe David is correct. We had some old machines we were going to use with 4.1, but they were 32 bit and so we weren't able to use them.
There are CPUs that are 64-bit (with the lm flag) but don't have hardware virtualization (VT). I'm not sure whether those can run ESXi, but with Server/Player they can run 32-bit guests only. CPUs with the VT option (which in some cases can be enabled/disabled in the BIOS) can run 64-bit guests under VMware Server/Player even if the host runs a 32-bit OS.
I think the support goes like this:
A 32-bit OS (VI3) can run 32-bit guests with software virtualization, but not 64-bit guests without hardware virtualization support.
A 64-bit OS (VI4) can't run without hardware virtualization support at all.
VMware doesn't see software virtualization as performing well enough for modern workloads, so it dropped it in 4.1, I believe. I think 4.0 might still have supported 32-bit non-hardware virtualization, though.
-Ross
You may want to try VMware Player if you (like almost everyone else) preferred 1.x to 2.x. The later versions of Player are more like 1.x, allowing you to install an operating system from ISO or whatever, and they work quite well with 64-bit CentOS.
If you want automation, forget Player. Current versions are distributed as installer scripts, not RPMs, and I don't think they have fixed the problem/feature that prevents running the script non-interactively.
On 02/24/2011 06:04 PM, Les Mikesell wrote:
Can someone remind me why VMware server 2.x broke with a RHEL/CentOS 5.x glibc update? I switched back to 1.x which I like better anyway, but if the reason for putting up with oldness is to keep that from happening, it didn't work.
Ultimately it broke because VMware was never interested in actually supporting VMware Server 2. It had 'issues' right from the start, such as some type of resource leak that would (and still does) slowly degrade performance unless it was rebooted every week or two. It would stop running after a kernel upgrade unless you wrote a script to automatically recompile the necessary drivers when a new kernel was detected. After two sub-point releases they never addressed the glibc incompatibility at all; those of us who continued to use it did so by hacking things around so an older glibc was loaded just for it. Then there was the 'management console only works with SSLv2' issue that was never addressed, and known security problems they pretty much said they were 'not going to fix'. Finally they 'redefined' their way out of their own support policy, where a previous support level became 'you can look for any solutions on the forum'. You couldn't even *buy* support for it.
It has been abandonware for years. I've been migrating our systems to KVM for some months now.
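The recompile-on-upgrade script Les mentions usually amounted to comparing the running kernel against a stamp file at boot and re-running VMware's bundled configurator (`vmware-config.pl`) when they differed. A rough sketch; the stamp-file path is invented for illustration, and the configurator call is left as a comment since it only exists on a real VMware Server host:

```shell
# Boot-time check: rebuild VMware's kernel modules after a kernel upgrade.
needs_rebuild() {   # needs_rebuild STAMPFILE KERNEL -> true if a rebuild is needed
    [ ! -f "$1" ] || [ "$(cat "$1")" != "$2" ]
}

STAMP=/var/run/vmware-built-for     # illustrative stamp-file location
KERNEL="$(uname -r)"
if needs_rebuild "$STAMP" "$KERNEL"; then
    echo "kernel changed to $KERNEL: VMware modules need a rebuild"
    # On a real VMware Server host one would then run, roughly:
    #   /usr/bin/vmware-config.pl --default && echo "$KERNEL" > "$STAMP"
fi
```

This is exactly the maintenance burden a kABI-tracking module would have avoided.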
On Feb 24, 2011, at 8:37 PM, Johnny Hughes johnny@centos.org wrote:
On 02/24/2011 05:43 PM, Ross Walker wrote:
On Feb 24, 2011, at 9:31 AM, Johnny Hughes johnny@centos.org wrote:
I am not saying this to be a smart a$$ or to be negative ... just saying that other enterprise distributions exist that provide long-term stability without backports ... Ubuntu LTS is a free example. They also provide integration of all their system libraries and audit their software for security compliance.
I think the primary driving factor for Red Hat to employ the backport method is to maintain a stable ABI across a release, and the primary reason for that is third-party application support.
Red Hat wants to provide a platform on which commercial vendors can develop their wares, such that they can say it supports RHEL 5 or 6 and it will actually run on said platform without loss of functionality or stability.
I doubt the same can be said about Ubuntu LTS or even SLES, where a change in a library can result in the third-party application either not working or working with limited functionality.
That is absolutely true and I agree with you 100% ... I like the constant ABI across the release and the backport model, otherwise I would be building "something else".
But I also know that there are people who think backporting is the "Devil".
I was only trying to provide sane advice for those people ... I think it is much safer (and more stable) to use Ubuntu than to try to build your own latest bind and your own latest ssh and your own latest apache and your own latest php and "other stuff" and then bolt that into CentOS.
If you start breaking the constant ABI and introducing lots of new shared libs, etc., then you are totally negating the only two things (ABI and stability) that make CentOS an enterprise OS. You are even likely better off using Fedora than trying to replace massive parts of CentOS with newer stuff.
Now ... I have done some custom things myself (like rolling Samba 3.4.x for Windows 7 PDC support into C4 and C5, and CentOS 5 LDAP into CentOS 4, so I could add new C5 machines as domain controllers in new offices, with some older C4 machines as domain controllers in the old offices, without having to replace the older OSes).
So, with limited changes, it is possible.
I was pretty sure you understood; it was more for the audience.
Also, to add: there is nothing wrong with adding custom builds of software; just make sure they go in '/usr/local' for 'make install' builds, along with their updated libraries if they need updated libraries. If one is doing custom RPM builds it is still better to locate them in '/usr/local' if possible; otherwise make damn sure they don't conflict with the base CentOS RPMs, or one may find oneself in dependency hell.
-Ross
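To make the '/usr/local' advice above concrete, a small sketch; the package name is a placeholder, and the build only runs if that source directory actually exists:

```shell
# Keep a custom 'make install' build out of the RPM-managed tree by giving
# it an explicit prefix under /usr/local, which base CentOS RPMs never own.
set -e
if [ -d mypkg-1.0 ]; then
    ( cd mypkg-1.0 && ./configure --prefix=/usr/local && make && make install )
else
    echo "mypkg-1.0 source not present; the --prefix is the point"
fi

# For a custom RPM instead, list its payload and check which installed
# package (if any) already owns each path, to spot conflicts before installing:
#   rpm -qlp mypkg-1.0-1.x86_64.rpm | xargs rpm -qf
```

The conflict check at the end is how one avoids the dependency hell Ross warns about: if any path comes back owned by a base package, the custom RPM needs a different prefix or file layout.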