Hi,
Does somebody know when RHEL 5.3 will be released? What is generally the delay between RHEL/CentOS releases?
Regards Alain
Alain PORTAL wrote:
Does somebody know when RHEL 5.3 will be released? What is generally the delay between RHEL/CentOS releases?
The official 5.3 beta period is now over, so it would be a fair guess to say we expect 5.3 any time from early to late February.
The aim is to get CentOS-5.3 released within a few weeks of public release upstream.
On Friday 16 January 2009 at 15:27, Karanbir Singh wrote:
The official 5.3 beta period is now over, so it would be a fair guess to say we expect 5.3 any time from early to late February.
Hmm... I thought it was at the beginning of January.
The aim is to get CentOS-5.3 released within a few weeks of public release upstream.
What does "a few" mean? ;-) Well, I take it that CentOS 5.3 will be released near the end of March; I can't wait that long. So, I'll install 5.2.
Thanks! Alain
Alain PORTAL wrote:
So, I'll install 5.2
you might want to look into exactly what the .2 and .3 signify in there.
on 1-16-2009 6:32 AM Alain PORTAL spake the following:
On Friday 16 January 2009 at 15:27, Karanbir Singh wrote:
The official 5.3 beta period is now over, so it would be a fair guess to say we expect 5.3 any time from early to late February.
Hmm... I thought it was at the beginning of January.
The aim is to get CentOS-5.3 released within a few weeks of public release upstream.
What does "a few" mean? ;-) Well, I take it that CentOS 5.3 will be released near the end of March; I can't wait that long. So, I'll install 5.2.
When 5.3 is released there is a "magic" incantation that will transform your 5.2 into 5.3!
Here it is;
Are you ready?
"yum update"
Easy, huh?
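For readers following along, the point-release move really is just the normal update path; a minimal sketch of the usual sequence (the stock CentOS repository setup is assumed, and the reboot step only matters when a new kernel arrives):

```shell
# Flush cached repository metadata so the new point-release repos are seen
yum clean all

# Pull in every updated package for the new point release
yum update

# Point releases usually ship a new kernel; reboot into it when convenient
shutdown -r now
```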
On Friday 16 January 2009, Scott Silva wrote:
When 5.3 is released there is a "magic" incantation that will transform your 5.2 into 5.3!
Here it is;
Are you ready?
"yum update"
Easy, huh?
Are you really sure about that?
On Fri, 2009-01-16 at 18:59 +0100, Alain PORTAL wrote:
On Friday 16 January 2009, Scott Silva wrote:
When 5.3 is released there is a "magic" incantation that will transform your 5.2 into 5.3!
Here it is;
Are you ready?
"yum update"
Easy, huh?
Are you really sure about that?
I am.
:) -sv
On Friday 16 January 2009, seth vidal wrote:
On Fri, 2009-01-16 at 18:59 +0100, Alain PORTAL wrote:
Are you really sure about that?
I am.
:)
OK. If you say so, I trust you ;-) I thought an upgrade was needed.
Regards Alain
On Fri, 2009-01-16 at 19:16 +0100, Alain PORTAL wrote:
On Friday 16 January 2009, seth vidal wrote:
On Fri, 2009-01-16 at 18:59 +0100, Alain PORTAL wrote:
Are you really sure about that?
I am.
:)
OK. If you say so, I trust you ;-) I thought an upgrade was needed.
a long time ago that was potentially true.
The difference between yum update and yum upgrade was whether or not obsoletes were processed. In an upgrade they were; in an update they defaulted to not being processed. This was necessary in a world where mutually obsoleting packages were allowed. Since about RHEL 4, no one has been letting a distro out the door with mutually obsoleting packages, so it is no longer a big deal.
obsoletes=1 is now the yum default.
Hope that helps. -sv
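The obsoletes default seth mentions is visible in yum's configuration; a sketch of checking it (the file location is the stock one):

```shell
# Show whether obsoletes processing is enabled in the main config
# (obsoletes=1 means enabled, the modern default)
grep -i '^obsoletes' /etc/yum.conf

# Equivalent one-off switch on the command line
yum --obsoletes update
```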
Alain PORTAL wrote:
On Friday 16 January 2009, Scott Silva wrote:
When 5.3 is released there is a "magic" incantation that will transform your 5.2 into 5.3!
Here it is;
Are you ready?
"yum update"
Easy, huh?
Are you really sure about that?
It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0 which was the first I tried). They do tend to be big updates, though. Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger changes.
On Friday 16 January 2009, seth vidal wrote:
On Fri, 2009-01-16 at 19:16 +0100, Alain PORTAL wrote:
OK. If you say it, I trust you ;-) I thought an upgrade was needed.
a long time ago that was potentially true.
The difference between yum update and yum upgrade was whether or not obsoletes were processed. In an upgrade they were; in an update they defaulted to not being processed. This was necessary in a world where mutually obsoleting packages were allowed. Since about RHEL 4, no one has been letting a distro out the door with mutually obsoleting packages, so it is no longer a big deal.
obsoletes=1 is now the yum default.
Hope that helps.
For understanding, yes.
Thanks
On Friday 16 January 2009, Les Mikesell wrote:
Alain PORTAL wrote:
Are you really sure about that?
It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0 which was the first I tried). They do tend to be big updates, though. Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger changes.
No confusion for me. I understood that upgrading X.Y -> X+1.0 is a bad idea.
Alain PORTAL wrote:
On Friday 16 January 2009, Scott Silva wrote:
When 5.3 is released there is a "magic" incantation that will transform your 5.2 into 5.3!
Here it is;
Are you ready?
"yum update"
Easy, huh?
Are you really sure about that?
When the server is at least an hour's drive away, you tend to make sure before you do this.
But I have guided various CentOS machines through the minor versions - from 5.0 to 5.1 and 5.2, and with 4.2 all the steps up to 4.7.
Never so much as a glitch.
In fact a normal kernel security update in between versions is the only time I need to do a reboot of the hardware and keep my fingers crossed.
Hugo.
2009/1/16 Alain PORTAL alain.portal@univ-montp2.fr:
Does somebody know when RHEL 5.3 will be released?
That'll be today ;-)
http://www.redhat.com/about/news/prarchive/2009/rhel_5_3.html
On Tuesday 20 January 2009, Andy Burns wrote:
2009/1/16 Alain PORTAL alain.portal@univ-montp2.fr:
Does somebody know when RHEL 5.3 will be released?
That'll be today ;-)
http://www.redhat.com/about/news/prarchive/2009/rhel_5_3.html
Well! Good news! ;-)
On Fri, 16 Jan 2009, Alain PORTAL wrote:
On Friday 16 January 2009, Les Mikesell wrote:
Alain PORTAL wrote:
Are you really sure about that?
It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0 which was the first I tried). They do tend to be big updates, though. Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger changes.
No confusion for me. I understood that upgrading X.Y -> X+1.0 is a bad idea.
I don't think it is a bad idea. I just think that sometimes there are some problems, or RedHat is not prepared to say that it will work. CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
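The option Charlie mentions is given at the installer boot prompt, not on a running system; a sketch (the exact prompt wording varies by release):

```shell
# At the anaconda "boot:" prompt, start the installer in upgrade mode even
# when the installed product string is not one this installer expects:
linux upgradeany
```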
On Wed, 21 Jan 2009, Charlie Brady wrote:
On Fri, 16 Jan 2009, Alain PORTAL wrote:
On Friday 16 January 2009, Les Mikesell wrote:
Alain PORTAL wrote:
Are you really sure about that?
It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0 which was the first I tried). They do tend to be big updates, though. Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger changes.
No confusion for me. I understood that upgrading X.Y -> X+1.0 is a bad idea.
I don't think it is a bad idea. I just think that sometimes there are some problems, or RedHat is not prepared to say that it will work. CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
I was surprised to find the following statement in the RHEL 5.3 release notes:
---- While anaconda's "upgrade" option will perform an upgrade from Red Hat Enterprise Linux 4.7 or 5.2 to Red Hat Enterprise Linux 5.3, there is no guarantee that the upgrade will preserve all of a system's settings, services, and custom configurations. For this reason, Red Hat recommends that you perform a fresh installation rather than an upgrade. ----
So they are advising a fresh install of RHEL 5.3 even when you're running RHEL 5.2, which seems scary to me. I would hope this is mostly because of the many Xen improvements, and thus mostly applies to their Advanced Platform.
But still, this is certainly not a good development.
Charlie Brady wrote:
On Fri, 16 Jan 2009, Alain PORTAL wrote:
On Friday 16 January 2009, Les Mikesell wrote:
Alain PORTAL wrote:
Are you really sure about that?
It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0 which was the first I tried). They do tend to be big updates, though. Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger changes.
No confusion for me. I understood that upgrading X.Y -> X+1.0 is a bad idea.
I don't think it is a bad idea. I just think that sometimes there are some problems, or RedHat is not prepared to say that it will work. CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
Two days ago I did a 3.5 -> 5.2 upgrade via anaconda (upgradeany). No real problems except a lack of X drivers after the install.
On Wed, 2009-01-21 at 15:19 +0100, Dag Wieers wrote:
I was surprised to find the following statement in the RHEL 5.3 release notes:
While anaconda's "upgrade" option will perform an upgrade from Red Hat Enterprise Linux 4.7 or 5.2 to Red Hat Enterprise Linux 5.3, there is no guarantee that the upgrade will preserve all of a system's settings, services, and custom configurations. For this reason, Red Hat recommends that you perform a fresh installation rather than an upgrade.
So they are advising to reinstall RHEL 5.3 even when you're running RHEL 5.2.
*speaking _for me / as me_, as always, etc.*
I don't see the above text in the release notes, but what I do see is the top section of:
https://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Release_No...
...which implies (to me) that the text you quoted is saying something like "although a 4.7 => 5.3 upgrade has the same UI as 5.2 => 5.3, you shouldn't expect it to work as well in all cases".
And, of course, that's all anaconda specific ... going 5.2 => 5.3 via "yum update" is expected to just work.
On Wed, 21 Jan 2009, James Antill wrote:
On Wed, 2009-01-21 at 15:19 +0100, Dag Wieers wrote:
I was surprised to find the following statement in the RHEL 5.3 release notes:
While anaconda's "upgrade" option will perform an upgrade from Red Hat Enterprise Linux 4.7 or 5.2 to Red Hat Enterprise Linux 5.3, there is no guarantee that the upgrade will preserve all of a system's settings, services, and custom configurations. For this reason, Red Hat recommends that you perform a fresh installation rather than an upgrade.
So they are advising to reinstall RHEL 5.3 even when you're running RHEL 5.2.
*speaking _for me / as me_, as always, etc.*
I don't see the above text in the release notes, but what I do see is the top section of:
https://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Release_No...
...which implies (to me) that the text you quoted is saying something like "although a 4.7 => 5.3 upgrade has the same UI as 5.2 => 5.3, you shouldn't expect it to work as well in all cases".
And, of course, that's all anaconda specific ... going 5.2 => 5.3 via "yum update" is expected to just work.
I meant announcement, not release notes. See:
https://www.redhat.com/archives/rhelv5-announce/2009-January/msg00000.html
If "yum update" is expected to work, the quoted paragraph is very badly worded. This is the kind of thing the Ubuntu people use against RPM-based distributions.
And however ill-informed it may be, it is better to prevent than to remedy.
Manuel Wolfshant wrote:
Charlie Brady wrote:
I don't think it is a bad idea. I just think that sometimes there are some problems, or RedHat is not prepared to say that it will work. CentOS 3 -> 4 worked for me, using 'upgradeany' option to anaconda.
Two days ago I did a 3.5 -> 5.2 upgrade via anaconda (upgradeany). No real problems except a lack of X drivers after the install.
The main problem I see is that sometimes packages get replaced by others.
For example, 2.1 contained the wu-imap server. I think from 3 on (and certainly in 4; I've not actually installed 3 anywhere), wu-imap was dropped and now we can choose between Cyrus IMAP and Dovecot.
Similarly, wu-ftpd was dropped at some point.
When these package substitutions are made, there is no chance at all of the old configuration being translated into the new.
And then there's postgresql. One has to backup one's data before upgrading major postgresql releases and then restore into the new.
On Fri, 23 Jan 2009, John Summerfield wrote:
And then there's postgresql. One has to backup one's data before upgrading major postgresql releases and then restore into the new.
I consider that a major upstream bug.
However, at the least a %pre script should create an SQL dump before upgrading major releases, so the user is not left with an unusable blob.
Better would be for postgresql to ship a standalone SQL dumper, which can read old file formats.
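A sketch of the %pre idea, as a hypothetical scriptlet body - this is not the actual postgresql packaging, and the paths and service name are illustrative:

```shell
# Hypothetical %pre scriptlet body: take a safety dump before the server
# binaries are replaced, but only if the old server is present and running
if [ -x /usr/bin/pg_dumpall ] && service postgresql status >/dev/null 2>&1; then
    su - postgres -c "pg_dumpall > /var/lib/pgsql/pre-upgrade-$(date +%Y%m%d).sql"
fi
```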
On Thu, 22 Jan 2009, Charlie Brady wrote:
Better would be for postgresql to ship a standalone SQL dumper...
i.e. one which is self-contained, and doesn't require a running postmaster. openldap's slapcat is such a beast, for ldap backend to LDIF dumping.
And then there's postgresql. One has to backup one's data before upgrading major postgresql releases and then restore into the new.
Not to veer completely off-topic, but the PostgreSQL Development Group (PGDG) are very good about making RHEL packages. Unless the application you're using is constrained to the particular PG version supplied by the Upstream Provider, or you are paying the Upstream Provider for support and you want to stick with their packages - using the PGDG packages will provide you with more benefits than sticking with the OS-based packages, provided you can justify the time to dump-restore. There really isn't a compelling reason to stick with 8.1, as 8.3 has many performance benefits.
Cheers, -Josh
Charlie Brady wrote:
On Thu, 22 Jan 2009, Charlie Brady wrote:
Better would be for postgresql to ship a standalone SQL dumper...
There is an ongoing effort to create an in-place-upgrade for PostgreSQL, http://wiki.postgresql.org/images/1/17/Pg_upgrade.pdf
Regards,
Peter
i.e. one which is self-contained, and doesn't require a running postmaster. openldap's slapcat is such a beast, for ldap backend to LDIF dumping.
I consider that a major upstream bug. Better would be for postgresql to ship a standalone SQL dumper, which can read old file formats.
Charlie,
Would you expect a "simple" upgrade of Oracle 10i to Oracle 11, for your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
Any major database version upgrade requires the attention of a qualified DBA who knows how to test data and applications against the new DB version, and then dump/upgrade/restore.
For example, PostgreSQL introduced some minor syntactical differences with 8.3. If your application uses the features affected by these changes, it would be impossible to simply 'dump/restore' without some massaging of the data and the application.
PostgreSQL does ship with a dumper, pg_dump. If you have the current version of postmaster, then you use pg_dump to connect to that and dump your data in a version-agnostic format. IMHO, the effort of writing a standalone dumper that can recognize all the old file formats is not worth it, because it is a mistake to delete the old version of postmaster off your system before you've done a dump of the database.
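The usual safe pattern is to run the new version's dump tool against the still-running old server; a sketch (paths and versions are illustrative):

```shell
# Dump the whole old cluster using the newer binaries, which emit a dump
# the new server is known to accept
/usr/local/pgsql-8.3/bin/pg_dumpall -p 5432 > /var/tmp/cluster-8.2.sql
```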
Cheers, -Josh
Joshua Kramer wrote:
Any major database version upgrade requires the attention of a qualified DBA who knows how to test data and applications against the new DB version, and then dump/upgrade/restore.
For example, PostgreSQL introduced some minor syntactical differences with 8.3. If your application uses the features affected by these changes, it would be impossible to simply 'dump/restore' without some massaging of the data and the application.
PostgreSQL does ship with a dumper, pg_dump. If you have the current version of postmaster, then you use pg_dump to connect to that and dump your data in a version-agnostic format. IMHO, the effort of writing a standalone dumper that can recognize all the old file formats is not worth it, because it is a mistake to delete the old version of postmaster off your system before you've done a dump of the database.
So how do you package such a thing in RPM so it can permit both new and old instances to run simultaneously while you do all of this required testing? I suppose these days virtualbox is an almost-reasonable answer but it just seems wrong to have a system that by design doesn't let you test a new instance before replacing the old one.
On Jan 22, 2009, at 12:25 PM, Les Mikesell wrote:
Joshua Kramer wrote:
Any major database version upgrade requires the attention of a qualified DBA who knows how to test data and applications against the new DB version, and then dump/upgrade/restore.
For example, PostgreSQL introduced some minor syntactical differences with 8.3. If your application uses the features affected by these changes, it would be impossible to simply 'dump/restore' without some massaging of the data and the application.
PostgreSQL does ship with a dumper, pg_dump. If you have the current version of postmaster, then you use pg_dump to connect to that and dump your data in a version-agnostic format. IMHO, the effort of writing a standalone dumper that can recognize all the old file formats is not worth it, because it is a mistake to delete the old version of postmaster off your system before you've done a dump of the database.
So how do you package such a thing in RPM so it can permit both new and old instances to run simultaneously while you do all of this required testing? I suppose these days virtualbox is an almost-reasonable answer but it just seems wrong to have a system that by design doesn't let you test a new instance before replacing the old one.
Historical note: A long time ago (RHL 5.2 iirc) transparent upgrades of postgres databases were attempted within *.rpm packaging. The result was a total disaster.
The moral: don't attempt the database conversion while upgrading.
Arrange paths in postgres packaging so that both old <-> new utilities are available when needed. That can most easily be done by including whatever old utilities are needed in the new package so that the conversion can be done after the old -> new upgrade.
Alternatively, one can also attempt multiple installs of postgres side-by-side kinda like kernel packages are done.
hth
73 de Jeff
Guys,
This is the CentOS-devel list. Will you please take this discussion to the general list.
Thanks. Alan.
On Thu, 22 Jan 2009, Joshua Kramer wrote:
Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
No, but I wouldn't choose to use those.
PostgreSQL does ship with a dumper, pg_dump. If you have the current version of postmaster, then you use pg_dump to connect to that and dump your data in a version-agnostic format.
I know all that.
Thanks.
So how do you package such a thing in RPM so it can permit both new and old instances to run simultaneously while you do all of this required testing? I suppose these days virtualbox is an almost-reasonable answer
I think this discussion is a reflection of our different environments. :)
On my websites... when 8.3 came out, I downloaded it to a test machine. I then did a dump of the production data from 8.2, and did an import into my 8.3 test machine. After pointing an Apache dev instance at the test database, I could verify that my applications still worked, and make any code changes that were required.
After I had a test/dev environment that was stable under 8.3, I planned the migration: 1. Dump 8.2; 2. Shutdown 8.2 and remove packages; 3. Move 8.2's data directory; 4. Install 8.3 packages, and initdb; 5. Import data made during the dump and start db; 6. Migrate code changes to web server. After things baked for a week and there were no errors, I deleted the old 8.2 data directories.
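Those six steps might look like this as commands - a sketch only; the package names, paths, and init procedure are illustrative:

```shell
# 1. Dump the old cluster while 8.2 is still running
su - postgres -c 'pg_dumpall > /var/tmp/pg82-dump.sql'

# 2. Shut down the old server and remove its packages
service postgresql stop
rpm -e postgresql-server postgresql

# 3. Keep the old data directory around as a fallback
mv /var/lib/pgsql/data /var/lib/pgsql/data-8.2

# 4. Install the 8.3 packages and initialise a fresh cluster
rpm -ivh postgresql-8.3*.rpm postgresql-server-8.3*.rpm
su - postgres -c 'initdb -D /var/lib/pgsql/data'

# 5. Start the new server and restore the dump
service postgresql start
su - postgres -c 'psql -f /var/tmp/pg82-dump.sql postgres'

# 6. (Migrate any application code changes, then delete data-8.2 later.)
```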
I realize that this is much more difficult if you're using a VM on a web host that only allows one machine. Is this the type of environment that is constraining you? As long as you can test your application under the new database version to make sure it's OK, the migration can be done on one machine. But let me ask: in what case would you not want to test your application against a new database version?
--Josh
Joshua Kramer wrote:
So how do you package such a thing in RPM so it can permit both new and old instances to run simultaneously while you do all of this required testing? I suppose these days virtualbox is an almost-reasonable answer
I think this discussion is a reflection of our different environments. :)
I see it as a generic problem with RPM packaging/deployment that you are forced to work around by maintaining duplicate equipment.
On my websites... when 8.3 came out, I downloaded it to a test machine. I then did a dump of the production data from 8.2, and did an import into my 8.3 test machine. After pointing an Apache dev instance at the test database, I could verify that my applications still worked, and make any code changes that were required.
That's great if you have a test machine for every application. For an important production web site it kind of goes with the territory.
After I had a test/dev environment that was stable under 8.3, I planned the migration: 1. Dump 8.2; 2. Shutdown 8.2 and remove packages; 3. Move 8.2's data directory; 4. Install 8.3 packages, and initdb; 5. Import data made during the dump and start db; 6. Migrate code changes to web server. After things baked for a week and there were no errors, I deleted the old 8.2 data directories.
What would you do for something simple that doesn't justify buying a duplicate machine, yet is probably even more likely to break from a version change?
I realize that this is much more difficult if you're using a VM on a web host that only allows one machine. Is this the type of environment that is constraining you?
I'd just like to see a realistic approach to updates via packages.
As long as you can test your application under the new database version to make sure it's OK, the migration can be done on one machine. But let me ask: in what case would you not want to test your application against a new database version?
I do want the ability to test while still running the old version. I just don't see how that is possible with any RPM-deployed package without having duplicate hardware or virtual machines. Postgresql makes a good example because the conversion needs both new and old code present for different steps, and 8.2->8.3 especially so because some implict casts were removed that can break client code in odd ways, but the principle is the same for any change where you want to know the new version works in your environment before the old one is shut down. If you build from source you can make it use different locations and ports and run concurrently, but with RPM binaries you can't.
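The source-build workaround sketched out (the prefix and port are arbitrary choices):

```shell
# Build the new version into its own prefix so it cannot collide with
# the packaged binaries
./configure --prefix=/opt/pgsql-8.3
make && make install

# Initialise a separate data directory and run on a non-default port,
# side by side with the production instance on 5432
/opt/pgsql-8.3/bin/initdb -D /opt/pgsql-8.3/data
/opt/pgsql-8.3/bin/pg_ctl -D /opt/pgsql-8.3/data -o "-p 5433" start
```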
On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:
I'd just like to see a realistic approach to updates via packages.
Reality check:
You have a postgres upstream devel with years of experience packaging postgres and me both saying "Don't attempt postgres database upgrades in packaging."
But create your own virtual reality approach if you want.
Have fun!
73 de Jeff
Jeff Johnson wrote:
On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:
I'd just like to see a realistic approach to updates via packages.
Reality check:
You have a postgres upstream devel with years of experience packaging postgres and me both saying "Don't attempt postgres database upgrades in packaging."
But create your own virtual reality approach if you want.
I think you missed my point, which is that RPM packaging doesn't provide facilities for what needs to be done. Postgres upstream is just more honest than most in recognizing the problem. It's not the only thing that ever has non-backward-compatible updates.
On Jan 22, 2009, at 2:23 PM, Les Mikesell wrote:
Jeff Johnson wrote:
On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:
I'd just like to see a realistic approach to updates via packages.
Reality check:
You have a postgres upstream devel with years of experience packaging postgres and me both saying "Don't attempt postgres database upgrades in packaging."
But create your own virtual reality approach if you want.
I think you missed my point, which is that RPM packaging doesn't provide facilities for what needs to be done. Postgres upstream is just more honest than most in recognizing the problem. It's not the only thing that ever has non-backward-compatible updates.
I believe that one can easily conclude "RPM packaging doesn't provide facilities for what needs to be done" from "Don't attempt postgres database upgrades in packaging."
You're the one who wishes "I'd just like to see a realistic approach to updates via packages."
That Does Not Compute with current RPM facilities and existing postgres upgrade mechanisms.
73 de Jeff
Jeff Johnson wrote:
But create your own virtual reality approach if you want.
I think you missed my point, which is that RPM packaging doesn't provide facilities for what needs to be done. Postgres upstream is just more honest than most in recognizing the problem. It's not the only thing that ever has non-backward-compatible updates.
I believe that one can easily conclude "RPM packaging doesn't provide facilities for what needs to be done" from "Don't attempt postgres database upgrades in packaging."
You're the one who wishes "I'd just like to see a realistic approach to updates via packages."
Meaning I'd like RPM to be changed so multiple versions of packages could co-exist, as is often necessary in practice.
That Does Not Compute with current RPM facilities and existing postgres upgrade mechanisms.
Agreed, it doesn't work. Nor does any other RPM-managed update where you need to have both old and new packages simultaneously working for a while. The special case for the kernel is about the only place where it even attempts to keep old versions around for an emergency fallback.
On Jan 22, 2009, at 4:40 PM, Les Mikesell wrote:
Agreed, it doesn't work. Nor does any other RPM-managed update where you need to have both old and new packages simultaneously working for a while. The special case for the kernel is about the only place where it even attempts to keep old versions around for an emergency fallback.
Honking about RPM deficiencies on a CentOS Devel list is hot air going no place.
FWIW, there's no package system that I'm aware of that provides sufficient facilities to undertake a postgres upgrade reliably during upgrade. Nor is it "recommended" afaik.
But supply a pointer to your favorite package manager that _DOES_ attempt postgres database upgrades and I'll be happy to attempt equivalent in RPM.
Personally, I think that database upgrades have almost nothing to do with installing packages, but I'd rather add whatever is useful than discuss well-known RPM deficiencies for another decade.
73 de Jeff
Jeff Johnson wrote:
On Jan 22, 2009, at 4:40 PM, Les Mikesell wrote:
Agreed, it doesn't work. Nor does any other RPM-managed update where you need to have both old and new packages simultaneously working for a while. The special case for the kernel is about the only place where it even attempts to keep old versions around for an emergency fallback.
Honking about RPM deficiencies on a CentOS Devel list is hot air going no place.
FWIW, there's no package system that I'm aware of that provides sufficient facilities to undertake a postgres upgrade reliably during upgrade. Nor is it "recommended" afaik.
But supply a pointer to your favorite package manager that _DOES_ attempt postgres database upgrades and I'll be happy to attempt equivalent in RPM.
Personally, I think that database upgrades have almost nothing to do with installing packages, but I'd rather add whatever is useful than discuss well-known RPM deficiencies for another decade.
The reason the discussion is intertwined with packaging is that if you name the delivered files the same, the old and new can never co-exist as they should for the conversion and test period.
I think the only way it can be done reasonably is to install the new code with different names and/or paths and scripts that can be run later to do the conversion and (after testing) replacement of the old version.
On Jan 22, 2009, at 5:43 PM, Les Mikesell wrote:
Personally, I think that database upgrades have almost nothing to do with installing packages, but I'd rather add whatever is useful than discuss well-known RPM deficiencies for another decade.
The reason the discussion is intertwined with packaging is that if you name the delivered files the same, the old and new can never co-exist as they should for the conversion and test period.
In fact, the old <-> new can/do coexist during upgrade; lib/fsm.c in RPM has had a form of apply/commit since forever that puts the new in place (the apply) but does not remove the old (renaming into old is the commit).
And there are provisions to rename the old into a subdirectory as part of committing the new; at least the necessary path name generation, including a subdirectory, has been there since forever in RPM. Adding the necessary logic to achieve whatever goal is desired when installing files is just not that hard; the code is a state machine.
Personally (and as I pointed out), including old files in the new package is likelier to be reliable, and has the additional benefit that whatever conversions are needed can be done anytime, not just during a "window" during upgrade where old <-> new coexist. A conversion side-effect at the scale of a database conversion is hugely complicated to guarantee reliability during a "window". Are you volunteering to test?
I think the only way it can be done reasonably is to install the new code with different names and/or paths and scripts that can be run later to do the conversion and (after testing) replacement of the old version.
You think, I know, is the difference.
But as always Patches cheerfully accepted.
And you *really* need to take this conversation to an RPM list instead.
I'd add the CC, but I have no idea what RPM you wish to use.
73 de Jeff
Jeff Johnson wrote:
The reason the discussion is intertwined with packaging is that if you name the delivered files the same, the old and new can never co-exist as they should for the conversion and test period.
In fact, the old <-> new can/do coexist during upgrade; lib/fsm.c in RPM has had a form of apply/commit since forever that puts the new in place (the apply) but does not remove the old (renaming into old is the commit).
Co-existing, as in being stored somewhere isn't quite the point. They both have to be able to be run, find the correct libraries, etc, and the one currently known to work has to be found by other applications.
And there are provisions to rename the old into a subdirectory as part of committing the new; at least the necessary path name generation including a subdirectory has been there since forever in RPM. Adding the necessary logic to achieve whatever goal is desired installing files is just not that hard, the code is a state machine.
But RPM can't do it unless it can always succeed with no user/admin input. I don't believe that's possible.
Personally (and as I pointed out), including old files in the new package is likelier to be reliable, and has the additional benefit that whatever conversions are needed can be done anytime, not just during a "window" during upgrade where old <-> new coexist.
But you can't replace my current old files with new ones of the same name until you know they work. And you can't know that they work because you don't know what applications I have.
Guaranteeing the reliability of a side effect as large as a database conversion during that "window" is hugely complicated. Are you volunteering to test?
Sure, if it is something like running a script, testing an app known not to work with 8.3 (I think some versions of OpenNMS would qualify) and then seeing if the back-out strategy works.
I think the only way it can be done reasonably is to install the new code with different names and/or paths and scripts that can be run later to do the conversion and (after testing) replacement of the old version.
You think, I know, is the difference.
That's why I want the conversion to be scripted.
But, as always, patches cheerfully accepted.
And you *really* need to take this conversation to an RPM list instead.
It really doesn't have much to do with RPM. It has to do with naming the replacement files so they don't overwrite the things that have to remain.
Les Mikesell wrote:
Jeff Johnson wrote:
On Jan 22, 2009, at 2:00 PM, Les Mikesell wrote:
I'd just like to see a realistic approach to updates via packages.
Reality check:
You have a postgres upstream devel with years of experience packaging postgres, and me, both saying: don't attempt postgres database upgrades in packaging.
But create your own virtual reality approach if you want.
I think you missed my point, which is that RPM packaging doesn't provide facilities for what needs to be done. Postgres upstream is just more honest than most in recognizing the problem. It's not the only thing that ever has non-backward-compatible updates.
It's not hard to rebuild (some, at least) packages to use RPM's prefix option. It allows you to relocate the package at install time.
Doubtless Jeff will comment on the practicality of this; it's a feature that's been around for years and years, but I've not seen it used much.
Best, though, is to have a complete test system where the entire application - OS, database, webserver and anything else required - is tested as an integrated whole.
It's getting easier with so many virtualisation choices, but even that aside most organisations of any size should be able to find an old Pentium IV or better to test on.
Jeff Johnson wrote:
On Jan 22, 2009, at 4:40 PM, Les Mikesell wrote:
Agreed, it doesn't work. Nor does any other RPM-managed update where you need to have both old and new packages simultaneously working for a while. The special case for the kernel is about the only place where it even attempts to keep old versions around for an emergency fallback.
Honking about RPM deficiencies on a CentOS Devel list is hot air going no place.
FWIW, there's no package system that provides sufficient facilities to undertake a postgres upgrade reliably during upgrade that I'm aware of. Nor is it "recommended" afaik.
I thought that point was already conceded.
However, there is nothing now that prevents two versions of postgresql from being built with version-dependent directory names (as it almost is):

[root@numbat ~]# rpm -qvl postgresql | grep ^d
drwxr-xr-x 2 root root 0 Jan 12 2008 /usr/lib/pgsql
drwxr-xr-x 2 root root 0 Jan 12 2008 /usr/share/doc/postgresql-8.1.11
drwxr-xr-x 2 root root 0 Jan 12 2008 /usr/share/doc/postgresql-8.1.11/html
[root@numbat ~]#

Change that to /usr/lib/pgsql-8.1.11, create a bin directory in there and use the alternatives system to choose the default.
The configuration and data directory names need to be changed too.
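A rough sketch of how that alternatives wiring could look. The version-specific paths (/usr/lib/pgsql-8.1.11, /usr/lib/pgsql-8.3.5) are assumed examples, and the `alternatives` invocations go through a stub function that only prints them, so the sketch is safe to run; on a real system you would replace `alt` with /usr/sbin/alternatives and run as root:

```shell
# Stub: print each alternatives command instead of executing it.
# Replace with /usr/sbin/alternatives on a real system.
alt() { echo "alternatives $*"; }

# Register both version-specific bin directories as candidates for the
# generic /usr/bin/psql name (higher priority number = preferred).
alt --install /usr/bin/psql pgsql /usr/lib/pgsql-8.1.11/bin/psql 10
alt --install /usr/bin/psql pgsql /usr/lib/pgsql-8.3.5/bin/psql 20

# Pin the old version as the default until the new one has been tested.
alt --set pgsql /usr/lib/pgsql-8.1.11/bin/psql
```

The same scheme would need to cover pg_dump, postmaster, and friends, and the config/data directories would have to be versioned too, as noted above.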
But supply a pointer to your favorite package manager that _DOES_ attempt postgres database upgrades and I'll be happy to attempt equivalent in RPM.
Personally, I think that database upgrades have almost nothing to do with installing packages, but I'd rather add whatever is useful than discuss well known RPM deficiencies for another decade.
In-package (or upgrade-time) configuration conversion will always fail for some packages, but I see no reason that users shouldn't be able to run old and new versions of (at least) _some_ packages simultaneously. It would make upgrades easier for sysadmins with just a few systems to maintain - depending on needs they could upgrade a clone and test it and fix and document broken bits without having to start from scratch each time.
Joshua Kramer wrote:
I consider that a major upstream bug. Better would be for postgresql to ship a standalone SQL dumper, which can read old file formats.
Charlie,
Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
You Windows users: Yes, they would.
Hugo
On Jan 23, 2009, at 1:42 AM, Hugo van der Kooij wrote:
Joshua Kramer wrote:
I consider that a major upstream bug. Better would be for postgresql to ship a standalone SQL dumper, which can read old file formats.
Charlie,
Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
You Windows users: Yes, they would.
Actually, it's "commercial" vs "FLOSS" for the database that is the distinguishing attribute determining whether upgrades are simple in the above.
Most FLOSS databases, like postgres, are harder to upgrade than "commercial" databases like Oracle.
73 de Jeff
On Fri, 23 Jan 2009, Jeff Johnson wrote:
On Jan 23, 2009, at 1:42 AM, Hugo van der Kooij wrote:
Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
You Windows users: Yes, they would.
Actually, it's "commercial" vs "FLOSS" for the database that is the distinguishing attribute determining whether upgrades are simple in the above.
Most FLOSS databases, like postgres, are harder to upgrade than "commercial" databases like Oracle.
And it will remain that way, until FLOSS developers consider it legitimate to wonder why it is that way, and consider how to improve the situation.
From my point of view, what's egregious with packaged postgresql is that it allows you to "upgrade" a postgresql installation to a state where the data is no longer accessible. At the least, one should be able to dump the data to SQL after upgrade.
There's been much discussion about what rpm can and cannot do. One thing rpm can do, however, is to run a pre-script which uses the files of a previously installed version. A pre-script could detect an upgrade from an old version which uses an incompatible backend format, and could then create the SQL dump (starting postmaster and waiting for it if required).
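A sketch of the detection half of that idea, as a shell function a %pre scriptlet could call. The data directory path is an assumption, and the actual dump commands are left commented out since running them only makes sense on a live system; the key points are that PG_VERSION records the cluster's on-disk format version, and that at %pre time the old binaries (and thus the old pg_dumpall) are still installed:

```shell
#!/bin/sh
# Hypothetical pre-upgrade check for a postgresql-server package.
# Assumption: the cluster lives under the directory passed in, and its
# PG_VERSION file names the on-disk format version.
check_upgrade() {
    datadir=$1
    if [ ! -f "$datadir/PG_VERSION" ]; then
        echo "no existing cluster found"
        return 0
    fi
    oldver=$(cat "$datadir/PG_VERSION")
    case "$oldver" in
    8.2)
        echo "incompatible on-disk format $oldver: dump needed"
        # On a real system the scriptlet would now do something like:
        #   service postgresql start
        #   su - postgres -c "pg_dumpall -f /var/lib/pgsql/pre-upgrade-$oldver.sql"
        #   service postgresql stop
        ;;
    *)
        echo "no dump needed for $oldver"
        ;;
    esac
}

check_upgrade /var/lib/pgsql/data
```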
I don't buy the arguments that changes in the supported SQL language make automated upgrades of the backend data impossible. Dump, upgrade, re-import couldn't work if that were the case.
Thanks (over and out).
--- Charlie
Charlie Brady wrote:
From my point of view, what's egregious with packaged postgresql is that it allows you to "upgrade" a postgresql installation to a state where the data is no longer accessible. At the least, one should be able to dump the data to SQL after upgrade.
There's been much discussion about what rpm can and cannot do. One thing rpm can do, however, is to run a pre-script which uses the files of a previously installed version. A pre-script could detect an upgrade from an old version which uses an incompatible backend format, and could then create the SQL dump (starting postmaster and waiting for it if required).
Maybe. What happens if you run out of space? Or have to choose available space from different partitions or network mounts? Or you don't have the space for the reload in the new format? These are all likely scenarios for database machines.
I don't buy the arguments that changes in the supported SQL language make automated upgrades of the backend data impossible. Dump, upgrade, re-import couldn't work if that were the case.
They may work, but you can't assume that the applications will work unchanged on the new version, or that the applications are all part of the same upgrade. For example, anything that relied on the implicit casts that were removed between 8.2 and 8.3 won't work, so you'll need to convert back when you find that out. This doesn't mean the conversion can't be automated, just that the operator may need to make a few choices along the way, including when it is safe to remove the old version.
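The dump / upgrade / re-import cycle with operator checkpoints could be sketched roughly as below. All paths and package names are assumptions, and the script defaults to a dry run (DRY_RUN=1) that only prints what it would execute, so each step can be reviewed before anything destructive happens:

```shell
#!/bin/sh
# Sketch of a manual major-version upgrade with a rollback path.
# DRY_RUN=1 (the default) only prints the commands.
set -e

DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

BACKUP=/var/lib/pgsql/pre-8.3-dump.sql

run su - postgres -c "pg_dumpall -f $BACKUP"       # 1. dump with the OLD tools
run service postgresql stop                        # 2. stop the old postmaster
run yum -y update postgresql-server                # 3. install the new packages
run mv /var/lib/pgsql/data /var/lib/pgsql/data-old # 4. keep the old cluster intact
run service postgresql start                       # 5. initscript creates a fresh cluster
run su - postgres -c "psql -f $BACKUP postgres"    # 6. re-import the dump
# 7. Now test your applications. Only remove data-old (and the old
#    packages) once everything works, so rollback stays possible.
```

The point of step 4 is exactly the operator choice discussed above: the old cluster stays on disk until a human decides the new version actually works.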
Charlie Brady wrote:
On Fri, 16 Jan 2009, Alain PORTAL wrote:
Le vendredi 16 janvier 2009, Les Mikesell a écrit :
Alain PORTAL wrote:
Are you really sure about that?
It has worked for every CentOS X.Y -> X.Y+1 so far (at least since 3.0 which was the first I tried). They do tend to be big updates, though. Don't confuse it with an X.Y -> X+1.0 upgrade which can have much bigger changes.
No confusion for me. I understood that upgrading X.Y -> X+1.0 is a bad idea.
I don't think it is a bad idea. I just think that sometimes there are some problems, or RedHat is not prepared to say that it will work. CentOS 3 -> 4 worked for me, using the 'upgradeany' option to anaconda.
The biggest problem is when the upgrade is from wu-ftpd and uw-imapd to vsftpd and cyrus-imapd or such. There's no good way to automate such a process.
Joshua Kramer wrote:
I consider that a major upstream bug. Better would be for postgresql to ship a standalone SQL dumper, which can read old file formats.
Charlie,
Would you expect a "simple" upgrade of Oracle 10g to Oracle 11g, for your major enterprise application? Or, MS-SQL 2005 to MS-SQL 2008?
Any major database version upgrade requires the attention of a qualified DBA who knows how to test data and applications against the new DB version, and then dump/upgrade/restore.
I used to work for SPL (Australia) in the early 80s. We were the Australian agent for Software AG, and sold and supported ADABAS and related software in Australia and (I think) NZ.
When our clients upgraded from 3.2.x to 4.1.x the index structures changed (as you might expect, with improved algorithms and maybe increased capacity), but the data on disk was unaffected. In principle, going back or forward required no more than rebuilding indexes (and, of course, the attendant maintenance procedures etc).
For example, PostgreSQL introduced some minor syntactical differences with 8.3. If your application uses the features affected by these changes, it would be impossible to simply 'dump/restore' without some massaging of the data and the application.
PostgreSQL does ship with a dumper, pg_dump. If you have the current
The previous writer said "standalone." That it is not.