Nope, it's actually quite a major pain to manage.
You forgot to mention what you installed, how you did it, and what you expected vs. what you achieved.
I have installed all the packages from the two x86_64 DVDs with (eventually):
yum install --exclude=ovirt* *
I'm not using any internet-based repos for now, because of limited bandwidth at home.
I haven't touched 6.x before 6.2 and just thought it would be as in 5.x (biarch-wise).
With 6.2 everything on my X301 seems to be working much better, or at least as well as in 5.7.
I will slowly, carefully, and thankfully play with your Christmas present in the next two weeks. :)
-Michael
On 12/28/2011 06:02 AM, Michael Lampe wrote:
Nope, it's actually quite a major pain to manage.
You forgot to mention what you installed, how you did it, and what you expected vs. what you achieved.
I have installed all the packages from the two x86_64 DVDs with (eventually):
yum install --exclude=ovirt* *
I'm not using any internet-based repos for now, because of limited bandwidth at home.
I haven't touched 6.x before 6.2 and just thought it would be as in 5.x (biarch-wise).
With 6.2 everything on my X301 seems to be working much better, or at least as well as in 5.7.
I will slowly, carefully, and thankfully play with your Christmas present in the next two weeks. :)
-Michael
Biarch is actually only needed for libraries and support packages. Running native i386 applications on x86_64 does not make much sense (third-party apps are another thing).
So the logic behind biarch is simple: if your 32-bit app RPM requires a 32-bit support package/app, it will be installed at the same time as that package. Or you can manually add/install the needed package(s) yourself, like the several packages needed for Skype (32-bit), for example. But there is no need to waste useful space on a package that will never be used (in the case of 64-bit apps).
Ljubomir Ljubojevic wrote:
Biarch is actually only needed for libraries and support packages. Running native i386 applications on x86_64 does not make much sense (third-party apps are another thing).
I also like the option to compile, run, test, debug, etc. my own programs as 32-bit. That's why, starting with 5.x, there are not only the libs but also the devel packages.
Biarch is, at least to me, a valuable feature. Anyway, it's all there, just not in the ISOs, it seems.
-Michael
On Wed, Dec 28, 2011 at 10:25 AM, Michael Lampe lampe@gcsc.uni-frankfurt.de wrote:
Biarch is actually only needed for libraries and support packages. Running native i386 applications on x86_64 does not make much sense (third-party apps are another thing).
I also like the option to compile, run, test, debug, etc. my own programs as 32-bit. That's why, starting with 5.x, there are not only the libs but also the devel packages.
Biarch is, at least to me, a valuable feature. Anyway, it's all there, just not in the ISOs, it seems.
Why not use a virtual machine for that and have a cleaner separation of the architectures?
On 28.12.2011 17:48, Les Mikesell wrote:
On Wed, Dec 28, 2011 at 10:25 AM, Michael Lampe lampe@gcsc.uni-frankfurt.de wrote:
Biarch is actually only needed for libraries and support packages. Running native i386 applications on x86_64 does not make much sense (third-party apps are another thing).
I also like the option to compile, run, test, debug, etc. my own programs as 32-bit. That's why, starting with 5.x, there are not only the libs but also the devel packages.
Biarch is, at least to me, a valuable feature. Anyway, it's all there, just not in the ISOs, it seems.
Why not use a virtual machine for that and have a cleaner separation of the architectures?
Not only architectures.
Compilers and devel packages should usually be separated from working computers, and the compiled software packed as an RPM in a dedicated virtual machine.
That's the only way to keep systems clean; "make install" is the best way to make the whole setup dirty. And especially for development/building, snapshots of a virtual machine are a huge benefit.
Reindl Harald wrote:
Compilers and devel packages should usually be separated from working computers, and the compiled software packed as an RPM in a dedicated virtual machine.
I'm using CentOS not only as a mail/web/etc. server, but also on my development workstation, on a compute server, and on an in-house compute cluster. Compiling from source code in both 32- and 64-bit is a requirement of all users of these machines.
-Michael
On 28.12.2011 18:13, Michael Lampe wrote:
Reindl Harald wrote:
Compilers and devel packages should usually be separated from working computers, and the compiled software packed as an RPM in a dedicated virtual machine.
I'm using CentOS not only as a mail/web/etc. server, but also on my development workstation, on a compute server, and on an in-house compute cluster. Compiling from source code in both 32- and 64-bit is a requirement of all users of these machines.
What exactly is the need to use 32-bit software? Compiling is not the problem; ONE virtual machine is enough for all users.
However, I cannot imagine a use case for 32-bit software these days:
2.6.41.6-1.fc15.x86_64 #1 SMP Wed Dec 21 22:36:55 UTC 2011
[harry@srv-rhsoft:~]$ rpm -qa | grep i686
Reindl Harald wrote:
compiling is not the problem
Indeed. And thanks to biarch, this works ootb.
ONE virtual machine is enough for all users
Biarch reduces this even further, to none. It's obviously the simpler solution.
however, I cannot imagine a use case for 32-bit software these days
I've given three real life examples.
-Michael
Les Mikesell wrote:
Why not use a virtual machine for that and have a cleaner separation of the architectures?
Biarch runs natively and is therefore faster, it can use hardware-accelerated OpenGL, it is easier to set up and use, and it is fully supported by TUV. To me the separation of architectures is clean enough, and you simply switch from 64-bit mode to 32-bit mode by typing 'linux32'. How can it be better with a virtual machine?
Also consider, for example, a compute cluster. It will of course have the 64-bit version of CentOS installed, but some users may also want to run 32-bit code on it (because it's faster in their case, because their code isn't 64-bit-clean yet, or because it's a 32-bit-only commercial code, whatever).
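To make the mode switch concrete, a minimal sketch (linux32 is a setarch alias shipped with util-linux; the output comments assume an x86_64 host):

```shell
# linux32 changes only the personality that uname (and configure scripts)
# see; the kernel underneath stays the same 64-bit one.
native=$(uname -m)                       # x86_64 on a 64-bit host
compat=$(linux32 uname -m 2>/dev/null)   # i686
echo "native=$native compat=$compat"
```

From there, 'linux32 $SHELL' starts a shell in which every ./configure run thinks it is on an i686 machine.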
-Michael
On Wed, Dec 28, 2011 at 11:06 AM, Michael Lampe lampe@gcsc.uni-frankfurt.de wrote:
Les Mikesell wrote:
Why not use a virtual machine for that and have a cleaner separation of the architectures?
Biarch runs natively and is therefore faster, it can use hardware-accelerated OpenGL, it is easier to set up and use, and it is fully supported by TUV. To me the separation of architectures is clean enough, and you simply switch from 64-bit mode to 32-bit mode by typing 'linux32'. How can it be better with a virtual machine?
Why does a compiler need OpenGL? And with separate machines (physical or virtual) you would just open windows on both at the same time.
Also consider, for example, a compute cluster. It will of course have the 64-bit version of CentOS installed, but some users may also want to run 32-bit code on it (because it's faster in their case, because their code isn't 64-bit-clean yet, or because it's a 32-bit-only commercial code, whatever).
Having run-time libs for both isn't a problem. But if you want to test that something will run on a real 32 bit machine, a VM would be a more realistic test.
On 12/28/2011 10:25 AM, Michael Lampe wrote:
Ljubomir Ljubojevic wrote:
Biarch is actually only needed for libraries and support packages. Running native i386 applications on x86_64 does not make much sense (third-party apps are another thing).
I also like the option to compile, run, test, debug, etc. my own programs as 32-bit. That's why, starting with 5.x, there are not only the libs but also the devel packages.
Biarch is, at least to me, a valuable feature. Anyway, it's all there, just not in the ISOs, it seems.
There is a variable in yum.conf called multilib_policy ...
The default in CentOS 5 is all ... the default in CentOS 6 is best. I personally like best better. I only have the bare minimum of i386 libraries on my machines (usually none, but sometimes a few on workstations).
If you like, you can set multilib_policy to all after you install the i386 items you want on your x86_64 install.
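For reference, a sketch of the relevant line in /etc/yum.conf (the values are the defaults described above):

```ini
# /etc/yum.conf
[main]
# CentOS 6 default: install only the best (native) arch for a package.
multilib_policy=best
# CentOS 5 behaviour: also pull in i686 counterparts where available.
# multilib_policy=all
```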
I can tell you that I would personally use something like mock to build 32-bit items in at least a clean chroot when building/compiling 32-bit things on a 64-bit machine. But to each their own.
Johnny Hughes wrote:
There is a variable in yum.conf called multilib_policy ...
The default in CentOS 5 is all ... the default in CentOS 6 is best.
Ah, OK. Part of my playing around with 6.2 is finding all the differences with respect to 5.x. ;)
I can tell you that I would personally use something like mock to build 32-bit items in at least a clean chroot when building/compiling 32-bit things on a 64-bit machine. But to each their own.
I'm somehow confused that all of you loathe biarch so much. I can partly understand this from a packager's point of view, but as an end user?
What you get in the end, if you install both 32-bit and 64-bit packages, is the 32-bit stuff in (basically) /usr/lib. Otherwise nothing changes. So the added stuff _is_ cleanly separated from the rest of the system.
The kernel runs 32-bit and 64-bit programs anyway, gcc has '-m32' (you cannot even get rid of this), and all you need to compile and run 32-bit programs is the extra stuff in /usr/lib. (The include/doc/etc. files which are in both packages _must_ be identical; that's checked.)
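A minimal sketch of that '-m32' point (the 32-bit compile is guarded, since it needs the 32-bit glibc devel bits installed):

```shell
# One source file, two word sizes: print the pointer size in bytes.
cat > ptrsize.c <<'EOF'
#include <stdio.h>
int main(void) { printf("%d\n", (int)sizeof(void *)); return 0; }
EOF
if command -v gcc >/dev/null 2>&1; then
    gcc ptrsize.c -o ptrsize64
    size64=$(./ptrsize64)                    # 8 on x86_64
    if gcc -m32 ptrsize.c -o ptrsize32 2>/dev/null; then
        size32=$(./ptrsize32)                # 4: a genuine 32-bit binary
    fi
fi
echo "64-bit: ${size64:-n/a}  32-bit: ${size32:-n/a}"
```

The 32-bit binary links against the libs under /usr/lib instead of /usr/lib64, which is exactly the "extra stuff" biarch provides.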
All the Unix systems from the old days (Irix, Solaris, AIX, ...) had this long before Linux saw 64 bits.
I like this feature very much, I and several others have been using it on 5.x for years now, and nobody ever complained.
The only problems I ever had were with you, Dear Packagers/Rebuilders. Sometimes you forgot the updated 32-bit package in the x64 updates repo, and in one case they were even really clashing in an unallowed way. Your fault again. :)
So: what's the beef?
-Michael
On 12/28/2011 12:53 PM, Michael Lampe wrote:
Johnny Hughes wrote:
There is a variable in yum.conf called multilib_policy ...
The default in CentOS 5 is all ... the default in CentOS 6 is best.
Ah, OK. Part of my playing around with 6.2 is finding all the differences with respect to 5.x. ;)
I can tell you that I would personally use something like mock to build 32-bit items in at least a clean chroot when building/compiling 32-bit things on a 64-bit machine. But to each their own.
I'm somehow confused that all of you loathe biarch so much. I can partly understand this from a packager's point of view, but as an end user?
What you get in the end, if you install both 32-bit and 64-bit packages, is the 32-bit stuff in (basically) /usr/lib. Otherwise nothing changes. So the added stuff _is_ cleanly separated from the rest of the system.
The kernel runs 32-bit and 64-bit programs anyway, gcc has '-m32' (you cannot even get rid of this), and all you need to compile and run 32-bit programs is the extra stuff in /usr/lib. (The include/doc/etc. files which are in both packages _must_ be identical; that's checked.)
When you build things, *-devel files are used. If you have extra stuff (any extra stuff) in the build root, then the configure scripts can find it and link against it since there are many optional things that are searched for in the configure scripts.
This is true if you have curses installed (as an example) ... some program's configure script will find that and link against it. Now, every time you want to run that program, you need to have curses installed.
It is therefore very important to have a very clean build root, with only the absolute minimum amount of packages (or if you like, the minimum libraries and headers) installed that are required to build the package. That way you control what is linked against.
If you have the 32-bit headers in /lib/ (instead of in /lib64/), and if some crazy configure script finds them there and includes them, what does that do to the build?
This is why Red Hat uses mock to build packages. It builds a clean root to build packages.
It is also why OBS (the Open Build Service from openSUSE) builds a VM or a buildroot for each individual package, installing only the things needed to build against.
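A hypothetical sketch of what that looks like from the command line (the SRPM name is made up; epel-6-i386 is one of the chroot configs mock ships):

```shell
# Rebuild a source RPM inside a throwaway 32-bit EL6 chroot, so configure
# can only find what the spec's BuildRequires actually pulled in.
cfg=epel-6-i386
srpm=mytool-1.0-1.el6.src.rpm   # hypothetical package
if command -v mock >/dev/null 2>&1; then
    mock -r "$cfg" --rebuild "$srpm"
fi
```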
All the Unix systems from the old days (Irix, Solaris, AIX, ...) had this long before Linux saw 64 bits.
I like this feature very much, I and several others have been using it on 5.x for years now, and nobody ever complained.
The only problems I ever had were with you, Dear Packagers/Rebuilders. Sometimes you forgot the updated 32-bit package in the x64 updates repo, and in one case they were even really clashing in an unallowed way. Your fault again. :)
So: what's the beef?
If you are on a machine that is not building things, then having the 32-bit software also on there is fine ... if you need it.
Now, personally, I don't want anything on my machines that are not required to make them work. If some script kiddie needs /lib/ld-2.12.so for his hacker script to work and I only have /lib64/* stuff then that is good as far as I am concerned.
I don't want things on any of my machines unless it is required ... So, unless I need X and Gnome, it is not installed.
Maybe we're talking about different things here. I'm definitely not talking about how to build a distribution. That's why I'm using yours and not running my own.
I'm talking about the usefulness of biarch. Not in the sense of building packages for redistribution, especially not as RPMs. It's just for building code for one's own purposes.
Take an arbitrary source package and run configure. It may fail even on CentOS 6.2. So what?
Now, some run of configure fails on x86_64 in 32-bit mode. So what again?
To build a distribution (large, but something of a well defined size!), you need a build environment, which works for everything in a well defined way.
I only need an environment, in which I can make concrete things work easily, and that gives me the basics. For any piece of source code outside the core distribution, I'm not getting anything else anyway, not even in 64-bit mode.
People who write their own code never expect anything else.
And biarch gives this to you equally well if you want to compile and run 32-bit programs on 64-bit.
-Michael
PS: This is (of course) not for building RPMs, but the configure scripts I've been interested in so far work with this in my ~/.tcshrc:
-------------------------------------------------------------------
...
alias linux32 "linux32 $SHELL"
...
if ( `uname -m` == i686 ) then
    setenv CC "gcc -m32"
    setenv CXX "g++ -m32"
    setenv PKG_CONFIG_PATH /usr/lib/pkgconfig
endif
...
-------------------------------------------------------------------
linux32 configure ... etc. ...
And if you have your own Makefiles, just put in two or three '-m32' and you're set.
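In Makefile form, the same idea might look like this (a sketch; the BITS variable is my own invention):

```make
# Build 64-bit by default; 'make BITS=32' switches the whole build.
BITS    ?= 64
CFLAGS  += -m$(BITS)
LDFLAGS += -m$(BITS)

hello: hello.c
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $<
```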
On 28.12.2011 23:19, Michael Lampe wrote:
Maybe we're talking about different things here. I'm definitely not talking about how to build a distribution. That's why I'm using yours and not running my own.
You don't need to be building a distribution to build clean packages in a clean build environment. This is simply in your own interest over the long run, and any quick & dirty solution will eat your time later.
At the end of 2011 we should even consider letting 32-bit die altogether.
And no, I am not a member of the CentOS team; I am speaking for myself, as a user who loves clean and modern systems.
Reindl Harald wrote:
You don't need to be building a distribution to build clean packages in a clean build environment. This is simply in your own interest over the long run, and any quick & dirty solution will eat your time later.
Please tell me in detail what ends up quick and dirty when doing what has been well-established Unix practice for decades. This is nothing other than a simplified (but very convenient!) form of cross-compiling.
-Michael
On 28.12.2011 23:32, Michael Lampe wrote:
Reindl Harald wrote:
You don't need to be building a distribution to build clean packages in a clean build environment. This is simply in your own interest over the long run, and any quick & dirty solution will eat your time later.
Please tell me in detail what ends up quick and dirty when doing what has been well-established Unix practice for decades. This is nothing other than a simplified (but very convenient!) form of cross-compiling.
Do what you believe, and let's see where you end up in 5-6 years after doing a couple of updates with "./configure && make && make install".
It IS DIRTY because it does NOT remove obsolete files. And yes, I have seen environments where, for example, mysql would no longer compile until all pieces of the old version were deleted manually.
Working on a modern OS outside the package management is just silly: you have no clear dependencies, no migration path, no clean rollback. You are doing a dirty job working that way.
But yes, you can; do so if you think it is good enough for you. For the majority of advanced users it is not, and in a professional environment it is simply unacceptable.
On 12/28/11 2:54 PM, Reindl Harald wrote:
Do what you believe, and let's see where you end up in 5-6 years after doing a couple of updates with "./configure && make && make install".
It IS DIRTY because it does NOT remove obsolete files. And yes, I have seen environments where, for example, mysql would no longer compile until all pieces of the old version were deleted manually.
Who says he's building system packages? I got the impression he's building his own applications, stuff that typically runs in $HOME rather than /usr or whatever.
On 29.12.2011 00:01, John R Pierce wrote:
On 12/28/11 2:54 PM, Reindl Harald wrote:
Do what you believe, and let's see where you end up in 5-6 years after doing a couple of updates with "./configure && make && make install".
It IS DIRTY because it does NOT remove obsolete files. And yes, I have seen environments where, for example, mysql would no longer compile until all pieces of the old version were deleted manually.
Who says he's building system packages? I got the impression he's building his own applications, stuff that typically runs in $HOME rather than /usr or whatever.
In a clean environment, $HOME does not contain software. That is the Apple way: having binaries run where your user has write access. From the viewpoints of security and modern system management, it is worst practice.
Reindl Harald wrote:
In a clean environment, $HOME does not contain software. That is the Apple way: having binaries run where your user has write access. From the viewpoints of security and modern system management, it is worst practice.
The three Federal Computing Centers in Germany (Juelich, Stuttgart, Munich -- with Stuttgart now hosting Germany's largest supercomputer to date) all work this way. How else should they? Most of the codes are developed by the users themselves, they are updated regularly -- and they do contain bugs (64-bit bugs, e.g.) ...
Stuttgart's former top-class machine is running CentOS 5. I never tried the 32-bit feature there myself, because my code _is_ 64-bit clean. But I would have been pissed if ...
On Wed, Dec 28, 2011 at 5:13 PM, Michael Lampe lampe@gcsc.uni-frankfurt.de wrote:
Stuttgart's former top-class machine is running CentOS 5. I never tried the 32-bit feature there myself, because my code _is_ 64-bit clean. But I would have been pissed if ...
You _can_ cross-compile code for a whole bunch of different environments. That doesn't make it a particularly good idea, even if it does happen to be fairly easy in this one particular case. How many cases do you want to support?
Les Mikesell wrote:
You _can_ cross-compile code for a whole bunch of different environments. That doesn't make it a particularly good idea, even if it does happen to be fairly easy in this one particular case. How many cases do you want to support?
Exactly this one. The only relevant case. Fully supported by TUV for a good reason. And by the CentOS credo, it'll be here, too! It must be! It is! Whew!
(And nobody has compiled the apps on my Android on his! Even if it's now possible to install Debian on Android!)
John R Pierce wrote:
who says he's building system packages? I got the impression he's building his own applications, stuff that typically runs in $HOME rather than /usr or whatever.
Exactly. Wasn't that clear from the very beginning?
-Michael
Reindl Harald wrote:
It IS DIRTY because it does NOT remove obsolete files. And yes, I have seen environments where, for example, mysql would no longer compile until all pieces of the old version were deleted manually.
Hardly ever do I type 'make install'. I stick to Base/Updates/Epel/Elrepo. Only if it's really necessary do I install other stuff. And I normally put quite some effort into it: I produce proper RPMs.
Working on a modern OS outside the package management is just silly: you have no clear dependencies, no migration path, no clean rollback. You are doing a dirty job working that way.
Well ...
I'll tell the users of our cluster (which I happen to manage as an extra) that they can no longer submit any jobs, because their stuff is not and cannot be installed as an RPM ...
On Wed, Dec 28, 2011 at 4:19 PM, Michael Lampe lampe@gcsc.uni-frankfurt.de wrote:
Maybe we're talking about different things here. I'm definitely not talking about how to build a distribution. That's why I'm using yours and not running my own.
If you are moving binaries to any other machine, you are likely to have odd failures if you don't carefully control the libraries in the build environment. If you aren't moving them to some other machine, then you rarely if ever need anything but the native libraries and development header set.
I'm talking about the usefulness of biarch. Not in the sense of building packages for redistribution, especially not as RPMs. It's just for building code for one's own purposes.
The libraries are useful for 3rd party binary apps, but why build a 32bit app yourself if you are going to run it in a 64bit environment?
I recall at least a couple of update conflicts/failures in the 5.x line caused by having 32-bit versions of things installed on a 64-bit host. Didn't those affect you? And there is always the extra time wasted doing updates to libraries and programs you don't ever use.
(Sorry to be a little talkative today, but I will easily refute everything.)
Les Mikesell wrote:
If you are moving binaries to any other machine, you are likely to have odd failures if you don't carefully control the libraries in the build environment.
The linker doesn't and cannot link 64-bit objects to 32-bit libs.
There's nothing else. Include files etc. that are duplicated in 32-bit RPMs must be identical; otherwise rpm doesn't install them together.
If you aren't moving them to some other machine, then you rarely if ever need anything but the native libraries and development header set.
That's the basic use case anyway: A user compiles his stuff on the frontend of the cluster and then submits his job.
The libraries are useful for 3rd party binary apps, but why build a 32bit app yourself if you are going to run it in a 64bit environment?
Three examples I have already given. To repeat one: a user has a code base that is not 64-bit clean? What am I to do? Tell him to f*******, fix it myself for him, or what?
I recall at least a couple of update conflicts/failures in the 5.x line caused by having 32-bit versions of things installed on a 64-bit host. Didn't those affect you?
Also already answered: they forgot to copy the 32-bit updates to the 64-bit updates repo. In one case there was a real bug. This happened only a couple of times so far in the 5.x time frame. So what? There were other bugs as well.
And there is always the extra time wasted doing updates to libraries and programs you don't ever use.
They update with everything else, there's no bandwidth limitation for these machines and the discs are big enough. (The 'everything' I described shortly elsewhere + a lot of extras totals to ~16 GB of disc space. That's nothing.)
-Michael
On 28.12.2011 23:54, Michael Lampe wrote:
Three examples I have already given. To repeat one: a user has a code base that is not 64-bit clean? What am I to do? Tell him to f*******, fix it myself for him, or what?
YES, dammit.
Force him to clean up his crap, or chain him into a virtual machine, or even replace him with someone who knows better what he is doing, because in 2012 "not 64-bit clean" is a bad joke.
On 28.12.2011 23:54, Michael Lampe wrote:
They update with everything else, there's no bandwidth limitation for these machines and the discs are big enough. (The 'everything' I described shortly elsewhere + a lot of extras totals to ~16 GB of disc space. That's nothing.)
And because we have the resources, we are wasting them?
"They update with everything else"? Mhh, you must have a lot of money to have only SSD RAID, or why do you not notice the difference between updating 100 and 180 packages?