This question keys off what were, IIRC, Johnny's remarks earlier in the day about how much work it is to build a release.
This question is addressed to those build gurus on this list who have forgotten more about the build process than this poster will ever know, and who furthermore have first-hand knowledge of, and experience with, cygwin-setup.
Because 4 of our servers are Windows boxen running IIS (to enable small-business folks to build web sites with FrontPage), considerable fondness for the Cygwin build process has developed, to the extent that native Windows commands, whether via CLI or GUI, are hardly ever used.
At the end of the day, other than RPM package management, what are the substantial differences that would preclude a cygwin-setup-like build process from being used to build and maintain the release, and would it be a lot faster and less labor-intensive?
kind regards/ldv/vaden@texoma.net
On Friday, February 18, 2011 06:15:47 pm Larry Vaden wrote:
At the end of the day, other than RPM package management, what are the substantial differences that would preclude a cygwin-setup-like build process from being used to build and maintain the release, and would it be a lot faster and less labor-intensive?
The substantial difference really is the RPM package management; specifically, the exact requires and provides for each of the more than 2500 packages in the distribution.
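For anyone who hasn't dug into that metadata, rpm will show it directly; the package names below are only examples:

    $ rpm -q --provides glibc        # capabilities an installed package claims to provide
    $ rpm -q --requires bash         # capabilities an installed package depends on
    $ rpm -qp --requires foo-1.0-1.el6.x86_64.rpm    # the same, read from a package file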
The koji framework does this for Fedora. How difficult this would be or how long it would take to get running for C6 I haven't the first clue.
It is more challenging in this specific case because of the need for complete binary-level compatibility; this hinges not just on the versions of the packages but on the specific versions of tools used on the build host itself, as well as, in some cases, the order in which the packages are built (to the best of my understanding). And those things (toolset versions on the build hosts, and the build order) are not known outside of Red Hat for RHEL.
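You can recover a little of the build-host metadata from a binary package itself, though not the toolchain versions, which are the real sticking point. A quick illustration (the package file name is hypothetical):

    $ rpm -qp --queryformat '%{NAME}-%{VERSION} built %{BUILDTIME:date} on %{BUILDHOST}\n' \
          foo-1.0-1.el6.x86_64.rpm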
It's somewhat like trying to impression a pin and tumbler lock. If you've never done that, or don't know what that means, look it up. I'm going home for supper.... all this talk of Texas has made me crave some tender mesquite smoked brisket...handrubbed with chipotle peppers.... too bad I don't have any....
On 18 Feb-2011 15:16, Larry Vaden wrote:
Because 4 of our servers are Windows boxen running IIS (to enable small-business folks to build web sites with FrontPage), considerable fondness for the Cygwin build process has developed, to the extent that native Windows commands, whether via CLI or GUI, are hardly ever used.
At the end of the day, other than RPM package management, what are the substantial differences that would preclude a cygwin-setup-like build process from being used to build and maintain the release, and would it be a lot faster and less labor-intensive?
If you only have Microsoft(R) Windows(R)-based servers available, then why not run the build process on a virtual machine that is running Linux (RHEL or CentOS)? (Granted, the best scenario would be to have a native Linux(R)-based system, but if you have to share... :-)
If you have four servers that are MS Windows/MS IIS (perhaps they are the only h/w available?), install VMware(R) Player (or VMware(R) Server) to run a true Linux "server" on the same h/w alongside the Microsoft Windows operating system on one of those servers.
It wouldn't require replacing the MS Windows operating system (nor would it require a dual boot).
This would give you the same tools to handle the builds plus the RPM package management.
Building your RPMs on Linux (RHEL or CentOS) would also give you a better chance of those same RPMs running on that Linux distribution. Compiling the executables/objects with the same GCC toolset, linked against the same libraries, also helps ensure that the executables will run correctly under RHEL or CentOS. This reduces the possibility of GCC compiler or library versions not matching those used under RHEL/CentOS.
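If you go that route, the stock tooling will rebuild source packages for you, and mock (what Fedora/EPEL use) does the rebuild in a clean chroot so the result doesn't depend on whatever happens to be installed on the build box. A minimal sketch; the SRPM name and mock config are illustrative:

    $ rpmbuild --rebuild which-2.20-2.el6.src.rpm                  # quick local rebuild
    $ mock -r epel-6-x86_64 --rebuild which-2.20-2.el6.src.rpm     # rebuild in a clean chroot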
And a Linux instance running in VMware Player would still give you cut & paste capability (between host and guest) and could "mount" a directory from the host to facilitate file copies between the "host" and "guest" operating systems.
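With VMware Tools installed in the Linux guest, the shared directory shows up under /mnt/hgfs; something like the following should work (the share name "build-share" is just an example):

    $ sudo mount -t vmhgfs .host:/build-share /mnt/hgfs/build-share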
VMware Player cost = zero dollars. VMware Server cost = zero dollars. VMware(R) Workstation cost = under $175 (greater flexibility, plus it can build VMware instances that can be run by VMware Player or VMware Server).
Both Player and Server are fully supported under current releases of MS Windows, and both support running RHEL as a guest OS on an MS Windows host. (They also support the reverse, running on a Linux host with an MS Windows guest -- I use this method for MS Windows development at home.)
Need a test environment in addition to the build environment? It's just another VMware instance of a Linux guest.
And you can back up/restore the set of files that make up a VMware instance (make sure the instance is shut down first :-) if you want to try out a different set of packages w/o disturbing the current set. Easy backup = drag/drop files from the host system to a USB-attached hard drive.
Host: MS Windows (or Linux) w/ VMware Player, VMware Server, or VMware Workstation
Guest 1: build system running old (or new) RHEL or CentOS Linux
Guest 2: test system running new RHEL or CentOS Linux
plus a shared host directory accessible by both guests to drop off/pick up the RPMs (or just SFTP between the two Linux virtual machines). (If you needed to split the CPU/disk load, you could use two host servers instead, each running one VMware instance of Linux.)
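Moving the freshly built RPMs from the build guest to the test guest is then a one-liner either way; the hostnames and paths here are hypothetical:

    $ scp ~/rpmbuild/RPMS/x86_64/*.rpm builder@test-guest:/tmp/rpms/
    $ ssh builder@test-guest 'rpm -Uvh --test /tmp/rpms/*.rpm'    # dry-run the install first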
-- Cheers!
Merka Phoenix
Sr. Systems Integrator
Merka Associates
Std disclaimer here about: "Views expressed herein are not necessarily those of my employer or my client(s)" :) (and none of the companies mentioned paid me anything to mention their products, nor do I own any of their stock)
VMware site: http://www.vmware.com/ and downloads here: http://downloads.vmware.com/d/info/desktop_downloads/vmware_player/3_0
Linux is a registered trademark of Linus Torvalds. RED HAT is a registered trademark of Red Hat, Inc. in the United States and other countries. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks and trade names are the property of their respective holders.
On Fri, Feb 18, 2011 at 16:15, Larry Vaden vaden@texoma.net wrote:
This question keys off what were, IIRC, Johnny's remarks earlier in the day about how much work it is to build a release.
This question is addressed to those build gurus on this list who have forgotten more about the build process than this poster will ever know, and who furthermore have first-hand knowledge of, and experience with, cygwin-setup.
...
At the end of the day, other than RPM package management, what are the substantial differences that would preclude a cygwin-setup-like build process from being used to build and maintain the release, and would it be a lot faster and less labor-intensive?
1) Cygwin doesn't install a system. Windows has done all that for you. [Anaconda and its ilk do that and have to be played with to make sure it works.]
2) Cygwin's package control is much simpler than RPM, from my long-ago talks with Cygwin developers. Again, because a lot of stuff is either provided by Windows or not needed.
3) As far as I know, Cygwin doesn't build locally. Stuff is built 'somewhere else' and then cygwin-setup downloads the archives, unpacks them, and installs them into \cygwin.
On Fri, Feb 18, 2011 at 9:01 PM, Stephen John Smoogen smooge@gmail.com wrote:
- As far as I know, Cygwin doesn't build locally. Stuff is built 'somewhere else' and then cygwin-setup downloads the archives, unpacks them, and installs them into \cygwin.
You are correct about that, something I just learned. See below the sig.
That leaves Gentoo's process as the only other one with which I am familiar that is a _potential_ candidate for producing a compatible release in a greatly reduced amount of time.
thanks/ldv/vaden@texoma.net
---for brevity, just enough to show that Stephen knows of which he speaks :)
vaden@turtlehill:~/which$ bzcat which-2.20-2.tar.bz2 | tar xvf -
usr/
usr/bin/
usr/bin/which.exe
On Fri, Feb 18, 2011 at 10:35 PM, Larry Vaden vaden@texoma.net wrote:
On Fri, Feb 18, 2011 at 9:01 PM, Stephen John Smoogen smooge@gmail.com wrote:
- As far as I know, Cygwin doesn't build locally. Stuff is built 'somewhere else' and then cygwin-setup downloads the archives, unpacks them, and installs them into \cygwin.
You are correct about that, something I just learned. See below the sig.
That leaves Gentoo's process as the only other one with which I am familiar that is a _potential_ candidate for producing a compatible release in a greatly reduced amount of time.
Then you're working with an entirely different OS every time you build it. Your binaries may vary profoundly based on subtle differences in the libraries available at build time, especially for static components, and as a developer you take a significant risk of feature creep breaking your older software. It's great for development, *rotten* for industrial-class stability.
This is not to discredit the "secret sauce" developers of Gentoo, who integrate a lot of non-compliant open-source packages into a powerful, fast, and flexible build structure. But by the time you've wrapped a build-synchronization structure around it to ensure consistent environments and cross-compatibility, you will have rebuilt RPM or deb, and rediscovered and had to resolve almost all the same problems.
I've repeatedly seen this sort of "I can do it better myself, just the way I think it should work!" with system auditing tools, source control systems, and software building structures. It's usually far, far more efficient to learn the existing structure well and build on it than to start from scratch: a lot of hard-won lessons are very expensive to relearn.
On Fri, Feb 18, 2011 at 9:51 PM, Nico Kadel-Garcia nkadel@gmail.com wrote:
I've repeatedly seen this sort of "I can do it better myself, just the way I think it should work!" with system auditing tools, source control systems, and software building structures. It's usually far, far more efficient to learn the existing structure well and build on it than to start from scratch: a lot of hard-won lessons are very expensive to relearn.
And, since I've been around since the ASR-33 days of paper tape, when you had to really think straight and maintain good relations with the operators in order to get 7 compiles a day, I wonder why we still spend time waiting on files to be compressed and decompressed when you can't fill up a modern-day disk drive with a project's code, much less the array of said drives most modern build systems would have.
On Fri, Feb 18, 2011 at 21:08, Larry Vaden vaden@texoma.net wrote:
On Fri, Feb 18, 2011 at 9:51 PM, Nico Kadel-Garcia nkadel@gmail.com wrote:
I've repeatedly seen this sort of "I can do it better myself, just the way I think it should work!" with system auditing tools, source control systems, and software building structures. It's usually far, far more efficient to learn the existing structure well and build on it than to start from scratch: a lot of hard-won lessons are very expensive to relearn.
And, since I've been around since the ASR-33 days of paper tape, when you had to really think straight and maintain good relations with the operators in order to get 7 compiles a day, I wonder why we still spend time waiting on files to be compressed and decompressed when you can't fill up a modern-day disk drive with a project's code, much less the array of said drives most modern build systems would have.
Because while disk is cheap, a station wagon full of disks is still cheaper than trying to send multiple gigabytes of data over a WAN, or even over most company LAN links. With various ISPs looking at capping downloads at 1 GB/month, with extra costs for every GB above that... a basic system runs to several gigabytes uncompressed, so it would take someone on such a line roughly 6 months to download it without compression. Now, if only we could all be satisfied running on the same amount of code we did back in the good old days of toggle switches and punched tapes/cards.
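The point generalizes: source trees and package payloads are mostly text and compress very well, so the decompression time buys back a large multiple in transfer time. A quick way to see the ratio for yourself (point it at whatever source tree you have handy):

    $ tar cf - ~/src/some-project | wc -c     # uncompressed byte count
    $ tar czf - ~/src/some-project | wc -c    # gzip-compressed byte count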
On Fri, Feb 18, 2011 at 11:08 PM, Larry Vaden vaden@texoma.net wrote:
On Fri, Feb 18, 2011 at 9:51 PM, Nico Kadel-Garcia nkadel@gmail.com wrote:
I've repeatedly seen this sort of "I can do it better myself, just the way I think it should work!" with system auditing tools, source control systems, and software building structures. It's usually far, far more efficient to learn the existing structure well and build on it than to start from scratch: a lot of hard-won lessons are very expensive to relearn.
And, since I've been around since the ASR-33 days of paper tape, when you had to really think straight and maintain good relations with the operators in order to get 7 compiles a day, I wonder why we still spend time waiting on files to be compressed and decompressed when you can't fill up a modern-day disk drive with a project's code, much less the array of said drives most modern build systems would have.
Maybe you could ease up on the attitude while you're at it.
There are several answers. One is that the bandwidth, and time, for sending hundreds of megabytes or full DVDs of material still cost time and money, and tie up disk space that is better reserved for things sensitive enough to streaming performance that 10 blocks are more likely to be contiguous, and perform better, than 20 blocks. And even the bandwidth of reading off local disk matters for high-performance components, such as the many megabytes of dynamically decompressed Java ".jar" files and RPM's "compressed cpio" format. Even simple operations like checksumming and PGP signing take longer for larger files, and the performance penalty in that form can grow quite large.
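If you want to put a number on that last point, timing a checksum over a small file versus a large one shows the roughly linear cost directly (the file names are placeholders):

    $ time sha256sum small-package.rpm
    $ time sha256sum full-dvd-image.iso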
And while disks are cheap, splitting content across multiple disks (whether DVD or external USB) is more expensive and more awkward; and the compression of streaming media such as audio and video allows a quite modest network connection to carry your desired content far more effectively. So it's quite useful.