I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in CentOS for use with heavily invoked non-interactive scripts?
With sh being a symlink to bash on CentOS, I don't know whether things would break if the link were changed to something else, but at least the scripts we made on our own that run certain services could be changed to another shell and tested manually.
Are there other people who have experience in this and can provide interesting guidance?
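For a rough comparison, a timing harness along these lines shows the per-invocation overhead of each shell (the file name, loop counts and the dash path are arbitrary assumptions, and dash may need to be installed first):

    #!/bin/bash
    # Time a trivial script run many times under two shells; the numbers only
    # show relative startup/interpretation cost, nothing more.
    printf 'i=0\nwhile [ "$i" -lt 100 ]; do i=$((i + 1)); done\n' > /tmp/trivial.sh
    for shell in /bin/bash /bin/dash; do
        [ -x "$shell" ] || { echo "$shell not installed, skipping"; continue; }
        echo "== $shell =="
        time for n in $(seq 1000); do "$shell" /tmp/trivial.sh; done
    done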
On 04/24/15 06:07, E.B. wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
With sh being a link to bash in Centos I don't know if it would explode if the link was changed to something else, but at least the scripts we made on our own that run certain services could be changed and tested manually to another shell.
Are there other people who have experience in this and can provide interesting guidance?
Why go to that extreme? If you tell a script on line 1 which shell to run, it will do so: #!/bin/dash or whatever shell you want it to run in. I always do that to make sure that the script runs as expected; if you leave it out, the script will run in whatever environment it currently is in.
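A minimal example (the body is just a placeholder):

    #!/bin/dash
    # The kernel hands this file to /bin/dash whenever it is executed directly
    # (./myscript), regardless of which shell the caller happens to be using.
    # Invoking it as "bash myscript" would bypass the shebang line.
    printf 'interpreted by whatever is named on line 1\n'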
Pete
On 04/24/15 06:57, Pete Geenhuizen wrote:
On 04/24/15 06:07, E.B. wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
With sh being a link to bash in Centos I don't know if it would explode if the link was changed to something else, but at least the scripts we made on our own that run certain services could be changed and tested manually to another shell.
Are there other people who have experience in this and can provide interesting guidance?
Why go to that extreme? If you tell a script on line 1 which shell to run, it will do so: #!/bin/dash or whatever shell you want it to run in. I always do that to make sure that the script runs as expected; if you leave it out, the script will run in whatever environment it currently is in.
I'm confused here, too, and this has been bugging me for some time: why sh, when almost 20 years ago, at places I've worked, production shell scripts went from sh to ksh. It was only after I got into the CentOS world in '09 that I saw all the sh scripts again.
mark
On Fri, Apr 24, 2015 at 08:02:56AM -0400, mark wrote:
On 04/24/15 06:57, Pete Geenhuizen wrote:
On 04/24/15 06:07, E.B. wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
Are there other people who have experience in this and can provide interesting guidance?
Why go to that extreme? If you tell a script on line 1 which shell to run, it will do so: #!/bin/dash or whatever shell you want it to run in. I always do that to make sure that the script runs as expected; if you leave it out, the script will run in whatever environment it currently is in.
I'm confused here, too, and this has been bugging me for some time: why sh, when almost 20 years ago, at places I've worked, production shell scripts went from sh to ksh. It was only after I got into the CentOS world in '09 that I saw all the sh scripts again.
Wasn't Solaris, which for a while at least was probably the most popular Unix, using ksh by default?
It was the mid/late-90s, but I seem to recall Bourne being the default shell, although sh/ksh/csh were all available with a typical install.
On Fri, Apr 24, 2015 at 8:32 AM, Scott Robbins scottro@nyc.rr.com wrote:
On Fri, Apr 24, 2015 at 08:02:56AM -0400, mark wrote:
On 04/24/15 06:57, Pete Geenhuizen wrote:
On 04/24/15 06:07, E.B. wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
Are there other people who have experience in this and can provide interesting guidance?
Why go to that extreme? If you tell a script on line 1 which shell to run, it will do so: #!/bin/dash or whatever shell you want it to run in. I always do that to make sure that the script runs as expected; if you leave it out, the script will run in whatever environment it currently is in.
I'm confused here, too, and this has been bugging me for some time: why sh, when almost 20 years ago, at places I've worked, production shell scripts went from sh to ksh. It was only after I got into the CentOS world in '09 that I saw all the sh scripts again.
Wasn't Solaris, which for awhile at least, was probably the most popular Unix, using ksh by default?
Initially Bourne was used because it was typically a static binary, since the boot process didn't have access to any shared libraries. When that changed it became a bit of a moot point, and you started to see other interpreters being used.
Even though Solaris started using ksh as the default user environment, almost all of the start scripts were either Bourne or bash scripts. With bash having more functionality, the scripts typically used the environment that suited the requirements best.
Bottom line: use whatever shell suits your needs, just be sure to tell the environment which interpreter to use. Personally, I never write a script that doesn't include the interpreter on the first line.
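As an illustration of why the testing matters, something like the sketch below runs fine with #!/bin/bash but fails under a POSIX-only /bin/sh such as dash (the marked constructs are bash/ksh extensions):

    #!/bin/bash
    # bash-specific constructs that a strict POSIX shell rejects
    files=(alpha beta gamma)       # arrays: dash stops with a syntax error here
    echo "${files[1]}"             # prints "beta" under bash
    if [[ -n "${1:-}" ]]; then     # [[ ]] is not in POSIX sh
        echo "got an argument: $1"
    fi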
Pete
On 04/24/15 08:42, Eckert, Doug wrote:
It was the mid/late-90s, but I seem to recall Bourne being the default shell, although sh/ksh/csh were all available with a typical install.
On Fri, Apr 24, 2015 at 8:32 AM, Scott Robbins scottro@nyc.rr.com wrote:
On Fri, Apr 24, 2015 at 08:54:48AM -0400, Pete Geenhuizen wrote:
Even though Solaris started using ksh as the default user environment, almost all of the start scripts were either Bourne or bash scripts. With bash having more functionality, the scripts typically used the environment that suited the requirements best.
Bash is a better command shell for many people, but ksh has better scripting ability (e.g. typeset options bash has never seen). Many Solaris-provided scripts were ksh.
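For instance, typeset formatting options along these lines (the values here are made up) work in ksh but are rejected by bash's typeset/declare:

    #!/bin/ksh
    typeset -L10 name="Stephen"    # left-justified in a 10-character field
    typeset -R6  num=42            # right-justified, space-padded to width 6
    typeset -Z4  id=7              # zero-filled to width 4, i.e. 0007
    print "[$name][$num][$id]"     # -> [Stephen   ][    42][0007]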
Bash was bigger than ksh in the non-commercial Unix world because of ksh88 licensing problems. Back in 1998 I wanted to teach a ksh scripting course to my local LUG, but AT&T (David Korn himself!) told me I couldn't give people copies of the shell to take home.
(Finally, too late in the day, they changed their licensing).
Stephen Harris lists@spuddy.org wrote:
Bash was bigger than ksh in the non-commercial Unix world because of ksh88 licensing problems. Back in 1998 I wanted to teach a ksh scripting course to my local LUG, but AT&T (David Korn himself!) told me I couldn't give people copies of the shell to take home.
AFAIR, ksh has been OSS (though not under an OSI-approved license) since 1997. Since 2001, ksh has been under an OSI-approved license.
Jörg
On Fri, Apr 24, 2015 at 03:15:27PM +0200, Joerg Schilling wrote:
Stephen Harris lists@spuddy.org wrote:
Bash was bigger than ksh in the non-commercial Unix world because of ksh88 licensing problems. Back in 1998 I wanted to teach a ksh scripting course to my local LUG, but AT&T (David Korn himself!) told me I couldn't give people copies of the shell to take home.
AFAIR, ksh was OSS (but not using an OSI approved license) since 1997. Since
In 1998 each user had to sign a license; you couldn't give away copies to other people.
Date: Wed, 20 May 1998 14:09:30 -0400 (EDT) From: David Korn dgk@research.att.com
If you are going to make copies for use at your course there is no problem. However, if users are to get their own copies to take home with them, then we need to get each of them to accept the license agreement that is on the web.
[ snip other options, including printing out the license and having people sign it and sending the results back! ]
Stephen Harris lists@spuddy.org wrote:
AFAIR, ksh was OSS (but not using an OSI approved license) since 1997. Since
In 1998 each user had to sign a license; you couldn't give away copies to other people.
Date: Wed, 20 May 1998 14:09:30 -0400 (EDT) From: David Korn dgk@research.att.com
If you are going to make copies for use at your course there is no problem. However, if users are to get their own copies to take home with them, then we need to get each of them to accept the license agreement that is on the web.
OK, I remember again: you had to click "accept" on the web to get your copy of the source. This requirement was removed in 2001.
Jörg
Stephen Harris wrote:
On Fri, Apr 24, 2015 at 03:15:27PM +0200, Joerg Schilling wrote:
Stephen Harris lists@spuddy.org wrote:
Bash was bigger than ksh in the non-commercial Unix world because of ksh88 licensing problems. Back in 1998 I wanted to teach a ksh scripting course to my local LUG, but AT&T (David Korn himself!) told me I couldn't give people copies of the shell to take home.
AFAIR, ksh was OSS (but not using an OSI approved license) since 1997. Since
In 1998 each user had to sign a license; you couldn't give away copies to other people.
Date: Wed, 20 May 1998 14:09:30 -0400 (EDT) From: David Korn dgk@research.att.com
If you are going to make copies for use at your course there is no problem. However, if users are to get their own copies to take home with them, then we need to get each of them to accept the license agreement that is on the web.
[ snip other options, including printing out the license and having people sign it and sending the results back! ]
Fascinating. As I'd been in Sun OS, and started doing admin work when it became Solaris, I'd missed that bit. A question: did the license agreement include payment, or was it just restrictive on distribution?
Oh, and to clarify what I said before, our production shell scripts, in the mid-nineties, were corporately required to go to ksh.
I didn't know bash till I got to CentOS (I don't remember it in RH 9...), and it's what I prefer (my manager and some other folks here like zsh), but bash lets me use all my c-shell-isms that I learned when I started in UNIX in '91.
mark !se....
m.roth@5-cent.us wrote:
Fascinating. As I'd been in Sun OS, and started doing admin work when it became Solaris, I'd missed that bit. A question: did the license agreement include payment, or was it just restrictive on distribution?
Everything other than ksh93 is closed source. The POSIX shell used by various commercial UNIXes is based on ksh88. Sun tried to make this OSS in 2005, but "OSS lovers" such as HP and IBM prevented this from happening.
ksh93 exists in a 1997 version with restricted redistribution and a 2001 version with OSI OSS compliance.
Oh, and to clarify what I said before, our production shell scripts, in the mid-nineties, were corporately required to go to ksh.
I didn't know bash till I got to CentOS (I don't remember it in RH 9...), and it's what I prefer (my manager and some other folks here like zsh), but bash lets me use all my c-shell-isms that I learned when I started in UNIX in '91.
Most if not all of these goodies are in the Bourne Shell now as well.
And bash still comes with a history editor that offers fewer features than the one I prototyped in 1982 and that is now available in the Bourne Shell.
Jörg
On Fri, Apr 24, 2015 at 10:38:25AM -0400, m.roth@5-cent.us wrote:
Fascinating. As I'd been in Sun OS, and started doing admin work when it became Solaris, I'd missed that bit. A question: did the license agreement include payment, or was it just restrictive on distribution?
In 1990, when I started using ksh88, it was totally commercial. Binaries were $$$ and source was $$$$. We bought the source and compiled it for SunOS, Ultrix and various SYSVr[23] machines (one machine was so old it didn't understand #! and so needed it placed as /bin/sh).
By 1998, ksh93 was free (as in beer) but was restricted distribution. Eventually ksh93 became properly free, but by this point bash was already popular in the Free-nix arena and had even made it into Solaris, AIX and others.
I didn't know bash till I got to CentOS (I don't remember it in RH 9...),
Yes it was. It was in RH (not EL) 4, which was the first RH I used.
Even the 0.11 "boot+root" combination from 1991 had a version of bash in it! http://gunkies.org/wiki/Linux_0.11 (that was the first Linux version I used)
Stephen Harris wrote:
On Fri, Apr 24, 2015 at 10:38:25AM -0400, m.roth@5-cent.us wrote:
Fascinating. As I'd been in Sun OS, and started doing admin work when it became Solaris, I'd missed that bit. A question: did the license agreement include payment, or was it just restrictive on distribution?
In 1990, when I started using ksh88, it was totally commercial. Binaries were $$$ and source was $$$$. We bought the source and compiled it for SunOS, Ultrix and various SYSVr[23] machines (one machine was so old it didn't understand #! and so needed it placed as /bin/sh).
I just (finally) got into Unix in '91, and didn't do any admin work, just programming, until later in '95, and I had nothing to do with what software got installed, at least to start (I sat there while someone else was doing the installing). And that was a Sun, anyway.
By 1998, ksh93 was free (as in beer) but was restricted distribution. Eventually ksh93 became properly free, but by this point bash was already popular in the Free-nix arena and had even made it into Solaris, AIX and others.
I didn't know bash till I got to CentOS (I don't remember it in RH 9...),
Yes it was. It was in RH(not EL) 4, which was the first RH I used.
Ah. I don't remember if I was using csh or ksh, and didn't know about bash. I *think* I vaguely remember that sh seemed to be more capable than I remembered.
My first RH was 5, late nineties. The first time I looked at Linux and installed it was in '95, and it was Slackware. (We'll ignore the Coherent that I installed on my beloved 286 in the late 80's.) <snip> mark
On Fri, Apr 24, 2015 at 3:04 PM, m.roth@5-cent.us wrote:
My first RH was 5, late nineties. First time I looked at linux and installed, it was '95, and slack. (We'll ignore the Coherent that I installed on my beloved 286 in the late 80's).
<snip>
You mean you missed all the fun with Xenix on Radio Shack Model 16's and SysV on AT&T's weird 3b machines?
Les Mikesell wrote:
On Fri, Apr 24, 2015 at 3:04 PM, m.roth@5-cent.us wrote:
My first RH was 5, late nineties. First time I looked at linux and installed, it was '95, and slack. (We'll ignore the Coherent that I installed on my beloved 286 in the late 80's).
<snip>
You mean you missed all the fun with Xenix on Radio Shack Model 16's and SysV on AT&T's weird 3b machines?
Yep. Had a friend with a 3b, but I kept wanting *Nix, and only finally made it in '91. Sun. Irix. HP-UX (once in a blue moon, and I tried to avoid it when possible).
mark
Les Mikesell lesmikesell@gmail.com wrote:
On Fri, Apr 24, 2015 at 3:04 PM, m.roth@5-cent.us wrote:
My first RH was 5, late nineties. First time I looked at linux and installed, it was '95, and slack. (We'll ignore the Coherent that I installed on my beloved 286 in the late 80's).
<snip>
You mean you missed all the fun with Xenix on Radio Shack Model 16's and SysV on AT&T's weird 3b machines?
You do not need to ;-)
I started with UNOS in 1982 as my first UNIX-like system. UNOS was in fact the first UNIX clone, and it was a real-time OS.
In February 1985, I switched to a Sun... the first Sun that made it to Europe.
Jörg
On 04/27/2015 06:43 AM, Joerg Schilling wrote:
I started with UNOS in 1982 as my first UNIX like. UNOS in fact was the first UNIX clone and it was a real time OS. In February 1985, I switched to a Sun....the first Sun that made it to Europe. Jörg
Charles River UNOS was actually Tandy's first non-TRSDOS choice for the Model 16; Microsoft won the platform over to Xenix by threatening to withhold BASIC and Multiplan for all other Tandy platforms if Tandy went with UNOS [1]. Xenix on the 16 in 1987 was my first Un*x system (starring out a letter in Unix was to avoid trademark violations...), and the 3B1 Convergent-written, AT&T-labeled SVR2 was the second, with the oddball Apollo Domain/OS (change an environment variable and change the system from 4.2BSD to SVR3!) the third. A QIC-120 packaging of SLS by Mac's Place BBS was my fourth [2], and I've used Linux in some form ever since.
How is this related to CentOS? Peripherally only, in that there was once a Project-16 newsletter post to comp.sys.tandy about the 16B made by one John M. Hughes (bang-path e-mail address of noao!coyote!moondog!proj16) back in January of 1991 [3]... I would love to come across a collection of these, as my main box at that time (running C-News) was a T6K with a pair of Rodime 70MB drives and a Maxtor XT-1140 140MB drive for the news spool.
[1]: Post to comp.sys.tandy by Frank Durda IV on November 13, 2001, archived at http://www.dogpatch.com/misc/tandy_xenix_history.html among other places. A fun and grin-inducing read.
[2]: Posting by John McNamara to comp.os.linux on April 6, 1993, subject: "Linux free by mail" (search Google Groups for it).
[3]: Posting to comp.sys.tandy by John Hughes, January 9, 1991, subject: "Project 16 - Tandy 16/6000 Newsletter and Mailing List".
m.roth@5-cent.us wrote:
Ah. I don't remember if I was using csh, or ksh, and didn't realize about bash. I *think* I vaguely remember that sh seemed to be more capable than I remembered.
If you would like to check what the Bourne Shell supported in the late 1980s, I recommend fetching the recent Schily tools from:
https://sourceforge.net/projects/schilytools/files/
then compiling, installing, and testing "osh".
This is the SVr4 Bourne Shell, so you need to take into account what was added with SVr4:
- multibyte character support. In the 1980s, the Bourne Shell was just 8-bit clean.
- job control. If you do not call "jsh", or if you switch off job control via "set +m" in a job shell, you have the job-control-related builtins but there is no process group management (a quick illustration follows).
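A quick interactive sketch of that switch in a modern POSIX-style shell (osh without jsh behaves as described above, not like this):

    set -m          # turn monitor mode (job control) on
    sleep 60 &      # the background job is put in its own process group
    jobs            # reports something like "[1] + Running    sleep 60 &"
    set +m          # turn it off: the job-control builtins are still there,
                    # but new background jobs stay in the shell's process group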
Jörg
On Apr 27, 2015, at 4:38 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
This is the SVr4 Bourne Shell, so you need to take into account what has been added with Svr4:
Is there any difference between your osh and the Heirloom Bourne Shell?
http://heirloom.sourceforge.net/sh.html
I see that you already wrote up the differences between osh and bosh in an earlier post. Is there a good reason why these comparisons are not on the Schily Tools web page already? :)
Warren Young wyml@etr-usa.com wrote:
On Apr 27, 2015, at 4:38 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
This is the SVr4 Bourne Shell, so you need to take into account what has been added with Svr4:
Is there any difference between your osh and the Heirloom Bourne Shell?
Heirloom made quick-and-dirty ports and then stopped working on them.
Heirloom, e.g., made the same attempt to port to platforms whose own malloc() implementations may cause problems:
- add a private malloc for sh-internal use, based on mmap().
This however caused problems on some Linux distros, which were reported against my old Bourne Shell port, so I assume the same problems exist with Heirloom.
Heirloom added support for uname -S and for some Linux ulimit extensions, but then stopped working on the code after a few months.
You still cannot get a working Bourne Shell from Heirloom that behaves exactly like the Solaris shell.
My code added a lot more new features and converted the code cleanly to use malloc() from libc. My code also allows all the modifications to be disabled via #ifdefs; this is what "osh" is.
My code is actively maintained and fixes _all_ documented historic bugs; see:
http://www.in-ulm.de/~mascheck/bourne/
I see that you already wrote up the differences between osh and bosh in an earlier post. Is there a good reason why these comparisons are not on the Schily Tools web page already? :)
The Schily tools act as a container to publish the current code state. There is no such maintained web page. Given that Sven Mascheck wrote down a lot, it seems the information is still available there.
I would be interested to understand why Heirloom seems to be so well known and my portability attempts seem to be widely unknown.
Jörg
On Mon, Apr 27, 2015 at 10:07 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
I would be interested to understand why Heirloom seems to be so well known and my portability attempts seem to be widely unknown.
Not sure why it matters with a standalone application like sh, but I think a lot of people have been put off by the GPL incompatibility of your tools. If you want popularity - and usability - a dual license would work, as perl shows.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 10:07 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
I would be interested to understand why Heirloom seems to be so well known and my portability attempts seem to be widely unknown.
Not sure why it matters with a standalone application like sh, but I think a lot of people have been put off by the GPL incompatibility with your tools. If you want popularity - and usability, a dual-license would work as perl shows.
??? There is nothing different with Heirloom.
And the problem is the GPL. I recommend you work on making all GPL code freely combinable with other OSS.
My code is fully legal and there is absolutely no license problem with it.
Just do not follow the false claims from some OSS enemies...and believe the lawyers that checked my code ;-)
My code was audited by "Sun legal", "Oracle legal" and by the legal department from SuSe.
Question: when will RedHat follow the legal audits from these companies?
Jörg
On Mon, Apr 27, 2015 at 10:46 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
And the problem is the GPL. I recommend you to work on making all GPL code freely combinable with other OSS.
Of course the problem is the GPL. Glad you recognize that. Its whole point is the restriction against linking with anything with an incompatible license, which obviously prevents a lot of best-of-breed combinations.
My code is fully legal and there is absolutely no license problem with it.
Umm, no. Larry Wall clearly understood this eons ago.
Just do not follow the false claims from some OSS enemies...and believe the lawyers that checked my code ;-)
My code was audited by "Sun legal", "Oracle legal" and by the legal department from SuSe.
Sure, there is nothing 'wrong' with your licence as long as it isn't mixed with anything with different restrictions. Just don't act surprised that the code doesn't get used in projects that have to accommodate GPL restrictions.
Question: when will RedHat follow the legal audits from these companies?
Question: If _you_ believe that it is OK to mix your code with GPL'd code, why not add the dual licensing statement that would make it clear for everyone else? It doesn't take anything away - unless you really don't want it to be used in other projects.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 10:46 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
And the problem is the GPL. I recommend you to work on making all GPL code freely combinable with other OSS.
Of course the problem it the GPL. Glad you recognize that. It's whole point is the restriction against linking with anything with an incompatible license which obviously prevents a lot of best-of-breed combinations.
You should read the GPL and get help to understand it. The GPL does not forbid this linking. On the contrary, the GPL allows any GPLd program to be linked against any library under any license. If this was not the case, you could not legally distribute binaries from GPLd programs.
My code is fully legal and there is absolutely no license problem with it.
Umm, no. Larry Wall clearly understood this eons ago.
???
Just do not follow the false claims from some OSS enemies...and believe the lawyers that checked my code ;-)
My code was audited by "Sun legal", "Oracle legal" and by the legal department from SuSe.
Sure, there is nothing 'wrong' with your licence as long as it isn't mixed with anything with different restrictions. Just don't act surprised that the code doesn't get used in projects that have to accommodate GPL restrictions.
Again, don't follow the agitation from OSS enemies. You are of course wrong!
Question: when will RedHat follow the legal audits from these companies?
Question: If _you_ believe that it is OK to mix your code with GPL'd code, why not add the dual licensing statement that would make it clear for everyone else? It doesn't take anything away - unless you really don't want it to be used in other projects.
Why should I do something that is not needed?
But if you would like to discuss things with me, I recommend you first inform yourself correctly.
I of course _don't_ mix CDDLd code with GPLd code.
Jörg
On Mon, Apr 27, 2015 at 11:16 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
You should read the GPL and get help to understand it. The GPL does not forbid this linking. On the contrary, the GPL allows any GPLd program to be linked against any library under any license. If this was not the case, you could not legally distribute binaries from GPLd programs.
You can't distribute GPLd programs unless 'the work as a whole' is covered by the GPL. There can't be a distinction between binary and source since one is derived from the other.
My code is fully legal and there is absolutely no license problem with it.
Umm, no. Larry Wall clearly understood this eons ago.
???
Odd, I expected you to be as smart as him. He started with only the 'Artistic' license, but quickly understood the issues when the 'work as a whole' needs to include, say, a proprietary database driver linked in as one component and GPL'd readline as another, along with the code he wanted to be generally usable. And he did something about it.
Sure, there is nothing 'wrong' with your licence as long as it isn't mixed with anything with different restrictions. Just don't act surprised that the code doesn't get used in projects that have to accommodate GPL restrictions.
Again, don't follow the agitation from OSS enemies. You are of course wrong!
You don't have to 'follow' anything - just read the phrase 'work as a whole'.
Question: If _you_ believe that it is OK to mix your code with GPL'd code, why not add the dual licensing statement that would make it clear for everyone else? It doesn't take anything away - unless you really don't want it to be used in other projects.
Why should I do something that is not needed?
My question is 'why not do it?'. You don't lose anything but the restrictions that you pretend aren't there since a dual license allows you to choose the terms of the other if you prefer. I don't like the GPL restrictions either, but I just say so instead of pretending otherwise. A dual license is clearly needed unless your point is to make people choose between either using your code or anything that is GPL'd.
But before you like to discuss things with me, I recommend you to first inform yourself correctly.
I of course _don't_ mix CDDLd code with GPLd code.
So, you really don't want your code to be used? Then why ask why it isn't popular?
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 11:16 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
You should read the GPL and get help to understand it. The GPL does not forbid this linking. On the contrary, the GPL allows any GPLd program to be linked against any library under any license. If this was not the case, you could not legally distribute binaries from GPLd programs.
You can't distribute GPLd programs unless 'the work as a whole' is covered by the GPL. There can't be a distinction between binary and source since one is derived from the other.
Now you just need to understand what "as a whole" means....
Try to be clever and inform yourself before sending more false claims, as you did already.
Maybe you are a native English speaker and thus lazy about reading the GPL. If you carefully read the GPL, you will of course understand that it is _very_ careful about which parts the GPL applies to. It definitely does _not_ apply to the "complete source".
If you have problems to understand the GPL, read one of the various comments from lawyers, but avoid Mr. Moglen - he is well known for intentionally writing false claims in the public and only uses correct lawful interpretations if he is in a private discussion.
My code is fully legal and there is absolutely no license problem with it.
Umm, no. Larry Wall clearly understood this eons ago.
???
Odd, I expected you to be as smart as him. He started with only the 'Artistic' license but quickly understood the issues when you need part of the 'work as a whole' to include, say, linking in a proprietary database driver as one component and GPL'd readline as another, along with the code he wanted to be generally usable. And he did something about it.
The fact that there is GNU readline verifies that some people at the FSF are in fact hostile to OSS.
BTW: I don't need GNU readline, as I have had my own history editor since August 1984 ;-)
And fortunately, Larry didn't publish "patch" under the GPL, so I was able to write a non-GPLd POSIX-compliant patch (note that gpatch is not POSIX compliant).
Again, don't follow the agitation from OSS enemies. You are of course wrong!
You don't have to 'follow' anything - just read the phrase 'work as a whole'.
You need to _understand_ the GPL and avoid just lazily reading it as you did before. The GPL does _not_ apply to _everything_. The GPL just applies to the "work" that is under GPL. For the rest, you just need to include it under _any_ license, and if you had ever carefully read the GPL, you would of course know that already.
There are parts in the GPL that read similar to: "under the terms and conditions of this license". These parts apply to GPL code only, but enforce all GPL rules.
There are other parts in the GPL that read similar to: "under the terms and conditions of paragraph xxx". These parts just require you to follow the rules in the named part of the GPL, but no more! These parts apply to what the GPL addresses when speaking about the "complete source".
Bottom line: the GPL does not require you to put everything under GPL. It just requires you to include makefiles, scripts and libraries under any license that permits redistribution.
Question: If _you_ believe that it is OK to mix your code with GPL'd code, why not add the dual licensing statement that would make it clear for everyone else? It doesn't take anything away - unless you really don't want it to be used in other projects.
Why should I do something that is not needed?
My question is 'why not do it?'. You don't lose anything but the restrictions that you pretend aren't there since a dual license allows you to choose the terms of the other if you prefer. I don't like the GPL restrictions either, but I just say so instead of pretending otherwise. A dual license is clearly needed unless your point is to make people choose between either using your code or anything that is GPL'd.
If I added the GPL to my code, I would not gain anything, because antisocial people would still prevent it from being included in Debian or RedHat.
I would however risk that people send interesting patches as GPL-only and in this way restrict everybody's freedom to use them.
But before you like to discuss things with me, I recommend you to first inform yourself correctly.
I if course _don't_ mix CDDLd code with GPLd code.
So, you really don't want your code to be used? Then why ask why it isn't popular?
Please explain to me why people believe RedHat or CentOS is a good choice when there are people inside who write false claims about the GPL because they did not read it in a way that would allow them to understand it?
Jörg
On Mon, Apr 27, 2015 at 11:57 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
You can't distribute GPLd programs unless 'the work as a whole' is covered by the GPL. There can't be a distinction between binary and source since one is derived from the other.
Now you just need to understand what "as a whole" means....
Apparently we live in different universes. Or at least countries - where meanings are relative. But it doesn't matter how either of us understands it; what matters is how the legal system understands it in our native countries.
Try to be clever and inform yourself before sending more false claims, as you did already.
Maybe you are a native English speaker and thus lazy about reading the GPL. If you carefully read the GPL, you will of course understand that it is _very_ careful about which parts the GPL applies to. It definitely does _not_ apply to the "complete source".
Yes, in English, 'work as a whole' does mean complete. And the normal interpretation is that it covers everything linked into the same process at runtime, unless there is an alternate interface-compatible component with the same feature set.
If you have problems to understand the GPL, read one of the various comments from lawyers, but avoid Mr. Moglen - he is well known for intentionally writing false claims in the public and only uses correct lawful interpretations if he is in a private discussion.
No one is interested in setting themselves up for a legal challenge with opposing views by experts.
And fortunately, Larry didn't publish "patch" under GPL, so I was able to write a non-GPLd POSIX compliant patch (note that gpatch is not POSIX compliant).
Larry is a nice guy. He doesn't want to cause trouble for anyone. Apparently that's not universal....
You don't have to 'follow' anything - just read the phrase 'work as a whole'.
You need to _understand_ the GPL and avoid to just lazyly read it as you did before. The GPL does _not_ apply to _everything_. The GPL just applies to the "work" that is under GPL. For the rest, you just need to include it under _any_ license and if you did ever carefully read the GPL, you of course did know that already.
It applies to everything copyright law applies to since it is really copyright law that restricts distribution and the GPL simply provides the exceptions. There's a valid case for linked components to be considered derivative works of each other if they require the other for the work as a whole to be functional.
Fazit: The GPL does not require you to put everything under GPL. It just requires you to include makefiles, scripts and libraries under any license that permits redistribution.
Those are mentioned separately because they wouldn't be included as a derivative work otherwise.
My question is 'why not do it?'. You don't lose anything but the restrictions that you pretend aren't there since a dual license allows you to choose the terms of the other if you prefer. I don't like the GPL restrictions either, but I just say so instead of pretending otherwise. A dual license is clearly needed unless your point is to make people choose between either using your code or anything that is GPL'd.
If I did add the GPL to my code, I would not win anything, because antisocial people would still prevent it from being included in Debian or RedHat.
Beg your pardon? You lost me here. If you remove the reason for exclusion, what evidence do you have that the work would still be excluded, other than perhaps your long history of keeping it from being usable?
I would however risk that people send interesting patches as GPL only and this way prevent the freedom to use it by anybody.
And that would be different how???? You can't use them now. And worse, you've severely restricted the number of people who might offer patches regardless of the license.
But before you like to discuss things with me, I recommend you to first inform yourself correctly.
I if course _don't_ mix CDDLd code with GPLd code.
So, you really don't want your code to be used? Then why ask why it isn't popular?
Please explain me why people believe RedHat or Centos is a good choice when there are people inside that write false claims on the GPL because they did not read it in a way that would allow them to understand the GPL?
How do you imagine such a 'false claim' affects anyone's use of released code and source, or why it would be a factor in their choice? Personally I can't reconcile RedHat's restriction on redistributing binaries with the GPL's prohibition on additional restrictions, but CentOS makes that a non-issue.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 11:57 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
You can't distribute GPLd programs unless 'the work as a whole' is covered by the GPL. There can't be a distinction between binary and source since one is derived from the other.
Now you just need to understand what "as a whole" means....
Apparently we live in different universes. Or at least countries - where meanings are relative. But it doesn't matter how either of us understand it, what matters are how the legal system understands it in our native countries.
NO, you just still do not understand the legal rules that apply. I recommend you inform yourself about them. Fortunately, all serious lawyers basically say the same here...
Maybe you are a native english speaker and thus lazy with reading the GPL. If you carefully read the GPL, you of course understand that it is _very_ careful about what parts the GPL applies to. It definitely does _not_ apply to the "complete source".
Yes, in english, 'work as a whole' does mean complete. And the normal interpretation is that it covers everything linked into the same process at runtime unless there is an alternate interface-compatible component with the same feature set.
You of course need to understand what this means in a legal context and not in kitchen English...
If you have problems to understand the GPL, read one of the various comments from lawyers, but avoid Mr. Moglen - he is well known for intentionally writing false claims in the public and only uses correct lawful interpretations if he is in a private discussion.
No one is interested in setting themselves up for a legal challenge with opposing views by experts.
So could you please explain why all the distros that asked specialized lawyers ship the original cdrtools, while those distros that do not ship them never asked a lawyer?
And fortunately, Larry didn't publish "patch" under GPL, so I was able to write a non-GPLd POSIX compliant patch (note that gpatch is not POSIX compliant).
Larry is a nice guy. He doesn't want to cause trouble for anyone. Apparently that's not universal....
I am also a nice guy who is interested in collaboration amongst OSS projects.
BTW: I could relicense mkisofs to CDDL if I removed the Apple HFS filesystem support. This is because the code originally from Eric Youngdale has dropped well below 50%; it is even below 10%. I am a nice guy and leave things as they are as long as possible, after having asked specialized lawyers.
If you ask a lawyer, you will learn that you believed the wrong people before.
You need to _understand_ the GPL and avoid to just lazyly read it as you did before. The GPL does _not_ apply to _everything_. The GPL just applies to the "work" that is under GPL. For the rest, you just need to include it under _any_ license and if you did ever carefully read the GPL, you of course did know that already.
It applies to everything copyright law applies to since it is really copyright law that restricts distribution and the GPL simply provides the exceptions. There's a valid case for linked components to be considered derivative works of each other if they require the other for the work as a whole to be functional.
Read the GPL to learn that it does not include the term "linking". The law applies instead, and the law is concerned with the term "work" - not linking.
I already explained that every part of the GPL that uses the GPL's own definition of a derivative work is void, because it is in conflict with the law.
BTW: It seems to be progress that you now admit that parts which are usable separately are independent works. This applies to the cdrtools. You just need to check the cdrtools instead of listening to the false claims from the antisocial people at Debian.
My question is 'why not do it?'. You don't lose anything but the restrictions that you pretend aren't there since a dual license allows you to choose the terms of the other if you prefer. I don't like the GPL restrictions either, but I just say so instead of pretending otherwise. A dual license is clearly needed unless your point is to make people choose between either using your code or anything that is GPL'd.
If I did add the GPL to my code, I would not win anything, because antisocial people would still prevent it from being included in Debian or RedHat.
Beg your pardon? You lost me here. If you remove the reason for exclusion, what evidence do you have that the work would still be excluded, other than perhaps your long history of keeping it from being usable?
There _never_ was any reason for exclusion.
You should show that you are serious about your claims and include the original cdrtools now, since there is no reason for exclusion: the claimed problems do not exist and never existed.
Let me give a hint at the true background: the license change towards the CDDL was a reaction to the antisocial activities from Debian. Anyone who is interested can check the true and unfakeable timeline on the internet to verify this.
How do you imagine such a 'false claim' affects anyone's use of released code and source or why it would be a factor in their choice? Personally I can't reconcile RedHat's restriction on redistributing binaries with the GPL's prohibition on additional restrictions, but Centos makes that a non-issue.
If RedHat follows the rules of the GPL, RedHat should ship the original cdrtools now!
Jörg
On Mon, 2015-04-27 at 12:32 -0500, Les Mikesell wrote:
On Mon, Apr 27, 2015 at 11:57 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Now you just need to understand what "as a whole" means....
Yes, in english, 'work as a whole' does mean complete. And the normal interpretation is that it covers everything linked into the same process at runtime unless there is an alternate interface-compatible component with the same feature set.
That may be the USA interpretation but on the other, European, side of the Atlantic I believe
"as a whole" means generally BUT allowing for exceptions.
On Mon, Apr 27, 2015 at 1:46 PM, Always Learning centos@u64.u22.net wrote:
Yes, in english, 'work as a whole' does mean complete. And the normal interpretation is that it covers everything linked into the same process at runtime unless there is an alternate interface-compatible component with the same feature set.
That may be the USA interpretation but on the other, European, side of the Atlantic I believe
"as a whole" means generally BUT allowing for exceptions.
OK, great. That clears it up then.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 1:46 PM, Always Learning centos@u64.u22.net wrote:
Yes, in english, 'work as a whole' does mean complete. And the normal interpretation is that it covers everything linked into the same process at runtime unless there is an alternate interface-compatible component with the same feature set.
That may be the USA interpretation but on the other, European, side of the Atlantic I believe
"as a whole" means generally BUT allowing for exceptions.
OK, great. That clears it up then.
Maybe this helps:
The BSD license does not permit relicensing the code, so you cannot put BSD code under the GPL. This was explained, e.g., by Theo de Raadt some years ago already. The result was that Linux people removed the GPL header from all BSD-licensed Linux source files that had not been 100% written by the same person who added the GPL header.
The BSD license permits mixing a source file under the BSD license with some lines under a different license if you document this. But this is not done in all the cases I am aware of.
Up to now, nobody has been able to explain to me how a mixture of GPL and BSD can be legal, as this would require (when following the GPL) relicensing the BSD code under the GPL in order to make the whole be under the GPL.
In other words, if you can legally combine BSD code with GPL code, you can do the same with GPL and CDDL.
Jörg
On Mon, Apr 27, 2015 at 2:28 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
"as a whole" means generally BUT allowing for exceptions.
OK, great. That clears it up then.
Maybe this helps:
The BSD license does not permit relicensing the code, so you cannot put BSD code under the GPL.
Yes, if you mean what is described here as 'the original 4-clause' license, or BSD-old: http://en.wikipedia.org/wiki/BSD_licenses
The BSD license permits to mix a source file under BSD license with some lines under a different license if you document this. But this is not done in all cases I am aware of.
But you can't add the 'advertising requirement' of the 4-clause BSD to something with a GPL component because additional restrictions are prohibited.
Up to now, nobody could explain me how a mixture of GPL and BSD can be legal as this would require (when following the GPL) to relicense the BSD code under GPL in order to make the whole be under GPL.
In other words, if you can legally combine BSD code with GPL code, you can do with GPL and CDDL as well.
You can't do either if you are talking about the BSD-old license (which also isn't accepted as open source by the OSI). Fortunately, the owners of the original/official BSD were nice guys and removed the GPL incompatible clause, with the Revised BSD License being recognized as both open source and GPL-compatible. But that hasn't - and probably can't - happen with CDDL, so the only working option is dual licensing.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 2:28 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
"as a whole" means generally BUT allowing for exceptions.
OK, great. That clears it up then.
Maybe this helps:
The BSD license does not permit to relicense the code, so you cannot put BSD code under the GPL.
Yes, if you mean what is described here as 'the original 4-clause' license, or BSD-old: http://en.wikipedia.org/wiki/BSD_licenses
Do you like to discuss things or do you like to throw smoke grenades?
The BSD license permits to mix a source file under BSD license with some lines under a different license if you document this. But this is not done in all cases I am aware of.
But you can't add the 'advertising requirement' of the 4-clause BSD to something with a GPL component because additional restrictions are prohibited.
Up to now, nobody could explain me how a mixture of GPL and BSD can be legal as this would require (when following the GPL) to relicense the BSD code under GPL in order to make the whole be under GPL.
In other words, if you can legally combine BSD code with GPL code, you can do with GPL and CDDL as well.
You can't do either if you are talking about the BSD-old license (which also isn't accepted as open source by the OSI). Fortunately, the owners of the original/official BSD were nice guys and removed the GPL incompatible clause, with the Revised BSD License being recognized as both open source and GPL-compatible. But that hasn't - and probably can't - happen with CDDL, so the only working option is dual licensing.
It seems that you are not interested in a serious discussion.
The 4-clause BSD license is not a valid OSS license, and all original BSD code was converted by edict of the president of UC Berkeley.
So you claim that there is 4-clause BSD code in the Linux kernel? You are kidding :-(
Jörg
On Mon, Apr 27, 2015 at 4:04 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Yes, if you mean what is described here as 'the original 4-clause' license, or BSD-old: http://en.wikipedia.org/wiki/BSD_licenses
Do you like to discuss things or do you like to throw smoke grenades?
The only thing I'd like to discuss is your reason for not adding a dual license to make your code as usable and probably as ubiquitous as perl. And you have not mentioned anything about how that might hurt you.
In other words, if you can legally combine BSD code with GPL code, you can do with GPL and CDDL as well.
You can't do either if you are talking about the BSD-old license (which also isn't accepted as open source by the OSI). Fortunately, the owners of the original/official BSD were nice guys and removed the GPL incompatible clause, with the Revised BSD License being recognized as both open source and GPL-compatible. But that hasn't - and probably can't - happen with CDDL, so the only working option is dual licensing.
It seems that you are not interested in a serious discussion.
Not unless it is about how you or anyone else would be hurt by a dual license. Anything else is just ranting on both our parts.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 4:04 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Yes, if you mean what is described here as 'the original 4-clause' license, or BSD-old: http://en.wikipedia.org/wiki/BSD_licenses
Do you like to discuss things or do you like to throw smoke grenades?
The only thing I'd like to discuss is your reason for not adding a dual license to make your code as usable and probably as ubiquitous as perl. And you have not mentioned anything about how that might hurt you.
I explained this to you in great detail. If you ignore this explanation, I cannot help you.
Jörg
On Mon, Apr 27, 2015 at 4:19 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Do you like to discuss things or do you like to throw smoke grenades?
The only thing I'd like to discuss is your reason for not adding a dual license to make your code as usable and probably as ubiquitous as perl. And you have not mentioned anything about how that might hurt you.
I explained this to you in vast details. If you ignore this explanation, I cannot help you.
No, you posted some ranting misconceptions about why you don't see a need for it. But if you actually believed any of that yourself, then you would see there was no harm in adding a dual license to make it clear to everyone else. It clearly has not hurt the popularity of perl or BSD code to become GPL-compatible, nor has it forced anyone to use that code only in GPL-compatible ways.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 4:19 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Do you like to discuss things or do you like to throw smoke grenades?
The only thing I'd like to discuss is your reason for not adding a dual license to make your code as usable and probably as ubiquitous as perl. And you have not mentioned anything about how that might hurt you.
I explained this to you in vast details. If you ignore this explanation, I cannot help you.
No, you posted some ranting misconceptions about why you don't see a need for it. But if you actually believed any of that yourself, then you would see there was no harm in adding a dual license to make it clear to everyone else. It clearly has not hurt the popularity of perl or BSD code to become GPL-compatible, nor has it forced anyone to use that code only in GPL-compatible ways.
Cdrtools are fully legal, as they strictly follow all requirements of the related licenses.
What problem do you have with fully legal code?
I explained that because cdrtools is legally distributable as is (see legal reviews from Sun, Oracle and Suse), there is no need to dual license anything.
I also explained that a dual-licensed source will cause problems if people send, e.g., a GPL-only patch.
If you continue to claim that you have not had an answer from me, I will need to assume that you are not interested in a serious discussion.
Conclusion: dual licensing is not helpful and it even has disadvantages.
Jörg
On Mon, Apr 27, 2015 at 4:34 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
No, you posted some ranting misconceptions about why you don't see a need for it. But if you actually believed any of that yourself, then you would see there was no harm in adding a dual license to make it clear to everyone else. It clearly has not hurt the popularity of perl or BSD code to become GPL-compatible, nor has it forced anyone to use that code only in GPL-compatible ways.
Cdrtools are fully legal as they strictly follow all claims from the related licenses.
What problem do you have with fully legal code?
The problem is that it can't be used as a component of a larger work if any other components are GPL-covered. As you know very well.
I explained that because cdrtools is legally distributable as is (see legal reviews from Sun, Oracle and Suse), there is no need to dual license anything.
Unless you would like it to be used more widely, and available as component in best-of-breed works.
I also explained that a dual licensed source will cause problems if people send e.g. a GPL only patch.
So, not being able to accept patches from people who aren't sending patches now - and probably aren't even aware of your work - would somehow be a problem. That's ummm, imaginative...
If you continue to claim not to have an answer from me, I need to assume that you are not interested in a serious discussion.
I haven't seen any serious discussion yet. Maybe we could discuss how badly perl has suffered from not being able to accept those GPL'd patches that you fear so much.
Conclusion: dual licensing is not helpful and it even has disadvantages.
Wrong conclusion. Remind me why you asked about your code not being used.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 4:34 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
No, you posted some ranting misconceptions about why you don't see a need for it. But if you actually believed any of that yourself, then you would see there was no harm in adding a dual license to make it clear to everyone else. It clearly has not hurt the popularity of perl or BSD code to become GPL-compatible, nor has it forced anyone to use that code only in GPL-compatible ways.
Cdrtools are fully legal as they strictly follow all claims from the related licenses.
What problem do you have with fully legal code?
The problem is that it can't be used as a component of a larger work if any other components are GPL-covered. As you know very well.
You know very well that you are writing a false claim here.
Cdrtools is fully legal and can be rightfully redistributed in source or binary form. This has been verified by three independent teams of lawyers.
If you have wishes that go beyond legality, I cannot help you.
Jörg
On 04/27/2015 12:28 PM, Joerg Schilling wrote:
Up to now, nobody could explain me how a mixture of GPL and BSD can be legal as this would require (when following the GPL) to relicense the BSD code under GPL in order to make the whole be under GPL.
The GPL doesn't require that you relicense any non-GPL parts of the whole. It requires that the whole "be licensed ... at no charge to all third parties under the terms of this License"
The whole, containing portions which are BSD licensed, does not place any additional restrictions or responsibilities upon recipients, and therefore satisfies the requirements of GPL2 section 2.b.
In other words, if you can legally combine BSD code with GPL code, you can do with GPL and CDDL as well.
No, you can't. Section 6 of the GPL states that "You may not impose any further restrictions on the recipients' exercise of the rights granted herein." CDDL however, does contain additional restrictions.
Moreover, the exclusion is mutual. Section 3.4 of the CDDL states "You may not offer or impose any terms on any Covered Software in Source Code form that alters or restricts the applicable version of this License or the recipients' rights hereunder." The GPL2 restricts the recipients rights in ways that the CDDL does not.
I'm not able to find any information about actual court decisions about compatibility between GPL 2 or 3 and CDDL or MPL 1.1 (upon which CDDL was based). The FSF regards MPL 1.1 and CDDL as incompatible with GPL. If you and your lawyers disagree, you might end up as the first to establish a court precedent. Only you can decide for yourself if that is a risk you would like to undertake, and if the value of testing that notion is worth the costs. Until then, any claim that the two are compatible is naive.
Gordon Messmer gordon.messmer@gmail.com wrote:
On 04/27/2015 12:28 PM, Joerg Schilling wrote:
Up to now, nobody could explain to me how a mixture of GPL and BSD can be legal, as this would require (when following the GPL) relicensing the BSD code under the GPL in order to put the whole under the GPL.
The GPL doesn't require that you relicense any non-GPL parts of the whole. It requires that the whole "be licensed ... at no charge to all third parties under the terms of this License"
You misread the GPL. Ask a lawyer for help.
The GPL demands (in case you ship binaries, and only in this case) no more than that the GPL work be put under the GPL and that anything needed to re-create the binary be made available under a license that allows redistribution.
See e.g. the book about the GPL from the lawyers of Harald Welte.
http://www.oreilly.de/german/freebooks/gplger/pdf/025-168.pdf
See page 85 (PDF page 60), in particular the lower half of the paragraph numbered "23".
In other words, if you can legally combine BSD code with GPL code, you can do the same with GPL and CDDL as well.
No, you can't. Section 6 of the GPL states that "You may not impose any further restrictions on the recipients' exercise of the rights granted herein." CDDL however, does contain additional restrictions.
I recommend that you not repeat false claims from uninformed people.
If you did read the CDDL, you did of course know that the CDDL places "work limits" at file limits and that the CDDL does not try to impose any restriction on sources that are not in a file marked as CDDLd. So the CDDL of course does create any restriction on a GPLd work.
On the other hand, the GPL does create restrictions on other sources, but it just requires that other sources (if needed to recreate the shipped binary) be shipped together with the GPLd work. The GPL of course does not impose any further restrictions on _other_ sources under a different license.
Given the fact that the official cdrtools source tarball includes everything to recreate the binary, everything is legal unless you make unlawful changes to the original source.
So calm down, and read the GPL and the CDDL on your own - repeatedly - until you fully understand both licenses.
Jörg
Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
If you did read the CDDL, you did of course know that the CDDL places "work limits" at file limits and that the CDDL does not try to impose any restriction on sources that are not in a file marked as CDDLd. So the CDDL of course does _not_ create any restriction on a GPLd work.
^^^^^ Typo correction.
Jörg
On Apr 27, 2015, at 9:07 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Heirloom added support for uname -S and for some Linux ulimit extensions, but then stopped working on the code after a few months.
Ah. I had no idea it was in a state of disrepair.
I see that you already wrote up the differences between osh and bosh in an earlier post. Is there a good reason why these comparisons are not on the Schily Tools web page already? :)
The schily tools act as a container to publish the current code state. There is no such maintained web page.
I was referring to the summary on the SourceForge page, where you just list the contents of the package without explaining why one would want to download it.
I would be interested to understand why Heirloom seems to be so well known and my portability attempts seem to be widely unknown.
I can think of several explanations:
1. The Heirloom pages explain what features each download provides, rather than just give a list of program names.
If you tell me that I can download “bsh”, I have no idea why I want bsh based solely on its name. If you tell me that I can download “od”, I reply that I already have a functioning version of od, thank you very much. :)
2. Many of those who might be interested in your osh are already well served by the Ancient Unix V7 + SIMH combination:
http://www.in-ulm.de/~mascheck/various/ancient/
You are left with the subset of people who want to run something other than the shells that come with their OS, and who want it to run natively.
I should point out that a lot of people using the Ancient Unix images actually don’t want old bugs fixed.
3. It’s not clear from the files I’ve peeked into in your source distribution when bsh first became available in an OSI-approved form, but it seems to be sometime in the 2005-2007 range.
If that is true, then bsh is several years late to fill a gap already filled by ash, in the same way that the prior existence of bash makes the open-source version of ksh93 uninteresting to most people.
This is why you need a web page to sell your project: to explain why someone should abandon bash, zsh, ash, dash, posh, ksh93u+, mksh…
4. CDDL annoys a lot of people. Yes, I know, GPL annoys a lot of people, too. But again, you’re going up against ash, which is BSD, which annoys almost no one. :)
Warren Young wyml@etr-usa.com wrote:
The schily tools act as a container to publish the current code state. There is no such maintained web page.
I was referring to the summary on the SourceForge page, where you just list the contents of the package without explaining why one would want to download it.
I thought I didn't need to advertise well-known software.
I maintain the only actively maintained portable Bourne Shell, and I do the same for SCCS.
I would be interested to understand why Heirloom seems to be so well known and my portability attempts seem to be widely unknown.
I can think of several explanations:
- The Heirloom pages explain what features each download provides, rather than just give a list of program names.
The problem is that the developer's page cannot contain much information, and in general I prefer coding to writing advertising.
If you tell me that I can download “bsh”, I have no idea why I want bsh based solely on its name. If you tell me that I can download “od”, I reply that I already have a functioning version of od, thank you very much. :)
Bsh is mainly in schily tools to show people how the first shell with an interactive, editable history looked. Bsh != Bourne Shell. It was named bsh because I implemented my history editor at H. Berthold AG while working on a demand-paged variant of UNOS.
- Many of those who might be interested in your osh are already well served by the Ancient Unix V7 + SIMH combination:
http://www.in-ulm.de/~mascheck/various/ancient/
You are left with the subset of people who want to run something other than the shells that come with their OS, and who want it to run natively.
I should point out that a lot of people using the Ancient Unix images actually don’t want old bugs fixed.
- It’s not clear from the files I’ve peeked into in your source distribution when bsh first became available in an OSI-approved form, but it seems to be sometime in the 2005-2007 range.
If that is true, then bsh is several years late to fill a gap already filled by ash, in the same way that the prior existence of bash makes the open-source version of ksh93 uninteresting to most people.
This is why you need a web page to sell your project: to explain why someone should abandon bash, zsh, ash, dash, posh, ksh93u+, mksh…
I am not interested in working against ksh93, as it is much closer to POSIX than the current Bourne Shell. The Bourne Shell, however, is a nice choice for the system shell /bin/sh because it is faster than bash and as fast as ksh93, but much smaller (if you use the UNIX linker, you can implement lazy linking that causes it to be only 80 kB when interpreting scripts). See:
http://schillix.sourceforge.net/man/man1/ld.1.html
for the UNIX linker man page; see the -z lazyload option.
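For concreteness, a link step using that option might look roughly like the following. This is only an illustrative sketch; the object list and the curses dependency are invented for the example:

# hypothetical link line on Solaris with the native linker (ld);
# libraries named after -z lazyload are only loaded when one of their
# functions is first called, so a plain script run never pays for them
cc -o bosh $OBJS -z lazyload -lcurses -z nolazyload -lc

The point is that dependencies which only matter interactively are not pulled in when the shell merely interprets a script, which is presumably how the 80 kB figure above is reached.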
- CDDL annoys a lot of people. Yes, I know, GPL annoys a lot of people, too. But again, you’re going up against ash, which is BSD, which annoys almost no one. :)
The CDDL does not annoy people, this is just a fairy tale from some OSS enemies. BTW: I am of course not against ash, I just support the Bourne Shell.
Jörg
On Apr 27, 2015, at 10:10 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Warren Young wyml@etr-usa.com wrote:
I was referring to the summary on the SourceForge page, where you just list the contents of the package without explaining why one would want to download it.
I thought I didn't need to advertise well-known software.
I first learned of its existence last week, and then only thanks to the present discussion.
I immediately disregarded it for the reasons I’ve already given.
If it wasn’t for this thread, I’d still be ignorant of the reasons why I should care about the existence of Schily Tools.
the developer's page cannot contain much information
SourceForge gives you a way to link to a page on another site.
I prefer coding to writing advertising.
Well, there’s your diagnosis, then. Successful software requires advertising, whether F/OSS or not.
The original Bourne shell was advertised through the pages of CACM, in books, etc. A search for “Schilling” on linuxjournal.com turns up nothing except for some references to cdrecord.
So, why do you expect that I should have stumbled across Schily Tools before now?
If you tell me that I can download “bsh”, I have no idea why I want bsh based solely on its name. If you tell me that I can download “od”, I reply that I already have a functioning version of od, thank you very much. :)
Bsh is mainly in schily tools to show people how the first shell with an interactive, editable history looked. Bsh != Bourne Shell.
Yes, I realize that osh is closer to the original Bourne shell. My point is that you can’t expect people to just know, without having been told, why they want bsh, or osh, bosh, or smake, or…
Most of these tools compete with tools that are already in CentOS. If you want people to use these instead, you’re not going to persuade many people with a tarball.
As for the tools that do not have equivalents in CentOS, the file name is not an explanation.
You can’t expect people to just blindly download the tarball, build it, install it, and then start reading man pages. You have to entice people first.
This thread is accomplishing that to some extent. I just think your time would be better spent writing such thoughts up on a web page somewhere, then linking to that from the SourceForge page. You will reach many more people that way.
- CDDL annoys a lot of people.
The CDDL does not annoy people, this is just a fairy tale from some OSS enemies.
The following irritates me, I am a “people,” and I am not an OSS enemy:
http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue
BTW: I am of course not against ash, I just support the Bourne Shell.
Like it or not, your shells are in competition against all the other shells that became available earlier than yours. There is only so much free time in the world. You can’t expect people to stop using something they’re already successfully using without some amount of persuasion.
On Mon, Apr 27, 2015 at 11:41 AM, Warren Young wyml@etr-usa.com wrote:
- CDDL annoys a lot of people.
The CDDL does not annoy people, this is just a fairy tale from some OSS enemies.
The following irritates me, I am a “people,” and I am not an OSS enemy:
It is really the GPL that has the restriction preventing 'best-of-breed' components being combined, but it doesn't matter, it isn't going to change. I can see Sun being irritated with Linux (and for good reason...) but isn't it time to let it go?
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 11:41 AM, Warren Young wyml@etr-usa.com wrote:
- CDDL annoys a lot of people.
The CDDL does not annoy people, this is just a fairy tale from some OSS enemies.
The following irritates me, I am a “people,” and I am not an OSS enemy:
It is really the GPL that has the restriction preventing 'best-of-breed' components being combined, but it doesn't matter, it isn't going to change. I can see Sun being irritated with Linux (and for good reason...) but isn't it time to let it go?
We would have far fewer problems if the people who use the GPL understood the GPL.
If you combine ZFS and Linux, you create a permitted "collective work", and the GPL of course cannot extend its rules to ZFS, a separate and independent CDDLd work.
Jörg
On Mon, Apr 27, 2015 at 12:10 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
If you combine ZFS and Linux, you create a permitted "collective work", and the GPL of course cannot extend its rules to ZFS, a separate and independent CDDLd work.
Which countries' copyright laws would permit that explicitly even when some of the components' licenses prohibit it?
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 12:10 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
If you combine ZFS and Linux, you create a permitted "collective work", and the GPL of course cannot extend its rules to ZFS, a separate and independent CDDLd work.
Which countries' copyright laws would permit that explicitly even when some of the components' licenses prohibit it?
Fortunately, Europe and the USA declare void the same parts of the GPL, namely the parts that would prevent such a combination.
In the USA, the GPL is a legal construct called a "license", and for customer protection a "license" is limited to making only claims that are listed in US Copyright law, title 17, paragraph 106.
The GPL makes claims that are in conflict with the law because these claims are not among those the law permits, and they are thus void.
The same parts of the GPL are void in the EU because they are written in an ambiguous way. For customer protection, the rules for "general conditions" apply, and these rules permit the customer to select the interpretation that is best for the customer in such a case.
Both legal systems have the same result: they prevent the GPL from using its own interpretation of what a derivative work is, and the rules from the laws apply instead. These rules make many combinations a "collective work" that is permitted. The cdrtools and ZFS on Linux match these rules - well, I assume that the ZFS integration code follows the rules that are needed for a clean collective work.
Cdrtools follow these rules:
- No code from CDDL and GPL is mixed into a single file
- Non-GPL code used in a collective work was implemented independently from the GPLd parts and forms a separate work that may be used without the GPLd code as well.
Jörg
On Mon, Apr 27, 2015 at 1:02 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
The GPL makes claims that are in conflict with the law because these claims are not among those the law permits, and they are thus void.
The GPL is all that gives you permission to distribute. If it is void then you have no permission at all to distribute any covered code.
Both legal systems have the same result: they prevent the GPL from using its own interpretation of what a derivative work is, and the rules from the laws apply instead.
So apply copyright law without a license. You can't distribute. I agree that the FSF interpretation about distributing source with the intention that the end user does the link with other components is pretty far off the wall, but static binaries are clearly one 'work as a whole' and dynamic linkage is kind of fuzzy. US juries are supposed to focus on intent and are pretty unpredictable - I wouldn't want to take a chance on what they might decide.
These rules make many combinations a "collective work" that is permitted. The cdrtools and ZFS on Linux match these rules - well, I assume that the ZFS integration code follows the rules that are needed for a clean collective work.
Can you point out a reference to a case where this has been validated? That is, a case where the only licence to distribute a component of something is the GPL and distribution is permitted by a court ruling under terms where the GPL does not apply to the 'work as a whole'?
Cdrtools follow these rules:
No code from CDDL and GPL is mixed into a single file
How is 'a file' relevant to the composition of the translated binary where the copyright clearly extends? And why do you have any rules if you think the GPL doesn't pose a problem with combining components? More to the point, why don't you eliminate any question about that problem with a dual license on the code you control?
Non-GPL code used in a collective work was implemented independently from the GPLd parts and forms a separate work that may be used without the GPLd code as well.
How 'you' arrange them isn't the point. Or even any individual who builds something that isn't intended for redistribution. But for other people to consider them generally usable as components in redistributable projects there's not much reason to deal with the inability to combine with other widely used components. What's the point - and what do you have against the way perl handles it?
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 1:02 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
The GPL makes claims that are in conflict with the law because these claims are not among those the law permits, and they are thus void.
The GPL is all that gives you permission to distribute. If it is void then you have no permission at all to distribute any covered code.
Fortunately judges know better than you....
If you read the reasoning in the judgements, you would know that judges just look at the parts of the GPL that are not in conflict with the law. Judges know that making the GPL void as a whole would be a disaster.
Both legal systems have the same result: they prevent the GPL from using its own interpretation of what a derivative work is, and the rules from the laws apply instead.
So apply copyright law without a license. You can't distribute. I agree that the FSF interpretation about distributing source with the intention that the end user does the link with other components is pretty far off the wall, but static binaries are clearly one 'work as a whole' and dynamic linkage is kind of fuzzy. US juries are supposed to focus on intent and are pretty unpredictable - I wouldn't want to take a chance on what they might decide.
Given the fact that there is not a single trustworthy lawyer in the US who writes about the GPL and follows your interpretation, I am relaxed.
These rules make many combinations a "collective work" that is permitted. The cdrtools and ZFS on Linux match these rules - well, I assume that the ZFS integration code follows the rules that are needed for a clean collective work.
Can you point out a reference to a case where this has been validated? That is, a case where the only licence to distribute a component of something is the GPL and distribution is permitted by a court ruling under terms where the GPL does not apply to the 'work as a whole'?
There was no court case, but VERITAS published a modified version of gtar where additional code was added via binary-only libraries from VERITAS. The FSF never tried to discuss this in public even though everybody knew about its existence. As long as the FSF does not try to sue VERITAS, we are safe - regardless of what intentional nonsense you can read on the FSF webpages.
Cdrtools follow these rules:
No code from CDDL and GPL is mixed into a single file
How is 'a file' relevant to the composition of the translated binary where the copyright clearly extends? And why do you have any rules if you think the GPL doesn't pose a problem with combining components? More to the point, why don't you eliminate any question about that problem with a dual license on the code you control?
???
I completely follow the claims from both licenses, so there is no need to follow your wishes.
Non-GPL code used in a collective work was implemented independently from the GPLd parts and forms a separate work that may be used without the GPLd code as well.
How 'you' arrange them isn't the point. Or even any individual who builds something that isn't intended for redistribution. But for other people to consider them generally usable as components in redistributable projects there's not much reason to deal with the inability to combine with other widely used components. What's the point - and what do you have against the way perl handles it?
You are of course wrong, and you ignore everything I explained to you before.
If your idiosyncratic GPL interpretation were true, your whole Linux distro would be illegal. When will you withdraw your Linux distro?
Jörg
Can we take the license wanking off the list please? I don't think either of the people arguing are actually lawyers, so it has no relevance.
On Mon, 2015-04-27 at 14:21 -0500, Chris Adams wrote:
Can we take the license wanking off the list please? I don't think either of the people arguing are actually lawyers, so it has no relevance.
Relevance is not dependent on being, or not being, a lawyer. Relevance for inclusion on the mailing list is a close connection to Centos/RHEL :-)
On Mon, Apr 27, 2015 at 2:13 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
The GPL is all that gives you permission to distribute. If it is void then you have no permission at all to distribute any covered code.
Fortunately judges know better than you....
If you read the reasoning in the judgements, you would know that judges just look at the parts of the GPL that are not in conflict with the law. Judges know that making the GPL void as a whole would be a disaster.
There is nothing in conflict with law about prohibiting distribution. And you can't just unilaterally pick the parts of the licence that permit distribution that you like and ignore the rest.
So apply copyright law without a license. You can't distribute. I agree that the FSF interpretation about distributing source with the intention that the end user does the link with other components is pretty far off the wall, but static binaries are clearly one 'work as a whole' and dynamic linkage is kind of fuzzy. US juries are supposed to focus on intent and are pretty unpredictable - I wouldn't want to take a chance on what they might decide.
Given the fact that there is not a single trustworthy lawyer in the US who writes about the GPL and follows your interpretation, I am relaxed.
It's not 'my' interpretation. Nor does my interpretation matter much. It is the owners of the GPL licensed code that would be allowed to claim damages if the GPL terms are not followed. And what they have published is that all of the runtime linked components are included in the 'work as a whole' specification. I assume you are familiar with RIPEM and the reason it could not be distributed until there was a non-GNU implementation of gmp. https://groups.google.com/forum/#!topic/gnu.misc.discuss/4RcHL5Jg14o%5B1-25]
Can you point out a reference to a case where this has been validated? That is, a case where the only licence to distribute a component of something is the GPL and distribution is permitted by a court ruling under terms where the GPL does not apply to the 'work as a whole'?
There was no court case, but VERITAS published a modified version of gtar where additional code was added via binary-only libraries from VERITAS. The FSF never tried to discuss this in public even though everybody knew about its existence. As long as the FSF does not try to sue VERITAS, we are safe - regardless of what intentional nonsense you can read on the FSF webpages.
Hardly. One instance by one set of code owners has nothing to do with what some other code owner might do under other circumstances. If you could quote a decision that set a precedent it might be a factor.
Cdrtools follow these rules:
No code from CDDL and GPL is mixed into a single file
How is 'a file' relevant to the composition of the translated binary where the copyright clearly extends? And why do you have any rules if you think the GPL doesn't pose a problem with combining components? More to the point, why don't you eliminate any question about that problem with a dual license on the code you control?
???
I completely follow the claims from both licenses, so there is no need to follow your wishes.
Unless, of course, you actually wanted the code to be used by others or included as components of best-of-breed projects.
Non-GPL code used in a collective work was implemented independently from the GPLd parts and forms a separate work that may be used without the GPLd code as well.
How 'you' arrange them isn't the point. Or even any individual who builds something that isn't intended for redistribution. But for other people to consider them generally usable as components in redistributable projects there's not much reason to deal with the inability to combine with other widely used components. What's the point - and what do you have against the way perl handles it?
You are of course wrong, and you ignore everything I explained to you before.
And likewise you ignore the fact that you would not lose anything with a dual license other than the reason for frequent arguments. And my only question is 'why not'?
If your idiosyncratic GPL interpretation were true, your whole Linux distro would be illegal. When will you withdraw your Linux distro?
How so? Which process links GPL and non-GPL-compatible licensed code into a single work? No one has suggested that it is a problem to distribute separate differently-licensed works together on the same medium or run them on the same box.
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 2:13 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
The GPL is all that gives you permission to distribute. If it is void then you have no permission at all to distribute any covered code.
Fortunately judges know better than you....
If you read the reasoning in the judgements, you would know that judges just look at the parts of the GPL that are not in conflict with the law. Judges know that making the GPL void as a whole would be a disaster.
There is nothing in conflict with law about prohibiting distribution. And you can't just unilaterally pick the parts of the licence that permit distribution that you like and ignore the rest.
There is no doubt that the GPL is in conflict with
US Copyright law title 17 paragraph 106
if you believe that this makes the GPL void as a whole, prove that you are serious and stop shipping your Linux distro immediately!
Jörg
On Mon, Apr 27, 2015 at 2:13 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Les Mikesell lesmikesell@gmail.com wrote:
There was no court case, but VERITAS published a modified version of gtar where additional code was added via binary-only libraries from VERITAS. The FSF never tried to discuss this in public even though everybody knew about its existence. As long as the FSF does not try to sue VERITAS, we are safe - regardless of what intentional nonsense you can read on the FSF webpages.
I just remembered a counterpoint to this. Back in the Windows 3.0 days when windows had no tcp networking of its own, I put together a DOS binary built from gnutar and the wattcp stack so you could back up a windows or dos box to a unix system via rsh. And when I tried to give it away I was contacted and told that I couldn't distribute it because even though wattcp was distributed in source, it had other conflicts with the GPL. As a side effect of getting it to build on a DOS compiler, I prototyped the tar code and contributed that and some bugfixes. Someone else's version was accepted instead but at least my name is still in a comment somewhere. Probably the only thing still being distributed...
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 2:13 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Les Mikesell lesmikesell@gmail.com wrote:
There was no court case, but VERITAS published a modified version of gtar where additional code was added via binary-only libraries from VERITAS. The FSF never tried to discuss this in public even though everybody knew about its existence. As long as the FSF does not try to sue VERITAS, we are safe - regardless of what intentional nonsense you can read on the FSF webpages.
I just remembered a counterpoint to this. Back in the Windows 3.0 days when windows had no tcp networking of its own, I put together a DOS binary built from gnutar and the wattcp stack so you could back up a windows or dos box to a unix system via rsh. And when I tried to give it away I was contacted and told that I couldn't distribute it because even though wattcp was distributed in source, it had other conflicts with the GPL. As a side effect of getting it to build on a
If you had the wattcp stack in a separate library, and if you made the needed changes for integration in the gtar source, this was fully legal.
I know that the FSF frequently tries to ask people to do things that are not on a legal basis. They however know that they cannot go to trial with this...
Jörg
On Tue, Apr 28, 2015 at 3:56 AM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Apr 27, 2015 at 2:13 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
Les Mikesell lesmikesell@gmail.com wrote:
There was no court case, but VERITAS published a modified version of gtar where additional code was added via binary-only libraries from VERITAS. The FSF never tried to discuss this in public even though everybody knew about its existence. As long as the FSF does not try to sue VERITAS, we are safe - regardless of what intentional nonsense you can read on the FSF webpages.
I just remembered a counterpoint to this. Back in the Windows 3.0 days when windows had no tcp networking of its own, I put together a DOS binary built from gnutar and the wattcp stack so you could back up a windows or dos box to a unix system via rsh. And when I tried to give it away I was contacted and told that I couldn't distribute it because even though wattcp was distributed in source, it had other conflicts with the GPL. As a side effect of getting it to build on a
If you had the wattcp stack in a separate library, and if you made the needed changes for integration in the gtar source, this was fully legal.
The source code was separate files, but the binary 'work as a whole' had to be one. I don't think DOS even had a concept of loading binary libraries separate from the main executable. And the binary obviously is controlled by the copyright on the source. So while I don't like it, I can see the point that it does not meet the GPL requirement any more than it would if it were linked to a commercial library that another user would have to purchase. And there's a reasonable chance they could make an equivalent case even where shared libraries can be used, since the intent is the same.
I know that the FSF frequently tries to ask people to do things that are not on a legal basis. They however know that they cannot go to trial with this...
Yes, so, the only way to help keep others from being harmed by this is to dual-license code so they can't possibly make such a claim. It doesn't happen with perl because Larry Wall understood that long ago. Or, if you are so sure of your legal footing, distribute something that they will challenge yourself and win the case that will set the precedent for the rest of us. But I'd guess dual-licensing would be easier and cheaper.
Warren Young wyml@etr-usa.com wrote:
Yes, I realize that osh is closer to the original Bourne shell. My point is that you can’t expect people to just know, without having been told, why they want bsh, or osh, bosh, or smake, or…
Most of these tools compete with tools that are already in CentOS. If you want people to use these instead, you’re not going to persuade many people with a tarball.
Could you explain to me why people wrote gmake even though smake already existed 5 years earlier?
The CDDL does not annoy people, this is just a fairy tale from some OSS enemies.
The following irritates me, I am a “people,” and I am not an OSS enemy:
This is of course completely wrong.
I recommend that you read the GPL book from the lawyers of Harald Welte. They explain why a filesystem is not a derived work of the Linux kernel.
This is of course especially true for ZFS, as ZFS was not written for Linux and already works without Linux.
http://www.fokus.fraunhofer.de/usr/schilling ftp://ftp.berlios.de/pub/schily
Stephen Harris lists@spuddy.org wrote:
On Fri, Apr 24, 2015 at 10:38:25AM -0400, m.roth@5-cent.us wrote:
Fascinating. As I'd been in Sun OS, and started doing admin work when it became Solaris, I'd missed that bit. A question: did the license agreement include payment, or was it just restrictive on distribution?
In 1990, when I started using ksh88, it was totally commercial. Binaries were $$$ and source was $$$$. We bought the source and compiled it for SunOS, Ultrix and various SYSVr[23] machines (one machine was so old it didn't understand #! and so needed it placed as /bin/sh).
But around 1991/1992, the first Solaris 2.x (SunOS 5.1) came out, and it included the Korn Shell at no additional cost.
Jörg
Pete Geenhuizen pete@geenhuizen.net wrote:
Initially the Bourne shell was used because it was typically a static binary, since the boot process didn't have access to any shared libraries. When that changed it became a bit of a moot point, and you started to see other interpreters being used.
When dynamic linking was introduced in 1988, people did not know what we now know, and provided sh, mv, tar, ifconfig and mount as static binaries in "/sbin".
Since Solaris 10 we know better, and there are no static binaries anymore.
BTW: the real Bourne Shell is now 100% portable and has been enhanced for quite some time. If you would like to test the real Bourne Shell, check the latest schilytools:
https://sourceforge.net/projects/schilytools/files/
The Bourne Shell is also much faster than bash, in particular on platforms like Cygwin, where Microsoft enforces extremely slow process creation.
Even though Solaris started using ksh as the default user environment, almost all of the start scripts were either bourne or bash scripts. With bash having more functionality, the scripts typically used the environment that suited the requirements best.
There are no bash scripts on Solaris, as bash has too many deviations from the standard.
Jörg
Interesting thread i started! Sorry if my question was too vague: -->
On Fri, 4/24/15, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
The Bourne Shell is also much faster than bash, in particular on platforms like Cygwin, where Microsoft enforces extremely slow process creation.
This gets at what I was thinking. For scripts that are not run interactively, it seems wasteful to load all of Bash's autocomplete, command history and other rich features.
For example, running on a high-volume mail server: *short* scripts that take a few input args and invoke another program. Or do a mysql update (but it has been pointed out that invoking mysql from a shell script is also inefficient, since the mysql client is also very feature-rich, with command history and such). Or take some arguments and make a curl HTTP request somewhere.
So my question is: should I install ksh (I see it is available in the yum CentOS base repo) and use that? Or should we consider rewriting these short scripts in perl? I read on the web that perl with a few typical libraries is far slower to start up than a shell script. ?? (no heavy computations)
Just as a side tangent, there was the question of whether it would be of interest to link /bin/sh to something other than /bin/bash, whether the machine would implode, or whether it would make the machine faster in any way.
thanks everyone!
On Fri, Apr 24, 2015 at 3:45 PM, E.B. emailbuilder88@yahoo.com wrote:
Interesting thread i started! Sorry if my question was too vague: -->
On Fri, 4/24/15, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
The Bourne Shell is also much faster than bash, in particular on platforms like Cygwin, where Microsoft enforces extremely slow process creation.
This gets at what I was thinking. For scripts that are not run interactively, it seems wasteful to load all of Bash's autocomplete, command history and other rich features.
For example, running on a high-volume mail server: *short* scripts that take a few input args and invoke another program. Or do a mysql update (but it has been pointed out that invoking mysql from a shell script is also inefficient, since the mysql client is also very feature-rich, with command history and such). Or take some arguments and make a curl HTTP request somewhere.
So my question is: should I install ksh (I see it is available in the yum CentOS base repo) and use that? Or should we consider rewriting these short scripts in perl? I read on the web that perl with a few typical libraries is far slower to start up than a shell script. ?? (no heavy computations)
I'd do some serious timing tests in your typical environment before believing anything about this. The part that takes substantial time is if you have to load code from disk. Anything already running (loaded from the same inode, so including hard links to different names) should run shared-text without loading a new copy (also saving memory...). Anything that had been loaded recently but needs a new copy should be reloaded quickly from cache. Loading a new instance of some little used interpreter is going to hit the disk.
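A crude way to run such a timing test is to measure bare interpreter start-up in a loop; treat the following only as a sketch of the method, since the absolute numbers depend entirely on the system and the cache state, and ksh is assumed to be installed:

# start-up cost only: each interpreter is asked to do nothing, 100 times
time for i in $(seq 1 100); do bash -c :; done
time for i in $(seq 1 100); do ksh -c :; done
time for i in $(seq 1 100); do perl -e 1; done

Run it twice and look at the second set of numbers, so that everything is already in the page cache and you are comparing start-up work rather than disk reads.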
Your most likely win would be to consolidate operations into longer scripts and use perl where it can do work that would involve several other programs as shell commands. For example, I'd expect a single perl program with several mysql operations to be much faster than a shell script that needs to invoke mysql more than once - plus it is a lot easier to access the data in a perl program.
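To illustrate the consolidation point without leaving the shell: batching several statements into one mysql client invocation already avoids most of the repeated start-up cost. The database name, credentials file and statements below are invented for the sketch:

# one mysql client start-up instead of three
mysql --defaults-extra-file=/etc/myapp.cnf mydb <<'SQL'
UPDATE queue SET state='sent' WHERE id=42;
UPDATE stats SET sent=sent+1 WHERE day=CURDATE();
DELETE FROM tmp_ids WHERE id=42;
SQL

As suggested above, a single perl program using the DBI module goes a step further, since the database connection itself is also reused across statements.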
On Fri, Apr 24, 2015 at 08:32:45AM -0400, Scott Robbins wrote:
Wasn't Solaris, which for awhile at least, was probably the most popular Unix, using ksh by default?
Solaris /bin/sh was a real real dumb version of the bourne shell. Solaris included /bin/ksh as part of the core distribution (ksh88 was a part of the SVr4 specification) and so many scripts were written with #!/bin/ksh at the start (including tools like "patchadd").
Note that Solaris had bugs in those tools because they didn't start with "#!/bin/ksh -p", so if you had an $ENV file that included lines like "set -o noclobber" or had aliases, then scripts would break (patchadd was a perfect example). Many of these got fixed by Solaris 8 :-)
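To make that failure mode concrete, an ENV start-up file of the sort described above might look like this (the file name and contents are invented for the illustration):

# $HOME/.kshrc, pointed to by ENV=$HOME/.kshrc in the user's profile
set -o noclobber     # any 'command > existing_file' in a script now fails
alias rm='rm -i'     # aliases leak into scripts started without -p

A script that begins with a plain "#!/bin/ksh" picks these settings up, which is how tools like patchadd broke; "#!/bin/ksh -p" avoids reading the user's ENV file.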
Stephen Harris lists@spuddy.org wrote:
On Fri, Apr 24, 2015 at 08:32:45AM -0400, Scott Robbins wrote:
Wasn't Solaris, which for awhile at least, was probably the most popular Unix, using ksh by default?
Solaris /bin/sh was a real real dumb version of the bourne shell. Solaris included /bin/ksh as part of the core distribution (ksh88 was a part of the SVr4 specification) and so many scripts were written with #!/bin/ksh at the start (including tools like "patchadd").
The basic system had very few scripts that required ksh.
Jörg
Stephen Harris lists@spuddy.org wrote:
Solaris /bin/sh was a real real dumb version of the bourne shell.
If you like to create portable scripts, you can do this by downloading:
https://sourceforge.net/projects/schilytools/files/
and using "osh" as a reference implementation. Osh is the old SunOS Bourne Shell with all bugs that people expect from a SVr4 Bourne Shell. It just has been rewritten to make it portable, e.g. by converting it from sbrk() to malloc() that makes it work on Cygwin. This code to convert to malloc() was written by Geoff Collyer for David Korn for converting the Bourne Shell based ksh. In 2012, I have rewritten that code to make it fit the SVr4 version of the Bourne Shell and a month ago, this was tested by American fuzzy lop and so I could fix a few left over bugs from that conversion.
If you use "osh", you get exactly the same behavior than from a SunOS /bin/sh up to Solaris 10 included.
The currently maintained Bourne Shell, installed as "sh" and "bosh", has many enhancements, including the following:
- A history editor using my original design from 1982, which predates ksh.
- enhanced aliases (much more than ksh implements); the original design for this implementation is also from 1982.
- rcfiles "/etc/sh.shrc" "$HOME/.shrc" for interactive shells
- the "repeat" builtin
- true / false builtin
- pushd / popd / dirs builtin && cd -
- support for read -r
- support for set -o
- support for sh -v -x instead of just set -vx
- support for umask -S
- Support for "for i; do ...." with a semicolon
- Report a syntax error for "echo foo |;"
- Bugfix for set -a; read VAR
- Evaluate var2=val2 var1=val1 left to right
- a much better man page
- Support for vfork() to speed things up.
Jörg
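As a rough illustration of a few of the items in the list above (purely a sketch typed from memory; consult the bosh man page for the exact behavior of each builtin):

$ bosh
$ set -o noclobber            # ksh/POSIX-style option switching
$ umask -S                    # symbolic umask output
$ pushd /tmp; dirs; popd      # directory stack builtins
$ cd -                        # return to the previous directory
$ read -r line < /etc/hosts   # -r: backslashes in the input are not special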
On Fri, Apr 24, 2015 at 7:02 AM, mark m.roth@5-cent.us wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
With sh being a link to bash in Centos I don't know if it would explode if the link was changed to something else, but at least the scripts we made on our own that run certain services could be changed and tested manually to another shell.
Are there other people who have experience in this and can provide interesting guidance?
Why go to that extreme if you tell a script on line 1 which shell to run it will do so. #!/bin/dash or what ever shell you want it to run in. I always do that to make sure that the script runs as expected, if you leave it out the script will run in whatever environment it currently is in.
I'm confused here, too, and this has been bugging me for some time: why sh, when almost 20 years ago, at places I've worked, production shell scripts went from sh to ksh. It was only after I got into the CentOS world in '09 that I saw all the sh scripts again.
The original ksh wasn't open source and might even have been an extra-cost item in AT&T unix. And the early emulations weren't always complete so you couldn't count on script portability. I generally thought it was safer to use perl for anything that took more than bourne shell syntax.
But as for efficiency, I'd think a script would have to do quite a lot of work to offset the need to page in different code for the interpreter. Any unix-like system should almost always have some instances of sh running and other instances of the same executable should run shared-text, where invoking a shell that isn't already loaded will have to load the code off the disk.
On 04/24/15 05:59, Les Mikesell wrote:
The original ksh wasn't open source and might even have been an extra-cost item in AT&T unix. And the early emulations weren't always complete so you couldn't count on script portability. I generally thought it was safer to use perl for anything that took more than bourne shell syntax.
You're right about the extra cost. In 1989 I bought the ksh source code from AT&T for $100.
Jack
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
On 4/24/2015 10:47 AM, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
A script with no shebang will run in the environment of the account running the script. If that account is root and root uses the bash shell then the script will run in the bash shell. If that account uses the korn shell then the script will run in a korn shell... etc. So it depends and Pete was more correct.
All the Sun systems I worked on (way in the past) had the bourne shell on the root account and I usually set my account up with a korn shell. On linux boxes both the root and personal account use the bash shell. Some systems will use a C shell, and, of course, other choices.
If you want a script to run under a specific shell you NEED the shebang line at the beginning. Assuming the bourne shell as a default is not reliable.
If you use good coding practices you will have that shebang line at the beginning of all scripts.
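For example, a minimal script that pins itself to one interpreter, no matter which shell the caller happens to be using, is just the following (the readlink line is the Linux way, mentioned later in this thread, of showing which interpreter actually ran it):

#!/bin/sh
# the shebang makes the kernel start /bin/sh for this file every time
readlink /proc/$$/exe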
On 04/24/2015 09:59 AM, Steve Lindemann wrote:
A script with no shebang will run in the environment of the account running the script.
Bad test on my part, apparently.
$ python
>>> import os
>>> os.execv('/home/gmessmer/test', ('test',))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 8] Exec format error
So a script with no shebang will fail when the shell calls exec(). If that's so, then starting the executable script with an interpreter is probably shell-defined. In other words, each shell might do something different to run a script that has no shebang. Most probably do default to trying itself as the interpreter first. Interesting.
On 4/24/2015 12:32 PM, Gordon Messmer wrote:
On 04/24/2015 09:59 AM, Steve Lindemann wrote:
A script with no shebang will run in the environment of the account running the script.
Bad test on my part, apparently.
$ python
>>> import os
>>> os.execv('/home/gmessmer/test', ('test',))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 8] Exec format error
So a script with no shebang will fail when the shell calls exec(). If that's so, then starting the executable script with an interpreter is probably shell-defined. In other words, each shell might do something different to run a script that has no shebang. Most probably do default to trying itself as the interpreter first. Interesting.
Is the file test chmod +x?
On 4/24/2015 9:47 AM, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
oh fun, just did some tests (using c6.latest). if you're in bash, ./script (sans shebang) runs it in bash. if you're in dash or csh, ./script runs it in sh. if you're in ksh, it runs it in ksh.
On Fri, April 24, 2015 12:04 pm, John R Pierce wrote:
On 4/24/2015 9:47 AM, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
oh fun, just did some tests (using c6.latest). if you're in bash, ./script (sans shebang) runs it in bash. if you're in dash or csh, ./script runs it in sh. if you're in ksh, it runs it in ksh.
Wow! Surprise ;-)
I just tested it on my FreeBSD workstation, and all works as expected (i.e. the script obeys shebang). Just in case, here is the contents of my test script:
########
#!/bin/sh
readlink /proc/$$/file
########
(Note that the "file" is because I'm using FreeBSD /proc; for Linux you may need to replace the line with something like:
readlink /proc/$$/exe
Now the fun part
in bash:
$ echo $0
bash
$ ./test
/bin/sh

in tcsh

% echo $0
tcsh
% ./test
/bin/sh

in zsh

% echo $0
zsh
% ./test
/bin/sh

But yet funnier thing:

$ bash ./test
/usr/local/bin/bash
$ tcsh ./test
/bin/tcsh
$ zsh ./test
/usr/local/bin/zsh
Well, no creepy surprises for me ! ;-)
(you can do the same on Linux of your choice and see if it behaves ;-)
Thanks. Valeri
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
I believe if you re-read a little more closely, the whole point of the exercise was not to have the #! at the top of the script.
On 04/24/2015 01:36 PM, Valeri Galtsev wrote:
On Fri, April 24, 2015 12:04 pm, John R Pierce wrote:
On 4/24/2015 9:47 AM, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
oh fun, just did some tests (using c6.latest). if you're in bash, ./script (sans shebang) runs it in bash. if you're in dash or csh, ./script runs it in sh. if you're in ksh, it runs it in ksh.
Wow! Surprise ;-)
I just tested it on my FreeBSD workstation, and all works as expected (i.e. the script obeys shebang). Just in case, here is the contents of my test script:
########
#!/bin/sh
readlink /proc/$$/file
########
(Note that the "file" is because I'm using FreeBSD /proc; for Linux you may need to replace the line with something like:
readlink /proc/$$/exe
Now the fun part
in bash:
$ echo $0
bash
$ ./test
/bin/sh

in tcsh

% echo $0
tcsh
% ./test
/bin/sh

in zsh

% echo $0
zsh
% ./test
/bin/sh

But yet funnier thing:

$ bash ./test
/usr/local/bin/bash
$ tcsh ./test
/bin/tcsh
$ zsh ./test
/usr/local/bin/zsh
Well, no creepy surprises for me ! ;-)
(you can do the same on Linux of your choice and see if it behaves ;-)
Thanks. Valeri
Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
########
#!/bin/sh
readlink /proc/$$/file
########
(Note that the "file" is because I'm using FreeBSD /proc; for Linux you may need to replace the line with something like:
readlink /proc/$$/exe
And on a platform that implements a correct procfs-2, you should look at:
/proc/self/path/a.out or /proc/$$/path/a.out
This seems to be unknown e.g. to people from the FSF, so many autoconf tests are wrong.
Jörg
On Fri, Apr 24, 2015 at 12:04 PM, John R Pierce pierce@hogranch.com wrote:
On 4/24/2015 9:47 AM, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
oh fun, just did some tests (using c6.latest). if you're in bash, ./script (sans shebang) runs it in bash. if you're in dash or csh, ./script runs it in sh. if you're in ksh, it runs it in ksh.
If I'm doing cron jobs or a top-level control script I usually just specify the interpreter explicitly like
  cd somewhere && sh some_script.sh
  cd somewhere_else && perl some_script.pl
so it works even if I forget to chmod it executable...
John R Pierce pierce@hogranch.com wrote:
oh fun, just did some tests (using c6.latest). if you're in bash, ./script (sans shebang) runs it in bash. if you're in dash or csh, ./script runs it in sh. if you're in ksh, it runs it in ksh.
See my other mail.
The scripts (unless marked) are run by the current interpreter. Csh runs unmarked scripts with "sh".
Jörg
On Fri, Apr 24, 2015 at 09:47:24AM -0700, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh
"It depends".
On older Unix-type systems which didn't understand #!, the shell itself did the work. At least csh did (sh didn't, necessarily). If the first character was a "#", then csh assumed it was a csh script; otherwise it assumed a sh script. That's why a lot of really old scripts began with ":".
and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
So on CentOS 5 with ksh93 as my shell
% cat x
echo ${.sh.version}
Note that it's a simple one liner with no #!
% ./x
Version AJM 93t+ 2010-06-21
That's ksh output!
Let's change my shell to "bash" instead
% bash
bash-3.2$ ./x
./x: line 1: ${.sh.version}: bad substitution
So now it's bash that's trying to interpret it!
So "it depends" is still true :-)
Basically, without #! there (which allows it to be exec'd) the shell determines how the file is interpreted.
Stephen Harris lists@spuddy.org wrote:
On Fri, Apr 24, 2015 at 09:47:24AM -0700, Gordon Messmer wrote:
On 04/24/2015 03:57 AM, Pete Geenhuizen wrote:
if you leave it out the script will run in whatever environment it currently is in.
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh
"It depends".
On older Unix-type systems which didn't understand #!, the shell itself did the work. At least csh did (sh didn't, necessarily). If the first character was a "#", then csh assumed it was a csh script; otherwise it assumed a sh script. That's why a lot of really old scripts began with ":".
As mentioned in the other mail, nearly all UNIX versions supported #! in the mid-1980s. The only exception was AT&T.
Even the first (realtime) UNIX clone UNOS added support for #! in 1985, but this support was not in the kernel but in the standard command interpreter.
Jörg
Gordon Messmer gordon.messmer@gmail.com wrote:
I'm reasonably certain that a script with no shebang will run with /bin/sh. I interpret your statement to mean that if a user is using ksh and enters the path to such a script, it would also run in ksh. That would only be true if you "sourced" the script from your shell.
The historical way is: there is only one shell and all scripts are Bourne Shell scripts.
Then csh came out, and some people really thought it was a good idea to write csh scripts. So someone decided to mark csh scripts with an initial "#". Note that at that time the Bourne Shell did not support "#" as a comment sign, and thus scripts with an initial "#" were illegal Bourne Shell scripts.
Later BSD came out with #!name, and all but AT&T adopted this.
In the mid 1980s, AT&T introduced an initial ":" to mark Bourne Shell scripts.
In 1989, with the beginning of SVr4, even AT&T introduced #!name, but the AT&T variant of the OS did not correct their scripts, so if you are on a UnixWare installation, you will have fun.
Unfortunately, POSIX cannot standardize #!name. This is because POSIX does not standardize PATHs and because the scripts marked that way would need to be scripts that call the POSIX shell. The official method to get a POSIX shell is to call this:
sh                   # to make sure you have a Bourne Shell alike
PATH=`getconf PATH`  # to get a POSIX compliant PATH
sh                   # to get a POSIX shell, which must be the first
                     # 'sh' in the POSIX PATH
/bin/sh definitely does not start a POSIX shell.....
Jörg
On 4/24/2015 3:07 AM, E.B. wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
perl or python are much better choices for complex scripts that need decent performance
On Fri, Apr 24, 2015 at 11:12 AM, John R Pierce pierce@hogranch.com wrote:
On 4/24/2015 3:07 AM, E.B. wrote:
I'm sure most people here know about Dash in Debian. Have there been discussions about providing a more efficient shell in Centos for use with heavily invoked non-interactive scripts?
perl or python are much better choices for complex scripts that need decent performance
Yes, the shell is great at launching other programs, redirecting i/o, creating pipes, expanding wildcard filenames and generally automating things with exactly the same syntax you'd use manually on the command line. But not so much at doing real computation itself. Even with perl if you have to do serious work you'll probably want modules that link in compiled C libraries.