I had a problem wherein running a script with an embedded ftp call would work in the login shell during integration testing and then fail with an unrecognized option error in cron during acceptance testing.
In solving this I discovered that Red Hat, and therefore CentOS, ships with at least two ftp clients: /usr/bin/ftp (which I thought I was using) and /usr/kerberos/bin/ftp (which I actually was using), even though I had no inkling of its existence.
My question is: why? Why are there two ftp clients provided in a single distribution, and why is the kerberos version effectively made the default, when one might reasonably assume that anything in /usr/bin/ is the standard (and by inference default) ftp client for the distribution? If the kerberos ftp is intended to be the default ftp client, then why is it not in, or at least linked to from, /usr/bin?
I just do not understand why these obscure distribution 'gotchas' are created in the first place, much less permitted to persist.
On 08/02/11 12:41 PM, James B. Byrne wrote:
I just do not understand why these obscure distribution 'gotchas' are created in the first place, much less permitted to persist.
you'd need to ask Red Hat that. It's their policy.
On Tue, Aug 2, 2011 at 2:41 PM, James B. Byrne byrnejb@harte-lyne.ca wrote:
My question is: why? Why are there two ftp clients provided in a single distribution, and why is the kerberos version effectively made the default, when one might reasonably assume that anything in /usr/bin/ is the standard (and by inference default) ftp client for the distribution? If the kerberos ftp is intended to be the default ftp client, then why is it not in, or at least linked to from, /usr/bin?
If you look in /etc/profile.d/ you'll see the krb5-workstation.sh script.
It tests the path to see if kerberos is in it, and if not, it prepends the path. This is why it is the first ftp used/found.
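A minimal sketch of that PATH-prepending logic (the exact contents of krb5-workstation.sh vary between releases; this just illustrates the effect):

```shell
# Illustration of how /etc/profile.d/krb5-workstation.sh wins the PATH race:
# if /usr/kerberos/bin is not already in PATH, prepend it, so its ftp
# shadows /usr/bin/ftp in every login shell that sources /etc/profile.
PATH=/usr/local/bin:/usr/bin:/bin
case ":$PATH:" in
  *:/usr/kerberos/bin:*) ;;               # already present, leave PATH alone
  *) PATH=/usr/kerberos/bin:$PATH ;;      # prepend, shadowing /usr/bin/ftp
esac
echo "$PATH"
```

In a login shell `type -a ftp` would then list /usr/kerberos/bin/ftp first, while cron jobs get a minimal PATH and resolve /usr/bin/ftp instead — which is exactly the login-shell-vs-cron discrepancy the original poster hit.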
As to why, dunno. Ask rh.
At Tue, 2 Aug 2011 15:41:52 -0400 (EDT) CentOS mailing list centos@centos.org wrote:
I had a problem wherein running a script with an embedded ftp call would work in the login shell during integration testing and then fail with an unrecognized option error in cron during acceptance testing.
In solving this I discovered that Red Hat, and therefore CentOS, ships with at least two ftp clients: /usr/bin/ftp (which I thought I was using) and /usr/kerberos/bin/ftp (which I actually was using), even though I had no inkling of its existence.
My question is: why? Why are there two ftp clients provided in a single distribution, and why is the kerberos version effectively made the default, when one might reasonably assume that anything in /usr/bin/ is the standard (and by inference default) ftp client for the distribution? If the kerberos ftp is intended to be the default ftp client, then why is it not in, or at least linked to from, /usr/bin?
I just do not understand why these obscure distribution 'gotchas' are created in the first place, much less permitted to persist.
Does this give you a clue:
gollum.deepsoft.com% rpm -qf /usr/kerberos/bin/ftp /usr/bin/ftp
krb5-workstation-1.6.1-55.el5_6.2
ftp-0.17-35.el5
On 02/08/2011 3:41 PM, James B. Byrne wrote:
I had a problem wherein running a script with an embedded ftp call would work in the login shell during integration testing and then fail with an unrecognized option error in cron during acceptance testing.
In solving this I discovered that Red Hat, and therefore CentOS, ships with at least two ftp clients: /usr/bin/ftp (which I thought I was using) and /usr/kerberos/bin/ftp (which I actually was using), even though I had no inkling of its existence.
My question is: why? Why are there two ftp clients provided in a single distribution, and why is the kerberos version effectively made the default, when one might reasonably assume that anything in /usr/bin/ is the standard (and by inference default) ftp client for the distribution? If the kerberos ftp is intended to be the default ftp client, then why is it not in, or at least linked to from, /usr/bin?
I just do not understand why these obscure distribution 'gotchas' are created in the first place, much less permitted to persist.
What I'm left wondering is:
1) Why you are relying on PATH expansion for this from something as critical as a cron job. It is good sysadmin practice to specify explicit paths for situations like this rather than to worry about whether or not there is a good or valid reason for there being 2 ftp clients installed on the system.
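As a sketch of that practice, a cron entry can set PATH itself or call the client by absolute path (the script name and addresses here are hypothetical):

```
# Fragment of a crontab: make the environment explicit so the job behaves
# the same under cron as it did in the interactive login shell.
PATH=/bin:/usr/bin
MAILTO=admin@example.com
# Or skip PATH lookup entirely and name the exact binary that was tested:
15 2 * * *  /usr/local/bin/nightly-upload.sh   # script calls /usr/bin/ftp explicitly
```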
2) Why you are using an ftp client rather than something like wget or curl instead, both of which are far more powerful and script friendly.
Presumably the kerberos client only gets installed if kerberos infrastructure is installed, and the standard ftp client is probably part of the default install as people expect it to be there perhaps. Putting in unnecessary logic to only install one ftp client and not the other doesn't make much sense either.
Focusing on finding the best way to develop solid solutions to administrative scripting problems like this is IMHO much more important than trying to know or guess what the rationale is behind there being 2 ftp binaries on the system.
Anyhow, if you have expectations of specific software being installed or not installed on your systems the only reliable and reproducible way to do that is to use kickstart with a minimal install and build up the package list to include only the things you want to have installed.
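A sketch of what that looks like in practice — a kickstart %packages section can pin down exactly which clients end up installed (package names are from the EL5 era; verify them against your repos):

```
# Hypothetical kickstart %packages fragment: minimal base, the classic ftp
# client explicitly included, krb5-workstation excluded, so only
# /usr/bin/ftp ends up on the system.
%packages --nobase
@core
ftp
-krb5-workstation
```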
HTH
What I'm left wondering is:
- Why you are relying on PATH expansion for this from something as critical as a cron job. It is good sysadmin practice to specify explicit paths for situations like this rather than to worry about whether or not there is a good or valid reason for there being 2 ftp clients installed on the system.
That was precisely my thought. I have often noticed that people find it easier to blame others rather than to question and rethink their own actions...
Would that be the same as:
why are there multiple desktops: kde, gnome
why are there multiple browsers: firefox, konqueror
why are there multiple text editors: vim, joe, nano
why are there multiple mail distribution tools: sendmail, exim, postfix
why why why
Chris
--- On Tue, 8/2/11, Miguel Medalha miguelmedalha@sapo.pt wrote:
From: Miguel Medalha miguelmedalha@sapo.pt
Subject: Re: [CentOS] Two ftp clients? Why?
To: "CentOS mailing list" centos@centos.org
Date: Tuesday, August 2, 2011, 4:01 PM
What I'm left wondering is:
- Why you are relying on PATH expansion for this from something as critical as a cron job. It is good sysadmin practice to specify explicit paths for situations like this rather than to worry about whether or not there is a good or valid reason for there being 2 ftp clients installed on the system.
That was precisely my thought. I have often noticed that people find it easier to blame others rather than to question and rethink their own actions...
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Chris Weisiger wrote:
Would that be the same as: why are there multiple desktops: kde gnome
fvwm, icewm, busybox, etc....
why are there multiple browsers: firefox, konqueror
why are there multiple text editors: vim, joe, nano
ROTFLMAO! You forgot emacs (take it to alt.religion.editors)
why are there multiple mail distribution tools: sendmail, exim, postfix
fetchmail, too.
It's the Unix Way: not "how can I do this?", but "of all the ways I can do it, which would I prefer?"
mark
On 8/2/2011 4:25 PM, m.roth@5-cent.us wrote:
Chris Weisiger wrote:
Would that be the same as: why are there multiple desktops: kde gnome
fvwm, icewm, busybox, etc....
why are there multiple browsers: firefox, konqueror
why are there multiple text editors: vim, joe, nano
ROTFLMAO! You forgot emacs (take it to alt.religion.editors)
why are there multiple mail distribution tools: sendmail, exim, postfix
fetchmail, too.
It's the Unix Way: not "how can I do this?", but "of all the ways I can do it, which would I prefer?"
No, it's 'how can I repeat old mistakes' instead of learning from them or building on them.
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
On Tue, Aug 2, 2011 at 5:41 PM, Les Mikesell lesmikesell@gmail.com wrote:
No, it's 'how can I repeat old mistakes' instead of learning from them or building on them.
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
Les Mikesell
While I understand the sentiment of "why use old stuff", this is still a pretty ridiculous statement. It takes not even 10 seconds to think of situations where one would need to, such as interfacing with *paying* clients, etc...
Instead of suggesting alternate technologies, it should be suggested to not use an ftp client at all and instead use a scripting language, such as perl or python, that has libraries meant for talking to these protocols. Their man pages pretty much show you how even if you don't know the language.
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
-☙ Brian Mathis ❧-
On 8/2/2011 6:06 PM, Brian Mathis wrote:
On Tue, Aug 2, 2011 at 5:41 PM, Les Mikesell lesmikesell@gmail.com wrote:
No, it's 'how can I repeat old mistakes' instead of learning from them or building on them.
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
Les Mikesell
While I understand the sentiment of "why use old stuff", this is still a pretty ridiculous statement. It takes not even 10 seconds to think of situations where one would need to, such as interfacing with *paying* clients, etc...
Yes, if you don't control both ends or you are talking to an embedded device that can't do anything better...
Instead of suggesting alternate technologies, it should be suggested to not use an ftp client at all and instead use a scripting language, such as perl or python, that has libraries meant for talking to these protocols. Their man pages pretty much show you how even if you don't know the language.
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
There are reasonable clients for automating ftp (curl, wget, ncftp, lftp, etc.). But they can't match rsync for most things if the goal is to move files around, update them in place, etc. And if you have to traverse firewalls, ftp is about the worst possible protocol to use.
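One concrete difference worth spelling out: a here-document piped into the classic ftp client exits 0 whether or not the transfer worked, while clients like curl report failure through their exit status. A minimal sketch — the upload URL in the comment is hypothetical, and the runnable part uses a local file:// URL so no server is needed:

```shell
# With the classic client:  ftp -n host <<EOF ... EOF   always "succeeds".
# With curl, --fail turns protocol-level errors into a nonzero exit status,
# e.g. (hypothetical):
#   curl --fail -T report.csv ftp://user:pass@ftp.example.com/incoming/
payload=$(mktemp)
echo "report data" > "$payload"
if curl --fail --silent --show-error "file://$payload" -o /dev/null; then
    echo "transfer ok"
else
    echo "transfer failed with status $?"
fi
rm -f "$payload"
```

The `if`/`else` branch is the whole point: the script can retry, alert, or abort instead of silently pretending the transfer happened.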
On Tue, 2 Aug 2011, Les Mikesell wrote:
*snip*
While I understand the sentiment of "why use old stuff", this is still a pretty ridiculous statement. It takes not even 10 seconds to think of situations where one would need to, such as interfacing with *paying* clients, etc...
Yes, if you don't control both ends or you are talking to an embedded device that can't do anything better...
*snip*
There are reasonable clients for automating ftp (curl, wget, ncftp, lftp, etc.). But they can't match rsync for most things if the goal is to move files around, update them in place, etc. And if you have to traverse firewalls, ftp is about the worst possible protocol to use.
I have Proftpd running on my main centos machine. I use gFTP on centos to connect to this machine over my LAN. This allows me to move files between the laptop and the main machine. All my external ports are blocked, and I use ftp as I find the GUI easy and intuitive to use. I would not consider using a command-line ftp client.
On my other laptop running Vista I use WinSCP, which is a free GUI ftp client, that allows me to move files from the centos machine to the Vista laptop.
Having said that, I can also use my USB flash drive to transfer some files between those laptops and the machine running centos. But it's quicker for me to use ftp over the LAN.
One example of using ftp would be me doing some experimental test programs on my (centos) laptop, then ftp'ing to the centos machine and backing up those laptop files to my main centos box's HDD. That way, if the HDD on the lappy goes down, I still have some decent backups on another machine :-)
Kind Regards,
Keith Roberts
----------------------------------------------------------------- Websites: http://www.karsites.net http://www.php-debuggers.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net] -----------------------------------------------------------------
On 8/3/2011 12:21 AM, Keith Roberts wrote:
There are reasonable clients for automating ftp (curl, wget, ncftp, lftp, etc.). But they can't match rsync for most things if the goal is to move files around, update them in place, etc. And if you have to traverse firewalls, ftp is about the worst possible protocol to use.
I have Proftpd running on my main centos machine. I use gFTP on centos to connect to this machine over my LAN. This allows me to move files between the laptop and the main machine. All my external ports are blocked, and I use ftp as I find the GUI easy and intuitive to use. I would not consider using a commandline ftp client.
Personally, I find it quicker and easier to use command-line scp or rsync (essentially the same arguments as cp) when moving things around, unless I've forgotten the file name, and they'll work anywhere ssh works. But if you want a GUI, the gnome file manager already knows about ssh, windows shares, and ftp. Try File/Open Location and type in sftp://user@host:/path and it will connect over ssh as the specified user and you can drag/drop or copy/paste among windows.
On my other laptop running Vista I use WinSCP, which is a free GUI ftp client, that allows me to move files from the centos machine to the Vista laptop.
Having said that, I can also use my USB flash drive to transfer some files between those laptops and the machine running centos. But it's quicker for me to use ftp over the LAN.
Do yourself a favor and set up a common nfs export and samba share from a stable linux box on the network. Then mount/map that into everything else. That gives you a common transfer point that works with everything directly (i.e. you can download from one machine, execute or maybe burn an iso to a DVD from another without extra transfer steps).
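A sketch of what that common share might look like (host names, paths, and the subnet are made up; the /etc/exports and smb.conf syntax is standard but worth checking against the man pages):

```
# /etc/exports on the central box, then run: exportfs -ra
/srv/share  192.168.1.0/24(rw,sync)

# smb.conf stanza exporting the same directory to the Windows machines
[share]
   path = /srv/share
   read only = no

# on each Linux client:
#   mount -t nfs fileserver:/srv/share /mnt/share
```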
One example of using ftp would be me doing some experimental test programs on my (centos) laptop, then ftp'ing to the centos machine and backing up those laptop files to my main centos box's HDD. That way, if the HDD on the lappy goes down, I still have some decent backups on another machine :-)
If you do much of this, set up subversion or a similar version control system on a stable, backed-up server so it's just a simple 'commit' to save changes and you'll be able to retrieve any committed version, not just the last copy. For a generic backup system, look at backuppc which can use rsync as the transport and pools all duplicate files to keep more online than you would expect. It's not that there is anything wrong with ftp, but it is very limited compared to better alternatives.
On 8/2/2011 11:21 PM, Keith Roberts wrote:
Having said that, I can also use my USB flash drive to transfer some files between those laptops and the machine running centos. But it's quicker for me to use ftp over the LAN.
Even faster is Dropbox. If you keep the frequently-synched files in your Dropbox, you don't even have an explicit copying step. It just happens, over the LAN if possible, via the Cloud otherwise. For files sufficiently small, it happens during the time it takes you to switch machines.
On Wed, 3 Aug 2011, Warren Young wrote:
To: CentOS mailing list centos@centos.org
From: Warren Young warren@etr-usa.com
Subject: Re: [CentOS] Two ftp clients? Why?
On 8/2/2011 11:21 PM, Keith Roberts wrote:
Having said that, I can also use my USB flash drive to transfer some files between those laptops and the machine running centos. But it's quicker for me to use ftp over the LAN.
Even faster is Dropbox. If you keep the frequently-synched files in your Dropbox, you don't even have an explicit copying step. It just happens, over the LAN if possible, via the Cloud otherwise. For files sufficiently small, it happens during the time it takes you to switch machines.
Thanks for that suggestion Warren. I'll take a look at that ASAP.
Keith
On Tuesday, August 02, 2011 04:06:53 PM Brian Mathis wrote:
Instead of suggesting alternate technologies,
Ok, so this implies that suggesting alternatives is bad...
it should be suggested to not use an ftp client at all and instead use a scripting language, such as perl or python, that has libraries meant for talking to these protocols. Their man pages pretty much show you how even if you don't know the language.
Wait - isn't that an alternative technology?!?
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
/bin/sh is an "old method". TCP is pretty ancient, as well. For that matter, UNIX is REALLY ancient. Yet somehow, they are not only still useful, but highly relevant. Wheels are also old technology!
There are often situations that have special needs that alternatives don't accommodate. For example, a general-purpose tool (such as tcp wrappers in a scripting environment) often doesn't give you the fine level of control that you may need for special needs. Such as, for instance, a web-based product that adds an optional http header to indicate an error condition. Tools like wget or curl don't always allow access to the options needed to access this, and so "sending stdout thru a pipe to an FTP client" might be preferable.
I've been around the block long enough to know that those who are most certain they have the right answer right away are usually those least likely to have it. Science backs this conclusion up; it's called the Dunning-Kruger effect.
On Tue, Aug 2, 2011 at 10:19 PM, Benjamin Smith lists@benjamindsmith.com wrote:
On Tuesday, August 02, 2011 04:06:53 PM Brian Mathis wrote:
Instead of suggesting alternate technologies,
Ok, so this implies that suggesting alternatives is bad...
it should be suggested to not use an ftp client at all and instead use a scripting language, such as perl or python, that has libraries meant for talking to these protocols. Their man pages pretty much show you how even if you don't know the language.
Wait - isn't that an alternative technology?!?
No it's not, and you're making a stupid argument. Clearly there is a difference between using a different client versus changing the entire protocol stack across all systems it's being used for. Using a better client mechanism involves maybe an hour or so worth of work, while changing the entire protocol you're using requires changing every service on every server in every company you might be interfacing with. One of those is easy to do, the other one is likely impossible.
I find it strange and annoying that so many times the answers to questions like the OP's so often and so clearly miss the mark, as if no one here understands what's actually involved in implementing a new protocol stack across an enterprise or between enterprises.
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
/bin/sh is an "old method". TCP is pretty ancient, as well. For that matter, UNIX is REALLY ancient. Yet somehow, they are not only still useful, but highly relevant. Wheels are also old technology!
See above, re: stupid argument. If your objection is to the use of the word "old" as opposed to something like "error prone", please perform 's/old/error prone/g' in your head and save us the pixels. P.S. Something becomes "old" when it's been replaced by a newer, better way of doing things, not simply because of age.
Redirecting commands into an ftp client (and, btw, I don't know if the OP is doing this, but it's still amazingly common) is a provably bad "old" method of doing things. You cannot deal with error conditions or anything else that might come up. Using a scripting language/library allows you to deal with these obvious problems.
There are often situations that have special needs that alternatives don't accommodate. For example, a general-purpose tool (such as tcp wrappers in a scripting environment) often doesn't give you the fine level of control that you may need for special needs. Such as, for instance, a web-based product that adds an optional http header to indicate an error condition. Tools like wget or curl don't always allow access to the options needed to access this, and so "sending stdout thru a pipe to an FTP client" might be preferable.
I've been around the block long enough to know that those who are most certain they have the right answer right away are usually those least likely to have it. Science backs this conclusion up, it's called the Dunning-Kruger effect.
-☙ Brian Mathis ❧-
On 8/3/2011 10:30 AM, Brian Mathis wrote:
to not use an ftp client at all and instead use a scripting language, such as perl or python, that has libraries meant for talking to these protocols. Their man pages pretty much show you how even if you don't know the language.
Wait - isn't that an alternative technology?!?
No it's not, and you're making a stupid argument. Clearly there is a difference between using a different client versus changing the entire protocol stack across all systems it's being used for. Using a better client mechanism involves maybe an hour or so worth of work, while changing the entire protocol you're using requires changing every service on every server in every company you might be interfacing with. One of those is easy to do, the other one is likely impossible.
That might be true, or it might not. If you already have an ssh service running, you don't have to set up something else to run rsync or scp (or sftp, I think...).
I find it strange and annoying that so many times the answers to questions like the OP's so often and so clearly miss the mark, as if no one here understands what's actually involved in implementing a new protocol stack across an enterprise or between enterprises.
Which is why most places end up running stuff over ssh as a transport.
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
/bin/sh is an "old method". TCP is pretty ancient, as well. For that matter, UNIX is REALLY ancient. Yet somehow, they are not only still useful, but highly relevant. Wheels are also old technology!
Which is why people like unix/shells/pipes, where every well designed program includes the ability to use all the others.
Redirecting commands into an ftp client (and, btw, I don't know if the OP is doing this, but it's still amazingly common) is a provably bad "old" method of doing things. You cannot deal with error conditions or anything else that might come up. Using a scripting language/library allows you to deal with these obvious problems.
As does running a program that is better-designed to do the job.
On Wednesday, August 03, 2011 08:30:02 AM Brian Mathis wrote:
Wait - isn't that an alternative technology?!?
No it's not, and you're making a stupid argument. Clearly there is a difference between using a different client versus changing the entire protocol stack across all systems it's being used for. Using a better client mechanism involves maybe an hour or so worth of work, while changing the entire protocol you're using requires changing every service on every server in every company you might be interfacing with. One of those is easy to do, the other one is likely impossible.
As you make the point later, perl is a different technology than /usr/bin/ftp. Both can use the same protocol.
I find it strange and annoying that so many times the answers to questions like the OP's so often and so clearly miss the mark, as if no one here understands what's actually involved in implementing a new protocol stack across an enterprise or between enterprises.
We're all doing something different, you know? Some of us have to deal with arcane "requirements" written by some midlevel bureaucrat. I prefer using sftp, scp, or post/https for secure file transfers. More than once I've been forced to use FTP for "security reasons", even after I try to explain otherwise.
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
/bin/sh is an "old method". TCP is pretty ancient, as well. For that matter, UNIX is REALLY ancient. Yet somehow, they are not only still useful, but highly relevant. Wheels are also old technology!
See above, re: stupid argument. If your objection is to the use of the word "old" as opposed to something like "error prone", please perform 's/old/error prone/g' in your head and save us the pixels. P.S. Something becomes "old" when it's been replaced by a newer, better way of doing things, not simply because of age.
I see this nowhere in the standard definition for "old". http://dictionary.reference.com/browse/old
Redirecting commands into an ftp client (and, btw, I don't know if the OP is doing this, but it's still amazingly common) is a provably bad "old" method of doing things. You cannot deal with error conditions or anything else that might come up. Using a scripting language/library allows you to deal with these obvious problems.
You might consider becoming familiar with expect, perhaps? # yum install expect;
I've been around the block long enough to know that those who are most certain they have the right answer right away are usually those least likely to have it. Science backs this conclusion up, it's called the Dunning-Kruger effect.
Strange: no comment here?
Please fix the fonts in your email client. I have no problem with HTML email, but it's coming across as Times New Roman at 6pt size.
On Wed, Aug 3, 2011 at 3:15 PM, Benjamin Smith lists@benjamindsmith.com wrote:
On Wednesday, August 03, 2011 08:30:02 AM Brian Mathis wrote:
Wait - isn't that an alternative technology?!?
No it's not, and you're making a stupid argument. Clearly there is a difference between using a different client versus changing the entire protocol stack across all systems it's being used for. Using a better client mechanism involves maybe an hour or so worth of work, while changing the entire protocol you're using requires changing every service on every server in every company you might be interfacing with. One of those is easy to do, the other one is likely impossible.
As you make the point later, perl is a different technology than /usr/bin/ftp. Both can use the same protocol.
You really want to keep this ridiculous and utterly pedantic argument going? OK.
Obviously using a different client method is, oh my god, *different*. Technically, every time you run the same script, different electrons would be used, so that's different too. Many of the other replies ask "why not use this or that other protocol instead". Clearly this is the context I am referring to here.
Please have conversations at a human level. We are not computers trying to agree on some exact definition of a word before we can continue with some protocol negotiation. The network protocol implemented across a bunch of servers is different than a single client used to access them, and this is clearly what I'm referring to.
I find it strange and annoying that so many times the answers to questions like the OP's so often and so clearly miss the mark, as if no one here understands what's actually involved in implementing a new protocol stack across an enterprise or between enterprises.
We're all doing something different, you know? Some of us have to deal with arcane "requirements" written by some midlevel bureaucrat. I prefer using sftp, scp, or post/https for secure file transfers. More than once I've been forced to use FTP for "security reasons", even after I try to explain otherwise.
My point is that this happens all the time. There are frequently responses to questions that flippantly suggest something like "just change your whole universe because doing it this other way is marginally better". The poster didn't ask about that, and often knows about the other options. But as you said, everyone has different requirements, so the responses of "just change everything" are worse than noise; they completely derail the conversation (as exemplified by Les's insistence on beating the rsync drum into the ground).
The questionable thing is not using entrenched protocols, but using old methods like redirecting ftp commands via STDIN into a client to control it.
/bin/sh is an "old method". TCP is pretty ancient, as well. For that matter, UNIX is REALLY ancient. Yet somehow, they are not only still useful, but highly relevant. Wheels are also old technology!
See above, re: stupid argument. If your objection is to the use of the word "old" as opposed to something like "error prone", please perform 's/old/error prone/g' in your head and save us the pixels. P.S. Something becomes "old" when it's been replaced by a newer, better way of doing things, not simply because of age.
I see this nowhere in the standard definition for "old". http://dictionary.reference.com/browse/old
I once again refer you to, re: stupid argument
Redirecting commands into an ftp client (and, btw, I don't know if the OP is doing this, but it's still amazingly common) is a provably bad "old" method of doing things. You cannot deal with error conditions or anything else that might come up. Using a scripting language/library allows you to deal with these obvious problems.
You might consider becoming familiar with expect, perhaps? # yum install expect;
I have used expect and it's only good as a last resort when you have no other options. It's only marginally better than having a monkey typing on the keyboard, and reacts just about as well to errors. Using an actual client library gives you full control over both functions and error handling, and generally takes much less effort than expect to get working right. It's still better than redirecting from stdin.
I've been around the block long enough to know that those who are most certain they have the right answer right away are usually those least likely to have it. Science backs this conclusion up, it's called the Dunning-Kruger effect.
Strange: no comment here?
I was going to throw it into the "stupid argument" category, but decided to save the pixels. I'll also raise you an "irrelevant", since this is not about certainty over "the right answer", it's about the flexibility of the tools one uses to reach the answer. The ability to discuss using better tools at all would seem to invalidate the "incompetence denies them the meta-cognitive ability to recognize their mistakes" tenet required for that effect to be applicable here.
-☙ Brian Mathis ❧-
On 8/3/2011 2:15 PM, Benjamin Smith wrote:
P.S. Something becomes "old" when it's been replaced by a newer, better way of doing things, not simply because of age.
I see this nowhere in the standard definition for "old".
You should see 'obsolete, dated, outdated, stale, outmoded', etc. as synonyms if you look in the right places.
Or there's this from urbandictionary.com: "You begin to be old when people who deliver pizzas are systematically younger than you." or "Old is when its more expensive to buy the candles, than the cake to put it on."
Both of those should apply to ftp (and probably a few of us...).
I've been around the block long enough to know that those who are most certain they have the right answer right away are usually those least likely to have it. Science backs this conclusion up, it's called the Dunning-Kruger effect.
Strange: no comment here?
Pot, kettle? Seems like something with built-in recursion to the person bringing it up.
On Tue, 2011-08-02 at 16:41 -0500, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
having grown-up on computers before M$ existed, I still find FTP very easy, quick and efficient.
Must have a play with rsync though.
On 08/02/11 8:32 PM, Always Learning wrote:
having grown-up on computers before M$ existed, I still find FTP very easy, quick and efficient.
the FTP protocol has two fundamental problems. First, it's a plaintext protocol that uses plaintext user/password authentication. Second, it creates dynamic sockets on the fly for file transfer, which makes tunneling it through firewalls problematic. Further, there are two different methods of socket creation, known as active and passive, each of which requires special-case handling in firewalls at either the client or the server side; the method is chosen by the client, and the server has no choice but to support what the client requests.
if you have to use ftp to transfer files, for instance with legacy embedded systems, and you're scripting this, check out lftp; it's far more script-friendly than the old legacy FTP client.
me, I use scp/sftp for authenticated remote file transfers over the internet, and mostly use NFS for internal LAN transfers. rsync is useful for incremental updates of a large set of files. For anonymous file serving, I prefer to use http rather than FTP; it's just as fast at the raw transfer, and it's stateless, so there's less overhead on the server.
as an example of lftp, this is my cron job for updating my internal centos mirror
/usr/bin/lftp -c 'open ftp://mirrors.kernel.org/pub/ && lcd /export/mirror && mirror -c -x ia64 -x s390 -x s390x -x alpha -x SRPMS centos'
(note, I'm not mirroring itanium, system/390, or alpha, nor the SRPMs. My local mirror is in /export/mirror/centos on that system, which is available via both NFS and http on my local network...).
On 8/2/11 10:32 PM, Always Learning wrote:
On Tue, 2011-08-02 at 16:41 -0500, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
having grown-up on computers before M$ existed, I still find FTP very easy, quick and efficient.
Neither rsync nor http have anything to do with M$, they are just well designed protocols. Rsync is specialized for copying files and directory trees, is normally used over ssh, and doesn't need any extra server-side setup other than ssh keys if you want it to work without passwords. Http is very general and the setup can be as simple or complicated as you want - and it is well understood by firewalls and proxies.
Must have a play with rsync though.
If ssh works between systems, it will 'just work'.
On Wed, Aug 3, 2011 at 12:24 AM, Les Mikesell lesmikesell@gmail.com wrote:
On 8/2/11 10:32 PM, Always Learning wrote:
On Tue, 2011-08-02 at 16:41 -0500, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
having grown-up on computers before M$ existed, I still find FTP very easy, quick and efficient.
Neither rsync nor http have anything to do with M$, they are just well designed protocols. Rsync is specialized for copying files and directory trees, is normally used over ssh, and doesn't need any extra server-side setup other than ssh keys if you want it to work without passwords. Http is very general and the setup can be as simple or complicated as you want - and it is well understood by firewalls and proxies.
Rsync barely works well on Windows, and certainly not without some sort of Cygwin involved. It works fine if you have a few files in a folder, but once you start dealing with directory trees, you run into many issues with folder redirections, loops, and junction points.
As for not needing extra server-side setup, you're talking about Windows here, which most definitely *does* need server-side setup for both ssh and rsync. It does not "just work" at all. Once again, you're talking about Cygwin, which is great but not exactly easy to deal with nor something standard.
Must have a play with rsync though.
If ssh works between systems, it will 'just work'.
-☙ Brian Mathis ❧-
On 8/3/2011 10:41 AM, Brian Mathis wrote:
having grown-up on computers before M$ existed, I still find FTP very easy, quick and efficient.
Neither rsync nor http have anything to do with M$, they are just well designed protocols. Rsync is specialized for copying files and directory trees, is normally used over ssh, and doesn't need any extra server-side setup other than ssh keys if you want it to work without passwords. Http is very general and the setup can be as simple or complicated as you want - and it is well understood by firewalls and proxies.
Rsync barely works well on Windows
So what does???
, and certainly not without some sort of Cygwin involved.
Cygwin is 'just a .dll' as far as windows is concerned. I think you can find bundled versions of just the rsync, ssh, and sshd executables with the cygwin dll, maybe even wrapped in a windows installer if you have something against the full cygwin setup.
It works fine if you have a few files in a folder, but once you start dealing with directory trees, you run into many issues with folder redirections, loops, and junction points.
There are people using it for backups in combination with backuppc. They seem to think it works better than native windows file shares with smbtar which is the 'serverless' option.
Are you saying that ftp knows anything about the possible weirdness of junction points in NTFS?
As for not needing extra server-side setup, you're talking about Windows here, which most definitely *does* need server-side setup for both ssh and rsync. It does not "just work" at all. Once again, you're talking about Cygwin, which is great but not exactly easy to deal with nor something standard.
I have too much history with frequent compromises of windows ftp back in NT and w2k days in spite of best security practices to ever consider running it on a public facing system again, but maybe things are better now...
For internal use, smbclient is OK for an occasional file copy, or using a UNC path for windows->windows.
On Wed, 2011-08-03 at 11:11 -0500, Les Mikesell wrote:
I have too much history with frequent compromises of windows ftp back in NT and w2k days in spite of best security practices to ever consider running it on a public facing system again, but maybe things are better now...
Why oh why are sensible people using Windoze ??? Its too much aggro !
On 8/3/2011 10:11 AM, Les Mikesell wrote:
Rsync barely works well on Windows
So what does???
Please, can we drop the petty advocacy?
You're undoubtedly quite aware that there's a hell of a lot of software that runs best on Windows. The fact that there's a lot of low-quality ports from *ix that run poorly does not reflect on Windows.
Some cases in point: Apache, MySQL, and Perl. All of them started out on *ix, and limped by with half-hearted Windows ports for years and years. All now run very well on Windows, because many concerned people put in the concerted effort to make true native ports happen.
As for rsync, there are a bunch of problems.
One is that the source is highly unportable. It heavily uses forks and pipes and such which have no direct equivalent under Windows. All of that would have to be abstracted away as they've done in the first-quality ports mentioned above.
Above the API level, you have further problems, like POSIXland assumptions that break down under Windows: the expected existence of separate ssh binaries everywhere, the usefulness of the HOME environment variable, the value of config files in ~/.ssh. You'd have to replace all that with Windowsisms to make a proper native Windows port.
Until then, you're forced to build and use it under Cygwin, which brings its own problems: heavyweight native API wrappers, its own bugs[*], incomplete POSIX semantics despite best efforts, etc. (* Years ago, there was a really nasty bug in Cygwin signal handling that caused it to hang hard during transfers. This was well known for years, and went undiagnosed in large part because of attitudes like yours. "Well, it's Windows, what do you expect?") BTW, I say this as a long-time Cygwin contributor and supporter.
Bottom line: no, I would not recommend rsync to a Windows user. It's fine today for those of us who already use Cygwin for other reasons, but to outsiders, it's a mess.
On 8/3/2011 11:48 AM, Warren Young wrote:
On 8/3/2011 10:11 AM, Les Mikesell wrote:
Rsync barely works well on Windows
So what does???
Please, can we drop the petty advocacy?
That was only partly petty - I'm interested in an answer to the question if there is one, and I don't think ftp is that great.
As for rsync, there are a bunch of problems.
One is that the source is highly unportable. It heavily uses forks and pipes and such which have no direct equivalent under Windows. All of that would have to be abstracted away as they've done in the first-quality ports mentioned above.
Yes, I've been surprised that no one has done a native port. Hmmm, I wonder how hard it would be to adapt the rsync-in-perl flavor that is built into backuppc on top of the first-class ports of strawberry or active perl?
Until then, you're forced to build and use it under Cygwin, which brings its own problems: heavyweight native API wrappers, its own bugs[*], incomplete POSIX semantics despite best efforts, etc. (* Years ago, there was a really nasty bug in Cygwin signal handling that caused it to hang hard during transfers. This was well known for years, and went undiagnosed in large part because of attitudes like yours. "Well, it's Windows, what do you expect?") BTW, I say this as a long-time Cygwin contributor and supporter.
Yes, I'm aware of the bug in the versions before the cygwin 1.7 release. It didn't affect rsync-as-a-daemon or initiating an rsync command over ssh from the windows side - only rsync started under sshd. But old/fixed bugs aren't particularly interesting (even though I did point them out about windows ftp because I saw them as something generic and predictable while long-standing cygwin bugs have been rare...).
Bottom line: no, I would not recommend rsync to a Windows user. It's fine today for those of us who already use Cygwin for other reasons, but to outsiders, it's a mess.
But what is the easier/better alternative for cross platform use?
Les Mikesell wrote:
On 8/3/2011 11:48 AM, Warren Young wrote:
On 8/3/2011 10:11 AM, Les Mikesell wrote:
Rsync barely works well on Windows
So what does???
<snip>
One is that the source is highly unportable. It heavily uses forks and pipes and such which have no direct equivalent under Windows. All of that would have to be abstracted away as they've done in the first-quality ports mentioned above.
Yes, I've been surprised that no one has done a native port. Hmmm, I
<snip> Here's a question: back around '96 or '97, M$ announced that they'd made NT POSIX compatible, including a Korn shell*. Is that anywhere inside Windows, still? If so, I'd think it would allow a lot....
mark
* Amusing related true story: M$ honcho at a convention, and giving a long spiel about their new Korn shell. An older man got up in the rear of the hall, and responded that it wasn't really a full implementation. The M$ honcho started to argue with him... until someone else got up, and pointed out that the honcho was arguing with Dr. Korn.
M$ honcho had the grace to look embarrassed.
On 08/03/11 10:35 AM, m.roth@5-cent.us wrote:
Here's a question: back around '96 or '97, M$ announced that they'd made NT POSIX compatible, including a Korn shell*. Is that anywhere inside Windows, still? If so, I'd think it would allow a lot....
I'm pretty sure that was gutted some time around NT4 or NT5 (aka win2000). NT was originally designed as a true microkernel architecture, where 'Windows' (the win32 API) was a personality layer on top of the native kernel, and alternate personalities included POSIX and OS/2. Various aspects of this were gutted as they got in the way of decent performance; anyway, it was only POSIX.1, which wasn't very useful except as a bullet point on a feature list.
On Wednesday, August 03, 2011 12:11:06 PM Les Mikesell wrote:
Cygwin is 'just a .dll' as far as windows is concerned. I think you can find bundled versions of just the rsync, ssh, and sshd executables with the cygwin dll, maybe even wrapped in a windows installer if you have something against the full cygwin setup.
CWRsync is one. We use it for backing up our lightning detector and weather stations (both of which run on Windows for various reasons; in the lightning detector case, drivers for the Boltek board are, as far as I know, Windows-only), where we only want changes to be backed up (in trees with many hundreds if not thousands of small to medium-sized files), and from a server cron... just exactly what rsync was designed to do. It works quite well. A full transfer using smbclient takes hours and loads down the controller PCs; rsync takes a few tens of seconds nightly and presents little load.
On 08/03/2011 06:41 AM, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
Do we have Kerberized rsync yet? Or Globus rsync?
If so... please post a link and... (^.^)
Anyway, that sort of gets to the heart of just why we have several (not just two) ftp options: ftp, vsftp, Kerberized ftp, gridftp, etc... It's a pretty common tool, and in some specific cases scripting the niche ones is necessary due to a lack of alternatives to match a given environment -- though if security isn't an issue (bringing in signed, public packages from a repo, say)... then yeah, rsync; though some people view it as "just one more thing to have to learn" and never discover the benefits.
-Iwao
At Wed, 03 Aug 2011 14:12:27 +0900 CentOS mailing list centos@centos.org wrote:
On 08/03/2011 06:41 AM, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
Do we have Kerberized rsync yet? Or Globus rsync?
If so... please post a link and... (^.^)
Is there a Kerberized ssh? Or a Kerberized ssh-agent?
Anyway, that sort of gets to the heart of just why we have several (not just two) ftp options: ftp, vsftp, Kerberized ftp, gridftp, etc... It's a pretty common tool, and in some specific cases scripting the niche ones is necessary due to a lack of alternatives to match a given environment -- though if security isn't an issue (bringing in signed, public packages from a repo, say)... then yeah, rsync; though some people view it as "just one more thing to have to learn" and never discover the benefits.
-Iwao
_______________________________________________ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
On Wed, Aug 03, 2011 at 07:57:33AM -0400, Robert Heller wrote:
Is there a Kerberized ssh? Or a Kerberized ssh-agent?
Kerberos ssh; yes, using gssapi
% ldd /usr/sbin/sshd | grep krb
    libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x0026a000)
    libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x002bc000)
    libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00fa2000)
You can turn it on using options in sshd_config. I did some testing recently: http://sweh.spuddy.org/Essays/Kerberos/begining_kerberos.html
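For reference, a minimal sketch of the options involved (option names per sshd_config(5) and ssh_config(5); this is not a complete or tested config, and defaults vary by OpenSSH version):

```
# /etc/ssh/sshd_config (server side) -- sketch only
GSSAPIAuthentication yes        # accept Kerberos credentials via GSSAPI
GSSAPICleanupCredentials yes    # destroy the credential cache on logout

# ~/.ssh/config or /etc/ssh/ssh_config (client side) -- sketch only
#   GSSAPIAuthentication yes
#   GSSAPIDelegateCredentials yes   # forward tickets to the remote host
```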
Kerberized ssh-agent doesn't make sense 'cos you're not using public keys, and the kerberos layer, itself, can request tickets to be forwarded (see my link above); no agent support needed.
On 8/3/11 12:12 AM, 夜神 岩男 wrote:
On 08/03/2011 06:41 AM, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
Do we have Kerberized rsync yet? Or Globus rsync?
If so... please post a link and... (^.^)
Rsync only uses its own transport if you run it in daemon mode, which is pretty rare. Current versions run over ssh by default, so if that already works with kerberos, so will rsync. Older versions used to run over rsh, so that would have used the version found in the same kerberos/bin/ directory as ftp. Or you can use the '-e' option to control the transport shell.
Anyway, that sort of gets to the heart of just why we have several (not just two) ftp options. ftp, vsftp, Kerberized ftp, gridftp, etc... Its a pretty common tool and in some specific cases scripting the niche ones is necessary due to a lack of alternatives to match a given environment -- though if security isn't an issue (bringing in signed, public packages from a repo, say)... then yeah, rsync; though some people view it as "just one more thing to have to learn" and never discovering the benefits.
It's easier to learn than the other options because the arguments are pretty much the same as cp with the option to add a remote user@host to the source or target. It also works better than most of the other ways to copy because besides only moving the changed data on a repeated run, it creates the target file with a temp name and renames only when complete so if the files are being used, nothing will open/access partial files.
On 8/3/2011 6:57 AM, Les Mikesell wrote:
Current versions [of rsync] run over ssh by default
I didn't notice that change, thanks.
I tracked it down, and it happened in rsync 2.6.0, which was released after EL3, which ships with 2.5.7. Alas, it appears I still need to keep "-e ssh" in muscle memory....
On 2/8/11 10:12 PM, "夜神 岩男" supergiantpotato@yahoo.co.jp wrote:
On 08/03/2011 06:41 AM, Les Mikesell wrote:
But back to the original problem, why would anyone use ftp in this century when rsync or http(s) are so much easier to manage?
Do we have Kerberized rsync yet? Or Globus rsync?
If so... please post a link and... (^.^)
To kerberize rsync, just kerberize SSH. Rsync when set with --rsh=ssh should honour the GSSAPI settings for SSH when it authenticates.
On Tue, 2011-08-02 at 22:01 +0100, Miguel Medalha wrote:
What I'm left wondering is:
- Why you are relying on PATH expansion for this from something as critical as a cron job. It is good sysadmin practice to specify explicit paths for situations like this rather than to worry about whether or not there is a good or valid reason for there being 2 ftp clients installed on the system.
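The gotcha itself is easy to reproduce with two stand-in clients (the paths and scripts below are throwaway stand-ins for illustration, not the real binaries):

```shell
#!/bin/sh
# Build two fake "ftp" clients to mimic /usr/bin/ftp and /usr/kerberos/bin/ftp.
rm -rf /tmp/pathdemo
mkdir -p /tmp/pathdemo/usr/bin /tmp/pathdemo/usr/kerberos/bin
printf '#!/bin/sh\necho standard\n' > /tmp/pathdemo/usr/bin/ftp
printf '#!/bin/sh\necho kerberos\n' > /tmp/pathdemo/usr/kerberos/bin/ftp
chmod +x /tmp/pathdemo/usr/bin/ftp /tmp/pathdemo/usr/kerberos/bin/ftp

# cron-like minimal PATH: the /usr/bin copy wins
PATH=/tmp/pathdemo/usr/bin ftp

# login-shell-like PATH after krb5-workstation.sh prepends the kerberos dir:
# the kerberos copy wins, which is why the script behaved differently in cron
PATH=/tmp/pathdemo/usr/kerberos/bin:/tmp/pathdemo/usr/bin ftp
```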
That was precisely my thought. I often noticed that people find it easier to blame others rather than questioning and rethinking their own actions...
Perhaps because, lacking sufficient understanding, they believe they have got it correct? Rethinking is only possible when one has gained sufficient knowledge to begin to question the logic of the situation! No knowledge, or very limited knowledge, means rethinking is not possible.