Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.
Each time I reboot, it seems like the configuration file I create for Network Manager gets destroyed and replaced with a default file. Nothing in the default file would actually make sense on my network, so I'm not even really sure how this machine is still connected to the network after a reboot destroys my previous configuration.
The only way I seem to be able to keep my proper DNS settings is through the GUI for Network Manager, and I have to re-enter the configuration each time I reboot. At the very least, I just want to stop Network Manager from wiping out my perfectly fine /etc/resolv.conf.
There has to be a better way.
On Sun, Apr 27, 2014 at 12:33:27AM -0400, Evan Rowley wrote:
Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.
service NetworkManager stop
chkconfig NetworkManager off
vi /etc/sysconfig/network-scripts/ifcfg-ethX
vi /etc/resolv.conf
chkconfig network on
service network start
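For anyone following along, a minimal static setup along those lines might look something like this (the device name, addresses, and DNS servers below are placeholders, not anything from this thread):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

# /etc/resolv.conf
search example.com
nameserver 192.168.1.53
nameserver 192.168.1.54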
John
I don't use NetworkManager on servers, only my laptop. Makes servers act weird. On Apr 27, 2014 1:06 AM, "Andrew Holway" andrew.holway@gmail.com wrote:
service NetworkManager stop
chkconfig NetworkManager off
vi /etc/sysconfig/network-scripts/ifcfg-ethX
vi /etc/resolv.conf
chkconfig network on
service network start
Yes. Burn it with fire!
On 04/27/2014 10:14 AM, Christopher Jacoby wrote:
I don't use NetworkManager on servers, only my laptop. Makes servers act weird.
You know, you don't get NetworkManager on a server if you don't install the 'Desktop' group. The list of packages that actually require NetworkManager is very small.
I have a development machine that also acts as a server, and it has NetworkManager installed, but it does not act 'weird' in networking.
On 04/26/2014 11:37 PM, John R. Dennison wrote:
On Sun, Apr 27, 2014 at 12:33:27AM -0400, Evan Rowley wrote:
Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.
service NetworkManager stop
chkconfig NetworkManager off
vi /etc/sysconfig/network-scripts/ifcfg-ethX
vi /etc/resolv.conf
chkconfig network on
service network start
Also helps to throw a line of "NM_CONTROLLED=NO" into /etc/sysconfig/network-scripts/ifcfg-ethX just to further tell it to go away.
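In context that's just one more line in the ifcfg file (the other values shown here are only illustrative, carried over from the earlier example):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=NO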
-- Jim Perrin The CentOS Project | http://www.centos.org twitter: @BitIntegrity | GPG Key: FA09AD77
On Apr 26, 2014, at 10:37 PM, John R. Dennison jrd@gerdesas.com wrote:
On Sun, Apr 27, 2014 at 12:33:27AM -0400, Evan Rowley wrote:
Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.
service NetworkManager stop
chkconfig NetworkManager off
vi /etc/sysconfig/network-scripts/ifcfg-ethX
vi /etc/resolv.conf
chkconfig network on
service network start
Heh,
You forgot...
yum remove NetworkManager
:-)
-- Nate Duehr denverpilot@me.com
Nathan Duehr wrote:
On Apr 26, 2014, at 10:37 PM, John R. Dennison jrd@gerdesas.com wrote:
On Sun, Apr 27, 2014 at 12:33:27AM -0400, Evan Rowley wrote:
Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.
service NetworkManager stop
chkconfig NetworkManager off
vi /etc/sysconfig/network-scripts/ifcfg-ethX
vi /etc/resolv.conf
chkconfig network on
service network start
Heh,
You forgot...
yum remove NetworkManager
:-)
Is this an impromptu poll? I think we had one for NM ("it's so much better in fedora, it was reworked..."), and everyone else, if it's not a laptop, wants it to Go Away.
But will they listen to us?
mark
On 04/28/2014 06:19 PM, m.roth@5-cent.us wrote:
Is this an impromptu poll? I think we had one for NM ("it's so much better in fedora, it was reworked..."), and everyone else, if it's not a laptop, wants it to Go Away. But will they listen to us?
The answer is found in the package set for RHEL7. The time to have voted has long passed, and was in the Fedora train. NM is and will be in EL7, and it will be there for ten years, if RH keeps to its support schedule. They won't pull it after the RC.
At least in EL6 you can in fact yum remove NM without it taking your whole system away. I haven't tried on EL7.
But, I also haven't had any issues with NetworkManager in my use cases, which includes much more than just laptops. I also am aware that others have had issues, particularly with bridging and bonding.
On 04/29/2014 02:22 PM, Lamar Owen wrote:
On 04/28/2014 06:19 PM, m.roth@5-cent.us wrote:
Is this an impromptu poll? I think we had one for NM ("it's so much better in fedora, it was reworked..."), and everyone else, if it's not a laptop, wants it to Go Away. But will they listen to us?
The answer is found in the package set for RHEL7. The time to have voted has long passed, and was in the Fedora train. NM is and will be in EL7, and it will be there for ten years, if RH keeps to its support schedule. They won't pull it after the RC.
At least in EL6 you can in fact yum remove NM without it taking your whole system away. I haven't tried on EL7.
But, I also haven't had any issues with NetworkManager in my use cases, which includes much more than just laptops. I also am aware that others have had issues, particularly with bridging and bonding.
"The NetworkManager daemon attempts to make networking configuration and operation as pain- less and automatic as possible by managing the primary network connection and other network interfaces, like Ethernet, WiFi, and Mobile Broadband devices. NetworkManager will connect any network device when a connection for that device becomes available, unless that behavior is disabled. Information about networking is exported via a D-Bus interface to any inter- ested application, providing a rich API with which to inspect and control network settings and operation."
This may be fine for users that don't know what they are doing or don't have a stable networking environment, but I have found for me it causes nothing but heartache. The first thing I do is disable it.
The sad part is that it makes us not understand what is really happening with our systems and when something doesn't work we have no idea where to look.
I have been using UNIX/BSD/Linux since the mid eighties and hate where things appear to be going - looking more and more like Windows.
my $.02
On Tue, Apr 29, 2014 at 1:42 PM, Steve Clark sclark@netwolves.com wrote:
This may be fine for users that don't know what they are doing or don't have a stable networking environment, but I have found for me it causes nothing but heartache. The first thing I do is disable it.
The sad part is that it makes us not understand what is really happening with our systems and when something doesn't work we have no idea where to look.
I have been using UNIX/BSD/Linux since the mid eighties and hate where things appear to be going - looking more and more like Windows.
There are two sides to this. On the one hand you want to be able to nail down server configurations - and probably anything that is going to stay wired. On the other, you can't possibly have liked what you had to do to add a new network (or any other) device to a BSD system in the 80's, and it is kind of nice to plug in a USB device and have it come up working without a reboot. I think the real issue is that the way to nail things down either hasn't stabilized or isn't well documented. For example, I think there are ways to tell NM not to mess with a specific interface setting, and maybe a way to say you don't want it to screw up your resolv.conf file, but can you tell it that adding a USB device and picking up a DHCP address is OK, but you don't want to change your default route just because DHCP offers it?
On 4/29/2014 13:05, Les Mikesell wrote:
can you tell it that adding a USB device and picking up a DHCP address is OK, but you don't want to change your default route just because DHCP offers it?
A mixed DHCP and static IP configuration is a very useful but often neglected combination. [1]
Every OS I've used requires some hacking around to make it work as desired. The only reason Linux is the easiest of the bunch is that it has a history of letting you turn off the automation, so you can prevent it from doing undesired things.
Windows is far worse than CentOS in this regard, NM or no. [2]
---
[1] A machine might need to accept a random DHCP IP in 192.168.0.x to be allowed through the LAN's restrictive Internet gateway but also need to have static IP 172.16.17.1 to serve a set of Internet-disconnected boxes in that same /24 scheme. This is easy with all modern OSes if you have two NICs and two Ethernet connections back to the nearest switch. Not so easy when both purposes must be served by a single NIC.
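One sketch of a single-NIC workaround on CentOS (the alias-interface approach is just one way to do it, the addresses are placeholders, and I'd expect NetworkManager's handling of aliases to vary): run DHCP on the base interface and put the static address on an alias:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp

# /etc/sysconfig/network-scripts/ifcfg-eth0:0
DEVICE=eth0:0
ONPARENT=yes
BOOTPROTO=none
IPADDR=172.16.17.1
NETMASK=255.255.255.0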
On Tue, Apr 29, 2014 at 2:40 PM, Warren Young warren@etr-usa.com wrote:
On 4/29/2014 13:05, Les Mikesell wrote:
can you tell it that adding a USB device and picking up a DHCP address is OK, but you don't want to change your default route just because DHCP offers it?
A mixed DHCP and static IP configuration is a very useful but often neglected combination. [1]
Every OS I've used requires some hacking around to make it work as desired. The only reason Linux is the easiest of the bunch is that it has a history of letting you turn off the automation, so you can prevent it from doing undesired things.
Yes, but the configs tend to be tied to the names of the devices. If a new device is going to be added on the fly when you jack in a USB plug, where do you hack to say that device shouldn't clobber your resolv.conf or default gateway?
Les Mikesell wrote:
On Tue, Apr 29, 2014 at 2:40 PM, Warren Young warren@etr-usa.com wrote:
On 4/29/2014 13:05, Les Mikesell wrote:
can you tell it that adding a USB device and picking up a DHCP address is OK, but you don't want to change your default route just because DHCP offers it?
A mixed DHCP and static IP configuration is a very useful but often neglected combination. [1]
Every OS I've used requires some hacking around to make it work as desired. The only reason Linux is the easiest of the bunch is that it has a history of letting you turn off the automation, so you can prevent it from doing undesired things.
Yes, but the configs tend to be tied to the names of the devices. If a new device is going to be added on the fly when you jack in a USB plug, where do you hack to say that device shouldn't clobber your resolv.conf or default gateway?
Or, for that matter, you reboot, and oops, you left a USB key in there, and /dev/sdc3 ain't there....
mark
Les Mikesell wrote:
For example, I think there are ways to tell NM not to mess with a specific interface setting, and maybe a way to say you don't want it to screw up your resolv.conf file,
I don't mind NM editing resolv.conf if it knows - or even thinks it knows - how to improve on the current settings, but what I don't understand is why it occasionally deletes the current settings without substituting anything else. I can't imagine any situation where this would help. Maybe the present settings are defective in some way, but having no settings at all cannot possibly be better.
Having said all that, in my case NM has been working perfectly for over a year now. But I can't forgive it for the hours I wasted on it in the past.
However, I would never run NM on a server, by which I mean a machine that offers services like dhcp and http.
On 2014-04-30, Timothy Murphy gayleard@eircom.net wrote:
I don't mind NM editing resolv.conf if it knows
- or even thinks it knows - how to improve
on the current settings, but what I don't understand is why it occasionally deletes the current settings without substituting anything else. I can't imagine any situation where this would help. Maybe the present settings are defective in some way, but having no settings at all cannot possibly be better.
No settings might be better. If I take my laptop from one site to another, keeping my previous resolv.conf intact, and NM doesn't remove it, then my laptop will try to query the previous site's DNS. They may not like that; depending on how paranoid they are, they may even take measures to block my traffic. Even if not, I may see some really bizarre DNS behavior which could be difficult to troubleshoot, whereas having no DNS at all will be very obvious very quickly.
I don't use NetworkManager, so I don't know the answer to this question: is there a way to tell it not to clobber portions of your network configuration, and/or to provide it with defaults if it can't determine values for a particular option? That seems like the most logical way to handle this scenario.
--keith
On Wed, Apr 30, 2014 at 10:01 AM, Keith Keller kkeller@wombat.san-francisco.ca.us wrote:
No settings might be better. If I take my laptop from one site to another, keeping my previous resolv.conf intact, and NM doesn't remove it, then my laptop will try to query the previous site's DNS. They may not like that; depending on how paranoid they are, they may even take measures to block my traffic. Even if not, I may see some really bizarre DNS behavior which could be difficult to troubleshoot, whereas having no DNS at all will be very obvious very quickly.
So you only have one network interface active at a time? Our servers typically have at least 6 NICs and it is pretty common to have at least 4 active on different subnets. And bringing up a new interface does _not_ mean I always want to use the DNS servers or default route DHCP might offer.
On 2014-04-30, Les Mikesell lesmikesell@gmail.com wrote:
On Wed, Apr 30, 2014 at 10:01 AM, Keith Keller kkeller@wombat.san-francisco.ca.us wrote:
No settings might be better. If I take my laptop from one site to another, keeping my previous resolv.conf intact, and NM doesn't remove it, then my laptop will try to query the previous site's DNS. They may not like that; depending on how paranoid they are, they may even take measures to block my traffic. Even if not, I may see some really bizarre DNS behavior which could be difficult to troubleshoot, whereas having no DNS at all will be very obvious very quickly.
So you only have one network interface active at a time?
That is of course not what I wrote. The above is just one example where I might prefer an empty resolv.conf instead of an old (and possibly incorrect) one.
Our servers typically have at least 6 NICs and it is pretty common to have at least 4 active on different subnets. And bringing up a new interface does _not_ mean I always want to use the DNS servers or default route DHCP might offer.
So in this case you might prefer an old resolv.conf instead of a new one or an empty one. I don't recall anyone ever writing that any of these scenarios is always preferable over the others.
At any rate, for CentOS 6 we can still say "if you don't like NM, don't use it".
--keith
On Wed, Apr 30, 2014 at 4:34 PM, Keith Keller kkeller@wombat.san-francisco.ca.us wrote:
At any rate, for CentOS 6 we can still say "if you don't like NM, don't use it".
Yes, but we are approaching the end of an era. As soon as 7 is out, you won't be able to get applications for 6 and you'll be forced to switch. Oh wait, that already happened for Flash and Chrome, didn't it?
On 04/30/2014 11:01 AM, Keith Keller wrote:
I don't use NetworkManager, so I don't know the answer to this question: is there a way to tell it not to clobber portions of your network configuration, and/or to provide it with defaults if it can't determine values for a particular option?
The granularity is pretty poor, but you can choose among several types for the 'Method:' used for a connection. One of these is 'Automatic (DHCP)', and the next one down is 'Automatic (DHCP) addresses only', which should, IIRC, leave resolv.conf alone (useful if you're using something like OpenDNS).
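If you're driving it from the ifcfg files rather than the GUI, the knobs I believe map to this are PEERDNS and DEFROUTE (a sketch only; check the initscripts and NetworkManager docs for your release, since support has varied):

# /etc/sysconfig/network-scripts/ifcfg-eth1   (illustrative)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=dhcp
PEERDNS=no    # don't let this connection rewrite /etc/resolv.conf
DEFROUTE=no   # don't take a default route from this connection's DHCP lease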
I'd personally like to see more configurability here, but that's a post for another day.
Keith Keller wrote:
I don't mind NM editing resolv.conf if it knows
- or even thinks it knows - how to improve
on the current settings, but what I don't understand is why it occasionally deletes the current settings without substituting anything else. I can't imagine any situation where this would help. Maybe the present settings are defective in some way, but having no settings at all cannot possibly be better.
No settings might be better. If I take my laptop from one site to another, keeping my previous resolv.conf intact, and NM doesn't remove it, then my laptop will try to query the previous site's DNS. They may not like that; depending on how paranoid they are, they may even take measures to block my traffic.
Does this happen? I've never encountered it. In my case, the probability of my DNS settings in resolv.conf not working in a new site is close to zero, so you are replacing something that might possibly not work by something that is certain not to work.
On 04/29/2014 03:05 PM, Les Mikesell wrote:
There are two sides to this. On the one hand you want to be able to nail down server configurations - and probably anything that is going to stay wired.
Ok, I'll bite on this one.
*Why* do we want a server configuration to be nailed down? Is it due to a real need, or is it due to the inadequacies in the tools to allow fully dynamic and potentially transparently load-balanced dynamic configuration? Or is it due to the perceived need to control things manually instead of using effective automation? I do say 'effective' automation, yes, since ineffective or partially effective automation is worse than no automation. But one of the cornerstones of good sysadmin practice is to automate those things that should be automated.
Dynamic DNS and/or mDNS with associated addresses deals with the need for a static IP; SRV records in the DNS can deal with the need for a static name, as long as you have a domain; and something like (but different from!) Universal PnP can deal with that.
NetworkManager (and similar automation) has application in cloud-based things, where the server needs to be as dynamic as the device accessing the server. It also has application in embedded things, where you want to plug in an appliance to a network and have its services available regardless of the network environment (maybe no DHCP, maybe no DNS, maybe dynamic addresses, and maybe static; it really shouldn't matter).
Lamar Owen wrote:
On 04/29/2014 03:05 PM, Les Mikesell wrote:
There are two sides to this. On the one hand you want to be able to nail down server configurations - and probably anything that is going to stay wired.
Ok, I'll bite on this one.
*Why* do we want a server configuration to be nailed down? Is it due to a real need, or is it due to the inadequacies in the tools to allow fully dynamic and potentially transparently load-balanced dynamic configuration? Or is it due to the perceived need to control things
<snip> I've got two rooms, with a number of servers in each room behind a firewall, *required* by US law (HIPAA & PII data). I've got compute clusters, and the compute nodes are all 192.168.etc, and they MUST NOT CHANGE, EVER!!! All of those setups are behind their own switches.
Tell me how I need NM to manage them.
mark
On 04/30/2014 10:47 AM, m.roth@5-cent.us wrote:
I've got two rooms, with a number of servers in each room behind a firewall, *required* by US law (HIPAA & PII data). I've got compute clusters, and the compute nodes are all 192.168.etc, and they MUST NOT CHANGE, EVER!!! All of those setups are behind their own switches. Tell me how I need NM to manage them.
*You* don't, at least not at the moment.
But others with a different setup might. That's why we have the choice.
On Wed, Apr 30, 2014 at 11:50 AM, Lamar Owen lowen@pari.edu wrote:
On 04/30/2014 10:47 AM, m.roth@5-cent.us wrote:
I've got two rooms, with a number of servers in each room behind a firewall, *required* by US law (HIPAA & PII data). I've got compute clusters, and the compute nodes are all 192.168.etc, and they MUST NOT CHANGE, EVER!!! All of those setups are behind their own switches. Tell me how I need NM to manage them.
*You* don't, at least not at the moment.
But others with a different setup might. That's why we have the choice.
Choice is great, surprises not so much. And I find it surprising that NM sometimes runs, sometimes doesn't, depending on seemingly unrelated things. And I still don't understand how to control what it would do for, say, a dynamically inserted USB device. Is it possible to make it take 'address only' from DHCP in that context?
On 04/30/2014 12:56 PM, Les Mikesell wrote:
Choice is great, surprises not so much. And I find it surprising that NM sometimes runs, sometimes doesn't, depending on seemingly unrelated things.
Those would be bugs, and bugs need fixing. But they can't be fixed if they're not reported.
And I still don't understand how to control what it would do for, say, a dynamically inserted USB device. Is it possible to make it take 'address only' from DHCP in that context?
NetworkManager doesn't work in terms of interfaces, but in terms of connections. I'll have to try it with a USB Wi-Fi NIC before I can answer completely, but when you create a connection you get this option, and I would think (and I'm going to try it, since I do have a USB Wi-Fi NIC at home, just not sure if it's supported by ELrepo or not) that upon insertion a dialog to create a connection will come up, and you select the option from the pulldown in the IPv4 tab. The udev framework allows connections to 'belong' to different NICs, as far as I can tell, and is what makes the connection persistent across reboots in that sense. But I reserve the right to be wrong.
On Wed, Apr 30, 2014 at 9:32 AM, Lamar Owen lowen@pari.edu wrote:
On 04/29/2014 03:05 PM, Les Mikesell wrote:
There are two sides to this. On the one hand you want to be able to nail down server configurations - and probably anything that is going to stay wired.
Ok, I'll bite on this one.
*Why* do we want a server configuration to be nailed down? Is it due to a real need, or is it due to the inadequacies in the tools to allow fully dynamic and potentially transparently load-balanced dynamic configuration? Or is it due to the perceived need to control things manually instead of using effective automation? I do say 'effective' automation, yes, since ineffective or partially effective automation is worse than no automation. But one of the cornerstones of good sysadmin practice is to automate those things that should be automated.
You forgot to mention interoperable along with effective and complete. When a network can run perfectly without a human controlling the names and addresses precisely at some level or another, regardless of what you plug into it, I'll happily agree that automation would be an improvement. Right now I can't even dream of that as a possibility. And so each component needs to be configured by a human - and stay that way - or it isn't going to work with the rest of the world.
Dynamic DNS and/or mDNS with associated addresses deals with the need for a static IP;
Is that secure?
SRV records in the DNS can deal with the need for a static name, as long as you have a domain; and something like (but different from!) Universal PnP can deal with that.
Is that a standard that is universal?
NetworkManager (and similar automation) has application in cloud-based things, where the server needs to be as dynamic as the device accessing the server.
You just pushed the management somewhere else - you didn't eliminate it.
It also has application in embedded things, where you want to plug in an appliance to a network and have its services available regardless of the network environment (maybe no DHCP, maybe no DNS, maybe dynamic addresses, and maybe static; it really shouldn't matter).
Your argument makes sense for devices that don't provide a reasonable interface for their own configuration. But how does that apply to a server with a full Linux distribution?
My last test with Network Manager was a couple of years ago. At that time, a client that was set to boot using DHCP and NM would not set its hostname when such was provided with the DHCP response. That was a show stopper for me (none of my 200+ non-wifi clients have any configuration on them that identifies the machine in any way). Is this still the case?
Steve
On 04/30/2014 11:10 AM, Steve Thompson wrote:
My last test with Network Manager was a couple of years ago. At that time, a client that was set to boot using DHCP and NM would not set its hostname when such was provided with the DHCP response. That was a show stopper for me (none of my 200+ non-wifi clients have any configuration on them that identifies the machine in any way). Is this still the case?
To the best of my knowledge, DNS is queried for the hostname info if a connection is set to come up at boot, and I believe it's the first connection that comes up that gets the prize.
I get a hostname upon boot with my laptops and my desktops, wired and wireless, with CentOS 6.5
On 04/30/2014 11:03 AM, Les Mikesell wrote:
You forgot to mention interoperable along with effective and complete.
No, I didn't forget it.
Dynamic DNS and/or mDNS with associated addresses deals with the need for a static IP;
Is that secure?
Dynamic DNS can be, yes. It depends upon the way the zone file is updated and whether it's Internet-exposed or not.
If we're relying on mDNS we're probably disconnected.
But you've been around long enough to know that security and convenience are inversely proportional.
Is [the SRV DNS record] a standard that is universal?
RFC 2782. Becoming more common, and very common for VoIP networks using SIP.
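For anyone who hasn't seen one, an SRV record in a zone file looks roughly like this (hypothetical SIP example; the numeric fields are priority, weight, and port, followed by the target host):

_sip._udp.example.com.  86400 IN SRV 10 60 5060 sipserver.example.com.   ; priority 10, weight 60, port 5060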
You just pushed the management somewhere else - you didn't eliminate it.
Why yes, yes I did push the management elsewhere. If you have a hundred thousand cloud nodes, where would you rather manage them; at the individual node level, or in a centralized manner? Go to a cloud panel, select 'deploy development PostgreSQL server' and a bit later connect to it and get to work. (Yes, I know you need AAA and all kinds of other things, but for the application developer who needs a clean sandbox to test something, being able to roll a clean temp server out without admin intervention could be very useful).
Your argument makes sense for devices that don't provide a reasonable interface for their own configuration. But how does that apply to a server with a full Linux distribution?
Embedded devices, with what I would consider to be full Linux distributions on them, with nothing more than a network device to manage them already exist. Network device meaning Wi Fi, too. NAS appliances are but one application; the WD MyBook Live, for instance, has a complete non-GUI Debian on it, and there are repos for various packages (for grins and giggles I installed IRAF on one, and ran it with ssh X forwarding to my laptop). Is a NAS appliance not a server?
On Wed, Apr 30, 2014 at 12:17 PM, Lamar Owen lowen@pari.edu wrote:
You forgot to mention interoperable along with effective and complete.
No, I didn't forget it.
Dynamic DNS and/or mDNS with associated addresses deals with the need for a static IP;
Is that secure?
Dynamic DNS can be, yes. It depends upon the way the zone file is updated and whether it's Internet-exposed or not.
So how can it be dynamic, but controlled at the same time?
But you've been around long enough to know that security and convenience are inversely proportional.
Sort-of. You just have to work out convenient operations over secure channels.
Is [the SRV DNS record] a standard that is universal?
RFC 2782. Becoming more common, and very common for VoIP networks using SIP.
I'll take that as a 'no' for the general case.
You just pushed the management somewhere else - you didn't eliminate it.
Why yes, yes I did push the management elsewhere. If you have a hundred thousand cloud nodes, where would you rather manage them; at the individual node level, or in a centralized manner?
I'd like to mange things the same way, regardless of the count.
Go to a cloud panel, select 'deploy development PostgreSQL server' and a bit later connect to it and get to work.
How is that easier than saying 'ssh nodename yum -y install postgresql-server'? Something I already know how to do and how to make happen any number of times - and something that works on real hardware and in spite of the differences in VM cloud tools.
(Yes, I know you need AAA and all kinds of other things, but for the application developer who needs a clean sandbox to test something, being able to roll a clean temp server out without admin intervention could be very useful).
At the expense of being black magic that won't work outside of that environment. I don't like magic. I don't like things that lock you in to only one vendor/tool/OS.
Your argument makes sense for devices that don't provide a reasonable interface for their own configuration. But how does that apply to a server with a full Linux distribution?
Embedded devices, with what I would consider to be full Linux distributions on them, with nothing more than a network device to manage them already exist. Network device meaning Wi Fi, too. NAS appliances are but one application; the WD MyBook Live, for instance, has a complete non-GUI Debian on it, and there are repos for various packages (for grins and giggles I installed IRAF on one, and ran it with ssh X forwarding to my laptop). Is a NAS appliance not a server?
Actually, I'd like to see a single device do all of that gunk plus have an HDMI out to act as a media player so a typical home would only need one extra 'thing' besides the computer/tablet/phone. But it doesn't matter - you still have to configure it somehow. Do you want things to guess at your firewall rules?
On 04/30/2014 01:46 PM, Les Mikesell wrote:
On Wed, Apr 30, 2014 at 12:17 PM, Lamar Owen lowen@pari.edu wrote:
Dynamic DNS can be, yes. It depends upon the way the zone file is updated and whether it's Internet-exposed on not.
So how can it be dynamic, but controlled at the same time?
Set up a DD-WRT consumer router for use with OpenDNS by way of dns-o-matic and you'll see how. Now replace OpenDNS and dns-o-matic with your own services.
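A minimal sketch of the 'your own services' end of that, using BIND's nsupdate with a TSIG key (the names, key file, and address are placeholders, and the zone has to be configured to allow updates signed by that key):

nsupdate -k /etc/Kdhcp-update.+157+12345.private <<'EOF'
server ns1.example.com
zone example.com
update delete host42.example.com. A
update add host42.example.com. 300 A 192.168.10.42
send
EOF

Only holders of the key can change the record, so it's dynamic but still controlled.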
I'll take [SRV record examples] as a 'no' for the general case.
How is an RFC quote and an example of a running standardized application using the feature a 'no?' Please read https://en.wikipedia.org/wiki/SRV_record and see just how standardized it is.
How is [rolling a cloud instance dev VM] easier than saying 'ssh nodename yum -y install postgresql-server'? Something I already know how to do and how to make happen any number of times - and something that works on real hardware and in spite of the differences in VM cloud tools.
How do you guarantee a clean sandbox? In the cloud case, every VM rolled is as clean as the template that generated it, and gives you a known starting point. And I use PostgreSQL as the example since I maintained those RPMs for five years, and I understand the need for a clean sandbox, having learned the hard way what can happen if you don't take the care to make your sandbox clean (this was pre-mach, and definitely pre-mock, and buildroots had to be carefully regulated since they weren't cleanly sandboxed by mock and kin).
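(For what it's worth, mock handles exactly that sandboxing now; assuming an EPEL 6 config and your own SRPM, a typical run is something like:

mock -r epel-6-x86_64 --rebuild mypackage-1.0-1.el6.src.rpm

which builds in a throwaway chroot populated only from the repos named in that config.)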
At the expense of being black magic that won't work outside of that environment. I don't like magic. I don't like things that lock you in to only one vendor/tool/OS.
OpenStack will do most of what I'm talking about already.
Actually, I'd like to see a single device do all of that gunk plus have an HDMI out to act as a media player so a typical home would only need one extra 'thing' besides the computer/tablet/phone. But it doesn't matter - you still have to configure it somehow. Do you want things to guess at your firewall rules?
That last point is exactly what UPNP was supposed to solve.
Such a device as you want exists; see the GuruPlug Display and descendants. They are definitely tinkering boxen, and they do have their issues (I have a GuruPlug Server Plus with the eSATA port and the infamous overheating problems) but they are available.
Lamar Owen wrote:
On 04/30/2014 01:46 PM, Les Mikesell wrote:
On Wed, Apr 30, 2014 at 12:17 PM, Lamar Owen lowen@pari.edu wrote:
Dynamic DNS can be, yes. It depends upon the way the zone file is updated and whether it's Internet-exposed on not.
So how can it be dynamic, but controlled at the same time?
Set up a DD-WRT consumer router for use with OpenDNS by way of dns-o-matic and you'll see how. Now replace OpenDNS and dns-o-matic with your own services.
<snip> Um, er... DD-WRT is off-topic, so if anyone wants the *REAL* RANT and howto, contact me offlist. The short version is a) that's the most amateur, in the worst sense of the word, project I've ever seen, and b) it took me about a month, and three or so debrickings, to get a good version....
mark
On Wed, Apr 30, 2014 at 1:21 PM, Lamar Owen lowen@pari.edu wrote:
I'll take [SRV record examples] as a 'no' for the general case.
How is an RFC quote and an example of a running standardized application using the feature a 'no?' Please read https://en.wikipedia.org/wiki/SRV_record and see just how standardized it is.
So can I expect it to work with ssh? SMTP? SNMP? Or any application I'm likely to use? Who's going to open the corresponding firewall holes?
How is [rolling a cloud instance dev VM] easier than saying 'ssh nodename yum -y install postgresql-server'? Something I already know how to do and how to make happen any number of times - and something that works on real hardware and in spite of the differences in VM cloud tools.
How do you guarantee a clean sandbox?
Either clonezilla or a minimal OS install to start. Or if it is a VM, copy/revert an image. But except for development build systems we mostly work with hardware.
In the cloud case, every VM rolled is as clean as the template that generated it, and gives you a known starting point. And I use PostgreSQL as the example since I maintained those RPMs for five years, and I understand the need for a clean sandbox, having learned the hard way what can happen if you don't take the care to make your sandbox clean (this was pre-mach, and definitely pre-mock, and buildroots had to be carefully regulated since they weren't cleanly sandboxed by mock and kin).
At the expense of being black magic that won't work outside of that environment. I don't like magic. I don't like things that lock you in to only one vendor/tool/OS.
OpenStack will do most of what I'm talking about already.
On real hardware?
Actually, I'd like to see a single device do all of that gunk plus have an HDMI out to act as a media player so a typical home would only need one extra 'thing' besides the computer/tablet/phone. But it doesn't matter - you still have to configure it somehow. Do you want things to guess at your firewall rules?
That last point is exactly what UPNP was supposed to solve.
Great... Why have a firewall when holes open by magic at an insecure application's request?
Such a device as you want exists; see the GuruPlug Display and descendants. They are definitely tinkering boxen, and they do have their issues (I have a GuruPlug Server Plus with the eSATA port and the infamous overheating problems) but they are available.
I'd really like at least a 4-port switch and room for at least a pair of 2.5" drives in what could still be a relatively tiny case. That is, combine everything in a typical router, nas, and media player. Current CPUs should be able to handle all those tasks at once.
Steve Clark wrote:
On 04/29/2014 02:22 PM, Lamar Owen wrote:
On 04/28/2014 06:19 PM, m.roth@5-cent.us wrote:
Is this an impromptu poll? I think we had one for NM ("it's so much better in fedora, it was reworked..."), and everyone else, if it's not a laptop, wants it to Go Away. But will they listen to us?
The answer is found in the package set for RHEL7. The time to have voted has long passed, and was in the Fedora train. NM is and will be in EL7, and it will be there for ten years, if RH keeps to its support schedule. They won't pull it after the RC.
At least in EL6 you can in fact yum remove NM without it taking your whole system away. I haven't tried on EL7.
But, I also haven't had any issues with NetworkManager in my use cases, which includes much more than just laptops. I also am aware that others have had issues, particularly with bridging and bonding.
"The NetworkManager daemon attempts to make networking
configuration and operation as painless and automatic as possible by managing the primary network connection and other network interfaces, like Ethernet, WiFi, and Mobile Broadband devices. NetworkManager will connect any network device when a connection for that device becomes available, unless that behavior is disabled. Information about networking is exported via a D-Bus interface to any interested application, providing a rich API with which to inspect and control network settings and operation."
This may be fine for users that don't know what they are doing or don't have a stable networking environment, but I have found for me it causes nothing but heartache. The first thing I do is disable it.
The sad part is that it makes us not understand what is really happening with our systems and when something doesn't work we have no idea where
to look.
For one thing, if we're not active on the fedora lists, then we have no vote, it sounds like. And IMO, a lot of fedora folks are desktop folks, and thinking, perhaps, of competing with ubuntu.
I think upstream might consider, esp. that we're now a "partner", talking to *us*. I mean, this is an ENTERPRISE o/s, and that means, heavily, *servers*, and does anyone actually use wireless, or anything other than hardwired, for a server?
mark
Frank Cox wrote:
On Tue, 29 Apr 2014 15:17:09 -0400 m.roth@5-cent.us wrote:
does anyone actually use wireless, or anything other than hardwired, for a server?
That depends on how you define "server".
A Dell PowerEdge, or an HP DLx80, or a Penguin, or.... Why, what other values of "server" do you have?
mark
On Tue, 29 Apr 2014 15:44:10 -0400 m.roth@5-cent.us wrote:
A Dell PowerEdge, or an HP DLx80, or a Penguin, or.... Why, what other values of "server" do you have?
Transferring files from one computer to another via ssh or ftp, for one. Backup via rsync for two. Database access for three.
Need I go on?
Frank Cox wrote:
On Tue, 29 Apr 2014 15:44:10 -0400 m.roth@5-cent.us wrote:
A Dell PowerEdge, or an HP DLx80, or a Penguin, or.... Why, what other values of "server" do you have?
Transferring files from one computer to another via ssh or ftp, for one. Backup via rsync for two. Database access for three.
Need I go on?
Two or three? Like, at home? Very small office? My system at home, I'm setting up for backups, and I will get around to samba shares... but I call it a workstation.
"Trains stop at a train station, buses stop at a bus station...."
mark "I can stop playing solitaire any time I want...."
Warren Young wrote:
On 4/29/2014 14:02, m.roth@5-cent.us wrote:
"Trains stop at a train station, buses stop at a bus station...."
Taxis stop at the train station, cars park at the bus station, busses pull up to the airport...
The lines aren't as sharp as you're trying to draw them.
You completely missed the joke. I hate explaining jokes, it kills them.
"...so doesn't work stop at a workstation?"
mark
On 4/29/2014 13:17, m.roth@5-cent.us wrote:
I mean, this is an ENTERPRISE o/s, and that means, heavily, *servers*, and does anyone actually use wireless, or anything other than hardwired, for a server?
I think you're setting up false dichotomies here. It isn't about desktop vs server, or WiFi vs wired.
First, both CentOS and Ubuntu have server and desktop focused variants. RHEL7 will make this separation even clearer[1], though it seems the reason has more to do with keeping the ISOs to single layer DVD size than because they intend for the Workstation/Client and Server editions to functionally diverge.
Second, as to whether there are servers that use WiFi, of course there are. Print servers, embedded systems, media servers, IP cameras... Lots of Linux servers use WiFi.
Back in the days when Big Iron Unix was the biggest piece of the market, the very thing being complained about in this thread would have been touted as a great feature over inflexible desktop OSes. Multipath I/O, hot-swap disk controllers, NIC failover, etc. all happened in that world first. Is dynamic networking any different, really?
----
[1] RHEL 7 is apparently going to come out in 4-6 separate editions. See http://distrowatch.com/?newsid=08406. The article only talks about three of the editions, but I've also noticed mention elsewhere of Compute Node, Atomic Host, and Guest editions. I don't know if that's really 6 separate versions, or if I, too, am making distinctions where there are none.
Warren Young wrote:
On 4/29/2014 13:17, m.roth@5-cent.us wrote:
I mean, this is an ENTERPRISE o/s, and that means, heavily, *servers*, and does anyone actually use wireless, or anything other than hardwired, for a server?
I think you're setting up false dichotomies here. It isn't about desktop vs server, or WiFi vs wired.
First, both CentOS and Ubuntu have server and desktop focused variants.
<snip>
Back in the days when Big Iron Unix was the biggest piece of the market, the very thing being complained about in this thread would have been touted as a great feature over inflexible desktop OSes. Multipath I/O, hot-swap disk controllers, NIC failover, etc. all happened in that world first. Is dynamic networking any different, really?
Yes. There are a lot of servers that *require* special setups - think of h/a failover systems, or, as someone mentioned, systems with multiple ports, and some of those are on/feed internal subnets. I can't see how NM can do other than mangle that.
[1] RHEL 7 is apparently going to come out in 4-6 separate editions. See http://distrowatch.com/?newsid=08406. The article only talks about three of the editions, but I've also noticed mention elsewhere of Compute Node, Atomic Host, and Guest editions. I don't know if that's really 6 separate versions, or if I, too, am making distinctions where there are none.
I didn't see anything about "compute node", etc. Guest, I would assume, is for kiosk-type setups. Compute node... it automatically detects a GPU(s)? It comes with PBS/Torque installed? Fuse? Gluster? Ready to be joined to a cluster? I'd like to see what their definition of "compute node" is....
But thanks very much for the link - I didn't know that RC 7 is out this week....
mark
On 4/29/2014 14:15, m.roth@5-cent.us wrote:
Compute node... it automatically detects a GPU(s)? It comes with PBS/Torque installed? Fuse? Gluster? Ready to be joined to a cluster? I'd like to see what their definition of "compute node" is....
It's probably the RHEL7 version of their HPC offering:
http://www.redhat.com/products/enterprise-linux/scientific-computing/
But thanks very much for the link - I didn't know that RC 7 is out this week....
It's just the release candidate: http://goo.gl/cM1q2h
A final release date has not been publicly announced, despite the Red Hat Summit a couple of weeks ago.
FWIW, I saw somewhere that those ISOs are going to disappear fairly soon. Grab them if you want to have a play before final release.
On 04/29/2014 03:17 PM, m.roth@5-cent.us wrote:
I think upstream might consider, esp. that we're now a "partner", talking to *us*. I mean, this is an ENTERPRISE o/s, and that means, heavily, *servers*, and does anyone actually use wireless, or anything other than hardwired, for a server?
Enterprise != servers. Server != hardwired.
The enterprise desktop is real, and it is not going away.
Wirelessly-attached servers are out there, especially in manufacturing.
On 04/29/2014 02:42 PM, Steve Clark wrote:
This may be fine for users that don't know what they are doing or don't have a stable networking environment, but I have found for me it causes nothing but heartache.
Steve, first, if this comes off as a rant, that's not my intention, and it's not directed to you personally.
My experience? There is no such thing as a 100% stable networking environment. Systems like Tandem's NonStop take that a step further, and realize that there's no such thing as a 100% stable CPU, either.
This whole discussion reminds me of the SELinux discussions, and the oft-quoted advice to just disable it, it just gets in the way of The Way I'm Used To Doing Things (TM).
The first thing I do is disable it. The sad part is that it makes us not understand what is really happening with our systems and when something doesn't work we have no idea where to look.
NetworkManager is well-documented. You just have to read the docs and be willing to try something new. It also logs to /var/log/messages in plain text, too. There are more pieces, yes, to trace through. But, unless you install the Desktop group or the anaconda package on your server you won't get NetworkManager on it. If you install the Desktop package, there's a bit of an assumption that you want a Desktop, no?
I have been using UNIX/BSD/Linux since the mid eighties and hate where things appear to be going - looking more and more like Windows. my $.02
Looking like Windows is not a capital crime. (No, I am not a Windows freak; I've used *nix of various types probably as long as you have, and I haven't used any Windows as my primary desktop of choice since Windows 95 was a pup, and have never used a Windows Server as my primary server of choice.)
NetworkManager's goal is extremely simple, and is in the README. It's simply: "NetworkManager attempts to keep an active network connection available at all times." Networks are unreliable. Period. That's why we have BGP and OSPF and all the other interior and exterior gateway protocols, because network links are 'best-effort' services; QoS depends upon the expectation of unreliability, in fact, since the only way to guarantee any packet a timeslot in a full pipe is to throw a different packet out the door. See the absolutely delightful video 'Warriors of the .Net' (www.warriorsofthe.net and elsewhere). We bond interfaces because one could go down, right? (This is one area where NM is weak, incidentally).
I cannot foresee every failure in any manual configuration. We have dynamic routing protocols for a reason, since nobody can foresee how to weight every possible static route.
Back in the late 1800's, people who had used tillers to steer their horseless carriages probably thought the same thing about this new fancy gizmo called a steering wheel. And automatic transmissions? Heresy!
Much of what I learned with Xenix on the Tandy 6000, Convergent Unix System V Rel 2 on the AT&T 3B1, Apollo DomainOS (using the 4.3BSD 'personality' for the most part), SunOS and later Solaris on Sun3 and SPARC hardware, and older Linux on PC and non-PC hardware still applies; but things move on as requirements change. (At least I can still have my vi! I HAVE used vi since the 80's, and it is still the same quirky beast it always was, even in Xenix V7 on the T6K.).
But the GUI on the 3B1? And those 'pads' on DomainOS? Not portable, and fallen by the wayside.
Older does not mean better, and many times newer things have to be tried out first to see if they are, or aren't, better. Systemd is one of these things, and it will be interesting to see how that all plays out over the next few years.
On Wed, Apr 30, 2014 at 9:15 AM, Lamar Owen lowen@pari.edu wrote:
NetworkManager is well-documented. You just have to read the docs and be willing to try something new. It also logs to /var/log/messages in plain text, too. There are more pieces, yes, to trace through. But, unless you install the Desktop group or the anaconda package on your server you won't get NetworkManager on it. If you install the Desktop package, there's a bit of an assumption that you want a Desktop, no?
No. Just no. Not if you think that means there is just one Desktop and it is physically attached to the box you are installing. That hasn't been a reasonable assumption for anything running X, ever, and even less so with freenx/x2go. You want the applications on a stable, stably networked server and the displays out where people work.
On 04/30/2014 10:36 AM, Les Mikesell wrote:
On Wed, Apr 30, 2014 at 9:15 AM, Lamar Owen lowen@pari.edu wrote:
...If you install the Desktop package, there's a bit of an assumption that you want a Desktop, no?
No. Just no. Not if you think that means there is just one Desktop and it is physically attached to the box you are installing.
I don't; I'm familiar with LTSP and similar. In these cases a different group could be defined that includes all of the packages of the Desktop group but without NM, and called 'LTSP Desktop Server' or 'Virtual Desktop Server' or similar. But in X there is no real difference between a local X server and a remote one, other than the display number and the plumbing. Perhaps to make it even clearer the existing Desktop group could be renamed 'Console Desktop' but that's a bit much, since most Desktop users are console users; that's not to say that there is not a 'Citrix Terminal Services'-like use case out there. And you can yum remove NetworkManager without major impact, as long as you make sure to re-enable the other network service.
That hasn't been a reasonable assumption for anything running X, ever, and even less so with freenx/x2go.
Interestingly, X turns the whole client/server thing on its head..... and always has. This is more of a 'VDI' type thing, though, and is not the common Desktop use case. Apollo had this problem licked for the local network years ago; the X way is a bit of a regression from the very non-standard way DomainOS did things. Vestiges of the DomainOS way still show up in the Andrew Filesystem, though.
You want the applications on a stable, stably networked server and the displays out where people work.
So, pardon the logic, you want the clients running on reliable servers and the servers running on the remote clients. (Yes, I know what I just said..... it's supposed to be humorous......). But think about cloud desktops for a moment, and think about dynamic cloud desktop service mobility that follows you (network-wise, for lowest latency) to give you the best user experience. (No, VDI is not doing this seamlessly yet).
On Wed, Apr 30, 2014 at 9:55 AM, Lamar Owen lowen@pari.edu wrote:
That hasn't been a reasonable assumption for anything running X, ever, and even less so with freenx/x2go.
Interestingly, X turns the whole client/server thing on its head..... and always has.
But freenx/NX/x2go put the big picture back the way it belongs. That is, both ends run proxy/caching stubs that can disconnect and reconnect from each other without breaking things. The host running the desktop (what you think of as the server) also runs a proxy X display server. The host with the physical display (what you think of as a client) runs a proxy client and server.
You want the applications on a stable, stably networked server and the displays out where people work.
So, pardon the logic, you want the clients running on reliable servers and the servers running on the remote clients. (Yes, I know what I just said..... it's supposed to be humorous......). But think about cloud desktops for a moment, and think about dynamic cloud desktop service mobility that follows you (network-wise, for lowest latency) to give you the best user experience. (No, VDI is not doing this seamlessly yet).
If you've never used NX or x2go, try it. You really do want that caching/proxy layer to deal with network latency and give you the ability to disconnect and pick up your still-running session from a different client - and I mean client in the logical sense. X2go even has a handy way to set up remote rdp sessions to windows targets over its ssh tunnel and caching layer.
On 04/30/2014 11:18 AM, Les Mikesell wrote:
But freenx/NX/x2go put the big picture back the way it belongs.
For certain uses I agree with that; for others, not so much. Seamlessly pulling applications from an application server to the display server has its distinct advantages, particularly for certain expensive commercial applications.
If you've never used NX or x2go, try it.
I've been using NX (both the commercial version and the free version) for remote telescope control use for over five years, acting as a proxy for Windows RDP. Works fine.
On Wed, Apr 30, 2014 at 10:45 AM, Lamar Owen lowen@pari.edu wrote:
On 04/30/2014 11:18 AM, Les Mikesell wrote:
But freenx/NX/x2go put the big picture back the way it belongs.
For certain usess I agree with that; for others, not so much. Seamlessly pulling applications from an application server to the display server has its distinct advantages, particularly for certain expensive commercial applications.
Not sure what you mean here or how it can be seamless, since there's no general requirement for the CPU or OS of the display to have anything in common with the system running the application - unless maybe it is java which doesn't need X for remoting. On the other hand, NX/x2go are running real X servers at the display end, so the same things should be possible with a little variation in the plumbing - and probably a loss of the ability to reconnect transparently.
If you've never used NX or x2go, try it.
I've been using NX (both the commercial version and the free version) for remote telescope control use for over five years, acting as a proxy for Windows RDP. Works fine.
X2go is approximately the same, just with open source clients and more current development. And if you've updated your Centos systems with the EPEL repo enabled recently, you are already running their version of the nx libs.
Lamar Owen wrote:
On 04/29/2014 02:42 PM, Steve Clark wrote:
This may be fine for users that don't know what they are doing or don't have a stable networking environment, but I have found for me it causes nothing but heartache.
<snip>
My experience? There is no such thing as a 100% stable networking environment. Systems like Tandem's NonStop take that a step further, and realize that there's no such thing as a 100% stable CPU, either.
Define "stable". please. I have servers (and I really, REALLY want to reboot them, but they're home directory or project servers, and so it's really hard to get to do that, since people have jobs that run for days or weeks, that have run flawlessly for > 300 days, with nothing vaguely significant problems.
This whole discussion reminds me of the SELinux discussions, and the oft-quoted advice to just disable it, it just gets in the way of The Way I'm Used To Doing Things (TM).
That's a complete misrepresentation of the other side of *that* argument.
The first thing I do is disable it. The sad part is that it makes us not understand what is really happening with our systems and when something doesn't work we have no idea where to look.
NetworkManager is well-documented. You just have to read the docs and be willing to try something new. It also logs to /var/log/messages in
WHY? I'm not a huge fan of "if it ain't broke, don't fix it", but "fixing" something that, 90% of the time, is no big deal to configure and run, by adding layers of complexity that both create new issues and break things that are set up a given way for a reason, does not endear it to me. <snip>
I have been using UNIX/BSD/Linux since the mid eighties and hate where things appear to be going - looking more and more like Windows. my $.02
Yup. Agreed ('91 for me, though I did try Coherent in the late 80's....).
Looking like Windows is not a capital crime. (No, I am not a Windows freak; I've used *nix of various types probably as long as you have, and I haven't used any Windows as my primary desktop of choice since Windows 95 was a pup, and have never used a Windows Server as my primary server of choice.)
*Looking* like it, in terms of GUI, isn't a killer (fvwm2, anyone?)... unless you're talking Lose 8, er, Win8. *Configuring* *Nix that way *is* a Bad Thing.
NetworkManager's goal is extremely simple, and is in the README. It's simply: "NetworkManager attempts to keep an active network connection available at all times." Networks are unreliable. Period. That's why
I boggle at this. I've not had unreliable networks, not any place I've worked, nor where I lived, and that goes back to dial-up in the far exurbs of Austin, TX. <snip>
Back in the late 1800's, people who had used tillers to steer their horseless carriages probably thought the same thing about this new fancy gizmo called a steering wheel. And automatic transmissions? Heresy!
That *does* come off as snide and supercilious, esp. in this specific forum, with the backgrounds of most of us. <snip>
but things move on as requirements change. (At least I can still have my vi! I HAVE used vi since the 80's, and it is still the same quirky beast it always was, even in Xenix V7 on the T6K.).
Just you wait: maybe we should all join some fedora list where we can vote, before they try to force us all to ... EMACS! (alt.religion.editors....)
Older does not mean better, and many times newer things have to be tried out first to see if they are, or aren't, better. Systemd is one of these things, and it will be interesting to see how that all plays out over the next few years.
Again, newer does not mean better, either. And if you're going to go on about the heresy of automatic transmissions, I'll throw back in your face that when I was young, the fabric of dungarees (blue jeans, er, "jeans" to you) had a weight of, I'd guess, about 14 or 16; these days, if you're really, really lucky, they might be 9, which is why they wear out so soon. And as for the quality of cell phones (oh, of course that's worn out, it's *soooo* old, it must be last year's model...)
mark
On 04/30/2014 10:41 AM, m.roth@5-cent.us wrote:
Define "stable". please.
I define stable in this context as 'behaving in a completely consistent and predictable fashion.'
I have servers that I really, REALLY want to reboot, but they're home directory or project servers, so it's really hard to get to do that, since people have jobs that run for days or weeks. Those servers have run flawlessly for > 300 days with nothing vaguely resembling a significant problem.
Truly stable systems allow rolling reboots with no interruption of services. EMC and others have had this licked for years with their storage arrays; Tandem had it solved for CPUs and RAM inside a single system image back in the 80's (even though it was a bit, ah, interestingly implemented). Truly stable systems remain stable even when their parts are unstable; a truly stable system will be stable even when every one of its constituent parts is inherently unstable. And a truly stable system is hard to make.
That's a complete misrepresentation of the other side of [the always disable SELinux] argument.
I said it reminds me of it, not that it's identical to it.
I'm not a huge fan of "if it ain't broke, don't fix it", but fixing something that, 90% of the time, is no big deal to configure and run, with layers of complexity that have both created new issues and broken things that are set up in a given way for a reason, does not endear it to me.
Reliable and highly available networking using the traditional Linux networking way is broken for many use cases, not all of which are desktop-oriented. It is broken, and it needs fixing, for those cases. And I *am* a fan of 'if it ain't broke don't fix it.'
*Configuring* *Nix [the Windows] way *is* a Bad Thing.
A Bad Thing is not a capital crime, and Windows does do some things right, as much as I don't like saying that.
That *does* come off as snide and supercilious, esp. in this specific forum, with the backgrounds of most of us.
I really try hard to not be snide or offend very often, but the idea that something needs to stay a certain way either just because it's always been that way or because we can't do it the way someone else who we don't like has done it deserves a bit of a reality check, really. Or do we want to go back to the Way It Was Done before this pun called Unix launched? I've run ITS on an emulated DECsystem 10 in SIMH; I'm glad a better way was developed.
The Perl mantra is and always has been 'there's more than one way to do it.' NetworkManager is a different way to do it, and while far from perfect it is the means Red Hat has decided to use in EL7.
Just you wait: maybe we should all join some fedora list where we can vote, before they try to force us all to ... EMACS!
And, if there were no alternatives I'd use it. It's not that big of a deal to learn something different, even as busy as I am. Who knows, I might even find that I like it.
Older does not mean better, and many times newer things have to be tried out first to see if they are, or aren't, better.
Again, newer does not mean better, either.
Very correct; and in EL6 at least you can use the older way or the newer way. But if the newer way can be fixed to meet Red Hat's needs, then they're going to use it. If it can't, well, the RH distributions' histories prove that they're not afraid to pull the new and go with something else, too, when the need arises.
On Wed Apr 30 11:22:56 AM, Lamar Owen wrote:
I really try hard to not be snide or offend very often, but the idea that something needs to stay a certain way either just because it's always been that way or because we can't do it the way someone else who we don't like has done it deserves a bit of a reality check, really. Or do we want to go back to the Way It Was Done before this pun called Unix launched? I've run ITS on an emulated DECsystem 10 in SIMH; I'm glad a better way was developed.
I deleted my first reply. But you've twice used this argument and I'm afraid I can't let it pass.
I find this common argument execrable. It seems to suggest that if I don't accept and embrace the new things that you do, I'm somehow a Luddite or my thinking is backwards. Is all your money in bitcoins yet?
I run CentOS because I want stability. It works and I know how to work it. When something like this is changed, there is an opportunity cost for having to figure out how to get it back to the way I want it to be (compare to recent issues with Mozilla Chrome, uh, Firefox 29). In the aggregate, how much time will be wasted by admins getting this to work when 7 comes out?
Cheers, Zube
On Wed, Apr 30, 2014 at 10:39 AM, Zube Zube@stat.colostate.edu wrote:
I run CentOS because I want stability. It works and I know how to work it. When something like this is changed, there is an opportunity cost for having to figure out how to get it back to the way I want it to be (compare to recent issues with Mozilla Chrome, uh, Firefox 29). In the aggregate, how much time will be wasted by admins getting this to work when 7 comes out?
Yes, in enterprise environments there is a huge development/testing cost for every change that has to be made in configurations or operating procedures. I think it is unfortunate that there is no standard defined for configuration files or tools to stabilize things and make common operations across platforms possible in spite of the bizarre differences each vendor tries to add. Something like POSIX for system management...
On 04/30/2014 12:02 PM, Les Mikesell wrote:
I think it is unfortunate that there is no standard defined for configuration files or tools to stabilize things and make common operations across platforms possible in spite of the bizarre differences each vendor tries to add. Something like POSIX for system management...
But, Les, we'd have to make changes to get things standardized. It would be nice if the standard already existed, but it does not.
On Wed, Apr 30, 2014 at 11:14 AM, Lamar Owen lowen@pari.edu wrote:
On 04/30/2014 12:02 PM, Les Mikesell wrote:
I think it is unfortunate that there is no standard defined for configuration files or tools to stabilize things and make common operations across platforms possible in spite of the bizarre differences each vendor tries to add. Something like POSIX for system management...
But, Les, we'd have to make changes to get things standardized. It would be nice if the standard already existed, but it does not.
Yes, I blame all our economic problems on the wastefulness of duplicated effort in learning to manage computers. That and everyone having to stock a near-infinite number of printer ink cartridges. Imagine what you could accomplish with a more productive use of all those smart person-hours and real estate.
Les Mikesell wrote:
On Wed, Apr 30, 2014 at 11:14 AM, Lamar Owen lowen@pari.edu wrote:
On 04/30/2014 12:02 PM, Les Mikesell wrote:
<snip>
Yes, I blame all our economic problems on the wastefulness of duplicated effort in learning to manage computers. That and everyone having to stock a near-infinite number of printer ink cartridges. Imagine what you could accomplish with a more productive use of all those smart person-hours and real estate.
Stocking all those toner cartridges? You've seen my basement server, er, sorry, "computer lab"?
mark "well, they said try to get three years' worth, with the sequester and all...."
On Wed, Apr 30, 2014 at 1:04 PM, m.roth@5-cent.us wrote:
Les Mikesell wrote:
On Wed, Apr 30, 2014 at 11:14 AM, Lamar Owen lowen@pari.edu wrote:
On 04/30/2014 12:02 PM, Les Mikesell wrote:
<snip>
Yes, I blame all our economic problems on the wastefulness of duplicated effort in learning to manage computers. That and everyone having to stock a near-infinite number of printer ink cartridges. Imagine what you could accomplish with a more productive use of all those smart person-hours and real estate.
Stocking all those toner cartridges? You've seen my basement server, er, sorry, "computer lab"?
No, everyone is in the same boat in terms of the damage from lack of interoperability standards. Makes me wonder why we have cars that are all approximately the correct widths to fit on a road and brake and accelerator pedals in the same relative positions.
Les Mikesell wrote:
On Wed, Apr 30, 2014 at 1:04 PM, m.roth@5-cent.us wrote:
Les Mikesell wrote:
On Wed, Apr 30, 2014 at 11:14 AM, Lamar Owen lowen@pari.edu wrote:
On 04/30/2014 12:02 PM, Les Mikesell wrote:
<snip>
Yes, I blame all our economic problems on the wastefulness of duplicated effort in learning to manage computers. That and everyone having to stock a near-infinite number of printer ink cartridges. Imagine what you could accomplish with a more productive use of all those smart person-hours and real estate.
Stocking all those toner cartridges? You've seen my basement server, er, sorry, "computer lab"?
No, everyone is in the same boat in terms of the damage from lack of interoperability standards. Makes me wonder why we have cars that are all approximately the correct widths to fit on a road and brake and accelerator pedals in the same relative positions.
a) The human fits to where the pedals are. b) I still go with the Roman milspec on main vehicle wheel widths....
mark
On 4/30/2014 11:13 AM, m.roth@5-cent.us wrote:
Makes me wonder why we have cars that are
all approximately the correct widths to fit on a road and brake and accelerator pedals in the same relative positions.
a) The human fits to where the pedals are. b) I still go with the Roman milspec on main vehicle wheel widths....
Model T"s had the throttle on the steering wheel, along with a manual ignition advance, the wheel brake was a hand lever, and the left pedal (where you'd expect a clutch) operated the bands on a planetary transmission. foot down on the left pedal was low gear, foot up was high gear. the middle pedal is reverse. the rightmost pedal (where you'd expect gas) is a transmission brake.
On 04/30/2014 11:39 AM, Zube wrote:
I find this common argument execrable. It seems to suggest that if I don't accept and embrace the new things that you do, I'm somehow a Luddite or my thinking is backwards.
That's not what I think, nor is it what I said. Being unwilling to even try something new is being a Luddite; going back to the old because the new isn't working is not being a Luddite. Being unwilling to try a newer version of something that didn't work previously is also being a Luddite. Isn't there a middle ground between 'love it' and 'hate it?' I *am* a big fan of 'if it ain't broke don't fix it' but the old way for some use cases is indeed broken.
But the simple fact is that NetworkManager is with us for a long time to come. You don't have to use it if you don't need its particular strengths, or if its particular weaknesses get in the way, but it is there and will be there for at least ten years. Like any other piece of software it has its advantages and disadvantages; use what fits for your situation.
While this paragraph started life being tagged as a snide remark, perhaps it's not; it's certainly not meant to be snide this time. I don't see too many automobiles with tillers these days, nor do I see many first-generation steering wheels. But I see lots of 'double tillers' all the time (as handlebars are in essence double tillers). The double tiller works marvellously well for the motorcycle use case; can you imagine a motorcycle with a steering wheel (they may exist, but I've not personally seen one)?
Is all your money in bitcoins yet?
None of my money is in bitcoin, although I've wondered if the EPIC VLIW architecture of the IA-64 wouldn't be ideal for mining purposes.
I run CentOS because I want stability.
As do I, for that particular meaning of 'stability.' And I have C5 machines in production, and they'll be in production until end of support. Heh, I still have a Red Hat Linux 5.2 machine in (not connected to the Internet) production.
In the aggregate, how much time will be wasted by admins getting this to work when 7 comes out?
Is learning a different way of doing things always a waste of time? But then again, I've always enjoyed learning new things, and learning new ways to use old things (after all, I'm in the process of rebuilding a TRS-80 Model 4P with a new hard disk interface that uses SD cards simply because I find it to be fun). That is one reason I have the job that I do; learning new ways of using old things is part of my official job description, although not in those exact words.
On Wed Apr 30 12:12:41 PM, Lamar Owen wrote:
That's not what I think, nor is it what I said.
Quote 1:
Back in the late 1800's, people who had used tillers to steer their horseless carriages probably thought the same thing about this new fancy gizmo called a steering wheel. And automatic transmissions? Heresy!
Quote 2:
I really try hard to not be snide or offend very often, but the idea that something needs to stay a certain way either just because it's always been that way or because we can't do it the way someone else who we don't like has done it deserves a bit of a reality check, really. Or do we want to go back to the Way It Was Done before this pun called Unix launched? I've run ITS on an emulated DECsystem 10 in SIMH; I'm glad a better way was developed.
I dunno. "Heresy!" "reality check, really." Sure seems to be the case to me. You certainly aren't praising people who don't embrace the change you do. I'll drop it and let others decide.
Being unwilling to even try something new is being a Luddite; going back to the old because the new isn't working is not being a Luddite. Being unwilling to try a newer version of something that didn't work previously is also being a Luddite. Isn't there a middle ground between 'love it' and 'hate it?'
Yes, of course.
I *am* a big fan of 'if it ain't broke don't fix it' but the old way for some use cases is indeed broken.
Sure. Given that I have no need of NM, what part is broken that NM fixes for me? Or do the "some use cases" not apply to anyone who uses CentOS on static IP desktops?
[snippity]
Is learning a different way of doing things always a waste of time?
Of course not, but alas, my time is limited. If I had nothing else to occupy my time, changes such as these would not trouble me so. What is very expensive, from an opportunity cost standpoint, is to have to learn to do something in a new way that does not bring me any new benefit. Perhaps I'm mistaken about this (goodness knows "mistake maker" is etched on the business cards I don't have), but whenever new, more complex things replace simple things "for my own good", I know that I'll be spending a chunk of time that I could have spent in more fruitful pursuits.
Cheers, Zube
On 04/30/2014 12:40 PM, Zube wrote:
I dunno. "Heresy!" "reality check, really." Sure seems to be the case to me. You certainly aren't praising people who don't embrace the change you do. I'll drop it and let others decide.
'Not embracing' and 'being actively antagonistic to any change' are two different things. The Luddite is antagonistic to any change; one who is just cautious is careful about what one embraces. The tiller versus steering wheel analogy was a bit of hyperbole, and was meant to be. The reality check is that things are moving on, and if one wants one's skills to stay current one must learn those skills, even if one doesn't embrace the changes that require those new skills. That's the middle ground; not actively for or against, just staying up to date on the state of the art. And I'm neither rabidly for NM, nor am I rabidly against NM, but since it's there I'm going to take the time to learn why it's there and see if I can use it in those cases where it makes sense to use it, just like any other technology I'm considering.
Sure. Given that I have no need of NM, what part is broken that NM fixes for me?
Are you sure you will never have need for NM?
Or do the "some use cases" not apply to anyone who uses CentOS on static IP desktops?
Totally static desktops, no.
Lamar Owen wrote:
On 04/30/2014 12:40 PM, Zube wrote:
<snip>
Sure. Given that I have no need of NM, what part is broken that NM fixes for me?
Are you sure you will never have need for NM?
Or do the "some use cases" not apply to anyone who uses CentOS on static IP desktops?
Totally static desktops, no.
At work, we *only* give out IPs to MAC addresses in the dhcpd configuration files. I'm thinking to do this at home, too. (Heh - try a driveby logon to *my* home network....)
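For anyone who hasn't set it up, a reservation in dhcpd.conf is just a host block along these lines (the MAC and address here are placeholders):

host ws01 {
    hardware ethernet 00:16:3e:12:34:56;
    fixed-address 192.168.1.50;
}

With no dynamic range declared, anything not listed simply doesn't get a lease.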
mark
Lamar Owen wrote: <snip>
But the simple fact is that NetworkManager is with us for a long time to come. You don't have to use it if you don't need its particular
<snip> Which leads to a thought: you said that the time to "vote" on NM was long past. My response was that none of *us* saw that vote or were solicited, and I hope that, now that we're partnered with upstream, we might be.
I do have a reason for that hope... remember the thread a month or so ago, where *we* *were* asked about tcp-wrappers? For things that mean major changes - systemd, NM, etc, I'm hoping that, in the future, we *also* get solicited in the same way for our views.
mark
On 04/30/2014 01:48 PM, m.roth@5-cent.us wrote:
I do have a reason for that hope... remember the thread a month or so ago, where *we* *were* asked about tcp-wrappers?
Yes; I don't recall if I commented or not.
For things that mean major changes - systemd, NM, etc, I'm hoping that, in the future, we *also* get solicited in the same way for our views.
And it was the Fedora train that solicited the input, not the EL train (using a Cisco-speak term).
And maybe this community's input will have a bearing... on EL8. EL7 is pretty much a done deal, and EL6 is way past a done deal.
On 04/30/2014 11:22 AM, Lamar Owen wrote:
On 04/30/2014 10:41 AM, m.roth@5-cent.us wrote:
Define "stable". please.
I define stable in this context as 'behaving in a completely consistent and predictable fashion.'
I have servers that I really, REALLY want to reboot, but they're home directory or project servers, so it's really hard to get to do that, since people have jobs that run for days or weeks. Those servers have run flawlessly for > 300 days with nothing vaguely resembling a significant problem.
Truly stable systems allow rolling reboots with no interruption of services. EMC and others have had this licked for years with their storage arrays; Tandem had it solved for CPUs and RAM inside a single system image back in the 80's (even though it was a bit, ah, interestingly implemented). Truly stable systems remain stable even when their parts are unstable; a truly stable system will be stable even when every one of its constituent parts is inherently unstable. And a truly stable system is hard to make.
That's a complete misrepresentation of the other side of [the always disable SELinux] argument.
I said it reminds me of it, not that it's identical to it.
I'm not a huge fan of "if it ain't broke, don't fix it", but fixing something that, 90% of the time, is no big deal to configure and run, with layers of complexity that have both created new issues and broken things that are set up in a given way for a reason, does not endear it to me.
Reliable and highly available networking using the traditional Linux networking way is broken for many use cases, not all of which are desktop-oriented. It is broken, and it needs fixing, for those cases. And I *am* a fan of 'if it ain't broke don't fix it.'
*Configuring* *Nix [the Windows] way *is* a Bad Thing.
A Bad Thing is not a capital crime, and Windows does do some things right, as much as I don't like saying that.
What I meant about Windows is that everything seems to be hidden behind some GUI interface, which leads people to not really understand the underpinnings of what is truly happening. NM seems akin to this, at least as of the last time I tried to use it, several years ago.
I work in a development environment where we are constantly adding and removing systems and connections and for me it just gets in the way. I can quickly type ip a a ..., ip r a ... and be done with it.
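Spelled out, those abbreviations expand to something like the following (addresses are placeholders, and of course nothing done this way persists across a reboot):

ip addr add 192.168.1.50/24 dev eth0        # "ip a a ..."
ip route add 10.1.0.0/16 via 192.168.1.1    # "ip r a ..."
ip addr del 192.168.1.50/24 dev eth0        # and just as quick to undo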
That *does* come off as snide and supercilious, esp. in this specific forum, with the backgrounds of most of us.
I really try hard to not be snide or offend very often, but the idea that something needs to stay a certain way either just because it's always been that way or because we can't do it the way someone else who we don't like has done it deserves a bit of a reality check, really. Or do we want to go back to the Way It Was Done before this pun called Unix launched? I've run ITS on an emulated DECsystem 10 in SIMH; I'm glad a better way was developed.
The Perl mantra is and always has been 'there's more than one way to do it.' NetworkManager is a different way to do it, and while far from perfect it is the means Red Hat has decided to use in EL7.
Just you wait: maybe we should all join some fedora list where we can vote, before they try to force us all to ... EMACS!
And, if there were no alternatives I'd use it. It's not that big of a deal to learn something different, even as busy as I am. Who knows, I might even find that I like it.
Older does not mean better, and many times newer things have to be tried out first to see if they are, or aren't, better.
Again, newer does not mean better, either.
Very correct; and in EL6 at least you can use the older way or the newer way. But if the newer way can be fixed to meet Red Hat's needs, then they're going to use it. If it can't, well, the RH distributions' histories prove that they're not afraid to pull the new and go with something else, too, when the need arises.
On Wed, Apr 30, 2014 at 11:54 AM, Steve Clark sclark@netwolves.com wrote:
What I meant about Windows is that everything seems to be hidden behind some GUI interface, which leads people to not really understand the underpinnings of what is truly happening. NM seems akin to this, at least as of the last time I tried to use it, several years ago.
I work in a development environment where we are constantly adding and removing systems and connections and for me it just gets in the way. I can quickly type ip a a ..., ip r a ... and be done with it.
You do know that Windows servers have a fairly complete set of command line options, don't you?
On 04/30/2014 01:01 PM, Les Mikesell wrote:
On Wed, Apr 30, 2014 at 11:54 AM, Steve Clark sclark@netwolves.com wrote:
What I meant about Windows is that everything seems to be hidden behind some GUI interface, which leads people to not really understand the underpinnings of what is truly happening. NM seems akin to this, at least as of the last time I tried to use it, several years ago.
I work in a development environment where we are constantly adding and removing systems and connections and for me it just gets in the way. I can quickly type ip a a ..., ip r a ... and be done with it.
You do know that Windows servers have a fairly complete set of command line options, don't you?
Well, the one and only time I configured an interface on Windows from the command line, I couldn't believe I had to type some great big string to identify the interface. Of course, I had looked up how to do it on the internet, so there may have been a shorter way to do it.
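If memory serves, the incantation was something along these lines (interface name and addresses are placeholders, and there may well be a terser form):

netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1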
I guess, coming from a history of starting out on an IBM 1130 and proceeding through Burroughs, NCR, and Data General OSes and hardware, I just got used to understanding things at a very low level and doing them without the help of some fancy GUI.
Les Mikesell wrote:
On Wed, Apr 30, 2014 at 11:54 AM, Steve Clark sclark@netwolves.com wrote:
What I meant about Windows is that everything seems to be hidden behind some GUI interface, which leads people to not really understand the underpinnings of what is truly happening. NM seems akin to this, at least as of the last time I tried to use it, several years ago.
I work in a development environment where we are constantly adding and removing systems and connections and for me it just gets in the way. I can quickly type ip a a ..., ip r a ... and be done with it.
You do know that Windows servers have a fairly complete set of command line options, don't you?
That depends on how tight management has them locked down...
mark "I know you're in your aa account, and you installed this inventory software, but you *can't* delete that old log file in that directory created during testing...."
Lamar Owen wrote:
My experience? There is no such thing as a 100% stable networking environment.
I agree that WiFi networking is difficult, but ethernet networking, in my experience, is 99.9% stable. I wish NM would just stick to WiFi.
NetworkManager is well-documented.
Where? I haven't come across any documents that explain clearly how NM is meant to work, or, e.g., what files it is reading.
It also logs to /var/log/messages in plain text, too.
I find the NM messages in /var/log/messages ludicrously verbose; and even after wading through these messages it is difficult to determine exactly what is wrong. In my view the NM developers should spend a little time making these messages more helpful.
NetworkManager's goal is extremely simple, and is in the README. It's simply: "NetworkManager attempts to keep an active network connection available at all times." Networks are unreliable. Period.
WiFi networks are unreliable - in fact if you study the algorithms involved it is almost a miracle (in my view) that they work at all. Ethernet networks exchange packets in a completely different way, and are very reliable, and also easy to understand.
Older does not mean better, and many times newer things have to be tried out first to see if they are, or aren't, better. Systemd is one of these things, and it will be interesting to see how that all plays out over the next few years.
Personally, I see the advantages of systemd. But not nearly enough trouble has been taken, in my view, to make it simple to use. Just the fact that one has to type more characters to get to the same place (e.g. "systemctl start whatever.service" in place of "service whatever start") shows a lack of consideration for users.
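A trivial shell alias claws back a few of the keystrokes, though it doesn't change the underlying point:

alias sc='systemctl'
sc start whatever.service

One shouldn't have to resort to that, though.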
On 04/30/2014 10:57 AM, Timothy Murphy wrote:
I agree that WiFi networking is difficult, but ethernet networking, in my experience, is 99.9% stable.
Sure is; but we do bonding for a reason.
I wish NM would just stick to WiFi.
There are other interfaces, like various VPN's and WWAN cards, where NM's notion that non-bootup connections belong to users is a useful thing.
I haven't come across any documents that explain clearly how NM is meant to work, or, e.g., what files it is reading.
The upstream documentation has some info; the man pages (nm-applet(1), nm-connection-editor(1), nm-online(1), nm-tool(1), nmcli(1), NetworkManager.conf(5), nm-system-settings.conf(5), and NetworkManager(8)) all have useful information. I'm sure it could be improved, but so far it's been useful to me. I should probably edit my initial sentence to 'NetworkManager is fairly well documented' instead, I guess.
I find the NM messages in /var/log/messages ludicrously verbose; and even after wading through these messages it is difficult to determine exactly what is wrong. In my view the NM developers should spend a little time making these messages more helpful.
Agreed. They're almost as opaque as SELinux avc denials.
Lamar Owen wrote:
On 04/30/2014 10:57 AM, Timothy Murphy wrote:
<snip>
The upstream documentation has some info; the man pages (nm-applet(1), nm-connection-editor(1), nm-online(1), nm-tool(1), nmcli(1), NetworkManager.conf(5), nm-system-settings.conf(5), and NetworkManager(8)) all have useful information. I'm sure it could be improved, but so far it's been useful to me. I should probably edit my initial sentence to 'NetworkManager is fairly well documented' instead, I guess.
Great - that many manpages....
I find the NM messages in /var/log/messages ludicrously verbose; and even after wading through these messages it is difficult to determine exactly what is wrong. In my view the NM developers should spend a little time making these messages more helpful.
Agreed. They're almost as opaque as SELinux avc denials.
Just like what Lose, I mean, WinDoze, logs... paragraph long "error messages" that are mostly useless and information-free.
mark
On 04/30/2014 02:10 PM, m.roth@5-cent.us wrote:
Just like what Lose, I mean, WinDoze, logs... paragraph long "error messages" that are mostly useless and information-free.
As long as there is unique information to google, it will work out. And while I detest them, the Windows hexadecimal codes are very good for googling.
I just want to see BugHlt:SckMud again. ( https://groups.google.com/forum/#!topic/comp.sys.tandy/rpZRWj9Y0nE ) At least let me laugh when it all comes crashing down. And, yes, you probably do want to read that post, but do it on your break. It's one of the most classic Usenet posts of all time.
I used the avc denial messages as an example for a reason; there is a tool that will help you with those. A similar troubleshooting tool for NM messages could (and should) be written.
Em 27-04-2014 01:33, Evan Rowley escreveu:
Is anyone frustrated by Network Manager? I wish CentOS just used the basic configuration files like the ones on BSD-style OSes. Those are so simple in comparison.
Each time I reboot, it seems like the configuration file I create for Network Manager gets destroyed and replaced with a default file. Nothing in the default file would actually make sense on my network, so I'm not even really sure how this machine is still connected to the network after a reboot destroys my previous configuration.
The only way I seem to be able to keep my proper DNS settings is through the GUI interface to Network Manager. I have to enter the configuration in each time I reboot. At the very least, I just want to stop Network Manager from wiping out my perfectly fine /etc/resolv.conf.
There has to be a better way.
Your report is weird because NM should be able to work with your standard ifcfg-<interface> files, as described in https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/...
If you think it's NM, a 'service NetworkManager restart' probably would reproduce the issue, and we could troubleshoot from there. If not, then something else is removing it and NM is just putting something where it was blank.
Nevertheless, it's going to get much better on CentOS 7. NM has seen a lot of work and now includes a 'nmcli' command for managing NM from the console and in scripts.
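For reference, a minimal static ifcfg-eth0 that NM is supposed to honor would look something like this (addresses are placeholders):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.50
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.2
NM_CONTROLLED=yes

With DNS1/DNS2 set there, NM writes them into /etc/resolv.conf itself rather than clobbering it. On CentOS 7 you can also check what NM thinks it is managing with 'nmcli device status' and 'nmcli connection show'.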
Marcelo