I need to use the old network-scripts and not NetworkManager.
I did yum install network-scripts, and I have ifcfg-eth0 set with ONBOOT=yes, but it is not starting on boot.
What have I missed?
Jerry
On Thu, Oct 03, 2019 at 02:42:54PM -0400, Jerry Geis wrote:
I need to use the old network-scripts and not NetworkManager. I did yum install network-scripts, and I have ifcfg-eth0 set with ONBOOT=yes, but it is not starting on boot. What have I missed?
systemctl status network
systemctl status network
AT BOOT:
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; generated)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)
After service network restart:
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; generated)
   Active: active (running) since Thu 2019-10-03 15:12:05 EDT; 7s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 7755 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
    Tasks: 1 (limit: 24034)
   Memory: 8.7M
   CGroup: /system.slice/network.service
           └─7940 /sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient-6ada23ed-d1ad-4f37-935c-86163fe61e7b-eth0.lease -pf /run/dhclient-eth0.pid eth0
Oct 03 15:12:02 localhost.localdomain network[7755]: WARN : [network] 'network-scripts' will be removed in one of the next major releases of RHEL.
Oct 03 15:12:02 localhost.localdomain network[7755]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management.
Oct 03 15:12:02 localhost.localdomain network[7755]: [46B blob data]
Oct 03 15:12:02 localhost.localdomain network[7755]: Bringing up interface eth0:
Oct 03 15:12:02 localhost.localdomain dhclient[7907]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x75ae6376)
Oct 03 15:12:02 localhost.localdomain dhclient[7907]: DHCPACK from 10.0.2.2 (xid=0x75ae6376)
Oct 03 15:12:04 localhost.localdomain dhclient[7907]: bound to 10.0.2.15 -- renewal in 34365 seconds.
Oct 03 15:12:04 localhost.localdomain network[7755]: Determining IP information for eth0... done.
Oct 03 15:12:04 localhost.localdomain network[7755]: [13B blob data]
Oct 03 15:12:05 localhost.localdomain systemd[1]: Started LSB: Bring up/down networking.
Contents of ifcfg-eth0:
# Generated by parse-kickstart
TYPE="Ethernet"
DEVICE="eth0"
UUID="6ada23ed-d1ad-4f37-935c-86163fe61e7b"
ONBOOT="yes"
BOOTPROTO="dhcp"
IPV6INIT="yes"
Why is it not starting at boot? Thanks,
Jerry
On Thu, 3 Oct 2019, Jerry Geis wrote:
...
Why is it not starting at boot?
I'd take a look at what NetworkManager thinks about it:
nmcli connection show eth0 | grep autoconnect:
If it's not set to 'yes', then you'll want to do so:
nmcli connection modify eth0 connection.autoconnect yes
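If it took, the check above should then show something like this (spacing approximate; format as I remember nmcli printing it):

connection.autoconnect:                 yes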
As to the 'why,' I don't know. Here's the official explanation:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm...
On 10/3/19 9:57 PM, Paul Heinlein wrote:
...
In ifcfg-eth0 you need:
NM_CONTROLLED=no
and/or to disable NetworkManager:
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
On 03.10.2019 at 21:14, Jerry Geis wrote:
...
Why is it not starting at boot? Thanks,
Jerry
Set
NM_CONTROLLED=no
Alexander
On Thu, 2019-10-03 at 15:14 -0400, Jerry Geis wrote:
...
Why is it not starting at boot?
Don't run systemctl disable NetworkManager. You need to mask the service to ensure interdependent services are not starting it up behind your back, i.e.:
systemctl mask NetworkManager
Also, when you start the network service, ensure it is set to start on boot, e.g.:
systemctl enable --now network
Finally, heed the advice in the log: this is going away. I've had very few issues with NetworkManager from ~7.4 onward and would suggest giving it a go. I find it much easier to work with in scripts.
Tris
On 10/3/19 2:42 PM, Jerry Geis wrote:
I need to use the old network-scripts and not NetworkManager.
Why? I'd like to understand more about the use case where this is a requirement.
On Fri, Oct 4, 2019 at 6:26 AM Jim Perrin jperrin@centos.org wrote:
On 10/3/19 2:42 PM, Jerry Geis wrote:
I need to use the old network-scripts and not NetworkManager.
Why? I'd like to understand more about the use case where this is a requirement.
One example we have is qemu virtual machine hosts where setting up the bridge in the ifcfg scripts is easier and avoiding NetworkManager messing things up in a non-intuitive way is critical.
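For illustration, the sort of ifcfg pair I mean, a bridge plus an enslaved NIC (interface names and addresses here are just examples):

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0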
Also, we have 150+ machines with fixed IP addresses, always-on connections, and no wireless. Having NetworkManager do seemingly random things is not desirable.
FWIW we disable NetworkManager with systemctl in our postinstall kickstart scripts and it seems to do what we want.
On 10/4/19 2:27 PM, Phelps, Matthew wrote:
...
+1
Bridge for VMs is the main reason I hate NM. I now juggle both NM and a br0 controlled by the network service because I use a Windows VM on my laptop. As soon as you disconnect the LAN cable, your eth and bridge connections are gone, and stupid KVM cannot recover and reconnect to the newly activated bridge when you plug the LAN cable back in, even only a second later...
Once upon a time, Ljubomir Ljubojevic centos@plnet.rs said:
Bridge for VMs is the main reason I hate NM. ...
See the NetworkManager-config-server package.
On 10/4/19 3:03 PM, Chris Adams wrote:
...
See the NetworkManager-config-server package.
Ahh, thanks. I was wondering about it but never investigated.
On 10/4/19 10:00 AM, Ljubomir Ljubojevic wrote:
On 10/4/19 3:03 PM, Chris Adams wrote:
... See the NetworkManager-config-server package.
Ahh, thanks. I was wondering about it but never investigated.
Hmmmm.....
Description: This adds a NetworkManager configuration file to make it behave more like the old "network" service. In particular, it stops NetworkManager from automatically running DHCP on unconfigured ethernet devices, and allows connections with static IP addresses to be brought up even on ethernet devices with no carrier. This package is intended to be installed by default for server deployments.
++++++++++
Well, learn something new every day.... nice. Time to learn a bit more about what it will do, and see about deploying it to our KVM hosts.....
I've not had the bridged network issues some seem to have been plagued with, and I have several KVM hosts with bridged networking (with multiple VLANs) using NetworkManager (using nmtui to configure a bridge isn't hard). I decided to configure it that way just to see how easy or hard it was to do with NM, and to test its stability, and after passing testing under load I popped it into production, running a few Windows 7 guests and a couple of CentOS 7 guests.
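If I recall correctly, the shipped file is just a small drop-in along these lines (path and contents from memory, so double-check on your own box):

# /usr/lib/NetworkManager/conf.d/00-server.conf
[main]
no-auto-default=*
ignore-carrier=*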
On 10/4/19 4:42 PM, Lamar Owen wrote:
... I've not had the bridged network issues some seem to have been plagued with, and I have several KVM hosts with bridged networking (with multiple VLANs) using NetworkManager ...
It is OK if your KVM host is on a LAN cable that is never disconnected and the power never goes down. But I have a laptop I use first at work, where I use LAN, and then at home, where I use WLAN only, and suspending the laptop is the same as disconnecting the LAN: the bridge is disabled and the KVM bridged network unhooked, and you can never reinitialize it without at least restarting KVM; the full treatment is shutting down the VM, restarting NM, then network, then starting the VM again... So I just shut down the VM and the laptop and boot every time I move. Maybe I can change this behavior now.
On 10/4/19 11:02 AM, Ljubomir Ljubojevic wrote:
... suspending the laptop is the same as disconnecting the LAN: the bridge is disabled and the KVM bridged network unhooked ... Maybe I can change this behavior now.
You and I have nearly identical use cases, interestingly enough. My laptop that I'm using right now to type this is my development machine for a number of KVM things I do in the data center as well. Since I run it docked with ethernet on my desk, but not docked and on WiFi at home, I've had to do two things: 1.) A real shutdown when I leave work. For some reason I've never been a fan of suspend/hibernate, and since I use LUKS I'd rather not leave the volume unlocked as it would be in a suspend/hibernate scenario; 2.) NAT-connected VMs in development, since I've never been able to get bridging to work properly over wireless (the specification says it can't work, and I think that's true in practice, but I always reserve the right to be wrong!).
My laptop is at least as powerful as most of our servers, and it works great for development purposes.
On 10/4/19 5:27 PM, Lamar Owen wrote:
... 2.) NAT-connected VMs in development, since I've never been able to get bridging to work properly over wireless ...
I have VMs in NAT mode mostly these days, but sometimes I need a bridged network to recognize some hardware on the network, Mikrotik WiFi routers or printers, so I need the ability to switch to a bridge.
If this NetworkManager-config-server package works, I can most of the time (if I want) plug a LAN cable into my laptop and be happy. I do not use LUKS, so suspending until I get home 10 minutes later is OK.
My laptop is at least as powerful as most of our servers, and it works great for development purposes.
I have a Dell Vostro 15 with a Core i7, 12GB RAM, and a 512GB SSD + 1TB HDD.
On 10/4/19 11:39 AM, Ljubomir Ljubojevic wrote:
... sometimes I need a bridged network to recognize some hardware on the network ...
I've kludged together a solution for those times here by using the NAT connection, but then running an OpenVPN client on the guest to an OpenVPN server with layer-2 adjacency to those sorts of devices. That has the added bonus of letting those layer-2 services work even from off-site (part of the reason I use LUKS!). I use static addresses in the OpenVPN setup as well, allowing controlled access to certain resources (like the control interface addresses and ports to our two 26-meter radio telescopes).
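A bare-bones sketch of the idea (not our actual config; hostnames and addresses here are made up):

# server side, with tap0 bridged into the target layer-2 segment
dev tap0
server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220

# client side, on the guest
client
dev tap
remote vpn.example.org 1194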
If this NetworkManager-config-server package works, I can most of the time (if I want) plug a LAN cable into my laptop and be happy.
I am interested in what you find!
I have a Dell Vostro 15 with a Core i7, 12GB RAM, and a 512GB SSD + 1TB HDD.
Dell Precision M6700 with Core i7-3740QM @ 2.7GHz, 24GB RAM, 500GB SSD plus 2x 1TB HGST 7K1000's. I never buy new, always gently preowned, and it's amazing to me how well the 3740QM performs relative to newer stuff.... and I paid less than 10% of MSRP for it....
On 10/4/19 5:55 PM, Lamar Owen wrote:
On 10/4/19 11:39 AM, Ljubomir Ljubojevic wrote:
... running an OpenVPN client on the guest to an OpenVPN server with layer-2 adjacency to those sorts of devices ...
I also have an OpenVPN server (on a Mikrotik router in our office) and an OpenVPN client in the Windows VM, and I use it in the same manner as you do :-)
On 2019-10-04 10:27, Lamar Owen wrote:
...
I wonder if it is possible to do what I do on my FreeBSD laptop: there I created a link aggregation interface which includes the wired adapter and the wireless one (in that priority order), making networking act "as smart as a Macintosh does" ;-). I'm sure one of the Linux experts can point us in the right direction (at the moment I just use the GUI applet to enable interfaces, etc.).
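Something like this might be the Linux equivalent, an active-backup bond that prefers the wire (untested sketch; interface names are examples, and I believe WiFi enslaving requires fail_over_mac=active to be set before adding slaves):

modprobe bonding
ip link add bond0 type bond mode active-backup miimon 100
echo active > /sys/class/net/bond0/bonding/fail_over_mac
ip link set eth0 down; ip link set eth0 master bond0
ip link set wlan0 down; ip link set wlan0 master bond0
echo eth0 > /sys/class/net/bond0/bonding/primary
ip link set bond0 up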
Valeri
On 2019-10-04 08:03, Chris Adams wrote:
Once upon a time, Ljubomir Ljubojevic centos@plnet.rs said:
Bridge for VMs is the main reason I hate NM.
+1
My impression is the younger generation doesn't value rules that programmers followed 2-3 decades ago, one of which is:
Do not make any changes [in the program] unless they are absolutely necessary.
This rule helped avoid introducing new bugs. Debugging is a really expensive process (that is why it often gets abridged in favor of spending effort on yet more "new features" - see, e.g., Firefox and friends).
Yet one more thing is: building superstructure on top of what actually works. NM is one example. The printer configuration tool is another (whereas the CUPS web interface - http://localhost:631 - is just as simple, and is even better). I understand the potential goal: to give newcomers a way to handle things (by pointing, clicking, and "it works" ;-). But there is a limit to the extent Linux can steal Microsoft's userbase. At some point having your machine behave like an iPad gets so annoying that some Linux folks either flee their DE (Desktop Environment) for something "more traditional", e.g. MATE, or go to greater lengths and flee their workstations and laptops to one of the BSD descendants (my main system on my laptop is FreeBSD, though it also boots to MS Windows and Ubuntu Linux).
I know it sounds like a rant, but I decided against putting rant tags on this one.
Valeri
On Fri, 4 Oct 2019 at 10:41, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
...
My impression is the younger generation doesn't value rules that programmers followed 2-3 decades ago, one of which is: ...
It is the same evolution you see in other industries. Auto mechanics constantly complain about how the newer generation is 'dumber' for not knowing the beauty of a vehicle that the mechanic had when they were in their teens. [Of course they also rail about the fact that their grandparents' car was complete junk that was too simple to work on.] Most of the tools we had 30 years ago in computers are like working on a Model T era vehicle. They allowed for a lot of configuration choices and fine tuning, but they also were vastly limited in other ways. You can't run a fleet of 1000 Model TT trucks made in 1923 as well as you could 1000 1933 trucks. You ended up losing some of the knowledge of hand-crafting your own gears, but you got the ability to go faster, carry heavier loads, and get better gas mileage without working as hard at getting a mile out of a quart.
The transmissions of 1933 were considered 'automatic' compared to some 1912 vehicles, even if you had a clutch, because you no longer had to get out and turn something to make it go in reverse. The 'truly' automatic transmissions of the 1950s were horrible, and it wasn't until the 1970s that they became 'liveable'. Today trying to find a real stick shift is almost impossible, as you find out that most are really talking to a computer which does the shifting when it decides it's optimal.
As that happens, the place where a programmer makes changes goes higher and higher. They no longer see a system by itself but see 10,000 nodes sitting in some cloud. They couldn't care less if 10% of them drop off, because there is a tool which is going to bring 1000 back online when that happens. However, they may still be worrying about making a change that is 'low' level to them. It is just light years above what those of us with only 10 or 100 systems can dream about.
On 10/4/19 4:59 PM, Stephen John Smoogen wrote:
...
Today trying to find a real stick shift is almost impossible ...
In Europe most cars are still stick, around 80%.
On 10/4/19 10:40 AM, Valeri Galtsev wrote:
My impression is the younger generation doesn't value rules that programmers followed 2-3 decades ago, one of which is:
Do not make any changes [in the program] unless they are absolutely necessary.
I have in the past agreed with this assessment more than once. And I _am_ somewhat of an old hand at this, having run Unix and Unix-like systems for a bit over 30 years.
The fact of the matter is that, even though some of the old ways work just fine and don't need to be changed, many more times I've seen that, if the old way was a kludge to begin with, maybe there really is a better way to do it. Take the transition from horse and buggy to automobile for instance. Iron rim tires work just great for the buggy, not so great for the automobile; a change had to be made in an old technology (the wheel) to meet the needs of the new automobile. Lots of wheelwrights probably fought that change, too.
I've seen the old ways, and there are more kludges out there than some would like to admit. (obOldWayRef: article on 'the kluge' from the 1966 Datamation book 'Faith, Hope, and Parity.') Just remember: the old ways back then were punch card and batch; what do you mean you want more than one person to use such an expensive thing as a computer live, wasting its valuable time? Many seem to forget just how subversive Unix was back in the day relative to the old ways.
... Yet one more thing is: building superstructure on top of what actually works.
The definition of what works can and does change over time. Sure, an iron-rim wheel can work for the new automobile, but there was a basic change in what the wheel needed to do: with a buggy the wheel doesn't need to provide good traction (that's what hooves are for), and narrow and smooth work best; with the automobile, all of a sudden the drive wheels need to provide traction, and even though the iron-rim wheel still works after a fashion on smooth ground, there is a better way to do it. I can just hear the old-school wheelwrights saying "well, if it gets stuck in the mud then just don't go in the mud!" or "why would anyone want to go faster than the horse-drawn buggy could?" or "why would you need to turn that quickly and at that speed?" or "why in the world would you want brakes to stop you that quickly?" and the list goes on.....
I _am_ old-school in thought, but I do consciously make the effort to understand the newer reasoning, rather than be the greybeard that constantly talks about how I did it in the old days. Heh, in the old days I made it work with K&R C, 1MB of RAM, and an 8MHz CPU.... and I griped about the misfeatures then!.....
Today, I'm doing things with containers, virtualization, dynamic load balancing, software-defined infrastructure/IaaS, etc that the old ways simply cannot handle. NetworkManager/systemd/etc in CentOS are far from perfect, but at least they're trying to solve the newer problems that the old ways in many cases simply cannot.
On 10/4/2019 8:17 AM, Lamar Owen wrote:
...
NetworkManager/systemd/etc in CentOS are far from perfect, but at least they're trying to solve the newer problems that the old ways in many cases simply cannot.
This is a bit orthogonal, though. (Witness the effort to remove systemd requirements from containers.) An engineer is expected to understand the component parts rationally to arrive at some sort of professional conclusion that something is likely to work properly. This is not helped by a switch from imperative and deterministic to declarative and dynamic, which underlies many of the changes we've had to deal with in the past decade. There is a time and place for the latter, and it's good to have options available... but there are many times and places (especially in the Enterprise space) where the opposite is necessary, and it's FAR more reasonable to layer dynamic manipulation on top of a deterministically-configured core than the other way around.
-jc
On Fri, 4 Oct 2019 at 18:11, Japheth Cleaver cleaver@terabithia.org wrote:
... it's FAR more reasonable to layer dynamic manipulation on top of a deterministically-configured core than the other way around.
On the other hand, most of the idea that the old config scripts were deterministic and imperative was built on a large number of hacks to try to make it so. Having spent more time than I want dealing with systems which seem to be just like everything else but come up with eth0 being eth4 (I am looking at you, 40 Dell, HP and IBM boxes) on a reboot half the time, I have come to see that a lot of scripts are full of race conditions, and of slowdowns to try to stop those race conditions from happening. If anything got messed up by a kernel change, a BIOS update, a switch update, etc., you could be completely in the weeds wondering why imperative was failing. It was failing because it was never absolutely true.
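(The classic hack for that particular problem was pinning names to hardware addresses with a udev rule, something like the following; the MAC here is made up:)

# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"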
The problem is that as hardware rollouts have grown larger and larger, spending time trying to figure out why 400 to 1000 out of 10,000 systems are weird is too much time. You want something which will try to figure it out itself and do the 'right' thing, even if it means that eth4 on those 400 boxes is now the main interface versus eth0 on the other 3600. And yes, as a non-neurotypical person, I find that incredibly infuriating; however, I have also realized that most businesses don't care that I and others find it that way. They just want those 10,000 to 100,000 systems to come up and work.
On Oct 5, 2019, at 10:29 AM, Stephen John Smoogen smooge@gmail.com wrote:
... They just want those 10,000 to 100,000 systems to come up and work.
And this is where the big guys with thousands of boxes win, and us small guys with a few dozen boxes have to learn what works for the big guys and use it on our small scale. And no, this is not a rant; it is just a realization of the reality, so I can adjust to it.
Thanks everybody, this was insightful - for me.
Valeri
On 10/5/19 11:29 AM, Stephen John Smoogen wrote:
... coming up with eth0 being eth4 (I am looking at you, 40 Dell, HP and IBM boxes) on a reboot half the time ...
I remember having that happen a few times back in CentOS 4.x days, where eth0 would silently become eth1 after a kernel update and ifcfg-eth0 would break hard. It's gotten a lot better than it was, and I've had very few problems in my use cases with NM. YMMV, of course.
On Fri, 2019-10-04 at 11:17 -0400, Lamar Owen wrote:
On 10/4/19 10:40 AM, Valeri Galtsev wrote:
Do not make any changes [in the program] unless they are absolutely necessary.
Especially with production programs.
Take the transition from horse and buggy to automobile for instance. Iron rim tires work just great for the buggy, not so great for the automobile; a change had to be made in an old technology (the wheel) to meet the needs of the new automobile.
Technically it was never an "upgrade" but a brand new and alternative system.
Just remember: the old ways back then were punch card and batch;
With a minimum of 3 tapes; disks had not been invented. Some British universities had a magnetic drum.
I _am_ old-school in thought, but I do consciously make the effort to understand the newer reasoning, rather than be the greybeard that constantly talks about how I did it in the old days. Heh, in the old days I made it work with K&R C, 1MB of RAM, and an 8MHz CPU....
Luxury. Try running on a 32k single processor computer, started with booting the card reader which read cards that booted from a tape.
Today, I'm doing things with containers, virtualization, dynamic load balancing, software-defined infrastructure/IaaS, etc that the old ways simply cannot handle.
No comparison between 50+ years ago and this constantly developing and fascinating New World. However KISS remains valid. If it works smoothly, don't mess it up.
Regards.
On 10/5/19 2:14 PM, Always Learning wrote:
Technically [the new automobile] was never an "upgrade" but a brand new and alternative system. ...
The automobile was originally billed in many areas as the 'horseless carriage,' an upgrade.
Luxury. Try running on a 32k single processor computer, started with booting the card reader which read cards that booted from a tape.
...
Front panel, 256 words (12-bit words), and paper tape. I never had that straight-8 on the net, either, but via uucp I did have the T6K on Usenet.
No comparison between 50+ years ago and this constantly developing and fascinating New World. However KISS remains valid. If it works smoothly, don't mess it up.
But that's the point; the previous solution didn't work as smoothly for all use cases as one might want to believe.
On Sat, 2019-10-05 at 15:59 -0400, Lamar Owen wrote:
Front panel, 256 words (12-bit words), and paper tape. ...
My second machine was 9 bits = 8 + parity. 10 years later I was working on 36-bit words = 4 x 9-bit ASCII = 6 x 6-bit BCD.
In my later computer life, the best thing I ever did, 10 years ago, was to abandon all m$ and move to CentOS 5.3, which was truly a computer programmer's dream. Liberating and exhilarating.
Chunk, chunk, chunk, ding: 110 baud terminals, then along came Terminets at a faster 300 baud. I think I can still punch an 80-column card using a hand punch.
Many now take CentOS for granted without fully appreciating how powerful and empowering it is, or the tremendous work done by those creating, maintaining, and testing it, and by others developing extensions and repos. Life without CentOS would be bleak.
Regards,
On Fri, Oct 04, 2019 at 08:27:08AM -0400, Phelps, Matthew wrote:
Also, we have 150+ machines with fixed IP addresses, always-on connections, and no wireless. Having NetworkManager do seemingly random things is not desirable.
I mention this every time people bash NetworkManager on servers.
I have NM set up on all our servers. Why? Because the legacy network-scripts service tries to bring up the interface once at boot. We had a power outage on an entire floor of our datacenter, and the Linux systems booted faster than the network infrastructure. Any Linux system not using NM tried to bring up the interface, saw that there was no connection, and gave up. We had to physically reboot those hosts. Systems running NM dynamically brought up their interfaces when the link came back up.