On Wed, October 29, 2014 9:28 am, Reindl Harald wrote:
Am 29.10.2014 um 15:22 schrieb Valeri Galtsev:
On Wed, October 29, 2014 9:06 am, Steve Clark wrote:
On 10/29/2014 10:02 AM, Beartooth wrote:
I'm running CentOS 6 (6.5 iirc) on my wife's machine, which I've been updating pretty much every day. Today yum got 425 packages!
Somewhere a dam must have broken. Sometimes some of us don't appreciate how much work the developers do.
Strength to their arms, and many heartfelt thanks!
+100
Me too. I was [mistakenly, apparently] always considering 5.[n+1], 6.[m+1] to be just re-spins providing the latest packages with _backported_ security patches/bugfixes, aimed at providing installation media that does not entail millions of updates. "Releases" with newer versions, drivers shuffled around in the kernel, a new kernel (without any real necessity for it) which causes the hassle of rebooting the box... This all effectively defeats the "Enterprise" portion of the name of the system, doesn't it?
If you think there is no necessity for the new kernel, who is forcing you to reboot?
I like that "If" clause of yours... Basically, if one thinks he knows more than the system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics the privilege of being on their own. As we normal people know, if the distro maintainers had to update the kernel, they had a reason (otherwise, something else breaks). So we are left running _this_ system, and even though it's stressful, it is still not as stressful as running "bleeding edge" Fedora, right? ;-)
Valeri
"enterprise OS" is nothing about never ever reboot, it's about API/ABI stability
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
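As an aside: for those who really do not want a new kernel at all, yum can be told to leave kernel packages alone, either for a single run or permanently. A rough sketch using standard yum options (whether doing so is wise is a separate question):

# skip kernel packages for this one update run
yum --exclude='kernel*' update

# or make the exclusion permanent -- this assumes /etc/yum.conf contains
# only the [main] section, as on a stock CentOS install; remove the line
# again when you do want kernel updates
echo 'exclude=kernel*' >> /etc/yum.conf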
On Wed, 29 Oct 2014 11:44:42 -0500, Valeri Galtsev wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
On Wed, October 29, 2014 4:02 pm, Beartooth wrote:
On Wed, 29 Oct 2014 11:44:42 -0500, Valeri Galtsev wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
I'm sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in... You do the math.
Valeri
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
On Wed, Oct 29, 2014 at 4:12 PM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
I'm sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in... You do the math.
Things break and need maintenance. If your services can't tolerate that, you need more redundancy. As for the OS updates (which are only one of the many things that can break...), they are 'pretty well' vetted by upstream so breakage is rare and your odds are better installing them than not. But you don't have to reboot right now - schedule it for a convenient time.
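One rough way to handle that on a CentOS box (a sketch, not an official procedure; the 02:00 time is just an example) is to apply the updates right away, check whether the running kernel is older than the newest installed one, and schedule the reboot for a quiet hour:

yum -y update

# compare the running kernel with the newest installed kernel
running=$(uname -r)
newest=$(rpm -q --last kernel | head -n 1 | awk '{print $1}' | sed 's/^kernel-//')

if [ "$running" != "$newest" ]; then
    # a reboot is pending; do it at a convenient time instead of right now
    shutdown -r 02:00 "Rebooting into updated kernel $newest"
fi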
Technically a kernel patch isn’t for something “that broke”, it’s for something “that was written wrong to begin with”…
Just to be pedantic.
-- Nate Duehr denverpilot@me.com
On Thu, Oct 30, 2014 at 7:34 PM, Nathan Duehr denverpilot@me.com wrote:
Things break and need maintenance. If your services can't tolerate that, you need more redundancy. As for the OS updates (which are only one of the many things that can break...), they are 'pretty well' vetted by upstream so breakage is rare and your odds are better installing them than not. But you don't have to reboot right now - schedule it for a convenient time.
Technically a kernel patch isn’t for something “that broke”, it’s for something “that was written wrong to begin with”…
Just to be pedantic.
True, but pretty much everything was written wrong to begin with, back in the day when everyone thought bad guys just shouldn't be allowed to use the network. And the fixes are trickling in bit by bit.
Been hearing that “back in the day” excuse since Novell / IPX was big. Wash, rinse, repeat.
There have always been “bad guys” on networks.
That excuse will still be used long after I’m dead… but an excuse, it most certainly is.
You can find all sorts of examples of things in the kernel, written long after Internet security was a known given, that had to be replaced. Same with just about every piece of application software.
-- Nate Duehr denverpilot@me.com
On 11/4/2014 11:32 AM, Nathan Duehr wrote:
Been hearing that “back in the day” excuse since Novell / IPX was big. Wash, rinse, repeat.
which would have been the 1980s to the mid-'90s.
The fundamental IP application protocols like FTP and Telnet date back to the late 1960s and early 1970s, concurrent with the development of TCP/IP and ARPANET. There /was/ no 'network' before this for 'bad guys' to be on.
On Tue, Nov 4, 2014 at 1:32 PM, Nathan Duehr denverpilot@me.com wrote:
True, but pretty much everything was written wrong to begin with, back in the day when everyone thought bad guys just shouldn't be allowed to use the network. And the fixes are trickling in bit by bit.
Been hearing that “back in the day” excuse since Novell / IPX was big. Wash, rinse, repeat.
There have always been “bad guys” on networks.
That excuse will still be used long after I’m dead… but an excuse, it most certainly is.
It was made official in 1988 with the first known instance of an internet worm, which exploited sendmail. The person who released the viral code was held responsible rather than the vendors that shipped the obvious vulnerability - even the commercial vendors that repackaged it and charged for it. Thus the next several decades of taking no responsibility for shipping horrible vulnerabilities were set in motion. And of course there is an assortment of conspiracy theories about how some of the back doors were intentional.
On Thu, Oct 30, 2014 at 10:12 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Wed, October 29, 2014 4:02 pm, Beartooth wrote:
On Wed, 29 Oct 2014 11:44:42 -0500, Valeri Galtsev wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
I'm sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in... You do the math.
This is a corner that system administrators have allowed themselves to be painted into. It's not a law of nature. Civilized organisations will always allow a maintenance window. In the Windows world it is not an issue: servers can be rebooted with much more freedom than in the Linux/Unix world.
Cheers,
Cliff
On Wed, October 29, 2014 6:32 pm, Cliff Pratt wrote:
On Thu, Oct 30, 2014 at 10:12 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Wed, October 29, 2014 4:02 pm, Beartooth wrote:
On Wed, 29 Oct 2014 11:44:42 -0500, Valeri Galtsev wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
I'm sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in... You do the math.
This is a corner that system administrators have allowed themselves to be painted into. It's not a law of nature. Civilized organisations will always allow a maintenance Window. In the Windows world it is not an issue. Servers can be rebooted with much more freedom than in the Linux/Unix world.
Yes, indeed. It's those blasted Unix sysadmins (hm, I flatter myself by thinking I am one too) who push themselves into being too responsible to their users... No, I don't think Unix admins will start moving in the direction of the Windows world, sorry. I don't even like the Windows world being mentioned as an example for the Unix world! (Don't take me too literally; everybody welcomes the good things "other worlds" have...)
Valeri
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
On 10/29/2014 4:40 PM, Valeri Galtsev wrote:
Yes, indeed. Those are blasted Unix sysadmins (Hm, I flatter myself by thinking of being one too) that push themselves into being too responsible to their users... No, I don't think Unix admins will start into the direction of Windows world, sorry. I don't even like Windows world mentioned as an example for Unix world! (Don't take me too literally, everybody welcomes good things "other worlds" have...)
In my enterprise world, production systems are fully redundant and have staging servers running identical software configurations. All upgrades and upgrade procedures are tested on staging before being deployed in production. Quite often the staging systems double as the Disaster Recovery systems, but that's another story. Virtually all production systems either have schedulable downtime (2am Sunday morning?) or support rolling upgrades with no downtime (such as our 24/7 factory operations, where downtime == no product).
Personally, I'm very glad I work in development, where our informal SLA is more like 9-9, 5 days/week (developers like to work late).
Sounds like you have a dream job, John! At the very least you work for a company that spends money on proper hardware!
I used to work with IBM mainframes back when the dinosaurs were hatchlings. At one place I worked the machine was powered off on Friday at 5pm and powered up at 7am on Monday! Can you imagine that these days?
We soon went to 24x7, but the reason was not because the users wanted it. It was because the engineers and systems programmers wanted time with no users.
Cheers,
Cliff
On Thu, Oct 30, 2014 at 12:57 PM, John R Pierce pierce@hogranch.com wrote:
On 10/29/2014 4:40 PM, Valeri Galtsev wrote:
Yes, indeed. Those are blasted Unix sysadmins (Hm, I flatter myself by thinking of being one too) that push themselves into being too responsible to their users... No, I don't think Unix admins will start into the direction of Windows world, sorry. I don't even like Windows world mentioned as an example for Unix world! (Don't take me too literally, everybody welcomes good things "other worlds" have...)
in my enterprise world, production systems are fully redundant, and have staging servers running identical software configurations. all upgrades and upgrade procedures are tested on staging before being deployed in production. quite often, the staging systems double as the Disaster Recovery systems, but that's another story. virtually all production systems either have a schedulable downtime (2am sunday morning?), or support rolling upgrades with no downtime (such as our 24/7 factory operations where downtime == no product).
personally, I'm very glad I work in development, where our informal SLA is more like 9-9 5 days/week (developers like to work late).
-- john r pierce 37N 122W somewhere on the middle of the left coast
On 10/30/2014 1:07 AM, Cliff Pratt wrote:
I used to work with IBM mainframes back when the dinosaurs were hatchlings. At one place I worked the machine was powered off on Friday at 5pm and powered up at 7am on Monday! Can you imagine that these days?
We soon went to 24x7, but the reason was not because the users wanted it. It was because the engineers and systems programmers wanted time with no users.
The main reason I remember for keeping stuff running was that it was more reliable if the temperature was relatively constant... temperature fluctuations led to more hardware failures than any other input variable.
Bend a spoon 100 times and it will break. Keep the temperature the same, hot or cold, and there is no bending, so the traces do not break.
It's not about 22 degrees Celsius or 28 degrees; it's about keeping the temperature the same, because as the temperature changes the metal expands and contracts.
Regards Michael Cole
On Thursday, October 30, 2014 1:21:22 AM John R Pierce wrote:
On 10/30/2014 1:07 AM, Cliff Pratt wrote:
I used to work with IBM mainframes back when the dinosaurs were hatchlings. At one place I worked the machine was powered off on Friday at 5pm and powered up at 7am on Monday! Can you imagine that these days?
We soon went to 24x7, but the reason was not because the users wanted it. It was because the engineers and systems programmers wanted time with no users.
main reason I remember for keeping stuff running was, it was more reliable if the temperature was relatively constant... temperature fluctuations led to more hardware failures than any other source input variable.
On Thu, Oct 30, 2014 at 9:21 PM, John R Pierce pierce@hogranch.com wrote:
On 10/30/2014 1:07 AM, Cliff Pratt wrote:
I used to work with IBM mainframes back when the dinosaurs were hatchlings. At one place I worked the machine was powered off on Friday at 5pm and powered up at 7am on Monday! Can you imagine that these days?
We soon went to 24x7, but the reason was not because the users wanted it. It was because the engineers and systems programmers wanted time with no users.
main reason I remember for keeping stuff running was, it was more reliable if the temperature was relatively constant... temperature fluctuations led to more hardware failures than any other source input variable.
Yes, that too. We had quite a few cases of machine mondayitis.
Cheers,
Cliff
On Thu, 2014-10-30 at 21:07 +1300, Cliff Pratt wrote:
I used to work with IBM mainframes back when the dinosaurs were hatchlings. At one place I worked the machine was powered off on Friday at 5pm and powered up at 7am on Monday! Can you imagine that these days?
In my early days, the entire system was powered down before the last person went home.
Regards,
Paul. England, EU.
That's exactly what I mean. It's not a matter of "starting into the Windows world". My point was that Windows admins have not become obsessed with "uptime", and hence have not given their users the expectation of 100% availability.
I'm all for being responsible to users - and that means patching. If that means some downtime, then users in general would not be put out, had their expectations not been raised to expect no downtime.
Cheers,
Cliff
On Thu, Oct 30, 2014 at 12:40 PM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Wed, October 29, 2014 6:32 pm, Cliff Pratt wrote:
On Thu, Oct 30, 2014 at 10:12 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Wed, October 29, 2014 4:02 pm, Beartooth wrote:
On Wed, 29 Oct 2014 11:44:42 -0500, Valeri Galtsev wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
I'm sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in... You do the math.
This is a corner that system administrators have allowed themselves to be painted into. It's not a law of nature. Civilized organisations will always allow a maintenance Window. In the Windows world it is not an issue. Servers can be rebooted with much more freedom than in the Linux/Unix world.
Yes, indeed. Those are blasted Unix sysadmins (Hm, I flatter myself by thinking of being one too) that push themselves into being too responsible to their users... No, I don't think Unix admins will start into the direction of Windows world, sorry. I don't even like Windows world mentioned as an example for Unix world! (Don't take me too literally, everybody welcomes good things "other worlds" have...)
Valeri
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
On Thu, October 30, 2014 3:01 am, Cliff Pratt wrote:
That's exactly what I mean. It's not a matter of "starting into the Windows world". My point was that Windows admins have not become obsessed with "uptime", and hence given their users the expectation of 100% availability.
I'm all for being responsible to users - and that means patching and if that means some downtime,
If I remember the Unix world correctly, patching almost never led to downtime and could almost always be accomplished with users logged in.
Valeri
then the users in general would not be put out, if their expectations had not been raised to expect no downtime.
Cheers,
Cliff
On Thu, Oct 30, 2014 at 12:40 PM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Wed, October 29, 2014 6:32 pm, Cliff Pratt wrote:
On Thu, Oct 30, 2014 at 10:12 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Wed, October 29, 2014 4:02 pm, Beartooth wrote:
On Wed, 29 Oct 2014 11:44:42 -0500, Valeri Galtsev wrote:
... Basically, if one thinks he knows more than system vendor, he is just schizophrenic. And we, normal people, do give schizophrenics a privilege to be on their own. As we, normal people know that if the distro maintainers had to update kernel, they had a reason (otherwise, something else breaks). So, we are left running _this_ system, even though it's stressful, still not as stressful as running "bleeding edge" fedora, right? ;-)
What? Stressful?? Fedora??? Naaahhh ...
I'm sorry, apart from my laptop, I also run servers. And services are supposed to be up 24/7. And a bunch of people are always logged in... You do the math.
This is a corner that system administrators have allowed themselves to be painted into. It's not a law of nature. Civilized organisations will always allow a maintenance Window. In the Windows world it is not an issue. Servers can be rebooted with much more freedom than in the Linux/Unix world.
Yes, indeed. Those are blasted Unix sysadmins (Hm, I flatter myself by thinking of being one too) that push themselves into being too responsible to their users... No, I don't think Unix admins will start into the direction of Windows world, sorry. I don't even like Windows world mentioned as an example for Unix world! (Don't take me too literally, everybody welcomes good things "other worlds" have...)
Valeri
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
Once upon a time, Valeri Galtsev galtsev@kicp.uchicago.edu said:
If I remember Unix world, patching almost never led to downtime and almost always could be accomplished in presence of users logged in.
I think that's a rose-colored glasses look in the rear-view mirror. The "traditional" Unix flavors I dealt with (Solaris and DEC Unix) required reboots; DEC Unix pretty much required going to single-user mode to even install a patch kit. When it wasn't required, it was highly recommended by the documentation.
On Thu, Oct 30, 2014 at 08:00:16AM -0500, Valeri Galtsev wrote:
If I remember Unix world, patching almost never led to downtime and almost always could be accomplished in presence of users logged in.
RHEL has kpatch: http://rhelblog.redhat.com/2014/02/26/kpatch/
Technologies like kpatch, Ksplice, kGraft, etc. will make it so you don't have to reboot to get kernel patches. However, I'm more concerned with updating software like glibc, openssl, nss, etc. underneath running processes. It doesn't matter whether you're running Linux, FreeBSD, or another UNIX: if you update the applications and libraries under users' running processes, there's always a chance (and quite a likely one) that something will break.
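For what it's worth, one rough way to see which processes are still running against libraries that have since been replaced on disk (and therefore need a restart to pick up the fix) is to look for deleted .so mappings under /proc; lsof can show much the same thing. A sketch, nothing official:

# list PIDs (and command names) that still map a shared object which has
# been deleted/replaced on disk since the process started
for pid in /proc/[0-9]*; do
    if grep -q '\.so.*(deleted)' "$pid/maps" 2>/dev/null; then
        printf '%s\t%s\n' "${pid#/proc/}" "$(ps -o comm= -p "${pid#/proc/}")"
    fi
done

Anything that shows up gets its service restarted at the next opportunity (or the box rebooted, if it is something like glibc that everything links against).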