Hey folks,
I searched the list archives and found this:
echo "AUTOFSCK_TIMEOUT=5" > /etc/sysconfig/autofsck
echo "AUTOFSCK_DEF_CHECK=yes" >> /etc/sysconfig/autofsck
http://lists.centos.org/pipermail/centos/2006-November/029837.html
http://lists.centos.org/pipermail/centos/2009-September/thread.html#81934
Will this do all disks?
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
There was a suggestion in the 2nd thread above that with ext3 this should not be required with proper hardware (my paraphrase). I'm using all IBM stuff - x3550, x3650, x3800 and some of the earlier models like the x330. I can't imagine this being an issue. But nonetheless I do have some issues on a couple of systems that look like they need fsck'ing.
thanks, -Alan
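For what it's worth, on a sysvinit-era CentOS box the same one-off full check can, as far as I recall, also be forced without touching /etc/sysconfig/autofsck; a minimal sketch of that route:

  touch /forcefsck     # rc.sysinit sees the flag and runs fsck -f on every
                       # filesystem with a non-zero pass number in /etc/fstab
  shutdown -r now      # the flag file is cleared again after the check

  # or let shutdown create the flag itself:
  shutdown -rF now

Either way, only filesystems listed in /etc/fstab with a non-zero sixth field get checked.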
Alan McKay wrote:
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
No out of band management?
nate
On Wed, Jan 6, 2010 at 12:34 PM, nate centos@linuxpowered.net wrote:
Alan McKay wrote:
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
No out of band management?
nate
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
On 1/6/2010 12:05 PM, Brian Mathis wrote:
On Wed, Jan 6, 2010 at 12:34 PM, nate centos@linuxpowered.net wrote:
Alan McKay wrote:
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
No out of band management?
nate
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
But, on the other hand they should never need it, except perhaps when installing the OS if you don't use a full-auto method or clone disks.
On Wed, Jan 06, 2010 at 12:30:15PM -0600, Les Mikesell wrote:
On 1/6/2010 12:05 PM, Brian Mathis wrote:
On Wed, Jan 6, 2010 at 12:34 PM, nate centos@linuxpowered.net wrote:
No out of band management?
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
But, on the other hand they should never need it, except perhaps when installing the OS if you don't use a full-auto method or clone disks.
All hardware sucks, all software sucks.
Your machine _will_ go wrong and you _will_ need remote console access and remote power ability. Especially if you have thousands of these things.
On 1/6/2010 1:17 PM, Stephen Harris wrote:
On Wed, Jan 6, 2010 at 12:34 PM, nate centos@linuxpowered.net wrote:
No out of band management?
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
But, on the other hand they should never need it, except perhaps when installing the OS if you don't use a full-auto method or clone disks.
All hardware sucks, all software sucks.
Your machine _will_ go wrong and you _will_ need remote console access and remote power ability. Especially if you have thousands of these things.
Of course things break - including the extra stuff you might add for out of band access, but (a) many/most of the things that break can't be fixed remotely so you are going to end up needing to swap things out anyway, and (b) if you have thousands you should have enough redundancy to survive until you get around to swapping the broken thing with something that works. An IP KVM might save a trip or hands-on support call once in a while but the odds aren't that great.
On Wed, Jan 06, 2010 at 02:11:10PM -0600, Les Mikesell wrote:
On 1/6/2010 1:17 PM, Stephen Harris wrote:
All hardware sucks, all software sucks.
Your machine _will_ go wrong and you _will_ need remote console access and remote power ability. Especially if you have thousands of these things.
Of course things break - including the extra stuff you might add for out of band access, but (a) many/most of the things that break can't be fixed remotely so you are going to end up needing to swap things out
That wasn't my experience when I was an SA.
anyway, and (b) if you have thousands you should have enough redundancy to survive until you get around to swapping the broken thing with something that works. An IP KVM might save a trip or hands-on support
Nope; these machines are all used for different applications, by different sub-lines of business, and are not sitting there idle.
If you have 1000 boxes all doing the same thing (hi, Google!) then sure; if you have 1000 boxes all doing _different_ things then no.
call once in a while but the odds aren't that great.
When I used to be on call (fortunately not for 9 years now), when I got paged for a problem I'd estimate 1 in 20 of the calls required a direct root login, and that was limited to the console. Some of the problems could stop a normal user logging in (e.g. automount failure, NIS binding issue, network routing issue). Some of those required a physical presence, but the majority of them could be fixed from the remote console.
Heck, even for scheduled changes (bring server up single user, networking not active) remote consoles made the work possible.
Remote consoles (whether it be an IP KVM or an old-style Sun SPARC serial console) aren't luxuries for a large environment; they're essential.
On Wed, Jan 6, 2010 at 3:11 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 1/6/2010 1:17 PM, Stephen Harris wrote:
On Wed, Jan 6, 2010 at 12:34 PM, nate centos@linuxpowered.net wrote:
No out of band management?
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
But, on the other hand they should never need it, except perhaps when installing the OS if you don't use a full-auto method or clone disks.
All hardware sucks, all software sucks.
Your machine _will_ go wrong and you _will_ need remote console access and remote power ability. Especially if you have thousands of these things.
Of course things break - including the extra stuff you might add for out of band access, but (a) many/most of the things that break can't be fixed remotely so you are going to end up needing to swap things out anyway, and (b) if you have thousands you should have enough redundancy to survive until you get around to swapping the broken thing with something that works. An IP KVM might save a trip or hands-on support call once in a while but the odds aren't that great.
-- Les Mikesell lesmikesell@gmail.com
Things break, but you hope they don't all break at the same time. If the KVM goes down, it's probably not at the same time you need to fsck a server that you just rebooted. And when other things break, the goal is to have enough redundancy on hand to be able to fix the problem until you can replace whatever broke without dropping everything at 2am.
Also, depending on your provider, a remote KVM winds up being cheaper than the cost of a few remote-hands calls.
2010/1/6 Les Mikesell lesmikesell@gmail.com:
But, on the other hand they should never need it, except perhaps when installing the OS if you don't use a full-auto method or clone disks.
Reminds me of the quote, "In theory there is no difference between theory and practice. In practise, there is".
Call me paranoid, but we use an integrated card and an IP based KVM. In theory I shouldn't have needed them, in practice I've been very grateful for them.
Ben
On Wed, Jan 6, 2010 at 1:05 PM, Brian Mathis brian.mathis@gmail.com wrote:
No out of band management?
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
Oh believe me, I want to get there. It's high on my list this year ... I'm still relatively new here
On 1/6/2010 2:36 PM, Alan McKay wrote:
On Wed, Jan 6, 2010 at 1:05 PM, Brian Mathis brian.mathis@gmail.com wrote:
No out of band management?
My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM.
Oh believe me, I want to get there. It's high on my list this year ... I'm still relatively new here
At least they're nowhere near as expensive as they used to be. On my list as well for the new year, along with a weather duck.
On 1/6/2010 11:19 AM, Alan McKay wrote:
Hey folks,
I searched the list archives and found this:
echo "AUTOFSCK_TIMEOUT=5" > /etc/sysconfig/autofsck
echo "AUTOFSCK_DEF_CHECK=yes" >> /etc/sysconfig/autofsck
http://lists.centos.org/pipermail/centos/2006-November/029837.html
http://lists.centos.org/pipermail/centos/2009-September/thread.html#81934
Will this do all disks?
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
There was a suggestion in the 2nd thread above that with ext3 this should not be required with proper hardware (my paraphrase). I'm using all IBM stuff - x3550, x3650, x3800 and some of the earlier models like x330. I can't imagine this being an issue. But nonetheless I do have some issues on a couple of systems that look like they need fsck'ing
It will happen by itself at some default interval. I've forgotten exactly what the timing is but it is infrequent enough that it always takes me by surprise when it takes an extra 10 minutes for a remote system to come back up.
At Wed, 06 Jan 2010 11:45:46 -0600 CentOS mailing list centos@centos.org wrote:
On 1/6/2010 11:19 AM, Alan McKay wrote:
Hey folks,
I searched the list archives and found this:
echo "AUTOFSCK_TIMEOUT=5" > /etc/sysconfig/autofsck
echo "AUTOFSCK_DEF_CHECK=yes" >> /etc/sysconfig/autofsck
http://lists.centos.org/pipermail/centos/2006-November/029837.html
http://lists.centos.org/pipermail/centos/2009-September/thread.html#81934
Will this do all disks?
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
There was a suggestion in the 2nd thread above that with ext3 this should not be required with proper hardware (my paraphrase). I'm using all IBM stuff - x3550, x3650, x3800 and some of the earlier models like x330. I can't imagine this being an issue. But nonetheless I do have some issues on a couple of systems that look like they need fsck'ing
It will happen by itself at some default interval. I've forgotten exactly what the timing is but it is infrequent enough that it always takes me by surprise when it takes an extra 10 minutes for a remote system to come back up.
There are two metrics used: the number of times a FS has been mounted and the number of days since the last fsck/mount. For machines that don't get rebooted often (e.g. servers) the 'number of times a FS is mounted' limit almost never kicks in, while the 'number of days since last fsck/mount' one does.
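To see where a given filesystem stands against those two thresholds, something like this should do (the device name is only an example):

  tune2fs -l /dev/sda1 | egrep -i 'mount count|check'
  # fields of interest in the output:
  #   Mount count:          mounts since the last check
  #   Maximum mount count:  force a check after this many mounts
  #   Last checked:         timestamp of the last full fsck
  #   Check interval:       maximum time between checks (in seconds)
  #   Next check after:     when the time-based check becomes due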
On Wed, Jan 6, 2010 at 9:17 PM, Robert Heller heller@deepsoft.com wrote:
There are two metrics used: number of times a FS is mounted and number of days since last fsck/mount. For machines that don't get rebooted often (eg servers) the 'number of times a FS is mounted' almost never kicks in and the 'number of days since last fsck/mount' does.
Is it "absolutely" necessary to run this on servers? Especially since they don't reboot often, but when they do it takes ages for fsck to finish - which on web servers causes extra unwanted downtime.
Or is there a way to run fsck with the server running? I know it's a bad idea, but is there any way to run it, without causing too much downtime? I just had one server run fsck for 2+ hours, which is not really feasible in our line of business.
Rudi Ahlers wrote:
Is it "absolutely" necessary to run this on servers? Especially since they don't reboot often, but when they do it takes ages for fsck to finish - which on web servers causes extra unwanted downtime.
Or is there a way to run fsck with the server running? I know it's a bad idea, but is there any way to run it, without causing too much downtime? I just had one server run fsck for 2+ hours, which is not really feasible in our line of business.
For me, at least on my SAN volumes, I disable the fsck check after X number of days or X number of mounts. Of all the times over the years that I have seen this fsck triggered by those thresholds, I have never, ever seen it detect any problems.
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours.
nate
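The knob nate describes is presumably just the tune2fs counters; a sketch, with a made-up device name:

  tune2fs -c 0 -i 0 /dev/mapper/san_vol01   # -c 0: never force a check by mount count
                                            # -i 0: never force a check by elapsed time
  tune2fs -l /dev/mapper/san_vol01 | egrep 'Maximum mount count|Check interval'   # verify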
On Thu, Feb 18, 2010 at 2:13 AM, nate centos@linuxpowered.net wrote:
Rudi Ahlers wrote:
Is it "absolutely" necessary to run this on servers? Especially since
they
don't reboot often, but when they do it takes ages for fsck to finish - which on web servers causes extra unwanted downtime.
Or is there a way to run fsck with the server running? I know it's a bad idea, but is there any way to run it, without causing too much downtime?
I
just had one server run fsck for 2+ hours, which is not really feasible
in
our line of business.
For me at least on my SAN volumes I disable the fsck check after X number of days or X number of mounts. Of all the times over the years where I have seen this fsck triggered by those I have never, ever seen it detect any problems.
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours.
nate
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours. nate
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
If you have your Xen VMs in LVM volumes there is no filesystem for fsck to check on the host - so no 2+ hours for the physical machine. Do you mean by "2+ hours" the accumulated time for the filesystems in all the VMs being checked?
Henry
On Thu, Feb 18, 2010 at 10:12 AM, Henry Ritzlmayr fedora-list@rc0.at wrote:
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours. nate
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
If you have your Xen VMs in LVM volumes there is no filesystem for fsck to check on the host - so no 2+ hours for the physical machine. Do you mean by "2+ hours" the accumulated time for the filesystems in all the VMs being checked?
Henry
Yes, sorry, that's what I meant :)
The server booted up and ran fsck, then each VM ran fsck as well as it booted - which just slowed down the whole process, since there's a 5 minute delay in starting each VM. By the time most users could reconnect, 2+ hours had elapsed. This particular server hadn't been rebooted in 274 days, but had to be rebooted for kernel & software updates. Most of the VPS's were running CentOS 5.3 as well, and some had uptimes of 150+ days. The one I checked was at 195 days before the reboot. So, some VPS's came up quicker than others.
But how does one get past this? I know we need to reboot from time to time, but more often than not it's (preferably) not sooner than every 6 - 10 months, so fsck will run.
On 02/18/2010 09:54 AM, Rudi Ahlers wrote: ...
But how does one get past this? I know we need to reboot from time to time, but more often than not it's (preferably) not sooner than every 6 - 10 months, so fsck will run.
Turn off automatic fsck with "tune2fs -i 0 -c 0" and instead do a manual fsck (reboot with forcefsck or "touch /forcefsck" before reboot) at regular intervals. This must be done on the VM's as well.
Announce the downtime to the users in advance and do it in the evening or weekend.
Mogens
At Thu, 18 Feb 2010 10:07:27 +0100 CentOS mailing list centos@centos.org wrote:
On 02/18/2010 09:54 AM, Rudi Ahlers wrote: ...
But how does one get past this? I know we need to reboot from time to time, but more often than not it's (preferably) not sooner than every 6 - 10 months, so fsck will run.
Turn off automatic fsck with "tune2fs -i 0 -c 0" and instead do a manual fsck (reboot with forcefsck or "touch /forcefsck" before reboot) at regular intervals. This must be done on the VM's as well.
Announce the downtime to the users in advance and do it in the evening or weekend.
And stagger the fscks to spread things out and shorten the downtimes.
Mogens
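One way to stagger them (a sketch only; the logical volume names are made up) is to give each filesystem slightly different limits so they never all come due on the same reboot:

  # -c = max mounts between checks, -i = max time between checks (d = days)
  tune2fs -c 30 -i 170d /dev/vg0/lv_home
  tune2fs -c 33 -i 180d /dev/vg0/lv_var
  tune2fs -c 36 -i 190d /dev/vg0/lv_srv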
Rudi Ahlers wrote on Thu, 18 Feb 2010 10:54:11 +0200:
The server booted up, ran fsck, then each VM, as it booted up ran fsck as well - which just slowed down the whole process since there's a 5 minute delay in starting each VM.
Why would you autostart a VM only every 5 minutes? Or did you mean the next one only started once the earlier one had finished fsck? As mine aren't doing this I haven't ever seen that.
But by the time most users could reconnect, 2+
hours have lapsed. this particular server wasn't rebooted in 274 days,
Which means it ran without many kernel security updates for a long time.
But, how does one get past this? I know we need to reboot from time to time, but more than often it's (preferably) not sooner than 6 - 10 months, so fsck will run.
You use tune2fs on the VM filesystems as Mogens explains. Then you don't have any extra downtime for the VMs. When I reboot servers it takes about ten extra ping losses for the dom0 fsck if it is due. For the VMs you then run it twice a year manually (or by script) from dom0: take the VM down, run fsck on its LVM volumes, and start it again. It should take less than 5 minutes per VM.
Kai
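A rough sketch of that per-VM routine, run from dom0 (the domain and volume names are invented, and it assumes the guest filesystems sit directly on logical volumes rather than inside a partition table):

  xm shutdown -w vm01                 # -w waits until the domU is actually down
  e2fsck -f -p /dev/vg0/vm01_root     # forced check of the unmounted guest filesystems
  e2fsck -f -p /dev/vg0/vm01_var
  xm create /etc/xen/vm01.cfg         # boot the guest again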
On Thu, 18 Feb 2010, Kai Schaetzl wrote:
Rudi Ahlers wrote on Thu, 18 Feb 2010 10:54:11 +0200:
The server booted up, ran fsck, then each VM, as it booted up ran fsck as well - which just slowed down the whole process since there's a 5 minute delay in starting each VM.
Why would you autostart a VM only every 5 minutes? Or did you mean the next one only started once the earlier one had finished fsck? As mine aren't doing this I haven't ever seen that.
We found (suffered) a 'thundering herd' effect when autostarting all domUs at once where 20 or more were present, and so added control logic to stagger such starts after a reboot (and also to track running/stopped state somewhere other than by dropping links in the /etc/xen/ directory hierarchy).
-- Russ herrold
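A trivial version of that stagger (only a sketch; the real thing tracks running/stopped state as Russ describes, and it assumes the stock xendomains autostart is disabled so this loop is what launches the guests):

  #!/bin/sh
  # start each auto-start domU, pausing between launches so they
  # don't all hit the disks at once
  for cfg in /etc/xen/auto/*; do
      xm create "$cfg"
      sleep 300        # 5 minutes between guests
  done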
On Thu, Feb 18, 2010 at 10:12 AM, Henry Ritzlmayr fedora-list@rc0.at wrote:
nate wrote:
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours.
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
Um, we've only got some cluster nodes with drives under 750G; most of ours have that, or 950G, and we're moving a lot to 1T and some to 1.5T. Then there are the large RAID arrays. They take a *while* to fsck. Unless I'm having serious disk problems, I boot using fastboot as a kernel line parameter in grub.
mark
------- Original message -------
From: m.roth@5-cent.us Sent: 18.2.'10, 18:21
On Thu, Feb 18, 2010 at 10:12 AM, Henry Ritzlmayr fedora-list@rc0.at wrote:
nate wrote:
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours.
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
Um, we've only got some cluster nodes with drives under 750G; most of ours have that, or 950G, and we're moving a lot to 1T, and some 1.5T. Then there's large raid arrays. They take a *while* to fsck. Unless I'm having serious disk problems, I've had to boot using fastboot as a kernel line parm for grub.
What about using a 'decent' file system, such as XFS?
mark
Timo
Timo wrote:
------- Original message -------
From: m.roth@5-cent.us Sent: 18.2.'10, 18:21
On Thu, Feb 18, 2010 at 10:12 AM, Henry Ritzlmayr fedora-list@rc0.at wrote:
nate wrote:
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours.
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
Um, we've only got some cluster nodes with drives under 750G; most of ours have that, or 950G, and we're moving a lot to 1T, and some 1.5T. Then there's large raid arrays. They take a *while* to fsck. Unless I'm having serious disk problems, I've had to boot using fastboot as a kernel line parm for grub.
What about using a 'decent' file system, such as XFS?
Lessee, I'm *not* talking about my system at home. a) I'm talking about work; b) my manager, my co-worker, and I support nearly 200 servers, including 5 clusters. Some people that we support "only" run jobs that go for 2-4 days, but there was the guy whose running job I had to wait on before I could reboot the NFS server holding his home directory... and I waited ->two weeks<-.
Then there's the question of how we'd migrate.
Sorry, time for a real world check.
mark "that is, if my manager was willing in the first place"
------- Original message -------
From: m.roth@5-cent.us To: centos@centos.org Sent: 18.2.'10, 18:55
Timo wrote:
------- Original message -------
From: m.roth@5-cent.us Sent: 18.2.'10, 18:21
On Thu, Feb 18, 2010 at 10:12 AM, Henry Ritzlmayr fedora-list@rc0.at wrote:
nate wrote:
I don't bother changing the setting for local disks as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours.
This particular server has 2x 500GB HDDs with fairly "full" Xen VMs on it, each with its own LVM volumes, so I guess it's a bit more complex than a normal ext2 system :)
Um, we've only got some cluster nodes with drives under 750G; most of ours have that, or 950G, and we're moving a lot to 1T, and some 1.5T. Then there's large raid arrays. They take a *while* to fsck. Unless I'm having serious disk problems, I've had to boot using fastboot as a kernel line parm for grub.
What about using a 'decent' file system, such as XFS?
Lessee, I'm *not* talking about my system at home.
Hm, I'm talking about both: My file server (about 10TiByte RAID) and...
a) I'm talking about work;
..my systems at work.
b) My manager, my co-worker, and myself support nearly 200 servers, including 5 clusters.
Roughly the same numbers here - and that's the CentOS boxen only, not to mention xBSD, Solaris, and AIX machines.
IMHO one of the main reasons most people run (Open)Solaris (read: those who don't have to run Solaris for historical reasons) is ZFS.
XFS has different feature sets, strengths and weaknesses, but the advantage of both XFS and ZFS is their total lack of fsck hassles compared to extN.
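On XFS there is no periodic boot-time check at all; if you want to verify a volume you do it offline yourself, e.g. a read-only pass with xfs_repair (a sketch, with a made-up device and mount point):

  umount /data
  xfs_repair -n /dev/vg0/data    # -n: no-modify mode, only report problems
  mount /data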
Some people that we support "only" run jobs that go for 2-4 days, but there was the guy who was running a job that I had to wait until it was finished to reboot the NFS server with his home directory... and I waited ->two weeks<-.
So, where exactly is the connection to the underlying file systems?
Then there's the question of how we'd migrate.
Sorry, time for a real world check.
One of my most favorite jokes is to begin IT related discussions with 'in an ideal world'... ;)
mark "that is, if my manager was willing in the first place"
Timo
Timo wrote:
From: m.roth@5-cent.us Sent: 18.2.'10, 18:55 Timo wrote:
------- Original message -------
From: m.roth@5-cent.us Sent: 18.2.'10, 18:21
On Thu, Feb 18, 2010 at 10:12 AM, Henry Ritzlmayr fedora-list@rc0.at wrote:
nate wrote:
<snip>
a) I'm talking about work;
..my systems at work.
b) My manager, my co-worker, and myself support nearly 200 servers, including 5 clusters.
Roughly the same numbers here - and that's the CentOS boxen only, not to mention xBSD, Solaris, and AIX machines.
Yeah, we have a few Solaris boxes, a mind-boggler SGI, and (I kid you not) some VMS systems.
IMHO one of the main reasons most people run (Open)Solaris (read: those who don't have to run Solaris for historical reasons) is ZFS.
XFS has different feature sets, strengths and weaknesses, but the advantage of both XFS and ZFS is their total lack of fsck hassles compared to extN.
Some people that we support "only" run jobs that go for 2-4 days, but there was the guy who was running a job that I had to wait until it was finished to reboot the NFS server with his home directory... and I waited ->two weeks<-.
So, where exactly is the connection to the underlying file systems?
It was dumping large amounts of data into his home directory... which was NFS mounted from the server I needed to reboot.
Then there's the question of how we'd migrate.
Sorry, time for a real world check.
One of my most favorite jokes is to begin IT related discussions with 'in an ideal world'... ;)
*chuckle* <snip> mark
m.roth@5-cent.us wrote:
It was dumping large amounts of data into his home directory... which was NFS mounted from the server I needed to reboot.
That's why I like HA clusters. Our NFS cluster runs on top of CentOS, and if we needed to reboot a node it would have minimal impact: the other system takes over the IPs and MAC addresses.
To date, the only times we've rebooted the NFS systems have been for software updates (3 of them in the past year or so).
At my previous company I was planning on trying to "roll my own" NFS cluster on RHEL, but never got around to it before I left.
http://sources.redhat.com/cluster/doc/nfscookbook.pdf
nate
nate wrote:
m.roth@5-cent.us wrote:
It was dumping large amounts of data into his home directory... which was NFS mounted from the server I needed to reboot.
That's why I like HA clusters. Our NFS cluster runs on top of CentOS, and if we needed to reboot a node it would have minimal impact: the other system takes over the IPs and MAC addresses.
To date, the only times we've rebooted the NFS systems have been for software updates (3 of them in the past year or so).
Right... but you're assigning me the power to cost justify all that hardware, when a) I work for a federal contractor, and b) I support an agency of the US Gov't (your tax dollars at work, y'know).
mark "actually, the good guys, in this case"`
Dear all, I have an IBM x3250 running CentOS 5.4 x86_64, with HTTPD and PHP already installed, and the logs show:
Feb 18 18:35:02 ext-fw kernel: php[2933]: segfault at 00007fff7f03bfe8 rip 000000000056ed70 rsp 00007fff7f03c018 error 6
Feb 18 18:40:01 ext-fw kernel: php[2950]: segfault at 00007fff6005cf58 rip 00000000005a9155 rsp 00007fff6005cf60 error 6
Feb 18 18:41:06 ext-fw kernel: httpd[2967]: segfault at 00007fffac4898d8 rip 00002b706e527795 rsp 00007fffac4898e0 error 6
Feb 18 18:41:06 ext-fw kernel: httpd[2968]: segfault at 00007fffac4898d8 rip 00002b706e527795 rsp 00007fffac4898e0 error 6
Feb 18 18:41:07 ext-fw kernel: httpd[2969]: segfault at 00007fffac4898d8 rip 00002b706e527795 rsp 00007fffac4898e0 error 6
Feb 18 18:41:10 ext-fw kernel: httpd[2971]: segfault at 00007fffac4898d8 rip 00002b706e527795 rsp 00007fffac4898e0 error 6
Feb 18 18:41:11 ext-fw kernel: httpd[2973]: segfault at 00007fffac4898d8 rip 00002b706e527795 rsp 00007fffac4898e0 error 6
Feb 18 18:41:12 ext-fw kernel: httpd[2972]: segfault at 00007fffac4898d8 rip 00002b706e527795 rsp 00007fffac4898e0 error 6
I want to set up cacti for network monitoring, but when I try to access this server I get an error message in the browser (Firefox), connecting through a proxy:
An *Invalid Response* error was encountered while trying to process the request:
GET /cacti/ HTTP/1.1
Host: ext-fw.lerindro.co.id
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100214 Ubuntu/9.10 (karmic) Firefox/3.5.8
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator.
So, please tell me how to fix this "bug"?
Warm regards, David
Hey folks,
I searched the list archives and found this:
echo "AUTOFSCK_TIMEOUT=5" > /etc/sysconfig/autofsck
echo "AUTOFSCK_DEF_CHECK=yes" >> /etc/sysconfig/autofsck
http://lists.centos.org/pipermail/centos/2006-November/029837.html
http://lists.centos.org/pipermail/centos/2009-September/thread.html#81934
Will this do all disks?
I want to do a reboot of a couple of systems during our maintenance window and fsck them, but would rather try it from home first and not go to the data center. Then of course rush there like a madman if they don't come back up :-)
<snip> Two things: make sure that the sixth field in /etc/fstab isn't zero, or that filesystem won't be checked.
The other thing to consider is how long the fsck will take - minutes, hours, days? How big are the filesystems?
mark
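For reference, the field mark means is the sixth one in each /etc/fstab line (fs_passno); a sketch with made-up devices:

  # device              mount point   type  options   dump  pass
  /dev/vg0/lv_root      /             ext3  defaults  1     1     # root is checked first
  /dev/vg0/lv_home      /home         ext3  defaults  1     2     # checked after root
  /dev/vg0/lv_scratch   /scratch      ext3  defaults  0     0     # never fsck'd at boot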