On 5/26/2010 3:17 PM, JohnS wrote:
And, in fact, that is exactly what happened. The default= line was set to 1, so it booted the old kernel instead of the new one. Other than that, it seems to be fine. I wonder what causes that? I've never noticed that behavior in my other systems. (But maybe I should go check now...)
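(For anyone following along: grub.conf numbers its title stanzas from zero, so default=1 points at the second entry. A minimal sketch, with kernel versions shown purely for illustration:

    default=1
    timeout=5
    title CentOS (2.6.18-194.3.1.el5)      <- entry 0, the newly installed kernel
            root (hd0,0)
            kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-2.6.18-194.3.1.el5.img
    title CentOS (2.6.18-164.el5)          <- entry 1, the old kernel, still the default
            root (hd0,0)
            kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-2.6.18-164.el5.img
)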
I have *no* idea. I've even seen it pointing to 2, or 4. Anyone here have any idea why it wouldn't *always* change the default to 0?
mark
Where did you get the kernel from? There is a reason I ask: every kernel I have installed that was built by CentOS does the right thing, as in it updates the boot sequence for you.
The exception is the upstream Real Time kernel, which does not do this, and that behavior is documented.
I can't speak for the PAE kernel because I don't use it; I only use the PAE form for 32-bit under the RT kernel, which has PAE built in on 32-bit.
I think this fails when you initially install a non-PAE kernel and later add RAM and switch to the PAE version.
How on God's green earth is a STICK OF RAM going to change the boot order? Pft, my RAID-1'ed memory just changed the boot order of my grid rack. Let me go fix it back in the BIOS.
It's not the stick of RAM - it's the fact that the grub.conf editing is set up to match your initial kernel type and isn't triggered by the install of the PAE kernel or its subsequent updates. Look in /etc/sysconfig/kernel.
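On a stock CentOS 5 box that file normally looks something like the sketch below; if you've moved to the PAE kernel, the DEFAULTKERNEL line is the one to change (kernel-PAE is the 32-bit PAE package name, shown here as the assumed target):

    # UPDATEDEFAULT specifies if new-kernel-pkg should make
    # new kernels the default
    UPDATEDEFAULT=yes

    # DEFAULTKERNEL specifies the default kernel package type
    DEFAULTKERNEL=kernel       <- change to kernel-PAE if that's what you run now

With that set, subsequent kernel-PAE updates should get picked up as the default entry instead of being left below the old kernel in grub.conf.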