Hi,

I have a Dell PowerEdge 400SC here with an LSI SCSI card.

I installed CentOS 4 a while ago and put it in a datacenter. I rebooted many weeks later, and the machine didn't come back up. So I went to the datacenter tonight to find out that the server was stuck at "GRUB". Nothing more. So I decided to re-install CentOS 4. The partitions were already there, so I just made sure they were formatted, and the install went fine. When I rebooted: "GRUB".

I'm at this point and I don't know exactly what I could do to diagnose further.

Any ideas?

Regards,

Ugo
Ugo Bellavance wrote:
> I went to the datacenter tonight to find out that the server was stuck at "GRUB". Nothing more. [...] I don't know exactly what I could do to diagnose further. Any ideas?
grub cannot find its second stage. Are you booting from a mirrored partition?
Christopher Chan wrote:
> grub cannot find its second stage. Are you booting from a mirrored partition?
Yes
Ugo Bellavance wrote:
> > grub cannot find its second stage. Are you booting from a mirrored partition?
>
> Yes
What could be a solution? And what could have happened upon the reboot?
Regards,
Ugo
Ugo Bellavance wrote:
> What could be a solution? And what could have happened upon the reboot?
That is weird. I just re-installed CentOS 5 and it is now booting properly. What could I do to avoid this situation in the future?
Thanks,
Ugo
> That is weird. I just re-installed CentOS 5 and it is now booting properly. What could I do to avoid this situation in the future?
IIRC, RHEL4 does not properly handle installation of grub on mirrored partitions, and therefore CentOS 4 suffers from the same problem.

RHEL5 does it properly now, as you can see. This has been a long-outstanding problem of anaconda.
Christopher Chan wrote:
> IIRC, RHEL4 does not properly handle installation of grub on mirrored partitions, and therefore CentOS 4 suffers from the same problem. RHEL5 does it properly now, as you can see.
Yep, this is true. After installing CentOS 4 on RAID1 disks (software RAID), I always do:

grub
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)

where /dev/hdc is the second RAID disk (it could be whatever: /dev/sdb, etc.)

So the system can boot from either the first or the second RAID1 disk.
Irens
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
jancio_wodnik@wp.pl wrote:
> Yep, this is true. After installing CentOS 4 on RAID1 disks (software RAID), I always do:
>
> grub
> grub> device (hd0) /dev/hdc
> grub> root (hd0,0)
> grub> setup (hd0)
>
> where /dev/hdc is the second RAID disk (it could be whatever: /dev/sdb, etc.)
OK, so to sum up what I understand of my problem:

Installation of CentOS 4 -> installs grub on only one of the two disks in the mirror.

One disk fails, the one that has grub.

The system won't boot because it can't find grub on the other drive.

If I had CentOS 5 there in the first place, the setup would have taken care of installing grub on the two mirrored RAID partitions.

Am I right?

Is there a way to know where grub is installed? I have a few servers running software RAID1 for /boot; I gotta fix this. If I can't tell whether it is installed or not, is it dangerous to re-install it using the command above?
Regards,
Ugo
Ugo Bellavance wrote:
> OK, so to sum up what I understand of my problem:
>
> Installation of CentOS 4 -> installs grub on only one of the two disks in the mirror.

Well, I believe you chose md0 or something to install grub on, right?

> One disk fails, the one that has grub.

Not necessarily. All it takes is a change in the sequence of disk assignment.

> The system won't boot because it can't find grub on the other drive.

Correct. It needs to be instructed to look on its own drive.

> If I had CentOS 5 there in the first place, the setup would have taken care of installing grub on the two mirrored RAID partitions.
>
> Am I right?

Yes.

> Is there a way to know where grub is installed? If I can't tell whether it is installed or not, is it dangerous to re-install it using the command above?

So long as you have all the necessary grub files, there is not much danger. Even if you are missing a config file on this side of the mirror (impossible...), so long as you load grub stage 2, you will have the power you need to continue if you have access (e.g. via serial).
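On the question of telling where grub is installed: stage1 occupies the first 512-byte sector of the disk and embeds the literal string "GRUB" in its messages, so you can look for that signature in the MBR of each mirror member. A minimal sketch (the /dev/hda and /dev/hdb names are examples only; substitute your own mirror members):

```shell
# Report whether the first sector of a device carries a GRUB stage1
# signature (stage1 embeds the literal string "GRUB" in its messages).
has_grub_stage1() {
    dd if="$1" bs=512 count=1 2>/dev/null | grep -aq GRUB
}

# Example device names -- substitute the members of your /boot mirror.
for d in /dev/hda /dev/hdb; do
    if has_grub_stage1 "$d"; then
        echo "$d: GRUB stage1 present in MBR"
    else
        echo "$d: no GRUB signature found"
    fi
done
```

Note this only tells you that some stage1 is present, not which disk its embedded stage2 pointer references, so re-running setup as described above is still the safe way to be sure.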
Christopher Chan wrote:
> So long as you have all the necessary grub files, there is not much danger. Even if you are missing a config file on this side of the mirror (impossible...), so long as you load grub stage 2, you will have the power you need to continue if you have access (e.g. via serial).
I was talking about all my other CentOS 4 machines..
> I was talking about all my other CentOS 4 machines..
Yes I know. I was trying to also tell you why it is not dangerous to install grub. My apologies if I was confusing.
Christopher Chan wrote:
> > Is there a way to know where grub is installed? If I can't tell whether it is installed or not, is it dangerous to re-install it using the command above?

It won't hurt to re-install. The tricky part is knowing where the 2nd drive in the set will appear when the first fails, so you can do the configuration correctly.

> So long as you have all the necessary grub files, there is not much danger. Even if you are missing a config file on this side of the mirror (impossible...), so long as you load grub stage 2, you will have the power you need to continue if you have access (e.g. via serial).

Worst case: boot the install CD with 'linux rescue' at the boot prompt, let it detect and mount the system partitions, do the chroot it suggests, then 'grub-install /dev/sda' (or wherever the boot drive appears now), exit twice, and remove the CD as you reboot.
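As a console sketch of that worst-case path (the prompts and the /dev/sda name are illustrative; use whatever device the boot drive appears as):

```
boot: linux rescue
  ... rescue mode detects the install and mounts it under /mnt/sysimage ...
# chroot /mnt/sysimage
# grub-install /dev/sda      (or wherever the boot drive appears now)
# exit                       (leaves the chroot)
# exit                       (leaves the rescue shell; remove the CD as it reboots)
```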
on 11/29/2007 7:53 AM Les Mikesell spake the following:
> It won't hurt to re-install. The tricky part is knowing where the 2nd drive in the set will appear when the first fails, so you can do the configuration correctly.
This isn't as tricky as you think; it just takes some thought. PATA disks will always be where they are: they are set by their jumpers for master/slave. But since most PATA controllers lock up when a drive dies, it is less than optimal. You can get a little safer by either using only the master position on each controller, or putting each drive's mate on the opposite controller and opposite channel, i.e. pair primary master with secondary slave, and so on. Not for a production system, but it will work OK for a home backup system.

SATA will usually move to the next detected drive, i.e. sdb will become sda, *if* all drives are on the same controller. If the system has multiple SATA controllers, it could be a guess, or remove one and see what happens. SCSI usually also moves to the next drive because of the drives' ID settings.
One of the safest methods is to create a grub boot CD and boot from that. That way it won't matter which drives fail, unless too many fail to maintain parity.
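For reference, the GRUB (legacy) manual describes building such a boot CD from the stage2_eltorito image. A sketch, assuming your grub package ships stage2_eltorito (the path varies by distro; on Red Hat-style systems the stage files typically live under /usr/share/grub/<arch>-redhat/):

```
mkdir -p iso/boot/grub
cp /usr/share/grub/i386-redhat/stage2_eltorito iso/boot/grub/
mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
        -boot-load-size 4 -boot-info-table -o grub.iso iso
```

Burn grub.iso and you get a grub prompt independent of any hard disk's MBR.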
Scott Silva wrote:
> This isn't as tricky as you think; it just takes some thought.

Yes, but you need to think from the BIOS perspective, since you have to get the 'root (hdx,x)' value right for the grub invocation.

> SATA will usually move to the next detected drive [...] SCSI usually also moves to the next drive because of the drives' ID settings.

The situation I've normally seen is SCSI and failure modes where the drive is not seen at all by the controller - but I don't know if that is always the case. I don't think I've had an IDE failure where I didn't have to disconnect the bad drive to boot anyway.
jancio_wodnik@wp.pl wrote:
> After installing CentOS 4 on RAID1 disks (software RAID), I always do:
>
> grub
> grub> device (hd0) /dev/hdc
> grub> root (hd0,0)
> grub> setup (hd0)
>
> where /dev/hdc is the second RAID disk (it could be whatever: /dev/sdb, etc.)
OK, on one system I had /boot as /dev/md0, and md0 is composed of /dev/hda1 and /dev/hdb1.

I have done:

grub> device (hd0) /dev/hdb

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

Am I on the right track?
Thanks,
Ugo
Ugo Bellavance wrote:
> grub> device (hd0) /dev/hdb
> grub> root (hd0,0)
> grub> setup (hd0)
> [...]
>
> Am I on the right track?
Looks good. The important part is that it references the drive when doing the installation.
Christopher Chan wrote:
> Looks good. The important part is that it references the drive when doing the installation.
What do you mean?
> > Looks good. The important part is that it references the drive when doing the installation.
>
> What do you mean?
The problem is that grub stage1 gets loaded, but its instructions for locating stage2 point at another disk, which may or may not be at the stored location. That was the problem with the grub installation done by anaconda, and it is the reason why you can see "GRUB" but it does not continue: anaconda's installation pointed both instances of grub stage1 at a single disk.

For that reason, on RHEL4 and earlier systems you have to manually install stage1 with instructions to look for the stage2 that lives on the same disk where that stage1 is installed.
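In other words, for a /dev/hda + /dev/hdb mirror the manual fix is to run setup twice, once per disk, remapping grub's notion of (hd0) so each stage1 is pointed at the stage2 on its own disk. A sketch (device names are from Ugo's machine; the # annotations are comments, not grub input):

```
grub
grub> root (hd0,0)             # first disk's /boot (/dev/hda1)
grub> setup (hd0)              # stage1 on hda now points at hda's stage2
grub> device (hd0) /dev/hdb    # remap (hd0) to the second disk
grub> root (hd0,0)
grub> setup (hd0)              # stage1 on hdb now points at hdb's stage2
grub> quit
```

Either disk can then complete the boot on its own, whichever one the BIOS hands over as the first drive.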
--- Ugo Bellavance ugob@lubik.ca wrote:
> > grub cannot find its second stage. Are you booting from a mirrored partition?
>
> Yes
Hello, I have found this article from Dell to be pretty straightforward: http://www.dell.com/downloads/global/power/1q04-hul.pdf It was written for RHEL3, but all the info seems to apply to RHEL4 and its various rebuilds.
Brett
My Dell SC430 came with instructions on how to set up booting from software RAID under RHEL. I can't find an online copy, but this might help: http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

Normally, before I move a machine from testing to production, I like to do some tests, including pulling a disk from the RAID and making sure I know how to recover/restore, etc.
If it is going to be remotely hosted, you also need to figure out how you are going to do it remotely.
John.