I tried to boot from my 6.0 USB key, no joy. Updated it to 6.2. Still no joy: it gets started, I do the disk layout, it formats the drives, and then it fails, saying that it can't find "image# 1". Over in the log, I see a lot of messages about it not finding any drive at all, yet all the hard drives are mounted, as is sda2, which is the USB key's Linux partition.
Guys, any idea what "image# 1" is, or what configuration file is telling it to look in a wrong place?
Thanks in advance.
mark
From: "m.roth@5-cent.us" m.roth@5-cent.us
I tried to boot from my 6.0 USB key, no joy. Updated it to 6.2. Still no joy: it gets started, I do the disk layout, it formats the drives, and then fails, saying that it can't find "image# 1".
Works fine here... On some PCs/servers the key is sdb...

syslinux.cfg:
    append initrd=initrd.img ks=hd:sda2:/ks.cfg repo=hd:sda2:/centos
ks.cfg:
    harddrive --partition=sda2 --dir=/centos
/centos contains images/install.img and the DVD ISOs...
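Spelled out, the whole boot stanza would look something like this (an illustrative sketch, not JD's exact files; the "linux" label and the vmlinuz/initrd.img filenames are assumed from a stock CentOS 6 USB layout):

    # syslinux.cfg on the USB key's boot partition
    default linux
    label linux
      kernel vmlinuz
      append initrd=initrd.img ks=hd:sda2:/ks.cfg repo=hd:sda2:/centos

    # ks.cfg on sda2, pointing the installer back at the same partition
    harddrive --partition=sda2 --dir=/centos

The point from JD's setup is that /centos on sda2 holds both the images/ directory (with install.img) and the DVD ISOs.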
JD
John Doe wrote:
From: "m.roth@5-cent.us" m.roth@5-cent.us
I tried to boot from my 6.0 USB key, no joy. Updated it to 6.2. Still no joy: it gets started, I do the disk layout, it formats the drives, and then fails, saying that it can't find "image# 1".
Works fine here... On some PCs/servers the key is sdb...

syslinux.cfg:
    append initrd=initrd.img ks=hd:sda2:/ks.cfg repo=hd:sda2:/centos
ks.cfg:
    harddrive --partition=sda2 --dir=/centos
/centos contains images/install.img and the DVD ISOs...
Yeah, and normally the USB key shows up as sdb. I had to move it, for boot order, to get it to boot from it (weird BIOS). But as I said, what I don't know is what "image# 1" is referring to. It says "copy it to the right directory and try again"... but I don't know *which* *.img it's referring to. Karanbir? Johnny? Clues?
mark
m.roth@5-cent.us wrote:
John Doe wrote:
From: "m.roth@5-cent.us" m.roth@5-cent.us
I tried to boot from my 6.0 USB key, no joy. Updated it to 6.2. Still no joy: it gets started, I do the disk layout, it formats the drives, and then fails, saying that it can't find "image# 1".
Works fine here... On some PCs/servers the key is sdb...

syslinux.cfg:
    append initrd=initrd.img ks=hd:sda2:/ks.cfg repo=hd:sda2:/centos
ks.cfg:
    harddrive --partition=sda2 --dir=/centos
/centos contains images/install.img and the DVD ISOs...
Yeah, and normally the USB key shows up as sdb. I had to move it, for boot order, to get it to boot from it (weird BIOS). But as I said, what I don't know is what "image# 1" is referring to. It says "copy it to the right directory and try again"... but I don't know *which* *.img it's referring to. Karanbir? Johnny? Clues?
Following up on myself: I should have made clear that it *did* boot, and loaded the install image. I led it through the custom layout (our std. one, with a much larger /boot), it formatted the drives, and *then* this popup came up complaining.
mark
On 01/09/12 6:09 AM, John Doe wrote:
Works fine here... On some PCs/servers the key is sdb...
the days of relying on /dev/sd? are long past.
'scsi' devices renumber themselves on every boot. Case in point: the server I'm configuring now has an LSI mptsas card with 2 disks mirrored for the OS and a megasas2 card with a large raid. When it first came, the megaraid had 2 raids on it, /dev/sda and /dev/sdb, and the OS on the mpt card was on /dev/sdc. I deleted those two raids and rebooted; now the OS was on /dev/sda. I then defined a new, larger raid60 on the megaraid, which came up as /dev/sdb. Then I rebooted, and the megaraid was /dev/sda and the boot drive was /dev/sdb.
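One way to sidestep the renumbering entirely (a sketch; the device names are illustrative) is to address disks through the persistent udev symlinks instead of raw /dev/sd? names:

    ls -l /dev/disk/by-uuid/     # filesystem UUIDs
    ls -l /dev/disk/by-id/       # names built from drive model/serial or WWN
    blkid /dev/sda1              # show the UUID and LABEL of one filesystem

Those symlinks point at whatever /dev/sd? name the kernel happened to assign on this boot, so fstab entries and scripts keyed to them survive the reshuffling.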
I chose to mount my raid volume with uuid
parted /dev/sda "mklabel gpt"
parted -a optimal /dev/sda "mkpart primary 128k -1s"
mkfs.xfs -f /dev/sda1
uuid=$(xfs_admin -u /dev/sda1 | awk '{print $3}')   # get the UUID
echo "UUID=$uuid /data xfs defaults 1 2" >> /etc/fstab
mkdir /data
mount /data
as I had no need for LVM on this configuration, but
I have to say, I like Solaris's traditional disk numbering: /dev/dsk/c0t0d0s0 is controller 0, target 0, device (lun) 0, slice 0. The controller numbering is generally constant in a given system if you don't juggle IO cards around.
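The nearest Linux equivalent to that scheme is probably the by-path symlinks, which encode the controller and port rather than the probe order (illustrative; the exact path format depends on the HBA):

    ls -l /dev/disk/by-path/
    # e.g. pci-0000:03:00.0-scsi-0:0:0:0-part1 -> ../../sda1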
John R Pierce wrote:
On 01/09/12 6:09 AM, John Doe wrote:
Works fine here... On some PCs/servers the key is sdb...
the days of relying on /dev/sd? are long past.
Heh. See the point of a related thread, where mkswap -L did. not. work. No label...
'scsi' devices renumber themselves on every boot. case in point,
So we use labels. I *loathe* UUIDs. Quick, tell me yours on one system without looking (as would be the case if the drive crashed). <snip> I've even labelled software RAID partitions.
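For the record, setting those labels is a one-liner per filesystem (a sketch; devices and mount points are illustrative):

    e2label /dev/sda1 /boot          # ext2/3/4
    xfs_admin -L /data /dev/sdb1     # xfs (filesystem must be unmounted)

    # then in /etc/fstab
    LABEL=/boot   /boot   ext4   defaults   1 2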
mark
On 01/09/12 10:33 AM, m.roth@5-cent.us wrote:
So we use labels. I *loathe* UUIDs. Quick, tell me yours on one system without looking (as would be the case if the drive crashed).
from my rescue environment, I'd use: xfs_admin -u /dev/xxxx (or the somewhat messier ext? equiv)
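For the ext? case, the rescue-environment equivalent would be something along these lines (a sketch; device name illustrative):

    tune2fs -l /dev/sda1 | grep 'Filesystem UUID'
    # or, filesystem-agnostic:
    blkid -s UUID -o value /dev/sda1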
labels get messy too, when you have 27 systems and a half dozen file systems each. you want your labels globally unique so if you plug a volume into another system for repair there's no collisions. our hostnames tend to be messy and nearly as unreadable as a uuid, so embedding them in a label wouldn't actually be much help.
my install instructions for this server config ($job work in progress) currently read...
mkfs.xfs -f /dev/sda1
uuid=$(xfs_admin -u /dev/sda1 | awk '{print $3}')   # get the UUID
echo "UUID=$uuid /data xfs defaults 1 2" >> /etc/fstab
mkdir /data
mount /data
John R Pierce wrote:
On 01/09/12 10:33 AM, m.roth@5-cent.us wrote:
So we use labels. I *loathe* UUIDs. Quick, tell me yours on one system without looking (as would be the case if the drive crashed).
from my rescue environment, I'd use: xfs_admin -u /dev/xxxx (or the somewhat messier ext? equiv)
labels get messy too, when you have 27 systems and a half dozen file systems each. you want your labels globally unique so if you plug a volume into another system for repair there's no collisions. our
They are? I dunno - ours are labelled where they're intended to be mounted, like / or /boot
hostnames tend to be messy and nearly as unreadable as a uuid, so embedding them in a label wouldn't actually be much help.
Oh, you're in one of *those* places.... "This machine was bought under this account, and is part of this project, and there's 1-4 char abbreviations for each, and ....."
<snip>
mark
On Mon, Jan 9, 2012 at 1:11 PM, m.roth@5-cent.us wrote:
labels get messy too, when you have 27 systems and a half dozen file systems each. you want your labels globally unique so if you plug a volume into another system for repair there's no collisions. our
They are? I dunno - ours are labelled where they're intended to be mounted, like / or /boot
On which machine? Don't you ever move drives around? Things can get ugly with duplicate labels even if the reason you added a used disk was just to reformat it and reuse as a different mount.
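A quick way to spot that problem before it bites (a sketch; works on anything blkid can see) is to look for label values that show up more than once:

    blkid -s LABEL -o value | sort | uniq -d    # prints any duplicated labels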
Les Mikesell wrote:
On Mon, Jan 9, 2012 at 1:11 PM, m.roth@5-cent.us wrote:
labels get messy too, when you have 27 systems and a half dozen file systems each. you want your labels globally unique so if you plug a volume into another system for repair there's no collisions. our
They are? I dunno - ours are labelled where they're intended to be mounted, like / or /boot
On which machine? Don't you ever move drives around? Things can get ugly with duplicate labels even if the reason you added a used disk was just to reformat it and reuse as a different mount.
On all of them. They should be running the same o/s. Move them around? No, not unless we're replacing one that's either failed, or too small. And with hostname and IP via dhcp, there's only a few things to worry about, such as if it's an h/a or HPC cluster member, or backups, home directory server, whatever.
mark
On 01/09/12 11:11 AM, m.roth@5-cent.us wrote:
They are? I dunno - ours are labelled where they're intended to be mounted, like / or /boot
don't plug one of those into a different system for repair or you'll have all kinda grief. $HOSTNAME_root would be the sane way to do it...
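If one did go that route, it's a one-liner at build time (a sketch; note the length limits, ext labels top out at 16 characters and xfs at 12, and the braces keep the shell from looking for a variable called HOSTNAME_root):

    e2label /dev/sda2 "${HOSTNAME%%.*}_root"    # short hostname + _root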
hostnames tend to be messy and nearly as unreadable as a uuid, so embedding them in a label wouldn't actually be much help.
Oh, you're in one of *those* places.... "This machine was bought under this account, and is part of this project, and there's 1-4 char abbreviations for each, and .....
well, $job is at a large multinational... the company-standard hostnames start with a 3-letter site prefix, then -S for server, then a 6-digit department ID, then -nnn as a server ID within that group. fug-ly. projects are too transient, and servers tend to bounce around between physical and virtual over their life cycle.
John R Pierce wrote:
On 01/09/12 11:11 AM, m.roth@5-cent.us wrote:
They are? I dunno - ours are labelled where they're intended to be mounted, like / or /boot
don't plug one of those into a different system for repair or you'll have all kinda grief. $HOSTNAME_root would be the sane way to do it...
I'm trying to figure out why I'd plug one into a different system for repair. Either the drive's bad, or I'm re-embodying a server that died, but left good drives. If it's going bad, the *only* thing I'm going to do is plug it into a hot-swap bay (just about all of ours have those, love them) to recover some data, then wipe it.
hostnames tend to be messy and nearly as unreadable as a uuid, so embedding them in a label wouldn't actually be much help.
Oh, you're in one of *those* places.... "This machine was bought under this account, and is part of this project, and there's 1-4 char abbreviations for each, and .....
well, $job is at a large multinational... the company-standard hostnames start with a 3-letter site prefix, then -S for server, then a 6-digit department ID, then -nnn as a server ID within that group. fug-ly. projects are too transient, and servers tend to bounce around between physical and virtual over their life cycle.
Exactly what I was implying. Been there, but mostly in smaller groups, so we could name our own.
mark
On 01/09/12 12:05 PM, m.roth@5-cent.us wrote:
John R Pierce wrote:
On 01/09/12 11:11 AM, m.roth@5-cent.us wrote:
They are? I dunno - ours are labelled where they're intended to be mounted, like / or /boot
don't plug one of those into a different system for repair or you'll have all kinda grief. $HOSTNAME_root would be the sane way to do it...
I'm trying to figure out why I'd plug one into a different system for repair. Either the drive's bad, or I'm re-embodying a server that died, but left good drives. If it's going bad, the *only* thing I'm going to do is plug it into a hot-swap bay (just about all of ours have those, love them) to recover some data, then wipe it.
exactly. And if you put that drive in a hot-swap bay of another system that is using the same label, that's a potential for a big mess. Same thing with LVM volume groups: you want their names globally unique. I notice EL6 now embeds the hostname in the VG...
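And when a moved disk does bring in a VG whose name collides with a local one, the fix is to rename the newcomer by UUID before activating it (a sketch; the names and UUID here are illustrative):

    vgs -o vg_name,vg_uuid                                  # find the UUID of the duplicate VG
    vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 vg_oldbox
    vgchange -ay vg_oldbox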