Hi.
I bought a new Adaptec 6405 card along with new (much larger) SAS drives (arrays).
I need to copy the content of the current SATA drives (on the old Adaptec 2405) to the new SAS drives.
When I put the new controller into the machine, the card is seen and I can see that the kernel finds both the new drives and the old drives. The problem is that the new drives come up as sda and sdb, which then stops the boot, because the kernel cannot find root and panics.
Is there a way to tell the kernel in which order to load the drives, so that the new drives are assigned sdc and sdd and the old drives keep sda and sdb?
thanks Jobst
On 08/23/12 12:13, Jobst Schmalenbach wrote:
Is there a way to tell the kernel in which order to load the drives, so that the new drives are assigned sdc and sdd and the old drives keep sda and sdb?
Use UUID= in fstab (see lsblk -o NAME,KNAME,UUID) and you will get rid of all these headaches. (If you have software RAID, the assembly is done internally based on UUID, so you don't have to worry about mdraid.)
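For example (the UUID and mount point below are made up; use whatever lsblk/blkid report on your box):

    lsblk -o NAME,KNAME,UUID        # or: blkid
    # /etc/fstab - mount by UUID instead of by device name
    UUID=3e6be9de-8139-4f6a-9c1d-0d9e8f7a6b5c  /data  ext3  defaults  1 2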
HTH, Adrian
Hi Adrian
Yes, this will do. Since I do not know the UUIDs of the new partitions (drives) yet: if I specify the UUIDs for the partitions of the known drives, will the kernel assign the new drives to higher sdX names? Is this correct?
thanks Jobst
On Thu, Aug 23, 2012 at 12:49:38PM +0300, Adrian Sevcenco (Adrian.Sevcenco@cern.ch) wrote:
On 08/23/12 12:13, Jobst Schmalenbach wrote:
Is there a way to tell the kernel in which order to load the drives, so that the new drives are assigned sdc and sdd and the old drives keep sda and sdb?
Use UUID= in fstab (see lsblk -o NAME,KNAME,UUID) and you will get rid of all these headaches. (If you have software RAID, the assembly is done internally based on UUID, so you don't have to worry about mdraid.)
HTH, Adrian
On 23.8.2012 14:01, Jobst Schmalenbach wrote:
Hi Adrian
Yes, this will do. Since I do not know the UUIDs of the new partitions (drives) yet: if I specify the UUIDs for the partitions of the known drives, will the kernel assign the new drives to higher sdX names? Is this correct?
After a reboot, sdX could become sdY, as you noticed. The solution: you don't access a drive via /dev/sdX. You access it by UUID, and the kernel maps that to the appropriate sdX device, whichever it happens to be after the reboot.
I am not sure about the initial ramdisk, etc.; maybe there is something hardcoded to sdX in there and it has to be rebuilt? Maybe you have to rebuild the initrd as well as updating fstab?
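Roughly like this (a sketch; which command applies depends on your CentOS release, and the image name must match your kernel version):

    # after switching /etc/fstab (and grub) to UUID= entries, rebuild the initrd
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)        # CentOS 5
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)       # CentOS 6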
Markus Falb wrote:
On 23.8.2012 14:01, Jobst Schmalenbach wrote:
Hi Adrian
Yes, this will do. Since I do not know the UUIDs of the new partitions (drives) yet: if I specify the UUIDs for the partitions of the known drives, will the kernel assign the new drives to higher sdX names? Is this correct?
After a reboot, sdX could become sdY, as you noticed. The solution: you don't access a drive via /dev/sdX. You access it by UUID, and the kernel maps that to the appropriate sdX device, whichever it happens to be after the reboot.
You can also label it. I loathe UUIDs - there is *no* way you're going to remember one when you need it. Labels are so much clearer.
I am not sure about the initial ramdisk, etc.; maybe there is something hardcoded to sdX in there and it has to be rebuilt? Maybe you have to rebuild the initrd as well as updating fstab?
I've actually never seen a system *not* know what the first drive was, hardware-wise. And grub will normally point to root (hd0,x), not UUID or anything else. You *can* (and I do, all the time) use LABEL= on the kernel line.
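For instance (the label and kernel version are only examples):

    e2label /dev/sdb1 /data                  # set a label on an ext2/3/4 filesystem
    # /etc/fstab
    LABEL=/data   /data   ext3   defaults   1 2
    # grub.conf kernel line
    kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=LABEL=/ rhgb quiet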
mark
I agree about the UUID stuff; I do not like them for the exact same reason. I do not understand why Red Hat cannot include the partition name in the UUID, e.g.
dev-sda1-c05e-449a-837b-b2579b949d55
As for the first drive: when the kernel boots, I think it assigns the drives in the order of the controllers on the system bus/slots. As the new controller sits lower in the slot layout (i.e. closer to the CPUs), it is recognised first - I can see it appearing first in the order in which the kernel initialises the controllers. I can't move it below the old card, as there is no other slot with the right PCIe x8 connector.
I will try the LABEL way of doing it ...
I remember the same problem from a few years back, when one had multiple network interfaces ... until MAC addresses were introduced into the ifcfg files.
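For reference, the ifcfg bit looks like this (the MAC address is made up):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:1A:2B:3C:4D:5E
    ONBOOT=yes
    BOOTPROTO=dhcp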
Jobst
On Thu, Aug 23, 2012 at 09:40:24AM -0400, m.roth@5-cent.us (m.roth@5-cent.us) wrote:
Markus Falb wrote:
On 23.8.2012 14:01, Jobst Schmalenbach wrote:
Hi Adrian
Yes, this will do. Since I do not know the UUIDs of the new partitions (drives) yet: if I specify the UUIDs for the partitions of the known drives, will the kernel assign the new drives to higher sdX names? Is this correct?
After a reboot, sdX could become sdY, as you noticed. The solution: you don't access a drive via /dev/sdX. You access it by UUID, and the kernel maps that to the appropriate sdX device, whichever it happens to be after the reboot.
You can also label it. I loathe UUIDs - there is *no* way you're going to remember one when you need it. Labels are so much clearer.
I am not sure about the initial ramdisk, etc.; maybe there is something hardcoded to sdX in there and it has to be rebuilt? Maybe you have to rebuild the initrd as well as updating fstab?
I've actually never seen a system *not* know what the first drive was, hardware-wise. And grub will normally point to root (hd0,x), not UUID or anything else. You *can* (and I do, all the time) use LABEL= on the kernel line.
mark
On 08/23/12 4:15 PM, Jobst Schmalenbach wrote:
I will try the LABEL way of doing it ...
The problem with labels is that there's no guarantee they will be unique. The default labels that the CentOS installer uses are the same on every system, so if you plug a drive into another computer, the odds are pretty high that there will be a collision.
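If you do plug a foreign drive in, something like this will show the clash (device names are just examples):

    blkid -o list                # LABEL and UUID of every detected filesystem
    findfs LABEL=/boot           # which device LABEL=/boot currently resolves to
    e2label /dev/sdc1 oldboot    # relabel the transplanted partition to clear the clash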
I've done that before to get some old data off a drive and the system appended a "1" to all matching label names.
On 8/23/2012 5:42 PM, John R Pierce wrote:
On 08/23/12 4:15 PM, Jobst Schmalenbach wrote:
I will try the LABEL way of doing it ...
The problem with labels is that there's no guarantee they will be unique. The default labels that the CentOS installer uses are the same on every system, so if you plug a drive into another computer, the odds are pretty high that there will be a collision.
Ken godee wrote:
I've done that before to get some old data off a drive and the system appended a "1" to all matching label names.
On 8/23/2012 5:42 PM, John R Pierce wrote:
On 08/23/12 4:15 PM, Jobst Schmalenbach wrote:
I will try the LABEL way of doing it ...
The problem with labels is that there's no guarantee they will be unique. The default labels that the CentOS installer uses are the same on every system, so if you plug a drive into another computer, the odds are pretty high that there will be a collision.
I'll step into this again: let's look at the context.
1. A drive's failed. No conflict.
2. A server's failed, and you want something off one of its disks:
   a) You put it in a hot-swap bay and aren't rebooting the server - you are going to be mounting it manually, so no conflict.
   b) You need to replace the server in ~10 sec: you throw the drive(s) into a standby box, and either
      i. it's got partitions labelled /boot and /; fine, you *want* it to use those, or
      ii. you want a drive from another disk on that failed system: no problem - see 2.a.
   c) You have a system without hot-swap bays, you install the drive from the failed system, and then you do have to power up; this is the only case I can think of, off the top of my head, where you have a collision. In this case, you need linux rescue and a relabel.
So, where's the big issue with std. labels?
mark
On Fri, Aug 24, 2012 at 9:24 AM, m.roth@5-cent.us wrote:
I'll step into this again: let's look at the context.
1. A drive's failed. No conflict.
2. A server's failed, and you want something off one of its disks:
   a) You put it in a hot-swap bay and aren't rebooting the server - you are going to be mounting it manually, so no conflict.
   b) You need to replace the server in ~10 sec: you throw the drive(s) into a standby box, and either
      i. it's got partitions labelled /boot and /; fine, you *want* it to use those, or
      ii. you want a drive from another disk on that failed system: no problem - see 2.a.
   c) You have a system without hot-swap bays, you install the drive from the failed system, and then you do have to power up; this is the only case I can think of, off the top of my head, where you have a collision. In this case, you need linux rescue and a relabel.
So, where's the big issue with std. labels?
You power down, add some disks that you want to re-use. Maybe even add a controller. Just because a bay looks like you can hot-swap doesn't mean it is a good idea if you don't have to. You boot up. When the label scheme was first rolled out, the machine wouldn't boot if it found a duplicate. Now it will pick one. Possibly the wrong one. As you might when you do a rescue boot for the relabel since you won't know which controller is detected first.
Les Mikesell wrote:
On Fri, Aug 24, 2012 at 9:24 AM, m.roth@5-cent.us wrote:
I'll step into this again: let's look at the context.
1. A drive's failed. No conflict.
2. A server's failed, and you want something off one of its disks:
   a) You put it in a hot-swap bay and aren't rebooting the server - you are going to be mounting it manually, so no conflict.
   b) You need to replace the server in ~10 sec: you throw the drive(s) into a standby box, and either
      i. it's got partitions labelled /boot and /; fine, you *want* it to use those, or
      ii. you want a drive from another disk on that failed system: no problem - see 2.a.
   c) You have a system without hot-swap bays, you install the drive from the failed system, and then you do have to power up; this is the only case I can think of, off the top of my head, where you have a collision. In this case, you need linux rescue and a relabel.
So, where's the big issue with std. labels?
You power down, add some disks that you want to re-use. Maybe even add a controller. Just because a bay looks like you can hot-swap doesn't mean it is a good idea if you don't have to. You boot up.
Okayyyy... We differ, here - I've come to adore hot-swap bays, and hate having to take a system apart to add another drive.
Reused disks - I reformat them, usually in a hot swap bay.
Of course, I *do* have some additional concerns - I have to worry about PII and HIPAA data that may, *possibly*, be on the drives.
When the label scheme was first rolled out, the machine wouldn't boot if it found a duplicate. Now it will pick one. Possibly the wrong one. As you might when you do a rescue boot for the relabel since you won't know which controller is detected first.
But you can do a rescue, mount, and look at what's on what the controller found.
mark
On Fri, Aug 24, 2012 at 10:04 AM, m.roth@5-cent.us wrote:
So, where's the big issue with std. labels?
You power down, add some disks that you want to re-use. Maybe even add a controller. Just because a bay looks like you can hot-swap doesn't mean it is a good idea if you don't have to. You boot up.
Okayyyy... We differ, here - I've come to adore hot-swap bays, and hate having to take a system apart to add another drive.
Same here, in terms of the actual swap. But I'm old enough to remember electronics that were sensitive to static, power fluctuations, etc., so I generally power down while doing it. And I don't want to create a scenario where the machine might do something unexpected if it did happen to reboot with the disks added.
Reused disks - I reformat them, usually in a hot swap bay.
Same here, but I've had unwanted surprises from duplicate labels before the format. Hence the conclusion that duplicate labels are as bad an idea as duplicate hostnames, IP addresses, or any other identifier would be.
Of course, I *do* have some additional concerns - I have to worry about PII and HIPAA data that may, *possibly*, be on the drives.
I normally don't have to worry about contents unless the disks leave the site.
When the label scheme was first rolled out, the machine wouldn't boot if it found a duplicate. Now it will pick one. Possibly the wrong one. As you might when you do a rescue boot for the relabel since you won't know which controller is detected first.
But you can do a rescue, mount, and look at what's on what the controller found.
And they all look alike...
On 08/23/12 15:01, Jobst Schmalenbach wrote:
Hi Adrian
Hi!
Yes, this will do. Since I do not know the UUIDs of the new partitions (drives) yet: if I specify the UUIDs for the partitions of the known drives, will the kernel assign the new drives to higher sdX names?
AFAIK the sdX names follow the order in which the controllers/disks are detected at boot, which is why the sdX naming changes when new hardware is added and/or something changes hardware-wise. If you decide to use UUID naming, you should use it for ALL disks/partitions. For CentOS that means that, besides fstab, you modify grub to have something like root=UUID= on the kernel command line. IMHO the easiest way to change everything to UUID is to boot a live CD, find out all the UUIDs, and modify fstab and grub accordingly. I don't know about root (hd0,0) .. I have grub and / installed on a disk which the system recognizes as /dev/sdc, and in grub.conf I have (hd2,msdos1).
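For example (the UUID and kernel version are made up; fill in what blkid reports on your system):

    blkid                                    # list UUIDs of all detected filesystems
    # /boot/grub/grub.conf - kernel line by UUID instead of /dev/sdX
    kernel /vmlinuz-2.6.18-308.el5 ro root=UUID=0b2c1a9e-1234-4f6a-9c1d-0d9e8f7a6b5c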
HTH, Adrian
Is this correct?
thanks Jobst
On Thu, Aug 23, 2012 at 12:49:38PM +0300, Adrian Sevcenco (Adrian.Sevcenco@cern.ch) wrote:
On 08/23/12 12:13, Jobst Schmalenbach wrote:
Is there a way to tell the kernel in which order to load the drives, so that the new drives are assigned sdc and sdd and the old drives keep sda and sdb?
Use UUID= in fstab (see lsblk -o NAME,KNAME,UUID) and you will get rid of all these headaches. (If you have software RAID, the assembly is done internally based on UUID, so you don't have to worry about mdraid.)
HTH, Adrian
Adrian Sevcenco wrote:
On 08/23/12 15:01, Jobst Schmalenbach wrote:
<snip>
I don't know about root (hd0,0) .. I have grub and / installed on a disk which the system recognizes as /dev/sdc, and in grub.conf I have (hd2,msdos1).
<snip> I've never seen anything like msdos1 - I assume it's a label, but I have seen nothing suggesting I could do that. At any rate, that's h/d 3, or the drive presented third to the BIOS?
mark
Adrian Sevcenco wrote:
On 08/23/12 15:01, Jobst Schmalenbach wrote:
<snip>
I don't know about root (hd0,0) .. I have grub and / installed on a disk which the system recognizes as /dev/sdc, and in grub.conf I have (hd2,msdos1).
Ok... to follow myself up, I started looking. There is zero indication in any manpage about the syntax you use. After doing some googling, I finally found http://www.gnu.org/software/grub/manual/grub.html#Device-syntax, and geez, the GNU documentation is *dreadful* - why isn't there a syntax description for the root line of a grub entry? Then I found the above, which is *not* about the root directive, and it gives examples including msdos1 and msdos5, but with no explanation of what those names are - labels? the first and fifth partitions of an msdos-format disk?
As I said in my previous post, I've never seen anything like that - it's always root (hdx,y).
mark
On 08/23/12 17:41, m.roth@5-cent.us wrote:
Adrian Sevcenco wrote:
On 08/23/12 15:01, Jobst Schmalenbach wrote:
<snip>
I don't know about root (hd0,0) .. I have grub and / installed on a disk which the system recognizes as /dev/sdc, and in grub.conf I have (hd2,msdos1).
Ok... to follow myself up, I started looking. There is zero indication in any manpage about the syntax you use. After doing some googling, I finally found http://www.gnu.org/software/grub/manual/grub.html#Device-syntax, and geez, the GNU documentation is *dreadful* - why isn't there a syntax description for the root line of a grub entry? Then I found the above, which is *not* about the root directive, and it gives examples including msdos1 and msdos5, but with no explanation of what those names are - labels? the first and fifth partitions of an msdos-format disk?
As I said in my previous post, I've never seen anything like that - it's always root (hdx,y).
Err, sorry, that was my mistake ... I copy-pasted from the wrong terminal (from my desktop Fedora 16 with GRUB 2, instead of from the CentOS server I was looking at initially).
So, to wrap things up: on my CentOS 5 storage box I have root (hd0,0), and that has stayed the same no matter how many block devices I added or removed from my hardware card ... but in fstab I use only UUIDs.
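For anyone following along, the two syntaxes look like this (the disks and partitions are only examples):

    # GRUB legacy (CentOS 5/6): (disk,partition), both counted from 0
    root (hd0,0)              # first disk, first partition

    # GRUB 2 (e.g. Fedora 16): partitions counted from 1, partition-table type spelled out
    set root=(hd2,msdos1)     # third disk, first partition of an MBR (msdos) disk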
HTH, Adrian
mark
Adrian Sevcenco wrote:
On 08/23/12 17:41, m.roth@5-cent.us wrote:
Adrian Sevcenco wrote:
On 08/23/12 15:01, Jobst Schmalenbach wrote:
<snip>
I don't know about root (hd0,0) .. I have grub and / installed on a disk which the system recognizes as /dev/sdc, and in grub.conf I have (hd2,msdos1).
Ok... to follow myself up, I started looking. There is zero indication in any manpage about the syntax you use. After doing some googling, I finally found http://www.gnu.org/software/grub/manual/grub.html#Device-syntax, and geez, the GNU documentation is *dreadful* - why isn't there a syntax description for the root line of a grub entry? Then I found the above, which is *not* about the root directive, and it gives examples including msdos1 and msdos5, but with no explanation of what those names are - labels? the first and fifth partitions of an msdos-format disk?
As I said in my previous post, I've never seen anything like that - it's always root (hdx,y).
Err, sorry, that was my mistake ... I copy-pasted from the wrong terminal (from my desktop Fedora 16 with GRUB 2, instead of from the CentOS server I was looking at initially).
Oh, no problem. I can't see how you could *possibly* have copied from the wrong term (says the guy with seven open for use, sudo -s on three, and one for later use for streaming media....)
So, to wrap things up: on my CentOS 5 storage box I have root (hd0,0), and that has stayed the same no matter how many block devices I added or removed from my hardware card ... but in fstab I use only UUIDs.
Yup, that's what we've got. As I said, though, I hate UUIDs - for any non-RAIDed drives we *always* label them to indicate where they mount. That way we all know what's expected, whereas a UUID tells you nothing about what it is.
mark