On Friday, September 30, 2011 12:26:28 PM Les Mikesell wrote:
> On Fri, Sep 30, 2011 at 11:03 AM, Lamar Owen lowen@pari.edu wrote:
> For example, when mounting by label was first implemented, having a duplicate label (very likely if you move disks around at all, since the installer always used the same labels) would keep the system from booting at all. You had to just say 'what were they thinking...' - and wonder about the rest of the system.
Again I'll say that no matter what scheme is used, there are issues and problems. 'What were they thinking?' is something that obviously the nameless 'they' must answer for themselves, but at the same time I'm reminded of the old engineering adage 'the better is the enemy of the good enough,' meaning that while you can always make a product 'better,' you must recognize when it is good enough for the targeted use case. And if your particular corner case is not the targeted use case... well, things do break. Try not to have known corner cases, or be prepared to work around the breakage.
But 'breakage' and 'bugginess' are not synonyms; something can be broken for a corner case without being a bug in the general sense. Is the current filesystem mounting standard broken? In certain use cases, most certainly. Is the current filesystem mounting standard buggy? For the targeted use cases, probably not. After all, upstream developers and CentOS builders all operate within finite resource limits; it takes infinite resources to reach perfection.
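Incidentally, spotting a duplicate label before it bites is quick to do. This is just a sketch, assuming blkid and udev's by-label symlinks are available (they are on both 5.x and 6.x as far as I recall):

# Print the LABEL/UUID of every filesystem the kernel can see
blkid
# If two filesystems share a label, only one of them wins the by-label symlink
ls -l /dev/disk/by-label/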
>> It's not at all hard to change the labels after the install.
> To what? It's something that is going to hold some data in the future. And you may not know you need to re-mount it until the machine that labeled it is gone or dead and the drive is all that is left.
The only truly unique identifier that belongs to the drive and is externally visible is the drive's serial number. Or you can literally and physically label the disk with information about its filesystems; I've both seen that done and done it myself in certain hotswap cases.
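For what it's worth, relabeling after the fact and reading the serial are both one-liners; a rough sketch, with the device name and label below made up purely for illustration:

# Change an ext2/3/4 filesystem label after install (device and label are examples)
e2label /dev/sdb1 data01
# udev exposes model and serial number as stable symlinks, independent of sdX ordering
ls -l /dev/disk/by-id/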
> Within 5.x I've found auto-assembled md devices to be pretty reliable at identifying themselves, but booting the 6.x livecd completely messed that up on the one machine where I tried it.
There seem to be enough differences between the md schemes of 5.x and 6.x to discourage disk interchange between the two in mdraid cases. Having said that, I have an EL6.1 (upstream EL) machine with this:

[root@www ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdae1[0] sdaf1[1]
      732570841 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@www ~]#
Yeah, md127. But it works reliably, so why change it? When LUNs go away or whatnot, the member disks change device names between boots (for a while they were /dev/sdw and /dev/sdx; then I added some LUNs to the fibre channel and they went to /dev/sdz and /dev/sdaa; I've since added a LUN or two and, thanks to multipathing, they are now at /dev/sdae and /dev/sdaf), yet the mirror hasn't broken. And this md set was created under CentOS 5 a couple of years back.
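If the md127 name ever did bother me, pinning it down would look roughly like this; just a sketch, and the member device name is simply where that disk happens to sit today:

# The array records its own UUID and name in the member superblocks,
# so its identity survives the /dev/sdX renumbering
mdadm --examine /dev/sdae1
# Capture the assembled arrays so mdadm.conf can give them stable names across boots
# (on 6.x the initramfs may also need regenerating for the name to stick at early boot)
mdadm --detail --scan >> /etc/mdadm.conf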
This would definitely break things if the mount points in /etc/fstab were keyed by md number; that's not the case here, though: the filesystem is mounted by label.
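Concretely, the fstab line looks something like the following; the label, mount point, and filesystem type here are made up for the example:

# Keyed by label, so neither the sdX shuffle nor the md name matters
LABEL=data01    /srv/data    ext4    defaults    1 2
# Mounting by UUID works the same way and sidesteps the duplicate-label problem
# UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789    /srv/data    ext4    defaults    1 2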
But is that buggy? Depends entirely on use case.