Hi
CentOS 5.4 (final), kernel 2.6.18-164el5PAE. I am trying to prevent removable USB and eSATA devices from occupying /dev/sdX slots ahead of a 3ware RAID controller. For example: at boot, if a USB drive and an eSATA HDD (connected to an LSI 1068E onboard controller, reflashed in "IT" mode to handle hotplug devices) were both present, they would occupy /dev/sdb and /dev/sdc, ahead of the RAID controller, which ends up as /dev/sdd. As these are removable devices, they should normally be handled by a custom udev script looking for add events matching KERNEL=="sd[c-z][0-9]", SUBSYSTEM=="block". The result is that the volume handled by the RAID controller gets grabbed by udev but fails to mount, and subsequent udev hotplug events fail because of the slots left empty below /dev/sdd. If no hotplug devices are present while booting, fstab handles mounting of the system and RAID volumes:
    # SATA system HDD
    /dev/VolGroup00/LogVol00   /
    # RAID array (mounts as /dev/sdb1)
    LABEL=STORE                /store
I realise this description is kind of a tangle, but I am essentially looking for a way to hard-map the 3ware RAID controller to /dev/sdb (UUID won't work, as there are multiple copies of this system) before PCI (?) enumeration picks up the USB and LSI-managed devices, so that udev can take care of the devices at /dev/sdc and above. I've tried blacklisting the mpt and usb-storage modules and short-circuiting SUBSYSTEM=="block" devices in 05-udev-early.rules, all with zero or negative effect. rc.sysinit doesn't appear to be the right place, and that's about as deep down as I know how to go.
cheers,
cs
Cal Sawyer wrote on 03/31/2011 08:13 AM:
Hi
CentOS 5.4(final) 2.6.18-164el5PAE.
I hope you are aware that you are using a very obsolete OS with a lot of known (i.e. exploitable) security holes and bugs that have subsequently been fixed.
...
I realise this description is kind of a tangle
Indeed. Why does a line in /etc/fstab like
LABEL=STORE /store ext3 defaults 1 2
not work?
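If in doubt whether the array's filesystem actually carries that label, e2label will show it (assuming the array's partition happens to be sdb1 at the moment you check):

    # e2label /dev/sdb1
    STORE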
On 31/03/11 15:24, Phil Schaffner wrote:
Cal Sawyer wrote on 03/31/2011 08:13 AM:
CentOS 5.4(final) 2.6.18-164el5PAE.
I hope you are aware that you are using a very obsolete OS with a lot of known (i.e. exploitable) security holes and bugs that have subsequently been fixed.
Do you really mean "OS" - i.e. CentOS 5.4? I would guess you are merely referring to the kernel, 2.6.18-164el5PAE.
N
On Thu, Mar 31, 2011 at 8:46 AM, Nick oinksocket@letterboxes.org wrote:
On 31/03/11 15:24, Phil Schaffner wrote:
Do you really mean "OS" - i.e. CentOS 5.4? I would guess you are merely referring to the kernel, 2.6.18-164el5PAE.
You would seriously want to go through all the security updates that came out after 2009-09-02 and assess the vulnerability of your CentOS 5.4 system:
https://rhn.redhat.com/errata/rhel-server-errata-security.html
Akemi
On Thu, Mar 31, 2011 at 6:33 PM, Akemi Yagi amyagi@gmail.com wrote:
You would seriously want to go through all the security updates that came out after 2009-09-02 and assess the vulnerability of your CentOS 5.4 system:
https://rhn.redhat.com/errata/rhel-server-errata-security.html
Akemi
I don't think the OP asked how secure, or insecure, his system is. So please try to keep on topic?
@Cal,
You could assign a LABEL to each hard drive. The LABEL is attached to the drive's UUID (I think?), so even if you move the drive to another port it will still be accessible via the same LABEL.
Look here: http://lissot.net/partition/ext2fs/labels.html
On Thu, Mar 31, 2011 at 06:57:00PM +0200, Rudi Ahlers wrote:
I don't think the OP asked how secure, or insecure, his system is. So please try to keep on topic?
Whether asked for or not it is negligent to _not_ point out that there are holes large enough to fly a space shuttle through on that box; just as it is negligent to _not_ encourage the OP to update the box.
@Cal,
This is twitter now? :)
John
On Thu, Mar 31, 2011 at 7:02 PM, John R. Dennison jrd@gerdesas.com wrote:
Whether asked for or not it is negligent to _not_ point out that there are holes large enough to fly a space shuttle through on that box; just as it is negligent to _not_ encourage the OP to update the box.
But as with so many posts on the mailing lists these days, everyone seems to wander off from the original topic and not even bother to help the OP with the original question. Surely he has a good reason why his system is not updated yet.
Rudi Ahlers wrote:
But as with so many posts on the mailing lists these days, everyone seems to wander off from the original topic and not even bother to help the OP with the original question. Surely he has a good reason why his system is not updated yet.
Amen.
To the OP, from man mke2fs:
-L new-volume-label
     Set the volume label for the filesystem to new-volume-label. The maximum length of the volume label is 16 bytes.
Method 1: When creating a new filesystem with mke2fs, use the "-L mylabel" option, or use e2label to change the label on a previously created filesystem. In /etc/fstab use a corresponding LABEL=mylabel filesystem spec (first field).
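For example (device name assumed here - substitute whatever the array currently shows up as):

    # set the label when creating the filesystem ...
    mke2fs -j -L mylabel /dev/sdb1
    # ... or change it on an existing filesystem
    e2label /dev/sdb1 mylabel

and then in /etc/fstab:

    LABEL=mylabel  /store  ext3  defaults  1 2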
Method 2: Use blkid to get the UUID for your partitions. Then use UUID=00000000-1111-2222-3333-444455556666 in /etc/fstab in the first field. From my point of view the advantage is that if, during some service manoeuvre, a disk is shuffled into the wrong slot, you can puzzle out how to put Humpty Dumpty back together. (Having previously made an off-host copy of /etc/fstab.)
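Something along these lines (the values below are made up, but the format is what blkid prints):

    # blkid /dev/sdb1
    /dev/sdb1: LABEL="STORE" UUID="00000000-1111-2222-3333-444455556666" TYPE="ext3"

and the matching /etc/fstab entry:

    UUID=00000000-1111-2222-3333-444455556666  /store  ext3  defaults  1 2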
On Thursday, March 31, 2011 10:24:42 am Phil Schaffner wrote:
I hope you are aware that you are using a very obsolete OS with a lot of known (i.e. exploitable) security holes and bugs that have subsequently been fixed.
Not to pick on you, Phil, but the OP may have very specific reasons to run that particular system; and he may not. And while the advice 'please update' is a good thing, it doesn't really answer the question at hand.
Cal, you might want to rethink your udev script, for one. Also, module load order may not be something you can easily control: while you can easily load the RAID controller's module first and then load the others, that is no guarantee that the RAID controller will be detected first. These days you have to be resilient to drive device changes, even for the root filesystem. The module load order lives in the initramfs (initrd), and you'd have to go in and hack that to get things loading in the order you want. But that will all break when you go to C6 or later, where the initrd is built by dracut and uses udev itself, and hardcoding module load order becomes much more difficult.
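One alternative to matching on sd[c-z] at all: give the array a persistent name of its own with a udev rule and stop caring which sdX it lands on. This is an untested sketch - the vendor string is a guess at what the 3ware unit reports, so check the real attribute values first:

    # find the real attribute values with:
    #   udevinfo -a -p $(udevinfo -q path -n /dev/sdX)
    # /etc/udev/rules.d/10-3ware-array.rules (sketch only)
    KERNEL=="sd*", SUBSYSTEM=="block", SYSFS{vendor}=="AMCC*", SYMLINK+="3ware_store%n"

Then fstab (or your hotplug script) can refer to /dev/3ware_store1 no matter which sdX the kernel hands the array.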
For instance, I have a Fedora 14 box (which acts much like a CentOS 6 box would act, and not like a C5 box acts, but that's beside the point) where different boots can bring up a different device order; in this particular case, I have a 3Ware RAID controller from which I boot and which has four 250GB SATA drives on it in RAID5 for boot and root, then a SATA/eSATA 64-bit PCI-X board (two internal SATA, two eSATA, 3Gb/s ports) with two internal 750GB drives in MD RAID 1, and I do boot with eSATA drives plugged in, or not, at different times. This box is also connected via dual-port Fibre Channel to our SAN, and it has several multipath LUNs associated with that.
My last detected SCSI device is currently /dev/sdab, but that is without any eSATA or USB devices, so I could see /dev/sdad or higher on occasion; and because it's set up with multipathing based on scsi_dh_emc, it doesn't have contiguous device names, either (the current setup is: sda, sdb, sdf, sdg, sdi, sdj, sdp, sdu, sdw, sdx, sdy, sdab). This is not only subject to change at each boot, but it's subject to change while the system is running, thanks to scsi_dh_emc, and thanks to there being more than even two paths to a given LUN.
I haven't had any trouble, even when the RAID array was the one that got detected dead last, as /dev/sdac (in /etc/fstab, /boot is mounted by UUID, and root by LVM).
But I'm also not automounting hotplugged eSATA drives, either. USB devices get automounted into /media with their labels as they should; that's the out-of-box behavior.
Trying to get specific about device order is going to become increasingly difficult as time goes on; you might consider trying to get away from hardcoded drive names.
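The /dev/disk/ symlinks udev already creates are the easy way to do that. For instance (the listing below is illustrative, not from a real box):

    $ ls -l /dev/disk/by-label/
    lrwxrwxrwx 1 root root 10 Mar 31 12:00 STORE -> ../../sdd1

An fstab entry can point at /dev/disk/by-label/STORE (or the equivalent by-id or by-uuid link) instead of a raw /dev/sdX name, and it keeps working no matter where the array lands.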
But in the short term, try starting your hotplugged devices at /dev/sde or f and see if that fixes the /dev/sdd not working issue.
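In other words, just loosen the match in your hotplug rule so it starts above wherever the array can land, and leave the rest of the rule as you have it - something like:

    KERNEL=="sd[e-z][0-9]", SUBSYSTEM=="block", ... (your existing action)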