[CentOS] Rescan harddisk size without rebooting

Barry Brimer barry.brimer at bigfoot.com
Wed Nov 9 22:56:20 UTC 2005


Quoting kadafax <kadafax at gmail.com>:

> (I found that the module used was qla6320; I don't know if there is an
> issue with this, but I thought that qla2200 would be more appropriate
> since the host card is a QLA200.)
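For reference, one way to double-check which QLogic driver is actually bound to the HBA (these are generic commands, nothing here is specific to this setup):

---
[root@X ~]# lspci | grep -i qlogic   # identify the HBA model
[root@X ~]# lsmod | grep -i qla      # see which qla* module is loaded
[root@X ~]# cat /proc/scsi/scsi      # list the SCSI devices it exposes
---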
> let's go:
> Bryan J. Smith wrote:
>
> >kadafax <kadafax at gmail.com> wrote:
> >
> >
> >>Almost there: the new size is reflected... but the disks
> >>(I've got two virtual disks from the SAN, which are seen as
> >>SCSI disks by the system) are now described as /dev/sdd and
> >>/dev/sde (initially /dev/sdb and sdc)
> >>
> > Aww crap.  Yeah, forgot to mention you might need to release
> >the previous device assignments.
> >
> How do I release the device assignments? I didn't find any clue on Google.
>
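For reference, on a 2.6 kernel (CentOS 4) a stale SCSI device can usually be released and the HBA rescanned through sysfs, without unloading the driver. This is only a sketch: the device name (sdb) and host number (host0) are assumptions, so check /sys/class/scsi_host/ for the QLogic host first.

---
[root@X ~]# echo 1 > /sys/block/sdb/device/delete          # drop the stale device entry
[root@X ~]# echo "- - -" > /sys/class/scsi_host/host0/scan # rescan all channels/targets/LUNs
---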
> >  Kernel 2.6 (CentOS 4) is
> >better at doing that automagically than kernel 2.4 (CentOS
> >3).  I assume you're on CentOS 3?
> >
> >
> Nope, CentOS 4.2.
>
> >>(Below is the output from "fdisk -l"; the disk which grew was
> >>initially sdb.)
> >>And "pvdisplay" does not see any new physical extents
> >>available (perhaps I'm missing something here; I'm
> >>totally new to this and a long way from being comfortable
> >>with it.)
> >>
> >>
> >Part of the problem might be that we probably should have
> >taken the volume group, and its physical extents, off-line
> >before doing the previous steps.  I should have thought this
> >through more, sorry.
> >
> ( come on :) )
> Will this take the physical extents off-line along with the VG?:
> ---
> [root@X ~]# vgchange -a n /dev/VG-B/LV-B
> ---
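For what it's worth, vgchange operates on a volume group name rather than an LV path, and "-a n" deactivates every LV in that group. Assuming the VG name from earlier in the thread, that would look roughly like:

---
[root@X ~]# vgchange -a n VG-B   # deactivate all LVs in VG-B
[root@X ~]# vgchange -a y VG-B   # reactivate them afterwards
---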
>
> >  Although it probably won't be an
> >issue either.
> >
> >You could run "vgscan" and see what happens.
> >
> >
> After the 'vgchange -a n', then 'rmmod' and 'modprobe', then the 'vgchange -a y':
> ---
> [root@X ~]# vgscan
>   Reading all physical volumes.  This may take a while...
>   Found volume group "VG-C" using metadata type lvm2
>   Found volume group "VG-B" using metadata type lvm2
>   Found volume group "VolGroup00" using metadata type lvm2
> ---
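Putting the steps just described together, the sequence looks roughly like this (the module and VG names are simply the ones mentioned earlier in the thread):

---
[root@X ~]# vgchange -a n VG-B   # take the volume group off-line
[root@X ~]# rmmod qla6320        # unload the QLogic driver
[root@X ~]# modprobe qla6320     # reload it so the new LUN size is detected
[root@X ~]# vgchange -a y VG-B   # bring the volume group back on-line
[root@X ~]# vgscan               # re-read the LVM metadata
---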
>
> >>Should I be worried about the "non-valid partition table"
> >>message?
> >>
> >>
> >Nope.  Next time you boot, it will self-correct.  LVM is
> >self-contained, so the vgscan handles locating all volumes.
> >
> >>I can use the LV (mount and work with files), so I don't
> >>know if it is a problem. Maybe the fact that there is no
> >>partition table for fdisk is why I can't make it grow?
> >>
> >>
> >
> >Hmmm, not sure.  Again, maybe a vgscan would correct things?
> >
> >
> >
> >>I mean, maybe I should have first created one
> >>partition (type LVM), then created a second one with the
> >>extra size?
> >>Here is what I did to create the logical volume:
> >>---
> >>[root@X ~]# pvcreate /dev/sdb
> >>
> >>
> >
> >Well, that won't work because the volumes aren't there on
> >/dev/sdb.
> >
> >
> >
> >>[root@X ~]# vgcreate VG-B /dev/sdb
> >>[root@X ~]# lvcreate -l 166911 VG-B -n LV-B
> >>// Note: 166911 was the maximum number of physical extents
> >>available before the expansion
> >>[root@X ~]# mkfs.ext3 /dev/VG-B/LV-B
> >>---
> >>from there I could use the volume.
> >>
> >>
> The above is what I initially did to create the LV, before adding a HD
> to the SAN and before trying to expand it without having to reboot.
> More details:
> In the SAN: 5 (then 6) * 250GB HDs in one disk pool (RAID 5)
>   2 virtual disks: 1 * 250GB <-- VG-C / LV-C
>                    3 * 250GB <-- I initially created VG-B / LV-B
>                                  with this capacity
>
> Then I added another 250GB HD to the pool and assigned it to this
> virtual disk. Now I'm trying to expand the VG/LV without rebooting.
> (Raw available capacity is less than 250GB for each disk because on
> this SAN (AX100sc) the OS (WinXP...) reserves 20GB on each of the first
> 3 HDs, and since the others are in the same pool, I've "lost" 20GB on
> each disk. (OT: I find this a little crappy...))
>
> >>---
> >>[root@onyx install]# pvdisplay
> >>  --- Physical volume ---
> >>  PV Name               /dev/sde
> >>  VG Name               VG-C
> >>  PV Size               217.00 GB / not usable 0
> >>  Allocatable           yes (but full)
> >>  PE Size (KByte)       4096
> >>  Total PE              55551
> >>  Free PE               0
> >>  Allocated PE          55551
> >>  PV UUID               w3Q4hA-ALnz-4UuH-fdBB-FGOT-Rn2t-4iG2Vv
> >>
> >>  --- Physical volume ---
> >>  PV Name               /dev/sdd
> >>  VG Name               VG-B
> >>  PV Size               652.00 GB / not usable 0
> >>  Allocatable           yes (but full)
> >>  PE Size (KByte)       4096
> >>  Total PE              166911
> >>  Free PE               0
> >>  Allocated PE          166911
> >>  PV UUID               AFcoa6-6eJl-AAd3-G0Vt-OTXR-niPn-3bTLXu
> >>
> >>  --- Physical volume ---
> >>  PV Name               /dev/sda2
> >>  VG Name               VolGroup00
> >>  PV Size               136.50 GB / not usable 0
> >>  Allocatable           yes
> >>  PE Size (KByte)       32768
> >>  Total PE              4368
> >>  Free PE               1
> >>  Allocated PE          4367
> >>  PV UUID               OjQEXi-u3F8-wCXO-1eZY-W2j0-iZyG-5PwwnB
> >>----
> >>
> >>
> >
> >Well, your volumes are clearly showing up on /dev/sdd and
> >sde.  Those are probably the device(s) you have to target for
> >expansion.
> >
> >
> But /dev/sdb and sdc are gone; the PVs have just followed the shift
> of the device assignments, and the new physical extents are still not
> available. After taking the VG off-line (vgchange -a n) and unloading
> then reloading the qla module, the new correct size is shown by fdisk
> but not by LVM (no new physical extents appeared). How do I gain the
> new extents within LVM now that the disk has grown?
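For reference, once the kernel sees the larger /dev/sdd, the LVM side is usually grown along these lines. This is only a sketch: it assumes an LVM2 release that actually implements pvresize (early ones did not), NNNNN is a placeholder for the Free PE count reported by vgdisplay, and ext2online is the on-line ext3 grow tool shipped with RHEL/CentOS 4 (resize2fs does it off-line).

---
[root@X ~]# pvresize /dev/sdd                  # grow the PV to the new device size
[root@X ~]# vgdisplay VG-B                     # check how many Free PE appeared
[root@X ~]# lvextend -l +NNNNN /dev/VG-B/LV-B  # hand the new extents to the LV
[root@X ~]# ext2online /dev/VG-B/LV-B          # grow the ext3 filesystem on-line
---

If pvresize is not available, an alternative is to put the new space into a partition of its own, pvcreate it, and add it to the volume group with vgextend.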
>
> (It's late here in Europe; I will be back tomorrow. Thanks again for
> the support.)

I don't know if it will help your situation, but partprobe will force the kernel
to reread your partition table.
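For example (the device name is an assumption based on the thread; partprobe ships with the parted package):

---
[root@X ~]# partprobe /dev/sdd   # ask the kernel to re-read the partition table on /dev/sdd
---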


