Hi list, I have a SAN attached to a CentOS 4.2 server. I have expanded the size of the virtual disk within the SAN (by adding a new HD to the disk pool) and need CentOS to see the new size (CentOS sees it as /dev/sdb). I'm using LVM. Do you know a method for the Volume Group to see that one of its hard disks is now bigger, without rebooting? (A reboot itself is not a problem, but since the LDAP directory is on this server, it is problematic.) Thanks. kfx
On Wed, 2005-11-09 at 11:52 +0100, kadafax wrote:
Hi list, I have a SAN attached to a CentOS 4.2 server. I have expanded the size of the virtual disk within the SAN (by adding a new HD to the disk pool) and need CentOS to see the new size (CentOS sees it as /dev/sdb). I'm using LVM. Do you know a method for the Volume Group to see that one of its hard disks is now bigger, without rebooting? (A reboot itself is not a problem, but since the LDAP directory is on this server, it is problematic.) Thanks.
Anytime you expand what Linux sees as a "physical device" -- even though it's a volume over an FC HBA -- it is not always easy to address. You typically have to rescan the SCSI bus (which is what the FC HBA presents the storage as). Unfortunately, if you have filesystems mounted on that device, they may not update -- even if you're using LVM on top.
Last time I checked, there was no single command to rescan the SCSI bus in RHEL. Ironically enough, I typically just do a Google search for "Linux SCSI bus scan" and run the few commands listed whenever I need to do so. But I don't know what that might do to your running storage -- it could kill your mounts.
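(For the archives: on a 2.6 kernel such as CentOS 4's, the rescan usually goes through sysfs, something along these lines -- host4 is only an example and has to be matched to your HBA, and note that a plain rescan picks up new LUNs but will not necessarily refresh the size of a LUN the kernel already knows about:)
---
# ls /sys/class/scsi_host/
# echo "- - -" > /sys/class/scsi_host/host4/scan
---
(The three dashes are wildcards for channel, target and LUN.)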
Thanks for the reply, Bryan. In fact, since the mounted FS is for backup purposes only, it's not a big deal to unmount it (far better than disabling the LDAP service with a reboot). I've checked Google and the results it gives seem too heavy for this production server (a new SCSI driver, etc.). So if you have a solution that works on an unmounted volume, I'm very interested. kfx
PS: The SAN is an AX100sc from EMC. The host adapter is a QLA200 from QLogic. OS: CentOS 4.2
kadafax kadafax@gmail.com wrote:
Thanks for the reply, Bryan. In fact, since the mounted FS is for backup purposes only, it's not a big deal to unmount it (far better than disabling the LDAP service with a reboot). I've checked Google and the results it gives seem too heavy for this production server (a new SCSI driver, etc.). So if you have a solution that works on an unmounted volume, I'm very interested. PS: The SAN is an AX100sc from EMC. The host adapter is a QLA200 from QLogic. OS: CentOS 4.2
If all volumes are unmounted, then you can simply remove the driver and modprobe it again.
E.g.,
# /sbin/rmmod qla2200
# /sbin/modprobe qla2200
I'm assuming your /etc/modprobe.conf file has all options (SAN paths, etc...) for the card defined.
Once that is done, your /dev/sdb device should be updated to reflect the new geometry of the storage. Just add the new space as a new physical volume, add it to your existing volume group, etc...
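(Roughly, and assuming the extra space ends up as a new partition -- /dev/sdb2, VG-B and LV-B below are only example names -- the LVM side would look something like this, with the filesystem resize done while the LV is unmounted:)
---
# pvcreate /dev/sdb2
# vgextend VG-B /dev/sdb2
# vgdisplay VG-B | grep "Free PE"
# lvextend -l +<free PE count> /dev/VG-B/LV-B
# resize2fs /dev/VG-B/LV-B
---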
Bryan J. Smith wrote:
<snip>
Almost there: the new size is reflected... but the disks (I've got two virtual disks from the SAN, which are seen as SCSI disks by the system) are now listed as /dev/sdd and /dev/sde (initially /dev/sdb and sdc). Below is output from "fdisk -l"; the disk that grew was initially sdb. And "pvdisplay" does not see any new physical extents available (perhaps I'm missing something here -- I'm totally new to this and a long way from being comfortable with it):
* Before the rmmod/modprobe (see /dev/sdb):
---
[root@X install]# fdisk -l
Disk /dev/sda: 146.6 GB, 146695782400 bytes
255 heads, 63 sectors/track, 17834 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       17834   143147182+  8e  Linux LVM

Disk /dev/sdb: 700.0 GB, 700079669248 bytes
255 heads, 63 sectors/track, 85113 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 233.0 GB, 233001975808 bytes
255 heads, 63 sectors/track, 28327 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table
---
* After:
---
[root@X install]# fdisk -l

Disk /dev/sda: 146.6 GB, 146695782400 bytes
255 heads, 63 sectors/track, 17834 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       17834   143147182+  8e  Linux LVM

Disk /dev/sdd: 933.0 GB, 933081645056 bytes
255 heads, 63 sectors/track, 113440 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 233.0 GB, 233001975808 bytes
255 heads, 63 sectors/track, 28327 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
---
Should I be worried about the "doesn't contain a valid partition table" message? I can use the LV (mount and work with files), so I don't know if it is a problem. Maybe the fact that there is no partition table for fdisk is why I can't make it grow? I mean, maybe I should have first created one partition (type LVM), then created a second one with the extra space? Here is what I've done to create the logical volume:
---
[root@X ~]# pvcreate /dev/sdb
[root@X ~]# vgcreate VG-B /dev/sdb
[root@X ~]# lvcreate -l 166911 VG-B -n LV-B   // Note: 166911 is the maximum number of physical extents available before the growth
[root@X ~]# mkfs.ext3 /dev/VG-B/LV-B
---
From there I could use the volume.
---
[root@onyx install]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sde
  VG Name               VG-C
  PV Size               217.00 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              55551
  Free PE               0
  Allocated PE          55551
  PV UUID               w3Q4hA-ALnz-4UuH-fdBB-FGOT-Rn2t-4iG2Vv

  --- Physical volume ---
  PV Name               /dev/sdd
  VG Name               VG-B
  PV Size               652.00 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              166911
  Free PE               0
  Allocated PE          166911
  PV UUID               AFcoa6-6eJl-AAd3-G0Vt-OTXR-niPn-3bTLXu

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               136.50 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              4368
  Free PE               1
  Allocated PE          4367
  PV UUID               OjQEXi-u3F8-wCXO-1eZY-W2j0-iZyG-5PwwnB
---
kadafax kadafax@gmail.com wrote:
Almost there: the new size is reflected... but the disks (I've got two virtual disks from the SAN, which are seen as SCSI disks by the system) are now listed as /dev/sdd and /dev/sde (initially /dev/sdb and sdc)
Aww crap. Yeah, forgot to mention you might need to release the previous device assignments. Kernel 2.6 (CentOS 4) is better at doing that automagically than kernel 2.4 (CentOS 3). I assume you're on CentOS 3?
(Below is output from "fdisk -l"; the disk that grew was initially sdb.) And "pvdisplay" does not see any new physical extents available (perhaps I'm missing something here -- I'm totally new to this and a long way from being comfortable with it)
Part of the problem might be that we probably should have taken the volume group, and its physical extents, off-line before doing the previous step. I should have thought this through more, sorry. Although it probably won't be an issue either.
You could run "vgscan" and see what happens.
Should I be worried about the "non-valid partition" table message?
Nope. Next time you boot, it will self-correct. LVM is self-contained, so the vgscan handles locating all volumes.
I can use the LV (mount and work with files), so I don't know if it is a problem. Maybe the fact that there is no partition table for fdisk is why I can't make it grow?
Hmmm, not sure. Again, maybe a vgscan would correct things?
I mean, maybe I should have first created one partition (type LVM), then created a second one with the extra space? Here is what I've done to create the logical volume:
[root@X ~]# pvcreate /dev/sdb
Well, that won't work because the volumes aren't there on /dev/sdb.
[root@X ~]# vgcreate VG-B /dev/sdb
[root@X ~]# lvcreate -l 166911 VG-B -n LV-B   // Note: 166911 is the maximum number of physical extents available before the growth
[root@X ~]# mkfs.ext3 /dev/VG-B/LV-B
from there I could use the volume.
<snip pvdisplay output quoted above>
Well, your volumes are clearly showing up on /dev/sdd and sde. Those are probably the device(s) you have to target for expansion.
(I found that the module used was qla6312 -- don't know if there is an issue with this, but I thought qla2200 would be more appropriate since the host card is a QLA200.) Let's go:
Bryan J. Smith wrote:
kadafax kadafax@gmail.com wrote:
Almost there: the new size is reflected... but the disks (I've got two virtual disks from the SAN, which are seen as SCSI disks by the system) are now listed as /dev/sdd and /dev/sde (initially /dev/sdb and sdc)
Aww crap. Yeah, forgot to mention you might need to release the previous device assignments.
How do I release device assignments? I didn't find any clue on Google.
Kernel 2.6 (CentOS 4) is better at doing that automagically than kernel 2.4 (CentOS 3). I assume you're on CentOS 3?
Nope, CentOS 4.2.
(Below is output from "fdisk -l"; the disk that grew was initially sdb.) And "pvdisplay" does not see any new physical extents available (perhaps I'm missing something here -- I'm totally new to this and a long way from being comfortable with it)
Part of the problem might be because we probably should have taken the volume group, and its physical extents, off-line before doing the previous. I should have thought this through more, sorry.
(Come on :) ) Will this take the physical extents off-line along with the VG?
---
[root@X ~]# vgchange -a n /dev/VG-B/LV-B
---
Although it probably won't be of any issue either.
You could run "vgscan" and see what happens.
After the 'vgchange -a n', the 'rmmod & modprobe', then the 'vgchange -a y':
---
[root@X ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VG-C" using metadata type lvm2
  Found volume group "VG-B" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2
---
Should I be worried about the "non-valid partition" table message?
Nope. Next time you boot, it will self-correct. LVM is self-contained, so the vgscan handles locating all volumes.
I can use the LV (mount and work with files), so I don't know if it is a problem. Maybe the fact that there is no partition table for fdisk is why I can't make it grow?
Hmmm, not sure. Again, maybe a vgscan would correct things?
I mean, maybe I should have first created one partition (type LVM), then created a second one with the extra space? Here is what I've done to create the logical volume:
[root@X ~]# pvcreate /dev/sdb
Well, that won't work because the volumes aren't there on /dev/sdb.
[root@X ~]# vgcreate VG-B /dev/sdb
[root@X ~]# lvcreate -l 166911 VG-B -n LV-B   // Note: 166911 is the maximum number of physical extents available before the growth
[root@X ~]# mkfs.ext3 /dev/VG-B/LV-B
from there I could use the volume.
The above is what I initially did to create the LV, before adding an HD to the SAN and before trying to expand it without having to reboot. More details -- in the SAN: 5, then 6, 250GB HDs in one disk pool (RAID 5), and 2 virtual disks:
1 * 250GB <-- VG-C / LV-C
3 * 250GB <-- I initially created VG-B / LV-B with this capacity, then I added another 250GB HD to the pool and assigned it to this virtual disk. Now I'm trying to expand the VG/LV without rebooting.
(The raw available capacity is less than 250GB for each disk because in this SAN (AX100sc) the OS (WinXP...) reserves 20GB on the first 3 HDs, and since the others are in the same pool, I've "lost" 20GB on each disk. (OT: I found this a little crappy...))
<snip pvdisplay output quoted above>
Well, your volumes are clearly showing up on /dev/sdd and sde. Those are probably the device(s) you have to target for expansion.
But /dev/sdb and sdc are gone; the PVs have just followed the shift in the device assignments, and the new available physical extents are not there. After taking the VG off-line (vgchange -a n), then unloading and reloading the qla module, the correct new size is shown by fdisk but not by LVM (no new physical extents appeared). How do I gain the new extents within LVM when the disk has grown?
(It's late here in Europe; I will be back tomorrow. Thanks again for the support.)
Quoting kadafax kadafax@gmail.com:
<snip>
I don't know if it will help your situation, but partprobe will force the kernel to reread your partition table.
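(For reference, partprobe comes with the parted package and takes the device as an argument:)
---
# partprobe /dev/sdd
---
(Whether it helps here depends on the kernel accepting the change for a device whose partitions are in use.)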
kadafax kadafax@gmail.com wrote:
(I found that the module used was qla6312 -- don't know if there is an issue with this, but I thought qla2200 would be more appropriate since the host card is a QLA200.)
There are many variants in the QLogic 200 series. I just listed the common denominator that most people have. I figured if you were messing with SAN solutions, you might know the exact variant/driver you were already running. Sorry about that. I probably should have told you to run "lsmod" to verify first.
Okay, next time I'll make that big, long, step-by-step verbose post that makes the seasoned guys around here cringe (just stick up for me if anyone complains ;-).
how to release device assignments? I didn't find any clue on google.
You have to go into the /proc filesystem. Location may differ between kernel 2.4 and 2.6.
nope CentOS 4.2
Hmmm, interesting. I guess when we loaded the qla2200 module, it went after the same path as another module already loaded(?). I guess I'd have to see the output of your "lsmod" and look around the /proc filesystem.
Eck. It's stuff like this that I just need to be in front of the system, or I could be making things worse for you (and probably already am ;-).
( come on :) ) Will this take off-line the physical extents along with the VG? :
[root@X ~]# vgchange -a n /dev/VG-B/LV-B
I'd rather not say either way without being right at your setup. There are just so many factors. E.g., the fact that your main /dev/sda is live (is it also a SAN connection?). I'd really rather not tell you to do anything else at this point.
after the 'vgchange -a n', 'rmmod & modprobe' then the 'vgchange -a y':
[root@X ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VG-C" using metadata type lvm2
  Found volume group "VG-B" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2
Looks good. I'm just curious what we do about the SCSI mapping though -- let alone the fact that we loaded that other module.
But /dev/sdb and sdc are gone; the PVs have just followed the shift in the device assignments, and the new available physical extents are not there.
Hmmm, what do the raw "fdisk -l /dev/sdd" and "fdisk -l /dev/sde" commands give you? Do you see the extra space (you should)?
I'm just trying to think how you expand the PVs (slice type 8Eh, I assume?) to the actual size of the partitions.
After taking the VG off-line (vgchange -a n), then unloading and reloading the qla module, the correct new size is shown by fdisk but not by LVM (no new physical extents appeared). How do I gain the new extents within LVM when the disk has grown?
Okay, so you ran "fdisk -l" and see it -- good.
You probably need to add another slice (partition) to the devices then. Type it as LVM (8Eh), then add that as another PV.
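(In other words -- with /dev/sdd2 as an example name for the new slice -- the fdisk session and the LVM follow-up would be roughly:)
---
# fdisk /dev/sdd
    n    <-- new partition, taking the free space at the end of the disk
    t    <-- change the type of that partition
    8e   <-- Linux LVM
    w    <-- write the table and exit
# pvcreate /dev/sdd2
# vgextend VG-B /dev/sdd2
---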
(It's late here in europe, I will be back tomorrow. Thanks again for support).
I'll be here (up all night for the next 2 nights, long story).
Bryan J. Smith wrote:
kadafax kadafax@gmail.com wrote:
(I found that the module used was qla6312 -- don't know if there is an issue with this, but I thought qla2200 would be more appropriate since the host card is a QLA200.)
There are many variants in the QLogic 200 series. I just listed the common denominator that most people have. I figured if you were messing with SAN solutions, you might know the exact variant/driver you were already running. Sorry about that. I probably should have told you to run "lsmod" to verify first.
Never too late:
---
[root@X ~]# lsmod
Module                  Size  Used by
qla6312               116545  0
qla2322               135745  0
qla2200                88769  0
qla2100                80961  0
qla2300               126017  0
ipt_REJECT              8641  1
ipt_state               3265  13
ip_conntrack           53657  1 ipt_state
iptable_filter          4417  1
ip_tables              20289  3 ipt_REJECT,ipt_state,iptable_filter
autofs4                23241  0
dcdipm                 68980  2
dcdbas                 50964  2
joydev                 11841  0
button                  9057  0
battery                11209  0
ac                      6729  0
uhci_hcd               34665  0
ehci_hcd               33349  0
hw_random               7137  0
e1000                 110381  0
floppy                 65809  0
sg                     42489  0
dm_snapshot            18561  0
dm_zero                 3649  0
dm_mirror              28889  0
ext3                  137681  6
jbd                    68849  1 ext3
dm_mod                 66433  10 dm_snapshot,dm_zero,dm_mirror
qla2xxx               178849  7 qla6312,qla2322,qla2200,qla2100,qla2300
scsi_transport_fc      11201  1 qla2xxx
megaraid_mbox          40017  2
megaraid_mm            15881  5 megaraid_mbox
sd_mod                 19392  5
scsi_mod              140177  5 sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod
---
[root@X ~]# modprobe -l | grep qla
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2300.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2100.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla6312.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2200.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2322.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2xxx.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla1280.ko
---
[root@X ~]# cat /proc/scsi/qla2xxx/4
QLogic PCI to Fibre Channel Host Adapter for QLA200:
        Firmware version 3.03.15 FLX, Driver version 8.01.00b5-rh2
ISP: ISP6312, Serial# U93681
Request Queue = 0x21980000, Response Queue = 0x33300000
Request Queue count = 2048, Response Queue count = 512
Total number of active commands = 0
Total number of interrupts = 427
    Device queue depth = 0x20
Number of free request entries = 1585
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state = <READY>, flags = 0x1a13
Dpc flags = 0x80000
MBX flags = 0x0
Link down Timeout = 030
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0
Product ID = 4953 5020 2020 0003
SCSI Device Information:
scsi-qla3-adapter-node=200000e08b1f71f2;
scsi-qla3-adapter-port=210000e08b1f71f2;
scsi-qla3-target-0=5006016039201733;

FC Port Information:
scsi-qla3-port-0=50060160b9201733:5006016039201733:0000ef:0;

SCSI LUN Information:
(Id:Lun)  * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 184, Pending reqs 0, flags 0x0, 3:0:00 00
( 0: 1): Total reqs 183, Pending reqs 0, flags 0x0, 3:0:00 00
---
[root@X ~]# cat /proc/partitions
major minor  #blocks  name

   8     0  143257600 sda
   8     1     104391 sda1
   8     2  143147182 sda2
 253     0   51216384 dm-0
 253     1   22216704 dm-1
 253     2    4096000 dm-2
 253     3   51216384 dm-3
 253     4   10256384 dm-4
 253     5    4096000 dm-5
 253     6  683667456 dm-6
 253     7  227536896 dm-7
   8    48  911212544 sdd
   8    64  227540992 sde
---
[root@X ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: PE/PV    Model: 1x6 SCSI BP      Rev: 1.0
  Type:   Processor                        ANSI SCSI revision: 02
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: MegaRAID Model: LD 0 RAID1  139G Rev: 521S
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: RAID 5           Rev: 0217
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi4 Channel: 00 Id: 00 Lun: 01
  Vendor: DGC      Model: RAID 5           Rev: 0217
  Type:   Direct-Access                    ANSI SCSI revision: 04
---
[root@X ~]# ls /proc/ 1 1853 25048 2712 28654 55 9 filesystems locks stat 10 1854 25062 2721 2866 56 acpi fs mdstat swaps 11 2 2510 2738 2867 57 buddyinfo ide meminfo sys 12 227 258 2750 2868 58 bus interrupts misc sysrq-trigger 13 239 259 2790 3 59 cmdline iomem modules sysvipc 1355 23965 2596 2804 30357 6 cpuinfo ioports mounts tty 14 23968 2612 2805 30359 7 crypto irq mtrr uptime 15 2416 2631 2815 30360 75 devices kallsyms net version 152 2420 2660 2822 30386 76 diskstats kcore partitions vmstat 1666 2430 2670 2863 30387 77 dma keys pci 1850 2470 2692 2864 30433 78 driver key-users scsi 1851 2471 2702 2865 4 79 execdomains kmsg self 1852 2500 271 28653 5 8 fb loadavg slabinfo ---
Okay, next time I'll make that big, long, step-by-step verbose post that makes the seasoned guys around here cringe (just stick up for me if anyone complains ;-).
I'll make a summary once we've sorted this out. I have another HD to add to the SAN, so I will reboot the server (after putting a slave LDAP somewhere) and restart the procedure from the beginning. Note: the process of adding an HD to the disk pool takes more than one day (nearly two days in fact; I don't know how it is with other SANs, but I found this quite long -- maybe it is normal for the system to take so much time to re-create the RAID 5, I just don't know... hopefully I can work with the mounted volume during the process). (BTW, your long technical posts are precious.)
how to release device assignments? I didn't find any clue on google.
You have to go into the /proc filesystem. Location may differ between kernel 2.4 and 2.6.
nope CentOS 4.2
Hmmm, interesting. I guess when we loaded the qla2200 module, it went after the same path as another module already loaded(?). I guess I'd have to see the output of your "lsmod" and look around the /proc filesystem.
Eck. It's stuff like this that I just need to be in front of the system, or I could be making things worse for you (and probably already am ;-).
I have taken some precautions. no problem :)
( come on :) ) Will this take off-line the physical extents along with the VG? :
[root@X ~]# vgchange -a n /dev/VG-B/LV-B
I'd rather not say either way without being right at your setup. There are just so many factors. E.g., the fact that your main /dev/sda is live (is it also a SAN connection?). I'd really rather not tell you to do anything else at this point.
No, /dev/sda is a RAID 1 virtual disk made of 2 HDs connected directly to an internal PERC-4 RAID controller card. /dev/sda1 is /boot and sda2 is the system Volume Group.
after the 'vgchange -a n', 'rmmod & modprobe' then the 'vgchange -a y':
[root@X ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VG-C" using metadata type lvm2
  Found volume group "VG-B" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2
Looks good. I'm just curious what we do about the SCSI mapping though -- let alone the fact that we loaded that other module.
But /dev/sdb and sdc are gone; the PVs have just followed the shift in the device assignments, and the new available physical extents are not there.
Hmmm, what do the raw "fdisk -l /dev/sdd" and "fdisk -l /dev/sde" commands give you? Do you see the extra space (you should)?
Yeah, the extra space has shown up with fdisk (/dev/sdd, which was /dev/sdb yesterday, grew after the module reload):
---
[root@X ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 933.0 GB, 933081645056 bytes
255 heads, 63 sectors/track, 113440 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table
---
[root@X ~]# fdisk -l /dev/sde

Disk /dev/sde: 233.0 GB, 233001975808 bytes
255 heads, 63 sectors/track, 28327 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table
---
I'm just trying to think how you expand the PVs (slice type 8Eh, I assume?) to the actual size of the partitions.
I think it's here that I missed something. I had not created a partition with fdisk before working with LVM; basically I told LVM to use the entire /dev/sdb (now /dev/sdd, with the extra space). Maybe if I had first created a partition (type LVM, with fdisk) equal to the HD size (~650GB), let's say /dev/sdb1, then I would be able to create another partition with the extra space (something like /dev/sdb2, with the extra 230GB, if there were no shifting of the device assignments), and then I would be able to create a new physical volume and benefit from the new extra physical extents.
---
[root@X ~]# pvs
  PV         VG         Fmt  Attr PSize   PFree
  ...
  /dev/sdd   VG-B       lvm2 a-   652.00G    0
  ...
---
BUT:
---
[root@X ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 933.0 GB, 933081645056 bytes
255 heads, 63 sectors/track, 113440 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table
---
After taking the VG off-line (vgchange -a n), then unloading and reloading the qla module, the correct new size is shown by fdisk but not by LVM (no new physical extents appeared). How do I gain the new extents within LVM when the disk has grown?
Okay, so you ran "fdisk -l" and see it -- good.
You probably need to add another slice (partition) to the devices then. Type it as LVM (8Eh), then add that as another PV.
Can I create a new slice without erasing the previously created data (again, it is not a problem -- there is nothing useful on this volume yet)? I'm quite confused here: there is no partition table for fdisk because I've done everything with LVM, and it's working (ext3 FS). Can I now create a partition table with fdisk?
(It's late here in europe, I will be back tomorrow. Thanks again for support).
I'll be here (up all night for the next 2 nights, long story).
kadafax wrote:
Almost there: the new size is reflected... but the disks (I've got two virtual disks from the SAN, which are seen as SCSI disks by the system) are now listed as /dev/sdd and /dev/sde (initially /dev/sdb and sdc). (Below is output from "fdisk -l"; the disk that grew was initially sdb.) And "pvdisplay" does not see any new physical extents available (perhaps I'm missing something here -- I'm totally new to this and a long way from being comfortable with it):
You need to resize the physical volume manually: either with the pvresize command or, if you get the message that it is not implemented yet, with "dirty tricks" ;-)
kadafax wrote:
Hi list, I have a SAN attached to a CentOS 4.2 server. I have expanded the size of the virtual disk within the SAN (by adding a new HD to the disk pool) and need CentOS to see the new size (CentOS sees it as /dev/sdb). I'm using LVM. Do you know a method for the Volume Group to see that one of its hard disks is now bigger, without rebooting? (A reboot itself is not a problem, but since the LDAP directory is on this server, it is problematic.)
If you do "fdisk /dev/sdb", does it show you the correct (new) size? Fdisk should query the device directly (and not rely on the cached copy of the partition table in the kernel). If yes, just extend the partition where your physical volume is, save the new partition table (fdisk will instruct the kernel to re-read it), do pvresize, then lvresize, and finally resize the file system (this can be done while the file system is mounted, at least for ext2 and ext3). If your version of LVM reports that pvresize is non-functional (I believe pvresize was implemented relatively recently in LVM2), there are a couple of "dirty" tricks for resizing a physical volume.
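(As a sketch of that sequence, using the device and VG/LV names from this thread -- whether pvresize actually works depends on the LVM2 version shipped with CentOS 4.2, and depending on the e2fsprogs version the last step may still require the filesystem to be unmounted:)
---
# fdisk /dev/sdb    <-- delete /dev/sdb1 and recreate it with the same starting cylinder
                        and a larger end, keep type 8e, then write the table
# pvresize /dev/sdb1
# lvresize -L +230G /dev/VG-B/LV-B
# resize2fs /dev/VG-B/LV-B
---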
On Sat, 2005-11-12 at 22:28 -0600, Aleksandar Milivojevic wrote:
kadafax wrote:
Hi list, I have a SAN attached to a CentOS 4.2 server. I have expanded the size of the virtual disk within the SAN (by adding a new HD to the disk pool) and need CentOS to see the new size (CentOS sees it as /dev/sdb). I'm using LVM. Do you know a method for the Volume Group to see that one of its hard disks is now bigger, without rebooting? (A reboot itself is not a problem, but since the LDAP directory is on this server, it is problematic.)
If you do "fdisk /dev/sdb", does it show you the correct (new) size? Fdisk should query the device directly (and not rely on the cached copy of the partition table in the kernel).
<snip>
A faster method is
sfdisk -l [/dev/xxxx] # If no device specification, lists all
and then
sfdisk -R /dev/xxx
will load the disk parameters to the OS without reboot.
However, not being familiar with LVM, I don't know if this causes a problem or if you just then follow the other directions given by Aleksandar.
I have also used this for automated configuration. But that was without LVM being involved.
Bill
On Sat, 2005-11-12 at 22:28 -0600, Aleksandar Milivojevic wrote:
If you do "fdisk /dev/sdb", does it show you the correct (new) size? Fdisk should query the device directly (and not rely on the cached copy of the partition table in the kernel).
On Sun, 2005-11-13 at 07:58 -0500, William L. Maltby wrote:
A faster method is
sfdisk -l [/dev/xxxx] # If no device specification, lists all
and then
sfdisk -R /dev/xxx
will load the disk parameters to the OS without reboot.
In both cases, that assumes nothing on the disk is in use. That's the problem: you typically have to take down their PVs for LVM first.
However, not being familiar with LVM, I don't know if this causes a problem or if you just then follow the other directions given by Aleksander.
LVM can rescan partitions. The big problem here isn't LVM, it's the fact that the kernel doesn't like to pick up low-level disk changes.
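(For what it's worth, the kernel can also be told to forget and re-probe a single LUN through /proc/scsi/scsi, which avoids reloading the whole driver. The host/channel/id/lun numbers below are taken from the "cat /proc/scsi/scsi" output earlier in the thread (the first DGC LUN on scsi4); the VG living on that LUN should be deactivated first, and I haven't verified this on this exact setup:)
---
# vgchange -a n VG-B
# echo "scsi remove-single-device 4 0 0 0" > /proc/scsi/scsi
# echo "scsi add-single-device 4 0 0 0" > /proc/scsi/scsi
# vgchange -a y VG-B
---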
I have also used this for automated configuration. But that was without LVM being involved.
Hi list, first, thanks to Aleksandar M, William LM and Bryan JS.
How to reflect a SCSI device size change without rebooting, and benefit from the extra space within logical volumes.
I started over, and this time I first created partitions with fdisk (type 8e - Linux LVM). Below, the interesting devices are sdb and sdc, which represent the SAN's two virtual disks; sda is the system HD on the internal RAID adapter. sdb and sdc will grow by a few hundred MB and by 230GB respectively.
---
[root@X ~]# fdisk -l
Disk /dev/sda: 146.6 GB, 146695782400 bytes
255 heads, 63 sectors/track, 17834 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       17834   143147182+  8e  Linux LVM

Disk /dev/sdb: 933.0 GB, 933081645056 bytes
255 heads, 63 sectors/track, 113440 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      113440   911206768+  8e  Linux LVM

Disk /dev/sdc: 233.0 GB, 233001975808 bytes
255 heads, 63 sectors/track, 28327 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       28327   227536596   8e  Linux LVM
---
Then I created the PVs:
---
[root@X ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@X ~]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
---
Then the VGs:
---
[root@X ~]# vgcreate VG-B /dev/sdb1
  Volume group "VG-B" successfully created
[root@X ~]# vgcreate VG-C /dev/sdc1
  Volume group "VG-C" successfully created
---
Then the LVs, using all the available physical extents:
---
[root@X ~]# vgdisplay VG-C | grep "Total PE"
  Total PE              55550
[root@X ~]# lvcreate -l 55550 VG-C -n LV-C
  Logical volume "LV-C" created
[root@X ~]# vgdisplay VG-B | grep "Total PE"
  Total PE              222462
[root@X ~]# lvcreate -l 222462 VG-B -n LV-B
  Logical volume "LV-B" created
---
# Note from the LVM HOWTO on tldp: Each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group. Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.
---
[root@X ~]# pvscan
  PV /dev/sdc1   VG VG-C         lvm2 [216.99 GB / 0    free]
  PV /dev/sdb1   VG VG-B         lvm2 [868.99 GB / 0    free]
  ...
  Total: 3 [1.19 TB] / in use: 3 [1.19 TB] / in no VG: 0 [0   ]
---
Now I've added a 250GB HD to the pool in the SAN. The goal is to extend the size of the logical volumes LV-B and LV-C, backed by the devices /dev/sdb and /dev/sdc respectively.
First, make the system see the new size by unloading then reloading the qla module:
---
[root@X ~]# /sbin/rmmod qla6312
[root@X ~]# /sbin/modprobe qla6312
---
[root@X ~]# fdisk -l
...
Disk /dev/sdd: 933.3 GB, 933355323392 bytes
255 heads, 63 sectors/track, 113473 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      113440   911206768+  8e  Linux LVM

Disk /dev/sde: 467.0 GB, 467077693440 bytes
255 heads, 63 sectors/track, 56785 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       28327   227536596   8e  Linux LVM
---
The new size is correctly reported, BUT the drive assignments have shifted: sdb --> sdd and sdc --> sde. I don't know how to avoid this. Fortunately, the physical volume mechanism has followed the shift:
---
[root@X ~]# pvscan
  PV /dev/sde1   VG VG-C         lvm2 [216.99 GB / 0    free]
  PV /dev/sdd1   VG VG-B         lvm2 [868.99 GB / 0    free]
  ...
  Total: 3 [1.19 TB] / in use: 3 [1.19 TB] / in no VG: 0 [0   ]
---
Then, after creating the new partitions (type 8e) from the additional available space:
---
[root@X ~]# fdisk -l
...
Disk /dev/sdd: 933.3 GB, 933355323392 bytes
255 heads, 63 sectors/track, 113473 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      113440   911206768+  8e  Linux LVM
/dev/sdd2          113441      113473      265072+  8e  Linux LVM

Disk /dev/sde: 467.0 GB, 467077693440 bytes
255 heads, 63 sectors/track, 56785 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       28327   227536596   8e  Linux LVM
/dev/sde2           28328       56785   228588885   8e  Linux LVM
---
Now for the LVM stuff. First the PVs:
---
[root@X ~]# pvcreate /dev/sdd2
  Physical volume "/dev/sdd2" successfully created
[root@X ~]# pvcreate /dev/sde2
  Physical volume "/dev/sde2" successfully created
---
Then add them to the VGs:
---
[root@X ~]# vgextend VG-B /dev/sdd2
  Volume group "VG-B" successfully extended
[root@X ~]# vgextend VG-C /dev/sde2
  Volume group "VG-C" successfully extended
[root@X ~]# pvscan
  PV /dev/sde1   VG VG-C         lvm2 [216.99 GB / 0    free]
  PV /dev/sde2   VG VG-C         lvm2 [218.00 GB / 218.00 GB free]
  PV /dev/sdd1   VG VG-B         lvm2 [868.99 GB / 0    free]
  PV /dev/sdd2   VG VG-B         lvm2 [256.00 MB / 256.00 MB free]
  ...
  Total: 5 [1.41 TB] / in use: 5 [1.41 TB] / in no VG: 0 [0   ]
---
Finally the LVs:
---
[root@X ~]# vgdisplay VG-B | grep "Free PE"
  Free PE / Size        64 / 256.00 MB
[root@X ~]# lvextend -l+64 /dev/VG-B/LV-B
  Extending logical volume LV-B to 869.24 GB
  Logical volume LV-B successfully resized
[root@X ~]# vgdisplay VG-C | grep "Free PE"
  Free PE / Size        55807 / 218.00 GB
[root@X ~]# lvextend -l+55807 /dev/VG-C/LV-C
  Extending logical volume LV-C to 434.99 GB
  Logical volume LV-C successfully resized
---
[root@X ~]# lvscan
  ACTIVE            '/dev/VG-C/LV-C' [434.99 GB] inherit
  ACTIVE            '/dev/VG-B/LV-B' [869.24 GB] inherit
---
The LVs have grown; now it's the turn of the ext3 filesystems:
---
[root@X ~]# resize2fs /dev/VG-B/LV-B
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/VG-B/LV-B to 227866624 (4k) blocks.
The filesystem on /dev/VG-B/LV-B is now 227866624 blocks long.

[root@X ~]# resize2fs /dev/VG-C/LV-C
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/VG-C/LV-C to 114029568 (4k) blocks.
The filesystem on /dev/VG-C/LV-C is now 114029568 blocks long.
---
From here I can mount and work with the volumes.
---
[root@X ~]# df -h
...
/dev/mapper/VG--C-LV--C
                      429G  103M  407G   1% /mount/pointC
/dev/mapper/VG--B-LV--B
                      856G  104M  821G   1% /mount/pointB
---
Note: I haven't done this with a mounted volume. It seems possible to do so, but I won't risk it for now. After a reboot, activating the volume group with "vgchange -a y VG-name" may be necessary.
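(On the mounted case: if I'm not mistaken, CentOS 4 also ships an 'ext2online' utility that can grow a mounted ext3 filesystem, so the last step could in principle be done online -- something like the following, though I haven't tried it here:)
---
# ext2online /dev/VG-B/LV-B
---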
Now some fun with the 'pvresize' command:
---
[root@X ~]# pvresize
  Command not implemented yet.
---
kfx.
Quoting kadafax kadafax@gmail.com:
Note: I haven't done this with a mounted volume. It seems possible to do so, but I won't risk it for now. After a reboot, activating the volume group with "vgchange -a y VG-name" may be necessary.
You might not be able to rmmod the HBA driver with a mounted volume.
Now some fun with the 'pvresize' command:
[root@X ~]# pvresize Command not implemented yet.
Yes, that's what I mentioned. I also said there is a workaround. Basically, you would do vgcfgbackup, then very carefully manually edit the resulting file (if you screw it up, your data is gone bye-bye to lala land), and then do vgcfgrestore. See this article from the Linux LVM mailing list:
http://www.redhat.com/archives/linux-lvm/2004-December/msg00049.html
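(In outline -- and with the same warning that a typo here can destroy the VG -- the trick looks something like this; the values to bump (dev_size and pe_count for the grown PV) have to be recomputed from the new device size:)
---
# vgcfgbackup VG-B
# vi /etc/lvm/backup/VG-B     <-- adjust dev_size and pe_count for the grown PV
# vgcfgrestore VG-B
# vgchange -a y VG-B
---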
kadafax kadafax@gmail.com wrote:
Hi list, First thanks to Aleksandar M, William LM and Bryan JS.
Oh, I don't know about me. I didn't give the best recommendations at times, and a few omissions/lack of attention to detail might have set you back in a few cases.
Thanx for the most excellent summary post of what you found out.