Bryan J. Smith wrote:
kadafax <kadafax@gmail.com> wrote:
(I found that the module used was qla6312; don't know if
there is an issue with this, but I thought that qla2200
would be more appropriate since the host card is a QLA200.)
There are many variants in the QLogic 200 series. I just
listed the common denominator that most people have. I
figured if you were messing with SAN solutions, you might
know the exact variant/driver you were already running.
Sorry about that. I probably should have told you to run
"lsmod" to verify first.
never too late:
---
[root@X ~]# lsmod
Module Size Used by
qla6312 116545 0
qla2322 135745 0
qla2200 88769 0
qla2100 80961 0
qla2300 126017 0
ipt_REJECT 8641 1
ipt_state 3265 13
ip_conntrack 53657 1 ipt_state
iptable_filter 4417 1
ip_tables 20289 3 ipt_REJECT,ipt_state,iptable_filter
autofs4 23241 0
dcdipm 68980 2
dcdbas 50964 2
joydev 11841 0
button 9057 0
battery 11209 0
ac 6729 0
uhci_hcd 34665 0
ehci_hcd 33349 0
hw_random 7137 0
e1000 110381 0
floppy 65809 0
sg 42489 0
dm_snapshot 18561 0
dm_zero 3649 0
dm_mirror 28889 0
ext3 137681 6
jbd 68849 1 ext3
dm_mod 66433 10 dm_snapshot,dm_zero,dm_mirror
qla2xxx 178849 7 qla6312,qla2322,qla2200,qla2100,qla2300
scsi_transport_fc 11201 1 qla2xxx
megaraid_mbox 40017 2
megaraid_mm 15881 5 megaraid_mbox
sd_mod 19392 5
scsi_mod 140177 5 sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod
---
[root@X ~]# modprobe -l | grep qla
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2300.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2100.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla6312.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2200.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2322.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla2xxx/qla2xxx.ko
/lib/modules/2.6.9-22.0.1.ELsmp/kernel/drivers/scsi/qla1280.ko
---
[root@X ~]# cat /proc/scsi/qla2xxx/4
QLogic PCI to Fibre Channel Host Adapter for QLA200:
Firmware version 3.03.15 FLX, Driver version 8.01.00b5-rh2
ISP: ISP6312, Serial# U93681
Request Queue = 0x21980000, Response Queue = 0x33300000
Request Queue count = 2048, Response Queue count = 512
Total number of active commands = 0
Total number of interrupts = 427
Device queue depth = 0x20
Number of free request entries = 1585
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state = <READY>, flags = 0x1a13
Dpc flags = 0x80000
MBX flags = 0x0
Link down Timeout = 030
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0
Product ID = 4953 5020 2020 0003
SCSI Device Information:
scsi-qla3-adapter-node=200000e08b1f71f2;
scsi-qla3-adapter-port=210000e08b1f71f2;
scsi-qla3-target-0=5006016039201733;
FC Port Information:
scsi-qla3-port-0=50060160b9201733:5006016039201733:0000ef:0;
SCSI LUN Information:
(Id:Lun) * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 184, Pending reqs 0, flags 0x0, 3:0:00 00
( 0: 1): Total reqs 183, Pending reqs 0, flags 0x0, 3:0:00 00
---
[root@X ~]# cat /proc/partitions
major minor #blocks name
8 0 143257600 sda
8 1 104391 sda1
8 2 143147182 sda2
253 0 51216384 dm-0
253 1 22216704 dm-1
253 2 4096000 dm-2
253 3 51216384 dm-3
253 4 10256384 dm-4
253 5 4096000 dm-5
253 6 683667456 dm-6
253 7 227536896 dm-7
8 48 911212544 sdd
8 64 227540992 sde
---
[root@X ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
Vendor: PE/PV Model: 1x6 SCSI BP Rev: 1.0
Type: Processor ANSI SCSI revision: 02
Host: scsi0 Channel: 02 Id: 00 Lun: 00
Vendor: MegaRAID Model: LD 0 RAID1 139G Rev: 521S
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: RAID 5 Rev: 0217
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi4 Channel: 00 Id: 00 Lun: 01
Vendor: DGC Model: RAID 5 Rev: 0217
Type: Direct-Access ANSI SCSI revision: 04
---
[root@X ~]# ls /proc/
1 1853 25048 2712 28654 55 9 filesystems locks stat
10 1854 25062 2721 2866 56 acpi fs mdstat swaps
11 2 2510 2738 2867 57 buddyinfo ide meminfo sys
12 227 258 2750 2868 58 bus interrupts misc sysrq-trigger
13 239 259 2790 3 59 cmdline iomem modules sysvipc
1355 23965 2596 2804 30357 6 cpuinfo ioports mounts tty
14 23968 2612 2805 30359 7 crypto irq mtrr uptime
15 2416 2631 2815 30360 75 devices kallsyms net version
152 2420 2660 2822 30386 76 diskstats kcore partitions vmstat
1666 2430 2670 2863 30387 77 dma keys pci
1850 2470 2692 2864 30433 78 driver key-users scsi
1851 2471 2702 2865 4 79 execdomains kmsg self
1852 2500 271 28653 5 8 fb loadavg slabinfo
---
Okay, next time I'll make that big, long, step-by-step
verbose post that makes the seasoned guys around here cringe
(just stick up for me if anyone complains ;-).
I'll make a summary once we've sorted this out. I have another HD to
add to the SAN, so I will reboot the server (after putting a slave LDAP
somewhere) and restart the procedure from the beginning. Note: the
process of adding an HD to the disk pool takes more than one day
(nearly two days in fact; I don't know how it is with other SANs, but I
found this quite long - maybe it is normal behavior for the system to
take so much time to re-create the RAID 5, I just don't know...
hopefully I can work with the mounted volume during the process).
(btw, your long technical posts are precious)
How do I release device assignments? I didn't find any clue
on Google.
You have to go into the /proc filesystem. Location may
differ between kernel 2.4 and 2.6.
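From memory (hedged - double-check the syntax before pointing it at a
live box): the 2.4-style interface is /proc/scsi/scsi, and 2.6 adds
per-device sysfs hooks, something like:
---
[root@X ~]# echo "scsi remove-single-device 4 0 0 1" > /proc/scsi/scsi
[root@X ~]# echo "scsi add-single-device 4 0 0 1" > /proc/scsi/scsi
[root@X ~]# echo 1 > /sys/block/sde/device/delete
[root@X ~]# echo "- - -" > /sys/class/scsi_host/host4/scan
---
The four numbers are host/channel/id/lun; the sysfs pair drops one
device and then rescans the whole host.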
Nope, CentOS 4.2 (so kernel 2.6).
Hmmm, interesting. I guess when we loaded the qla2200
module, it went after the same path as another module already
loaded(?). I guess I'd have to see the output of your
"lsmod" and look around the /proc filesystem.
Eck. It's stuff like this that I just need to be in front of
the system, or I could be making things worse for you (and
probably already am ;-).
I have taken some precautions. No problem :)
( come on :) )
Will this take off-line the physical extents along with the
VG? :
---
[root@X ~]# vgchange -a n /dev/VG-B/LV-B
---
I'd rather not say either way without being right at your
setup. There are just so many factors. E.g., the fact that
your main /dev/sda is live (is it also a SAN connection?).
I'd really rather not tell you to do anything else at this
point.
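One syntax note I'm fairly sure of, though: vgchange takes a volume
group name, while lvchange takes a logical volume. So to deactivate the
single LV versus the whole VG:
---
[root@X ~]# lvchange -a n /dev/VG-B/LV-B
[root@X ~]# vgchange -a n VG-B
---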
No, /dev/sda is a RAID 1 virtual disk built from 2 HDs directly
connected to an internal PERC 4 RAID controller card.
/dev/sda1 is the /boot and sda2 is the system Volume Group.
After the 'vgchange -a n', the 'rmmod' and 'modprobe', then the
'vgchange -a y':
---
[root@X ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VG-C" using metadata type lvm2
Found volume group "VG-B" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
---
Looks good. I'm just curious what we do about the SCSI
mapping though -- let alone the fact that we loaded that
other module.
But /dev/sdb and sdc are gone; the PVs have just
followed the shift in the device assignments, and the new
available physical extents are not there.
Hmmm, what do the raw "fdisk -l /dev/sdd" and "fdisk -l
/dev/sde" commands give you? Do you see the extra space (you
should)?
Yeah, the extra space has shown up with fdisk (/dev/sdd, which was
/dev/sdb yesterday, grew after the module reload):
---
[root@X ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 933.0 GB, 933081645056 bytes
255 heads, 63 sectors/track, 113440 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
---
[root@X ~]# fdisk -l /dev/sde
Disk /dev/sde: 233.0 GB, 233001975808 bytes
255 heads, 63 sectors/track, 28327 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
---
I'm just trying to think how you expand the PVs (slice type
8Eh, I assume?) to the actual size of the partitions.
I think it's here that I missed something. I did not create a partition
with fdisk before working with LVM; basically I told LVM to use the
entire /dev/sdb (now /dev/sdd, with the extra space). Maybe if I had
first created a partition (type LVM, with fdisk) equal to the HD size
(~650GB), let's say /dev/sdb1, then I would be able to create another
partition with the extra space (something like /dev/sdb2, with the
extra 230GB, if there were no shifting of the device assignments), and
then I would be able to create a new physical volume and benefit from
the new extra physical extents.
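(For the record, that partition-based path would have looked roughly
like the following - hypothetical commands, assuming the device had
kept a stable name: cut a new type-8Eh slice with fdisk, label it as a
PV, add it to the VG, grow the LV, then grow the mounted ext3 with
RHEL4's ext2online:)
---
[root@X ~]# fdisk /dev/sdb
[root@X ~]# pvcreate /dev/sdb2
[root@X ~]# vgextend VG-B /dev/sdb2
[root@X ~]# lvextend -L +230G /dev/VG-B/LV-B
[root@X ~]# ext2online /dev/VG-B/LV-B
---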
---
[root@X ~]# pvs
PV VG Fmt Attr PSize PFree
...
/dev/sdd VG-B lvm2 a- 652.00G 0
...
---
BUT:
---
[root@X ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 933.0 GB, 933081645056 bytes
255 heads, 63 sectors/track, 113440 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
---
After taking the VG off-line (vgchange -a n), then
unloading and reloading the qla module, the new correct
size is shown by fdisk but not by LVM (no new physical
extents appeared). How do I gain the new extents
within LVM now that the HD has grown?
Okay, so you ran "fdisk -l" and see it -- good.
You probably need to add another slice (partition) to the
devices then. Type it as LVM (8Eh), then add that as another
PV.
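Before cutting anything with fdisk, though, it might be worth a couple
of read-only checks on how the existing PV sits on the disk (LABELONE
is the LVM2 label magic, normally kept in the second sector of a
whole-disk PV):
---
[root@X ~]# pvdisplay /dev/sdd
[root@X ~]# dd if=/dev/sdd bs=512 count=4 2>/dev/null | strings | grep LABELONE
---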
May I create a new slice without erasing the previously created data
(again, not that it matters, there is nothing useful yet on this
volume)? I'm quite confused here: there is no partition table for fdisk
because I've done it all with LVM, and it's working (ext3 fs). Can I
now create a partition table with fdisk?
(It's late here in Europe; I will be back tomorrow. Thanks
again for the support.)
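For what it's worth, and hedged because I haven't tried it against this
exact setup: since the PV was made on the bare disk, laying a partition
table over it now is asking for confusion at best. If your lvm2
userspace ships a working pvresize, growing the existing whole-disk PV
in place looks like the cleaner route:
---
[root@X ~]# pvresize /dev/sdd
[root@X ~]# pvs
---
pvresize with no size argument grows the PV to the current device size,
and pvs should then show the extra extents as PFree.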
I'll be here (up all night for the next 2 nights, long
story).