Are folks in the CentOS community successfully using device-mapper-multipath? I am looking to deploy it for error handling on our iSCSI setup, but there seems to be little traffic about this package on the CentOS forums as far as I can tell, and there seem to be a number of small issues based on my reading of the dm-multipath developer lists and related resources.
-geoff
Geoff Galitz Blankenheim NRW, Deutschland http://www.galitz.org
On Wed, 25 Jun 2008 at 7:49pm, Geoff Galitz wrote
Are folks in the CentOS community successfully using device-mapper-multipath? [...]
I am in the midst of setting this up on C-5 attached to an MSA1000 running active/passive. The documentation is... sparse, to say the least. There's more than a bit of guesswork in my setup, and I have yet to actually test the failover. I certainly think it would be worthwhile for you to document your experience somewhere.
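(For what it's worth, on an iSCSI setup like the one in the original question, a failover test can be sketched roughly as below: drop one portal, watch the path state, then bring it back. This is only a sketch; the portal address is an example, and a Fibre Channel setup like the MSA1000 would need a different way to fail a path, such as pulling a cable or disabling a switch port.)

# multipath -ll                              (note which paths are currently active)
# iscsiadm -m node -p 192.168.100.6:3260 -u  (log out of one portal to simulate a path failure)
# multipath -ll                              (that path should show as failed; I/O continues on the rest)
# iscsiadm -m node -p 192.168.100.6:3260 -l  (log back in)
# multipath -ll                              (the path should return to active)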
Hi,
I'm running ~45 machines on CentOS 4.x/5.x (and about 20 more on RHEL 5) that either boot from SAN or are connected to our SAN fabrics, without any problems at all. The storage is IBM SVC (2145).
Just make sure multipathd is running on your system; otherwise it will not recover paths correctly.
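(A quick way to check that with the stock init scripts on CentOS 4/5 — a minimal sketch:)

# service multipathd status
# service multipathd start      (if it is not already running)
# chkconfig multipathd on       (so it comes back after a reboot)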
Thanks, Finnur
On 26/06/2008, at 3:57 AM, Finnur Örn Guðmundsson wrote:
I'm running ~45 machines that boot from SAN or are connected to our SAN fabrics on CentOS 4.x/5.x (and about 20 more running RHEL 5) without problems at all. [...]
Same here on a similar number of systems connected to two SANs: active-active over FC, four paths running multibus, and it all works fine.
On Wed, Jun 25, 2008 at 11:19 PM, Geoff Galitz geoff@galitz.org wrote:
Are folks in the CentOS community successfully using device-mapper-multipath?
I have a couple of CentOS 5.1 servers connected to a dual-controller Hitachi SMS 100 array. Both iSCSI and multipath with failover are working great.
- Raja
Geoff Galitz wrote:
Are folks in the CentOS community successfully using device-mapper-multipath? [...]
I've used it on CentOS 4.6 and it worked fine in an active-active configuration connected to a 3PAR E200 storage system (I had both iSCSI and Fibre Channel systems).
I also used it with an OpenFiler running iSCSI, though there was only one path (I used it for consistency).
nate
Are folks in the CentOS community successfully using device-mapper-multipath? [...]
-geoff
I'm using it on RHEL 5 (close enough for the purposes of your query), connecting to an HP EVA 6000 SAN. The RHEL documentation (http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/DM_Multip...) certainly covers the basics adequately, and was enough to get me going. I'm using LVM on top of that, so I found it worthwhile to tweak /etc/lvm/lvm.conf to filter out all the various aliases for the disks that show up in /dev. My filter line is currently:

filter = [ "r/sd.*/", "r:disk/by.*:", "a/.*/" ]

which works well for me, but YMMV, particularly with the filtering out of "sd.*" (that works here because our main OS disks are on /dev/cciss).
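(For context, that filter lives in the devices section of /etc/lvm/lvm.conf; a minimal excerpt using this particular filter, with everything else left at its defaults, would look like:)

devices {
    # reject the raw sd* paths and the /dev/disk/by-* aliases, accept everything else;
    # safe here only because the OS disks are on /dev/cciss rather than /dev/sd*
    filter = [ "r/sd.*/", "r:disk/by.*:", "a/.*/" ]
}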
You've also got to be a little careful when unpresenting disks (SAN terminology; it may not apply to iSCSI). From our internal documentation (some notes I wrote at the time, updated with subsequent experience):

*********************
Removing is trickier; you need to ensure no-one is still trying to use the disk. Particularly watch out for LVM: if the disk is part of a volume group, you have to run

# vgchange -an <VGNAME>

first, otherwise LVM still thinks the disk is there, and things like lvmdiskscan/pvdisplay start hanging once the disk has gone away. Once the disk is unused, unpresent it from the SAN, rescan to remove the no-longer-existing disks, then restart multipathd:

# /etc/init.d/multipathd restart

Running "multipath -F" may also be sufficient, but I've found restarting multipathd entirely a smidgen more reliable (though I may have been doing things wrong before that).
If things get really stuck, then you might have some luck with dmsetup. If "multipath -ll" shows failed disks (that have been unpresented properly), use dmsetup to remove the failed disk with:

# dmsetup remove <device>

where <device> is "mpath<num>". Find the stuck one from the output of "multipath -ll", and be sure you've got the right mpath device. If you've got stuck lvmdiskscan or pvdisplay type processes (trying to access the missing disk), the remove will fail, claiming the device is in use (which, in some senses, it is). In this case, double-check you've got the right mpath device (otherwise you'll fsck your system), and run:

# dmsetup remove --force <device>

This will claim failure (device-mapper: remove ioctl failed: Device or resource busy), but if you now run

# dmsetup info <device>

you'll see the "Open count" has gone to zero. You can now run the plain remove one more time:

# dmsetup remove <device>

and it will be removed. Your hung processes will finally die the death they deserve, and the unpresented disk will no longer be known to the system.
*********************
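(Pulling those notes together into a single sequence, as a sketch only — VGSAN, mpath3 and the mount point below are hypothetical names, and the unpresent step happens on the array side:)

# umount /some/mountpoint            (unmount anything on the disk first)
# vgchange -an VGSAN                 (deactivate the volume group so LVM lets go of the disk)
  ... unpresent the LUN on the SAN, then rescan on the host ...
# /etc/init.d/multipathd restart     (or: multipath -F, to flush unused multipath maps)
  ... only if a map is still shown as failed by "multipath -ll":
# dmsetup remove mpath3
# dmsetup remove --force mpath3      (if busy; reports an ioctl failure but drops the open count)
# dmsetup info mpath3                (confirm "Open count: 0")
# dmsetup remove mpath3              (now succeeds)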
It has worked well in real life, except for one day when one of our EVA SAN Controllers died; one host survived, another had multipathd itself die with a double free error (which I bugzilla'd upstream). Disks went away, but came back on restarting multipathd. Odd, but survivable, and not indicative of a general problem (probably something I did early on in the setup that hung around).
And one other word of advice: Play with it a lot in a test system first. It should go without saying, but this is really one of those times. There are many things you can learn safely on a production device; this isn't one of them. Get really comfortable with adding/removing/munging before you go live. And you will break it at least once during your preparation, if not more ;-).
Craig Miskell
Geoff Galitz wrote:
Are folks in the CentOS community successfully using device-mapper-multipath? [...]
Here are my notes from working with our SAN. Please check them thoroughly: these instructions worked for me, but I had to learn this from scratch and there may be mistakes. One thing I never figured out how to achieve was rescanning an existing iSCSI device for changes after resizing a partition on the SAN; I have always had to reboot to get the new partition size to be seen.
############################### iSCSI notes.
# yum -y install iscsi-initiator-utils lsscsi device-mapper-multipath
# service iscsi start
Add the iSCSI targets
iscsiadm -m discovery -t sendtargets -p 192.168.100.6
iscsiadm -m discovery -t sendtargets -p 192.168.100.2
iscsiadm -m discovery -t sendtargets -p 192.168.100.8
iscsiadm -m discovery -t sendtargets -p 192.168.100.4
# lsscsi
[0:0:0:0]  disk  VMware,  VMware Virtual S  1.0   /dev/sda
[1:0:0:0]  disk  COMPELNT Compellent Vol    0306  -
[2:0:0:0]  disk  COMPELNT Compellent Vol    0306  -
[3:0:0:0]  disk  COMPELNT Compellent Vol    0306  -
[4:0:0:0]  disk  COMPELNT Compellent Vol    0306  -
service multipathd start
chkconfig multipathd on
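(At this point it is worth confirming that multipath has actually grouped the Compellent volumes into maps — a quick check, output omitted here:)

# multipath -v2     (scan and create any missing multipath maps)
# multipath -ll     (each Compellent volume should appear as an mpathN device with all of its paths listed)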
Configuring SAN volumes without reboot
Rescan for iSCSI devices
# iscsiadm -m session -R
Add your partitions. For this example I created 2 partitions:

fdisk /dev/mapper/mpath2

Let the running kernel see the new partitions:

# partprobe
We need to tell the mapper about the new partitions
# ls -l /dev/mapper/mpath2*
brw-rw---- 1 root disk 253, 7 Jan 30 15:42 /dev/mapper/mpath2
# kpartx -a /dev/mapper/mpath2
# ls -l /dev/mapper/mpath2*
brw-rw---- 1 root disk 253, 7 Jan 30 15:42 /dev/mapper/mpath2
brw-rw---- 1 root disk 253, 8 Jan 30 15:43 /dev/mapper/mpath2p1
brw-rw---- 1 root disk 253, 9 Jan 30 15:43 /dev/mapper/mpath2p2
# mke2fs -j /dev/mapper/mpath2p1
# mke2fs -j /dev/mapper/mpath2p2
# iscsiadm -m node
192.168.100.6:3260,0 iqn.2002-03.com.compellent:5000d310000a630a
192.168.100.2:3260,0 iqn.2002-03.com.compellent:5000d310000a6302
192.168.100.4:3260,0 iqn.2002-03.com.compellent:5000d310000a6304
192.168.100.8:3260,0 iqn.2002-03.com.compellent:5000d310000a630c
[root@test2 ~]# iscsiadm -m session
tcp: [12] 192.168.100.6:3260,0 iqn.2002-03.com.compellent:5000d310000a630a
tcp: [13] 192.168.100.2:3260,0 iqn.2002-03.com.compellent:5000d310000a6302
tcp: [14] 192.168.100.4:3260,0 iqn.2002-03.com.compellent:5000d310000a6304
tcp: [15] 192.168.100.8:3260,0 iqn.2002-03.com.compellent:5000d310000a630c
# iscsiadm -m discovery
192.168.100.2:3260 via sendtargets
192.168.100.6:3260 via sendtargets
192.168.100.4:3260 via sendtargets
192.168.100.8:3260 via sendtargets
To automatically mount a file system during startup you must mark its entry in /etc/fstab with the "_netdev" option. For example, this would mount an iSCSI disk sdb:
/dev/sdb /mnt/iscsi ext3 _netdev 0 0
NOTES WHEN USING LVM WITH MULTIPATH
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/DM_Multipath/multipath_logical_volumes.html
When you create an LVM logical volume that uses active/passive multipath arrays as the underlying physical devices, you should include filters in the lvm.conf to exclude the disks that underlie the multipath devices. This is because if the array automatically changes the active path to the passive path when it receives I/O, multipath will failover and failback whenever LVM scans the passive path if these devices are not filtered. For active/passive arrays that require a command to make the passive path active, LVM prints a warning message when this occurs.
To filter all SCSI devices in the LVM configuration file (lvm.conf), include the following filter in the devices section of the file.
filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]
A filter to allow sda but disallow all other sd* drives
filter = [ "a|/dev/sda|","r/disk/", "r/sd.*/", "a/.*/" ]
CREATE LVM on top of a SAN MULTIPATH
vi /etc/lvm/lvm.conf
# preferred_names = [ ]    # DAP
preferred_names = [ "^/dev/mpath/", "^/dev/[hs]d" ]
# filter = [ "a/.*/" ]    # DAP
filter = [ "a|/dev/sda|", "r/disk/", "r/sd.*/", "a/.*/" ]
pvcreate /dev/mpath/mpath2p1
# pvdisplay /dev/mpath/mpath2p1
  --- Physical volume ---
  PV Name               /dev/mpath/mpath2p1
  VG Name               VGSAN00
  PV Size               1019.72 MB / not usable 3.72 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              254
  Free PE               0
  Allocated PE          254
  PV UUID               ZjqvDp-mxMh-xbuV-CFql-QzaB-cC4l-Eo7RNl
vgcreate VGSAN00 /dev/mpath/mpath2p1
# vgdisplay VGSAN00
  --- Volume group ---
  VG Name               VGSAN00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1016.00 MB
  PE Size               4.00 MB
  Total PE              254
  Alloc PE / Size       0 / 0
  Free  PE / Size       254 / 1016.00 MB
  VG UUID               bXv8IW-3eJT-RUXB-MLEo-gnfV-4VJv-j598PE
# lvcreate -l 254 -n data VGSAN00
# lvdisplay /dev/VGSAN00/data
  --- Logical volume ---
  LV Name                /dev/VGSAN00/data
  VG Name                VGSAN00
  LV UUID                UorNsT-0NuN-kj86-2YGq-gVvG-aJYu-m04bhj
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1016.00 MB
  Current LE             254
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:9
------------------------------------------------
vi /etc/multipath.conf
defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            /bin/true
        path_checker            readsector0
        rr_min_io               100
        rr_weight               priorities
        failback                immediate
        no_path_retry           fail
        user_friendly_names     yes
}
devices {
        device {
                vendor                  "COMPELNT"
                product                 "Compellent Vol"
                path_grouping_policy    multibus
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                path_checker            readsector0
                path_selector           "round-robin 0"
                hardware_handler        "0"
                failback                15
                rr_weight               priorities
                no_path_retry           queue
        }
}
------------------------------------------------
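(After editing multipath.conf the daemon needs to pick the changes up; a minimal sketch of reloading and verifying, assuming the configuration above:)

# /etc/init.d/multipathd restart     (re-read multipath.conf)
# multipath -v2                      (rebuild the maps with the new settings)
# multipath -ll                      (the Compellent volumes should reflect the COMPELNT stanza: multibus, round-robin)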
# iscsiadm -m session -i
iscsiadm version 2.0-742
************************************
Session (sid 12) using module tcp:
************************************
TargetName: iqn.2002-03.com.compellent:5000d310000a630a
Portal Group Tag: 0
Network Portal: 192.168.100.6:3260
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 65536
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 131072
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

************************
Attached SCSI devices:
************************
Host Number: 12   State: running
scsi12 Channel 00 Id 0 Lun: 0
        Attached scsi disk sda   State: running

************************************
Session (sid 13) using module tcp:
************************************
TargetName: iqn.2002-03.com.compellent:5000d310000a6302
Portal Group Tag: 0
Network Portal: 192.168.100.2:3260
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 65536
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 131072
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

************************
Attached SCSI devices:
************************
Host Number: 13   State: running
scsi13 Channel 00 Id 0 Lun: 0

************************************
Session (sid 14) using module tcp:
************************************
TargetName: iqn.2002-03.com.compellent:5000d310000a6304
Portal Group Tag: 0
Network Portal: 192.168.100.4:3260
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 65536
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 131072
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

************************
Attached SCSI devices:
************************
Host Number: 14   State: running
scsi14 Channel 00 Id 0 Lun: 0

************************************
Session (sid 15) using module tcp:
************************************
TargetName: iqn.2002-03.com.compellent:5000d310000a630c
Portal Group Tag: 0
Network Portal: 192.168.100.8:3260
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 65536
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 131072
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

************************
Attached SCSI devices:
************************
Host Number: 15   State: running
scsi15 Channel 00 Id 0 Lun: 0
        Attached scsi disk sdb   State: running
Quick LVM
pvcreate /dev/mapper/mpath3p1
pvcreate /dev/mapper/mpath4p1
vgcreate vg1 /dev/mapper/mpath3p1
lvcreate -l 2559 -n data vg1
mke2fs -j /dev/mapper/vg1-data
mount /dev/mapper/vg1-data /test
vgextend vg1 /dev/mapper/mpath4p1
lvextend -l +100%FREE /dev/vg1/data
resize2fs /dev/vg1/data
LVM cheat sheet
# pvs
  PV                  VG         Fmt  Attr PSize  PFree
  /dev/mpath/mpath3p1 vg1        lvm2 a-   10.00G    0
  /dev/mpath/mpath4p1 vg1        lvm2 a-    4.99G    0
  /dev/sda2           VolGroup00 lvm2 a-    3.88G    0

# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n-  3.88G    0
  vg1          2   1   0 wz--n- 14.99G    0

# lvs
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%
  LogVol00 VolGroup00 -wi-ao   3.38G
  LogVol01 VolGroup00 -wi-ao 512.00M
  data     vg1        -wi-ao  14.99G

# pvscan
  PV /dev/mpath/mpath3p1   VG vg1          lvm2 [10.00 GB / 0    free]
  PV /dev/mpath/mpath4p1   VG vg1          lvm2 [4.99 GB / 0    free]
  PV /dev/sda2             VG VolGroup00   lvm2 [3.88 GB / 0    free]
  Total: 3 [18.86 GB] / in use: 3 [18.86 GB] / in no VG: 0 [0   ]

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2

# lvscan
  ACTIVE            '/dev/vg1/data' [14.99 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [3.38 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [512.00 MB] inherit