Hi
For sure, no problem. I see this almost every day. The main reason is that the RH driver is not certified for use with Dell, HP, or IBM for that matter. Where I work, if we get a support request and the RH driver is in use for the QLogic card, the customer must install the certified driver from the vendor. This has a lot to do with compatibility with the BIOS, etc. I am not saying that the RH driver is bad; it is just unsupported by most vendors.
Regards Per
At Fredag, 08-01-2010 on 11:06 "Alexander Dalloz" wrote:
Hi
Not sure if you have done this already, but install multipath, as the OS sees multiple presented LUNs which need to be tied together into one by multipath. Also, do not use the RH drivers for your QLogic; they are not recommended at all for any storage solutions.
Per
Dear Per,
can you please elaborate on why the QLogic kernel module shipped with CentOS 5 / RHEL 5 is not recommended? It is certainly documented for use, at least in the NetApp host utilities guide, without any warning to prefer the driver package provided on the QLogic site. And I do not see any problem using it with CentOS / RHEL 5 device-mapper-multipath on a lot of systems.
Regards
Alexander
_______________________________________________ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Per,
Yes, I have a device mapper device built out of my detected LUNs. My storage is Hitachi OpenV. My multipath.conf file looks like the one below. Do I need to add anything to it, since failover is not working? When I unplug the cable from the first port of my HBA, my server reboots, and I don't know what is causing the reboot. After the reboot, when I plug the cable back in, no recovery happens; to make it work I have to reboot the server once again with the cable plugged in.
--
blacklist {
        devnode "^sda"
}

defaults {
        user_friendly_names yes
        failback            immediate
}
-------
Thanks Paras
On Fri, Jan 8, 2010 at 4:42 AM, Per Qvindesland per@norhex.com wrote:
On Fri, Jan 08, 2010 at 09:37:25AM -0600, Paras pradhan wrote:
Seems like you need a devices {} section to define how multipathd interacts with your storage, identifies problems and chooses a path to use...
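For illustration only, a devices {} section has roughly this shape. The vendor/product strings and policy values below are assumptions for a Hitachi array, not a verified configuration; check them against what your LUNs actually report (multipath -ll shows the vendor/product pair) and against your storage vendor's documentation:

```
devices {
        device {
                # These two fields are regexes matched against the
                # strings the LUN reports -- verify them on your system:
                vendor                  "HITACHI"
                product                 "OPEN-.*"
                path_grouping_policy    multibus
                failback                immediate
                path_checker            tur
        }
}
```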
Ray
Since I see the following entry in /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults, I am assuming I do not need to add it to multipath.conf, but I do not know exactly. Also, my storage is Hitachi OpenV, while the default entry has the product name "DF.*", as you can see below.
So I am confused here.
device {
#       vendor                  "HITACHI"
#       product                 "DF.*"
#       getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
#       prio_callout            "/sbin/mpath_prio_hds_modular %d"
#       features                "0"
#       hardware_handler        "0"
#       path_grouping_policy    group_by_prio
#       failback                immediate
#       rr_weight               uniform
#       rr_min_io               1000
#       path_checker            readsector0
# }
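One way to see why that default entry may not apply: the vendor and product fields in multipath.conf are regular expressions matched against what the LUN reports, and an OpenV LUN typically reports a product string of "OPEN-V", which "DF.*" does not match. A rough sketch of the comparison (the anchoring is an approximation of multipath's matching, and the product strings are assumptions to verify with multipath -ll):

```shell
# Hypothetical helper: does a reported product string match a
# multipath.conf "product" regex? (Approximates multipath's regex
# matching; verify the real strings with `multipath -ll`.)
matches_product() {
    # $1 = product string reported by the LUN, $2 = regex from multipath.conf
    echo "$1" | grep -Eq "^$2"
}

matches_product "OPEN-V" "DF.*"    && echo "DF.* matches"    || echo "DF.* does not match"
matches_product "OPEN-V" "OPEN-.*" && echo "OPEN-.* matches" || echo "OPEN-.* does not match"
```

If the built-in entry's product regex does not match your LUNs, multipathd falls back to generic defaults for that array, which could explain odd failover behaviour.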
Thanks Paras.
On Fri, Jan 8, 2010 at 9:39 AM, Ray Van Dolson rayvd@bludgeon.org wrote:
On Fri, Jan 08, 2010 at 09:45:14AM -0600, Paras pradhan wrote:
I'm not familiar with this SAN, so can't suggest to you a better device configuration snippet.
Google may help you out: search for your device type and "multipath.conf". Alternately, if your vendor supports RHEL5 (I'm sure they do), you could contact them and ask them to provide you with a known good configuration.
Actually, a quick search yields this:
http://www.calivia.com/book/export/html/74
Which seems to have what you need (verify of course).
Ray
Thanks for the link, Ray.
One thing I am confused about: does "failover manual" mean that whenever there is a link failure, the host will be rebooted?
Paras.
On Fri, Jan 8, 2010 at 9:53 AM, Ray Van Dolson rayvd@bludgeon.org wrote:
Hi
No. If you have a dual HBA, then in the worst-case scenario the host should become unresponsive for a second or two, but then continue as if nothing had happened.
If it fails, it may be your multipath config, but a lot of the time the problem sits at the switch level or on the SAN.
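Whether the host just pauses or I/O errors out during that window is governed largely by the queueing settings. As a hedged sketch (the value is an assumption, not something tuned for this array), a bounded no_path_retry in the defaults section tells multipathd to queue I/O for a while before failing it:

```
defaults {
        user_friendly_names yes
        failback            immediate
        # Queue I/O for up to 12 polling intervals (~60s with the
        # default 5-second polling_interval) before failing it:
        no_path_retry       12
}
```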
Regards Per
On Fri, 2010-01-08 at 13:50 -0600, Paras pradhan wrote:
Hmm, it could also be worth checking the FC switch log if you get stuck.
Per
Sent from my iPhone
On 8 Jan 2010, at 15:37, Paras pradhan pradhanparas@gmail.com wrote: