SFHACFS vs Hitachi-VSP & Host mode option 22
Hi! I'm installing several new SFHACFS clusters, and during failover testing I ran into an annoying problem: when I fence one node of the cluster, DMP logs a high number of path down/path up events, which in the end causes the disks to disconnect even on the other active nodes. We found out that our disks were exported without host mode option 22, so we fixed this on the storage side. Even after this, the clusters behaved the same. Later I read somewhere on the internet that it's a good idea to relabel the disks, so I requested new disks from storage and ran vxevac to the new disks. This fixed two of our clusters, but the other two still behave the same. Has anybody experienced anything similar? Do you know anything I can test or check on the servers to determine the difference? The only difference between the environments is that the non-working clusters have their disks mirrored from two storage systems, while the working ones have data disks from only one storage array.

Solved

SF5.1 VxDMP caches LUN size?
I recently recreated a LUN with a different size but the same LUN ID (Linux SLES11), and when trying to initialize it as a new disk in VxVM it keeps sticking to the old size, whatever I do (I tried to destroy and reinitialize it in different formats and to grow it many times):

# vxdisk list isar2_sas_1
Device: isar2_sas_1
public: slice=5 offset=65792 len=167700688 disk_offset=315
Multipathing information:
numpaths: 8
sdu state=disabled
sdt state=enabled
sds state=disabled
sdam state=disabled
sdan state=enabled
sdb state=enabled
sda state=disabled
sdv state=enabled

... but the main reason seems to be that DMP is somehow sticking to the old size:

# fdisk -l /dev/vx/dmp/isar2_sas_1
Disk /dev/vx/dmp/isar2_sas_1: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x55074b8b

pepper:/etc/vx # fdisk -l /dev/sdt
Disk /dev/sdt: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x55074b8b

Any hints are greatly appreciated :)

Solved

missing disks and reboot won't solve it
I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual fibre channel adapters are logged in, but the LPAR is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME ENCLR_TYPE STATE ENCLR_NAME PATH_COUNT
=========================================================================
fscsi2 Hitachi_VSP ENABLED hitachi_vsp0 44
fscsi3 Hitachi_VSP ENABLED hitachi_vsp0 44
fscsi4 Hitachi_VSP ENABLED hitachi_vsp0 44
fscsi5 Hitachi_VSP ENABLED hitachi_vsp0 44
fscsi6 Hitachi_VSP ENABLED hitachi_vsp0 44
fscsi7 Hitachi_VSP ENABLED hitachi_vsp0 44

^ Above you can see that fscsi0 and fscsi1, which are seen by the OS, are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and the LPAR, and that doesn't seem to help. FWIW, I deleted the disks that were in Defined state. Usually when MPIO is being used and we lose a path, deleting the disks and the virtual fibre channel adapter and running cfgmgr solves the issue, but that doesn't seem to help here.

Solved

Host mode setting on Sun STK 6180 after upgrade no longer supported
Good afternoon,

We are in the process of upgrading our Sun STK 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we ran into the following challenge:

Solaris Issues
Solaris with Veritas DMP or other host type
Bug 15840516—With the release of firmware 07.84.44.10, the host type ‘Solaris (with Veritas DMP or other)’ is no longer a valid host type.
Workaround—If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

Which host type should we choose after the upgrade? The connected systems are running Veritas cluster with DMP.

Please advise,
Remco

Solved

Why does one subpath in multipathing use slice 2 and not the others?
Wondering why the storageunit2 subpaths show slice 2, whereas the other subpaths show only the disk? Does this mean that this disk has been formatted/labeled differently? They should all be labeled EFI.

[2250]$ vxdisk path
SUBPATH DANAME DMNAME GROUP STATE
c0t0d0s2 disk_0 - - ENABLED
c3t500601683EA04599d0 storageunit1 - - ENABLED
c3t500601613EA04599d0 storageunit1 - - ENABLED
c2t500601603EA04599d0 storageunit1 - - ENABLED
c2t500601693EA04599d0 storageunit1 - - ENABLED
c3t500601613EA04599d2s2 storageunit2 - - ENABLED
c3t500601683EA04599d2s2 storageunit2 - - ENABLED
c2t500601693EA04599d2s2 storageunit2 - - ENABLED
c2t500601603EA04599d2s2 storageunit2 - - ENABLED
c3t500601613EA04599d1 storageunit3 - - ENABLED
c3t500601683EA04599d1 storageunit3 - - ENABLED
c2t500601603EA04599d1 storageunit3 - - ENABLED
c2t500601693EA04599d1 storageunit3 - - ENABLED

When I attempt to initialise the LUN I get this error:

[2318]$ vxdisksetup -i storageunit2
vxedvtoc: No such device or address

I can see from this output that paths are disabled:

[2303]$ vxdmpadm -v getdmpnode all
NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME SERIAL-NO ARRAY_VOL_ID
========================================================================================================
disk_0 ENABLED Disk 1 1 0 disk 600508B1001C0CB883CB65D7A794AC54 -
storageunit1 ENABLED EMC_CLARiiON 4 4 0 emc_clariion0 60060160D2202F004C1C9AB085F2E111 1
storageunit2 ENABLED EMC_CLARiiON 4 2 2 emc_clariion0 60060160D2202F00D6FACBE385F2E111 5
storageunit3 ENABLED EMC_CLARiiON 4 4 0 emc_clariion0 60060160D2202F00EE397F2986F2E111 9

Solved

Dynamic multipath using EMC storage, Veritas 3.5
I am trying to set up multipathing with an EMC CLARiiON. The problem is that vxdisk list fabric_2 only shows one path. The EMC array is in auto-trespass mode. This is Solaris 8, and format shows two paths.

# vxdisk list fabric_2
Device: fabric_2
devicetag: fabric_2
type: sliced
hostid: ncsun1
disk: name=disk05 id=1302111549.6037.ncsun1
group: name=rootdg id=1072877341.1025.nc1
info: privoffset=1
flags: online ready private autoconfig autoimport imported
pubpaths: block=/dev/vx/dmp/fabric_2s4 char=/dev/vx/rdmp/fabric_2s4
privpaths: block=/dev/vx/dmp/fabric_2s3 char=/dev/vx/rdmp/fabric_2s3
version: 2.2
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=4 offset=0 len=1048494080
private: slice=3 offset=1 len=32511
update: time=1302111558 seqno=0.5
headers: 0 248
configs: count=1 len=23969
logs: count=1 len=3631
Defined regions:
config priv 000017-000247[000231]: copy=01 offset=000000 enabled
config priv 000249-023986[023738]: copy=01 offset=000231 enabled
log priv 023987-027617[003631]: copy=01 offset=000000 enabled
Multipathing information:
numpaths: 1
c10t500601613B241045d5s2 state=enabled

format output:
8. c10t500601613B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64> /ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0/ssd@w500601613b241045,0
16. c16t500601603B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64> /ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0/ssd@w500601603b241045,0

vxdisk -o alldgs list shows both paths. Two things here: it should only show one of the paths, and the second path shows with the disk group in parentheses. Another issue is why the disks don't show up as EMC_0 or similar.

***** The server has both T3 and EMC storage connected; we are migrating from T3 to EMC. The fabric_* devices are the EMC LUNs under the fabric naming convention.
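One way to confirm what the listings below show, the same LUN appearing as two separate DMP nodes instead of two paths under one node, is to group the subpath names by their LUN number. A minimal sketch over a stand-in sample (the subpaths.txt file and its two rows are illustrative, copied from the getsubpaths output further down):

```shell
# Sketch: spot LUNs whose paths were split into separate DMP nodes
# (the fabric_2 / fabric_10 symptom). subpaths.txt stands in for saved
# "vxdmpadm getsubpaths" data rows; only two sample rows are used.
cat > subpaths.txt <<'EOF'
c10t500601613B241045d5s2 ENABLED - fabric_2 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d5s2 ENABLED - fabric_10 OTHER_DISKS OTHER_DISKS
EOF
# Key each subpath by its LUN number (the dN in cXtWWNdN) and report any
# LUN that maps to more than one DMP node name.
split_luns=$(awk '{
    lun = $1
    sub(/^c[0-9]+t[0-9A-F]+d/, "", lun)   # strip controller and target WWN
    sub(/s[0-9]+$/, "", lun)              # strip the slice suffix
    if (seen[lun] != "" && seen[lun] != $4)
        print "LUN d" lun ": paths split across " seen[lun] " and " $4
    seen[lun] = $4
}' subpaths.txt)
printf '%s\n' "$split_luns"
```

If the array were being claimed as multipathed, the two rows would share one DMP node name and the sketch would print nothing.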
# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
T30_0 sliced disk01 rootdg online
T30_1 sliced disk02 rootdg online
T31_0 sliced disk03 rootdg online
T31_1 sliced disk04 rootdg online
T32_0 sliced rootdg00 rootdg online
T32_1 sliced rootdg01 rootdg online
c1t0d0s2 sliced - - error
c1t1d0s2 sliced - - error
fabric_0 sliced - - error
fabric_1 sliced - - error
fabric_2 sliced disk05 rootdg online
fabric_3 sliced disk06 rootdg online
fabric_4 sliced disk07 rootdg online
fabric_5 sliced disk08 rootdg online
fabric_6 sliced disk09 rootdg online
fabric_7 sliced disk10 rootdg online
fabric_8 sliced - - error
fabric_9 sliced - - error
fabric_10 sliced - (rootdg) online
fabric_11 sliced - (rootdg) online
fabric_12 sliced - (rootdg) online
fabric_13 sliced - (rootdg) online
fabric_14 sliced - (rootdg) online
fabric_15 sliced - (rootdg) online

Here is the ASL (there is no APM prior to Veritas 4.0); vxddladm listsupport snippet, trimmed for brevity:

libvxDGCclariion.so A/P DGC CLARiiON

c10 and c16 are the paths to the EMC:

# vxdmpadm listctlr all
CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME
=====================================================
c1 OTHER_DISKS ENABLED OTHER_DISKS
c10 OTHER_DISKS ENABLED OTHER_DISKS
c16 OTHER_DISKS ENABLED OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c10
NAME STATE PATH-TYPE DMPNODENAME ENCLR-TYPE ENCLR-NAME
======================================================================
c10t500601613B241045d7s2 ENABLED - fabric_0 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d6s2 ENABLED - fabric_1 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d5s2 ENABLED - fabric_2 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d4s2 ENABLED - fabric_3 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d3s2 ENABLED - fabric_4 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d2s2 ENABLED - fabric_5 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d1s2 ENABLED - fabric_6 OTHER_DISKS OTHER_DISKS
c10t500601613B241045d0s2 ENABLED - fabric_7 OTHER_DISKS OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c16
NAME STATE PATH-TYPE DMPNODENAME ENCLR-TYPE ENCLR-NAME
======================================================================
c16t500601603B241045d7s2 ENABLED - fabric_8 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d6s2 ENABLED - fabric_9 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d5s2 ENABLED - fabric_10 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d4s2 ENABLED - fabric_11 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d3s2 ENABLED - fabric_12 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d2s2 ENABLED - fabric_13 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d1s2 ENABLED - fabric_14 OTHER_DISKS OTHER_DISKS
c16t500601603B241045d0s2 ENABLED - fabric_15 OTHER_DISKS OTHER_DISKS

Thanks for any help

Solved

DMP - vxdmpadm reporting extra paths
Hi,

Got a bit of a funny on a number of servers after a misconfiguration as part of a SAN migration, where a cable was incorrectly plugged, removed, and then plugged in where it was supposed to be. All servers are running Solaris 10 with Storage Foundation 4.1 MP2. Below are details of what's been seen and the actions taken to try to tidy up.

In format there are a number of devices that don't exist. These were created when the cable was incorrectly plugged:

c4t500507630513453Ad8

There's also this unwanted ap_id for that connection, which currently shows as being unusable:

# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown
c1 scsi-bus connected configured unknown
c1::dsk/c1t4d0 CD-ROM connected configured unknown
c4 fc-fabric connected configured unknown
c4::500507630513453a disk connected configured unusable <--- This one
c4::50050763051882c4 disk connected configured unknown
c4::500507630518853a disk connected configured unknown
c5 fc connected unconfigured unknown
c6 fc-fabric connected configured unknown
c6::50050763050882c4 disk connected configured unknown
c6::500507630508853a disk connected configured unknown
c7 fc connected unconfigured unknown
#

Querying the enclosure shows that there are 3 paths to the devices, with two active and one disabled:

# vxdmpadm getdmpnode enclosure=IBM_DS8x000
NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME
=========================================================================
c6t500507630508853Ad5s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad2s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad16s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad18s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad23s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad0s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad7s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad8s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad15s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c4t500507630518853Ad21s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad13s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad11s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad9s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad1s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad17s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad19s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad14s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad22s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad4s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c4t500507630518853Ad20s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad3s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad24s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad12s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad10s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad6s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
c6t500507630508853Ad25s2 ENABLED IBM_DS8x00 3 2 1 IBM_DS8x000
#

Querying one of the devices shows that it thinks it has three paths, with the incorrect one being the one that shows as disabled:

# vxdisk list c6t500507630508853Ad8s2
Device: c6t500507630508853Ad8s2
devicetag: c6t500507630508853Ad8
type: auto
hostid: fuj409
disk: name=b05_1423 id=1183556373.64.fuj409
group: name=qvdg00 id=1183557746.95.fuj409
info: format=sliced,privoffset=1,pubslice=4,privslice=3
flags: online ready private autoconfig noautoimport imported
pubpaths: block=/dev/vx/dmp/c6t500507630508853Ad8s4 char=/dev/vx/rdmp/c6t500507630508853Ad8s4
privpaths: block=/dev/vx/dmp/c6t500507630508853Ad8s3 char=/dev/vx/rdmp/c6t500507630508853Ad8s3
version: 2.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=4 offset=0 len=31447680 disk_offset=5760
private: slice=3 offset=1 len=3583 disk_offset=1920
update: time=1260157705 seqno=0.222
ssb: actual_seqno=0.0
headers: 0 248
configs: count=1 len=2615
logs: count=1 len=396
Defined regions:
config priv 000017-000247[000231]: copy=01 offset=000000 enabled
config priv 000249-002632[002384]: copy=01 offset=000231 enabled
log priv 002633-003028[000396]: copy=01 offset=000000 enabled
Multipathing information:
numpaths: 3
c6t500507630508853Ad8s2 state=enabled
c4t500507630518853Ad8s2 state=enabled
c4t500507630513453Ad8s2 state=disabled <--- This is the unavailable drive from the output of the format command above
#

Removed the unwanted ap_id by unconfiguring it, which cleaned up all the unavailable devices in format:

# cfgadm -c unconfigure c4::500507630513453a
# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown
c1 scsi-bus connected configured unknown
c1::dsk/c1t4d0 CD-ROM connected configured unknown
c4 fc-fabric connected configured unknown
c4::50050763051882c4 disk connected configured unknown
c4::500507630518853a disk connected configured unknown
c5 fc connected unconfigured unknown
c6 fc-fabric connected configured unknown
c6::50050763050882c4 disk connected configured unknown
c6::500507630508853a disk connected configured unknown
c7 fc connected unconfigured unknown
#

The device still thinks it has three paths:

# vxdisk list c6t500507630508853Ad8s2
Device: c6t500507630508853Ad8s2
devicetag: c6t500507630508853Ad8
type: auto
hostid: fuj409
disk: name=b05_1423 id=1183556373.64.fuj409
group: name=qvdg00 id=1183557746.95.fuj409
info: format=sliced,privoffset=1,pubslice=4,privslice=3
flags: online ready private autoconfig noautoimport imported
pubpaths: block=/dev/vx/dmp/c6t500507630508853Ad8s4 char=/dev/vx/rdmp/c6t500507630508853Ad8s4
privpaths: block=/dev/vx/dmp/c6t500507630508853Ad8s3 char=/dev/vx/rdmp/c6t500507630508853Ad8s3
version: 2.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=4 offset=0 len=31447680 disk_offset=5760
private: slice=3 offset=1 len=3583 disk_offset=1920
update: time=1260157705 seqno=0.222
ssb: actual_seqno=0.0
headers: 0 248
configs: count=1 len=2615
logs: count=1 len=396
Defined regions:
config priv 000017-000247[000231]: copy=01 offset=000000 enabled
config priv 000249-002632[002384]: copy=01 offset=000231 enabled
log priv 002633-003028[000396]: copy=01 offset=000000 enabled
Multipathing information:
numpaths: 3
c6t500507630508853Ad8s2 state=enabled
c4t500507630518853Ad8s2 state=enabled
c4t500507630513453Ad8s2 state=disabled
#

# vxdmpadm getdmpnode nodename=c6t500507630508853Ad8s2
NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME
=========================================================================
c6t500507630508853Ad8s2 ENABLED IBM_DS8x00 3 1 2 IBM_DS8x000
#

Ran 'vxdctl enable' and now only see the two paths:

# vxdisk list c6t500507630508853Ad8s2
Device: c6t500507630508853Ad8s2
devicetag: c6t500507630508853Ad8
type: auto
hostid: fuj409
disk: name=b05_1423 id=1183556373.64.fuj409
group: name=qvdg00 id=1183557746.95.fuj409
info: format=sliced,privoffset=1,pubslice=4,privslice=3
flags: online ready private autoconfig noautoimport imported
pubpaths: block=/dev/vx/dmp/c6t500507630508853Ad8s4 char=/dev/vx/rdmp/c6t500507630508853Ad8s4
privpaths: block=/dev/vx/dmp/c6t500507630508853Ad8s3 char=/dev/vx/rdmp/c6t500507630508853Ad8s3
version: 2.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=4 offset=0 len=31447680 disk_offset=5760
private: slice=3 offset=1 len=3583 disk_offset=1920
update: time=1260157705 seqno=0.222
ssb: actual_seqno=0.0
headers: 0 248
configs: count=1 len=2615
logs: count=1 len=396
Defined regions:
config priv 000017-000247[000231]: copy=01 offset=000000 enabled
config priv 000249-002632[002384]: copy=01 offset=000231 enabled
log priv 002633-003028[000396]: copy=01 offset=000000 enabled
Multipathing information:
numpaths: 2
c6t500507630508853Ad8s2 state=enabled
c4t500507630518853Ad8s2 state=enabled

However, via vxdmpadm I still see three paths, and it now thinks two of them are disabled:

# vxdmpadm getdmpnode nodename=c6t500507630508853Ad8s2
NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME
=========================================================================
c6t500507630508853Ad8s2 ENABLED IBM_DS8x00 3 1 2 IBM_DS8x000
#

I've tried restarting vxconfigd, running 'vxdisk scandisks' and 'vxdctl initdmp', but with no luck. There are no extra devices lurking around under /dev/vx/dmp or /dev/vx/rdmp. The following is in /etc/vx/dmpevents.log. The events at 11:00 would be from when the cable was incorrectly added and then removed; those at 15:00 will be from the action taken to try to tidy things up:

Fri Apr 9 11:17:06.000: Reconfiguration is in progress
Fri Apr 9 11:17:06.000: Reconfiguration has finished
Fri Apr 9 11:19:31.071: I/O error occured on Path c4t500507630513453Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:19:31.072: I/O error occured on Path c4t500507630513453Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:19:31.072: Disabled Path c4t500507630513453Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:19:31.072: I/O error occured on Path c4t500507630513453Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:19:31.072: I/O analysis done on Path c4t500507630513453Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:19:31.072: I/O error occured on Path c4t500507630513453Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:19:31.072: Disabled Path c4t500507630513453Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:19:31.072: Disabled Path c4t500507630513453Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:19:31.073: I/O analysis done on Path c4t500507630513453Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:19:31.073: Disabled Path c4t500507630513453Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:19:31.073: I/O analysis done on Path c4t500507630513453Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:19:31.074: I/O error occured on Path c4t500507630513453Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:19:31.074: I/O analysis done on Path c4t500507630513453Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:19:31.074: I/O error occured on Path c4t500507630513453Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:19:31.074: Disabled Path c4t500507630513453Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:19:31.074: I/O analysis done on Path c4t500507630513453Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:19:31.074: Disabled Path c4t500507630513453Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:19:31.075: I/O analysis done on Path c4t500507630513453Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:20:12.041: I/O error occured on Path c4t500507630513453Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:20:12.042: Disabled Path c4t500507630513453Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:20:12.042: I/O analysis done on Path c4t500507630513453Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:21:28.071: I/O error occured on Path c4t500507630518853Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:21:28.071: I/O error occured on Path c4t500507630518853Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:21:28.071: Disabled Path c4t500507630518853Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:21:28.072: I/O analysis done on Path c4t500507630518853Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:21:28.072: I/O error occured on Path c4t500507630518853Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:21:28.072: Disabled Path c4t500507630518853Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:21:28.072: Disabled Path c4t500507630518853Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:21:28.072: I/O error occured on Path c4t500507630518853Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:21:28.073: I/O analysis done on Path c4t500507630518853Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:21:28.073: Disabled Path c4t500507630518853Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:21:28.075: I/O error occured on Path c4t500507630518853Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:21:28.076: I/O analysis done on Path c4t500507630518853Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:21:28.076: Disabled Path c4t500507630518853Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:21:28.077: I/O analysis done on Path c4t500507630518853Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:21:28.077: I/O error occured on Path c4t500507630518853Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:21:28.077: Disabled Path c4t500507630518853Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:21:28.078: I/O analysis done on Path c4t500507630518853Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:21:28.078: I/O analysis done on Path c4t500507630518853Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:21:37.041: I/O error occured on Path c4t500507630518853Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:21:37.042: Disabled Path c4t500507630518853Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:21:37.042: I/O analysis done on Path c4t500507630518853Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:21:46.452: I/O error occured on Path c4t500507630518853Ad14s2 belonging to Dmpnode c6t500507630508853Ad14s2
Fri Apr 9 11:21:46.452: Disabled Path c4t500507630518853Ad14s2 belonging to Dmpnode c6t500507630508853Ad14s2
Fri Apr 9 11:21:46.453: I/O analysis done on Path c4t500507630518853Ad14s2 belonging to Dmpnode c6t500507630508853Ad14s2
Fri Apr 9 11:22:47.623: Disabled Path c4t500507630518853Ad18s2 belonging to Dmpnode c6t500507630508853Ad18s2
Fri Apr 9 11:22:47.623: Disabled Path c4t500507630518853Ad23s2 belonging to Dmpnode c6t500507630508853Ad23s2
Fri Apr 9 11:22:47.623: Disabled Path c4t500507630518853Ad7s2 belonging to Dmpnode c6t500507630508853Ad7s2
Fri Apr 9 11:22:47.623: Disabled Path c4t500507630518853Ad9s2 belonging to Dmpnode c6t500507630508853Ad9s2
Fri Apr 9 11:22:47.624: Disabled Path c4t500507630518853Ad22s2 belonging to Dmpnode c6t500507630508853Ad22s2
Fri Apr 9 11:22:47.624: Disabled Path c4t500507630518853Ad24s2 belonging to Dmpnode c6t500507630508853Ad24s2
Fri Apr 9 11:22:47.624: Disabled Path c4t500507630518853Ad12s2 belonging to Dmpnode c6t500507630508853Ad12s2
Fri Apr 9 11:27:47.624: Enabled Path c4t500507630518853Ad18s2 belonging to Dmpnode c6t500507630508853Ad18s2
Fri Apr 9 11:27:47.624: Enabled Path c4t500507630518853Ad23s2 belonging to Dmpnode c6t500507630508853Ad23s2
Fri Apr 9 11:27:47.624: Enabled Path c4t500507630518853Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 11:27:47.624: Enabled Path c4t500507630518853Ad7s2 belonging to Dmpnode c6t500507630508853Ad7s2
Fri Apr 9 11:27:47.625: Enabled Path c4t500507630518853Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 11:27:47.625: Enabled Path c4t500507630518853Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:27:47.625: Enabled Path c4t500507630518853Ad14s2 belonging to Dmpnode c6t500507630508853Ad14s2
Fri Apr 9 11:27:47.626: Enabled Path c4t500507630518853Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 11:27:47.626: Enabled Path c4t500507630518853Ad22s2 belonging to Dmpnode c6t500507630508853Ad22s2
Fri Apr 9 11:27:47.626: Enabled Path c4t500507630518853Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 11:27:47.626: Enabled Path c4t500507630518853Ad24s2 belonging to Dmpnode c6t500507630508853Ad24s2
Fri Apr 9 11:27:47.626: Enabled Path c4t500507630518853Ad12s2 belonging to Dmpnode c6t500507630508853Ad12s2
Fri Apr 9 11:27:47.627: Enabled Path c4t500507630518853Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 11:27:47.627: Enabled Path c4t500507630518853Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 11:27:47.628: Enabled Path c4t500507630518853Ad9s2 belonging to Dmpnode c6t500507630508853Ad9s2
Fri Apr 9 11:42:55.000: Reconfiguration is in progress
Fri Apr 9 11:42:55.000: Reconfiguration has finished
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad16s2 belonging to Dmpnode c6t500507630508853Ad16s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad22s2 belonging to Dmpnode c6t500507630508853Ad22s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad1s2 belonging to Dmpnode c6t500507630508853Ad1s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad5s2 belonging to Dmpnode c6t500507630508853Ad5s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad9s2 belonging to Dmpnode c6t500507630508853Ad9s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad7s2 belonging to Dmpnode c6t500507630508853Ad7s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad21s2 belonging to Dmpnode c4t500507630518853Ad21s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad10s2 belonging to Dmpnode c6t500507630508853Ad10s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad18s2 belonging to Dmpnode c6t500507630508853Ad18s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad14s2 belonging to Dmpnode c6t500507630508853Ad14s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad2s2 belonging to Dmpnode c6t500507630508853Ad2s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad4s2 belonging to Dmpnode c6t500507630508853Ad4s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad24s2 belonging to Dmpnode c6t500507630508853Ad24s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad23s2 belonging to Dmpnode c6t500507630508853Ad23s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad12s2 belonging to Dmpnode c6t500507630508853Ad12s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad6s2 belonging to Dmpnode c6t500507630508853Ad6s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad13s2 belonging to Dmpnode c6t500507630508853Ad13s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad11s2 belonging to Dmpnode c6t500507630508853Ad11s2
Fri Apr 9 11:42:52.202: Disabled Path c4t500507630513453Ad17s2 belonging to Dmpnode c6t500507630508853Ad17s2
Fri Apr 9 15:48:32.873: Disabled Path c4t500507630513453Ad25s2 belonging to Dmpnode c6t500507630508853Ad25s2
Fri Apr 9 15:48:32.921: Disabled Path c4t500507630513453Ad20s2 belonging to Dmpnode c4t500507630518853Ad20s2
Fri Apr 9 15:48:32.968: Disabled Path c4t500507630513453Ad19s2 belonging to Dmpnode c6t500507630508853Ad19s2
Fri Apr 9 15:48:33.015: Disabled Path c4t500507630513453Ad15s2 belonging to Dmpnode c6t500507630508853Ad15s2
Fri Apr 9 15:48:33.063: Disabled Path c4t500507630513453Ad14s2 belonging to Dmpnode c6t500507630508853Ad14s2
Fri Apr 9 15:48:33.111: Disabled Path c4t500507630513453Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 15:48:33.160: Disabled Path c4t500507630513453Ad3s2 belonging to Dmpnode c6t500507630508853Ad3s2
Fri Apr 9 15:48:33.207: Disabled Path c4t500507630513453Ad0s2 belonging to Dmpnode c6t500507630508853Ad0s2
Fri Apr 9 15:50:54.000: Reconfiguration is in progress
Fri Apr 9 15:50:54.000: Reconfiguration has finished
Fri Apr 9 15:52:39.000: Reconfiguration is in progress
Fri Apr 9 15:52:39.000: Reconfiguration has finished

I'd prefer not to have to reboot, so if anyone has any good ideas on how to get DMP to properly clear out what it knows and rescan, it would be helpful.
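In case it helps anyone digging through the same kind of log, the paths DMP still believes are disabled can be pulled out of a dmpevents.log extract with a short awk pass. A minimal sketch over a stand-in sample (the events.log file and its three lines mirror the log format above; they are illustrative, not the full log):

```shell
# Sketch: from a dmpevents.log extract, list the paths whose most recent
# event is still "Disabled". events.log is a stand-in three-line sample
# in the same format as the log quoted above.
cat > events.log <<'EOF'
Fri Apr 9 11:19:31.072: Disabled Path c4t500507630513453Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 11:27:47.625: Enabled Path c4t500507630518853Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
Fri Apr 9 15:48:33.111: Disabled Path c4t500507630513453Ad8s2 belonging to Dmpnode c6t500507630508853Ad8s2
EOF
# Track the last Enabled/Disabled event seen per path, then print the
# paths that finished the log in the Disabled state.
stale_paths=$(awk '/ Path .* belonging to Dmpnode / {
    for (i = 2; i < NF; i++)
        if ($i == "Path") last[$(i+1)] = $(i-1)
}
END { for (p in last) if (last[p] == "Disabled") print p }' events.log)
printf '%s\n' "$stale_paths"
```

Run against the real log, anything printed here is a candidate for the stale state that a rescan or restart of the daemon hasn't cleared.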
Thanks in advance,
Phil.

Solved

DMP default IOPOLICY is different for different kinds of storage
[root@hostname]/> vxdmpadm getattr enclosure EMC0 iopolicy
ENCLR_NAME DEFAULT CURRENT
============================================
EMC0 MinimumQ Adaptive
[root@hostname]/> vxdmpadm getattr enclosure HDS9500-ALUA0 iopolicy
ENCLR_NAME DEFAULT CURRENT
============================================
HDS9500-ALUA0 Round-Robin Single-Active
[root@hostname]/>

When I look at the DMP iopolicy for different storage arrays, I see that the default DMP mechanism is different. How is this default value set, and is there any documentation on this? Also, are there any recommendations from the storage vendor depending on the storage's host usage and throughput?

Solved
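As a footnote for later readers: once the per-enclosure values have been collected (for example from vxdmpadm getattr enclosure <name> iopolicy, as shown above), the enclosures that deviate from their DMP default are easy to flag. A minimal sketch over a stand-in data file (the iopolicy.txt file is an assumption; its rows are copied from the output above):

```shell
# Sketch: flag enclosures whose CURRENT iopolicy differs from the DMP
# DEFAULT. iopolicy.txt stands in for the data rows collected from
# "vxdmpadm getattr enclosure <name> iopolicy" for each enclosure.
cat > iopolicy.txt <<'EOF'
EMC0 MinimumQ Adaptive
HDS9500-ALUA0 Round-Robin Single-Active
EOF
# Column 2 is the default policy, column 3 the current one.
changed=$(awk '$2 != $3 { print $1 ": default=" $2 ", current=" $3 }' iopolicy.txt)
printf '%s\n' "$changed"
```

In this sample both enclosures have been switched away from their defaults, so both rows are printed; an unchanged enclosure would produce no output.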