Host mode setting on Sun StorageTek 6180 no longer supported after upgrade
Good afternoon,

We are in the process of upgrading our Sun StorageTek 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we ran into the following challenge:

  Solaris Issues: Solaris with Veritas DMP or other host type
  Bug 15840516: With the release of firmware 07.84.44.10, the host type 'Solaris (with Veritas DMP or other)' is no longer a valid host type.
  Workaround: If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

What host type should we choose after the upgrade? The connected systems run Veritas Cluster with DMP.

Please advise,
Remco

Solved
vxddladm show DMP state as not active

Good morning,

I have an issue that I can't seem to solve and I'm in dire need of assistance. I have Veritas Cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to two hosts (no switch!):

Controller port 1A is connected to host A
Controller port 1B is connected to host A
Controller port 2A is connected to host B
Controller port 2B is connected to host B

DMP is taking care of the multipathing and that part looks OK, but I see that the state is set to Not-Active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME         ASL_VERSION        Min. VXVM version
==========================================================
libvxlsiall.so   vm-5.1.100-rev-1   5.1

Output from vxdmpadm listapm dmpEngenio:

Filename:               dmpEngenio
APM name:               dmpEngenio
APM version:            1
Feature:                VxVM
VxVM version:           51
Array Types Supported:  A/PF-LSI
Depending Array Types:  A/P
State:                  Not-Active

Output from vxdctl mode:

mode: enabled

Both hosts show the same result: State: Not-Active.

So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to make sure that any commands given will not disrupt production; I will schedule downtime if that is needed. Can someone assist me?

Many thanks!
Remco
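A non-disruptive first checklist, sketched on the assumption that the LSI ASL and the dmpEngenio APM packages are already installed on both nodes (everything here is read-only except vxdctl enable):

# vxddladm listsupport libname=libvxlsiall.so
# vxdmpadm listenclosure all
# vxdmpadm listapm all
# vxdctl enable
# vxdmpadm listapm dmpEngenio

vxdmpadm listenclosure all shows whether the 6180 is actually being claimed as an LSI/Engenio enclosure rather than falling under OTHER_DISKS. An APM typically only reports Active once a connected array has been claimed by its matching array type, so a device rescan with vxdctl enable (normally safe on a running system, but worth a change window on production) is usually the first thing to try before anything more invasive.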
Does setting the DMP IOPOLICY require a reboot?

We are going to change the DMP IOPOLICY for our IBM XIV storage from Balanced to Round-Robin. We are also going to set use_all_paths=yes. Do these commands require a reboot (Solaris)? I checked the VxVM Admin Guide and can't see anything about this being disruptive or requiring a reboot. Just wondering if anyone has experience with this and can elaborate.

Solved
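For reference, both settings are normally applied online with vxdmpadm setattr; a sketch, assuming the XIV shows up as an enclosure named xiv0 (check the real name with the first command):

# vxdmpadm listenclosure all
# vxdmpadm setattr enclosure xiv0 iopolicy=round-robin
# vxdmpadm setattr enclosure xiv0 use_all_paths=yes
# vxdmpadm getattr enclosure xiv0 iopolicy

The new policy applies to subsequent I/O immediately, so no reboot should be needed; still, confirm against the admin guide for your exact VxVM release.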
Dynamic multipath using EMC storage Veritas 3.5

I am trying to set up multipathing with an EMC CLARiiON. The problem is that vxdisk list fabric_2 only shows one path. The EMC array is in auto-trespass mode. This is Solaris 8, and format shows two paths.

# vxdisk list fabric_2
Device:     fabric_2
devicetag:  fabric_2
type:       sliced
hostid:     ncsun1
disk:       name=disk05 id=1302111549.6037.ncsun1
group:      name=rootdg id=1072877341.1025.nc1
info:       privoffset=1
flags:      online ready private autoconfig autoimport imported
pubpaths:   block=/dev/vx/dmp/fabric_2s4 char=/dev/vx/rdmp/fabric_2s4
privpaths:  block=/dev/vx/dmp/fabric_2s3 char=/dev/vx/rdmp/fabric_2s3
version:    2.2
iosize:     min=512 (bytes) max=2048 (blocks)
public:     slice=4 offset=0 len=1048494080
private:    slice=3 offset=1 len=32511
update:     time=1302111558 seqno=0.5
headers:    0 248
configs:    count=1 len=23969
logs:       count=1 len=3631
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-023986[023738]: copy=01 offset=000231 enabled
 log      priv 023987-027617[003631]: copy=01 offset=000000 enabled
Multipathing information:
numpaths:   1
c10t500601613B241045d5s2   state=enabled

The two paths as seen in format:

 8. c10t500601613B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64>
    /ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0/ssd@w500601613b241045,0
16. c16t500601603B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64>
    /ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0/ssd@w500601603b241045,0

vxdisk -o alldgs list shows both paths. Two things here: it should only show one of the paths, and the second path shows with its disk group in parentheses. Another issue is why the disks don't show up as EMC_0 or similar. (The server has both T3 and EMC storage connected; we are migrating from T3 to EMC. The fabric_ devices are the EMC LUNs.)

# vxdisk -o alldgs list
DEVICE     TYPE    DISK      GROUP     STATUS
T30_0      sliced  disk01    rootdg    online
T30_1      sliced  disk02    rootdg    online
T31_0      sliced  disk03    rootdg    online
T31_1      sliced  disk04    rootdg    online
T32_0      sliced  rootdg00  rootdg    online
T32_1      sliced  rootdg01  rootdg    online
c1t0d0s2   sliced  -         -         error
c1t1d0s2   sliced  -         -         error
fabric_0   sliced  -         -         error
fabric_1   sliced  -         -         error
fabric_2   sliced  disk05    rootdg    online
fabric_3   sliced  disk06    rootdg    online
fabric_4   sliced  disk07    rootdg    online
fabric_5   sliced  disk08    rootdg    online
fabric_6   sliced  disk09    rootdg    online
fabric_7   sliced  disk10    rootdg    online
fabric_8   sliced  -         -         error
fabric_9   sliced  -         -         error
fabric_10  sliced  -         (rootdg)  online
fabric_11  sliced  -         (rootdg)  online
fabric_12  sliced  -         (rootdg)  online
fabric_13  sliced  -         (rootdg)  online
fabric_14  sliced  -         (rootdg)  online
fabric_15  sliced  -         (rootdg)  online

Here is the ASL (note there is no APM prior to Veritas 4.0). A vxddladm listsupport snippet, trimmed for brevity:
libvxDGCclariion.so   A/P   DGC   CLARiiON

c10 and c16 are the controllers for the EMC paths:

# vxdmpadm listctlr all
CTLR-NAME  ENCLR-TYPE   STATE    ENCLR-NAME
=====================================================
c1         OTHER_DISKS  ENABLED  OTHER_DISKS
c10        OTHER_DISKS  ENABLED  OTHER_DISKS
c16        OTHER_DISKS  ENABLED  OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c10
NAME                      STATE    PATH-TYPE  DMPNODENAME  ENCLR-TYPE   ENCLR-NAME
==================================================================================
c10t500601613B241045d7s2  ENABLED  -          fabric_0     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d6s2  ENABLED  -          fabric_1     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d5s2  ENABLED  -          fabric_2     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d4s2  ENABLED  -          fabric_3     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d3s2  ENABLED  -          fabric_4     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d2s2  ENABLED  -          fabric_5     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d1s2  ENABLED  -          fabric_6     OTHER_DISKS  OTHER_DISKS
c10t500601613B241045d0s2  ENABLED  -          fabric_7     OTHER_DISKS  OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c16
NAME                      STATE    PATH-TYPE  DMPNODENAME  ENCLR-TYPE   ENCLR-NAME
==================================================================================
c16t500601603B241045d7s2  ENABLED  -          fabric_8     OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d6s2  ENABLED  -          fabric_9     OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d5s2  ENABLED  -          fabric_10    OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d4s2  ENABLED  -          fabric_11    OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d3s2  ENABLED  -          fabric_12    OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d2s2  ENABLED  -          fabric_13    OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d1s2  ENABLED  -          fabric_14    OTHER_DISKS  OTHER_DISKS
c16t500601603B241045d0s2  ENABLED  -          fabric_15    OTHER_DISKS  OTHER_DISKS

Thanks for any help

Solved
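A sketch of the usual re-claim sequence for this symptom (VxVM 3.5 era, so ASL only, no APM), assuming the CLARiiON ASL package is installed:

# vxddladm listsupport
(libvxDGCclariion.so must be listed)
# vxdctl enable
(re-runs device discovery so the ASL can claim the LUNs)
# vxdisk list
# vxdmpadm listctlr all

As long as listctlr keeps reporting the EMC controllers as ENCLR-TYPE OTHER_DISKS, the ASL is not claiming the array. Each physical path then becomes its own DMP node, which would explain all three observations at once: why fabric_2 shows numpaths: 1, why the c16 copies appear as separate fabric_8..fabric_15 disks with "(rootdg)" in parentheses, and why the disks are not named EMC_0 or similar.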
format displays "The current rpm value 0 is invalid, adjusting it to 3600" using DMP with EMC CX-4 on Solaris 10

I am currently working on a project using VxVM, with DMP enabled for multipathing. However, when I type 'format' at the command prompt:

Searching for disks...
The current rpm value 0 is invalid, adjusting it to 3600
(the line above repeats many times, once per EMC LUN)
done

c2t5006016141E0CABEd0: configured with capacity of 99.98GB
c2t5006016841E0CABEd1: configured with capacity of 99.98GB
(... the same message appears for all 38 EMC LUNs visible on c2 and c4, each 99.98GB ...)

AVAILABLE DISK SELECTIONS:
 0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@0,600000/pci@0/scsi@1/sd@0,0
 1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@0,600000/pci@0/scsi@1/sd@1,0
 2. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@10,600000/pci@0/scsi@1/sd@0,0
 3. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@10,600000/pci@0/scsi@1/sd@1,0
 4. c2t5006016141E0CABEd0 <DGC-VRAID-0326 cyl 13052 alt 2 hd 255 sec 63>
    /pci@1,700000/SUNW,qlc@0/fp@0,0/ssd@w5006016141e0cabe,0
 5. c2t5006016841E0CABEd0 <DGC-VRAID-0326 cyl 51198 alt 2 hd 256 sec 16>
    /pci@1,700000/SUNW,qlc@0/fp@0,0/ssd@w5006016841e0cabe,0
(... entries 6-79 continue the same pattern: each DGC-VRAID LUN appears once via c2 and once via c4, alternating between the two label geometries shown above ...)
80. c8t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@34,600000/pci@0/scsi@1/sd@0,0
81. c8t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
    /pci@34,600000/pci@0/scsi@1/sd@1,0
82. c10t5006016841E0CABEd0 <drive type unknown>
    /pci@35,700000/SUNW,qlc@0/fp@0,0/ssd@w5006016841e0cabe,0
83. c10t5006016141E0CABEd0 <drive type unknown>
    /pci@35,700000/SUNW,qlc@0/fp@0,0/ssd@w5006016141e0cabe,0
84. c10t5006016144601064d0 <drive type unknown>
    /pci@35,700000/SUNW,qlc@0/fp@0,0/ssd@w5006016144601064,0
85. c10t5006016844601064d0 <drive type unknown>
    /pci@35,700000/SUNW,qlc@0/fp@0,0/ssd@w5006016844601064,0

How can I solve this problem? The host is not a cluster host. Thank you in advance.

Johnston Chan
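On the rpm message itself: format prints it when a LUN's reported SCSI geometry carries a rotation speed of 0, which array-backed DGC/CLARiiON VRAID LUNs commonly do, and it then just assumes 3600 rpm; as far as I know this is cosmetic and does not indicate a path or DMP problem. A couple of hedged checks (device names taken from the listing above):

# vxdmpadm listenclosure all
# vxdmpadm getsubpaths ctlr=c2
(confirm DMP pairs the c2 and c4 paths per LUN)
# format -d c10t5006016841E0CABEd0
(the <drive type unknown> entries on c10 are simply unlabeled; labeling one in format makes the drive type show up)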
How Can I Disable DMP?

I have a customer who installed HDLM along with Storage Foundation 5 on a Solaris 10 server. Not surprisingly, they had issues. Now they want to keep HDLM and Volume Manager but disable DMP. While I do not agree with this approach, I've been given the task of disabling DMP. I know I can use vxdmpadm to exclude controllers from being considered by DMP. My question is: is there a way to completely disable DMP altogether, or is excluding disks the best way to go? I've found references on the Symantec support site that discuss disabling DMP for earlier versions of Storage Foundation, but nothing that relates to SF 5.

Solved
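Two approaches usually come up for SF 5.x, sketched only (the vxdmp driver itself stays loaded, since VxVM does its device discovery through it, so "disabling DMP" in practice means keeping DMP's multipathing away from the HDLM-owned paths):

# vxdiskadm
(use the "Prevent multipathing/Suppress devices from VxVM's view" menu option against the HDLM-controlled paths)

Alternatively, declare the HDLM pseudo-devices as foreign devices so DDL passes them through untouched; the directories below are placeholders, so check where HDLM actually creates its device nodes:

# vxddladm addforeign blockdir=/dev/dlmfdrv/dsk chardir=/dev/dlmfdrv/rdsk
# vxdctl enable

Either way, validate the plan with Symantec and Hitachi support before rolling it into production.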