Recent Discussions
vxddladm shows DMP state as not active
Good morning, I have an issue that I can't seem to solve and I'm in dire need of assistance. I have Veritas Cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to 2 hosts (no switch!). Controller port 1A is connected to host A, controller port 1B is connected to host A, controller port 2A is connected to host B, and controller port 2B is connected to host B. DMP is taking care of the multipathing and looks OK, however I see that the state is set to not active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME          ASL_VERSION          Min. VXVM version
==========================================================
libvxlsiall.so    vm-5.1.100-rev-1     5.1

Output of vxdmpadm list dmpEngenio:

Filename:               dmpEngenio
APM name:               dmpEngenio
APM version:            1
Feature:                VxVM
VxVM version:           51
Array Types Supported:  A/PF-LSI
Depending Array Types:  A/P
State:                  Not-Active

Output from vxdctl mode:

mode: enabled

Both hosts show the same result: State: Not-Active. So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to make sure that any commands given will not disrupt production. I will schedule downtime in that case. Can someone assist me? Many thanks! Remco
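For readers with the same symptom, a few read-only DMP queries are commonly used to confirm how the Engenio APM has claimed such an array before changing anything; they do not modify state, and the dmpnode name below is a placeholder, not taken from the post:

# vxdmpadm listapm all                          # list installed APMs and whether each is Active
# vxdmpadm listenclosure all                    # confirm the 6180 is claimed and note its enclosure name
# vxdmpadm getsubpaths dmpnodename=<dmpnode>    # <dmpnode> is a placeholder; shows per-path states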
SF5.1 VxDMP caches LUN size?
I recently recreated a LUN with a different size but the same LUN ID (Linux SLES11), and when trying to initialize it as a new disk in VxVM it keeps sticking to the old size, whatever I do (I tried to destroy and reinitialize in different formats and grow it many times):

# vxdisk list isar2_sas_1
Device:    isar2_sas_1
public:    slice=5 offset=65792 len=167700688 disk_offset=315
Multipathing information:
numpaths:  8
sdu    state=disabled
sdt    state=enabled
sds    state=disabled
sdam   state=disabled
sdan   state=enabled
sdb    state=enabled
sda    state=disabled
sdv    state=enabled

... but the main reason seems to be that DMP is somehow sticking to the old size:

# fdisk -l /dev/vx/dmp/isar2_sas_1
Disk /dev/vx/dmp/isar2_sas_1: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x55074b8b

pepper:/etc/vx # fdisk -l /dev/sdt
Disk /dev/sdt: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x55074b8b

Any hints are greatly appreciated :)
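A rescan sequence along these lines is sometimes suggested when DMP keeps reporting a resized LUN at its old size; this is only a sketch (device and disk names are taken from the post), the disk must not be in use, and on a production host it belongs in a change window:

# echo 1 > /sys/block/sdt/device/rescan   # ask the OS to re-read the LUN capacity (repeat for each path: sda, sdb, ...)
# blockdev --getsize64 /dev/sdt           # verify the OS path now reports the new size
# vxdisk rm isar2_sas_1                   # drop the stale disk access record from VxVM
# vxdisk scandisks                        # have VxVM/DMP rediscover the device
# vxdisk list isar2_sas_1                 # confirm the dmpnode now reflects the new size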
SF4.1 VxDMP disables dmpnode on single path failure
This is more of an informational question, since I do not assume anyone has a solution, but just in case I would be thankful for some enlightenment. I am forced to use an old version, SF 4.1 MP4, in this case on Linux SLES9. For whatever reason DMP does not work with the JBOD I have added. The JBOD (Promise VTrak 610fD) is ALUA, so half of all the available paths are always standby, and that is OK. But DMP in 4.1, when it sees one of the 4 paths not working, disables the whole DMP node, rendering the disk unusable:

Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-148 i/o error occured on path 8/0x70 belonging to dmpnode 201/0x10
VxVM vxdmp V-5-0-148 i/o error analysis done on path 8/0x70 belonging to dmpnode 201/0x10
VxVM vxdmp V-5-0-0 SCSI error opcode=0x28 returned status=0x1 key=0x2 asc=0x4 ascq=0xb on path 8/0x70
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x50 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x70 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x10 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x30 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-111 disabled dmpnode 201/0x10
Jun 12 15:35:01 kernel: Buffer I/O error on device VxDMP2, logical block 0

Currently my only solution seems to be to stick with Linux DM-Multipathing and add the disks as foreign devices.
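For anyone hitting the same behaviour, a few read-only queries show how a 4.1 installation has classified a JBOD and its paths. They are pointers on where to look, not a fix, and they will not add ALUA awareness to 4.1; the controller name is a placeholder:

# vxddladm listjbod                   # is the VTrak listed as a user-defined JBOD entry?
# vxdmpadm listenclosure all          # which enclosure and array type DMP assigned to it
# vxdmpadm getsubpaths ctlr=<ctlr>    # <ctlr> is a placeholder; shows per-path enabled/disabled state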
Dynamic multipath using EMC storage Veritas 3.5
I am trying to set up multipathing with an EMC CLARiiON. The problem is that vxdisk list fabric_0 only shows one path. The EMC array is in auto-trespass mode. This is Solaris 8, and format shows two paths.

# vxdisk list fabric_2
Device:     fabric_2
devicetag:  fabric_2
type:       sliced
hostid:     ncsun1
disk:       name=disk05 id=1302111549.6037.ncsun1
group:      name=rootdg id=1072877341.1025.nc1
info:       privoffset=1
flags:      online ready private autoconfig autoimport imported
pubpaths:   block=/dev/vx/dmp/fabric_2s4 char=/dev/vx/rdmp/fabric_2s4
privpaths:  block=/dev/vx/dmp/fabric_2s3 char=/dev/vx/rdmp/fabric_2s3
version:    2.2
iosize:     min=512 (bytes) max=2048 (blocks)
public:     slice=4 offset=0 len=1048494080
private:    slice=3 offset=1 len=32511
update:     time=1302111558 seqno=0.5
headers:    0 248
configs:    count=1 len=23969
logs:       count=1 len=3631
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-023986[023738]: copy=01 offset=000231 enabled
 log      priv 023987-027617[003631]: copy=01 offset=000000 enabled
Multipathing information:
numpaths:   1
c10t500601613B241045d5s2 state=enabled

format output:
 8. c10t500601613B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64>
    /ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0/ssd@w500601613b241045,0
16. c16t500601603B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64>
    /ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0/ssd@w500601603b241045,0

vxdisk -o alldgs list shows both paths. Two things here: it should only show one of the paths, and the second path is shown with the disk group in parentheses. Another issue is why the disks don't show up as EMC_0 or similar. (The server has T3s connected as well as the EMC; we are migrating from the T3s to the EMC. The EMC LUNs are the ones with the fabric naming convention.)

# vxdisk -o alldgs list
DEVICE       TYPE      DISK       GROUP     STATUS
T30_0        sliced    disk01     rootdg    online
T30_1        sliced    disk02     rootdg    online
T31_0        sliced    disk03     rootdg    online
T31_1        sliced    disk04     rootdg    online
T32_0        sliced    rootdg00   rootdg    online
T32_1        sliced    rootdg01   rootdg    online
c1t0d0s2     sliced    -          -         error
c1t1d0s2     sliced    -          -         error
fabric_0     sliced    -          -         error
fabric_1     sliced    -          -         error
fabric_2     sliced    disk05     rootdg    online
fabric_3     sliced    disk06     rootdg    online
fabric_4     sliced    disk07     rootdg    online
fabric_5     sliced    disk08     rootdg    online
fabric_6     sliced    disk09     rootdg    online
fabric_7     sliced    disk10     rootdg    online
fabric_8     sliced    -          -         error
fabric_9     sliced    -          -         error
fabric_10    sliced    -          (rootdg)  online
fabric_11    sliced    -          (rootdg)  online
fabric_12    sliced    -          (rootdg)  online
fabric_13    sliced    -          (rootdg)  online
fabric_14    sliced    -          (rootdg)  online
fabric_15    sliced    -          (rootdg)  online

Here is the ASL (there is no APM prior to Veritas 4.0). vxddladm listsupport output, snipped for brevity:

libvxDGCclariion.so    A/P    DGC    CLARiiON

The c10 and c16 controllers are the paths to the EMC:

# vxdmpadm listctlr all
CTLR-NAME   ENCLR-TYPE     STATE      ENCLR-NAME
=====================================================
c1          OTHER_DISKS    ENABLED    OTHER_DISKS
c10         OTHER_DISKS    ENABLED    OTHER_DISKS
c16         OTHER_DISKS    ENABLED    OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c10
NAME                       STATE     PATH-TYPE   DMPNODENAME   ENCLR-TYPE    ENCLR-NAME
======================================================================
c10t500601613B241045d7s2   ENABLED   -           fabric_0      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d6s2   ENABLED   -           fabric_1      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d5s2   ENABLED   -           fabric_2      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d4s2   ENABLED   -           fabric_3      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d3s2   ENABLED   -           fabric_4      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d2s2   ENABLED   -           fabric_5      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d1s2   ENABLED   -           fabric_6      OTHER_DISKS   OTHER_DISKS
c10t500601613B241045d0s2   ENABLED   -           fabric_7      OTHER_DISKS   OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c16
NAME                       STATE     PATH-TYPE   DMPNODENAME   ENCLR-TYPE    ENCLR-NAME
======================================================================
c16t500601603B241045d7s2   ENABLED   -           fabric_8      OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d6s2   ENABLED   -           fabric_9      OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d5s2   ENABLED   -           fabric_10     OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d4s2   ENABLED   -           fabric_11     OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d3s2   ENABLED   -           fabric_12     OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d2s2   ENABLED   -           fabric_13     OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d1s2   ENABLED   -           fabric_14     OTHER_DISKS   OTHER_DISKS
c16t500601603B241045d0s2   ENABLED   -           fabric_15     OTHER_DISKS   OTHER_DISKS

Thanks for any help
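A note for readers with the same symptom: when CLARiiON LUNs appear under OTHER_DISKS instead of an EMC enclosure, a common first check is whether the DGC ASL is actually present and claiming the devices. The first, second and fourth commands below are read-only; vxdctl enable re-runs device discovery and is generally considered non-disruptive, but treat it as a change on a production host:

# vxddladm listsupport libname=libvxDGCclariion.so   # is the CLARiiON ASL installed and listed?
# vxdisk list fabric_2                               # record numpaths for a before/after comparison
# vxdctl enable                                      # re-run discovery so the ASL can claim the LUNs
# vxdmpadm listenclosure all                         # if claimed, the EMC LUNs should no longer sit under OTHER_DISKS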
DMP, MPIO, MSDSM, SCSI-3 and ALUA configuration settings
OK, I'm somewhat confused, and the more I read the more confused I think I'm getting. I'm going to be setting up a 4-node active/active cluster for SQL. All of the nodes will have 2 separate Fibre Channel HBAs connecting through 2 separate switches to our NetApp. The NetApp supports ALUA, so the storage guy wants to use it. It is my understanding that I need to use SCSI-3 to get this to work. Sounds good to me so far. My question is: do I need to use any of Microsoft's MPIO or the MSDSM? This is on Windows 2008 R2. Or does Veritas take care of all of that? Also, I read that in a new cluster setup you should connect only 1 path first, then install, then connect the 2nd path and let Veritas detect and configure it. Is that accurate? Any info or directions you can point me to will be greatly appreciated. Thanks!
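While sorting out which stack owns multipathing on the Windows side, the built-in mpclaim tool can report what Microsoft MPIO/MSDSM has claimed; this is a read-only check and assumes the Windows MPIO feature is installed (an empty list would suggest another DSM, such as the Veritas one, owns the paths):

C:\> mpclaim -s -d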
dmp 5.1SP1PR2 installation prevents RHEL 6.4 server from booting
Hello All, the DMP 5.1SP1PR2 installation completes, but the installer is unable to start it. I am getting the following error:

Veritas Dynamic Multi-Pathing Startup did not complete successfully
vxdmp failed to start on ddci-oml1
vxio failed to start on ddci-oml1
vxspec failed to start on ddci-oml1
vxconfigd failed to start on ddci-oml1
vxesd failed to start on ddci-oml1
It is strongly recommended to reboot the following systems:
ddci-oml1
Execute '/sbin/shutdown -r now' to properly restart your systems
After reboot, run the '/opt/VRTS/install/installdmp -start' command to start Veritas Dynamic Multi-Pathing
installdmp log files, summary file, and response file are saved at:

Even after the reboot it does not start. I installed the sfha-rhel6_x86_64-5.1SP1PR2RP4 rolling patch and rebooted, and now the server gets stuck during the boot process. I am seeing the following error messages on the console:

vxvm:vxconfigd: V-5-1-7840 cannot open /dev/vx/config: Device is already open
ln: creating symbolic link '/dev/vx/rdmp/dmp': File exists
/bin/mknod: '/dev/vx/config': File exists
...
Loading vxdmp module
Loading vxio module
Loading vxspec module

The server is stuck at this point, and I also cannot take it to single-user mode for troubleshooting. Kindly help me with this issue. Thanks and regards, Uv
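For others debugging this class of failure, the usual first question is whether the packaged vxdmp/vxio kernel modules actually match the running RHEL kernel. These checks are read-only and assume you can reach a shell (for example from rescue media):

# uname -r                              # kernel the box is booting
# find /lib/modules -name 'vxdmp*'      # which kernel trees contain a Veritas vxdmp module
# lsmod | grep -E 'vxdmp|vxio|vxspec'   # which Veritas modules, if any, are currently loaded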
How Can I Disable DMP?
I have a customer who installed HDLM along with Storage Foundation 5 on a Solaris 10 server. Not surprisingly, they had issues. Now they want to keep HDLM and Volume Manager but disable DMP. While I do not agree with this approach, I've been given the task of disabling DMP. I know I can use vxdmpadm exclude vxdmp to exclude controllers from being considered by DMP. My question is: is there a way to completely disable DMP altogether, or is excluding disks the best way to go? I've found references on the Symantec support site that discuss disabling DMP for earlier versions of Storage Foundation, but nothing that relates to SF 5.
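Before changing anything, it can help to record how DMP currently sees the HDLM-managed paths. The first two commands are read-only; vxdiskadm is an interactive menu (on SF 5.x it includes options for preventing multipathing and suppressing devices from VxVM's view), so nothing changes until a selection is confirmed. This is a pointer on where to look, not an endorsement of disabling DMP:

# vxdmpadm listctlr all         # controllers DMP currently manages
# vxdmpadm listenclosure all    # how the HDLM-presented LUNs are being claimed
# vxdiskadm                     # interactive menu with suppress/prevent-multipathing options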
Unable to boot RHEL 6.3 server after installing DMP 5.1 SP1 RP2
Dear Gents, I have recently installed DMP 5.1 SP1 RP2 on an RHEL 6.3 x86_64 OS. After the installation, when I rebooted, the system hung during boot. I found the following error message on the console:

"This release of VxVM does not contain any modules which are suitable for your 2.6.32-279.el6.x86_64 kernel. Error reading module 'vxdmp'; see documentation."

I found that there is another release, SFHA 5.1 SP1PR2RP4, and installed that too to verify, but no luck. In my environment I need to install DMP 5.1 SP1 RP2 on an RHEL 6.3 x86_64 OS. Any help is much appreciated. Thanks, Uv
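One workaround that is sometimes used to get such a host bootable again, assuming the root filesystem can be reached from rescue media: create the VxVM install-db flag file so the startup scripts skip driver configuration at boot, then resolve the kernel/module mismatch and remove the flag. This is a sketch of a recovery step, not a fix for the underlying package/kernel problem:

# touch /etc/vx/reconfig.d/state.d/install-db   # VxVM init scripts treat the stack as unconfigured; remove the file once the right modules are installed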
DMP devices displayed in the format output
Hi all, just trying to understand the way DMP manages devices. We want to use DMP devices inside zpools, so I enabled dmp_native_support and disabled MPxIO. Everything seems to work the right way. However, in format (Solaris 10) I see DMP devices with strange names:

13. c0d11018 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100GB-00
    /virtual-devices@100/channel-devices@200/disk@2b0a
14. c0d11035 <HP-OPEN-V-SUN-6007 cyl 4094 alt 2 hd 15 sec 512> 2GB-03
    /virtual-devices@100/channel-devices@200/disk@2b1b
15. c0d11111 <HP-OPEN-V-SUN-6007 cyl 271 alt 2 hd 15 sec 512> 1GB-02
    /virtual-devices@100/channel-devices@200/disk@2b67
16. c0d12111 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100G-loc
    /virtual-devices@100/channel-devices@200/disk@2f4f
17. hp_xp24k0_022bs6 <HP-OPEN-V*2-SUN-6007 cyl 10921 alt 2 hd 15 sec 512> 40GB
    /dev/vx/rdmp/hp_xp24k0_022bs6
18. hp_xp24k0_022ds0 <HP-OPEN-V-SUN-6007 cyl 5459 alt 2 hd 15 sec 512> 20GB
    /dev/vx/rdmp/hp_xp24k0_022ds0
19. hp_xp24k0_0128s5 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100G-loc
    /dev/vx/rdmp/hp_xp24k0_0128s5
20. hp_xp24k0_0231s4 <HP-OPEN-V-SUN-6007 cyl 4094 alt 2 hd 15 sec 512> 2GB-03
    /dev/vx/rdmp/hp_xp24k0_0231s4
21. hp_xp24k0_0531s4 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100GB-01
    /dev/vx/rdmp/hp_xp24k0_0531s4
22. hp_xp24k0_0635s4 <HP-OPEN-V-SUN-6007 cyl 544 alt 2 hd 15 sec 512> 2GB-00
    /dev/vx/rdmp/hp_xp24k0_0635s4
23. hp_xp24k0_0636s7 <HP-OPEN-V-SUN-6007 cyl 544 alt 2 hd 15 sec 512> 2GB-01
    /dev/vx/rdmp/hp_xp24k0_0636s7
24. hp_xp24k0_0637s0 <HP-OPEN-V-SUN-6007 cyl 544 alt 2 hd 15 sec 512> 2GB-02
    /dev/vx/rdmp/hp_xp24k0_0637s0

The names end with a slice number that is different for each disk. Moreover, that partition is not present on the disk. For instance:

# prtvtoc /dev/vx/rdmp/hp_xp24k0_0636s7 | grep -v ^*
0      2    00        23040   4154880   4177919
2      5    01            0   4177920   4177919

I tried a reconfiguration boot, but nothing changed. The question is: how does DMP assign names to disks, and is there any way to correct the format output? Many thanks, regards, Mauro Vicini
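A few read-only queries that show where those names come from when dmp_native_support is in use; they report the current tunable and naming settings without changing anything:

# vxdmpadm gettune dmp_native_support   # confirm native support is enabled
# vxddladm get namingscheme             # enclosure-based vs OS-native naming and persistence settings
# vxdisk -e list                        # maps each DMP/enclosure-based name to its underlying OS device name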