DMP 5.1SP1PR2 installation prevents RHEL 6.4 server from booting
Hello All,

The DMP 5.1SP1PR2 installation completes, but the product fails to start. I get the following error:

    Veritas Dynamic Multi-Pathing Startup did not complete successfully
    vxdmp failed to start on ddci-oml1
    vxio failed to start on ddci-oml1
    vxspec failed to start on ddci-oml1
    vxconfigd failed to start on ddci-oml1
    vxesd failed to start on ddci-oml1
    It is strongly recommended to reboot the following systems:
    ddci-oml1
    Execute '/sbin/shutdown -r now' to properly restart your systems
    After reboot, run the '/opt/VRTS/install/installdmp -start' command to start Veritas Dynamic Multi-Pathing
    installdmp log files, summary file, and response file are saved at:

After the reboot it still does not start. I then installed the sfha-rhel6_x86_64-5.1SP1PR2RP4 rolling patch and rebooted; now the server hangs during the boot process. I see the following error messages on the console:

    vxvm:vxconfigd: V-5-1-7840 cannot open /dev/vx/config: Device is already open
    ln: creating symbolic link '/dev/vx/rdmp/dmp': File exists
    /bin/mknod: '/dev/vx/config': File exists
    ...
    Loading vxdmp module
    Loading vxio module
    Loading vxspec module

The server hangs at this point, and I cannot even get into single-user mode for troubleshooting. Kindly help me with this issue.

Thanks and regards,
Uv
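The "File exists" errors above usually point at stale VxVM device nodes left under /dev/vx from the earlier failed start. Below is a minimal cleanup sketch for a rescue shell. It is an assumption to verify against the Veritas documentation for your release: the /mnt/sysimage mount point is the RHEL rescue default, and the install-db marker file is the documented way to keep vxconfigd from starting at boot.

```shell
# Sketch only: remove stale VxVM device nodes under the given root so the
# boot scripts can recreate them, and drop the install-db marker file,
# which keeps VxVM/vxconfigd from starting at boot while troubleshooting.
clean_vx_nodes() {
    root=$1
    rm -f  "$root/dev/vx/config" "$root/dev/vx/info"
    rm -rf "$root/dev/vx/dmp" "$root/dev/vx/rdmp"
    mkdir -p "$root/etc/vx/reconfig.d/state.d"
    touch "$root/etc/vx/reconfig.d/state.d/install-db"
}

# From a RHEL rescue environment the root FS is typically at /mnt/sysimage:
# clean_vx_nodes /mnt/sysimage
```

Remember to remove the install-db file again once the node boots cleanly, otherwise DMP stays disabled on every subsequent boot.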
How to write udev rules for persistent device names for DMP subpaths?

Hi experts,

I'm using SFRAC 6.0.3 on Oracle Linux 5.8 with Oracle storage (2 HBA cards, 4 paths), and I would like to make the DMP subpath names persistent across reboots. I know that for a single path I can write the following udev rule:

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="<scsi_id>", SYMLINK+="<your_disk_name>%n"

But it doesn't work for multipath. Can anyone help with this issue? Thanks in advance.

Thanks and Regards,
Kiluwa
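One possible approach, sketched below as an assumption to verify on your own setup: every subpath of a LUN returns the same scsi_id, so a RESULT match fires once per sd device, and appending the kernel name with %k gives each subpath a distinct persistent symlink. The <scsi_id> value and the "mylun" name are placeholders.

```
# Hypothetical rule (RHEL/OL 5 era scsi_id syntax): all subpaths of one LUN
# match the same <scsi_id>; %k keeps the per-path symlink names unique,
# e.g. /dev/mylun-sdc, /dev/mylun-sdf, ...
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", RESULT=="<scsi_id>", SYMLINK+="mylun-%k"
```

Note that DMP itself tracks paths through its own metanodes under /dev/vx/dmp, so whether applications should consume these symlinks at all is a separate question.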
Host mode setting on Sun STK 6180 after upgrade no longer supported

Good afternoon,

We are in the process of upgrading our Sun STK 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we ran into the following challenge:

    Solaris Issues
    Solaris with Veritas DMP or other host type
    Bug 15840516—With the release of firmware 07.84.44.10, the host type 'Solaris (with Veritas DMP or other)' is no longer a valid host type.
    Workaround—If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

Which host type should we choose after the upgrade? The connected systems are running a Veritas cluster with DMP.

Please advise,
Remco
DMP, MPIO, MSDSM, SCSI-3 and ALUA configuration settings

OK, I'm somewhat confused, and the more I read the more confused I get. I'm going to be setting up a 4-node active/active cluster for SQL. All of the nodes will have 2 separate Fibre Channel HBAs connecting through 2 separate switches to our NetApp. The NetApp supports ALUA, so the storage guy wants to use it. It is my understanding that I need to use SCSI-3 to get this to work. So far, so good.

My question is: do I need to use Microsoft's MPIO or MSDSM at all? This is on Windows 2008 R2. Or does Veritas take care of all of that? Also, I read that in a new cluster setup you should connect only one path first, install, and then connect the second path and let Veritas detect and configure it. Is that accurate?

Any info or pointers will be greatly appreciated. Thanks!
Why does 1 subpath in multipathing use slice 2 and not the other?

I'm wondering why disk 2 shows slice 2 whereas the other subpaths show only the disk. Does this mean this disk has been formatted/labeled differently? They should all be labeled EFI.

    [2250]$ vxdisk path
    SUBPATH                   DANAME        DMNAME  GROUP  STATE
    c0t0d0s2                  disk_0        -       -      ENABLED
    c3t500601683EA04599d0     storageunit1  -       -      ENABLED
    c3t500601613EA04599d0     storageunit1  -       -      ENABLED
    c2t500601603EA04599d0     storageunit1  -       -      ENABLED
    c2t500601693EA04599d0     storageunit1  -       -      ENABLED
    c3t500601613EA04599d2s2   storageunit2  -       -      ENABLED
    c3t500601683EA04599d2s2   storageunit2  -       -      ENABLED
    c2t500601693EA04599d2s2   storageunit2  -       -      ENABLED
    c2t500601603EA04599d2s2   storageunit2  -       -      ENABLED
    c3t500601613EA04599d1     storageunit3  -       -      ENABLED
    c3t500601683EA04599d1     storageunit3  -       -      ENABLED
    c2t500601603EA04599d1     storageunit3  -       -      ENABLED
    c2t500601693EA04599d1     storageunit3  -       -      ENABLED

When I attempt to initialise the LUN I get this error:

    [2318]$ vxdisksetup -i storageunit2
    vxedvtoc: No such device or address

And I can see from this output that two of its paths are disabled:

    [2303]$ vxdmpadm -v getdmpnode all
    NAME          STATE    ENCLR-TYPE    PATHS  ENBL  DSBL  ENCLR-NAME     SERIAL-NO                         ARRAY_VOL_ID
    ====================================================================================================================
    disk_0        ENABLED  Disk          1      1     0     disk           600508B1001C0CB883CB65D7A794AC54  -
    storageunit1  ENABLED  EMC_CLARiiON  4      4     0     emc_clariion0  60060160D2202F004C1C9AB085F2E111  1
    storageunit2  ENABLED  EMC_CLARiiON  4      2     2     emc_clariion0  60060160D2202F00D6FACBE385F2E111  5
    storageunit3  ENABLED  EMC_CLARiiON  4      4     0     emc_clariion0  60060160D2202F00EE397F2986F2E111  9
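As a quick way to spot nodes in that state, the DSBL column (field 6) of `vxdmpadm getdmpnode all` is non-zero for any DMP node with disabled paths. A small filter sketch, wrapped in a function so it can also be fed saved output (the column positions are taken from the listing above):

```shell
# Print each DMP node that has one or more disabled paths. Skips the two
# header lines; field 6 is the DSBL count in the getdmpnode output format.
flag_disabled() {
    awk 'NR > 2 && $6 > 0 { print $1 " has " $6 " disabled path(s)" }' "$@"
}

# On a live system (assumes vxdmpadm is in PATH):
# vxdmpadm getdmpnode all | flag_disabled
```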
DMP devices displayed in format output

Hi all,

Just trying to understand the way DMP manages devices. We want to use DMP devices inside zpools, so I enabled dmp_native_support and disabled MPxIO. Everything seems to work the right way. However, in format (Solaris 10) I see DMP devices with strange names:

    13. c0d11018 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100GB-00
        /virtual-devices@100/channel-devices@200/disk@2b0a
    14. c0d11035 <HP-OPEN-V-SUN-6007 cyl 4094 alt 2 hd 15 sec 512> 2GB-03
        /virtual-devices@100/channel-devices@200/disk@2b1b
    15. c0d11111 <HP-OPEN-V-SUN-6007 cyl 271 alt 2 hd 15 sec 512> 1GB-02
        /virtual-devices@100/channel-devices@200/disk@2b67
    16. c0d12111 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100G-loc
        /virtual-devices@100/channel-devices@200/disk@2f4f
    17. hp_xp24k0_022bs6 <HP-OPEN-V*2-SUN-6007 cyl 10921 alt 2 hd 15 sec 512> 40GB
        /dev/vx/rdmp/hp_xp24k0_022bs6
    18. hp_xp24k0_022ds0 <HP-OPEN-V-SUN-6007 cyl 5459 alt 2 hd 15 sec 512> 20GB
        /dev/vx/rdmp/hp_xp24k0_022ds0
    19. hp_xp24k0_0128s5 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100G-loc
        /dev/vx/rdmp/hp_xp24k0_0128s5
    20. hp_xp24k0_0231s4 <HP-OPEN-V-SUN-6007 cyl 4094 alt 2 hd 15 sec 512> 2GB-03
        /dev/vx/rdmp/hp_xp24k0_0231s4
    21. hp_xp24k0_0531s4 <HP-OPEN-V-SUN-6007 cyl 27304 alt 2 hd 15 sec 512> 100GB-01
        /dev/vx/rdmp/hp_xp24k0_0531s4
    22. hp_xp24k0_0635s4 <HP-OPEN-V-SUN-6007 cyl 544 alt 2 hd 15 sec 512> 2GB-00
        /dev/vx/rdmp/hp_xp24k0_0635s4
    23. hp_xp24k0_0636s7 <HP-OPEN-V-SUN-6007 cyl 544 alt 2 hd 15 sec 512> 2GB-01
        /dev/vx/rdmp/hp_xp24k0_0636s7
    24. hp_xp24k0_0637s0 <HP-OPEN-V-SUN-6007 cyl 544 alt 2 hd 15 sec 512> 2GB-02
        /dev/vx/rdmp/hp_xp24k0_0637s0

The names end with a slice number that is different for each disk. Moreover, that partition is not even present on the disk. For instance:

    prtvtoc /dev/vx/rdmp/hp_xp24k0_0636s7 | grep -v ^*
    0      2    00      23040   4154880   4177919
    2      5    01          0   4177920   4177919

I tried a reconfiguration boot, but nothing changed. The question is: how does DMP assign names to disks, and is there any way to correct the format output?

Many thanks, regards,
Mauro Vicini
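For reference, DMP's enclosure-based names are built from the array identity and LUN serial, and the device-naming scheme can be queried and changed with vxddladm. A sketch of the standard invocations, run as root on the SF host (whether changing the scheme fixes the stale slice suffixes on this particular setup is an assumption to verify):

```shell
# Show the naming scheme DMP is currently using:
vxddladm get namingscheme

# Use enclosure-based names with the array volume ID appended:
vxddladm set namingscheme=ebn use_avid=yes

# Or revert to OS-native (c#t#d#) names:
# vxddladm set namingscheme=osn
```

After changing the scheme, re-check the format listing to see whether the names have been regenerated.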
Configure DMP on Linux

Hi All,

I'm new to Storage Foundation and Veritas Volume Manager. I need help configuring Veritas Dynamic Multi-Pathing on VCS 5.1, installed on two nodes running Linux 5.3. I have the following disks, and I want Veritas to manage disk_1 and disk_2:

    [root@NODE1 ~]# vxdmpadm getsubpaths
    NAME  STATE[A]    PATH-TYPE[M]  DMPNODENAME  ENCLR-NAME  CTLR  ATTRS
    ================================================================================
    sdb   ENABLED(A)  -             disk_0       disk        c1    -
    sde   ENABLED(A)  -             disk_0       disk        c2    -
    sdc   ENABLED(A)  -             disk_1       disk        c1    -
    sdf   ENABLED(A)  -             disk_1       disk        c2    -
    sdd   ENABLED(A)  -             disk_2       disk        c1    -
    sdg   ENABLED(A)  -             disk_2       disk        c2    -
    sda   ENABLED(A)  -             disk_3       disk        c0    -

I want to configure the paths of disk_1 (sdc and sdf) and of disk_2 (sdd and sdg) as failover pairs, so that the output looks like this:

    NAME  STATE[A]    PATH-TYPE[M]  DMPNODENAME  ENCLR-NAME  CTLR  ATTRS
    ================================================================================
    sdb   ENABLED(A)  -             disk_0       disk        c1    -
    sde   ENABLED(A)  -             disk_0       disk        c2    -
    sdc   ENABLED(A)  -             disk_1       disk        c1    -
    sdf   ENABLED     -             disk_1       disk        c2    -
    sdd   ENABLED(A)  -             disk_2       disk        c1    -
    sdg   ENABLED     -             disk_2       disk        c2    -
    sda   ENABLED(A)  -             disk_3       disk        c0    -

If you have any steps for doing this, or if there is an article, please share it.

Regards.
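A sketch of how this could be done with standard vxdmpadm commands, assuming the goal above (one active path per LUN, the second held in reserve). Note the caveat that iopolicy is set per enclosure, so on the 'disk' JBOD enclosure shown it would affect disk_0 and disk_3 as well; verify this against your release's documentation before applying:

```shell
# Use one path at a time for the enclosure (failover rather than balancing):
vxdmpadm setattr enclosure disk iopolicy=singleactive

# Mark the second path of each LUN as standby, so DMP prefers the c1 paths
# and uses sdf/sdg only after a failure:
vxdmpadm setattr path sdf pathtype=standby
vxdmpadm setattr path sdg pathtype=standby

# Verify the result:
vxdmpadm getsubpaths
```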
DMP default IOPOLICY is different for different kinds of storage

    [root@hostname]/> vxdmpadm getattr enclosure EMC0 iopolicy
    ENCLR_NAME     DEFAULT      CURRENT
    ============================================
    EMC0           MinimumQ     Adaptive

    [root@hostname]/> vxdmpadm getattr enclosure HDS9500-ALUA0 iopolicy
    ENCLR_NAME     DEFAULT      CURRENT
    ============================================
    HDS9500-ALUA0  Round-Robin  Single-Active

When I look at the DMP iopolicy for different storage arrays, I see that the default mechanism differs. How is this default value set, and is there any documentation on it? Also, are there any recommendations from the storage vendor depending on the array's host usage and throughput?
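As far as I understand it (an assumption worth confirming in the DMP administrator's guide), the per-array defaults come from the Array Policy Module (APM) that DMP loads for each array type, which is why an active/active EMC array and an ALUA HDS array get different defaults. The standard commands for inspecting and overriding this, per enclosure:

```shell
# See which APM handles each array type:
vxdmpadm listapm all

# Compare default vs current policy for an enclosure:
vxdmpadm getattr enclosure EMC0 iopolicy

# Override the policy explicitly for one enclosure:
vxdmpadm setattr enclosure EMC0 iopolicy=minimumq
```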