Missing disks and reboot won't solve it [Solved]
I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual Fibre Channel adapters are logged in, but the LPAR is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE   STATE    ENCLR_NAME    PATH_COUNT
=========================================================
fscsi2     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi3     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi4     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi5     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi6     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi7     Hitachi_VSP  ENABLED  hitachi_vsp0  44

Above you can see that fscsi0 and fscsi1, which the OS sees, are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and the LPAR, and that does not seem to help. FWIW, I deleted the disks that were in Defined state. Usually, when MPIO is in use and we lose a path, deleting the disks and the virtual Fibre Channel adapter and then running cfgmgr solves the issue, but that does not help here.
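
A minimal rescan sketch, assuming the missing LUNs really are zoned and mapped to the client adapters behind fscsi0 and fscsi1 (adapter names taken from the lsdev output above): cfgmgr rebuilds the ODM device entries, and vxdctl enable / vxdisk scandisks then ask VxVM's device discovery layer to pick the paths up.

# Rebuild the device entries under the two adapters, then rescan in VxVM.
cfgmgr -l fscsi0        # reconfigure child devices of fscsi0
cfgmgr -l fscsi1        # reconfigure child devices of fscsi1
lsdev -Cc disk          # confirm the hdisks are back in Available state
vxdctl enable           # restart device discovery inside vxconfigd
vxdisk scandisks        # force a full VxVM device scan
vxdmpadm listctlr       # fscsi0/fscsi1 should now be listed if paths exist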

DMP 5.1SP1PR2 installation prevents RHEL 6.4 server from booting [Solved]

Hello all, the DMP 5.1SP1PR2 installation completes, but it is unable to start. I am getting the following error:

Veritas Dynamic Multi-Pathing Startup did not complete successfully
vxdmp failed to start on ddci-oml1
vxio failed to start on ddci-oml1
vxspec failed to start on ddci-oml1
vxconfigd failed to start on ddci-oml1
vxesd failed to start on ddci-oml1
It is strongly recommended to reboot the following systems:
ddci-oml1
Execute '/sbin/shutdown -r now' to properly restart your systems
After reboot, run the '/opt/VRTS/install/installdmp -start' command to start Veritas Dynamic Multi-Pathing
installdmp log files, summary file, and response file are saved at:

After the reboot it still does not start. I then installed the sfha-rhel6_x86_64-5.1SP1PR2RP4 rolling patch and rebooted, and now the server gets stuck during the boot process. I am seeing the following error messages on the console:

vxvm:vxconfigd: V-5-1-7840 cannot open /dev/vx/config: Device is already open
ln: creating symbolic link '/dev/vx/rdmp/dmp': File exists
/bin/mknod: '/dev/vx/config': File exists
...
Loading vxdmp module
Loading vxio module
Loading vxspec module

The server hangs at this point, and I cannot get it into single-user mode for troubleshooting. Kindly help me with this issue.

Thanks and regards,
Uv
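
One way to at least get the box bootable again for troubleshooting (a sketch, assuming the standard RHEL 6 rescue environment that mounts the installed root at /mnt/sysimage; install-db is the flag file the VxVM init scripts check in order to skip starting vxconfigd):

# Boot the RHEL 6 install media and choose "Rescue installed system".
chroot /mnt/sysimage                          # enter the installed root
touch /etc/vx/reconfig.d/state.d/install-db   # flag file: skip VxVM startup at boot
exit
reboot
# Once the server is up, remove the flag and start DMP by hand to see the real error:
rm /etc/vx/reconfig.d/state.d/install-db
/opt/VRTS/install/installdmp -start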

VxDMP and SCSI ALUA handler (scsi_dh_alua) [Solved]

Hi, I have a question about the SCSI ALUA handler and VxDMP on Linux. Linux has the scsi_dh_alua handler, which can handle the ALUA-related check conditions sent by the target controllers; these are handled by the SCSI layer itself and are not propagated to the upper layers. Does VxDMP have anything similar to handle the ALUA-related errors reported by the SCSI layer, or does it depend on the scsi_dh_alua handler to deal with the ALUA check conditions from the target and retry at the SCSI layer itself?

Thanks,
Inbaraj.
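
A read-only way to see what is in play on a given host (a diagnostic sketch: vxdmpadm listapm lists the array policy modules DMP has loaded, and the sysfs dh_state attribute shows whether a kernel device handler such as scsi_dh_alua is attached to a path; sdc is just an example device name):

vxdmpadm listapm all                  # APMs DMP has loaded and whether each is Active
cat /sys/block/sdc/device/dh_state    # kernel device-handler state for one subpath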

Unable to boot RHEL 6.3 server after installing DMP 5.1 SP1 RP2 [Solved]

Dear gents, I recently installed DMP 5.1 SP1 RP2 on a RHEL 6.3 x86_64 OS. After the installation, when I rebooted, the system hung during boot. I found the following error message on the console:

"This release of VxVM does not contain any modules which are suitable for your 2.6.32-279.el6.x86_64 kernel. Error reading module 'vxdmp'. See documentation."

I found that there is a later release, SFHA 5.1 SP1PR2RP4, and installed that as well to verify, but no luck. In my environment I need to install DMP 5.1 SP1 RP2 on a RHEL 6.3 x86_64 OS. Any help is much appreciated.

Thanks,
Uv
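
A sketch of the basic compatibility check (the module directory below is an assumption and varies by release; the underlying point is that DMP ships prebuilt kernel modules, so the installed package/patch level must include a vxdmp built for the exact running kernel):

uname -r                                         # running kernel, e.g. 2.6.32-279.el6.x86_64
rpm -qa | grep -i VRTS                           # installed Veritas packages and patch levels
ls /lib/modules/$(uname -r)/veritas 2>/dev/null  # assumed module location; varies by release
modinfo vxdmp | grep vermagic                    # which kernel the vxdmp module was built against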

How to write udev rules for persistent device names for DMP subpaths?

Hi experts, I'm using SFRAC 6.0.3 on Oracle Linux 5.8 with Oracle storage (2 HBA cards, 4 paths) and would like to make the DMP subpath names persistent across reboots. I know that for a single path I can write the following udev rule:

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="<scsi_id>", SYMLINK+="<your_disk_name>%n"

But it doesn't work for multipath... Can anyone help with this issue? Thanks in advance.

Thanks and regards,
Kiluwa
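
A sketch of why the single-path rule breaks down, and one possible variant (the rule file name and symlink prefix are hypothetical): with four paths, the rule fires once per sd device, but scsi_id returns the same WWID for all of them, so all four paths try to claim the same symlink; embedding the kernel name %k gives each subpath its own link.

# /etc/udev/rules.d/99-dmp-subpaths.rules  (hypothetical file name)
# Same match as the single-path rule, but %k (kernel name, e.g. sdc)
# makes every subpath's symlink unique instead of four colliding ones:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", \
  RESULT=="<scsi_id>", SYMLINK+="dmp-subpath/<your_disk_name>-%k"

Note that %k is unique but not stable across reboots. If what is actually needed is one stable name per LUN, the DMP metanode under /dev/vx/dmp already uses persistent enclosure-based naming and is usually the better thing to point applications at.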

Hostmode setting on Sun STK 6180 after upgrade no longer supported [Solved]

Good afternoon, we are in the process of upgrading our Sun STK 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we ran into the following challenge:

Solaris Issues: Solaris with Veritas DMP or other host type
Bug 15840516: With the release of firmware 07.84.44.10, the host type 'Solaris (with Veritas DMP or other)' is no longer a valid host type.
Workaround: If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

What host type should we choose after the upgrade? The connected systems are running a Veritas cluster with DMP. Please advise,
Remco

vxddladm shows DMP state as not active

Good morning, I have an issue that I can't seem to solve and I'm in dire need of assistance. I have Veritas cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to two hosts (no switch!).

Controller port 1A is connected to host A
Controller port 1B is connected to host A
Controller port 2A is connected to host B
Controller port 2B is connected to host B

DMP is taking care of the multipathing and looks OK, however I see that the state is set to not active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME         ASL_VERSION        Min. VXVM version
=====================================================
libvxlsiall.so   vm-5.1.100-rev-1   5.1

Output of vxdmpadm listapm dmpEngenio:

Filename:              dmpEngenio
APM name:              dmpEngenio
APM version:           1
Feature:               VxVM
VxVM version:          51
Array Types Supported: A/PF-LSI
Depending Array Types: A/P
State:                 Not-Active

Output from vxdctl mode:

mode: enabled

Both hosts show the same result: State: Not-Active. So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to be sure that any commands given will not disrupt production; I will schedule downtime if needed. Can someone assist me? Many thanks!
Remco
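
A read-only sketch to narrow this down first (the premise is an assumption worth verifying: an APM normally reports Not-Active until an array of a type it claims is actually configured under it, so the interesting question is which array type DMP bound the 6180 to; the enclosure name below is site-specific):

vxdmpadm listapm all                      # all APMs and their Active/Not-Active state
vxdmpadm listenclosure all                # enclosure name, array type, array status
vxdmpadm getsubpaths enclosure=<enclr>    # per-path state for the 6180
vxdctl enable                             # online rediscovery; generally safe, but test in a window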

DMP, MPIO, MSDSM, SCSI-3 and ALUA configuration settings [Solved]

OK, I'm somewhat confused, and the more I read the more confused I think I'm getting. I'm going to be setting up a 4-node active/active cluster for SQL. All of the nodes will have two separate Fibre Channel HBAs connecting through two separate switches to our NetApp. The NetApp supports ALUA, so the storage guy wants to use it. It is my understanding that I need to use SCSI-3 to get this to work. Sounds good to me so far. My question is: do I need to use any of Microsoft's MPIO or the MSDSM? This is on Windows 2008 R2. Or does Veritas take care of all of that? Also, I read that in a new cluster setup you should connect only one path first, then install, then connect the second path and let Veritas detect and configure it. Is that accurate? Any info or directions you can point me to will be greatly appreciated. Thanks!

Why does 1 subpath in multipathing use slice 2 and not the other? [Solved]

Wondering why the subpaths of disk 2 show slice 2 (s2) whereas the other subpaths show only the disk? Does this mean that this disk has been formatted/labeled differently? They should all be labeled EFI.

[2250]$ vxdisk path
SUBPATH                   DANAME        DMNAME  GROUP  STATE
c0t0d0s2                  disk_0        -       -      ENABLED
c3t500601683EA04599d0     storageunit1  -       -      ENABLED
c3t500601613EA04599d0     storageunit1  -       -      ENABLED
c2t500601603EA04599d0     storageunit1  -       -      ENABLED
c2t500601693EA04599d0     storageunit1  -       -      ENABLED
c3t500601613EA04599d2s2   storageunit2  -       -      ENABLED
c3t500601683EA04599d2s2   storageunit2  -       -      ENABLED
c2t500601693EA04599d2s2   storageunit2  -       -      ENABLED
c2t500601603EA04599d2s2   storageunit2  -       -      ENABLED
c3t500601613EA04599d1     storageunit3  -       -      ENABLED
c3t500601683EA04599d1     storageunit3  -       -      ENABLED
c2t500601603EA04599d1     storageunit3  -       -      ENABLED
c2t500601693EA04599d1     storageunit3  -       -      ENABLED

When I attempt to initialise the LUN I get this error:

[2318]$ vxdisksetup -i storageunit2
vxedvtoc: No such device or address

I can see from this output that two of its paths are disabled:

[2303]$ vxdmpadm -v getdmpnode all
NAME          STATE    ENCLR-TYPE    PATHS  ENBL  DSBL  ENCLR-NAME     SERIAL-NO                         ARRAY_VOL_ID
======================================================================================================================
disk_0        ENABLED  Disk          1      1     0     disk           600508B1001C0CB883CB65D7A794AC54  -
storageunit1  ENABLED  EMC_CLARiiON  4      4     0     emc_clariion0  60060160D2202F004C1C9AB085F2E111  1
storageunit2  ENABLED  EMC_CLARiiON  4      2     2     emc_clariion0  60060160D2202F00D6FACBE385F2E111  5
storageunit3  ENABLED  EMC_CLARiiON  4      4     0     emc_clariion0  60060160D2202F00EE397F2986F2E111  9
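
A diagnostic sketch (the stale-label idea is an assumption: an s2 suffix on the subpaths usually means the paths were discovered while the LUN still carried an SMI/VTOC label, so if the LUN was later relabeled EFI the old device entries no longer match, which would also explain the vxedvtoc failure). Everything below is read-only except vxdisk rm, which removes only VxVM's record of the device, not any data:

prtvtoc /dev/rdsk/c3t500601613EA04599d2s2   # read-only: what label does Solaris actually see?
vxdisk rm storageunit2                      # drop the stale DMP device record (metadata only)
devfsadm -Cv                                # clean up dangling /dev/dsk and /dev/rdsk links
vxdctl enable                               # rediscover; the subpaths should return without s2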