Missing disks and reboot won't solve it
I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual fibre channel adapters are logged in, but on the LPAR it is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME   ENCLR_TYPE    STATE     ENCLR_NAME     PATH_COUNT
=========================================================================
fscsi2      Hitachi_VSP   ENABLED   hitachi_vsp0   44
fscsi3      Hitachi_VSP   ENABLED   hitachi_vsp0   44
fscsi4      Hitachi_VSP   ENABLED   hitachi_vsp0   44
fscsi5      Hitachi_VSP   ENABLED   hitachi_vsp0   44
fscsi6      Hitachi_VSP   ENABLED   hitachi_vsp0   44
fscsi7      Hitachi_VSP   ENABLED   hitachi_vsp0   44

Above you can see that fscsi0 and fscsi1, which are seen by the OS, are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and the LPAR, and that doesn't seem to help. FWIW, I deleted the disks that were in Defined state. Usually, when MPIO is being used and we lose a path, deleting the disks and the virtual fibre channel adapter and running cfgmgr solves the issue, but that doesn't seem to help here.
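For reference, a minimal rescan sequence that might be worth trying here, assuming the missing LUNs really are mapped down fscsi0 and fscsi1 (the adapter and controller names below simply mirror the output above):

# cfgmgr -vl fscsi0                   (rediscover child devices behind the adapter; repeat for fscsi1)
# lsdev -Cc disk                      (confirm the OS now shows hdisks on the 01-T1/02-T1 locations)
# vxdctl enable                       (have VxVM/DMP rescan the OS device tree)
# vxdisk scandisks                    (device discovery without a full vxconfigd rescan)
# vxdmpadm getsubpaths ctlr=fscsi0    (check whether DMP now reports paths on that controller)

If cfgmgr brings the hdisks back but DMP still does not list the two controllers, that would point at device discovery/claiming (DDL/ASL) rather than at the adapters themselves.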
VxDMP and SCSI ALUA handler (scsi_dh_alua)

Hi, I have a question on the SCSI ALUA handler and VxDMP on Linux. Linux has a scsi_dh_alua handler, which can handle the ALUA-related check conditions sent from target controllers; these are handled by the SCSI layer itself and are not propagated to the upper layers. Do we have anything similar in VxDMP to handle the ALUA-related errors reported by the SCSI layer, or does it depend on the scsi_dh_alua handler to handle the ALUA-related check conditions from the target and retry at the SCSI layer itself?

Thanks,
Inbaraj.
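Not a definitive answer, but one way to see how DMP is treating a given array on a host: in recent VxVM releases the enclosure listing shows the array type DMP claimed (e.g. ALUA vs A/A), which indicates whether DMP's own array policy module is driving the ALUA behaviour for that array:

# vxdmpadm listenclosure all     (ARRAY_TYPE column shows how the array was claimed, e.g. ALUA)
# vxddladm listsupport all       (which ASLs are installed and which one claims the array)
# vxdmpadm listapm all           (which array policy modules are loaded and active)

Whether the ALUA check conditions are retried inside DMP or left to scsi_dh_alua will depend on the release and on the APM claiming the array, so it is worth checking the above on your exact version.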
Hostmode setting on Sun STK 6180 after upgrade no longer supported

Good afternoon,

We are in the process of upgrading our Sun STK 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we run into the following challenge:

Solaris Issues
Solaris with Veritas DMP or other host type
Bug 15840516: With the release of firmware 07.84.44.10, the host type 'Solaris (with Veritas DMP or other)' is no longer a valid host type.
Workaround: If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

What host type should we choose after the upgrade? The connected systems are running Veritas cluster with DMP.

Please advise,
Remco
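One thing that may help when deciding, and when verifying afterwards, is to record how DMP currently claims the 6180 before the firmware upgrade and compare again after changing the host type; these are read-only checks:

# vxddladm listsupport libname=libvxlsiall.so   (the ASL claiming the LSI/Engenio-based arrays)
# vxdmpadm listenclosure all                    (enclosure name, type and status as seen by DMP)
# vxdmpadm getsubpaths                          (per-path state before and after the change)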
vxddladm shows DMP state as not active

Good morning,

I have an issue that I can't seem to solve and I'm in dire need of assistance. I have Veritas Cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to the 2 hosts (no switch!):

Controller port 1A is connected to host A
Controller port 1B is connected to host A
Controller port 2A is connected to host B
Controller port 2B is connected to host B

DMP is taking care of the multipathing and it looks OK; however, I see that the state is set to not active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME         ASL_VERSION        Min. VXVM version
==========================================================
libvxlsiall.so   vm-5.1.100-rev-1   5.1

The output of the vxdmpadm list dmpEngenio:

Filename:               dmpEngenio
APM name:               dmpEngenio
APM version:            1
Feature:                VxVM
VxVM version:           51
Array Types Supported:  A/PF-LSI
Depending Array Types:  A/P
State:                  Not-Active

Output from vxdctl mode:

mode: enabled

Both hosts show the same result, state: Not-Active.

So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to make sure that any commands given will not disrupt production; I will schedule downtime if that is needed. Can someone assist me?

Many thanks!
Remco
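For what it's worth, a few read-only checks (safe on a production system) can show whether the Not-Active state actually matters here, i.e. whether the 6180 is being claimed with the A/PF-LSI array type that the dmpEngenio APM provides:

# vxdmpadm listapm all           (all APMs, their array types and Active/Not-Active state)
# vxdmpadm listenclosure all     (which array type the 6180 enclosure was actually claimed as)
# vxdmpadm getsubpaths           (per-path state; all expected paths should show ENABLED)

As far as I understand it, an APM is normally reported Active only while an enclosure claimed with its array type is present, so the listenclosure output is the interesting bit.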
SF4.1 VxDMP disables dmpnode on single path failure

This is more of an informational question, since I do not assume anyone has a solution, but just in case, I would be thankful for some enlightenment. I am forced to use an old version, SF 4.1 MP4, in this case on Linux SLES 9. For whatever reason, DMP does not work with the JBOD I have added. The JBOD (Promise VTrak 610fD) is ALUA, so half of all the available paths are always standby, and that is OK. But when DMP in 4.1 sees one of the 4 paths not working, it disables the whole DMP node, rendering the disk unusable:

Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-148 i/o error occured on path 8/0x70 belonging to dmpnode 201/0x10
VxVM vxdmp V-5-0-148 i/o error analysis done on path 8/0x70 belonging to dmpnode 201/0x10
VxVM vxdmp V-5-0-0 SCSI error opcode=0x28 returned status=0x1 key=0x2 asc=0x4 ascq=0xb on path 8/0x70
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x50 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x70 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x10 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-112 disabled path 8/0x30 belonging to the dmpnode 201/0x10
Jun 12 15:35:01 kernel: VxVM vxdmp V-5-0-111 disabled dmpnode 201/0x10
Jun 12 15:35:01 kernel: Buffer I/O error on device VxDMP2, logical block 0

Currently my only solution seems to be to stick with Linux DM-Multipathing and add the disks as foreign devices.
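In case it helps anyone hitting the same thing: on that vintage the usual knob is the JBOD table in DDL, although a JBOD entry only gives generic active/active handling, so it will most likely not make 4.1 ALUA-aware, which is exactly the limitation described above. The VID/PID values below are illustrative only; take the real ones from the SCSI inquiry data of the VTrak:

# vxddladm listjbod                          (JBOD entries currently defined)
# vxddladm addjbod vid=Promise pid=VTrak     (example values; use the array's actual inquiry strings)
# vxdctl enable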
SF5.1 VxDMP caches LUN size?

I recently recreated a LUN with a different size but the same LUN ID (Linux SLES 11), and when trying to initialize it as a new disk for VxVM it keeps sticking to the old size, whatever I do (I have tried destroying and reinitializing in different formats, and growing, many times):

# vxdisk list isar2_sas_1
Device:    isar2_sas_1
public:    slice=5 offset=65792 len=167700688 disk_offset=315
Multipathing information:
numpaths:  8
sdu     state=disabled
sdt     state=enabled
sds     state=disabled
sdam    state=disabled
sdan    state=enabled
sdb     state=enabled
sda     state=disabled
sdv     state=enabled

...but the main reason seems to be that DMP is somehow sticking to the old size:

# fdisk -l /dev/vx/dmp/isar2_sas_1
Disk /dev/vx/dmp/isar2_sas_1: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x55074b8b

pepper:/etc/vx # fdisk -l /dev/sdt
Disk /dev/sdt: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x55074b8b

Any hints are greatly appreciated :)
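A sketch of what usually gets a new capacity picked up on SLES, assuming the disk is not currently in a disk group (the sdX names are the paths from the vxdisk list output above):

# echo 1 > /sys/block/sdt/device/rescan     (re-read capacity at the SCSI layer; repeat for every path: sda, sdb, sds, sdu, sdv, sdam, sdan)
# blockdev --getsize64 /dev/sdt             (confirm each OS path now reports the new size)
# vxdisk rm isar2_sas_1                     (drop the stale VxVM disk access record)
# vxdctl enable                             (rediscover; the dmpnode should be re-created with the new size)

If DMP still reports the old size after that, restarting vxconfigd (vxconfigd -k), or a reboot once the OS paths show the right size, would be the next thing I would try in a maintenance window.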