Missing disks and reboot won't solve it
I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual Fibre Channel adapters are logged in, but on the LPAR it is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE   STATE    ENCLR_NAME    PATH_COUNT
=========================================================================
fscsi2     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi3     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi4     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi5     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi6     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi7     Hitachi_VSP  ENABLED  hitachi_vsp0  44

Above you can see that two of the fscsiX controllers the OS reports (fscsi0 and fscsi1) are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and the LPAR and that does not seem to help. FWIW, I deleted the disks that were in Defined state. Usually when MPIO is in use and we lose a path, deleting the disks and the virtual Fibre Channel adapter and running cfgmgr solves the issue, but that does not seem to help here.
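For reference, here is roughly the recovery sequence I normally use when MPIO loses a path (the hdisk number is just an example), plus the VxVM rescan commands I understand should force DMP to rediscover devices, in case I am missing a step:

# lsdev -Cc disk | grep -i defined     (find any hdisks left in Defined state)
# rmdev -dl hdisk12                    (example hdisk number; remove the Defined disk)
# cfgmgr -l fscsi0                     (rediscover devices on the problem adapter)
# lsdev -p fscsi0                      (confirm the OS now shows hdisks under fscsi0)
# vxdctl enable                        (have VxVM/DMP rescan the OS device tree)
# vxdisk scandisks new                 (pick up any newly discovered devices)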
SFHACFS vs Hitachi-VSP & Host mode option 22
Hi! I'm installing several new SFHACFS clusters, and during failover testing I run into an annoying problem: when I fence one node of the cluster, DMP logs a high number of path down/path up events, which in the end causes the disks to disconnect even on the other, active nodes. We found out that our disks had been exported without host mode option 22, so we fixed that on the storage. Even after this the clusters behaved the same. Later I read somewhere on the internet that it is a good idea to relabel the disks, so I requested new disks from storage and used vxevac to move the data onto them. This fixed two of our clusters, but the other two still behave the same. Has anybody experienced anything similar? Do you know anything I can test or check on the servers to determine the difference? The only difference between the environments is that the non-working clusters have their disks mirrored from two storage systems, while the working ones have data disks from only one storage.
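For anyone willing to help, here is what I can gather from the nodes for comparison (the enclosure and disk group names below are just examples from my setup):

# vxdmpadm gettune all                          (compare DMP tunables across the clusters)
# vxdmpadm listenclosure all                    (check how each array is claimed on every node)
# vxdmpadm getsubpaths enclosure=hitachi_vsp0   (example enclosure name; look for flapping or disabled paths)
# vxdisk -e list                                (map each disk back to the storage system it comes from)

The vxevac I ran was of this form (disk group and disk media names are examples):

# vxevac -g datadg olddisk01 newdisk01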
VxDMP on top of RDAC/MPP
Hiya, I have a setup that I need to 'fix'... Currently the servers have RDAC/MPP as the primary multipathing driver; however, for whatever reason, they also have Veritas Volume Manager sitting on top of it! MPP is handling the pathing and presenting VxDMP with one 'virtual' path. Below is a vxdisk list example.

Device:    disk_0
devicetag: disk_0
type:      auto
clusterid: x_1
disk:      name=x1-1 id=x2
group:     name=xdg id=x1
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig shared autoimport imported
pubpaths:  block=/dev/vx/dmp/disk_0s3 char=/dev/vx/rdmp/disk_0s3
guid:      -
udid:      IBM%5FVirtualDisk%5FDISKS%5F600A0B80006E0140000035F24C090009
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=3 offset=65792 len=1140661648 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1299520130 seqno=0.29
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths: 1
sdb state=enabled

I am going to remove the MPP layer so that VxDMP can take over; my question is how this will affect my VxVM config. I'm hoping that on reboot VxVM will pick up the new paths, but will this affect the disk groups that sit on top of it in any way? Note these are shared LUNs across a cluster. Do I need to worry about the dmppolicy and disk.info files, etc.? I hope the question makes sense! Thanks
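PS: my rough plan for sanity-checking things before and after pulling MPP out looks like this (the disk and disk group names are taken from the example output above, and the DMP node name may well change once DMP sees the real paths):

Before removing MPP:
# vxdisk -o alldgs list                       (record which disks belong to which disk groups)
# vxdisk list disk_0                          (note the udid and the current single-path view)
# vxprint -g xdg -ht                          (keep a copy of the DG/volume layout for comparison)

After MPP is removed and the host rebooted:
# vxdctl enable                               (have VxVM rescan and rebuild its DMP device database)
# vxdmpadm listctlr all                       (check that DMP now sees the real FC controllers)
# vxdmpadm getsubpaths dmpnodename=disk_0     (example node name; confirm multiple paths per disk)
# vxdisk -o alldgs list                       (confirm the shared disk groups are still visible)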