Missing disks and a reboot won't solve it
I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO server, all of the virtual Fibre Channel adapters are logged in, but the LPAR is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME    ENCLR_TYPE     STATE      ENCLR_NAME      PATH_COUNT
=========================================================================
fscsi2       Hitachi_VSP    ENABLED    hitachi_vsp0    44
fscsi3       Hitachi_VSP    ENABLED    hitachi_vsp0    44
fscsi4       Hitachi_VSP    ENABLED    hitachi_vsp0    44
fscsi5       Hitachi_VSP    ENABLED    hitachi_vsp0    44
fscsi6       Hitachi_VSP    ENABLED    hitachi_vsp0    44
fscsi7       Hitachi_VSP    ENABLED    hitachi_vsp0    44

As you can see above, controllers that the OS sees (fscsi0 and fscsi1) are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO server and the LPAR, and that doesn't seem to help. FWIW, I deleted the disks that were in Defined state. Usually when MPIO is in use and we lose a path, deleting the disks and the virtual Fibre Channel adapter and running cfgmgr solves the issue, but that doesn't seem to help here.
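A sequence worth trying here (a hedged sketch, not a confirmed fix; fscsi0/fscsi1 come from the output above, and the assumption that fcs0/fcs1 are their parent adapters is the usual numbering but should be verified with lsdev). First confirm whether the OS itself discovered any hdisks under those adapters, then force VxVM/DDL to rediscover devices:

# lsdev -p fscsi0                 (list child devices under the adapter; if no hdisks appear, the problem is zoning/VIO mapping, not Veritas)
# lsdev -p fscsi1
# cfgmgr -l fcs0                  (re-run discovery against the parent FC adapter; fcs0 assumed to be fscsi0's parent)
# cfgmgr -l fcs1
# vxdctl enable                   (make VxVM rediscover devices and rebuild the DMP database)
# vxdisk scandisks fabric         (alternative: rescan only fabric-attached devices)
# vxdmpadm getsubpaths ctlr=fscsi0

If lsdev shows hdisks under fscsi0/fscsi1 but vxdctl enable still leaves them out of vxdmpadm listctlr, that narrows it to the DDL/ASL layer rather than the OS or SAN.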
Guest - Guest clustering on single Hardware

Hi Community, I found a document mentioning the limitations of guest-to-guest clustering in a virtualized environment (i.e. VMware ESXi) because of the I/O fencing problems in Veritas Cluster Server 5.1. My question is: do we still have this same limitation in v6.1? You can find the article below:
https://www.google.com.tr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.symantec.com%2Fbusiness%2Fsupport%2Fresources%2Fsites%2FBUSINESS%2Fcontent%2Flive%2FTECHNICAL_SOLUTION%2F169000%2FTECH169366%2Fen_US%2FTechNote_VMware_IOFencing.pdf&ei=T0sIVNXKDOXNygPimoHQCw&usg=AFQjCNFmH2mS-AfGYlJ9WPGzoZwDo4yl7Q&bvm=bv.74649129,d.bGQ
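Not an answer to the 6.1 support question itself, but since the limitation in that TechNote centers on SCSI-3 PR support for virtual disks, it can help to confirm which fencing mode a cluster is actually configured for before comparing against the support matrix. These are the standard checks on a cluster node (file path and command are the stock VCS ones):

# cat /etc/vxfenmode      (shows scsi3 vs. customized/CP-server mode, and dmp vs. raw)
# vxfenadm -d             (displays the running fencing mode and current membership)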
AppHA, SRM, VOM VBS Orchestration

Hello All, I'm working through a complex implementation and have hit a couple of snags along the way. Maybe people have worked on various bits of this and can contribute to an overall solution. The object of this endeavor is to create a flexible environment between two regional datacenters that gives us the ability to bring up a complex multi-tiered application pod at either location. Physical servers meeting this HA criteria run VCS geo clusters, some VMs run VCS geo clusters with VVR, while other VMs run AppHA using SRM. They are all logically grouped within two VOM VBS groups. The Home VBS consists of the AppHA VMs and the Service Groups from the VCS nodes in the home datacenter, while the DR VBS consists of the AppHA VMs and the matching DR geo cluster Service Groups. AppHA seems to work as planned: VMs move through SRM between the datacenters, register fine in the SymHA consoles, and report through vCenter with current status. My primary hurdle is control of those AppHA VMs from the VOM VBS. VBS control relies on a control host scan, and when that scan takes upwards of 6 hours to run and update status, the time is better spent doing a manual startup sequence. Again, VOM communication and dynamic updating of the virtualization environment seem to be the issue. The VCS-based elements are not a problem, which I believe is because they are based on a static infrastructure. I have successfully started up and shut down the Home VBS, but once it moves to the opposing datacenter, VOM can't find the VM and the AppHA consoles can't tie updates to records. Has anyone done VBS orchestration of AppHA assets with SRM handling the transfers?

VOM 5.0, VMware 5.1, SRM 5.1, AppHA 6.0, VCS 5.1 SP2 / 6.0.1

Many thanks
Application HA clustering has dropped a disk... :-/

Hi, After I completed the ApplicationHA clustering of SQL 2008 across two Windows 2008 R2 nodes, I found that the SQL installation's virtual Backup disk (M: for reference) remained on the first node after I'd initiated a switch to the second node via the High Availability tab in the vSphere Client. The other disks re-attached to the second node OK. On closer inspection in Cluster Explorer on one of the nodes, I discovered that the Mount Point for that disk was completely missing! I attempted to manually create the Mount Point and brought the resource online in the cluster on the first node, but attempting the switch operation again failed, as the virtual disk failed to re-attach to the second node. How do I fix this without blatting the clustering / SQL installations? Thanks!
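Before changing anything, it can be useful to dump what the configuration actually contains for the M: drive and how it hangs off the disk resources. A hedged sketch using the standard VCS command-line tools that ship with ApplicationHA on Windows (the resource and group names are placeholders to fill in from your own config):

C:\> hagrp -resources <service_group>      (does a mount resource for M: exist in the group at all?)
C:\> hares -dep                            (resource dependencies: what the mount resources depend on)
C:\> hares -display <mount_resource>       (attributes of the suspect mount resource, including its path/volume settings)

Comparing that output against one of the disks that does fail over cleanly usually shows whether the M: resource is missing outright or present but mis-linked.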
Have VCS send email to admin upon fault

We are looking for a way to have VCS send an e-mail alert to an admin when the cluster faults. Looking at some previous solutions, I have come across two possible approaches. First I need to set up the traps to detect the fault:
https://www-secure.symantec.com/connect/forums/vcs-notifierhanotify
Then I need to create a resource to send an alert:
https://www-secure.symantec.com/connect/forums/alerting-feature-applicationha
Is this correct? I might need help with finding the correct trap status for the fault alert. Thanks
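For reference, the usual way to get SMTP alerts out of VCS is the NotifierMngr resource, normally placed in the ClusterService group. A minimal sketch of the main.cf stanza, with a hypothetical SMTP server and recipient address (the recognized severity levels are Information, Warning, Error, and SevereError; a faulted group/resource generates Error-level events):

NotifierMngr ntfr (
    SmtpServer = "smtp.example.com"
    SmtpRecipients = { "admin@example.com" = Error }
    )

SNMP traps are only needed if you want a management station in the loop; for plain e-mail on fault, the SmtpRecipients entry above is sufficient.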
Disable AutoFailOver from stopping services

I have several services that I am monitoring that are set to auto-fail over to a second system. It has come up that occasionally I need to restart a service without HA failing it over to another system. I can disable the failover using the following command:
hagrp -modify App_Cluster AutoFailOver 0
However, what happens is that if the service is stopped, HA continues to shut down all of the other services that are up. While researching I came across disabling Evacuate, but even with it disabled, HA still shuts down the other services:
hagrp -modify App_Cluster Evacuate 0
I want the other services to continue to run even if one goes down for some reason. What is the best way to accomplish this?
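Two mechanisms worth knowing here (a hedged sketch; App_Cluster is the group name from the commands above, and the resource name is a placeholder): freezing the group stops VCS from taking any action while you do maintenance, and marking a resource non-critical stops its fault from faulting the whole group and taking the other resources offline with it.

# hagrp -freeze App_Cluster                    (temporary freeze; VCS keeps monitoring but takes no action)
# hagrp -unfreeze App_Cluster                  (when the maintenance is done)

# haconf -makerw
# hares -modify <resource_name> Critical 0     (a fault on this resource no longer faults the group)
# haconf -dump -makero

Whether Critical 0 is appropriate depends on whether the other services genuinely have no dependency on the one being restarted; if they do depend on it, splitting them into separate service groups is the cleaner fix.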
SFHACFS vs Hitachi-VSP & Host mode option 22

Hi! I'm installing several new SFHACFS clusters, and during failover testing I ran into an annoying problem: when I fence one node of the cluster, DMP logs a high number of path down / path up events, which in the end causes the disks to disconnect even on the other, active nodes. We found out that our disks had been exported without host mode option 22, so we fixed this on the storage. Even after that, the clusters behaved the same. Later I read somewhere on the internet that it's a good idea to relabel the disks, so I requested new disks from storage and did a vxevac to the new disks. This fixed two of our clusters, but the other two still behave the same. Has anybody experienced anything similar? Do you know anything I can test / check on the servers to determine the difference? The only difference between the environments is that the non-working clusters have their disks mirrored from two storage systems, while the working ones have data disks from only one storage system.
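A few things that can be compared between the working and non-working clusters (hedged suggestions rather than a known fix): confirm both arrays are claimed by the same ASL and enclosure type, and compare the DMP error-handling tunables, which govern how aggressively DMP retries failing paths. The enclosure name is a placeholder to take from your own listenclosure output:

# vxdmpadm listenclosure all                       (enclosure type/attributes of each array on each cluster)
# vxddladm listsupport                             (which ASL is claiming each array)
# vxdmpadm getsubpaths enclosure=<enclosure_name>  (per-array path layout, to compare the mirrored vs. single-array clusters)
# vxdmpadm gettune all                             (compare dmp_* tunables, e.g. dmp_fast_recovery, dmp_lun_retry_timeout)

If the second storage system shows up under a different enclosure type or ASL than the VSP, that is the first place I would look for the behavioral difference.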
Dynamic multipath using EMC storage Veritas 3.5

I am trying to set up multipathing with an EMC CLARiiON. The problem is that vxdisk list fabric_0 only shows one path. The EMC array is in auto-trespass mode. This is Solaris 8, and format shows two paths.

# vxdisk list fabric_2
Device:     fabric_2
devicetag:  fabric_2
type:       sliced
hostid:     ncsun1
disk:       name=disk05 id=1302111549.6037.ncsun1
group:      name=rootdg id=1072877341.1025.nc1
info:       privoffset=1
flags:      online ready private autoconfig autoimport imported
pubpaths:   block=/dev/vx/dmp/fabric_2s4 char=/dev/vx/rdmp/fabric_2s4
privpaths:  block=/dev/vx/dmp/fabric_2s3 char=/dev/vx/rdmp/fabric_2s3
version:    2.2
iosize:     min=512 (bytes) max=2048 (blocks)
public:     slice=4 offset=0 len=1048494080
private:    slice=3 offset=1 len=32511
update:     time=1302111558 seqno=0.5
headers:    0 248
configs:    count=1 len=23969
logs:       count=1 len=3631
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-023986[023738]: copy=01 offset=000231 enabled
 log      priv 023987-027617[003631]: copy=01 offset=000000 enabled
Multipathing information:
numpaths:   1
c10t500601613B241045d5s2    state=enabled

format shows:
 8. c10t500601613B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64>
    /ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0/ssd@w500601613b241045,0
16. c16t500601603B241045d0 <DGC-RAID5-0428 cyl 63998 alt 2 hd 256 sec 64>
    /ssm@0,0/pci@18,700000/SUNW,qlc@1/fp@0,0/ssd@w500601603b241045,0

vxdisk -o alldgs list shows both paths. Two things here: it should only show one of the paths, and the second path shows up with the disk group in parentheses. Another issue is why the disks don't show up as EMC_0 or similar. (The server has T3 storage connected as well as the EMC, and we are migrating from T3 to EMC; the fabric_* devices are the EMC LUNs.)

# vxdisk -o alldgs list
DEVICE       TYPE      DISK        GROUP       STATUS
T30_0        sliced    disk01      rootdg      online
T30_1        sliced    disk02      rootdg      online
T31_0        sliced    disk03      rootdg      online
T31_1        sliced    disk04      rootdg      online
T32_0        sliced    rootdg00    rootdg      online
T32_1        sliced    rootdg01    rootdg      online
c1t0d0s2     sliced    -           -           error
c1t1d0s2     sliced    -           -           error
fabric_0     sliced    -           -           error
fabric_1     sliced    -           -           error
fabric_2     sliced    disk05      rootdg      online
fabric_3     sliced    disk06      rootdg      online
fabric_4     sliced    disk07      rootdg      online
fabric_5     sliced    disk08      rootdg      online
fabric_6     sliced    disk09      rootdg      online
fabric_7     sliced    disk10      rootdg      online
fabric_8     sliced    -           -           error
fabric_9     sliced    -           -           error
fabric_10    sliced    -           (rootdg)    online
fabric_11    sliced    -           (rootdg)    online
fabric_12    sliced    -           (rootdg)    online
fabric_13    sliced    -           (rootdg)    online
fabric_14    sliced    -           (rootdg)    online
fabric_15    sliced    -           (rootdg)    online

Here is the ASL (there is no APM prior to Veritas 4.0). vxddladm listsupport output, snipped for brevity:
libvxDGCclariion.so    A/P    DGC    CLARiiON

c10 and c16 are the paths to the EMC:

# vxdmpadm listctlr all
CTLR-NAME    ENCLR-TYPE     STATE      ENCLR-NAME
=====================================================
c1           OTHER_DISKS    ENABLED    OTHER_DISKS
c10          OTHER_DISKS    ENABLED    OTHER_DISKS
c16          OTHER_DISKS    ENABLED    OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c10
NAME                        STATE     PATH-TYPE    DMPNODENAME    ENCLR-TYPE     ENCLR-NAME
======================================================================
c10t500601613B241045d7s2    ENABLED   -            fabric_0       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d6s2    ENABLED   -            fabric_1       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d5s2    ENABLED   -            fabric_2       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d4s2    ENABLED   -            fabric_3       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d3s2    ENABLED   -            fabric_4       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d2s2    ENABLED   -            fabric_5       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d1s2    ENABLED   -            fabric_6       OTHER_DISKS    OTHER_DISKS
c10t500601613B241045d0s2    ENABLED   -            fabric_7       OTHER_DISKS    OTHER_DISKS

# vxdmpadm getsubpaths ctlr=c16
NAME                        STATE     PATH-TYPE    DMPNODENAME    ENCLR-TYPE     ENCLR-NAME
======================================================================
c16t500601603B241045d7s2    ENABLED   -            fabric_8       OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d6s2    ENABLED   -            fabric_9       OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d5s2    ENABLED   -            fabric_10      OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d4s2    ENABLED   -            fabric_11      OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d3s2    ENABLED   -            fabric_12      OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d2s2    ENABLED   -            fabric_13      OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d1s2    ENABLED   -            fabric_14      OTHER_DISKS    OTHER_DISKS
c16t500601603B241045d0s2    ENABLED   -            fabric_15      OTHER_DISKS    OTHER_DISKS

Thanks for any help
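The getsubpaths output above shows every path claimed as OTHER_DISKS rather than by the CLARiiON ASL, which is why each path appears as a separate fabric_N disk instead of being coalesced under one DMP node with numpaths: 2. A hedged sketch of what to check next (the libname is copied from the listsupport snippet above; exact behavior on VxVM 3.5 may differ from later releases):

# vxddladm listsupport libname=libvxDGCclariion.so    (confirm the CLARiiON ASL is installed and what VID/PID it claims)
# vxdctl enable                                       (re-run device discovery so the ASL can claim the DGC LUNs)
# vxdisk list fabric_2                                (re-check numpaths after the rescan)

If the LUNs are still claimed as OTHER_DISKS after vxdctl enable, the ASL itself is the likely problem (missing, too old for this array firmware, or excluded from discovery), not the zoning, since both controllers clearly see all eight LUNs.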
Configure GCO with VVR in Linux

Total number of nodes on the primary site: 7. Management IPs for all 7 nodes are defined, and the following are the service group names with their IPs:

service group name 1 - IP 10.1.1.1
service group name 2 - IP 10.1.1.2
service group name 3 - IP 10.1.1.3
service group name 4 - IP 10.1.1.4
service group name 5 - IP 10.1.1.5

Secondary site: 3 nodes. Management IPs for all 3 nodes are defined.

If I want to configure GCO with VVR in this scenario, how many IP addresses are required at each site?
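For reference, a rough sketch of where the extra virtual IPs typically come in with GCO + VVR (hedged; the address below is a placeholder and the exact count depends on how many of the five service groups are actually replicated): each cluster needs one virtual IP of its own for the global cluster heartbeat/wac process, set as the ClusterAddress, and each replicated RVG normally gets a virtual IP at each site for the replication (RLINK) endpoint, on top of the per-service-group application IPs already listed above.

# haconf -makerw
# haclus -modify ClusterAddress "10.1.1.10"    (one GCO virtual IP per cluster; placeholder address)
# haconf -dump -makero

So as a working estimate: 1 GCO IP per site, plus 1 VVR replication IP per replicated service group per site, plus the existing application and management IPs.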