Migrate ApplicationHA to new vCenter server
We're consolidating a number of vCenter servers into a single instance. One of the clusters we want to move has VMs managed by AppHA 6.0.1. Is there an easy way to migrate the cluster configuration from the old vCenter server to the new one without causing any cluster outages or losing the cluster configuration? I've found this article on how to deal with the permanent loss of a vCenter server, but I'm unsure what might happen if the existing vCenter server stays online:

https://sort.symantec.com/public/documents/appha/6.0/windows/productguides/html/appha_userguide_60_win/apas11.htm
resource state UNKNOWN

Hi all. I have a single-node cluster, ready to become part of a global cluster in the future. Before I restarted the node (system), the situation was:

# hagrp -state
#Group          Attribute  System    Value
AppService      State      MIVDB01S  |ONLINE|
ClusterService  State      MIVDB01S  |ONLINE|
VVRService      State      MIVDB01S  |ONLINE|

After the reboot I had this situation:

# hagrp -state
#Group          Attribute  System    Value
AppService      State      MIVDB01S  |PARTIAL|
ClusterService  State      MIVDB01S  |ONLINE|
VVRService      State      MIVDB01S  |ONLINE|

# hares -state
#Resource        Attribute  System    Value
BackupServer     State      MIVDB01S  OFFLINE
DataFilesystem   State      MIVDB01S  OFFLINE
DatabaseServer   State      MIVDB01S  OFFLINE
NMSServer        State      MIVDB01S  OFFLINE|STATE UNKNOWN
RVGPrimary       State      MIVDB01S  OFFLINE
appNIC           State      MIVDB01S  ONLINE
datarvg          State      MIVDB01S  ONLINE
ntfr             State      MIVDB01S  ONLINE
wac              State      MIVDB01S  ONLINE

I cannot escape from the 'STATE UNKNOWN' situation. Can you help me? I tried rebooting the server a second time. BR
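As a quick triage step, the stuck resources can be picked out of the `hares -state` output mechanically. This is a sketch, not from the thread: the here-doc replays the sample output from the post, and on a live node you would pipe `hares -state` into the same awk filter instead.

```shell
#!/bin/sh
# Print the name of every resource whose state contains "UNKNOWN".
# The here-doc is the sample hares -state output from the post;
# on a live node use:  hares -state | awk '...'
unknown_res=$(awk '$2 == "State" && $NF == "UNKNOWN" { print $1 }' <<'EOF'
#Resource        Attribute  System    Value
BackupServer     State      MIVDB01S  OFFLINE
DataFilesystem   State      MIVDB01S  OFFLINE
DatabaseServer   State      MIVDB01S  OFFLINE
NMSServer        State      MIVDB01S  OFFLINE|STATE UNKNOWN
RVGPrimary       State      MIVDB01S  OFFLINE
appNIC           State      MIVDB01S  ONLINE
datarvg          State      MIVDB01S  ONLINE
ntfr             State      MIVDB01S  ONLINE
wac              State      MIVDB01S  ONLINE
EOF
)
echo "$unknown_res"
```

Once the offending resource is identified, the standard VCS commands to try are `hares -probe NMSServer -sys MIVDB01S` (ask the agent to re-monitor the resource) and, after the underlying problem is fixed, `hares -clear NMSServer -sys MIVDB01S`; a persistent UNKNOWN usually means the agent's monitor entry point cannot determine state, so the agent log for that resource type is the next place to look. These are general suggestions, not a confirmed fix from the thread.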
VVR paused due to network disconnection

Hi all. I have a global cluster with 2 minicluster systems running Solaris 10 (SPARC):

primary:   MIVDB01S - 172.22.8.132
secondary: MILDB08S - 10.66.11.148

I stopped the secondary (init 0) for 3 days, then started it again (boot from the ok prompt). After 1 day I checked the replication status:

MIVDB01S root# vradmin -g datadg printrvg datarvg
Replicated Data Set: datarvg
Primary:
        HostName: 172.22.8.132 <localhost>
        RvgName: datarvg
        DgName: datadg
Secondary:
        HostName: 10.66.11.148
        RvgName: datarvg
        DgName: datadg

# vxrlink -g datadg status datarlk
Wed Jan 21 09:51:08 2015
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink datarlk. DCM contains 874432 Kbytes (1%) of the Data Volume(s).

# vradmin -g datadg repstatus datarvg
Replicated Data Set: datarvg
Primary:
        Host name: 172.22.8.132
        RVG name: datarvg
        DG name: datadg
        RVG state: enabled for I/O
        Data volumes: 1
        VSets: 0
        SRL name: srl_vol
        SRL size: 1.00 G
        Total secondaries: 1
Secondary:
        Host name: 10.66.11.148
        RVG name: datarvg
        DG name: datadg
        Data status: consistent, behind
        Replication status: paused due to network disconnection (dcm resynchronization)
        Current mode: asynchronous
        Logging to: DCM (contains 874432 Kbytes) (SRL protection logging)
        Timestamp Information: N/A

# vxprint -Pl | grep flags
flags: write enabled attached consistent disconnected asynchronous dcm_logging resync_paused

Possible solutions I am considering:

1st solution:
MIVDB01S root# vradmin -g datadg resync datarvg

2nd solution:
Stop vradmin on the secondary, then on the primary:
# /usr/sbin/vxstart_vvr stop
Start vradmin on the secondary, then on the primary:
# /usr/sbin/vxstart_vvr start

Can you help me?
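The decisive clue above is the `resync_paused` flag in the `vxprint -Pl` output: DCM replay was interrupted by the network disconnect and will not restart by itself. A small sketch (sample flags line taken from the post) of checking for that flag before acting:

```shell
#!/bin/sh
# Decide from the vxprint -Pl flags line whether a DCM resync needs to
# be restarted. The flags string below is the sample from the post;
# on a live node use:  flags=$(vxprint -Pl | grep flags)
flags="write enabled attached consistent disconnected asynchronous dcm_logging resync_paused"
case " $flags " in
  *" resync_paused "*) action="resync" ;;   # DCM replay stalled, kick it off again
  *)                   action="none"   ;;
esac
echo "$action"
```

If the flag is present and the secondary (10.66.11.148) is reachable again, the first solution listed in the post (`vradmin -g datadg resync datarvg`) is the documented way to restart DCM replay; note the `disconnected` flag means replay cannot progress until the rlink actually reconnects, so verifying network connectivity to the secondary comes first. This is a suggested ordering, not a confirmed answer from the thread.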
R12: XML Reports end with WARNING Status, Using Virtual Hostname by Veritas HA

Dear Experts,

We are using the Veritas HA cluster to run our ERP application on a virtual hostname. With Release 11i everything was working fine. We have upgraded our application from 11.5.10.2 to R12 (12.1.3). In the upgraded application we are unable to open/view any of the XML-based concurrent reports: all the XML concurrent reports end in status WARNING, and when we click on View Output we get just the XML code, not the rendered report.

The log file of the Output Post Processor shows the following errors:

[5/28/13 12:40:20 PM] [STATEMENT] [960882:RT4152834] Get Output Type
[5/28/13 12:40:20 PM] [STATEMENT] [960882:RT4152834] XML file name: /mnt/oracle/ERPROD/inst/apps/ERPROD_erp-lh-dr/logs/appl/conc/out/o4152834.out
[5/28/13 12:40:20 PM] [STATEMENT] [960882:RT4152834] XML file is on node: ERP-LH-DR
[5/28/13 12:40:20 PM] [UNEXPECTED] [960882:RT4152834] java.sql.SQLException: Exhausted Resultset
        at oracle.jdbc.driver.OracleResultSetImpl.getString(OracleResultSetImpl.java:1224)
        at oracle.apps.fnd.cp.util.CpUtil.getCanonicalLocalNode(CpUtil.java:127)
        at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(XMLPublisherProcessor.java:244)
        at oracle.apps.fnd.cp.opp.OPPRequestThread.run(OPPRequestThread.java:176)
[5/28/13 12:40:20 PM] [960882:RT4152834] Completed post-processing actions for request 4152834.

The errors we are getting in the application log files are related to the hostname. Following is another abstract from the Output Post Processor log:

[5/29/13 2:56:02 PM] [UNEXPECTED] [960900:RT4155505] java.sql.SQLException: Exhausted Resultset
        at oracle.jdbc.driver.OracleResultSetImpl.getString(OracleResultSetImpl.java:1224)
        at oracle.apps.fnd.cp.util.CpUtil.getCanonicalLocalNode(CpUtil.java:127)
        at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(XMLPublisherProcessor.java:244)
        at oracle.apps.fnd.cp.opp.OPPRequestThread.run(OPPRequestThread.java:176)

The Concurrent Manager log shows the following:

Executing request completion options...
Output file size: 430455
------------- 1) PUBLISH -------------
Beginning post-processing of request 4158856 on node ERP-LH-DR at 30-MAY-2013 18:53:38.
Post-processing of request 4158856 failed at 30-MAY-2013 18:53:38 with the error message:
One or more post-processing actions failed. Consult the OPP service log for details.
------------- 2) PRINT -------------
Not printing the output of this request because post-processing failed.
Finished executing request completion options.
Concurrent request completed

Please suggest:
1. Is there any way to make these XML reports work with Veritas HA, using a virtual hostname?
2. Can we identify or pinpoint where the Output Post Processor is picking up the physical hostname?

Regards
Ali Ammar
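For question 2, the OPP log itself records which node name it resolved for each request, so pulling those lines out shows immediately whether the physical or the virtual hostname is in play. A minimal sketch using the log line quoted above (note that the failing call, `CpUtil.getCanonicalLocalNode`, matches the resolved name against the nodes registered in EBS — so a node-registration mismatch for the virtual hostname is a plausible suspect, though that diagnosis is an assumption, not something confirmed in this thread):

```shell
#!/bin/sh
# Extract the node name the Output Post Processor resolved, from an
# OPP log line. The sample line is taken from the post; on a live
# system grep the OPP log file for "XML file is on node:".
opp_log_sample='[5/28/13 12:40:20 PM] [STATEMENT] [960882:RT4152834] XML file is on node: ERP-LH-DR'
node=$(printf '%s\n' "$opp_log_sample" | sed -n 's/.*XML file is on node: //p')
echo "$node"
```

If the extracted name is the physical hostname while the application tier is registered under the virtual one (or vice versa), that inconsistency is where to dig next.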
VMwareDisks error

I installed SVS 6.0.1 on my server and updated VRTSvcsag to version 6.0.2. Then I configured a VMwareDisks resource in main.cf:

VMwareDisks VMwareDisks1 (
        ESXDetails = { "10.172.117.95" = "root=ISIuJWlWLwPOiWKuM" }
        DiskPaths = { "[95_storage] rhel5104/rhel5104_1.vmdk" = "0:1",
                 "[95_storage] rhel5104/rhel5104_2.vmdk" = "0:2",
                 "[95_storage] rhel5104/rhel5104_3.vmdk" = "0:3" }
        )

But after hastart, the VMwareDisks resource is not probed, with the error message:

Dec 28 01:19:42 rhel5104 AgentFramework[16962]: VCS ERROR V-16-10061-22521 VMwareDisks:VMwareDisks1:monitor:Incorrect configuration: The disk '[95_storage] rhel5104/rhel5104_3.vmdk' has incorrect RDM configuration.

The disk UUIDs are automatically added to main.cf on hastart:

VMwareDisks VMwareDisks1 (
        ESXDetails = { "10.172.117.95" = "root=ISIuJWlWLwPOiWKuM" }
        DiskPaths = { "6000C291-229e-4704-719c-2a66b8f21ad8:[95_storage] rhel5104/rhel5104_1.vmdk" = "0:1",
                 "6000C29a-82d7-d365-a09e-68d37698afd9:[95_storage] rhel5104/rhel5104_2.vmdk" = "0:2",
                 "6000C29d-0bd2-901d-5b95-24934b43144e:[95_storage] rhel5104/rhel5104_3.vmdk" = "0:3" }
        )
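The error text singles out the third disk's configuration on the ESX side, while the agent's rewrite of main.cf shows it keys each disk by a "UUID:" prefix prepended to the path. When hand-editing DiskPaths it helps to work with the plain datastore path again; a small text helper for that (sketch only, sample key taken from the rewritten main.cf above):

```shell
#!/bin/sh
# Strip the agent-added "UUID:" prefix from a DiskPaths key to recover
# the plain datastore path. Sample key from the post's main.cf.
key='6000C291-229e-4704-719c-2a66b8f21ad8:[95_storage] rhel5104/rhel5104_1.vmdk'
path=$(printf '%s\n' "$key" | sed 's/^[0-9A-Fa-f-]*://')
echo "$path"
```

The UUID prefixes themselves are informational; the "incorrect RDM configuration" complaint is about how the vmdk is attached on the ESX host, so comparing the third disk's attachment mode against the first two in the vSphere client is the next check (a suggestion, not a confirmed fix from the thread).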
cannot configure vxfen after reboot

Hello,

We physically moved a server, and after the reboot we cannot configure vxfen:

# vxfenconfig -c
VXFEN vxfenconfig ERROR V-11-2-1002 Open failed for device: /dev/vxfen with error 2

My vxfen.log:

Wed Aug 19 13:17:09 CEST 2015 Invoked vxfen. Starting
Wed Aug 19 13:17:23 CEST 2015 return value from above operation is 1
Wed Aug 19 13:17:23 CEST 2015 output was VXFEN vxfenconfig ERROR V-11-2-1041 Snapshot for this node is different from that of the running cluster.
Log Buffer: 0xffffffffa0c928a0
VXFEN vxfenconfig NOTICE Driver will use customized fencing - mechanism cps
Wed Aug 19 13:17:23 CEST 2015 exiting with 1

Engine version 6.0.10.0, RHEL 6.3.

Any idea how to get vxfen running (and had after that)?
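The telling message in vxfen.log is V-11-2-1041 rather than the V-11-2-1002 seen on the console: the node's fencing snapshot disagrees with what the running cluster reports. A sketch of isolating that error code from the log (sample line from the post):

```shell
#!/bin/sh
# Detect the snapshot-mismatch error in a vxfen.log line.
# The sample line is from the post; on a live node:
#   grep 'V-11-2-1041' /var/VRTSvcs/log/vxfen/vxfen.log   (path may vary)
logline='output was VXFEN vxfenconfig ERROR V-11-2-1041 Snapshot for this node is different from that of the running cluster.'
case "$logline" in
  *V-11-2-1041*) code="V-11-2-1041" ;;
  *)             code="" ;;
esac
echo "$code"
```

When this error appears, a reasonable first pass is to compare this node's /etc/vxfenmode and /etc/llthosts against a node where fencing is running, and to check GAB membership with `gabconfig -a`; with customized CPS fencing, the node also has to reach the configured CP servers from its new physical location. These are standard VCS utilities and general suggestions, not a confirmed resolution from the thread.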
Unable to monitor additional igswd.exe process

Software:
- Windows Server 2012 R2
- SAP Solution Manager 7.1
- ApplicationHA 6.1

I am testing the ApplicationHA agent functions for monitoring SAP NetWeaver. After I used the Configuration Wizard to set up monitoring of the SAP system, I used VCS commands to add an additional process (igswd.exe). However, the configuration did not work: although igswd.exe was running, ApplicationHA gracefully stopped the SAP instance. What was the problem? (Were the commands wrong?)

The VCS commands I used:

> hares -display SAP_S01_DVEBMGS00_res | findstr ProcMon
SAP_S01_DVEBMGS00_res ProcMon global disp+work.exe jcontrol.exe
> haconf -makerw
> hares -modify SAP_S01_DVEBMGS00_res ProcMon dis+work.exe jcontrol.exe igswd.exe
> haconf -dump
> notify_sink.exe -f

Thank you.
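Comparing the session's own output lines is revealing: the original ProcMon value shows `disp+work.exe`, but the `hares -modify` command spells it `dis+work.exe`. If ProcMon lists process names the agent must find running, a misspelled name would look like a dead process and could explain the graceful stop. A sketch of catching such slips mechanically (both lists are taken verbatim from the post; the interpretation of ProcMon here is an assumption to verify against the agent documentation):

```shell
#!/bin/sh
# Report configured ProcMon names that match no actually-running
# process name. Lists copied from the post.
configured="dis+work.exe jcontrol.exe igswd.exe"
running="disp+work.exe jcontrol.exe igswd.exe"
missing=""
for p in $configured; do
  case " $running " in
    *" $p "*) : ;;                     # configured name found among running processes
    *) missing="$missing $p" ;;        # no running process with this exact name
  esac
done
echo "missing:$missing"
```

Re-running the `hares -modify` with the dispatcher spelled `disp+work.exe` would be the first thing to try before digging deeper.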
Network adaptor unavailable

Hi All,

After upgrading a cluster server from Windows 2008 R1 to Windows 2008 R2, the server cannot connect to the network anymore. The adapter is still there and the heartbeats are working, but the principal network cannot be reached. The server is an HP and the Veritas cluster software is 5.1 SP2. Any suggestions how to fix this? Since the Symantec portal is unavailable for logging a case, I need some help.
Building NetBackup Global Cluster with VVR Option... need main.cf

Hello,

I am trying to build a global cluster with the VVR option, with one NetBackup cluster in Site A (two nodes) and one NetBackup cluster in Site B (a single node). I know how to replicate the catalog manually, but I would like to add it to the global cluster configuration to automate everything. I would appreciate it if anyone could share which service groups need to be modified, or whether a new service group needs to be created, to accommodate the replication part. A working main.cf for this solution would be really helpful.

Best Regards
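Not an answer from the thread, but a minimal sketch of the shape such a configuration usually takes: a replication service group that holds the disk group, the RVG, and the replication IP, plus an RVGPrimary resource in the global NetBackup application group that performs takeover/migration of the primary role. Every name below (nbu_dg, nbu_rvg, node, cluster, device, and address values) is a placeholder, and the fragment deliberately omits the NetBackup resources themselves and the GCO wac/heartbeat configuration.

```
group VVRGrp (
    SystemList = { nodeA1 = 0, nodeA2 = 1 }
    )

    DiskGroup nbu_dg_res (
        DiskGroup = nbu_dg
        )

    RVG nbu_rvg_res (
        RVG = nbu_rvg
        DiskGroup = nbu_dg
        )

    NIC vvr_nic_res (
        Device = eth0
        )

    IP vvr_ip_res (
        Device = eth0
        Address = "192.168.1.10"
        )

    nbu_rvg_res requires nbu_dg_res
    vvr_ip_res requires vvr_nic_res

group NBUGrp (
    SystemList = { nodeA1 = 0, nodeA2 = 1 }
    ClusterList = { clusA = 0, clusB = 1 }
    Authority = 1
    )

    RVGPrimary nbu_rvgprimary_res (
        RvgResourceName = nbu_rvg_res
        )

    // NetBackup resources (virtual IP, mounts, NetBackup agent)
    // go here and depend on nbu_rvgprimary_res.

    requires group VVRGrp online local hard
```

The idea behind the split is that the VVRGrp stays online on both sites so replication keeps running, while the global NBUGrp (with its RVGPrimary resource) fails over between clusters and promotes the local RVG to primary as part of that failover. Check the attribute names and dependency rules against the VCS bundled agents reference for your release before using anything like this.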