Can LLT heartbeats communicate between NICs with different device names?
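As context for the question that follows: LLT reads its link definitions from /etc/llttab on each node, and each `link` line names a device local to that node, so the device names do not have to match across nodes. A minimal sketch for this topology (node name, cluster ID, and device names are illustrative):

```
set-node node1
set-cluster 101
link eth2 eth2 - ether - -
link eth3 eth3 - ether - -
```

What does have to line up is the wiring: the first link of each node should sit in one broadcast domain and the second link in another, which is why the installation guide asks for two separate networks.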
In a two-node VCS cluster, the heartbeat NICs are eth2 and eth3 on each node. If eth2 on node1 goes down and eth3 on node2 goes down, does that mean both heartbeat links are down and the cluster is in a split-brain situation? Can LLT heartbeats communicate between NIC eth2 and NIC eth3? The VCS Installation Guide requires the two heartbeat links to be on different networks, so we should put eth2 of both nodes in one VLAN (VLAN1) and eth3 of both nodes in another VLAN (VLAN2); in that layout, heartbeats cannot travel between eth2 and eth3. However, in a production cluster we found that all four NICs (eth2 and eth3 of both nodes) are in the same VLAN, and this led me to post this discussion thread to ask: if eth2 on node1 is down and eth3 on node2 is down, what will happen to the cluster (which is in active-standby mode)? Thanks!

VCS system states in the state of ADMIN_WAIT
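A common way out of the situation described below is to rename the cluster to something other than the bare word "cluster", which collides with the `cluster` keyword in the main.cf grammar. A sketch of a corrected header (the name symantecha_clus is made up):

```
include "types.cf"

cluster symantecha_clus (
        UserNames = { admin = gJHkHEfFGjISjGFqJJeRJoJSiRJj }
        Administrators = { admin }
        )
```

After editing main.cf, verify it with `hacf -verify` in the configuration directory before starting VCS.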
If you name the cluster "cluster" while running VCW, the system states of all VCS nodes go to ADMIN_WAIT. If you run "hastop -all" and then try to start VCS with the "hastart -onenode" command, hastatus displays "STALE_ADMIN_WAIT". Check main.cf and you will see the cluster name is cluster:

include "types.cf"

cluster Cluster (
        UserNames = { admin = gJHkHEfFGjISjGFqJJeRJoJSiRJj }
        Administrators = { admin }
        )

system symantecha1 (
        )

system symantecha2 (
        )

NFS share doesn't failover due to being busy
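For the scenario described below, the bundled NFS, Share, and NFSRestart agents are usually combined, with NFS lock failover enabled so clients can recover after the virtual IP moves. A rough main.cf fragment (resource names, paths, and addresses are all invented, and attribute names should be checked against the Bundled Agents Reference for the installed release):

```
NFS nfs_server (
        Nservers = 16
        )

Share nfs_share (
        PathName = "/export/data"
        )

IP nfs_ip (
        Device = eth0
        Address = "10.0.0.100"
        NetMask = "255.255.255.0"
        )

NFSRestart nfs_restart (
        NFSRes = nfs_server
        LocksPathName = "/export/locks"
        NFSLockFailover = 1
        )

nfs_share requires nfs_server
nfs_ip requires nfs_share
nfs_restart requires nfs_ip
```

With lock failover, NFS lock state is kept on shared storage (LocksPathName) so the standby node can take it over; clients holding locks on the share are a common reason a share refuses to go offline cleanly.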
Hello! We are trying to implement a failover cluster that hosts a database and files on a clustered NFS share. The files are used by the clustered application itself and by several other hosts. The problem is that when the active node fails (an ungraceful server shutdown, or a stop of some clustered service), the other hosts continue to use files on our cluster-hosted NFS share. That leads to the NFS share "hanging": it no longer works on the first node, yet still cannot be brought online on the second node, and requests to the share from the other hosts hang as well. Later, I will attach logs where the problem can be observed. The only corrective action we have found is a total shutdown and sequential restart of all cluster nodes and other hosts. Please recommend best-practice actions for using an NFS share on Veritas Cluster Server (perhaps start/stop/clean scripts included as a cluster resource, or additional cluster configuration options). Thank you in advance! Best regards, Maxim Semenov.

Integrating SAP with VCS 6.2 (on Oracle Linux 6.5)
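For orientation while reading the question below: a single central instance would typically map to one SAPNW04 resource, roughly like this (all values are invented; DVEBMGS00 stands for whatever instance name `ls /usr/sap/T30` actually shows, and the attribute list must be checked against the agent guide):

```
SAPNW04 sap_T30_ci (
        SAPSID = T30
        InstName = DVEBMGS00
        ProcMon = "dw"
        )
```

As for ABAP versus Java versus add-in: one rough check is `ps -ef | grep t30adm` on a running system. ABAP instances run disp+work (`dw`) processes, while Java instances run `jcontrol`/`jlaunch`; an add-in installation shows both.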
Hi, I was wondering if someone has some additional information on how to set up my cluster. I have both VCS (including Storage Foundation) and Linux knowledge, but no background in SAP, and as SAP is a very complex product, I cannot see the forest for the trees.

Setup:
- 2-node (active-passive) cluster of Oracle Linux 6.5 nodes
- Veritas Storage Foundation HA (= VxVM + DMP + VCS)
- Oracle 11.2 as database
- SAP ECC 6.0

Apart from the Installation & Configuration guide for the SAP NetWeaver agent, I found little information about implementing SAP in VCS. Source: "Symantec™ High Availability Agent for SAP NetWeaver Installation and Configuration Guide for Linux 6.2". Unfortunately I cannot find a how-to or guide from Symantec, nor via the usual Google attempts. My customer is also not very SAP-knowledgeable. From what I understand, it is a very basic SAP setup, if not the simplest: they are using SAP ECC 6.0 and an Oracle 11.2 database, so I assume they just have a Central Instance and the database. After some Google research, I found out that SAP ECC 6.0 is technically SAP NetWeaver 7.0. On Symantec SORT, I found 3 versions of the SAP NetWeaver agent. I downloaded the first one, as the description says:

SAP NetWeaver
SAP NetWeaver 7.1, 7.2, 7.3, 7.4
SAP NetWeaver 7.1, 7.3
Agent: SAPNW04 5.0.16.0
Application version(s): SAP R/3 4.6, R/3 Enterprise 4.7, NW04, NW04s, ERP/ECC 5.0/6.0, SCM/APO 4.1/5.0/5.1/7.0, SRM 4.0/5.0/7.0, CRM 4.0/5.0/7.0
Source: https://sort.symantec.com/agents/detail/1077

SAP ERP 2005 = SAP NetWeaver 2004s (BASIS 7.00) = ECC 6.0
Source: http://itknowledgeexchange.techtarget.com/itanswers/difference-bet-ecc-60-sap-r3-47/
Source: http://www.fasttrackph.com/sap-ecc-6-0/
Source: http://wulibi.blogspot.be/2010/03/what-is-sap-ecc-60-in-brief.html

Currently I have this setup, unfinished:
- Installed and configured Storage Foundation HA on both nodes.
- Installed the ACC libraries on both nodes. See: https://sort.symantec.com/agents/detail/1183
- Installed the SAP NetWeaver agent on both nodes. See: https://sort.symantec.com/agents/detail/1077
- Configured, next to the ClusterServiceGroup, 3 service groups:
  - SG_sap: the shared storage resources (DiskGroup + Volumes + Mount) and the SAPNW agent resource.
  - SG_oracle: the shared storage resources (DiskGroup + Volumes + Mount) and the Oracle agent resource.
  - SG_nfs: still empty.

SAPNW agent, SAP instance type. The SAPNW agent documentation states that the agent supports the following SAP instance types: Central Services Instance, Application Server Instance, and Enqueue Replication Server Instance. Source: "Symantec™ High Availability Agent for SAP NetWeaver Installation and Configuration Guide for Linux 6.2". But I guess SAP ECC 6.0 has them all in one central instance, right? So I only need one SAPNW agent resource. How is the SAP installed: ABAP only, Java only, or add-in (both ABAP and Java)? (Same source.) I have no idea. How can I find this out?

InstName attribute. Another thing is the InstName attribute; this also does not correspond with the information I have. My SAP instance is T30, so the syntax is more or less correct, but it isn't listed below, which also matters for deciding on the value of the ProcMon attribute. The SAPSID and InstName form a unique identifier that can identify the processes running for a particular instance. Some examples of SAP instances:

InstName = InstType
DVEBMGS00 = SAP Application Server - ABAP (Primary)
D01 = SAP Application Server - ABAP (Additional)
ASCS02 = SAP Central Services - ABAP
J03 = SAP Application Server - Java
SCS04 = SAP Central Services - Java
ERS05 = SAP Enqueue Replication Server
SMDA97 = Solution Manager Diagnostics Agent
Source: "Symantec™ High Availability Agent for SAP NetWeaver Installation and Configuration Guide for Linux 6.2"

It is also listed among the required attributes; however, the default value there is CENTRAL. I guess this is correct in my case?

InstName attribute: an identifier that classifies and describes the SAP server instance type. Valid values are:
APPSERV: SAP Application Server
ENQUEUE: SAP Central Services
ENQREP: Enqueue Replication Server
SMDAGENT: Solution Manager Diagnostics Agent
SAPSTARTSRV: SAPSTARTSRV Process
Note: the value of this attribute is not case-sensitive.
Type and dimension: string-scalar. Default: APPSERV. Example: ENQUEUE

EnqSrvResName attribute. A required attribute is the EnqSrvResName attribute. The documentation says this should be the resource name for the SAP Central Instance. But I am assuming I only have a SAP Central Instance, so I guess I should use the name of my SAPNW agent resource from my SAP service group?

EnqSrvResName attribute: the name of the VCS resource for the SAP Central Services (A)SCS instance. This attribute is used by the Enqueue and Enqueue Replication Servers: using this attribute, the Enqueue server queries the Enqueue Replication Server resource state while determining the failover target, and vice versa. Type and dimension: string-scalar. Default: no default value. Example: SAP71-PI1SCS_sap
Source: "Symantec™ High Availability Agent for SAP NetWeaver Installation and Configuration Guide for Linux 6.2"

Is anyone able to help me out? Thanks in advance.

How to change SQL Agent/Server start up account?
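For the question below, the usual VCS pattern is to freeze the service group persistently so that the controlled restart of the services is not treated as a fault, make the change on each node, then unfreeze. A sketch (the group name SQL_SG is made up):

```
haconf -makerw
hagrp -freeze SQL_SG -persistent
haconf -dump -makero

    ... change the startup account in SQL Server Configuration Manager on each node ...

haconf -makerw
hagrp -unfreeze SQL_SG -persistent
haconf -dump -makero
```

While the group is frozen, VCS keeps monitoring but takes no corrective action, so restarting the SQL services does not trigger a failover.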
Hi, I have an existing 2-node active-active SQL 2008 R2 / Veritas 6.0 cluster. When I installed the SQL instance on each node, I set a specific AD user for starting the SQL Agent and SQL Server services, the same on each node. I now want to change that to a different user. I know how to do that via SQL Server Configuration Manager on each node, but is that the correct way to do it in a Veritas cluster? Changing it there will restart the service and, I believe, cause a failover. What is the correct way to do this?

Error while installing VCS on Solaris 11
The following warnings were discovered on the systems:

CPI WARNING V-9-40-4923 To avoid a potential reboot after installation, you should modify the /etc/system file on solaris with the appropriate values, and reboot prior to package installation. Appropriate /etc/system file entries are shown below:
set lwp_default_stksize=0x8000
set rpcmod:svc_default_stksize=0x8000

CPI WARNING V-9-40-4923 To avoid a potential reboot after installation, you should modify the /etc/system file on solaris11 with the appropriate values, and reboot prior to package installation. Appropriate /etc/system file entries are shown below:
set lwp_default_stksize=0x8000
set rpcmod:svc_default_stksize=0x8000

Installer log files and summary file are saved at: /opt/VRTS/install/logs/installer-201301250058fNC

Looking for root cause on why a resource is offline
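For the question below: the engine log, typically /var/VRTSvcs/log/engine_A.log, records every resource state change with a timestamp, so grepping it for the resource name is the quickest way to see when and why a resource went offline. The excerpt used here is invented purely so the search can be shown end to end (the resource app_ip and the message text are illustrative, not taken from a real system):

```shell
# Invented stand-in for /var/VRTSvcs/log/engine_A.log (format illustrative only)
cat > /tmp/engine_A.log <<'EOF'
2024/01/10 03:12:05 VCS INFO V-16-1-10307 Resource app_ip (Owner: Unspecified, Group: app_sg) is offline on node1 (Not initiated by VCS)
EOF

# When and where did the resource go offline?
grep -i 'Resource app_ip.*offline' /tmp/engine_A.log
```

Messages marked "Not initiated by VCS" point at an unexpected offline rather than one VCS commanded; the per-agent logs in the same directory (for example IP_A.log) then give more detail.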
I recently noticed that I have a resource that is offline. I am fairly new to VCS and I'm trying to track down when and why this resource went offline. VCS has a ton of logs and I'm not sure which one will help me in my search for the root cause. Can anyone point me in the right direction?

Is it recommended to configure a coordinator DG for a failover SG?
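Background for the question below: when disk-based I/O fencing is enabled, the (usually three) coordinator disks sit in a dedicated disk group that stays deported, and two small files select the mode and the group. A typical SCSI-3 sketch (the disk group name vxfencoorddg is conventional but arbitrary):

```
# /etc/vxfenmode
vxfen_mode=scsi3
scsi3_disk_policy=dmp

# /etc/vxfendg
vxfencoorddg
```

Fencing protects cluster membership itself rather than individual disk groups, so it is generally worth configuring even when all service groups are failover-only: the import record in the disk header prevents a concurrent import, but on its own it does not arbitrate a split-brain.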
Dears, is it mandatory or recommended to implement I/O fencing using coordinator disks if I have only failover service groups with non-shared disk groups? The disk headers will carry the information about which node a disk group is imported on, and the DG is not shared.

Migrating VCS node to another server
Hello, everybody! I'm wondering if it's possible to migrate a VCS 5.0 node in a two-node HA cluster by pulling a disk out of an old server and putting it into a new one. The difference between the old server and the new one is basically the number of CPUs and the amount of RAM, so it seems it shouldn't be a problem. Please correct me if I'm deeply wrong and it won't work. If it's doable, are there any VCS-related pitfalls it's better to be aware of beforehand? Thank you. With best regards, Sergey.
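If the swap is attempted, a few LLT/GAB sanity checks on the new server before bringing service groups online can catch the usual pitfall: NIC device names or MAC-based udev rules that no longer match the old /etc/llttab. A sketch, to be adapted:

```
cat /etc/llttab    # do the link device names still match this server's NICs?
lltstat -nvv       # are both private links up to the peer node?
gabconfig -a       # has GAB formed the expected membership?
```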