Automatic Failover
I would like to use Veritas Cluster Server to achieve high availability and automatic failover for my applications. These are Java applications, with some services backed by Sybase databases. The Java applications can be broken into two parts: a web layer and an application layer. The underlying infrastructure is RHEL Linux for both the Java applications and the Sybase database.

My question is: does VCS support seamless automatic failover for the Java services, including the database services, without requiring manual intervention? What I want to achieve is this: after I set up active-passive clustering for the application layer, I expect the active node to fail over automatically to the passive node, with the passive node immediately becoming the active node.
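For reference, unattended failover is the default behavior for a VCS failover service group (the group-level AutoFailOver attribute defaults to 1), so no manual intervention is needed once the agents detect a fault. Below is a minimal main.cf sketch of an active-passive group combining the bundled Sybase and Application agents; all node names, paths, and attribute values are hypothetical and would need to match your environment:

    group app_sg (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1 }
        AutoFailOver = 1
        )

        Sybase syb_db (
            Server = SYBSRV1
            Owner = sybase
            Home = "/opt/sybase"
            Version = "15.7"
            SA = sa
            )

        Application java_app (
            StartProgram = "/opt/app/bin/start.sh"
            StopProgram = "/opt/app/bin/stop.sh"
            MonitorProcesses = { "/opt/app/bin/appserver" }
            )

        java_app requires syb_db

A real configuration would also place DiskGroup/Mount resources and an IP/NIC pair beneath the application so that storage and the virtual IP move with the group.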

Trigger after failed cleanup script

Hi there, I have a system where the clean script can fail or time out, and I want to execute another script when this happens. I was wondering what the best way of doing this would be. In the Veritas Cluster Server administrator's guide for Linux I found the RESNOTOFF trigger. From the documentation, my understanding is that this trigger fires in the following cases:

- A resource fails going offline (initiated by VCS) and the clean fails.
- A resource goes offline unexpectedly and the clean fails.

I have tested this, and RESNOTOFF works in the first scenario but not in the second. To test the second scenario I kill the service, and I can see the following message in engine_A.log:

    VCS ERROR V-16-2-13067 (node1) Agent is calling clean for resource(service1) because the resource became OFFLINE unexpectedly, on its own.

When the clean fails, I would expect the resource to become UNABLE TO OFFLINE. However, the status of the resource is still ONLINE:

    # hares -state service1
    #Resource    Attribute    System    Value
    service1     State        node1     ONLINE
    service1     State        node2     OFFLINE

So the resource stays ONLINE and VCS keeps running the clean entry point indefinitely (which keeps failing). I was wondering whether I need to configure something else to make RESNOTOFF work in this particular scenario. Thanks,
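One thing worth double-checking in this situation is whether the trigger is actually enabled for the resource; in recent VCS versions resource-level triggers are switched on through the TriggersEnabled attribute, and the script itself must exist in the triggers directory on every node. A sketch, assuming the resource name from the post (the recovery script path is hypothetical, and the trigger's argument order should be verified against your version's admin guide):

    # Enable the trigger for the resource (configuration must be writable)
    haconf -makerw
    hares -modify service1 TriggersEnabled -add RESNOTOFF
    haconf -dump -makero

    # /opt/VRTSvcs/bin/triggers/resnotoff -- must be executable on all nodes
    #!/bin/sh
    # Argument order varies by VCS version; confirm against the admin guide.
    SYSTEM=$1
    RESOURCE=$2
    logger -t resnotoff "clean failed; resource $RESOURCE not offline on $SYSTEM"
    /usr/local/bin/escalate_cleanup.sh "$RESOURCE"   # hypothetical follow-up script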

Unable to bring the Service Group online.

Hi All, I tried to bring a service group online on a node but it is not coming online. Let me explain the issue. We rebooted node aixprd001 and found that /etc/filesystems was corrupted, so the SG bosinit_SG was in a PARTIAL state because many of the cluster filesystems were not mounted. We then corrected the entries and manually mounted all the filesystems, but the SG still showed PARTIAL, so we ran the command below:

    hagrp -clear bosinit_SG -all

Once done, the SG was in the ONLINE state. To be on the safe side, we tried to offline the SG and bring it online again, but the SG failed to come online. Below is the only output we were able to find in the engine_A.log file:

    2014/12/17 06:49:04 VCS NOTICE V-16-1-10166 Initiating manual online of group bosinit_SG on system aixprd001
    2014/12/17 06:49:04 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group bosinit_SG on all nodes

Please help me with suggestions; I will provide log output if needed. Thanks, Rufus
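When an online request produces only those two NOTICE lines and then nothing, a common first step is to look for resources stuck in a wait state and flush them before retrying. A sketch of the usual sequence, using the group and node names from the post:

    hastatus -sum                              # overall group/resource view
    hares -list Group=bosinit_SG               # list the group's resources
    hagrp -flush bosinit_SG -sys aixprd001     # clear internal WAITING states
    hagrp -online bosinit_SG -sys aixprd001
    tail -f /var/VRTSvcs/log/engine_A.log      # watch the online attempt live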

VCS with GCO\Windows2012\VMware 5.1\IBM SVC SAN based mirroring

We are looking at potentially using VCS for a cross-site DR project and have been told by a solution provider that we can use the GCO option in VCS to give us Windows 2012, running on VMware, with IBM SVC SAN-based mirroring. They have not been very clear about how it would actually work, and I cannot discern from the VCS 6.1 docs how it would be possible. The constraints we are working with are that we must use the native volume managers and file systems, and IBM SVC SAN-based replication. I can understand how it might work if RDMs are used for the VMs, but they are saying that we can use VMDKs inside VMFS datastores and still achieve our goals. Can anyone verify this is true and, if it is, explain how it is achieved? I understand how it works with local clustering using the NativeDisks and VMwareDisks agents, but it isn't clear how it works under GCO. The issue I see is that there does not appear to be anything manipulating the VMFS datastores or the LUNs they are made up from. I would really appreciate it if someone could explain exactly which agents are necessary and how they tie into VMware to ensure that the SAN-replicated LUNs/VMFS datastores are not mounted or accessed at the remote site. Thank you, Ian.
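For context, at the VCS level a service group only becomes eligible for cross-site failover once it carries the GCO attributes; control over replication direction has to come from a separate replication agent in the group (Veritas publishes hardware replication agents in its agent pack, including one for IBM SVC copy services; check the agent pack support matrix for your versions). A sketch of the group-level part only, with hypothetical cluster and node names:

    group SG_app (
        SystemList = { win-node1 = 0, win-node2 = 1 }
        ClusterList = { prod_clus = 0, dr_clus = 1 }
        ClusterFailOverPolicy = Manual
        Authority = 1
        )

ClusterFailOverPolicy = Manual keeps cross-site failover operator-driven, and the replication resource at the bottom of the dependency tree is what would reverse the SVC mirror on takeover; while the remote LUNs are replication targets they are not writable, which is what keeps the DR site from mounting them.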

SG is not switching to next node.

Hi All, I am new to VCS but good with HACMP. In our environment we are using VCS 6.0. On one server we found that the SG does not move from one node to another when we try a manual failover using the command below:

    hagrp -switch <SGname> -to <sysname>

We can see that the SG goes offline on the current node, but it does not come online on the secondary node. There is no error logged in engine_A.log except the entry below:

    cpus load more than 60% <secondary node name>

Can anyone help me find the solution for this? I will provide the output of any commands if you need more info to help get this troubleshooted :) Thanks,
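That log entry looks load-related, so two things worth ruling out before digging into resources are load-based failover policy and a frozen or overloaded target node. A sketch of the checks (the group name is hypothetical):

    hagrp -display app_sg -attribute FailOverPolicy    # Priority, RoundRobin or Load
    hagrp -display app_sg -attribute Load              # load the group adds when online
    hagrp -display app_sg -attribute SystemList        # confirm the target is listed
    hasys -display node2 -attribute AvailableCapacity  # room left on the target node
    hasys -display node2 -attribute Frozen             # a frozen node never accepts groups

If FailOverPolicy is Load and AvailableCapacity on the target is below the group's Load, VCS will refuse to bring the group online there even on a manual switch.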

CNFS Share read-only on NFS Clients

I have a CNFS cluster with the following configuration. It is being used to serve CFS shares to clients on the network.

    -- System          State     Frozen
    A  Node1           RUNNING   0
    A  Node2           RUNNING   0

    -- GROUP STATE
    -- Group           System    Probed  AutoDisabled  State
    B  ClusterService  Node1     Y       N             OFFLINE
    B  ClusterService  Node2     Y       N             ONLINE
    B  cfsnfssg        Node1     Y       N             ONLINE
    B  cfsnfssg        Node2     Y       N             ONLINE
    B  cfsnfssg_dummy  Node1     Y       N             OFFLINE
    B  cfsnfssg_dummy  Node2     Y       N             OFFLINE
    B  cvm             Node1     Y       N             ONLINE
    B  cvm             Node2     Y       N             ONLINE
    B  vip1            Node1     Y       N             ONLINE
    B  vip1            Node2     Y       N             OFFLINE
    B  vxfen           Node1     Y       N             ONLINE
    B  vxfen           Node2     Y       N             ONLINE

After all the configuration on the cluster nodes, the client is able to mount the shared NFS volume, but is unable to write to it.

    root@Node1 # cfsshare display
    CNFS metadata filesystem : /locks
    Protocols Configured : NFS
    #RESOURCE  MOUNTPOINT  PROTOCOL  OPTIONS
    share1     /emm1       NFS       rw
    share2     /emm2       NFS       rw
    share3     /emm3       NFS       rw

I need to be able to write to these shares from the clients. Any help would be appreciated. Thanks.
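One common culprit when a share exported rw still refuses writes is root squashing: root on the client is mapped to nobody by the NFS server, so writes made as root fail even though the export itself is writable. A quick client-side sketch to narrow it down (the VIP address and user name are hypothetical):

    showmount -e 192.168.1.50               # confirm /emm1 is exported to this client
    mount 192.168.1.50:/emm1 /mnt/emm1
    mount | grep emm1                       # verify the mount itself shows rw
    su - appuser -c "touch /mnt/emm1/t1"    # test as a non-root user
    touch /mnt/emm1/t2                      # root write; fails if root is squashed

If only the root write fails, the fix is on the export side (for example adding no_root_squash to the share options and re-adding the share with cfsshare); check the cfsshare manual page for the exact option syntax on your version.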

VVR with Geo-cluster Implementation.

Hello, We are planning to implement a DR solution using VVR for data replication and geo-clustering for failover. The production side consists of a 2-node cluster (2 LDoms) and the DR side consists of a 1-node cluster (1 LDom). On the production side we have one LDom per customer with a single VCS service group for that customer, whereas some of the LDoms host several customers in a single cluster with multiple service groups (1 service group per customer). We need the ability to fail over a single customer to the DR site. In the case of a single LDom or environment per customer we can do this easily. My query is: in the case of multiple customers/service groups on a single LDom and in a single VCS cluster, will this cause any limitation, or is it just a case of failing over the service group for the particular customer?
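Per-service-group failover is exactly the granularity GCO works at, so the main design requirement is on the replication side: each customer's data needs to live in its own replicated volume group (its own RVG and RLINK) so that replication can be reversed for one customer without touching the others. A sketch of one customer's global group, with all names hypothetical:

    group custA_sg (
        SystemList = { prodldom1 = 0, prodldom2 = 1 }
        ClusterList = { prod_clus = 0, dr_clus = 1 }
        ClusterFailOverPolicy = Manual
        )

        RVGPrimary custA_rvg_prim (
            RvgResourceName = custA_rvg
            )

The RVGPrimary resource is what promotes the DR copy of that customer's RVG during a cross-cluster online, leaving the other customers' replication untouched.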

What Disaster Recovery Solution to use?

Hello, My company is planning to implement a DR solution for our production environment. The production environments we have are all 2-node failover VCS clusters (SFHA 6.0.1). The DR systems consist of a single node. The products used for this DR solution are VVR for data replication and geo-clustering for the failover. There is a layer-2 stretched network between the two sites (prod and DR). We are discussing 3 solutions:

1. Use VVR for data mirroring, create a single-node cluster on the DR site, and then create a geo-cluster between the prod and DR sites. Since the network is stretched, we plan to have the DR systems on the same VLAN/subnet as the prod systems. The application VIP will also be the same on both sites, as the service group is up on only one site at a time.
2. Use VVR for data mirroring and add the DR node as the third node in the cluster.
3. Have a pure stretched cluster.

Which of the solutions above is best suited for this purpose? Can you tell me the pros and cons of each option?
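Whichever option is chosen, it helps to keep in mind the VVR operations each design leans on day to day; a small sketch of the commands involved (disk group and RVG names hypothetical):

    vradmin -g appdg repstatus app_rvg     # replication health and RLINK state
    vradmin -g appdg migrate app_rvg       # planned role reversal (options 1 and 2)
    vradmin -g appdg takeover app_rvg      # unplanned takeover, run at the DR site

Option 3 (a pure stretched cluster) would drop VVR entirely in favor of host-based mirroring across sites, so these commands apply only to options 1 and 2.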

VERITAS STORAGE EXEC 5.5 REPLICATION PROBLEMS

Hi, I have two nodes in a cluster, and quotas defined on node A are not replicated to node B when I switch the cluster resources. Where are the policies and quotas defined? Locally, in C:\Program Files\Veritas\StorageExec\DB (SCAudit.mdb or SCTrend.mdb)? Shouldn't they be on a cluster resource? Thank you all for your answers

VCS Global Clustering - WAC Error V-16-1-10543 IpmServer::open Cannot create socket errno = 97

Hi, I have a customer who has two VCS clusters running on RHEL 5.6 servers. These clusters are further protected by site failover using GCO (global cluster option). All was working fine since installation, with remote cluster operations showing up on the local cluster, etc. But then this error started to appear in the wac_A.log file:

    VCS WARNING V-16-1-10543 IpmServer::open Cannot create socket errno = 97

Since then the cluster will not see the remote cluster's state, but it can ping it, as seen in the hastatus output below:

    site-ab04# hastatus -sum

    -- SYSTEM STATE
    -- System          State      Frozen
    A  site-ab04       RUNNING    0

    -- GROUP STATE
    -- Group           System     Probed  AutoDisabled  State
    B  ClusterService  site-ab04  Y       N             ONLINE
    B  SG_commonsg     site-ab04  Y       N             ONLINE
    B  SG_site-b04g3   site-ab04  Y       N             OFFLINE
    B  SG_site-b04g4   site-ab04  Y       N             OFFLINE
    B  SG_site-a04g0   site-ab04  Y       N             OFFLINE
    B  SG_site-a04g1   site-ab04  Y       N             OFFLINE
    B  SG_site-a04g2   site-ab04  Y       N             OFFLINE
    B  vxfen           site-ab04  Y       N             ONLINE

    -- WAN HEARTBEAT STATE
    -- Heartbeat  To         State
    M  Icmp       site-b04c  ALIVE

    -- REMOTE CLUSTER STATE
    -- Cluster    State
    N  site-b04c  INIT

Does anyone have any ideas? Networking all seems to be in order. Thanks, Rich
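On Linux, errno 97 is EAFNOSUPPORT ("address family not supported by protocol"), which points at wac trying to open a socket for an address family the host has disabled (IPv6 is the usual suspect on hardened RHEL builds) rather than at a routing problem, consistent with ICMP heartbeats still showing ALIVE. A sketch of checks worth running on the affected node:

    lsmod | grep ipv6                # is the ipv6 module loaded at all?
    haclus -value ClusterAddress     # confirm the cluster address is a valid IPv4 VIP
    hahb -display Icmp               # review the WAN heartbeat configuration

    # once the cause is fixed, restart wac by cycling the ClusterService group
    hagrp -offline ClusterService -sys site-ab04
    hagrp -online ClusterService -sys site-ab04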