VCS 5.1 Remote cluster configuration
Hello all, I am trying to link two VCS clusters together using the Remote Cluster Configuration Wizard, and I keep getting the following error message:

    VCS error V-16-10-39. The following errors were encountered while connecting to the cluster: Connection Refused. Please change the data and try again, or press Cancel to exit the wizard.

Has anyone run across this error before?
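A "Connection Refused" here generally means nothing answered on the VCS engine port (TCP 14141 by default) on the remote cluster's address, or a firewall rejected the connection. A minimal reachability check, assuming bash is available; the remote host name is a placeholder:

```shell
# Hedged sketch: test whether the VCS engine port (14141 by default) is
# reachable on a remote cluster node. "remote-node" is a placeholder name.
port_open() {                      # usage: port_open <host> <port>
    timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

if port_open remote-node 14141; then
    echo "had answers on remote-node:14141"
else
    echo "refused or timed out: check had is running (hastatus -sum) and any firewall in between"
fi
```

If the port is closed, confirm `had` is running on the remote cluster and that 14141 is permitted end to end before retrying the wizard.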
Unable to bring the Service Group online.

Hi All, I tried to bring a service group (SG) online on a node but it is not coming online. Let me explain the issue. We rebooted node aixprd001 and found that /etc/filesystems was corrupted, so the SG bosinit_SG was in a PARTIAL state, since many of the cluster file systems were not mounted. We corrected the entries and manually mounted all the file systems, but the SG still showed PARTIAL, so we ran the command below:

    hagrp -clear bosinit_SG -all

After that the SG showed ONLINE. To be safe, we then took the SG offline and tried to bring it back online, but it failed to come online. Below is the only error we could find in the engine_A.log file:

    2014/12/17 06:49:04 VCS NOTICE V-16-1-10166 Initiating manual online of group bosinit_SG on system aixprd001
    2014/12/17 06:49:04 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group bosinit_SG on all nodes

Please help me with suggestions; I will provide log output if needed. Thanks, Rufus
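When a group sticks like this, per-resource state is usually more telling than the group state. A hedged sketch of a triage filter; the Resource / Attribute / System / Value column layout of `hares -display` is an assumption, and the here-doc stands in for the live `hares -display -group bosinit_SG -attribute State` output:

```shell
# Hedged sketch: list the resources of a group that are not ONLINE, from
# `hares -display ... -attribute State` output (columns assumed to be
# Resource / Attribute / System / Value).
not_online() {
    awk '$2 == "State" && $4 != "ONLINE" { print $1 " on " $3 ": " $4 }'
}

# Sample standing in for:  hares -display -group bosinit_SG -attribute State
not_online <<'EOF'
bosinit_mnt1 State aixprd001 ONLINE
bosinit_mnt2 State aixprd001 FAULTED
bosinit_app State aixprd001 OFFLINE
EOF
```

Any FAULTED resource it surfaces has to be cleared (`hares -clear <res> -sys <sys>`) before the group can come online, and `hagrp -flush <group> -sys <sys>` clears a stuck pending online/offline.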
SG is not switching to next node.

Hi All, I am new to VCS but experienced with HACMP. In our environment we are using VCS 6.0. On one server we found that the SG does not move from one node to the other when we attempt a manual failover using the command below:

    hagrp -switch <SGname> -to <sysname>

We can see that the SG goes offline on the current node, but it does not come online on the secondary node. There is no error logged in engine_A.log except the entry below:

    cpus load more than 60% <secondary node name>

Can anyone help me find a solution? I will provide the output of any commands if you need more information to troubleshoot this. Thanks,
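That log line suggests a load-based admission check is rejecting the target. With `FailOverPolicy` set to Load, VCS only picks a system whose available capacity (Capacity minus current load) can absorb the group's Load attribute; `hagrp -value <SG> FailOverPolicy` and `hasys -value <node> AvailableCapacity` will show whether that applies here. A hedged sketch of the arithmetic, with made-up numbers:

```shell
# Hedged sketch of the Load-policy admission check: a system can host a
# group only if Capacity - current load >= group Load. Numbers are
# illustrative, not from the cluster in question.
can_host() {                       # usage: can_host <capacity> <sys_load> <group_load>
    awk -v c="$1" -v l="$2" -v g="$3" 'BEGIN { exit !(c - l >= g) }'
}

can_host 100 60 30 && echo "node can take the group"      # 40 free >= 30
can_host 100 80 30 || echo "node rejected: only 20 free"  # 20 free < 30
```

If the policy is plain Priority, the message may instead come from a PreOnline trigger or monitoring script; `hagrp -value <SG> PreOnline` is worth checking too.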
Unable to add user parameter while configuring Apache Agent

I'm using VCS 4.0 on Linux (Cluster Manager 4.4, Cluster Server 4.1). When I try to do "Import Types" and import vcsApacheTypes.cf, I don't see a User parameter there. Moreover, if I try to manually add a User parameter, that doesn't work either. Please let me know how to configure the Apache agent so that the apache process starts as a specified user, instead of the root user under which I installed VCS.
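For what it's worth, the User attribute is simply absent from some 4.x builds of the bundled Apache type definition, while later ApacheTypes.cf files carry it. One commonly suggested route is to add the attribute to the type yourself and re-import; below is a hedged sketch of such a types.cf entry (the ArgList is abbreviated and the exact 4.x attribute set is an assumption, so compare it with your existing vcsApacheTypes.cf first):

```
type Apache (
	static str ArgList[] = { httpdDir, ConfigFile, EnvFile, User }
	str httpdDir
	str ConfigFile
	str EnvFile
	str User = root
)
```

Whether the 4.x agent scripts actually honor User is version-dependent; if they do not, a common workaround is to point the agent's start program at a wrapper script that runs `su - <user> -c '<httpd command>'`.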
VCS AutoStartList ungracefully failover.

I have a cluster set up and everything seems to work as expected except one test case: ungraceful shutdowns. When it comes to an ungraceful shutdown, the group seems able to fail over in only one direction, and that order seems to be determined by the AutoStartList. If the group is already running on the last system in the list, it will not go back to the first system:

    System A is ungracefully shut down > System B sees the fault and starts the resources.
    System A is brought back online and all errors are cleared.
    System B is ungracefully shut down > System A sees the fault but does not start any of the resources.

Is this correct? Is there a way to force it to try the first system in the list?
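A point worth checking: AutoStartList only governs where a group starts at cluster boot. The failover target after a fault is chosen from SystemList (by priority number, under the default FailOverPolicy of Priority), so if System A is missing from SystemList, or the group's AutoFailOver attribute is 0, A will never be chosen after B faults. A hedged sketch of that priority selection, with hypothetical system names:

```shell
# Hedged sketch of Priority failover-target selection: among systems that
# are still running, pick the lowest SystemList priority number. System
# names and priorities are hypothetical.
pick_target() {                    # usage: pick_target <faulted_sys> <sys> <prio> ...
    local faulted=$1; shift
    printf '%s %s\n' "$@" |
        awk -v f="$faulted" '$1 != f && (best == "" || $2 + 0 < best + 0) { best = $2; sys = $1 } END { print sys }'
}

pick_target sysB sysA 0 sysB 1     # prints sysA
```

`hagrp -value <SG> SystemList` and `hagrp -value <SG> AutoFailOver` will show whether both systems are actually eligible in each direction.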
VCS Global Clustering - WAC Error V-16-1-10543 IpmServer::open Cannot create socket errno = 97

Hi, I have a customer who has two VCS clusters running on RHEL 5.6 servers. These clusters are further protected by site failover using GCO (Global Cluster Option). All was working fine since installation, with remote cluster operations showing up on the local cluster, etc. But then this error started to appear in the wac_A.log file:

    VCS WARNING V-16-1-10543 IpmServer::open Cannot create socket errno = 97

Since then the cluster cannot see the remote cluster's state, but it can ping it, as seen in the hastatus output below:

    site-ab04# hastatus -sum

    -- SYSTEM STATE
    -- System          State      Frozen
    A  site-ab04       RUNNING    0

    -- GROUP STATE
    -- Group           System     Probed    AutoDisabled    State
    B  ClusterService  site-ab04  Y         N               ONLINE
    B  SG_commonsg     site-ab04  Y         N               ONLINE
    B  SG_site-b04g3   site-ab04  Y         N               OFFLINE
    B  SG_site-b04g4   site-ab04  Y         N               OFFLINE
    B  SG_site-a04g0   site-ab04  Y         N               OFFLINE
    B  SG_site-a04g1   site-ab04  Y         N               OFFLINE
    B  SG_site-a04g2   site-ab04  Y         N               OFFLINE
    B  vxfen           site-ab04  Y         N               ONLINE

    -- WAN HEARTBEAT STATE
    -- Heartbeat       To         State
    M  Icmp            site-b04c  ALIVE

    -- REMOTE CLUSTER STATE
    -- Cluster         State
    N  site-b04c       INIT

Does anyone have any ideas? Networking all seems to be in order. Thanks, Rich
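On Linux, errno 97 is EAFNOSUPPORT ("address family not supported by protocol"), which typically appears when a daemon tries to open a socket for an address family that the host has disabled, e.g. wac attempting IPv6 after an OS change turned IPv6 off. It is worth confirming whether wac still holds its listener on its default port (TCP 14155), for instance with `ss -lnt` or `netstat -an`. A hedged sketch that flags non-RUNNING remote clusters from `hastatus -sum` output; the here-doc stands in for the live command:

```shell
# Hedged sketch: flag GCO remote clusters that are not RUNNING, from the
# "N <cluster> <state>" lines of `hastatus -sum` output.
remote_not_running() {
    awk '$1 == "N" && $3 != "RUNNING" { print "remote cluster " $2 " is " $3 }'
}

# Sample standing in for:  hastatus -sum | remote_not_running
remote_not_running <<'EOF'
-- REMOTE CLUSTER STATE
-- Cluster         State
N  site-b04c       INIT
EOF
```

If the socket error stems from an IPv4/IPv6 mismatch, restarting wac after correcting the address-family configuration is the usual way to get the remote cluster out of INIT.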
VCS with GCO\Windows2012\VMware 5.1\IBM SVC SAN based mirroring

We are looking at potentially using VCS for a cross-site DR project and have been told by a solution provider that we can use the GCO option in VCS to support Windows 2012 running on VMware with IBM SVC SAN-based mirroring. They have not been very clear about how it would actually work, and I cannot discern from the VCS 6.1 docs how it would be possible. The constraints we are working with are that we must use the native volume managers, the native file systems, and IBM SVC SAN-based replication. I can understand how it might work if RDMs are used for the VMs, but they are saying that we can use VMDKs inside VMFS datastores and still achieve our goals. Can anyone verify this is true, and if it is, explain how it is achieved? I understand how it works with local clustering using the NativeDisks and VMwareDisks agents, but it isn't clear how it works under GCO. The issue I see with it is that there does not appear to be anything that manipulates the VMFS datastores or the LUNs they are made up from. I would really appreciate it if someone could explain exactly which agents are necessary and how they tie into VMware to ensure that the SAN-replicated LUNs\VMFS datastores are not mounted or accessed at the remote site. Thank you, Ian.
VCS for SQL Server 2008 R2

Hi, I need some help and I hope someone here can help me. I am trying to create a disaster recovery cluster for our SQL databases. There are two cluster servers at each location. I have created a cluster in location A with the notification option and geocluster option enabled. I created a second cluster in location B, also with the notification and geocluster options enabled. I have not linked the clusters. I need to install three instances of SQL. I have a few questions, though:

To install three instances of SQL, do I link the clusters first? Is the installation process (and storage migration) A1 -> A2 and separately B1 -> B2, with geoclustering later? Or is it geocluster first, then install A1 -> A2 -> B1 -> B2? Do I repeat this process for EACH instance?

One of the instances needs to use SQL Reporting Services. Can I tie IIS to the SSRS service group? The goal would be that IIS runs on the server hosting SSRS but not on any of the other servers. How would this be done? All help is appreciated.
Inter cluster communication

I am trying to automate scripts that we run across multiple clusters during a disaster recovery scenario. We have an HTC (Hitachi TrueCopy) resource on the database cluster that makes the local storage a P-VOL before importing the database disk group. This includes database storage on the local cluster as well as application storage located on another CFS cluster. We currently run one script to fail the database over to the DR site. Once the storage is failed over, we run another script on the CFS cluster to unfreeze (persistent) the service groups we need to run, 'vxdg -Cs import' the shared disk group, run fsck on the shared file systems we are importing, and then start the shared mount points that have CVMVolDg / CFSMount resources. I am looking for a way to tie these two scripts together, but we have security restrictions in our environment such that there is no root-to-root communication between the clusters. TCP port 14141 is enabled between the clusters. Does anyone have any suggestions for kicking off the script on the second CFS cluster? I have been able to kick off a trigger from the database cluster to the application cluster, and was thinking about invoking a preonline trigger to import the shared disk group and run fsck. I was also thinking of invoking a postoffline trigger to deport the shared disk group, but the postoffline trigger only accepts two arguments, <system> and <group>. One issue with triggers is that I need to make sure they run on only one node. I would do this using:

    export VCS_HOST=cluster-vip
    halogin admin
    hatrigger -preonline 0 <CVM master> <group> IMPORT

The preonline script would then check the fourth argument, see that it is IMPORT, and run the 'vxdg -Cs import' and 'fsck' commands.
I guess another way to do this would be to set UserIntGlobal to 1 for the service group, which the preonline and postoffline triggers could check before importing/deporting the shared disk group. But then I am left trying to ensure that only one system (the CVM master) runs the import command, and that only one system runs the deport command after all other CFS service groups are offline. In these clusters, the CFS mount points will not necessarily all be mounted on the CVM master, and the CVM master won't always be the last node to offline the shared mount point. Does anyone have any suggestions?
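For the "only one node runs the import" part, the trigger body can simply compare the local node against the current CVM master and exit quietly everywhere else. A hedged sketch of that guard; the disk group and volume names are hypothetical, the `vxdctl -c mode` output format is an assumption, and the fsck options for VxFS are platform-specific:

```shell
# Hedged sketch of a preonline-style trigger body: only the CVM master
# performs the import and fsck; every other node does nothing and exits 0.
run_on_master_only() {             # usage: run_on_master_only <this_node> <master> <cmd...>
    me=$1; master=$2; shift 2
    [ "$me" = "$master" ] && "$@"
    return 0
}

# The awk pattern assumes `vxdctl -c mode` prints a "master: <node>" line.
MASTER=$(vxdctl -c mode 2>/dev/null | awk '/master:/ { print $NF }')

run_on_master_only "$(hostname)" "$MASTER" sh -c '
    vxdg -Cs import appdg
    fsck -y /dev/vx/rdsk/appdg/appvol   # use the VxFS-specific fsck flags for your OS
'
```

The deport side is harder for exactly the reason described: the last node to take its CFS mount offline is not necessarily the master, so a postoffline trigger would still need to count remaining online CFS groups before deporting.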