Agents are in FAILED status. Below are the messages in the engineA.log file. Please let me know the cause of this issue.
VCS WARNING V-16-1-53025 Agent Script has faulted; ipm connection was lost; restarting the agent
VCS ERROR V-16-1-10015 Cannot star...
We are facing an issue when joining our encryption management servers into a cluster. Despite meeting all the environmental requirements, the cluster page displays that the join could not be completed, and the error message says that the remote ...
I have a three-node cluster and needed to bring down node 2 for maintenance. When I brought node 2 back, the VCS hardware group was offline and reporting an error.
How can I bring it back online?
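For anyone hitting the same thing: a common first step (a sketch using standard VCS commands; the group name `hardware_sg` and system name `node2` are placeholders for your own) is to clear any FAULTED state left over from the maintenance window and then online the group on the restored node:

```shell
# Check overall cluster, group, and resource status first
hastatus -sum

# Clear the FAULTED flag on the group (hypothetical group name hardware_sg)
hagrp -clear hardware_sg -sys node2

# Then try to bring the group online on the restored node
hagrp -online hardware_sg -sys node2
```

If the group faults again immediately, check the agent log for the failing resource before retrying.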
Hello to all,
recently I installed VCS 6.0.2 on Windows 2012 for a file-sharing service between two cluster nodes; the global group has been configured using the MirrorView agent.
The global resource will not come up because of a MirrorView fault. All reco...
We had a hardware failure, and on restarting the server we could not reach our mount points. We tried starting the server with hastart, but nothing started and we keep getting the error in the title above.
Kindly assist in resolving this issue.
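For what it's worth, when hastart appears to do nothing, a common cause is that GAB is not seeded (for example, after a hardware failure only one node is up, so the cluster never reaches its seed count). A diagnostic sketch using the standard LLT/GAB/VCS commands:

```shell
# Is cluster membership (GAB) formed? Port a = GAB membership, port h = HAD
gabconfig -a

# Check that the LLT heartbeat links are up between nodes
lltstat -nvv

# If you are certain the other node is genuinely down, you can force-seed GAB
# (use with care: this risks split-brain if the other node is actually alive)
gabconfig -x

# Then retry starting VCS
hastart
```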
Hi to all
I have Red Hat Linux 5.9 64-bit with SFHA 5.1 SP1 RP4 with fencing enabled (our storage device is an IBM Storwize V3700 SFF, SCSI-3 compliant).
[root@mitoora1 ~]# vxfenadm -d
I/O Fencing Cluster Information:
I recently brought up the backup server, which had been down for a long time due to a drive failure; the disks have been swapped and are now working as they should.
The OS is Solaris 9 on a Sun 240 server,
running VERITAS Cluster Server 4.1.
In the two nod...
I'm having an issue with one of our clusters: when I tried to fail over the service group, it failed with the errors below.
2013/10/30 22:22:19 VCS ERROR V-16-2-13006 (XXXXX) Resource(vdaappdg_dg): clean procedure did not complete within t...
Recently this message started appearing on the server.
Oct 14 08:38:15 db1 llt: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (ce1) node 1 in trouble
Oct 14 08:38:15 db1 llt: [ID 860062 kern.notice] LLT INFO V-14-1-10024 link 0...
We have started to receive the following warnings and messages about every 30 minutes or so.
Sep 29 10:45:27 node1 kernel: LLT INFO V-14-1-10205 link 0 (eth1) node 1 in trouble
Sep 29 10:45:27 node1 kernel: LLT INFO V-14-1-10205 link 1 (eth2) no...
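"link ... in trouble" generally means LLT heartbeats on that link are being missed (often a flapping NIC, a switch problem, or a bad cable). A quick diagnostic sketch with standard LLT commands; the interface names eth1/eth2 are taken from the log above:

```shell
# Show per-node, per-link LLT state; healthy links report "UP"
lltstat -nvv

# Link-level statistics: errors and retransmits point at a flaky network path
lltstat -l

# Verify the OS itself sees the heartbeat NICs as up (Linux example)
ip link show eth1
ip link show eth2
```

If only one link flaps, the cluster keeps running on the other, but it is worth fixing before the remaining link also degrades.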
What happened is that the MultiNICB resource faulted on both nodes,
which led to the failure of the proxies and other dependent resources configured in the service group.
Can you please let us know what the recovery procedure would be if the network get...
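Once the underlying network is restored, the usual recovery (a sketch with standard VCS commands; the resource name `multinicb_res`, group name `app_sg`, and system names are placeholders) is to clear the faulted MultiNICB and let the agents re-probe, then online the dependent group:

```shell
# Clear the faulted resource on each node (names are hypothetical)
hares -clear multinicb_res -sys node1
hares -clear multinicb_res -sys node2

# Re-probe so the agent re-evaluates the actual state
hares -probe multinicb_res -sys node1
hares -probe multinicb_res -sys node2

# Then bring the dependent service group online
hagrp -online app_sg -sys node1
```

The Proxy resources should recover on their own once the MultiNICB resource they point at reports ONLINE.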
We replaced one of our Solaris servers (swapped the hard drives into the new server) after a hardware failure. When the server came back up, all the applications we have on the servers in the cluster stopped functioning. All the servers' logs show th...
Hi All ,
I am getting the above messages from one of the nodes in VCS. We are running RHEL 5.8.
The messages refer to a particular filesystem that is mounted in rw mode, and no issues have been reported other than these messages.
I found an art...
Every day at the same time, we get the error below in syslog.
Could anyone please suggest how to resolve it?
Aug 21 21:50:09 l165ux12 vmunix: LLT INFO V-14-1-10063 llt_send_port: no memory to xmit
Aug 21 21:50:09 l165ux12 vmunix: LLT INFO V...
Engine_A has started to be flooded with the V-16-2-13027 error code; eventually some resources fault after multiple failed monitor cycles, and clean is called.
- PrivNIC is configured with two interfaces, which are the same two interfac...
I am having a problem with the VCS heartbeat links.
VCS is running on a Solaris V440 machine. The VCS version is 4.0 on Solaris 9; I know it's old and EOL. I'm just hoping to pinpoint the solution to this problem.
While investigating an RCA case for a hang in a Symantec FileStore NAS,
we raised a case with Symantec and found that this is a known bug (2384962).
The case we raised with Symantec was 418-911-600.
We are facing a similar issue in othe...
We detected an issue with an SFS NAS cluster and found that the filesystems cannot be confirmed with df, and general users cannot log in to the node.
SFS event log:
84441) 2013 Jun 19 17:42:39 kyornas051_01 sfsfs_event.network.alert: Node kyornas051_02 ...