
VCS Engine does not autostart

achebib
Level 4
Partner

Hi. I am running a FAT and, as part of it, performing a standard reboot test.

I have one cluster with two nodes and another server with Coordination Point Server.

When I restart one node, I verify that the service group fails over. The server shuts down cleanly and starts up cleanly with all native and configured VCS autostart-enabled resources.

However, when I invoke "init 0" on both nodes at the same time, the cluster goes down, and when I run hastatus -sum I get VCS ERROR V-16-1-10600 Cannot connect to VCS engine.

The cluster only comes online again after I manually start it with hastart.

I wonder whether this behaviour is normal.

Thanks in advance.

 

 

4 REPLIES

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

Hi

 

Please explain in more detail. You say "when I invoke 'init 0' on both nodes at the same time", but you also say "I have one cluster with two nodes and another server with Coordination Point Server."

 

How many nodes are there?

 

Post the contents of /etc/llthosts and /etc/gabtab
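On a two-node cluster they typically look something like this (node names and the path to gabconfig are placeholders, adjust for your environment):

/etc/llthosts:
0 node1
1 node2

/etc/gabtab:
/sbin/gabconfig -c -n2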

 

And then the steps you're following.

achebib
Level 4
Partner

Hi,

 

Thank you so much for your help.

I was following the FAT test procedure.

 

Prerequisites: VCS must be configured and online on all nodes.

Test Procedure:

Expected Result (steps 2, 5, and 8): the servers should shut down cleanly and start up cleanly, with all native and configured VCS autostart-enabled resources online.

1. Reboot one server by using the command 'init 6'.
2. Verify the Expected Result occurs.
3. Repeat steps 1 and 2 for each of the nodes within the cluster.
4. Reboot all of the servers simultaneously by using the command 'init 6'.
5. Verify the Expected Result occurs.
6. Shut down all the nodes within the cluster by using the 'init 0' command.
7. Power off, then power on all the servers.
8. Verify the Expected Result occurs.
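As a rough illustration of the Expected Result in steps 2, 5, and 8, on a healthy two-node cluster hastatus -sum should show both systems RUNNING and the configured failover group ONLINE on one node, something like this (system and group names are placeholders):

-- SYSTEM STATE
-- System               State                Frozen

A  node1                RUNNING              0
A  node2                RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  testgrp         node1                Y          N               ONLINE
B  testgrp         node2                Y          N               OFFLINE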

 

Remarks

Shutdown difficulties could be caused by poorly constructed dependencies.

Startup could be stopped by improper configuration of LLT, GAB or the cluster.

VCS service groups that are online may attempt to failover as configured during the reboots.

 
I was simply making a resource group with an IP and a NIC resource, and I forgot to link the IP to the NIC.
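For anyone hitting the same thing, something along these lines creates the two resources and adds the missing dependency (group name, resource names, device, and address are placeholders, not my real values):

# make the configuration writable
haconf -makerw

# group with a NIC and an IP resource
hagrp -add testgrp
hagrp -modify testgrp SystemList node1 0 node2 1
hares -add testnic NIC testgrp
hares -modify testnic Device eth0
hares -add testip IP testgrp
hares -modify testip Device eth0
hares -modify testip Address "10.10.12.100"
hares -modify testip NetMask "255.255.255.0"

# the link I had forgotten: IP (parent) requires NIC (child)
hares -link testip testnic

hares -modify testnic Enabled 1
hares -modify testip Enabled 1

# save and make the configuration read-only again
haconf -dump -makero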

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

OK, I'm still unclear as to what is wrong. If you perform any of the reboot / power cycle / init operations listed here, the nodes should form a cluster again.

 

Always use gabconfig -a to see which nodes have joined and whether the GAB and HAD ports are active.
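For example, on a healthy two-node cluster with fencing you would expect port a (GAB), port b (fencing) and port h (HAD) to each show both nodes in the membership, roughly like this (generation numbers are illustrative):

GAB Port Memberships
===============================================================
Port a gen   a36e0003 membership 01
Port b gen   a36e0006 membership 01
Port h gen   a36e0009 membership 01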

starflyfly
Level 6
Employee Accredited Certified

Have you checked whether VCS_START is set to 1?

 

About the VCS configuration files

VCS configuration files include the following:

  • main.cf

    The installer creates the VCS configuration file in the /etc/VRTSvcs/conf/config folder by default during the VCS configuration. The main.cf file contains the minimum information that defines the cluster and its nodes.

    See "Sample main.cf file for VCS clusters" and "Sample main.cf file for global clusters".

  • types.cf

    The file types.cf, which is listed in the include statement in the main.cf file, defines the VCS bundled types for VCS resources. The file types.cf is also located in the folder /etc/VRTSvcs/conf/config.

    Additional files similar to types.cf may be present if agents have been added, such as Oracletypes.cf.

  • /etc/sysconfig/vcs

    This file stores the start and stop environment variables for VCS engine:

    • VCS_START—Defines the startup behavior for VCS engine after a system reboot. Valid values include:

      1—Indicates that VCS engine is enabled to start up.

      0—Indicates that VCS engine is disabled from starting up.

    • VCS_STOP—Defines the shutdown behavior for VCS engine during a system shutdown. Valid values include:

      1—Indicates that VCS engine is enabled to shut down.

      0—Indicates that VCS engine is disabled from shutting down.

    The installer sets the value of these variables to 1 at the end of VCS configuration.

    If you manually configured VCS, make sure you set the values of these environment variables to 1.
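    A quick way to check on Linux, using the /etc/sysconfig/vcs path described above:

    grep -E '^VCS_(START|STOP)' /etc/sysconfig/vcs
    # both variables should be set to 1:
    # VCS_START=1
    # VCS_STOP=1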

Note the following information about the VCS configuration file after installing and configuring VCS:

  • The cluster definition includes the cluster information that you provided during the configuration. This definition includes the cluster name, cluster address, and the names of users and administrators of the cluster.

    Notice that the cluster has an attribute UserNames. The installvcs program creates a user "admin" whose password is encrypted; the word "password" is the default password.

  • If you set up the optional I/O fencing feature for VCS, then the UseFence = SCSI3 attribute is present.

  • If you configured the cluster in secure mode, the main.cf includes the VxSS service group and "SecureClus = 1" cluster attribute.

  • The installvcs program creates the ClusterService service group if you configured the virtual IP, SMTP, SNMP, or global cluster options.

    The service group also has the following characteristics:

    • The group includes the IP and NIC resources.

    • The service group also includes the notifier resource configuration, which is based on your input to installvcs program prompts about notification.

    • The installvcs program also creates a resource dependency tree.

    • If you set up global clusters, the ClusterService service group contains an Application resource, wac (wide-area connector). This resource’s attributes contain definitions for controlling the cluster in a global cluster environment.

      Refer to the Veritas Cluster Server Administrator's Guide for information about managing VCS global clusters.

Refer to the Veritas Cluster Server Administrator's Guide to review the configuration concepts, and descriptions of main.cf and types.cf files for Linux systems.
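As a rough illustration of the layout described above, a minimal main.cf with a ClusterService group might look like the following (the cluster name, node names, device, addresses, and password are placeholders, not values from any real configuration):

include "types.cf"

cluster clus1 (
        UserNames = { admin = "<encrypted password>" }
        ClusterAddress = "10.10.12.1"
        Administrators = { admin }
        )

system node1 (
        )

system node2 (
        )

group ClusterService (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1, node2 }
        )

        IP webip (
                Device = eth0
                Address = "10.10.12.1"
                NetMask = "255.255.255.0"
                )

        NIC csgnic (
                Device = eth0
                )

        webip requires csgnic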