UUID not configured for VCS

nore
Level 3

Greetings

I'm trying to cluster two nodes using VCS, so I downloaded VRTS_SF_HA_Solutions_6.1_RHEL.tar.gz, extracted it, and then installed it on RHEL 6.

In the final steps, the cluster should be configured, but I'm not able to get the UUID configured correctly.

Here are the logs for both nodes, from the host (balancer) used for the installation:

[root@balancer rhel6_x86_64]# tail /opt/VRTS/install/logs/installer-201409221212LNM/start.had.node1
2014/09/22 12:27:42 VCS INFO V-16-1-10118 GAB timeout set to 30000 ms. GAB peak load timeout set to 30000 ms.
2014/09/22 12:27:42 VCS NOTICE V-16-1-11057 GAB registration monitoring timeout set to 200000 ms
2014/09/22 12:27:42 VCS NOTICE V-16-1-11059 GAB registration monitoring action set to log system message
2014/09/22 12:27:47 VCS INFO V-16-1-10077 Received new cluster membership
2014/09/22 12:27:47 VCS NOTICE V-16-1-10112 System (node1) - Membership: 0x1, DDNA: 0x2
2014/09/22 12:27:47 VCS NOTICE V-16-1-10086 System node1 (Node '0') is in Regular Membership - Membership: 0x1
2014/09/22 12:27:47 VCS NOTICE V-16-1-10322 System node1 (Node '0') changed state from CURRENT_DISCOVER_WAIT to LOCAL_BUILD
2014/09/22 12:27:48 VCS ERROR V-16-1-10614 Cluster UUID is not configured or it is empty,  on system node1 - VCS Stopping. Manually Restart VCS after configuring Cluster UUID.
0 12:21:29 cmd exit=0 (duration: 0 seconds)
0 12:21:29 ## The proc_start_sys() return '0'
[root@balancer rhel6_x86_64]# tail /opt/VRTS/install/logs/installer-201409221212LNM/start.had.node2
2014/09/22 12:29:56 VCS INFO V-16-1-10118 GAB timeout set to 30000 ms. GAB peak load timeout set to 30000 ms.
2014/09/22 12:29:56 VCS NOTICE V-16-1-11057 GAB registration monitoring timeout set to 200000 ms
2014/09/22 12:29:56 VCS NOTICE V-16-1-11059 GAB registration monitoring action set to log system message
2014/09/22 12:30:01 VCS INFO V-16-1-10077 Received new cluster membership
2014/09/22 12:30:01 VCS NOTICE V-16-1-10112 System (node2) - Membership: 0x2, DDNA: 0x1
2014/09/22 12:30:01 VCS NOTICE V-16-1-10086 System node2 (Node '1') is in Regular Membership - Membership: 0x2
2014/09/22 12:30:01 VCS NOTICE V-16-1-10322 System node2 (Node '1') changed state from CURRENT_DISCOVER_WAIT to LOCAL_BUILD
2014/09/22 12:30:01 VCS ERROR V-16-1-10614 Cluster UUID is not configured or it is empty,  on system node2 - VCS Stopping. Manually Restart VCS after configuring Cluster UUID.
0 12:23:40 cmd exit=0 (duration: 0 seconds)
0 12:23:40 ## The proc_start_sys() return '0'


On node1:

[root@node1 bin]# cat /etc/llthosts
0 node1
1 node2

 

[root@node1 bin]# ./uuidconfig.pl -clus -configure -use_llthost

No UUID is configured on node1 node2
Error String: Configuring new UUID on node1 node2
Generating a new UUID for the cluster ...Done

Populating UUID on cluster nodes: node1 node2
        Setting UUID on system node1 ...command scp /tmp/clusuuid.8711 [node1]:/etc/vx/.uuids/clusuuid returned 256

[root@node1 bin]# ll -a /etc/vx
total 52
drwxr-xr-x.   4 root root  4096 Sep 19 16:24 .
drwxr-xr-x. 115 root root 12288 Sep 22 12:08 ..
-r--r--r--.   1 root root  4898 Oct 22  2013 amf-modinst-script
drwxr-xr-x.   3 root root  4096 Sep 19 16:24 dcli
drwxr-xr-x.   5 root root  4096 Sep 19 16:20 licenses
-r--r--r--.   1 root root 16596 Oct 22  2013 modinst-script

[root@cluster1 bin]# ./uuidconfig.pl -clus -display -use_llthost

Finding existing UUID information ...

node1 .... does not exist.
node2 .... does not exist.
Done

No UUID is Configured


NO_UUID :       node1 node2


Any help is appreciated.

 

Thanks

13 REPLIES

Gaurav_S
Moderator
VIP Certified

Hi,

Can you paste the contents of /etc/llttab from both hosts?

Does it contain a unique cluster ID?

 

G

mikebounds
Level 6
Partner Accredited

Do you have a /etc/vx/.uuids/clusuuid file on either node? If so, you can just copy the file manually from one node to the other. However, it looks like the uuidconfig.pl Perl script is using scp to copy to itself, so I suspect neither node has a /etc/vx/.uuids/clusuuid file. But if the temporary file it was copying (/tmp/clusuuid.8711) is still there, then you can copy that to both nodes.
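
i.e. roughly, on the node that still has the temp file (just a sketch; assumes root scp between the nodes works):

mkdir -p /etc/vx/.uuids
cp /tmp/clusuuid.8711 /etc/vx/.uuids/clusuuid
ssh node2 mkdir -p /etc/vx/.uuids
scp /tmp/clusuuid.8711 node2:/etc/vx/.uuids/clusuuid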

Alternatively, if the clusuuid file is not in /tmp, then use:

/opt/VRTSvcs/bin/uuidconfig.pl -clus -configure node1

and then copy the resulting /etc/vx/.uuids/clusuuid file to the other node in the cluster.
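
After that succeeds, something like this (untested sketch, assuming passwordless root ssh/scp between the nodes):

ssh node2 mkdir -p /etc/vx/.uuids
scp /etc/vx/.uuids/clusuuid node2:/etc/vx/.uuids/clusuuid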

Mike

nore
Level 3

[root@node1 bin]# cat /etc/llttab
set-node node1
set-cluster 12345
link eth1 eth-00:0c:29:17:4d:92 - ether - -
link-lowpri eth2 eth-00:0c:29:17:4d:9c - ether - -

 

[root@node2 ~]# cat /etc/llttab
set-node node2
set-cluster 12345
link eth1 eth-00:0c:29:93:19:ee - ether - -
link-lowpri eth2 eth-00:0c:29:93:19:f8 - ether - -

 

nore
Level 3

/etc/vx/.uuids doesn't exist on either node, and

[root@node1 bin]# cat /tmp/clusuuid.*

... comes back empty; nothing is there.

 

nore
Level 3

That directory is not created here either.

[root@node1 bin]# ./uuidconfig.pl -clus -configure node1

No UUID is configured on node1
Error String: Configuring new UUID on node1
Generating a new UUID for the cluster ...Done

Populating UUID on cluster nodes: node1
        Setting UUID on system node1 ...command scp /tmp/clusuuid.9175 [node1]:/etc/vx/.uuids/clusuuid returned 256

[root@node1 bin]# ll -a /etc/vx
total 52
drwxr-xr-x.   4 root root  4096 Sep 19 16:24 .
drwxr-xr-x. 115 root root 12288 Sep 22 12:08 ..
-r--r--r--.   1 root root  4898 Oct 22  2013 amf-modinst-script
drwxr-xr-x.   3 root root  4096 Sep 19 16:24 dcli
drwxr-xr-x.   5 root root  4096 Sep 19 16:20 licenses
-r--r--r--.   1 root root 16596 Oct 22  2013 modinst-script

[root@node1 bin]# cat /tmp/clusuuid.9175

... also empty.

mikebounds
Level 6
Partner Accredited

I would try manually creating the directory "/etc/vx/.uuid" and then running "./uuidconfig.pl -clus -configure node1" again.

 

Mike

nore
Level 3

Just tried that; the .uuid directory stays empty after running uuidconfig.

nore
Level 3

Nothing seems to work for generating the UUID. Can I just create a UUID randomly and put it on both nodes? And is there a specific format for it?

rsharma1
Level 5
Employee Accredited Certified

What's the output of "/opt/VRTSvcs/bin/osuuid list" on each node? (The cluster UUID is generated based on the osuuid; the osuuid files are generally under /.osuuid or /etc/.osuuid, generated by the OS image.)
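
For a quick check on each node, something like this (just a sketch; the exact osuuid file location can vary by OS image):

/opt/VRTSvcs/bin/osuuid list
ls -la /.osuuid /etc/.osuuid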

 

mikebounds
Level 6
Partner Accredited

uuidconfig.pl runs:
/opt/VRTSvcs/bin/osuuid list --new --loc /opt/VRTSvcs/bin

so try running this. It should output a UUID to the screen, and if it does, redirect it into /etc/vx/.uuids/clusuuid and copy the file to the other node.
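
In other words, something like this on node1 (rough sketch; the mkdir matters since your /etc/vx/.uuids doesn't exist yet, and the scp assumes root access to node2):

mkdir -p /etc/vx/.uuids
/opt/VRTSvcs/bin/osuuid list --new --loc /opt/VRTSvcs/bin > /etc/vx/.uuids/clusuuid
ssh node2 mkdir -p /etc/vx/.uuids
scp /etc/vx/.uuids/clusuuid node2:/etc/vx/.uuids/clusuuid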

This should create the UUID, but I am still not sure if VCS will start up. If it doesn't, I think you will need to log a support call with Symantec.

Mike

 

nore
Level 3

[root@node1 bin]# ./osuuid list
{4724eb82-4000-11e4-8240-4bc40957167a}

 

[root@node2 bin]# ./osuuid list
{1b4f0c36-4000-11e4-906b-d24bff740a19}

nore
Level 3

Ok, tried that and it didn't work out.

[root@node1 bin]# cat /etc/vx/.uuid/clusuuid
{875c1c44-1dd2-11b2-944d-2e9c867e92f3}

[root@node2 bin]# cat /etc/vx/.uuid/clusuuid
{875c1c44-1dd2-11b2-944d-2e9c867e92f3}

I forgot to mention that the whole configuration problem was due to "had" failing to start.

This happens when I run "installer" and choose the "Configure an installed product" option.

Anyway, the log still shows the UUID issue; find the start.had.node1 log at https://bpaste.net/show/f3b15854c3cf

 

Not sure about the support call, as I am currently using the trial version to test whether the clustering setup works as expected with our product.

nore
Level 3

OK, I figured it out. The issue was that node1/node2 were configured incorrectly in the /etc/hosts file, and when I added the UUID manually, I used the wrong directory (.uuid) where it should have been .uuids.
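
For anyone who runs into the same thing: make sure /etc/hosts on both nodes resolves the node names to the right addresses (the addresses below are made up, just to show the shape):

192.168.1.11   node1
192.168.1.12   node2

and put the identical UUID file on both nodes at /etc/vx/.uuids/clusuuid (note the trailing "s").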

Thanks, everybody. I still have many questions regarding VCS; should I open a new thread for those?