Forum Discussion

mikebounds
Level 6
11 years ago

CVM won't start on remote node with an FSS diskgroup

I am testing FSS (Flexible Shared Storage) on SF 6.1 on RH 5.5 in a VirtualBox VM, and when I try to start CVM on the remote node I get: VCS ERROR V-16-20006-1005 (r55v61b) CVMCluster:cvm_clus:mon...
  • mikebounds
    11 years ago

    The issue in this post, which was that CVM would not start if an FSS diskgroup was present, giving the error message:

    CVMCluster:cvm_clus:monitor:node - state: out of cluster reason: Disk for disk group not found: retry to add a node failed

    was resolved by recreating a separate diskgroup that was purely CVM (no exported disks).  The likely issue was UDID mismatches or conflicts. It would appear that with non-FSS failover and CVM diskgroups, all that is required is that VxVM can read the private region, but with FSS diskgroups my theory is that the UDID is also used: if you export a disk, it should only show as a remote disk on other systems if the same disk can NOT be seen on the SAN of the remote system, and VxVM needs the UDID to determine this.
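
    To compare what VxVM has constructed on each node, something like the following should work (the disk name disk_0 is just an example, substitute the disk in question):

        # Run on each node: show the UDID VxVM constructed for this disk
        vxdisk list disk_0 | grep -i udid

        # If VxVM flags the disk with udid_mismatch, updateudid refreshes
        # the UDID stored in the private region from the newly constructed one
        vxdisk updateudid disk_0

    If the two nodes report different UDIDs for what is really the same underlying disk, VxVM will treat them as different disks.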

    Hence in VirtualBox the same disk will normally show as having a different UDID when viewed from different systems, and if this disk is shared I did indeed see a single disk presented on one server BOTH via the SAN and as a remote disk. But when I made the UDIDs the same by changing the hostname of one of the nodes, so that both nodes had the same hostname and hence the same constructed UDID, VxVM correctly identified that the remote disk was available via the SAN and hence ONLY showed the disk as a SAN-attached disk and not also as a remote disk.
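
    To see which disks a node regards as locally (SAN) connected versus FSS remote disks, standard VxVM/FSS commands along these lines can be used (again, disk_0 is just an example name):

        # List all disks the node can see - with FSS, exported remote disks
        # appear alongside locally connected (SAN) disks
        vxdisk -o alldgs list

        # Export a local disk for FSS use, or withdraw it again
        vxdisk export disk_0
        vxdisk unexport disk_0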

    Although in my opening post I was not exporting any shared SAN disks (only local disks), I believe the UDID checking performed when auto-importing the diskgroups caused the issue.
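
    After recreating the CVM-only diskgroup, checks like the following should confirm CVM has joined cleanly on both nodes (cvm_clus is the resource name from the error above; your resource and group names may differ):

        # Cluster membership as VxVM sees it
        vxclustadm nidmap

        # VCS view of the cluster and of the CVMCluster resource
        hastatus -sum
        hares -state cvm_clus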

    Mike