
cluster_script fails with NB 6.5 and Solaris Cluster 3.2

peebee
Level 2

Hi,

I am getting an error when trying to configure a 2-node NetBackup Master cluster (using Solaris Cluster) by running the cluster_config script...

Background -
2 x SunFire 280R, Solaris 10 10/09, Solaris Cluster 3.2, 2010.09.17 Recommended Patch Cluster, NetBackup Enterprise Server 6.5.

So far -
Configured Solaris Cluster,
Created shared zpool/zfs file system mounted as /opt/openv on node1,
Ran NetBackup installation on node1 using shared hostname nbmaster,
Unmounted /opt/openv, exported zpool and did the reverse on node2,
Ran NetBackup installation on node2 using shared hostname nbmaster,
Unmounted /opt/openv, exported zpool and did the reverse on node1,
Ran /usr/openv/netbackup/bin/cluster/cluster_config on node1 and specified NetBackup Master Server failover, Nodes "node1 node2", logical hostname "nbmaster" & shared directory path "/opt/openv"
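For reference, the "unmounted /opt/openv, exported zpool and did the reverse" steps look roughly like this (the pool name "nbupool" is my placeholder; substitute whatever the shared zpool is actually called):

```shell
# On the node giving up the shared storage:
zpool export nbupool

# On the node taking it over:
zpool import nbupool
zfs set mountpoint=/opt/openv nbupool   # make sure it mounts at /opt/openv
df -h /opt/openv                        # confirm the file system is mounted
```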

Problem -
scnb-hars resource cannot be verified on node2...

root@node1:/root # /usr/openv/netbackup/bin/cluster/cluster_config
(...blah...)
No Portlist specified ...
Would use default 13724/tcp list.
No nafo groups or network adapters specified ...
Will try to auto-discover the network adapters and configure them into nafo groups.
Creating a failover instance ...
Registering resource type <VRTS.scnb>...done.
Creating failover resource group <scnb-harg>...done.
Creating logical host resource <nbmaster>...done.
Creating resource <scnb-hars> for the resource type <VRTS.scnb>...(C189917) VALIDATE on resource scnb-hars, resource group scnb-harg, exited with non-zero exit status.
(C720144) Validation of resource scnb-hars in resource group scnb-harg on node node2 failed.
FAILED: scrgadm -a -j scnb-hars  -g scnb-harg  -t VRTS.scnb -y scalable=false -y Port_list=13724/tcp -y Network_resources_used=nbmaster
There was a problem creating the resources and resource group.
Please fix the error and run cluster_config again.

The following entries appear in /var/adm/messages...
Dec 21 14:20:12 node1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <scnb_validate> for resource <scnb-hars>, resource group <scnb-harg>, node <node1>, timeout <300> seconds
Dec 21 14:20:13 node1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <scnb_validate> completed successfully for resource <scnb-hars>, resource group <scnb-harg>, node <node1>, time used: 0% of timeout <300 seconds>

Dec 21 14:20:13 node2 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <scnb_validate> for resource <scnb-hars>, resource group <scnb-harg>, node <node2>, timeout <300> seconds
Dec 21 14:20:13 node2 Cluster.RGM.fed: [ID 838032 daemon.error] scnb-harg.scnb-hars.2: Couldn't run method tag. Error in execve: No such file or directory.
Dec 21 14:20:13 node2 Cluster.RGM.global.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <scnb-hars>, resource group <scnb-harg>, time used: 0% of timeout <300, seconds>
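The "Error in execve: No such file or directory" from node2 suggests the VRTS.scnb method scripts exist on node1 but not on node2. A rough way to check (package and path names here are assumptions - the RT_basedir property of the resource type shows where the methods are actually expected):

```shell
# Run on BOTH nodes and compare:
pkginfo | grep -i scnb                 # is the agent package installed on node2?
scrgadm -pv -t VRTS.scnb | grep -i basedir   # where do the method scripts live?
# Then confirm the scnb_validate method exists at that path on node2.
```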

(See attached files for further info)


The cluster_config script says "Make sure you enter the path exactly as it appears in /etc/vfstab" - as I'm using ZFS for the shared file system, there are no /etc/vfstab entries. So I also tried a UFS file system (built on a ZFS volume, purely to get cluster_config working) with /etc/vfstab entries on both nodes, but this gives the same error.
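One way (my assumption, not something from this thread) to get a /etc/vfstab entry while still using ZFS is a legacy mountpoint, so the dataset is mounted via mount(1M)/vfstab rather than automatically:

```shell
# Dataset name "nbupool/openv" is a placeholder.
zfs set mountpoint=legacy nbupool/openv

# Then add to /etc/vfstab on both nodes:
# nbupool/openv  -  /opt/openv  zfs  -  no  -
```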

Is there anything wrong with the method I've followed? I'm still not sure if I'm supposed to install NetBackup on each node to a local /opt/openv and have a different shared directory path (e.g. /opt/openv/shared)?

Thanks in advance

1 ACCEPTED SOLUTION

Accepted Solutions

Marianne
Level 6
Partner    VIP    Accredited Certified

Have you verified all the requirements listed on p.58 of the High Availability Guide?

Also follow the installation steps in the HA Guide as well as the NBU install guide.

NBU must be installed on local disk - not the shared filesystem. The cluster_config script will create all databases on the share and create symbolic links. You need to have a different mount point for the cluster share.

I cannot find any ZFS info - not sure that it is supported for a clustered NBU server (ZFS is listed in the OS compatibility guide, but I see nothing cluster-specific). I see the following requirements regarding storage/filesystems:

  • For a configuration that uses HA Storage Plus, first mount the disk on the machine that you want to configure NetBackup on.
  • The shared disk must be configured and accessible to all cluster nodes on which you want to install NetBackup. See the Sun Cluster documentation for more information on how to create and configure a shared disk...
  • You must be able to mount the disk on all nodes at the same time (i.e., a global file system).
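A global (cluster) file system is what satisfies that last point: it is mounted with the "global" option so all nodes see it at once. A sketch, with placeholder device paths:

```shell
# /etc/vfstab entry (identical on every node):
# /dev/global/dsk/d4s0  /dev/global/rdsk/d4s0  /opt/openv  ufs  2  yes  global,logging

mount /opt/openv    # run on one node; the mount becomes visible cluster-wide
```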

Please don't forget the rsh requirement (NO workaround, no alternative...):

Make sure that each node in the cluster, on which you want to install NetBackup, is rsh equivalent. As the root user you need to be able to perform a remote logon to each node in the cluster without entering a password. This configuration is only necessary for installation, upgrades, and configuration of the NetBackup server and any NetBackup database agents and options. After installation and configuration is complete this configuration is no longer required.
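A minimal sketch of setting up that rsh equivalence for root on Solaris 10, using the node names from this thread:

```shell
# On each node, allow the peer's root user in /.rhosts:
echo "node1 root" >> /.rhosts
echo "node2 root" >> /.rhosts

# Enable the rsh service:
svcadm enable svc:/network/shell:default

# Verify from node1 - this must work without a password prompt:
rsh node2 uname -n
```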

 

Also ensure that you have /etc/hosts entries for the virtual IP and hostname on both nodes.
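For example (the IP address is a placeholder; nbmaster is the logical hostname from this thread):

```shell
# Identical entry in /etc/hosts on both nodes:
# 192.168.10.50   nbmaster

grep nbmaster /etc/hosts    # verify on node1 AND node2
```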

 

Any reason why you chose NBU 6.5 and not 7.0? Clustered installation and configuration is a lot easier than in 6.5.


3 REPLIES 3


peebee
Level 2

Hi Marianne,

 

We're tied to NBU 6.5 I'm afraid (customer's spec).

Thanks for the info - I've now installed NBU locally on each node and am reading up on Solaris Cluster global/cluster file systems (clustering NBU inside a Solaris zone was much easier than this!).

Hopefully, the cluster_config script can then create the SUNW.HAStoragePlus "scnb-hasp" resource.
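If it helps anyone following along, creating an HAStoragePlus resource by hand looks roughly like this on Solaris Cluster 3.2 (the zpool name is my placeholder; the resource/group names are the ones cluster_config uses in this thread):

```shell
# Register the storage resource type, then create the resource in the
# existing NetBackup resource group:
clresourcetype register SUNW.HAStoragePlus
clresource create -g scnb-harg -t SUNW.HAStoragePlus \
    -p Zpools=nbupool scnb-hasp
```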

 

Many thanks

Pete

peebee
Level 2

Hi Marianne,

 

Never managed to pursue NBU Master and Solaris Cluster - the clustered file system over iSCSI started giving me I/O errors and I ran out of time to work out what I'd done wrong. The customer has now chosen to cluster using VCS (phew!), so I will close this post.

 

Thanks for all your help!

Pete