
Cannot start NetBackup after update in VCS

sirin_zarin
Level 4
Partner Accredited

Hi all,

I updated NetBackup 7.1 (installed on a VCS cluster) to 7.1.0.4, and now I get the following when trying to start NetBackup on a node:

/var/VRTSvcs/log/NetBackup_A.log

2012/05/02 13:46:22 VCS ERROR V-16-2-13069 Thread(4147116912) Resource(nbu_server) - clean failed.
2012/05/02 13:47:56 VCS ERROR V-16-2-13078 Thread(4147116912) Resource(nbu_server) - clean completed successfully after 1 failed attempts.
2012/05/02 13:47:56 VCS ERROR V-16-2-13071 Thread(4147116912) Resource(nbu_server): reached OnlineRetryLimit(1).
2012/05/02 16:04:52 VCS WARNING V-16-2-13102 Thread(4147116912) Resource (nbu_server) received unexpected event start in state Online
2012/05/02 16:35:00 VCS ERROR V-16-2-13066 Thread(4147116912) Agent is calling clean for resource(nbu_server) because the resource is not up even after online completed

/var/VRTSvcs/log/engine_A.log

....

2012/05/02 16:35:01 VCS ERROR V-16-2-13066 (srv-vrts-n1) Agent is calling clean for resource(nbu_server) because the resource is not up even after online completed.
2012/05/02 16:38:52 VCS INFO V-16-2-13716 (srv-vrts-n1) Resource(nbu_server): Output of the completed operation (clean)
....

What can I do to fix this?

1 ACCEPTED SOLUTION


sirin_zarin
Level 4
Partner Accredited

UPD 3

I solved the "ERROR: Unable to start Sybase." problem, but it is strange.

I use a separate directory for NBDBMS_CONF_DIR (the default is /usr/openv/var/global); my NBDBMS_CONF_DIR is /nbu/var/global, and a symlink is created during installation:

[root@srv-vrts-n1 nbupd]# ls -la /usr/openv/var/global
lrwxrwxrwx 1 root root 15 May  4 14:01 /usr/openv/var/global -> /nbu/var/global

The script /usr/openv/db/bin/nbdbms_start_server does not work properly during the installation of updates. I changed NBDBMS_CONF_DIR=/usr/openv/var/global to NBDBMS_CONF_DIR=/nbu/var/global in the script, and the update then installed successfully.

It is very strange.

P.S. After the upgrade everything works fine. I'll keep testing on this test bed for a few days and post the result.
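The workaround above can be sketched as follows. This is only an illustration of the edit described in the post: the script path and the NBDBMS_CONF_DIR values come from this thread, so verify them against your own installation, and run against a copy first.

```shell
#!/bin/sh
# Sketch of the workaround: rewrite NBDBMS_CONF_DIR in nbdbms_start_server
# so it points at the real catalog directory instead of the symlinked
# default. Paths are the ones from this thread -- adjust to yours.
fix_conf_dir() {
    script=$1
    new_dir=$2
    cp "$script" "$script.bak"   # keep a backup of the original script
    sed -i "s|^NBDBMS_CONF_DIR=.*|NBDBMS_CONF_DIR=$new_dir|" "$script"
    grep '^NBDBMS_CONF_DIR=' "$script"   # show the resulting line
}

# Only attempt the edit if the script is actually present on this host.
if [ -f /usr/openv/db/bin/nbdbms_start_server ]; then
    fix_conf_dir /usr/openv/db/bin/nbdbms_start_server /nbu/var/global
fi
```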


18 REPLIES

Har-D
Level 4
Employee Certified

Hi Sirin,

 

It seems like an issue with the NBU services not starting up properly. Do both (all) nodes have the same issue?

Can you freeze the cluster and try onlining the NBU services to check whether they start fine?

Also, check <NBU>/bin/cluster/AGENT_DEBUG.log. It shows logs related to cluster startup/monitor of the NBU services.
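The freeze-and-test idea can be sketched like this: freeze the service group so the agent stops interfering, start NBU by hand, and see whether the daemons stay up. The group name nbu_group is taken from this thread, so substitute your own; the function bails out cleanly on hosts without VCS.

```shell
#!/bin/sh
# Sketch only: freeze the NBU service group, start NetBackup outside
# agent control, and list which daemons came up. Assumes the group name
# from this thread (nbu_group) -- replace with yours.
nbu_manual_start_test() {
    group=${1:-nbu_group}
    vcs_bin=/opt/VRTSvcs/bin
    if [ ! -x "$vcs_bin/hagrp" ]; then
        echo "VCS not installed on this host"
        return 0
    fi
    "$vcs_bin/hagrp" -freeze "$group"        # temporary (non-persistent) freeze
    /usr/openv/netbackup/bin/bp.start_all    # start NBU outside agent control
    /usr/openv/netbackup/bin/bpps -a         # list which daemons are running
    "$vcs_bin/hagrp" -unfreeze "$group"
}

nbu_manual_start_test nbu_group
```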

A couple of articles related to the issue:

http://www.symantec.com/business/support/index?page=content&id=TECH146056

http://www.symantec.com/business/support/index?page=content&id=TECH157191

 

sirin_zarin
Level 4
Partner Accredited

Hi, Har-D

Yes, this happens on all nodes. I have seen those links, but they did not help. And yes, NetBackup starts normally when the cluster is frozen.

Maybe I did not update NetBackup correctly...

1. Stop NetBackup on all nodes
2. Freeze the cluster group for NetBackup
3. Update the passive node
4. Update the active node
5. Unfreeze the cluster group for NetBackup
6. Start NetBackup... and it fails (

Thanks for your help )

Marianne
Level 6
Partner    VIP    Accredited Certified

Pity you did not ask for advice before your upgrade attempt... The active node must be upgraded first...

Did you ensure rsh connectivity between cluster nodes before you started (NBU cluster install/upgrade is STILL not ssh enabled)?

Did you offline the entire SG or just the NBU resource?
(If entire SG is offlined, the EMM database cannot be upgraded because the filesystem is unmounted and dg deported and IP for virtual hostname not accessible.)

Upgrade steps are covered in the NBU Clustered Master Server Admin Guide http://www.symantec.com/docs/DOC3679 :

To upgrade a NetBackup failover server
1 Ensure that a good backup of your cluster environment exists that includes a catalog backup.
See “Configuring NetBackup catalog backups in a cluster” on page 86.
2 For each NetBackup server that runs outside of the cluster, ensure that the
server list is accurate. This list should contain the name of each node where
NetBackup can run and the name of the virtual server.
3 Take the VCS NetBackup cluster resource offline before you begin the upgrade.
4 Enable the VCS configuration in read and write mode with haconf -makerw
5 Freeze the NetBackup group using the following command:
hagrp -freeze <nbu_group_name> -persistent
6 Stop the NetBackup cluster agent on all nodes of the NetBackup Group using
the following command:
haagent -stop NetBackup -force -sys <node>
7 On the active node, install the NetBackup server software.
Note the following:
■ Follow the instructions for how to upgrade NetBackup as described in the
Symantec NetBackup Installation Guide.
■ If required to specify the server name, provide the virtual name of the server.
8 On each inactive node to which NetBackup may failover, install the NetBackup server software.
Note the following:
■ Follow the instructions for how to upgrade NetBackup as described in the Symantec NetBackup Installation Guide.
■ If required to specify the server name, provide the virtual name of the server.
9 Start the VCS NetBackup cluster agent on all nodes of the NetBackup Group.
Use the following command:
haagent -start NetBackup -sys <node>
10 Unfreeze the NetBackup group using the following command:
hagrp -unfreeze <nbu_group_name> -persistent
11 Enable the VCS configuration in read-only mode with haconf -dump -makero
12 Take the NetBackup group offline and then bring online.

 

My Advice:

Please log a Severity 1 call with NetBackup support - call description: Upgrade of clustered master server failed.

A knowledgeable engineer will be able to assess current state of NBU via WebEx session and what remediation steps must be taken.

We can ask here for installation logs, ask you to manually start NBU, ask you for all sorts of logs, but that will unnecessarily lengthen the troubleshooting process.

PS:  This discussion is more NBU-related than VCS. Can we move it for you to the NetBackup forum?

 

 

sirin_zarin
Level 4
Partner Accredited

Hi Marianne,

I performed these steps...

srv-vrts-n1 - primary node

srv-vrts-n2 - secondary node

..............

Offline - NetBackup (resource nbu_server)

[root@srv-vrts-n1 ~]# /opt/VRTSvcs/bin/haconf -makerw
[root@srv-vrts-n1 ~]# /opt/VRTSvcs/bin/hagrp -freeze nbu_group -persistent
[root@srv-vrts-n1 ~]# /opt/VRTSvcs/bin/haagent -stop NetBackup -force -sys srv-vrts-n2
VCS NOTICE V-16-1-10001 Please look for messages in the log file
[root@srv-vrts-n1 ~]# /opt/VRTSvcs/bin/haagent -stop NetBackup -force -sys srv-vrts-n1
VCS NOTICE V-16-1-10001 Please look for messages in the log file
..............

and ran the installation of pack NB_CLT_7.1.0.4 on the active node, but..

............

Installation of pack NB_CLT_7.1.0.4 completed Thu May  3 15:30:36 MSD 2012 Rev. 1.42.24.8.
------------------------------------------------

Checking LiveUpdate registration for the following products: NB
This may take a few minutes.

Product NB is installed and will be registered.

Updating LiveUpdate registration now...this may take some time.

Installing required pack, NB_7.1.0.4, now.

Install pack NB_7.1.0.4 Thu May  3 15:30:36 MSD 2012 Rev. 1.42.24.8

Running preinstall script.
See /usr/openv/pack/pack.history for more details.
/home/nbu/nbupd/VrtsNB_7.1.0.4.preinstall: Running. Hardware/OS Type=Linux/RedHat2.6

Machine srv-vrts-n1.e-lab is a master server, and it is the EMMSERVER.

Saving pre-existing binaries.
Saved binaries successfully.
Using gzip to compress saved files in /usr/openv/pack/NB_7.1.0.4/save/pre_NB_7.1.0.4.050312_153036.tar.

Extracting files out of /home/nbu/nbupd/VrtsNB_7.1.0.4.linuxR_x86.tar.gz.


Tar extraction successful.
See /usr/openv/pack/pack.history for more details.

Running postinstall script.
See /usr/openv/pack/pack.history for more details.
/home/nbu/nbupd/VrtsNB_7.1.0.4.postinstall: Running. Hardware/OS Type=Linux/RedHat2.6

Machine srv-vrts-n1.e-lab is a master server, and it is the EMMSERVER.
ln: creating symbolic link `/usr/openv/var/global/global': File exists

Running /usr/openv/netbackup/client/Linux/RedHat2.6/pdinstall to update PDDE.
Removing old packaged libraries for RedHat
NetBackup Deduplication software is installed, overwriting binaries.
+ Extracting PDDE agent package (/usr/openv/netbackup/client/Linux/RedHat2.6/pddeagent.tar.gz)...
Unpacking SYMCpddea package.
Checking for pre-existing SYMCpddea package.
Removing pre-existing SYMCpddea package.
Installing SYMCpddea package.
+ Extracting PDDE server package (/usr/openv/pddeserver.tar.gz)...
Saving configuration files.
Keeping existing /usr/openv/lib/ost-plugins/pd.conf
Starting setup for PDDE script
removing /etc/default/pdde
Done setup for PDDE script
Checking to see if the PDDE configuration needs upgrading
pdregistry.cfg exists.
Making changes to /usr/openv/lib/ost-plugins/pd.conf

PDDE install finished successfully.
Full PDDE installation log saved to /var/log/puredisk/2012-05-03_15:37-pdde-install.log.

Running "bpclusterutil -addSvc nbaudit"
Running "bpclusterutil -enableSvc nbaudit"
Running "bpclusterutil -addSvc nbvault"

ERROR: Unable to start Sybase.
------------------------------------------------
Installation of pack NB_7.1.0.4 FAILED Thu May  3 15:30:36 MSD 2012 Rev. 1.42.24.8.
------------------------------------------------

There are stopped daemons.

    Do you want to restart all NetBackup daemons? [y,n]

 

i found this http://www.symantec.com/business/support/index?page=content&id=TECH150583

but it did not help me (

Yes, you can move to the NetBackup forum.

 

Marianne
Level 6
Partner    VIP    Accredited Certified

For some reason I misread the initial post last night - I thought that you were upgrading from 7.0 to 7.1...
The steps I listed were for an upgrade, not a patch install.
SORRY!
I am now reading through the 7.1.0.4 patch install readme - Inactive node first, then active node...

To install this Maintenance Release in a UNIX Cluster Environment:

1) Before you install this Maintenance Release, make sure that NetBackup is at
   release level 7.1 and configured to run in a cluster.

2) Offline the cluster by following the procedures outlined in the
    "NetBackup High Availability Guide."

3) Freeze the cluster by following the procedures outlined in the
    "NetBackup High Availability Guide."

4) Install this Maintenance Release on the inactive node(s) of the cluster
   first. (Perform the steps in the, "To install this Maintenance Release as
   root on the NetBackup Server" procedure).

5) Install this Maintenance Release on the active node of the cluster.
   (Again, perform the steps in the, "To install this Maintenance Release as
   root on the NetBackup Server" procedure).

6) Unfreeze the cluster by following the procedure outlined in the "NetBackup
   High Availability Guide."

7) Online the cluster by following the procedures outlined in the
    "NetBackup High Availability Guide."
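The readme steps above defer to the HA Guide for the actual commands. A minimal sketch of steps 2-3 and 6-7 under VCS might look like this, assuming the group and resource names used in this thread (nbu_group / nbu_server); it bails out cleanly on hosts without VCS.

```shell
#!/bin/sh
# Sketch of the offline/freeze window around a Maintenance Release
# install. Group and resource names come from this thread -- use yours.
nbu_patch_window() {
    vcs=/opt/VRTSvcs/bin
    if [ ! -x "$vcs/hagrp" ]; then
        echo "VCS not installed on this host"
        return 0
    fi
    "$vcs/hares" -offline nbu_server -sys "$(uname -n)"  # step 2: offline the NBU resource
    "$vcs/haconf" -makerw
    "$vcs/hagrp" -freeze nbu_group -persistent           # step 3: freeze the group
    echo "now install the Maintenance Release: inactive node(s) first, then the active node"
    "$vcs/hagrp" -unfreeze nbu_group -persistent         # step 6: unfreeze
    "$vcs/haconf" -dump -makero
    "$vcs/hagrp" -online nbu_group -sys "$(uname -n)"    # step 7: online
}

nbu_patch_window
```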

 

Was NBU installed as a Clustered Master in the first place?

If I remember correctly, the following was supposed to list the Virtual hostname, not node 1:

Machine srv-vrts-n1.e-lab is a master server, and it is the EMMSERVER.

As I've said last night, the best and quickest way to get this fixed is to log a Support call.

Else, we will try our best to help, but it could turn into a lengthy process... (If I had the time, I would update my own lab clustered server to verify the steps, but simply don't have the time right now).

Please get us the full installation logs from both nodes;
the 'response file' on both nodes:
cat /usr/openv/netbackup/bin/cluster/vcs/VCS_NBU_RSP ;
bp.conf on both nodes:
cat /usr/openv/netbackup/bp.conf

 

**** Moved thread to NBU forum *****

 

 

 

sirin_zarin
Level 4
Partner Accredited

UPD

Hmm... I reinstalled NetBackup on all nodes and ran...

 

[root@srv-vrts-n1 ~]#
[root@srv-vrts-n1 ~]# /usr/openv/netbackup/bin/cluster/cluster_config
[root@srv-vrts-n1 ~]#
 

Why does it exit immediately?

I could not find the required parameters for this in the docs... (

Running it in debug:

[root@srv-vrts-n1 ~]# bash -x /usr/openv/netbackup/bin/cluster/cluster_config
+ case "`uname -s`" in
++ uname -s
+ unset POSIXLY_CORRECT
+ ECHO='/bin/echo -e'
+ export ECHO
+ SPACE=' '
++ /bin/echo -e ' '
++ tr ' ' '\011'
+ TAB=' '
+ VCS_BIN=/opt/VRTSvcs/bin
+ SC_BIN=/usr/cluster/bin
+ HACMP_BIN=/usr/es/sbin/cluster/utilities
+ HPSG_BIN=
++ id
++ egrep '^uid=0\('
+ ISROOT='uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)'
+ '[' 'uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)' = '' ']'
++ uname -s
+ OS_TYPE=Linux
++ uname -r
+ OS_VERSION=2.6.32-71.el6.x86_64
++ uname -n
++ sed 's/\..*$//'
+ HOST=srv-vrts-n1
+ CLUTYPE=NO_CLUSTER
+ VIRTUAL_NAME=
+ FQ_VIRTUAL_NAME=
+ NODES=
+ ConfigAction=installer_steps
+ Application=nbu
+ NodeList=
+ NBU_SERVER_TYPE=
+ NBU_VNAME=
+ installer_App_Ops
+ '[' '' = -s ']'
+ return 0
++ dirname /usr/openv/netbackup/bin/cluster/cluster_config
+ init_common_file=/usr/openv/netbackup/bin/cluster/init_common
+ '[' -f /usr/openv/netbackup/bin/cluster/init_common ']'
+ . /usr/openv/netbackup/bin/cluster/init_common
++ dirname /usr/openv/netbackup/bin/cluster/cluster_config
+ init_file=/usr/openv/netbackup/bin/cluster/init_nbu
+ function_is_defined init_app_vars_nbu
+ LANG_SV=en_US.UTF-8
+ LANG=
+ type init_app_vars_nbu
+ head -1
+ grep 'init_app_vars_nbu is a function'
+ ret=0
+ LANG=en_US.UTF-8
+ return 0
+ '[' -f /usr/openv/netbackup/bin/cluster/init_nbu ']'
+ init_app_vars_nbu
+ APP_ROOT=/usr/openv
+ APP_PATH=/usr/openv/netbackup
+ RSP=/usr/openv/netbackup/bin/cluster/NBU_RSP
+ APP_NAME=NetBackup
+ APP_LC_NAME=nbu
+ bpconf=/usr/openv/netbackup/bp.conf
+ vmconf=/usr/openv/volmgr/vm.conf
+ '[' Linux = HP-UX ']'
+ '[' Linux = SunOS ']'
+ RSH_CMD_NAME=rsh
+ RCP_CMD_NAME=rcp
+ export APP_ROOT APP_PATH RSP APP_NAME APP_LC_NAME RSH_CMD_NAME
+ export RCP_CMD_NAME
+ LOG_DIR=/usr/openv/netbackup/logs/cluster
+ '[' '!' -d /usr/openv/netbackup/logs/cluster ']'
++ date +%m%d%y
+ LOG_DATE=050312
+ TRACE_FILE=/usr/openv/netbackup/logs/cluster/log.cc.050312
+ export TRACE_FILE
+ TRACE 1 'Begin cluster configuration script: Application: NetBackup'
++ date '+%m:%d:%y %H:%M:%S'
+ TIME='05:03:12 19:35:38'
+ '[' 1 -eq 0 ']'
+ '[' -n /usr/openv/netbackup/logs/cluster/log.cc.050312 ']'
+ /bin/echo -e '05:03:12 19:35:38 Begin cluster configuration script: Application: NetBackup'
+ TRACE 1 'Parameters : '
++ date '+%m:%d:%y %H:%M:%S'
+ TIME='05:03:12 19:35:38'
+ '[' 1 -eq 0 ']'
+ '[' -n /usr/openv/netbackup/logs/cluster/log.cc.050312 ']'
+ /bin/echo -e '05:03:12 19:35:38 Parameters : '
+ installer_Ops
+ TRACE 1 'Command Executed: cluster_config '
++ date '+%m:%d:%y %H:%M:%S'
+ TIME='05:03:12 19:35:38'
+ '[' 1 -eq 0 ']'
+ '[' -n /usr/openv/netbackup/logs/cluster/log.cc.050312 ']'
+ /bin/echo -e '05:03:12 19:35:38 Command Executed: cluster_config '
+ '[' nbu = oc ']'
+ '[' '' = -isCluster ']'
+ '[' '' = -checkCluster ']'
+ '[' '' = -isNBClustered ']'
+ '[' '' = -preConfig ']'
+ '[' '' = -listInp ']'
+ '[' '' = -listNBGroups ']'
+ '[' '' = -cleanUp ']'
+ '[' '' = -completeConfig ']'
+ '[' '' = -g ']'
+ return 1
+ ret=1
+ '[' 1 -eq 0 ']'
+ '[' '' = -vcs_agent_upgrade ']'
+ getopts s:n:o:mraq opt
+ '[' -z rsh ']'
+ '[' -z rcp ']'
+ export RSH_CMD_NAME RCP_CMD_NAME
+ TRACE 1 'RSH/RCP command used: rsh / rcp'
++ date '+%m:%d:%y %H:%M:%S'
+ TIME='05:03:12 19:35:38'
+ '[' 1 -eq 0 ']'
+ '[' -n /usr/openv/netbackup/logs/cluster/log.cc.050312 ']'
+ /bin/echo -e '05:03:12 19:35:38 RSH/RCP command used: rsh / rcp'
+ case "$ConfigAction" in
+ TRACE 1 'Performing installer steps []'
++ date '+%m:%d:%y %H:%M:%S'
+ TIME='05:03:12 19:35:38'
+ '[' 1 -eq 0 ']'
+ '[' -n /usr/openv/netbackup/logs/cluster/log.cc.050312 ']'
+ /bin/echo -e '05:03:12 19:35:38 Performing installer steps []'
+ exit
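Two things stand out in the trace above: ConfigAction defaults to installer_steps and the script exits immediately when given no arguments, which suggests cluster_config is normally driven by the installer rather than run bare; and the `getopts s:n:o:mraq` line shows which options it accepts. The standalone demo below only replays that getopts spec string on sample arguments to show which flags take values (s, n, o) and which are bare switches (m, r, a, q) — the meaning of each flag is not documented in the trace, so none is asserted here, and the sample values are made up.

```shell
#!/bin/sh
# Demo of the option spec seen in the trace ('getopts s:n:o:mraq').
# Flag semantics are NOT known from the trace; this only shows the parse.
parse_like_cluster_config() {
    OPTIND=1   # reset so the function can be called repeatedly
    while getopts s:n:o:mraq opt; do
        case $opt in
            s|n|o)   echo "$opt=$OPTARG" ;;   # these flags take a value
            m|r|a|q) echo "$opt set" ;;       # these are bare switches
            *)       echo "unknown option"; return 1 ;;
        esac
    done
}

parse_like_cluster_config -s srv-vrts-nb -n "srv-vrts-n1 srv-vrts-n2" -r
```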
 

 

 

Marianne
Level 6
Partner    VIP    Accredited Certified
So, you decided to start from scratch with a new installation?

Please go through the master server cluster guide and ensure all prerequisites are in place (e.g. shared disk requirements, an entry for the virtual hostname in /etc/hosts on both nodes, rsh enabled, etc.). Complete the checklist with all parameters - you will be prompted for them during installation and the subsequent cluster setup.

When you get to the prompt for the master server, type in the virtual name. The install script will detect that VCS is running and ask if you want to configure NBU in a cluster. Follow the prompts.

Please understand that if you decide to start from scratch, and NBU was not properly clustered before, you will not be able to recover the old catalog. Previous disk and/or tape backups will have to be imported.
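Two of those prerequisites can be sanity-checked from the node before starting. A small sketch, using the virtual hostname from this thread (srv-vrts-nb) — substitute your own:

```shell
#!/bin/sh
# Quick pre-install check: the virtual hostname must resolve on this node
# (usually via /etc/hosts), and rsh must be present, since the clustered
# NBU install/upgrade is still rsh-based.
check_prereqs() {
    vname=$1
    if getent hosts "$vname" >/dev/null 2>&1; then
        echo "$vname resolves"
    else
        echo "$vname does NOT resolve"
    fi
    if command -v rsh >/dev/null 2>&1; then
        echo "rsh found"
    else
        echo "rsh NOT found"
    fi
}

check_prereqs srv-vrts-nb
```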

sirin_zarin
Level 4
Partner Accredited

It's a test bed )

I know the steps during installation on a node...

I configured /etc/hosts, enabled rsh, configured the catalog volume from the SAN, etc.

But what do I need to do for /usr/openv/netbackup/bin/cluster/cluster_config to run normally?

##################################################

[root@srv-vrts-n1 ~]# cat /usr/openv/netbackup/bp.conf
SERVER = srv-vrts-nb
CLIENT_NAME = srv-vrts-n1.e-lab
CONNECT_OPTIONS = localhost 1 0 2
USE_VXSS = PROHIBITED
VXSS_SERVICE_TYPE = INTEGRITYANDCONFIDENTIALITY
EMMSERVER = srv-vrts-nb
HOST_CACHE_TTL = 3600
VXDBMS_NB_DATA = /usr/openv/db/data

.....................

[root@srv-vrts-n1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain srv-vrts-n1 srv-vrts-n1.e-lab
::1         localhost localhost.localdomain

# virtual IP

192.168.200.43 srv-vrts-nb.e-lab srv-vrts-nb

# nodes IP
192.168.200.39  srv-vrts-n1.e-lab srv-vrts-n1
192.168.200.23  srv-vrts-n2.e-lab srv-vrts-n2

# sirin

192.168.200.168  sirin
192.168.201.101  ops-sym-srv
192.168.201.99  wnbu-msrv-01

# esxi

192.168.200.169 esx
192.168.208.136 test-vm
 

 

 

sirin_zarin
Level 4
Partner Accredited

UPD 2 Typical installation on two nodes (not an update - a clean installation)

I configured /etc/hosts, enabled rsh, configured the catalog volume from the SAN, etc.

[root@srv-vrts-n1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain srv-vrts-n1 srv-vrts-n1.e-lab
::1         localhost localhost.localdomain

# virtual IP

192.168.200.43 srv-vrts-nb.e-lab srv-vrts-nb

# nodes IP
192.168.200.39  srv-vrts-n1.e-lab srv-vrts-n1
192.168.200.23  srv-vrts-n2.e-lab srv-vrts-n2

primary node

root@srv-vrts-n1 NetBackup_7.1_LinuxR_x86_64]# ./install


Symantec Installation Script
Copyright 1993 - 2011 Symantec Corporation, All Rights Reserved.


        Installing NetBackup Server Software

Do you wish to continue? [y,n] (y) y

The NetBackup and Media Manager software is built for use on LINUX_RH_X86 hardware.
Do you want to install NetBackup and Media Manager files? [y,n] (y)

NetBackup and Media Manager are normally installed in /usr/openv.
Is it OK to install in /usr/openv? [y,n] (y) y

Reading NetBackup files from /home/nbu/NetBackup_7.1_LinuxR_x86_64/linuxR_x86/anb

usr/openv/db/
...
...
Product NB is installed and will be registered.

Updating LiveUpdate registration now...this may take some time.



A NetBackup Server or Enterprise Server license key is needed
for installation to continue.

Enter license key:xxxxx

All additional keys should be added at this time.
Do you want to add additional license keys now? [y,n] (y) n

Use /usr/openv/netbackup/bin/admincmd/get_license_key
to add, delete or list license keys at a later time.

Installing NetBackup Enterprise Server version: 7.1



If this machine will be using a different network interface than the
default (srv-vrts-n1), the name of the preferred interface should be used
as the configured server name.  If this machine will be part of a
cluster, the virtual name should be used as the configured server name.

Would you like to use "srv-vrts-n1" as the configured
NetBackup server name of this machine? [y,n] (y) n

Enter the name of this NetBackup server: srv-vrts-nb

Is srv-vrts-nb the master server? [y,n] (y) y
Do you want to create a NetBackup cluster? [y,n] (y) y


*******************************************************************************
This script will only handle global configuration options.
If you need Local configuration, enter values for the node
you are currently on. Then after the script is completed
use the web GUI or ha commands to enter local
*******************************************************************************


Enter the VCS group name for the NetBackup group (nbu_group):
Enter the VCS resource name for NetBackup (nbu_server):


******************************************************************************


VCS group name for the NetBackup group is :nbu_group
VCS resource name for NetBackup is : nbu_server

Is this correct? [y,n]: y


******************************************************************************


Enter the VCS resource name for the NetBackup server IP (nbu_ip):
Enter the netmask for the NetBackup service (255.255.255.0):


******************************************************************************


You entered the following for Virtual IP configuration
NetBackup IP Resource name = nbu_ip
NetBackup virtual name = srv-vrts-nb
The IP address to use with NetBackup = 192.168.200.43
The IP address type( IPv4 or IPv6 ) = IPv4
The netmask to use with NetBackup = 255.255.255.0
Is this correct? [y/n]: y


******************************************************************************


NetBackup will run on the following nodes: srv-vrts-n1
IP address of the node srv-vrts-n1  = 192.168.200.39
Is this correct? [y,n]: y


******************************************************************************


Enter the VCS resource name for the NIC (nbu_nic):
Enter the network device for NetBackup: eth0


******************************************************************************


You entered the following for Network configuration
NetBackup NIC VCS resource name = nbu_nic
The NIC to use with NetBackup on srv-vrts-n1 = eth0
Is this correct? [y/n]: y


******************************************************************************



        Initial Cluster Configuration for NetBackup Server takes only
        one shared disk.

        Multiple disks, if needed,  can be configured after the inital
        cluster configuration completes. After the installation is
        complete, Run /usr/openv/netbackup/bin/cluster/cluster_mvnbdb
        to configure multiple disks.


 Select your VCS disk configuration.
        1. Disk Group
        2. None (We will just configure a mount for NetBackup)

Enter the disk configuration: 2
Enter the VCS resource name for the NetBackup shared disk mount point
        (nbu_mount):
Enter the mount point for the NetBackup disk (/opt/VRTSnbu): /nbu
Enter the Block Device for the NetBackup disk: /dev/vx/dsk/nbu_shared/vxNBUvol
Enter the FS Type for the NetBackup file system: vxfs
Enter any mount options you would like passed to the NetBackup disk
        (Enter for none): rw
Enter any fsck options you would like passed to the NetBackup disk
        (Default is -y. This must be included if options are added):


******************************************************************************


VCS NetBackup Mount Point resource name = nbu_mount
NetBackup Mount Point = /nbu
Block Device = /dev/vx/dsk/nbu_shared/vxNBUvol
File System type = vxfs
Mount options = rw
fsck options = -y
Is this correct? [y,n]: y


******************************************************************************


Creating NetBackup Group....
Setting up NetBackup limitations on srv-vrts-n1
Adding NetBackup type to VCS...
Adding nbu_group
Adding nbu_nic to nbu_group
Adding nbu_ip to nbu_group
Adding nbu_mount to nbu_group
Linking nbu_ip to nbu_nic
Linking nbu_server to nbu_ip
Linking nbu_server to nbu_mount
Probing resources on srv-vrts-n1.
Waiting for NIC resource to finish probing
       Probe completed for  nbu_nic resource on node srv-vrts-n1.
                The resource is ONLINE
Waiting for IP resource to finish probing
       Probe completed for  nbu_ip resource on node srv-vrts-n1.
                The resource is ONLINE
Waiting for Disk Group and Volume resources to finish probing
Waiting for MOUNT resources to finish probing
       Probe completed for  nbu_mount resource on node srv-vrts-n1.
                The resource is OFFLINE
Waiting for resources to come online
The NetBackup VCS configuration was successful!
Waiting for virtual name to come online...


Media servers can be added during this installation or
to a NetBackup environment after installation completes.
Refer to the NetBackup Administrator's Guide, Volume I for
more information.

Do you want to add any media servers now? [y,n] (n) n

Checking network connections.
bp.conf: IP_ADDRESS_FAMILY = AF_INET: default value, no update needed

Sending SIGHUP to xinetd process.

Reloading configuration:                                   [  OK  ]


NetBackup maintains a centralized catalog (separate from the image
catalog) for data related to media and device configuration, device
management, storage units, hosts and host aliases, media server status,
NDMP credentials, and other information.  This is managed by the
Enterprise Media Manager server.

Enter the Enterprise Media Manager server (default: srv-vrts-nb):
Converting STREAMS files.  This may take a few minutes.

STREAMS files conversion is complete.


Successfully updated the session cache parameters.
Starting the NetBackup network daemon.
Starting the NetBackup client daemon.
Starting the NetBackup SAN Client Fibre Transport daemon.
Creating /usr/openv/tmp/sqlany

Installed SQL Anywhere Version 11.0.1.2475
Installation completed successfully

set_value: Key "AZDBPasswordFilePath" successfully updated
AZ database setup complete.

Database server is NB_srv-vrts-nb
Creating the NetBackup database.
Creating NetBackup staging directory in: /nbu/db/staging
Creating /nbu/db/staging
Starting the NetBackup database.
Authenticating the NetBackup database.
VXDBMS_NB_DATA entry in bp.conf updated successfully.
Setting of database authentication for NBDB successful.
Change of dba password for NBDB successful.
Creating the NetBackup database files.
Creating the NetBackup EMM schema.
Verifying the running version of NBDB ...
NBDB version 7.1.0.0 verified.
Nothing to upgrade. Version unchanged.
Database [NBDB] validation successful.
Database [NBDB] is alive and well on server [NB_srv-vrts-nb].
Creating the NetBackup Authorization database.

Starting the NetBackup Event Manager.


Starting the NetBackup Audit Manager.

Starting the NetBackup Deduplication Manager.
Starting the NetBackup Deduplication Engine.

Starting the NetBackup database manager process (bpdbm).

Creating Directive Set for LotusNotes
Creating Directive Set for MS_Exchange_Mailbox
Creating Directive Set for MS_Exchange_Database
Creating Directive Set for MS_Exchange_Public_Folders
Creating Directive Set for MS_Exchange_Database_Availability_Groups
Creating Directive Set for MS_SharePoint_Portal_Server
Creating Template Set for Oracle_RMAN
Creating Template Set for Oracle_XML_Export
Creating Template Set for DB2
Creating Directive Set for Windows2003
Creating Directive Set for Windows2008
Creating Directive Set for Enterprise_Vault_7.5
Creating Directive Set for Enterprise_Vault_8.0
Creating Directive Set for Enterprise_Vault_9.0
Creating Directive Set for Enterprise_Vault_10.0

Converting snapshot policies:

Policy conversion summary:
        Number of original policies:                    0
        Number of non-snapshot policies skipped:        0
        Number of policies not needing conversion:      0
        Number of policies converted to
          'auto' snapshot method:                       0
        Number of policies converted:                   0

Updating client hardware definitions:

Hardware update conversion summary:
        Number of policies processed:                         0
        Number of policies with affected clients:             0
        Number of clients processed:                          0
        Number of clients converted:                          0
        Number of Disaster Recovery flags cleared:            0

Starting the NetBackup compatibility daemon.
Starting the NetBackup Enterprise Media Manager.
Starting the NetBackup Resource Broker.

Populating the database tables.  This will take some time.

Starting the Media Manager device daemon processes.

Do you want to start the NetBackup bprd process so
backups and restores can be initiated? [y,n] (y) y
Starting the NetBackup request daemon process (bprd).
Starting the NetBackup Job Manager.
Starting the NetBackup Policy Execution Manager.
Starting the NetBackup Storage Lifecycle Manager.
Starting the NetBackup Remote Monitoring Management System.
Starting the NetBackup Key Management daemon.
Starting the NetBackup Service Layer.
Starting the NetBackup Agent Request Server.
Starting the NetBackup Bare Metal Restore daemon.
Starting the NetBackup Vault daemon.
Starting the NetBackup Service Monitor.
Starting the NetBackup Bare Metal Restore Boot Server daemon.


OpsCenter is the next-generation monitoring, reporting and
administrative solution designed to centrally manage one or
more NetBackup installations from a web browser.  Existing
NetBackup Operations Manager or Veritas Backup Reporter
installations can be upgraded to OpsCenter.

If an OpsCenter server already exists in your environment
or you plan to install one, enter the real hostname of that
OpsCenter server here.  Do not use a virtual name.  If you
do not want this local machine to be an OpsCenter server,
enter NONE.

Enter the OpsCenter server (default: NONE):

NetBackup daemons will be shut down to complete the
NetBackup cluster configuration.

Updating /usr/openv/volmgr/vm.conf
We have detected that this is an EMM server.
Adding nbemm, nbrb & NB_dbsrv to the list of monitored daemons...
Starting the NetBackup agent on srv-vrts-n1
Waiting 240 seconds for nbu_group to come online...
NetBackup is online...


NetBackup server installation complete.


File /usr/openv/tmp/install_trace.11467 contains a trace of this install.
That file can be deleted after you are sure the install was successful.
 

secondary node

[root@srv-vrts-n2 NetBackup_7.1_LinuxR_x86_64]# ./install


Symantec Installation Script
Copyright 1993 - 2011 Symantec Corporation, All Rights Reserved.


        Installing NetBackup Server Software

Do you wish to continue? [y,n] (y) y

NetBackup Deduplication software is incorrectly installed.
/etc/pdregistry.cfg was unexpectedly found.
Moving to /etc/pdregistry.cfg.2012-04-28_18:11 in order to continue.


The NetBackup and Media Manager software is built for use on LINUX_RH_X86 hardware.
Do you want to install NetBackup and Media Manager files? [y,n] (y) y

NetBackup and Media Manager are normally installed in /usr/openv.
Is it OK to install in /usr/openv? [y,n] (y) y

Reading NetBackup files from /home/nbu/NetBackup_7.1_LinuxR_x86_64/linuxR_x86/anb

usr/openv/db/
...
...

Product NB is installed and will be registered.

Updating LiveUpdate registration now...this may take some time.



A NetBackup Server or Enterprise Server license key is needed
for installation to continue.

Enter license key:xxx

All additional keys should be added at this time.
Do you want to add additional license keys now? [y,n] (y) n

Use /usr/openv/netbackup/bin/admincmd/get_license_key
to add, delete or list license keys at a later time.

Installing NetBackup Enterprise Server version: 7.1



If this machine will be using a different network interface than the
default (srv-vrts-n2), the name of the preferred interface should be used
as the configured server name.  If this machine will be part of a
cluster, the virtual name should be used as the configured server name.

Would you like to use "srv-vrts-n2" as the configured
NetBackup server name of this machine? [y,n] (y) n

Enter the name of this NetBackup server: srv-vrts-nb

Is srv-vrts-nb the master server? [y,n] (y) y

This is the list of existing NetBackup cluster groups:
nbu_group

Do you wish to join this existing NetBackup cluster group? [y,n] (y) y
Adding node srv-vrts-n2 to NetBackup Group nbu_group
VCS
Enter the network device for NetBackup on srv-vrts-n2: eth0

You entered the following for Network configuration
The NIC to use with NetBackup on srv-vrts-n2 = eth0
The IP address for srv-vrts-n2 is : 192.168.200.23

Is this correct? [y/n]: y


Media servers can be added during this installation or
to a NetBackup environment after installation completes.
Refer to the NetBackup Administrator's Guide, Volume I for
more information.

Do you want to add any media servers now? [y,n] (n) n
Sending SIGHUP to xinetd process.

Reloading configuration:                                   [  OK  ]


Successfully updated the session cache parameters.
Starting the NetBackup network daemon.
Starting the NetBackup client daemon.
Creating /usr/openv/tmp/sqlany

Installed SQL Anywhere Version 11.0.1.2475
Installation completed successfully

set_value: Key "AZDBPasswordFilePath" successfully updated
AZ database setup complete.




OpsCenter is the next-generation monitoring, reporting and
administrative solution designed to centrally manage one or
more NetBackup installations from a web browser.  Existing
NetBackup Operations Manager or Veritas Backup Reporter
installations can be upgraded to OpsCenter.

If an OpsCenter server already exists in your environment
or you plan to install one, enter the real hostname of that
OpsCenter server here.  Do not use a virtual name.  If you
do not want this local machine to be an OpsCenter server,
enter NONE.

Enter the OpsCenter server (default: NONE):


NetBackup server installation complete.


File /usr/openv/tmp/install_trace.21363 contains a trace of this install.
That file can be deleted after you are sure the install was successful.
[root@srv-vrts-n2 NetBackup_7.1_LinuxR_x86_64]#
 

Marianne
Level 6
Partner    VIP    Accredited Certified

Since NBU 7.x (can't remember if it's 7.0 or 7.1) there is no need for cluster_config.

NBU installation will automatically configure a one-node Service Group during initial install.
When  you install the 2nd node, the installation script will ask if you want to join the cluster and then automatically update NBU as well as VCS (extremely important that rsh works fine at this point in time).

All of this is well-documented in NBU Clustered Master Server Admin Guide http://www.symantec.com/docs/DOC3679 .

sirin_zarin
Level 4
Partner Accredited

NBU installation will automatically configure a one-node Service Group during initial install.
When  you install the 2nd node, the installation script will ask if you want to join the cluster and then automatically update NBU as well as VCS (extremely important that rsh works fine at this point in time).

Yes, that works ) but only for a clean installation; I did this and showed it in my earlier post (UPD 2, typical installation on two nodes).

But when two servers were not originally installed as a cluster... how can they then be joined into a cluster?

sirin_zarin
Level 4
Partner Accredited

OK, NetBackup is installed on both nodes (see post UPD 2, typical installation on two nodes), but the update fails (

active node - srv-vrts-n2

passive node - srv-vrts-n1

Steps: take the NetBackup resource (nbu_server) offline, freeze nbu_group, and stop the NetBackup cluster agent on all nodes.

Updating the passive node succeeds.

Updating the active node fails ( NB_CLT_7.1.0.4 installs, but NB_7.1.0.4 fails to install.

[root@srv-vrts-n2 nbupd]# ./NB_update.install

There are 2 packs available in /home/nbu/nbupd:
(* denotes installed pack)

        NB_7.1.0.4
        NB_CLT_7.1.0.4 *

Enter pack name (or q) [q]: NB_7.1.0.4


Install pack NB_7.1.0.4 Thu May  3 23:44:53 MSD 2012 Rev. 1.42.24.8

Running preinstall script.
See /usr/openv/pack/pack.history for more details.
/home/nbu/nbupd/VrtsNB_7.1.0.4.preinstall: Running. Hardware/OS Type=Linux/RedHat2.6

Machine srv-vrts-n2.e-lab is a master server, and it is the EMMSERVER.
ln: creating symbolic link `/usr/openv/var/global/global': File exists

Saving pre-existing binaries.
Binaries must have been saved on previous install.
because pre-existing tar file found in /usr/openv/pack/NB_7.1.0.4/save.
They will not be re-saved.
NB_7.1.0.4
Extracting files out of /home/nbu/nbupd/VrtsNB_7.1.0.4.linuxR_x86.tar.gz.


Tar extraction successful.
See /usr/openv/pack/pack.history for more details.

Running postinstall script.
See /usr/openv/pack/pack.history for more details.
/home/nbu/nbupd/VrtsNB_7.1.0.4.postinstall: Running. Hardware/OS Type=Linux/RedHat2.6

Machine srv-vrts-n2.e-lab is a master server, and it is the EMMSERVER.
ln: creating symbolic link `/usr/openv/var/global/global': File exists

Running /usr/openv/netbackup/client/Linux/RedHat2.6/pdinstall to update PDDE.
Removing old packaged libraries for RedHat
NetBackup Deduplication software is installed, overwriting binaries.
+ Extracting PDDE agent package (/usr/openv/netbackup/client/Linux/RedHat2.6/pddeagent.tar.gz)...
Unpacking SYMCpddea package.
Checking for pre-existing SYMCpddea package.
Removing pre-existing SYMCpddea package.
Installing SYMCpddea package.
+ Extracting PDDE server package (/usr/openv/pddeserver.tar.gz)...
Saving configuration files.
Starting setup for PDDE script
Done setup for PDDE script
Checking to see if the PDDE configuration needs upgrading
pdregistry.cfg exists.
Making changes to /usr/openv/lib/ost-plugins/pd.conf
RESTORE_DECRYPT_LOCAL already in pd.conf
PREFETCH_SIZE already in pd.conf
META_SEGKSIZE already in pd.conf
CLIENT_POLICY_DATE already in pd.conf

PDDE install finished successfully.
Full PDDE installation log saved to /var/log/puredisk/2012-05-03_23:45-pdde-install.log.

Running "bpclusterutil -addSvc nbaudit"
Running "bpclusterutil -enableSvc nbaudit"
Running "bpclusterutil -addSvc nbvault"

ERROR: Unable to start Sybase.
------------------------------------------------
Installation of pack NB_7.1.0.4 FAILED Thu May  3 23:44:53 MSD 2012 Rev. 1.42.24.8.
------------------------------------------------
Exiting NB_update.install
[root@srv-vrts-n2 nbupd]#
...

config files

bp.conf on srv-vrts-n2

[root@srv-vrts-n2 nbupd]# cat /usr/openv/netbackup/bp.conf
SERVER = srv-vrts-nb
SERVER = srv-vrts-n1.e-lab
SERVER = srv-vrts-n2.e-lab
CLUSTER_NAME = srv-vrts-nb
CLIENT_NAME=srv-vrts-n2.e-lab
CONNECT_OPTIONS = localhost 1 0 2
USE_VXSS = PROHIBITED
VXSS_SERVICE_TYPE = INTEGRITYANDCONFIDENTIALITY
EMMSERVER = srv-vrts-nb
HOST_CACHE_TTL = 3600
VXDBMS_NB_DATA = /nbu/db/data
KMS_DIR = /nbu/kms
[root@srv-vrts-n2 nbupd]#

bp.conf on srv-vrts-n1

[root@srv-vrts-n1 nbupd]# cat /usr/openv/netbackup/bp.conf
SERVER = srv-vrts-nb
SERVER = srv-vrts-n1.e-lab
SERVER = srv-vrts-n2.e-lab
CLUSTER_NAME = srv-vrts-nb
CLIENT_NAME = srv-vrts-n1.e-lab
CONNECT_OPTIONS = localhost 1 0 2
USE_VXSS = PROHIBITED
VXSS_SERVICE_TYPE = INTEGRITYANDCONFIDENTIALITY
EMMSERVER = srv-vrts-nb
HOST_CACHE_TTL = 3600
VXDBMS_NB_DATA = /nbu/db/data
KMS_DIR = /nbu/kms
[root@srv-vrts-n1 nbupd]#

 

NBU_RSP on srv-vrts-n1

[root@srv-vrts-n1 nbupd]# cat /usr/openv/netbackup/bin/cluster/NBU_RSP
#DO NOT DELETE OR EDIT THIS FILE!!!
NBU_GROUP=nbu_group
SHARED_DISK=/nbu
NODES=srv-vrts-n1  srv-vrts-n2
VNAME=srv-vrts-nb
VIRTUAL_IP=192.168.200.43
SUBNET=255.255.255.0
CLUTYPE=VCS
START_PROCS=NB_dbsrv nbevtmgr nbemm nbrb ltid vmd bpcompatd nbjm nbpem nbstserv nbrmms nbsl nbvault nbsvcmon bpdbm bprd bptm bpbrmds bpsched bpcd bpversion bpjobd nbproxy vltcore acsd tl8cd odld tldcd tl4d tlmd tshd rsmd tlhcd pbx_exchange nbkms nbaudit nbatd nbazd

PRODUCT_CODE=NBU
DIR=netbackup mkdir
DIR=netbackup/db mv
DIR=var mkdir
DIR=var/global mv
DIR=volmgr mkdir
DIR=volmgr/misc mkdir
DIR=volmgr/misc/robotic_db mv
DIR=kms mv
DIR=netbackup/vault mkdir
DIR=netbackup/vault/sessions mv

LINK=volmgr/misc/robotic_db
LINK=netbackup/db
LINK=netbackup/vault/sessions
LINK=var/global
PROBE_PROCS=nbevtmgr nbstserv vmd bprd bpdbm nbpem nbjm nbaudit nbemm nbrb NB_dbsrv
[root@srv-vrts-n1 nbupd]#

 

NBU_RSP on srv-vrts-n2

cat /usr/openv/netbackup/bin/cluster/NBU_RSP
#DO NOT DELETE OR EDIT THIS FILE!!!
NBU_GROUP=nbu_group
SHARED_DISK=/nbu
NODES=srv-vrts-n1  srv-vrts-n2
VNAME=srv-vrts-nb
VIRTUAL_IP=192.168.200.43
SUBNET=255.255.255.0
CLUTYPE=VCS
START_PROCS=NB_dbsrv nbevtmgr nbemm nbrb ltid vmd bpcompatd nbjm nbpem nbstserv nbrmms nbsl nbvault nbsvcmon bpdbm bprd bptm bpbrmds bpsched bpcd bpversion bpjobd nbproxy vltcore acsd tl8cd odld tldcd tl4d tlmd tshd rsmd tlhcd pbx_exchange nbkms nbaudit nbatd nbazd

PRODUCT_CODE=NBU
DIR=netbackup mkdir
DIR=netbackup/db mv
DIR=var mkdir
DIR=var/global mv
DIR=volmgr mkdir
DIR=volmgr/misc mkdir
DIR=volmgr/misc/robotic_db mv
DIR=kms mv
DIR=netbackup/vault mkdir
DIR=netbackup/vault/sessions mv

LINK=volmgr/misc/robotic_db
LINK=netbackup/db
LINK=netbackup/vault/sessions
LINK=var/global
PROBE_PROCS=nbevtmgr nbstserv vmd bprd bpdbm nbpem nbjm nbaudit nbemm nbrb NB_dbsrv
[root@srv-vrts-n2 nbupd]
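 

Side note: apart from node-specific entries (CLIENT_NAME in bp.conf), the per-node copies of bp.conf and NBU_RSP above should be identical across the cluster, and a mismatch is worth ruling out before an upgrade. A minimal sketch of such a check (a hypothetical helper, not a NetBackup tool; on a real cluster you would first copy the peer node's file over, e.g. with scp):

```shell
# compare_nbconf <fileA> <fileB>
# Diff two NetBackup config files while ignoring node-specific
# CLIENT_NAME lines (matches both "CLIENT_NAME=..." and
# "CLIENT_NAME = ..." spellings seen in the listings above).
compare_nbconf() {
    a=$(mktemp); b=$(mktemp)
    grep -v '^CLIENT_NAME' "$1" > "$a"
    grep -v '^CLIENT_NAME' "$2" > "$b"
    if diff "$a" "$b" > /dev/null; then
        echo "configs match (ignoring CLIENT_NAME)"
    else
        echo "configs DIFFER:"
        diff "$a" "$b"
    fi
    rm -f "$a" "$b"
}
```

Typical use, assuming the peer's copy has been fetched to /tmp: `compare_nbconf /usr/openv/netbackup/bp.conf /tmp/bp.conf.n1`.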
 

Marianne
Level 6
Partner    VIP    Accredited Certified

Sorry - seems we were posting at more or less the same time.

Great stuff!  All working fine now!

As per the manual and as you have experienced, NBU can only be clustered during a new, clean installation. This is really the only way.

Not sure what you mean by 'Typical installation on two nodes'. It no longer works like in 6.x, when you did the installation on each server and then had to run cluster_config. Even in 6.x you had to enter the virtual name as the master server; the installation script then told you to run cluster_config.

Patch installation should work fine now.

PS: By the way, rsh and rcp can be linked to ssh and scp if customer won't allow rsh: http://www.symantec.com/docs/TECH160242
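The idea behind that technote can be sketched as below (a sketch only; the exact directory the NBU scripts consult may differ, so check the technote for your version before doing this on a real node):

```shell
# Shim rsh/rcp onto ssh/scp for sites that disallow rsh.
# BINDIR would be /usr/local/bin (or another directory early in
# root's PATH) on a real node; a temp dir is used here as a safe demo.
BINDIR=${BINDIR:-$(mktemp -d)}
ln -sf /usr/bin/ssh "$BINDIR/rsh"
ln -sf /usr/bin/scp "$BINDIR/rcp"
PATH="$BINDIR:$PATH"; export PATH
# anything that now invokes rsh/rcp by name resolves to ssh/scp
```

This only works cleanly if passwordless ssh (key-based) is already set up between the nodes, since the installer expects rsh's no-prompt behaviour.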

One last thing:

nbemmcmd -listhosts will only display correct info after you have switched to node 2.

Herewith my own recent experience when clustering NBU in VMs:

After node 2 was successfully installed and added to the SG (NBU still online on node 1):

# /usr/openv/netbackup/bin/admincmd/nbemmcmd -listhosts

NBEMMCMD, Version:7.1

The following hosts were found:

server             nbumas
cluster            nbumas
master             mvdb-lnx1

Command completed successfully.

 

Shows only one node… Offline, online, still the same output.

Switch/Failover.

Only NOW is EMM updated with the 2nd node!

# /usr/openv/netbackup/bin/admincmd/nbemmcmd -listhosts

NBEMMCMD, Version:7.1

The following hosts were found:

server             nbumas
cluster            nbumas
master             mvdb-lnx1
master             mvdb-lnx2

Command completed successfully.

 


 

 

sirin_zarin
Level 4
Partner Accredited

Hmm... but the problem remains. Updating the non-active node succeeds; updating the active node fails.

listing on active node

##########################################

------------------------------------------------
Installation of pack NB_CLT_7.1.0.4 completed Fri May  4 12:05:38 MSD 2012 Rev. 1.42.24.8.
------------------------------------------------

Checking LiveUpdate registration for the following products: NB
This may take a few minutes.

Product NB is installed and will be registered.

Updating LiveUpdate registration now...this may take some time.

Installing required pack, NB_7.1.0.4, now.

Install pack NB_7.1.0.4 Fri May  4 12:05:38 MSD 2012 Rev. 1.42.24.8

Running preinstall script.
See /usr/openv/pack/pack.history for more details.
/home/nbu/nbupd/VrtsNB_7.1.0.4.preinstall: Running. Hardware/OS Type=Linux/RedHat2.6

Machine srv-vrts-n2 is a master server, and it is the EMMSERVER.

Saving pre-existing binaries.
Saved binaries successfully.
Using gzip to compress saved files in /usr/openv/pack/NB_7.1.0.4/save/pre_NB_7.1.0.4.050412_120538.tar.

Extracting files out of /home/nbu/nbupd/VrtsNB_7.1.0.4.linuxR_x86.tar.gz.


Tar extraction successful.
See /usr/openv/pack/pack.history for more details.

Running postinstall script.
See /usr/openv/pack/pack.history for more details.
/home/nbu/nbupd/VrtsNB_7.1.0.4.postinstall: Running. Hardware/OS Type=Linux/RedHat2.6

Machine srv-vrts-n2 is a master server, and it is the EMMSERVER.
ln: creating symbolic link `/usr/openv/var/global/global': File exists

Running /usr/openv/netbackup/client/Linux/RedHat2.6/pdinstall to update PDDE.
Removing old packaged libraries for RedHat
NetBackup Deduplication software is installed, overwriting binaries.
+ Extracting PDDE agent package (/usr/openv/netbackup/client/Linux/RedHat2.6/pddeagent.tar.gz)...
Unpacking SYMCpddea package.
Checking for pre-existing SYMCpddea package.
Removing pre-existing SYMCpddea package.
Installing SYMCpddea package.
+ Extracting PDDE server package (/usr/openv/pddeserver.tar.gz)...
Saving configuration files.
Keeping existing /usr/openv/lib/ost-plugins/pd.conf
Starting setup for PDDE script
removing /etc/default/pdde
Done setup for PDDE script
Checking to see if the PDDE configuration needs upgrading
pdregistry.cfg exists.
Making changes to /usr/openv/lib/ost-plugins/pd.conf

PDDE install finished successfully.
Full PDDE installation log saved to /var/log/puredisk/2012-05-04_12:13-pdde-install.log.

Running "bpclusterutil -addSvc nbaudit"
Running "bpclusterutil -enableSvc nbaudit"
Running "bpclusterutil -addSvc nbvault"

ERROR: Unable to start Sybase.
------------------------------------------------
Installation of pack NB_7.1.0.4 FAILED Fri May  4 12:05:38 MSD 2012 Rev. 1.42.24.8.
------------------------------------------------

There are stopped daemons.

    Do you want to restart all NetBackup daemons? [y,n] (y)
#####################################

My pre-install steps:

Take the NetBackup resource (nbu_server) offline in the cluster, then:

[root@srv-vrts-n2 ~]# /opt/VRTSvcs/bin/haconf -makerw
[root@srv-vrts-n2 ~]# /opt/VRTSvcs/bin/hagrp -freeze nbu_group -persistent
[root@srv-vrts-n2 ~]# /opt/VRTSvcs/bin/haagent -stop NetBackup -force -sys srv-vrts-n2
[root@srv-vrts-n2 ~]# /opt/VRTSvcs/bin/haagent -stop NetBackup -force -sys srv-vrts-n1

[root@srv-vrts-n2 ~]# /opt/VRTSvcs/bin/haagent -display NetBackup
#Agent       Attribute      Value
NetBackup    AgentDirectory
NetBackup    AgentFile
NetBackup    Faults         0
NetBackup    Running        No
NetBackup    Started        No

...then run the update on each node:

update on srv-vrts-n1 (passive) - successful
update on srv-vrts-n2 (active) - fails
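
For reference, the whole maintenance window can be sketched as the sequence below. The freeze/stop half is exactly the commands shown above; the restart/unfreeze half is my assumption of the reverse procedure and should be verified against the NBU Clustered Master Server Admin Guide. The `run` wrapper only prints each command (a dry-run sketch; drop the echo to execute for real):

```shell
# Dry-run sketch of patching NBU on a two-node VCS cluster
# (group and node names as used in this thread).
run() { echo "would run: $*"; }   # remove the echo to execute

# 1. before patching: make config writable, freeze group, stop agent
run /opt/VRTSvcs/bin/haconf -makerw
run /opt/VRTSvcs/bin/hagrp -freeze nbu_group -persistent
run /opt/VRTSvcs/bin/haagent -stop NetBackup -force -sys srv-vrts-n1
run /opt/VRTSvcs/bin/haagent -stop NetBackup -force -sys srv-vrts-n2

# 2. ...install NB_7.1.0.4 on each node...

# 3. after patching: restart the agent, unfreeze, save config
run /opt/VRTSvcs/bin/haagent -start NetBackup -sys srv-vrts-n1
run /opt/VRTSvcs/bin/haagent -start NetBackup -sys srv-vrts-n2
run /opt/VRTSvcs/bin/hagrp -unfreeze nbu_group -persistent
run /opt/VRTSvcs/bin/haconf -dump -makero
```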


 

sirin_zarin
Level 4
Partner Accredited

UPD 3

I solved the "ERROR: Unable to start Sybase." problem, but it is strange.

I use a separate directory for NBDBMS_CONF_DIR (the default is /usr/openv/var/global); my NBDBMS_CONF_DIR is /nbu/var/global, and a symlink is created during installation:

[root@srv-vrts-n1 nbupd]# ls -la /usr/openv/var/global
lrwxrwxrwx 1 root root 15 May  4 14:01 /usr/openv/var/global -> /nbu/var/global

The script /usr/openv/db/bin/nbdbms_start_server does not work properly during the update installation. I changed NBDBMS_CONF_DIR=/usr/openv/var/global to NBDBMS_CONF_DIR=/nbu/var/global in the script, and the update then installed successfully.
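
That edit can also be made non-interactively. A minimal sketch, assuming the hard-coded default value appears in the script exactly as above (the helper name is mine, and the sed pattern should be checked against the actual line in your copy of nbdbms_start_server; a backup is kept first):

```shell
# patch_confdir <script> <new_dir>
# Rewrite the hard-coded NBDBMS_CONF_DIR default in a start script
# to point at the real (shared-disk) config directory.
patch_confdir() {
    cp -p "$1" "$1.orig"   # keep a backup of the original script
    sed -e "s|NBDBMS_CONF_DIR=/usr/openv/var/global|NBDBMS_CONF_DIR=$2|" \
        "$1.orig" > "$1"
    chmod +x "$1"
}
# On the cluster node this would be:
#   patch_confdir /usr/openv/db/bin/nbdbms_start_server /nbu/var/global
```

Note this edit may be overwritten by the next patch or upgrade, so it has to be re-checked afterwards.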

It is very strange.

P.S. After the upgrade everything works fine. I'll test it on the lab setup for a few days and post the result.

Marianne
Level 6
Partner    VIP    Accredited Certified

Strange.... Glad you managed to figure it out.

Will remember and double-check before I patch my lab cluster. This is my first Linux cluster. I have installed more clustered master servers on Solaris than I can remember.
At my previous company I had a lab clustered master server on Solaris. Always used the lab to test patch and/or upgrade before we upgraded at customers. NEVER had any problems. Last update I've done on that cluster was 7.1.0.2. 100% successful as per the documentation.

sirin_zarin
Level 4
Partner Accredited

Thanks for your help ))

sirin_zarin
Level 4
Partner Accredited

I tested, everything works fine.