Great answers. I just wanted to add some common sense items that are sometimes overlooked.
1. plan downtime. Mr. Murphy always seems to want his bowling team spreadsheet restored at times like this :)
2. check your available disk space on all machines, since NetBackup keeps copies of the old binaries, help files, etc. around for uninstallation. You don't want to take the blame for a system disk corrupting because it overflowed. A good time to check is while you are verifying OS patch levels.
3. I suggest a phased upgrade, with time for testing between phases. Do the master server the first day, then run test restores and let the normal scheduled backups run. The next phase is the media servers; try to do all of them in the same day, and again run test restores and let the scheduled backups run. The final phase is the clients: OS patches plus the NetBackup agents.
4. check locally produced scripts/programs for expected functionality. As Mr. Bob Dylan sings, "For the binaries they are a-changin'" :)
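On point 2, here is a minimal sketch of the kind of pre-flight check I mean. The install path and the 500 MB threshold are just placeholder assumptions; point INSTALL_DIR at /usr/openv (or wherever you installed) and pick a threshold that fits your site:

```shell
#!/bin/sh
# Hedged example: warn if the filesystem holding the NetBackup install
# tree is low on space before the upgrade. INSTALL_DIR defaults to /
# only so the sketch runs anywhere; use /usr/openv on a real server.
INSTALL_DIR=${INSTALL_DIR:-/}
MIN_FREE_KB=${MIN_FREE_KB:-500000}

# df -k reports available KB in the 4th column of the data line
avail=$(df -k "$INSTALL_DIR" | awk 'NR==2 {print $4}')
if [ "$avail" -lt "$MIN_FREE_KB" ]; then
    echo "WARNING: only $avail KB free under $INSTALL_DIR"
else
    echo "OK: $avail KB free under $INSTALL_DIR"
fi
```

Run it on the master, every media server, and a sample of clients before you start.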
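And on point 4, a quick way to inventory which local scripts even touch the NetBackup tree, so you know what to retest after the binaries change. The script directory is an assumption; point it at wherever your site keeps its admin scripts:

```shell
#!/bin/sh
# Hedged sketch: list local scripts that reference the NetBackup
# install tree. SCRIPT_DIR is an assumed location -- adjust it.
SCRIPT_DIR=${SCRIPT_DIR:-/usr/local/scripts}
grep -rl '/usr/openv' "$SCRIPT_DIR" 2>/dev/null || \
    echo "No scripts under $SCRIPT_DIR reference /usr/openv"
```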
One problem at our site: we used the DR environment as our test lab and upgraded it first. The upgrade went smoothly and everything was working fine.........
UNTIL the rsync job on the production server, which keeps the DR images current with production, ran.
NetBackup 6.5 added a simple text file in the images directory called /usr/openv/netbackup/db/images/db_marker.txt.
# cat db_marker.txt
!PLEASE DO NOT DELETE THIS FILE
WARNING: REMOVAL OF THIS FILE MAY INCREASE THE EXPOSURE TO SERIOUS OPERATIONAL ISSUES UP TO AND INCLUDING POTENTIAL DATA LOSS
This file is used to ensure access to the DB directory is valid upon NetBackup Database Manager startup.
Yep, rsync did exactly what it is supposed to do: it found a file in the DR images directory that did not exist on the (not yet upgraded) production side, and deleted it!
NetBackup stopped working on the DR server, and we experienced data loss there.
Bottom line: check # 4. above :)