
Hardware refresh for combined master/media server

Level 4

We are getting ready to do a hardware refresh on our single combined master/media server. The old OS is Windows Server 2008 and the new OS is Windows Server 2012 R2. We are on NetBackup 7.7.3. We use Exagrid disk storage appliances with their OST plugin. Our catalog backup is on the Exagrid devices, but on a standard CIFS share and NOT on an OST share.

I've found tech article TECH77447, which outlines the procedure for doing a catalog backup and restore using the DR file to refresh a master server's hardware. My question is: is this all we need, or is there more to this process since we also have a media server role? Will this process carry across our policies, disk pools, storage units, etc., or will we be left to manually recreate those?

We have the new machine loaded with the OS and updated with Windows patches, and it has NetBackup 7.7.3 at the same patch level installed to all the same directories as the existing server. It is named the same as the old master/media server at this point. Our plan is to...

1. Shut down old backup server

2. Boot new, assign same IP to it and join it to domain with same domain acct as old one

3. Perform catalog recovery by pointing NetBackup to the DR file and the CIFS share with the catalog backup.

4. Install Exagrid OST plugin

5. Test backups.
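Before cutting over, it may help to script a quick parity check between the old and new servers, since the recovery depends on the hostname, install path, and NBU version/patch level all matching. This is a hedged sketch only; the hostname and values below are illustrative stand-ins, not taken from this environment. On the real hosts you would capture each value from both sides and compare.

```shell
#!/bin/sh
# Hedged pre-flight sketch: all values below are illustrative placeholders.
# On the real servers, collect each value from the old and new host and
# compare; catalog recovery assumes these match exactly.

OLD_NAME="backupsrv";  NEW_NAME="backupsrv"               # hostname must match
OLD_PATH="C:\Program Files\Veritas\NetBackup"
NEW_PATH="C:\Program Files\Veritas\NetBackup"             # install path must match
OLD_VER="7.7.3";       NEW_VER="7.7.3"                    # same version and patch level

fail=0
[ "$OLD_NAME" = "$NEW_NAME" ] || { echo "hostname mismatch"; fail=1; }
[ "$OLD_PATH" = "$NEW_PATH" ] || { echo "install path mismatch"; fail=1; }
[ "$OLD_VER"  = "$NEW_VER"  ] || { echo "version mismatch"; fail=1; }
[ "$fail" -eq 0 ] && echo "parity check passed"
```

Anything flagged here is worth fixing before attempting the catalog recovery, since a mismatch tends to surface only after the restore has already run.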

What I feel I'm missing is... steps between 3/4/5. I'm trying to figure out whether we are going to need to reconfigure all of our storage units, disk pools, policies and such, or whether the catalog recovery is going to include that information.

Can anyone offer any guidance here?


Level 6

I would add

6. Test restores

Policies should be included in the catalog backup; some media server configuration, like storage units, might not be. Also check whether any tuning you have done is kept, like buffer sizes and counts.
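On the buffer-tuning point: NetBackup reads those settings from touch files such as SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS under install_path\NetBackup\db\config, so they are easy to miss on a rebuild. A hedged sketch of an inventory check is below; the directory and values are fabricated for the demo, and on the real server you would point it at the actual db\config path.

```shell
# Hedged sketch: list which NetBackup buffer-tuning touch files exist so
# they can be copied to the new server. The file names are the standard
# NetBackup touch-file names; the demo directory and values are fabricated.
CONFIG_DIR=$(mktemp -d)                      # stand-in for ...\NetBackup\db\config
echo 262144 > "$CONFIG_DIR/SIZE_DATA_BUFFERS"
echo 64     > "$CONFIG_DIR/NUMBER_DATA_BUFFERS"

found=""
for f in SIZE_DATA_BUFFERS NUMBER_DATA_BUFFERS NET_BUFFER_SZ; do
    if [ -f "$CONFIG_DIR/$f" ]; then
        found="$found $f=$(cat "$CONFIG_DIR/$f")"
    fi
done
echo "tuning files to carry over:$found"
rm -rf "$CONFIG_DIR"
```

Running the same loop against the old server's config directory gives a checklist of settings to recreate on the new one.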

I don't know Exagrid, but I think the "changing the MSDP host" section in the Deduplication Admin Guide would be a good starting point.


The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue

Level 6
Partner    VIP    Accredited Certified

I like the fact that you are using Basic Disk (CIFS) for catalog backups - certainly the easiest to restore from.

All catalogs and configurations will be restored with catalog recovery.

I would adjust steps as follows:

1. Shut down old backup server

2. Boot new, assign same IP to it and join it to domain with same domain acct as old one

3. Install NBU to same path as original server - same NBU version and patch level (if applicable)

4. Install Exagrid OST plugin
    Test network comms between Exagrid and master (ping)
    Ensure NBU services are started as the correct user

5. Perform catalog recovery by pointing NetBackup to the DR file and the CIFS share with the catalog backup.

6. Restart NBU

7. Confirm Storage Server and Disk Pool(s) are UP

8. Test backups.

9. Test restores
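For step 7, `nbdevquery` is the usual CLI check (e.g. `nbdevquery -liststs -U` and `nbdevquery -listdp -U`). A hedged sketch of parsing the disk pool state is below; the sample text is a fabricated illustration of the output shape, not captured from a real master, so the field layout may differ slightly on your version.

```shell
# Hedged sketch for step 7: parse `nbdevquery -listdp -U` style output and
# flag any disk pool that is not UP. The sample below is a fabricated
# illustration of the output shape, not real command output.
sample='Disk Pool Name   : exagrid_dp
Status           : UP'
state=$(printf '%s\n' "$sample" | awk -F': *' '/^Status/ {print $2}')
if [ "$state" = "UP" ]; then
    result="disk pool UP"
else
    result="disk pool DOWN - investigate before running backups"
fi
echo "$result"
```

On the real server you would pipe the actual `nbdevquery -listdp -U` output into the same filter instead of the sample variable.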

Level 4

We are right in the thick of this process and have run into a problem that is halting our progress.

We've completed a full catalog backup, have shut off the services on the old server, and shut its network port.  We have NBU 7.7.3 on the new 2012 R2 server installed successfully to the exact same path as the old server, and have a disk attached to the new server with the same drive letter as the old server for the catalog.  

We have run the catalog recovery wizard and pointed it to the DR file. The wizard runs all the way through and restores the entire catalog, but then hangs at the end and never "finishes" so we can move forward. Looking at the tail end of the restore log file, we see this:

17:36:42 (3.001) TAR - Z:\db\images\\1412000000\catstore\RR_Datacenter_VMs_1412384791_FULL.f_imgUserGroupNames0
17:36:42 (3.001) TAR - Z:\db\images\\1412000000\catstore\RR_Datacenter_VMs_1412384791_FULL.f_imgStrings0
17:36:43 (3.001) TAR - Z:\db\error\cbu_1496127600
17:36:43 (3.001) TAR - Z:\db\error\log_1495954800
17:36:43 (3.001) TAR - Z:\db\error\log_1496041200
17:36:43 (3.001) INF - TAR EXITING WITH STATUS = 0
17:36:43 (3.001) INF - TAR RESTORED 439219 OF 439219 FILES SUCCESSFULLY
17:36:43 (3.001) INF - TAR KEPT 0 EXISTING FILES
17:36:44 (3.001) Status of restore from copy 1 of image created 5/30/2017 8:28:57 AM = the requested operation was successfully completed

17:36:44 ( INF - Status = the requested operation was successfully completed.

17:36:46 ERR - Failed to find last backup image record for client ccpudbck, client_type 7, with policy Netbackup_Catalog and sched_type 0 (227)

The last line seems to be where it is hanging up. Everything I can find online says that this could be due to a different install path on the new server vs. the old server. We have double- and triple-checked, and NetBackup is installed to the same path on the new server as on the old server. The catalog drive letter is the same.
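When triaging a hang like this, it can help to pull the failing line and its NetBackup status code out of the restore log mechanically rather than by eye. A hedged sketch is below; the log excerpt is the one quoted above, and the path to the real log file is site-specific (status 227 is NetBackup's "no entity was found").

```shell
# Hedged sketch: extract the failing line from the catalog-restore log and
# pull out the NetBackup status code in parentheses at the end. The log
# excerpt is the one quoted in this thread; on the real server you would
# read the actual restore log file instead.
log='17:36:43 (3.001) INF - TAR EXITING WITH STATUS = 0
17:36:46 ERR - Failed to find last backup image record for client ccpudbck, client_type 7, with policy Netbackup_Catalog and sched_type 0 (227)'
err_line=$(printf '%s\n' "$log" | grep 'ERR -')
status=$(printf '%s\n' "$err_line" | sed 's/.*(\([0-9]*\))$/\1/')
echo "NetBackup status: $status"
```

Having the bare status code makes it easier to search the status code reference and VOX for the exact failure instead of the whole message.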

Can anyone help here? My colleague is on the phone with support right now working on a high-severity case, because we absolutely have to have this cut over and working before the weekend. If we can't figure this out, we need to revert to our old server and start this process all over again next week. Help!


   VIP    Certified


Please run bpdbjobs to find the active job id, then run bpdbjobs -jobid XX -all_columns

Check in there for what is not OK. You can do the same from another Java GUI session. I'm hoping you created all the log directories by running the mklogdir script inside netbackup/logs - those would be needed to troubleshoot further.
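The step above can be scripted rather than scanned by eye. This is a hedged sketch only: the sample output shape is fabricated for illustration, not captured from a live master, and column layout can vary between versions.

```shell
# Hedged sketch: find the active job id from bpdbjobs summary-style output,
# then feed it to the detailed query. The sample text is a fabricated
# illustration of the output shape, not real bpdbjobs output.
sample='JOBID    TYPE       STATE  STATUS POLICY
12345    Restore    Active        NBU-Catalog
12290    Backup     Done   0      Daily-FS'
jobid=$(printf '%s\n' "$sample" | awk '$3 == "Active" {print $1}')
echo "active job: $jobid"
# On the master you would then run:
#   bpdbjobs -jobid "$jobid" -all_columns
```

On the real server, replace the sample variable with the output of a plain `bpdbjobs` run.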

Level 6
Partner    VIP    Accredited Certified

I hope Support managed to assist you.

You never mentioned in your original post that NBU was installed in one path and the catalogs were moved to a different path (Z:\).

How was it done on the original server?

Were ALTPATH files used or the registry modification method (where the original TN 68160 got removed due to catalog recovery issues)?

Please see this discussion:


Level 4

Support was tenacious and willing to stay on the line for as long as it took, but they did not mention the specific thing that you are pointing out here, unfortunately. We did end up rolling back to our old server on Friday in order to get our weekend full backups done.

After looking at the links you provided and reading, I think that is our exact problem.  Our old server has the registry entry DATABASE_PATH under [HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Paths] and its value is Z:\\db.  Our new server does not, and it seems very clear that the error message is telling us exactly what the problem is... It's saying "I restored everything to Z:\db".... then.... "Hey, I looked under C:\Program Files\Veritas\Netbackup\db and didn't find any of the stuff I just restored, what gives?".
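Given that diagnosis, it would be worth verifying the registry value on the new master before re-running the recovery. A hedged sketch is below: the key and value name are the ones quoted in this thread, the expected value `Z:\db` is illustrative, and the `reg query` would be run on the Windows server itself (on a non-Windows box the sketch falls back to the example value).

```shell
# Hedged sketch: compare DATABASE_PATH on the new master against the value
# from the old server before re-running catalog recovery. The key is the
# one quoted in this thread; run the reg query on the Windows server.
KEY='HKLM\SOFTWARE\VERITAS\NetBackup\CurrentVersion\Paths'
EXPECTED='Z:\db'                       # illustrative value from the old server
if command -v reg >/dev/null 2>&1; then
    ACTUAL=$(reg query "$KEY" /v DATABASE_PATH | awk '/DATABASE_PATH/ {print $NF}')
else
    ACTUAL='Z:\db'                     # illustrative fallback for the sketch
fi
if [ "$ACTUAL" = "$EXPECTED" ]; then
    msg="DATABASE_PATH matches: $ACTUAL"
else
    msg="DATABASE_PATH mismatch: expected $EXPECTED, found '$ACTUAL'"
fi
echo "$msg"
```

As noted later in the thread, setting this value may put you back in an unsupported state, so confirm the approach with Support before relying on it.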

Let's hope the problem is really this simple. Unfortunately, this is one of those times where the person who originally configured and installed this NBU installation, and the person who most likely did the conversion to the Z:\ drive for storing the catalog, are long gone from the organization. This particular bit does not appear to be documented anywhere in our internal notes, and it was also never mentioned in any of the official Veritas documentation I read about migrating to new physical hardware. Frustrating.

Also frustrating: this is probably a common and (supposedly?) supported method of moving the catalog, and the information about the catalog being on Z:\db while the rest of the install was on C:\ was right there in the log they looked at several times, yet support was not able to make that correlation.

We are discussing whether we want to attempt this cutover again this week ourselves with this new knowledge, or pause and investigate some kind of professional services engagement to help us get through this hardware refresh.  If we decide to go the latter route, does anyone have any suggestions on how to proceed?  We've reached out to our reseller for info on that, and to support for more info, and are waiting to hear back.

I am leaning towards getting an expert in here, simply because while we may have solved this particular problem, I'm wondering what other fun things we might uncover after we get the catalog restored successfully.

Level 6
Partner    VIP    Accredited Certified

Apparently TN 68160 was removed due to catalog recovery issues exactly as you have experienced.

The problem is that TN68160 was around for so long that hundreds (if not thousands) of users have implemented the registry-change method to move catalogs.

If you still have the call open with Support, please mention post   and TN68160 and ask them how to proceed.

I am pretty sure that creating the registry entry will ensure successful catalog recovery, but will probably again leave you in an unsupported state.

Wondering if @mph999 can provide any advice from a Support perspective?