I have recently upgraded my NetBackup environment from NetBackup 6.5 to NetBackup 7.5, and since then my catalog file system has been growing by about 10 GB every week. Catalog compression is already in place but is not helping much. Is anyone else facing this issue after the upgrade? Any idea why this is happening?
Master and media servers are running on Solaris 10.
Number of clients: 350
Has anything else changed since the upgrade?
The images directory should actually be slightly smaller in 7.5 than in 6.5, as the .f files all get drawn into the EMM database (so that does grow).
Are your image cleanup jobs all running successfully? This ensures the catalog is kept tidy.
Have you started using any other features of NetBackup since upgrading that could have had an effect? (GRT, Indexing etc.)
Let us know if anything at all has changed since the upgrade so that we know where to look
No, we haven't enabled any indexing or GRT after the upgrade, and the image cleanup is running fine. One thing we did after the upgrade is set up OpsCenter; not sure whether that can contribute to catalog growth.
Is it definitely the catalog space that has increased (do your catalog backups show increasing data sizes), or is it just the disk space you are monitoring?
7.5 does a lot of logging and these logs grow rapidly. If it is just the disk space usage that is increasing, check the log folders; you may need to use vxlogcfg to keep them more under control.
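If the unified logs turn out to be the culprit, something along these lines can cap their growth. This is only a sketch: 51216 is NetBackup's product ID, and the two size values are example figures, not recommendations for your environment.

```shell
# Sketch: limit unified (VxUL) log growth on the master server.
# 51216 = NetBackup product ID; the numeric values below are examples only.
/usr/openv/netbackup/bin/vxlogcfg -a -p 51216 -o Default \
    -s NumberOfLogFiles=10 -s MaxLogFileSizeKB=51200
```

Settings applied to the Default originator act as the baseline for all NetBackup processes; individual originators can still be tuned separately if one daemon is especially chatty.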
I am not aware that OpsCenter changes anything and have not seen anything about catalog size changes in the release notes
Metadata migration was successful in my case. It only moves the metadata from the image files; the image files from before the upgrade are still there in the /usr/openv/netbackup/db/images directory. Do you think it will remove all the old .f files after the metadata migration?
The header files are moved to EMM database. See http://www.symantec.com/docs/TECH170687 and 'Related Articles'.
We had a similar post here not so long ago. The 'culprit' turned out to be a specific client where the exclude_list was 'accidentally' removed at more or less the same time as the upgrade.
Also check for Client names that may have been added to more than one policy.
Check folder sizes in the master's image directory to see if one or more clients have unusually large image folders.
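A quick way to spot an outsized client folder is to size each per-client directory and sort; a sketch, assuming the default UNIX catalog location:

```shell
# List the ten largest client image folders in the catalog (sizes in KB).
# The path assumes a default UNIX install; adjust it if yours differs.
du -sk /usr/openv/netbackup/db/images/* | sort -n | tail -10
```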
We experienced the same issue after upgrading from 7.1 to 7.5 (we're running AIX master and media servers), and we leverage BMR.
What Symantec found was that TIR information was not being pruned during the image cleanup processes. (Our catalog grew from approximately 750 GB to now just over 1.2 TB.) We're testing an EEB which does appear to have resolved the issue (EEB 3108642, which replaces BPDBM).
What we found in our bpdbm logs which pointed to the TIR pruning issue was (grepping for compress in the bpdbm logs):
01:18:05.061  <4> ImageDelete::~ImageDelete: deleted 4404 expired records, compressed 2198, tir removed 6, deleted 5922 expired copies
Since we back up approximately 1700 clients per night using BMR, that led Symantec down the road of looking at the TIR info.
Not sure if this relates to you, but thought I'd pass it along, with relevant EEB info so you can mention to support. :)
this looks promising:
OPTION 3: Post upgrade to NetBackup 7.5
You can use cat_import command to import all image metadata for all clients or individual clients in parallel.
To import all clients:
Windows: <install-path>\Veritas\NetBackup\bin>cat_import -all -delete_source -base <install-path>\NetBackup\db
Unix : /usr/openv/netbackup/bin/cat_import -all -delete_source -base /usr/openv/netbackup/db
To import one client's images at a time:
Windows: <install-path>\Veritas\NetBackup\bin>cat_import -client <clientname> -delete_source -base <install-path>\NetBackup\db
Unix : /usr/openv/netbackup/bin/cat_import -client <clientname> -delete_source -base /usr/openv/netbackup/db
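The per-client form lends itself to running several imports in parallel, as the note above mentions. A rough sketch; clientA/B/C are placeholders for your real client names:

```shell
# Run cat_import for several clients at once; clientA/B/C are hypothetical
# names. Keep the parallelism modest on a busy master server.
for c in clientA clientB clientC; do
    /usr/openv/netbackup/bin/cat_import -client "$c" -delete_source \
        -base /usr/openv/netbackup/db &
done
wait   # block until every background import has finished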
Thank you sfroeschke. Looks like I am facing the same TIR pruning issue. I am already in discussion with a Symantec support person, but it looks like he is not aware of this TIR issue. Let me check with him about that EEB.
From my BPDBM logs:
04:48:30.487  <4> ImageDelete::~ImageDelete: deleted 789 expired records, compressed 187, tir removed 10, deleted 1190 expired copies
17:04:35.146  <4> ImageDelete::~ImageDelete: deleted 893 expired records, compressed 144, tir removed 23, deleted 827 expired copies
Depending on the details of your configuration, sfroeschke's useful tip of checking how much (or how little) TIR pruning is reported by your image cleanup results may not be proof of a problem.
Here is a more precise way to see whether you're experiencing the problem sfroeschke encountered, where the image catalog was growing (well, not shrinking like it should) because TIR information was not getting properly pruned:
First, find an incremental backup image that matches BOTH of the following conditions:
Once you have identified some backupid ("hostname_1234567890") that fits these criteria, run the following command:
bpimagelist -L -backupid hostname_1234567890 | grep "TIR Info"
If both of the above conditions are true for this image, the output of that bpimagelist command should report a TIR Info value other than 8. But if you do see that TIR Info is 8 (see example below), then you're probably encountering sfroeschke's problem, and your support case does indeed deserve to be called "another occurrence of issue 3108642."
TIR Info: 8
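To check more than one image at a time, you could wrap that bpimagelist call in a loop. A sketch, assuming you've collected candidate backupids one per line in a file (/tmp/backupids.txt is a made-up path, and the parsing assumes the "TIR Info:" field format shown above):

```shell
# For each backupid listed in /tmp/backupids.txt (hypothetical file, one
# backupid per line), report the images whose TIR Info value is 8.
while read -r id; do
    tir=$(/usr/openv/netbackup/bin/admincmd/bpimagelist -L -backupid "$id" \
          | awk -F: '/TIR Info/ { gsub(/ /, "", $2); print $2 }')
    [ "$tir" = "8" ] && echo "$id: TIR Info is 8 -- TIR data may not be pruned"
done < /tmp/backupids.txt
```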
I ran bpimagelist against an incremental image which matches the above conditions, and I am getting TIR Info as 8. So it looks like we are encountering the same issue as sfroeschke. But what does TIR Info 8 mean, and how does it cause catalog space growth?
$ sudo /usr/openv/netbackup/bin/admincmd/bpimagelist -L -backupid azote-bkp_1366538982|grep "TIR Info"
TIR Info: 8