
nb catalog more than 5TB

Albatross_
Level 5

Hi,

I have one master server running NetBackup 7.5 on Red Hat 5.8, which also acts as a media server.

There are four more media servers at the same 7.5 version (Linux, with one on Windows).

The number of NetBackup clients is 2000 (Exchange, SQL, VMs, Oracle, BMR). We also have four NDMP Data Domain appliances.

I have been told that we have a catalog of more than 4.5 TB. (I am new to this environment and am waiting for access to the servers; so far I only have read-only access to OpsCenter.)

Any suggestions or recommendations on how we migrate to a newer version?

A few of my peers say that migrating a 4.5 TB catalog is all but impossible and that we need to prune the catalog before we can migrate. Is that true?

If yes, what are the best methods to prune the catalog?

And how do we upgrade NetBackup to 7.7?

Awaiting replies.

Thanks

ACCEPTED SOLUTION

mph999
Level 6
Employee Accredited

I don't agree with:

"Few of my peers says migrating catalog with 4.5 TB is highly impossible, we need to prune the catalog to migrate, Is that true ?"

However, I would agree that a 4.5 TB catalog is unusual.

The problem with a large catalog is speed:

Some commands will take longer, for example, image cleanups.

The biggest issue, however, is backup and recovery. Backing up a catalog of this size is going to take a good few hours, and the downtime needed to restore it, if ever required, may be longer than your management would allow.

Why is it so large? Are you keeping many backups at Infinity retention that you don't need to?

Either way, you have 4 options:

1.  Get rid of some backup images if not required

2.  Build a new master and start using that 

3.  Archive parts of this catalog, as per the NetBackup Admin Guide (See section about protecting the catalog).

4.  Catalog compression
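Before choosing between these options, it is worth finding out where the catalog space actually goes. A rough sketch (the path assumes a default UNIX install; adjust for your environment):

```shell
# Default UNIX image-catalog location -- adjust the path for your install.
IMGDB=/usr/openv/netbackup/db/images

# Total size of the image catalog
du -sh "$IMGDB"

# Per-client breakdown, largest first -- shows which clients (and hence
# which policies/retentions) account for most of the 4.5 TB.
du -sm "$IMGDB"/* | sort -rn | head -20
```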

 

 


13 REPLIES

Marianne
Level 6
Partner    VIP    Accredited Certified

Best to get access and see for yourself. 

A NBU master with 4.5 TB catalog is very unusual.
The recommendation is normally to setup a new master server when catalogs grow larger than 1 TB.

We have seen master servers running fine with catalogs up to about 2TB.

To prune catalogs, you first of all need to look at what is being backed up on each client and at the retention period/level.

Infinity retention means that catalogs grow and grow with nothing expired, while no company has a legal requirement to keep data forever.

Expiring old, no-longer-needed backups is one way of pruning.
Another method is to implement Catalog Archiving (described in NBU Admin Guide I).

To migrate to new hardware, prepare the new server with the same hostname, the same NBU version and patch level, installed to the same path as the old server.
Recover the catalog backup taken on the old server, then upgrade the new server.

See: 

Using catalog backup and recovery to transfer NetBackup catalogs between UNIX or Linux master servers as part of a hardware refresh
http://www.veritas.com/docs/000041077
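The hardware-refresh steps above boil down to something like the following on the replacement master (version numbers are from this thread; see the technote for the authoritative procedure and any release-specific flags):

```shell
# 1. Install the SAME NetBackup version/patch (7.5.x here) on the new
#    server, which must have the SAME hostname and install path as the
#    old master.

# 2. Make the most recent catalog backup available (tape or disk image),
#    then run the text-based catalog recovery wizard:
/usr/openv/netbackup/bin/admincmd/bprecover -wizard

# 3. Only after catalog recovery completes, and backups/restores have been
#    verified, run the 7.7 upgrade installer on the new master.
```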

revarooo
Level 6
Employee

4.5 TB is extremely large. What are your policy retentions set to? You don't have all (or a large proportion) of your policies set to Infinity, do you?

If you do, you need to speak with your stakeholders to find out how long their data needs to be retained for and adjust as necessary (maybe even expiring really old images IF they are not needed for x many years or months)

Catalog compression is also an option

You could also look at catalog archiving: http://www.veritas.com/docs/000028588
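For reference, catalog archiving in that technote is the documented bpcatlist | bpcatarc | bpcatrm chain; the cutoff date below is only an example, and it is worth running bpcatlist on its own first to preview what would be archived:

```shell
cd /usr/openv/netbackup/bin/admincmd

# Preview which image .f files would be archived (images before the cutoff)
./bpcatlist -client all -before Jan 1 2014

# Archive those catalog .f files via the catalog-archive backup, then
# remove them from disk:
./bpcatlist -client all -before Jan 1 2014 | ./bpcatarc | ./bpcatrm

# If a restore later needs an archived image, bring its .f file back:
./bpcatlist -client all -before Jan 1 2014 | ./bpcatres
```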

 

revarooo
Level 6
Employee

Also make sure that if you are backing up clients across multiple policies, you are not duplicating what you are backing up! I see this so many times, and it is a waste of resources.
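One rough way to spot clients that appear in more than one policy (the output parsing here is an assumption -- bpplclients column layout and headers vary, so verify the flags against the Commands Reference Guide for your release):

```shell
cd /usr/openv/netbackup/bin/admincmd

# Emit one "client policy" pair per line, then print clients that are seen
# in more than one policy. bpplclients prints a short header before the
# data rows, with the client name in the third column -- check yours.
for pol in $(./bppllist); do
  ./bpplclients "$pol" | awk -v p="$pol" 'NR > 2 { print $3, p }'
done | sort | awk '{ print $1 }' | uniq -d
```

A client in two policies is not automatically wrong (e.g. a filesystem policy plus an Oracle agent policy); the thing to check is overlapping backup selections.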

Nicolai
Moderator
Partner    VIP   

The catalog conversion that takes place in the 7.5 -> 7.6 or 7.7 upgrade is related to the database backup. In addition, the structure of MSDP is upgraded again in 7.7 (as it also was between 7.5 and 7.6).

I would not be that scared of the version upgrade - but planning is everything. You might even team up with a local Veritas consultant with past experience - it could save you a lot of USD in the long run. I would also highly recommend testing the upgrade in a closed lab environment first.

~Nicolai

 


jim_dalton
Level 6

...or build a totally new instance of NetBackup, unrelated to the other. Mind you, this could be pricey when it comes to licensing and extra hardware, but it's definitely an option. Jim

Genericus
Moderator
   VIP   

I run on solaris, and I upgraded to 7.6 with a compressed catalog of about 3TB, uncompressed was almost 6TB. It was not that big of a deal.

If you are really concerned, build a test server on an isolated network, restore the catalog there, and test the upgrade.

There is a command listed in the upgrade docs that gives you an image count in your catalog, and they can calculate estimated time based on that.
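The image-count check mentioned above can be done with something like this (flags per the 7.x commands reference; bpimagelist needs a start date, so use one old enough to cover everything -- verify on your release):

```shell
# Count all images in the catalog; each -idonly line is one backup image.
/usr/openv/netbackup/bin/admincmd/bpimagelist -idonly \
    -d "01/01/1970 00:00:00" | wc -l
```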

 

Ironically, after getting to 7.6, I was able to work with application groups to pare down some of my crazy long retentions, and my catalog is now half of what it was.

 

Don't let the application groups use NetBackup as an archival tool, it is for backup and recovery for two cases:

1. Disaster - recovering systems that crash, or restoring applications after major issues

2. Dumb-aster - recovering after "Ooops, I deleted my home directory"

( I have to give credit to Bob Roberts for the second, it still cracks me up)

 

NetBackup 10.2.0.1 on Flex 5360, duplicating via SLP to Access 3350, duplicating via SLP to LTO8 in SL8500 via ACSLS

jim_dalton
Level 6

...but it is an archival tool, Genericus. Not often used that way, but certainly part of its functionality. That would make the catalog grow, certainly. Jim

Marianne
Level 6
Partner    VIP    Accredited Certified
And still no confirmation of actual size...

Albatross_
Level 5

Hi Folks,

Thank you all for your suggestions. I had a meeting with my peers and we discussed the difficulty of migrating to the latest version.

It seems they are going to get help from Symantec.

Meanwhile, I have been working on pruning the old catalog. As of now the catalog size has come down to 3.6 TB.

Waiting for replies from the other app teams before pruning more of the old catalog images from the DB.

Fingers crossed.

 

Cheers

 

 

Marianne
Level 6
Partner    VIP    Accredited Certified

Curious to know how you are 'pruning' the old catalog.....

sdo
Moderator
Partner    VIP    Certified

As a test (on Windows), I just reduced the space used by the \db\images folder tree, on a small test system, from 2.5 GB to 0.5 GB (i.e. an 80% reduction), by enabling catalog compression.
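Those numbers work out to the stated 80%, and the same arithmetic applies to any before/after pair:

```shell
# Compression result from the test above: 2.5 GB down to 0.5 GB.
before_mb=2560   # 2.5 GB expressed in MB
after_mb=512     # 0.5 GB expressed in MB
reduction=$(( (before_mb - after_mb) * 100 / before_mb ))
echo "${reduction}% reduction"   # prints "80% reduction"
```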

Albatross_
Level 5

Hi, 

we have had a meeting with the app teams; according to them we need to retain the data from the last few years.

The rest of the images need to be expired.

I have been told by my peers that they are taking up the task by manually expiring the images. (I still have no direct access to those servers and expect to get access soon. Hopefully someone documents the process :) )
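For anyone writing up the manual-expiry procedure, it is typically done with bpimagelist and bpexpdate along these lines (the client name, date window and backup ID below are made-up examples; bpexpdate prompts for confirmation, and expiry is irreversible):

```shell
cd /usr/openv/netbackup/bin/admincmd

# List backup IDs for one client in the window the app team agreed to drop
./bpimagelist -client clientA -idonly \
    -d "01/01/2000 00:00:00" -e "12/31/2013 23:59:59"

# Expire one image immediately; repeat per backup ID (or script the loop)
./bpexpdate -backupid clientA_1388534400 -d 0

# Afterwards, an image cleanup reclaims the catalog space
./bpimage -cleanup -allclients
```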

 

Thanks