02-18-2020 10:51 AM
I am using NetBackup server 8.1.1 in 10 different locations, each with a different setup. I am planning to consolidate the master servers so that a single master server will manage the backups for all locations. The combined NetBackup catalog size across all locations is 3 TB.
Please share or suggest any documents, KB articles, or admin guides that cover this.
Solved! Go to Solution.
03-01-2020 01:48 PM
As for the catalog, there are no special requirements or limitations for the situation you are describing, so the general recommendations apply: try to limit your catalog size to a maximum of 2 TB. That said, there is no reason why the catalog cannot be larger than that (I know of a number of instances with very large catalogs, up to 8 TB). The challenge with a large catalog is protecting it adequately.
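To illustrate why protecting a large catalog is the hard part, here is a rough estimate of the catalog backup window. The 3 TB figure is from the question; the 500 MB/s sustained throughput is purely an assumed number for the sketch, not a NetBackup specification:

```python
# Rough, hypothetical estimate of a full catalog backup window.
# The throughput figure is an assumption for illustration only.
catalog_tb = 3.0            # catalog size from the question (TB)
throughput_mb_s = 500.0     # ASSUMED sustained backup throughput (MB/s)

catalog_mb = catalog_tb * 1024 * 1024              # TB -> MB
backup_hours = catalog_mb / throughput_mb_s / 3600 # seconds -> hours

print(f"A {catalog_tb} TB catalog at {throughput_mb_s:.0f} MB/s "
      f"takes roughly {backup_hours:.1f} hours to back up")
```

At 8 TB the same assumed rate works out to roughly 4.7 hours, which is why catalog protection (and the recovery time it implies) becomes the real constraint.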
You are correct that each client will only need to send the backup metadata to the master. If your WAN links are prone to flap, this is a problem for NetBackup. The resilient network feature can help manage this, but if your links are truly unreliable you will have problems.
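If memory serves, resilient connections are enabled per client or per subnet via the RESILIENT_NETWORK entry in the master server's bp.conf (or Host Properties > Resilient Network in the Admin Console). The syntax below is from memory, so verify it against the admin guide for your release:

```
# Enable resilient connections for clients in a remote subnet
# (syntax from memory - confirm against the NetBackup admin guide)
RESILIENT_NETWORK = 192.168.100.0/24 ON

# Or for a single client (hypothetical hostname):
RESILIENT_NETWORK = client01.remote.example.com ON
```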
The other thing to consider is the total number of jobs per day. Up to 8.1.2 there was an absolute maximum of 1 job per second (more realistically one-half to one-third of that in practice), so check whether the proposed master can handle the load.
BTW - this limit has been addressed in 8.2 (https://www.veritas.com/content/support/en_US/doc/103228346-135207307-0/v136155851-135207307)
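To put the pre-8.2 dispatch limit in perspective, a quick back-of-the-envelope calculation (the 1 job/second ceiling and the one-half to one-third practical factor are the figures mentioned above; the rest is arithmetic):

```python
# Capacity check for a consolidated master, based on the
# pre-8.2 dispatch ceiling of ~1 job/second discussed above.
SECONDS_PER_DAY = 24 * 60 * 60               # 86,400

theoretical_max_jobs = SECONDS_PER_DAY * 1   # 1 job/sec absolute ceiling
practical_low = theoretical_max_jobs // 3    # ~1/3 of ceiling in practice
practical_high = theoretical_max_jobs // 2   # ~1/2 of ceiling in practice

print(f"Theoretical max: {theoretical_max_jobs} jobs/day")
print(f"Practical range: {practical_low}-{practical_high} jobs/day")
```

So a single pre-8.2 master tops out somewhere around 29,000-43,000 jobs per day in practice; add up the daily job counts of all the masters you plan to retire and compare against that.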
02-19-2020 06:23 AM
How far apart are your locations? And what is the network connection speed between them?
I've seen a consolidation similar to this, but only within the same city; I was not impressed by the performance at the time.
The challenge is: what are you going to do about the catalogs of the masters being retired?
If you want to keep their information and merge it into the new master, I believe you'll need to engage consulting services from Veritas. If you're just going to let the old catalogs expire, then that's much simpler.
Perhaps an alternative plan could be to use a single monitoring tool, like OpsCenter, that can collectively monitor all your master servers?
02-21-2020 06:58 AM
Hello,
As @DPFreelance suggested, try to engage Veritas on this; it is not that simple, especially the catalog part.
03-01-2020 05:41 AM
Thanks DPSafelite,
I have 4 locations. Network connectivity between each location is 1 Gbps.
We will not merge the catalogs. I am not worried about the catalog, as this will be a completely new setup with new hardware. I am worried about the points below: