
nbdeployutil Not showing info from all Media Servers

Jake_NotFromSta
Level 2

I'm trying to find out what is using up all of the space on my two NetBackup appliances: one is a master/media server, the other just a media server.

I've run nbdeployutil on the master server and the resulting report says capacity totals "97.47TB", which is pretty much exactly how much data is stored on one appliance. However, I have two appliances that should each hold about 97 TB of unique data.

I'm not sure if my setup is configured wrong or if nbdeployutil just can't see beyond the master server's media.

 

Both appliances have about 120TB of space and are around 85% full.

 

Any help would be most appreciated.

3 REPLIES

Marianne
Moderator
Partner    VIP    Accredited Certified
nbdeployutil is meant to report on front-end capacity, not the total amount of data stored in the environment. Front-end data is the sum equivalent of one full backup per client.
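To make that distinction concrete, here is a minimal Python sketch of how front-end capacity can differ from raw disk consumption. All client names, sizes, and the dedup model are illustrative assumptions, not figures from this thread or nbdeployutil's actual method:

```python
# Illustrative only: front-end terabytes (FETB) count one full backup
# per client, which is the figure nbdeployutil reports.
clients = {          # hypothetical clients: size of one full backup, in TB
    "db01": 10.0,
    "fs01": 4.5,
    "app01": 2.0,
}
fetb = sum(clients.values())
print(f"Front-end capacity: {fetb:.2f} TB")

# Raw disk consumption, by contrast, depends on retention and dedup.
# Toy model: the first full is stored whole; each later retained full
# adds only its non-deduplicated fraction.
retained_fulls = 6   # e.g. six weekly fulls kept on disk
dedup_rate = 0.90    # assume 90% of each later full dedups away
raw = fetb * (1 + (retained_fulls - 1) * (1 - dedup_rate))
print(f"Approx. raw stored: {raw:.2f} TB")
```

Under this toy model the same 16.5 TB of front-end data occupies about 24.75 TB raw, and the gap widens as retention grows or dedup rates fall, which is why the nbdeployutil figure and the appliance usage can diverge so much.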

Jake_NotFromSta
Level 2
I see. I guess I read that, but it didn't click what that meant until now, when it's not 1:00 AM. Is there any way to do a chargeback to see the actual disk space taken up by a client, or to find out the dedupe rate of an individual client? Or perhaps to find out how many unique data units are assigned to an individual client?

Marc_LHeureux
Level 4
Partner Accredited

This is one of the major challenges when using dedup disk storage, and as far as I know there's no easy answer.  I mostly deal with OST storage servers but have some experience with MSDP ones; the theory is very much the same.

If I understand your problem correctly, you're trying to figure out which clients are the worst offenders consuming the usable capacity of your storage servers. Once you know that, you'll either improve their consumption somehow, find an alternate backup solution for them, or at least have some justification for purchasing more storage.

The first few thoughts right off the bat are:

  1. Do you keep the same clients on the same storage servers every time or are they bouncing back and forth?  You will get better dedup if you keep similar clients on the same storage server and keep all your full backups for any given client on the same storage server.
  2. Having your backups span two storage servers will noticeably hurt your global dedup rate.  Even though data is common across many clients (e.g. C:\Windows) it will still have to have a copy on each storage server.  We found that we'd need at least 25% more storage by having two smaller dedup pools instead of one bigger one.
  3. You can get some hints about each client's greediness from the Deduplication Rate column in the Activity Monitor.  Check the results for both full and incremental backups.  You might find the worst offenders pretty quickly that way.  For example, if most clients are reporting 95% dedup on their full backups but a few report only 50%, find out why those are different.  Similarly for incrementals, but the dedup rates will be lower: say 50% on average, with the worst offenders getting less than 10%.
  4. Deduplication rates depend heavily on your data's daily change rate and retention period.  Any clients whose daily incremental backups (any day for differentials; for cumulatives, the day after the full, or the delta between consecutive days) are larger than average will warrant further investigation.
  5. Data types also play a significant role in deduplication.  We have a 6-week retention; we ran some calculations and found that database data requires 120% of usable backup storage for the given FETB after dedup and compression, but filesystem data only requires 85%.  Meaning if we have 100GB of database data, we need 120GB on our storage server to retain 6 weeks of backups, but 100GB of standard OS backups only takes 85GB.  It sounds like you're needing 200% of storage per FETB (97TB from nbdeployutil -> 2 x 97TB on the appliances).  I'm guessing your retention periods are longer than 6 weeks; taking that into account along with the fact that you have two appliances (see point #2), you could be on par with us.
  6. Make sure you don't have any backups being retained longer than expected.  I recently found a series of database backups over a period of a few weeks that were many months past their intended expiration; deleting those gave back more than 15% of my usable space.
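A minimal sketch of the multiplier arithmetic in point 5. The 120%/85% ratios are the ones quoted above; the function name and data-type labels are mine, and the "effective multiplier" line just restates the poster's two-appliance numbers:

```python
# Storage multipliers per data type at a 6-week retention, taken from
# the figures quoted above (dedup and compression already accounted for).
multipliers = {"database": 1.20, "filesystem": 0.85}

def required_tb(fetb_tb, data_type):
    """Disk needed to retain the full backup cycle for fetb_tb of front-end data."""
    return fetb_tb * multipliers[data_type]

print(f"{required_tb(0.100, 'database'):.3f} TB")    # for 100 GB of DB data
print(f"{required_tb(0.100, 'filesystem'):.3f} TB")  # for 100 GB of OS data

# The original poster's effective multiplier: ~97 TB FETB spread across
# two appliances of ~120 TB that are each ~85% full.
effective = (2 * 0.85 * 120) / 97
print(f"Effective multiplier: {effective:.2f}x")
```

Anything much above your environment's expected multiplier for its data mix and retention is a candidate for the investigations in points 3, 4, and 6.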

Your last comment - "find out how many unique data units are assigned to an individual client" - the last time I checked, there weren't any tools to find out how many blocks on disk were referenced by only a single client or backup set.  We've had problems when an appliance was getting full and the business owner wanted us to tell them which clients should be removed and expunged from backup to free up enough disk space for the remaining clients.  We spent many weeks with our Veritas account and support teams, thought we had found some candidates, expunged many backups with some risk to data protection, and barely made a dent in our usage.  In the end the business conceded and picked up another appliance, but at least we made a concerted effort.

I'm curious to know how this plays out for you, what your general findings are.