
Optimized Synthetic Backups and Image Cataloging

Steven_Moran
Level 4
Accredited Certified

Stating this to make sure I'm clear on the feature:

- An Optimized Synthetic Backup is a Full backup produced at the Storage Server using the OST metadata from past backups and the "current" incremental backup. Only the metadata for the Opt-Synth Full backup is produced, not a new collection of image data (as with a Legacy Synthetic backup).

- Once Accelerator is properly enabled and baselined for a client, this process happens during each subsequent Accelerator backup of that client.
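
To check my mental model, here's a toy sketch of what I understand the storage server to be doing (plain Python, purely conceptual; the names and structures are made up for illustration and have nothing to do with actual NetBackup internals):

```python
# Toy model of an optimized synthetic full: the "new full" is just new
# metadata that references segments already on the storage server, plus
# the few segments written by the latest incremental. No image data is
# copied. Everything here is illustrative, not NetBackup internals.

# Previous full: file name -> ID of the segment holding its data.
previous_full = {"a.txt": "seg01", "b.txt": "seg02", "c.txt": "seg03"}

# Incremental: only b.txt changed, so only one new segment was written.
incremental = {"b.txt": "seg04"}

# Optimized synthetic full: merge the metadata. Unchanged files still
# point at their old segments; changed files point at the new ones.
optimized_synthetic_full = {**previous_full, **incremental}

print(optimized_synthetic_full)
# {'a.txt': 'seg01', 'b.txt': 'seg04', 'c.txt': 'seg03'}
```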

Assuming the prior two points are correct (as I understand them to be), does this not mean that the catalog files for that client on the Master Server will always be "full" sized, recording all files in the Backup Selection range, even though only some files were actually involved in the client backup? I have a student who wants to use Accelerator for a large volume of files, but he is also concerned about his catalog size.

I'm also assuming that the Master Server catalogs for these Opt-Synths still use the retention settings from the Incremental schedules in the backup policy, so they can be retained for a much shorter time. Arguably, since we now have a full restore point at each incremental, we don't need to keep a record of those incrementals for as long as in traditional backup plans.

4 REPLIES

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

The simple answer here is, the Master doesn't know what storage is used for the backup... well, it sort of does, but that's not the point here. The storage could be basic disk, tape, or some flavour of OST. The Master, however, has to keep a record of each file that needs to be restored, i.e. catalog every entry in every full. This is a downside of Accelerator, but you can use incrementals to reduce the catalog footprint.
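
To put rough numbers on that, here is a back-of-envelope Python sketch; the file count, change rate, bytes-per-entry, and retention below are all assumed values for illustration, so measure your own environment before planning around them:

```python
# Back-of-envelope catalog sizing. Every number here is an assumption
# made for illustration only.

files = 10_000_000      # files on the client (assumed)
daily_change = 0.02     # fraction of files changing per day (assumed)
bytes_per_entry = 150   # rough catalog cost per file record (assumed)
retention_days = 30     # image retention (assumed)

# Scheme A: Accelerator with a FULL schedule every day. Every image
# catalogs every file, so entries pile up fast.
full_every_day = retention_days * files * bytes_per_entry

# Scheme B: weekly full + daily incrementals, same retention. Fulls
# catalog every file; incrementals catalog only the changed files.
fulls_retained = retention_days // 7
incrs_retained = retention_days - fulls_retained
full_plus_incr = (fulls_retained * files
                  + incrs_retained * files * daily_change) * bytes_per_entry

gib = 1024 ** 3
print(f"Full every day:      {full_every_day / gib:.1f} GiB")
print(f"Weekly full + incrs: {full_plus_incr / gib:.1f} GiB")
```

With those made-up numbers the daily-full scheme carries roughly six to seven times the catalog weight of the full-plus-incremental scheme, which is the footprint reduction I mean.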

 

If in future we could deduplicate the catalog, that would be cool but also scary at the same time :)

Nicolai
Moderator
Partner    VIP   

If you are using MSDP and Accelerator and configure the schedule as a full every day (because an incremental and a full have the same runtime), then the catalog size will explode.

Key point: don't run a full every day just because you can. Remember, if all backups are on disk there is nearly no overhead in "jumping backups" during a restore, compared to the tape world where the impact is big.

Steven_Moran
Level 4
Accredited Certified

@Nicolai

That's precisely why I asked my question. From a schedule perspective, a backup policy using Accelerator would follow a typical Full + Incrementals pattern, but the storage server is Opt-Synth-ing every incremental into a full, which then has to be effectively catalogued as another full by the Master server.

I'll just tell my student to set the retention period of his incremental schedules to something short, like maybe two or three days.

Nicolai
Moderator
Partner    VIP   

I don't think you're right that the incremental is opt-synth'ed into a full.

If running a full backup every day and using MSDP and Accelerator, only changed blocks will be transferred, and a new "full" backup will be synthetically created (because the schedule is FULL). Running a full every day will cause the NetBackup catalog to grow fast.

If running full and incremental schedules using MSDP and Accelerator, only the changed files in the incremental will be registered in the catalog, thus reducing the catalog footprint. But as I mentioned before, there is nearly no impact in having incremental backups when they are located on disk, compared to tape. So I recommend going with a full and incremental scheme.

Since incremental backups protect changed files, a short retention of only a few days will cause loss of that information.
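
A toy illustration of what expires (Python; the dates and retention below are assumed values, purely for illustration): with weekly fulls and daily incrementals, a file version that was captured only by an incremental is unrestorable once that incremental expires.

```python
# Toy example: weekly fulls, daily incrementals, incremental retention
# of 3 days (all assumed). A file changed on day 2, so that version is
# held only by the day-2 incremental; the next full (day 7) will back
# up whatever the file contains by then, not the day-2 version.

incr_retention = 3   # days (assumed)
changed_on = 2       # day the file changed (assumed)

for today in range(2, 8):
    alive = today - changed_on < incr_retention
    status = "restorable" if alive else "EXPIRED - that version is lost"
    print(f"day {today}: day-2 version is {status}")
```

So if your student ever needs to get back, say, last week's version of a frequently edited file, two or three days of incremental retention won't be enough.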