Deduplication & Optimized duplication - multiple jobs for one server?

321-b
Level 4

Hi There,

I have a number of servers that I want to back up with BE2014, using the deduplication add-on. I will also be adding a stage to duplicate certain backups to an offsite, shared dedupe store.

I have a query about using multiple jobs for the same server.

Say I create job A (consisting of fulls and incrementals), and this job backs up one server to the primary dedupe store and will be duplicated to another dedupe store offsite.

Then I go along and create job B with different retention settings (also backing up the same server, to the same dedupe store), and this is also duplicated to the offsite dedupe store.

Will job B go off and create a completely new batch of files (e.g. full backups) on the primary dedupe store, or will everything be completely deduplicated, with no duplicate files created, because the store can 'see' all of the full backups that were previously created via job A?

If new backups will be created, will the same thing happen in the offsite dedupe store, with the duplicate jobs also creating new backups in the offsite store?

 

Basically, I will be creating one job for daily/weekly backups, and the weekly will be duplicated offsite. I will be creating a separate monthly job with suitable retention, and this will have a duplicate stage to send it offsite. I will also be creating separate quarterly and annual jobs with suitable retention for them as well.

Is this a good approach, and will this all work together and deduplicate well?

 

Thanks in advance for your advice

 

5 REPLIES

pkh
Moderator · VIP · Certified

Your questions should be answered by my article below:

https://www-secure.symantec.com/connect/articles/deduplication-simplified-part-1-backup

321-b
Level 4

Thanks for the reply. I'm still looking for an answer though.

So my plan is good? Multiple separate jobs are all tied in to each other, and even though a full backup might be in a completely different job, it is still fully aware of all of the data that was backed up by the other jobs?

It doesn't seem to be, as the full backups in a different job chain seem to take longer, and the job doesn't seem to realise that a full backup was completed recently under a different job spec.

Apologies if this is confusing; it is difficult to explain in writing, I guess.

pkh
Moderator · VIP · Certified
The length of each job does not depend on what is already stored in the dedup folder. It depends on a lot of other factors, like contention at the source, etc.

LegAEI
Level 5

You don't need to create different jobs; you can run multiple stages in the same job and add offsite duplicate stages to your backup stages.

My backup plan is the same as yours, except I duplicate my incremental jobs offsite as well. My current job looks like this:

Source server -

  1. Full Weekly - 1st and 3rd Saturday (or 2nd and 4th, but weekly works)
    • Verify as separate job on completion
    • Duplicate to offsite storage on completion (while verify runs)
  2. Incremental daily - Weekdays
    • Verify as separate job on completion
    • Duplicate to offsite Storage on completion (while verify runs)
  3. Quarterly backup to tape
    • Duplicate from most recent backup to long term storage vault

Every one of my source servers has the same setup, roughly like the sketch below.
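
To make the layout concrete, here is a rough sketch of that per-server template as plain Python data. The names and structure are purely illustrative (this is not BEMCLI or any real Backup Exec API); it just shows that one template is stamped out identically for each source server.

```python
# Hypothetical sketch of the per-server job layout described above.
# Nothing here is a real Backup Exec object or cmdlet.

JOB_TEMPLATE = {
    "full_weekly": {
        "schedule": "Saturdays",
        "on_completion": ["verify (separate chained job)",
                          "duplicate to offsite storage (while verify runs)"],
    },
    "incremental_daily": {
        "schedule": "weekdays",
        "on_completion": ["verify (separate chained job)",
                          "duplicate to offsite storage (while verify runs)"],
    },
    "quarterly_tape": {
        "schedule": "quarterly",
        "on_completion": ["duplicate most recent backup to long-term vault"],
    },
}

def plan_for(servers):
    """Every source server gets an identical copy of the template."""
    return {server: JOB_TEMPLATE for server in servers}

print(list(plan_for(["FILESRV01", "SQLSRV01"])))
```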

I've found that a great key to running many jobs at the same time is to break out the verification as a separate chained job instead of making it part of the backup; that way you save a lot of time pushing your duplicate.

Having multiple jobs against a source server is rather redundant unless you need to do something like back up a single service instance (AD DS, SQL), and even then it might not be worthwhile to have a separate job.

Also, BIG NOTE: Backup Exec will report the live data set size in the console, NOT the backup set size; you will have to view the job history or run a deduplication report against the server to see the actual amount of data sent to it.
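
As an aside, the arithmetic for backing out the dedup ratio from those two figures is simple. The sizes below are invented purely for illustration:

```python
# Illustrative arithmetic only; the sizes are made-up examples.
# front_end = live data set size (what the console shows)
# stored    = what was actually written to the dedup folder

front_end_bytes = 10 * 1024**4   # 10 TiB of protected (live) data
stored_bytes = 1.2 * 1024**4     # 1.2 TiB actually written

dedup_ratio = front_end_bytes / stored_bytes
space_saved = 1 - stored_bytes / front_end_bytes

print(f"Dedup ratio: {dedup_ratio:.1f}:1")  # -> 8.3:1
print(f"Space saved: {space_saved:.0%}")    # -> 88%
```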

NetworkCompany
Level 4

I am amazed by the efficiency of BE's dedup process. The answer is yes, except when dealing with compressed files.

BE will deduplicate across all your backup jobs, regardless of when they run or their type. The dedup storage treats duplicate blocks of data from a single host the same way it treats them from multiple hosts: when data comes in, if a block matches something already in storage, it's deduplicated.
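
To illustrate the principle (this is not Backup Exec's actual engine, which is far more sophisticated), here is a minimal block-level dedup sketch, assuming fixed-size blocks and SHA-256 fingerprints. The key point for the original question is that the store keys on block content, not on which job or host wrote it:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # fixed 128 KiB blocks; an arbitrary choice here

store = {}  # fingerprint -> block data (the "dedup store")

def ingest(data: bytes) -> list[str]:
    """Split a backup stream into blocks; write only unseen blocks."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:   # new block: store it once
            store[fp] = block
        recipe.append(fp)     # every block is referenced either way
    return recipe

# Two "jobs" backing up overlapping data share all common blocks:
job_a = ingest(b"A" * BLOCK_SIZE * 3)                      # 3 blocks ingested
job_b = ingest(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)  # 4 blocks ingested
print(len(store))  # 2 unique blocks actually stored, not 7
```

This also shows why compressed files dedup poorly: compression scrambles the byte stream, so otherwise-identical content produces different blocks and therefore different fingerprints.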

Because BE is so incredibly efficient, we chose a slightly different approach to offsite storage. We use a huge dedup array and set the retention period to a year or more; that way, we always have local backup sets to restore from. Each month we back up the entire BE server, including the system state, where the deduplicated data lives. When backing up this way, the data stays deduplicated and streams in one shot to the multiple tapes we send offsite. We lose the ability to restore individual files from the offsite tapes, but our offsite container is a whole lot smaller and still serves the purpose of having an offsite copy of everything in the event of a disaster.

Looking at our storage today, it only takes 4 tapes to get all the dedup storage. If we backed up all that storage un-deduplicated, it would require 14 tapes per month, so this cuts our tape count by roughly 70%. It saves us a huge amount of time and tape storage to do it this way.

It's almost a GFS (grandfather-father-son) backup like I'm used to, minus the yearly points in time. If I could request a feature from Symantec for BE2016, it would be the ability to retain catalogs from the system state backup, so we could restore individual files rather than doing a complete restore of the server from tape when all we want is a single file.