Backup Exec 2010 R2 duplicate job from a deduplication storage folder

Maynard01
Level 3

Sorry for the long title.

In our current environment we have a CASO server at the primary site and a media server at an off-site location. On the CASO I have a regular backup rotation where all of the jobs have a Deduplication Storage Folder on the CASO as their destination, and this has been working very well. The problem is that I am also running duplicate jobs for each server to copy this data to the off-site media server, and it is very slow. The process does work and only sends unique data across to the second media server, but I cannot complete all of my duplicate jobs in my window. I would like to know what the typical bottlenecks are when running a duplicate job from one deduplication folder to another.

I see next to no CPU usage, plenty of physical memory free, and only about a tenth of the available bandwidth in use.

Any configuration changes or performance enhancements would be greatly appreciated.

Oh, and the servers are Windows 2008 R2 with Backup Exec 2010 R2.

14 REPLIES

CraigV
Moderator
Partner    VIP    Accredited

Hi Maynard,

 

It is slow because it is busy rehydrating your backups to their original size. The minute you run a replicate job on a dedupe folder, it does this.

Read the section on Deduplication in the Admin Guide, as it explains this and gives a bit more information.

 

Thanks!

Maynard01
Level 3

I read the section of the Admin Guide, and it mentions that if you copy the data to tape it will have to rehydrate. I'm sure this is the case with a backup-to-disk folder as well. However, I did not know whether that is the case for a duplicate job whose source and destination are both Deduplication Storage Folders.

Also, I am not trying to replicate the Deduplication Storage Folder itself; I am trying to run a job that will duplicate the data that was just written by a backup job to another media server. That part is just for clarity: I've seen other posts about people trying to actually replicate or back up the storage folder as one object, and I'm not trying to do that.

So, if the data is rehydrated, then deduplicated, then sent across the wire, what is typically the bottleneck in that process?

teiva-boy
Level 6

When duplicating from one dedupe folder to another dedupe folder, and using CASO to manage it all, the data should stay in deduped form and NOT be rehydrated. If you duplicated from a dedupe folder to a plain B2D folder, it would be.

You may also want to try turning on compression in the PD.CONF file on each media server: change the compression=0 value to compression=1.
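
If you would rather script that change than edit the file by hand, a minimal sketch is below. The pd.conf path is an assumption (it varies by install), so point it at your own copy and make the change with the Backup Exec deduplication services stopped; per the TECH note quoted later in this thread, the setting only really pays off on a deduplication folder that has not yet received backups.

# Minimal sketch: flip compression=0 to compression=1 in pd.conf.
# PD_CONF is a hypothetical path -- substitute the pd.conf on your own
# media server, and stop the Backup Exec deduplication services first.
from pathlib import Path

PD_CONF = Path(r"C:\Program Files\Symantec\Backup Exec\pd.conf")  # assumption

text = PD_CONF.read_text()
if "compression=0" in text:
    PD_CONF.with_name(PD_CONF.name + ".bak").write_text(text)   # keep a copy
    PD_CONF.write_text(text.replace("compression=0", "compression=1", 1))
    print("compression enabled in", PD_CONF)
else:
    print("no 'compression=0' entry found; file left unchanged")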

You may also want to try running more jobs concurrently, if you have the bandwidth available.

Lastly, latency is the biggest killer for duplication/replication of data. A T1 can do about 8.5GB of transfer in theory, but in the real world, with dropped packets and around 40ms of latency, it can only move around 6GB of data in 24 hours. And of course your duplication window is probably even less than that, so perhaps your expectations may not be realistic? (Note you didn't state your window duration or link type/quality, so this is just conjecture on my part ;) )
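
For anyone who wants to sanity-check the latency point, here is a rough sketch of the per-stream ceiling an RTT imposes, assuming classic TCP with a 64KB window and no window scaling (the RTT figures are illustrative, not measurements from this thread):

# Back-of-envelope: single-stream TCP throughput is roughly window / RTT,
# regardless of how fast the underlying link is.
WINDOW_BYTES = 64 * 1024          # classic 64KB TCP window, no scaling

for rtt_ms in (40, 5):
    ceiling_mbps = (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms}ms -> ~{ceiling_mbps:.1f} Mbit/s per stream")

# ~13 Mbit/s at 40ms versus ~105 Mbit/s at 5ms: on a high-latency link the
# window/RTT product, not the pipe, caps each stream, which is one reason
# running several concurrent streams helps.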

Maynard01
Level 3

I did not know that compression was an option when using deduplication folders. I will try that.

The connection in question is a 50Mbit MAN link with around a 5ms response time. I'm lucky if the replication uses 20% of the link. The replication window is 14 hours.

I figure I should be able to move 15-20GB an hour across that link with no issue (and I can if I'm doing CIFS file copies), but the duplicate jobs are not going anywhere near that.
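
For reference, the raw capacity behind that estimate works out roughly as follows (a quick sketch that ignores protocol overhead):

# Rough capacity of a 50Mbit/s link, ignoring protocol overhead.
link_mbps = 50
window_hours = 14

gb_per_hour = link_mbps / 8 * 3600 / 1000   # ~22.5 GB/hour at line rate
print(f"~{gb_per_hour:.1f} GB/hour, ~{gb_per_hour * window_hours:.0f} GB per {window_hours}-hour window")
print(f"at 20% utilisation: ~{gb_per_hour * 0.2:.1f} GB/hour")

# 15-20GB/hour is roughly 70-90% of line rate, which is achievable for a
# plain file copy, so the expectation itself looks reasonable.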

teiva-boy
Level 6

If you have AV on the servers, make absolutely sure to exclude the dedupe folders, the installation directory of Backup Exec itself, the dedupe catalog location, and any running Backup Exec processes (e.g. beremote.exe, etc.).

I'm not sure offhand where the slowdown is; I'll have to play with it in my lab to see where it could be improved.

Maynard01
Level 3

Please let me know what happens in your lab.

There is no AV on these servers until this issue gets resolved.

Maynard01
Level 3

Did you get a chance to set something similar up in your lab?

bktbo
Level 2

I would be interested to know how much data you are backing up?

My setup is:

MMS at the server location and CASO at my office.
All jobs are configured on the CASO, but I choose the MMS PDDE as the destination device for the initial backup jobs.
Templates are configured for all jobs to: backup-verify-dedupe.
The link is 100Mb MOE.
 

I find that verify takes the longest time, and I wish I didn't have to do it. :)
However, I only verify one destination (not on both the CASO and the MMS), though I think you are supposed to do both.
As an example, a verify of a 330GB job runs at 730MB/min and takes 7 hours.
The dedupe of the same job runs at a job rate of 17,000MB/min but also takes 7 hours, as it is waiting for the verify to complete.

I am pushing close to 500GB each 12-hour window. Dedupe rates to the offsite CASO are fun to watch as they approach 24,000MB/min! This is once I have done several backups, of course, and the data is not changing much.
On average we are achieving 100:1 dedupe rates night to night across all jobs.

A metric you may be interested in (to gauge your link speed) is that I also back up a SQL .bak file across the network to a B2D folder. The .bak file is 54GB; it runs at an average of 560MB/min and completes in 1:35.
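
As a quick cross-check, the reported rates and durations are consistent with each other (a rough sketch, treating 1GB as 1000MB the way the job-log rates appear to):

# Sanity-check: job duration = size / rate (1GB taken as 1000MB here).
def duration_minutes(size_gb, rate_mb_per_min):
    return size_gb * 1000 / rate_mb_per_min

print(f"54GB at 560MB/min  -> ~{duration_minutes(54, 560):.0f} minutes (reported 1:35)")
print(f"330GB at 730MB/min -> ~{duration_minutes(330, 730) / 60:.1f} hours (reported ~7 hours)")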
 

That is interesting about the PD.conf file. I am going to check that.

If you would like more information, let me know, as I spent some time with support getting this set up. I don't claim it's perfect, so I am always looking for tweaks to the setup.
 

dgunner
Level 3
Partner

Interested to know how this goes.

I have my CASO at the remote (DR) site across a 10Mbps LES circuit.

I find that optimised duplication often fails and is very slow to complete. I've done tests running a backup, then an optimised duplication, then repeating both, and found that the second optimised duplication takes just as long as the first despite there not being any changed data.

 

I'm not convinced that all the issues with dedupe have been fixed in R2 - even with the patches a few weeks ago.

I often get jobs failing with errors like: 


Source backup set had completed with following error/exceptions.

V-79-57344-33329 - Library - cleaning media was mounted.

or

V-79-57344-1543 - Backup Exec cannot copy the deduplicated data from the source device to the destination device. The maximum image size at which Backup Exec splits the data stream on the destination device is smaller than the image on the source device.

You can increase the size at which Backup Exec splits the data stream and spans to a new image, and then try the job again.  To edit this option, click Devices and then right-click the destination device and select Properties. Then on the properties dialog, click the Advanced tab.

 

Not wanting to hijack the thread, but I have one of the top Symantec engineers due to look at this and would be happy to raise the speed issue. It's a great opportunity to raise any dedupe issues, so if anyone wants me to ask anything please let me know.

teiva-boy
Level 6

I don't care what Symantec says in their best-practice docs; verifies on your dedupe storage are not so smart. It's their way of covering their butt. The reality is that BE has to reassemble all of that data to verify it, which greatly increases the backup times or policy duration.

Verify the media that is most susceptible to corruption: tape, cheap consumer disk, or large (>500GB) drives. Outside of that, just don't do it, IMO.

bktbo
Level 2

Anything you can learn from Symantec to optimize PDDE backups would be great.

I have not had any of the errors you listed, but please post any findings, whether in this or another BE 2010 R2 thread. I'll do the same.

bktbo
Level 2

http://www.symantec.com/business/support/index?page=content&id=TECH127779

 

Setting compression value to 1.

NOTE: This should only be done to a new deduplication folder before any backups have been written. If this is applied after backups have been run, compression probably would not be realized, as copies of much of the target data may already exist in the deduplication folder and only parts of the modified data might be considered for compression.

 

I would have liked to try this but didn't know about it. My dedupe folders have been in place for several months, so I can't try it now. Perhaps in the lab.

Maynard01
Level 3

I've now got a case open with Symantec. I will let you know what I find and what recommendations are made.

I'm glad to see that there is interest in this issue. I've had a heck of a time finding good information on how this process works, let alone how to refine the jobs/configuration to get good speed.

For our current setup we are backing up around 2.5TB of data with a daily change rate of between 50 and 100GB. It seems reasonable to expect to move 100GB of changes across a 50Mbit link in a 16-hour window.
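
Putting rough numbers on that expectation (a quick sketch that ignores protocol overhead):

# Sustained rate needed to move 100GB of changed data in a 16-hour window,
# compared against a 50Mbit/s link.
change_gb = 100
window_hours = 16
link_mbps = 50

needed_mbps = change_gb * 1000 * 8 / (window_hours * 3600)
print(f"~{needed_mbps:.1f} Mbit/s sustained, ~{needed_mbps / link_mbps:.0%} of the link")

# Roughly 14 Mbit/s, under a third of the link, so the window itself is
# not the constraint.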

From the initial conversation with Symantec, there seems to be a spoold.exe process that does most of the comparison to see what needs to be moved over. This process is single-threaded and does not stay as busy as I would expect.

Our disk latency and usage are low, there is plenty of free memory, and a great deal of CPU is sitting idle. I hope to find out how to better utilize these resources.

teiva-boy
Level 6

Sorry I have not gotten into my lab for a few days due to travel and such.

However, back on point: the dedupe process is single-threaded. Even with dual CPUs and multiple cores, it will only use a maximum of 85% of one core for target-side dedupe, and only 75% of one core for client-side dedupe. This is hard-coded into the product, and there is no way to adjust it that I'm aware of.

I believe you need to create multiple dedupe streams between the media servers, i.e. multiple concurrent duplicate jobs, to leverage more CPU power.