Forum Discussion

Maynard01
Level 3
14 years ago

Backup Exec 2010 R2 duplicate job from a deduplication storage folder

Sorry for the long title.

In our current environment we have a CASO server at the primary site and a media server at an off-site location. On the CASO I have a regular backup rotation where all of the jobs have a Deduplication Storage Folder on the CASO as their destination. This has been working very well. The problem is that I am also running duplicate jobs for each server to copy this data to the off-site media server, and they are very slow. The process does work and only sends unique data across to the second media server, but I cannot complete all of my duplicate jobs within my window. I would like to know what the typical bottlenecks are when running a duplicate job from one deduplication folder to another.

I see next to no CPU usage, plenty of physical memory left, and only about a tenth of the available bandwidth in use.
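
To put a number on "about a tenth of the available bandwidth": the utilization can be estimated from a duplicate job's byte count and elapsed time, as in the minimal Python sketch below. The job figures here are hypothetical placeholders rather than real values from our jobs, and the 50Mbit link speed is just a parameter.

```python
# Rough link-utilization estimate for a duplicate job.
# The byte count and elapsed time below are placeholders; substitute
# the figures reported in the Backup Exec job history.

def link_utilization(bytes_transferred, elapsed_seconds, link_mbit=50):
    """Return the effective transfer rate (MB/s) and the fraction of the link used."""
    link_mb_per_s = link_mbit / 8                      # 50 Mbit/s -> 6.25 MB/s
    rate_mb_per_s = bytes_transferred / 1e6 / elapsed_seconds
    return rate_mb_per_s, rate_mb_per_s / link_mb_per_s

rate, used = link_utilization(bytes_transferred=20e9, elapsed_seconds=8 * 3600)
print(f"{rate:.2f} MB/s ({used:.0%} of the link)")     # ~0.69 MB/s (11% of the link)
```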

Any configuration changes or performance enhancements would be greatly appreciated.

Oh, and the servers are Windows 2008 R2 with Backup Exec 2010 R2.

14 Replies

  • Anything you can learn from Symantec to optimize PDDE backups would be great.

    I have not had any of the errors you listed, but please post any findings, whether in this or another BE 2010 R2 thread. I'll do the same.

  • http://www.symantec.com/business/support/index?page=content&id=TECH127779

     

    Setting compression value to 1.

    NOTE: This should only be done to a new deduplication folder before any backups have been written. If this is applied after backups have been run, compression probably would not be realized, as copies of much of the target data may already exist in the deduplication folder and only parts of the modified data might be considered for compression.

     

    I would have liked to try this but didn't know about it. My dedupe folders have been in place for several months, so I can't try it now. Perhaps in the lab.

  • I've now got a case open with Symantec. I will let you know what I find and what recommendations are made.

    I'm glad to see that there is interest in this issue. I've had a heck of a time finding good information on how this process works, let alone how to refine the jobs and configuration to obtain good speed.

    For our current setup we are backing up around 2.5TB of data with a daily change rate of between 50 and 100GB. Moving 100GB of changes across a 50Mbit link in a 16-hour window seems reasonable (see the rough numbers at the end of this post).

    From the initial conversation with Symantec, it seems there is a spoold.exe process that does most of the comparison work to determine what needs to be moved over (a generic sketch of that kind of comparison is also at the end of this post). This process is single-threaded and does not stay as busy as I would expect.

    Our disk latency and usage are low, there is plenty of free memory, and a great deal of CPU headroom. I hope to find out how to better utilize these resources.
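
    As a rough sanity check on the numbers above, assuming the 50Mbit link could be driven at close to line rate for the whole 16-hour window (a best-case figure that ignores protocol overhead):

    ```python
    # Back-of-the-envelope capacity of a 16-hour window on a 50Mbit link.
    link_mbit_per_s = 50
    window_hours = 16
    daily_change_gb = 100

    mb_per_s = link_mbit_per_s / 8                               # 6.25 MB/s
    window_capacity_gb = mb_per_s * 3600 * window_hours / 1000   # ~360 GB per window

    print(f"Window capacity: {window_capacity_gb:.0f} GB")
    print(f"Average utilization needed: {daily_change_gb / window_capacity_gb:.0%}")  # ~28%
    ```

    So even with no deduplication savings at all, the link only needs to average roughly 28% utilization to move 100GB within the window, which points to a bottleneck somewhere other than the WAN.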
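
    For anyone unfamiliar with how an optimized duplicate avoids resending data, the general idea is fingerprint comparison: the source hashes each data segment and only ships the segments whose fingerprints the target store does not already hold. The sketch below is a generic, single-pass illustration of that idea, not Symantec's actual spoold.exe implementation:

    ```python
    import hashlib

    def segment_fingerprints(data, segment_size=128 * 1024):
        """Split a byte stream into fixed-size segments and fingerprint each one."""
        for offset in range(0, len(data), segment_size):
            segment = data[offset:offset + segment_size]
            yield hashlib.sha1(segment).hexdigest(), segment

    def optimized_duplicate(source_data, target_fingerprints):
        """Return only the segments the target store does not already hold."""
        unique_segments = []
        for digest, segment in segment_fingerprints(source_data):
            if digest not in target_fingerprints:   # the comparison step: one pass, one thread
                unique_segments.append((digest, segment))
        return unique_segments
    ```

    If that comparison loop runs on a single thread, it can become the bottleneck long before the CPU, disks, or link are saturated, which would match the symptoms described above.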

  • Sorry I have not gotten into my lab for a few days due to travel and such.

    However, back on point: the dedupe process is single-threaded. Even with dual CPUs and multiple cores, it will only use a maximum of 85% of one core for target-side dedupe, and only 75% of one core for client-side dedupe. This is hard-coded into the product, and there is no way to adjust it that I'm aware of.

    I believe you need to create multiple dedupe streams between the media servers, that is, run multiple concurrent streams, to leverage more CPU power.
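
    To put some rough numbers on that, here is a sketch of how many concurrent streams it would take to move the daily change within the window if each stream tops out at a fixed per-stream rate. The 0.7 MB/s per-stream figure is a made-up placeholder, not a measured value:

    ```python
    import math

    def streams_needed(change_gb, window_hours, per_stream_mb_s, link_mbit=50):
        """Estimate the concurrent streams required to move change_gb within the window."""
        required_mb_s = change_gb * 1000 / (window_hours * 3600)
        link_mb_s = link_mbit / 8
        per_stream = min(per_stream_mb_s, link_mb_s)   # one stream can never exceed the link
        return required_mb_s, math.ceil(required_mb_s / per_stream)

    # Hypothetical: 100 GB in 16 hours with each stream managing ~0.7 MB/s
    required, n = streams_needed(change_gb=100, window_hours=16, per_stream_mb_s=0.7)
    print(f"Need about {required:.2f} MB/s overall -> roughly {n} concurrent streams")
    ```

    Aggregate throughput should scale roughly with the number of concurrent streams until the disks, the link, or the total core count become the limit.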