
Backup Exec 2010 - Backup to Disk Speed

dennishdl
Level 2

Just finished evaluating Backup Exec 2010 using Backup to Disk and noticed something rather odd.   

The backup job speed starts off fantastic, going from 3,000MB/min to as high as 4,200MB/min within the first 30 minutes.  But once it reaches that "cap" it slowly starts slowing down, one MB at a time.  It's quite strange - the job rate counter just slowly drops a MB every 1 - 2 seconds.  When the job completes the speed is ~1,000MB/min for a ~600GB job.  

I tested backing up from an internal SATA HDD to another internal SATA HDD, as well as to an eSATA drive, and each time it did the exact same thing.   

This is an improvement though - I couldn't even get Backup Exec 12.5 to successfully complete a Backup to Disk job.  If I could only find a way for the speed to be consistent, this would be perfect.  

Anyone else experiencing anything like this?

3 REPLIES

CraigV
Moderator
Partner    VIP    Accredited

SATA isn't the fastest disk around unfortunately.
If you're backing up large files - Exchange Information Stores or SQL DBs, for example - you're going to get a far faster backup speed. The reason is that the data being read and written is more consistent in terms of throughput. Smaller files make the disks seek a lot more, which causes throughput to drop off.

Here are 2 links that can help you further...bit of background reading.

http://seer.entsupport.symantec.com/docs/285756.htm

http://seer.entsupport.symantec.com/docs/231488.htm

One way to possibly speed this up is to investigate BEWS 2010's deduplication...it will cut down a lot of the unnecessary time taken to back up data. Test it and implement it if it works for you.

Laters!

teiva-boy
Level 6
Data on a HDD is placed on the platter randomly. Data on the outer tracks reads faster than data on the inner tracks of the platter. Data scattered across the disk is slower still, because a lot of seeks are going on. It will never sustain a constant number.

Add to that, the job rate is an average, not a real-time number. If you want real-time figures, fire up PerfMon.

James_McKey
Level 5
Employee Certified
When the job first starts the throughput appears 'current' b/c it fluctuates rapidly. However, what's really happening is that the average is shifting quickly b/c there isn't much history yet. Once the initial burst speed is past the real time throughput is likely slower than what the initial peak pushed the average up to and so the average is slowly creeping down. When a the job throughput actually totally stops due to us having to wait on another operation (such as SQL creating a sparse file, or Exchange processing incremental logs) you'll see the average gradually climb down as well.