01-07-2013 03:38 PM
When watching an active job, the throughput usually starts at a high number, then tapers down and stabilizes over time. When the job finishes, the final throughput figure appears to be an average based on total data backed up over total elapsed time.
How does Backup Exec calculate this internally? It would be handy to understand when troubleshooting performance issues, where you want to know the real throughput rather than the calculated throughput.
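Symantec doesn't publish the formula, but the behavior described above (a high initial number that tapers and stabilizes) is what you would expect from a cumulative average rather than an instantaneous rate. A rough sketch of the difference, purely as an illustration and not Backup Exec's actual internals:

```python
# Illustrative assumption only -- NOT Backup Exec's actual code.
# A cumulative average (total bytes / total elapsed time) tapers and
# stabilizes; an instantaneous rate (like PerfMon's) jumps around.

def cumulative_mb_per_min(total_bytes_so_far, elapsed_seconds):
    """Job-wide average: all data moved so far over total wall-clock time."""
    return (total_bytes_so_far / (1024 * 1024)) / (elapsed_seconds / 60.0)

def instantaneous_mb_per_min(bytes_this_interval, interval_seconds):
    """PerfMon-style rate: data moved in the last sample interval only."""
    return (bytes_this_interval / (1024 * 1024)) / (interval_seconds / 60.0)

# A fast initial burst followed by slower small-file activity: the
# cumulative number starts high and slowly converges downward, while
# the instantaneous number drops immediately.
samples = [(600 * 1024 * 1024, 60)] + [(100 * 1024 * 1024, 60)] * 4
total_bytes, total_secs = 0, 0
for bytes_moved, secs in samples:
    total_bytes += bytes_moved
    total_secs += secs
    print(instantaneous_mb_per_min(bytes_moved, secs),
          cumulative_mb_per_min(total_bytes, total_secs))
```

If the GUI works this way, the mismatch with PerfMon is expected: PerfMon shows the current interval, while the job summary shows the lifetime average.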
01-07-2013 10:23 PM
No idea, but it would depend on what type of data you're backing up. Large files back up faster, meaning you would see higher speed to disk/tape. Lots of small files going to tape will drop the speed, as the drive continually "shoe-shines" and can't maintain a consistent streaming speed.
Troubleshooting-wise, you can always start by tweaking your backup target, and by making sure server NICs and switch ports are hard-coded to the fastest speed the server NICs support. Check for disk fragmentation, active AV scans, etc.
Thanks!
01-08-2013 01:38 AM
Oh Craig, you should know me better than the average BE user ;)
I'm speaking strictly of the GUI here and how it reports throughput; I'm trying to better understand what it's showing. I know the speed reported in MB/min does not match what you see in PerfMon in real time, so I'd like to know how it's calculated and displayed in the GUI.
01-08-2013 01:43 AM
*sigh*... thanks for the -1, means so much! I actually didn't take much notice that it was you.
Open a call with Symantec if you have support. I am sure they can help you.
Thanks!
01-08-2013 01:27 PM
I didn't give you the -1. That's usually reserved for the folks that give a blanket "read the SCL/HCL" without any direct link or explanation. They know who they are...