Can someone advise where I can download some recent metrics/statistics on the Backup Exec 11 for Windows software? I want to find out the standard time it takes to back up a certain amount of data, both full and incremental, so I can see how our numbers compare against the benchmark. If someone knows where I can download this information, please advise.
I don't think such statistics exist. The time required to back up a given amount of data is highly dependent on the particular computing environment: a faster server backs up faster, and a faster tape drive helps as well. Even with identical hardware, the server's workload at the time of the backup affects the backup window. Given that variability, such statistics would not be meaningful to collect; it would be comparing apples and oranges.
One of our incremental backups last night took 1.5 hours to process 1.7 million files. It didn't back up anything because nothing new had been added. Now, is 1.5 hours for 1.7 million files a normal statistic?
1.7 million files is a lot of files, and BEWS needs to scan each and every one of them to check whether it has changed, regardless of how little data actually gets backed up.
So theoretically, it could take that long. Factors like when the backup runs (e.g. during application maintenance), whether it runs across a LAN/WAN, the type of HDDs used, etc. need to be factored in too...
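To make the scan cost concrete, here is a minimal sketch (not Backup Exec's actual code) of what a timestamp-based incremental pass has to do: walk the whole tree and stat every single file, even when nothing ends up being backed up. The function name and structure are illustrative assumptions.

```python
import os

def incremental_candidates(root, last_backup_time):
    """Walk the whole tree and stat every file -- this cost is paid
    even when nothing has changed since the last backup."""
    changed = []
    scanned = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            scanned += 1
            try:
                if os.path.getmtime(path) > last_backup_time:
                    changed.append(path)
            except OSError:
                pass  # file vanished mid-scan; skip it
    return scanned, changed

# Even with zero changed files, 'scanned' equals the full file count,
# so scan time grows with the number of files, not with the amount
# of data that actually gets backed up.
```

With 1.7 million files, the per-file stat overhead alone can plausibly add up to hours, which is why a "no new data" incremental is still slow.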
I tested a couple of backups, and the results led me to ask about these metrics. Backup #1 was 68 GB of data (800k files): a full backup completed in 1.5 hours, and an incremental run (no new data to back up) completed in 10 minutes. Backup #2 was 128 GB of data (1.7 million files): a full backup completed in 10 hours, and an incremental run (no new data to back up) completed in 1.5 hours. The files are all .tiff images. My question is: why such a disparity in backup time when the data merely doubled?
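Working the rates out makes the disparity concrete. A quick back-of-envelope calculation from the four runs reported above (the numbers come straight from that post):

```python
# Per-file and per-GB rates for the four reported runs.
runs = {
    "Backup #1 full": {"gb": 68,  "files": 800_000,   "hours": 1.5},
    "Backup #2 full": {"gb": 128, "files": 1_700_000, "hours": 10.0},
    "Backup #1 incr": {"gb": 0,   "files": 800_000,   "hours": 10 / 60},
    "Backup #2 incr": {"gb": 0,   "files": 1_700_000, "hours": 1.5},
}

for name, r in runs.items():
    files_per_sec = r["files"] / (r["hours"] * 3600)
    gb_per_hour = r["gb"] / r["hours"]
    print(f"{name}: {files_per_sec:,.0f} files/s, {gb_per_hour:.1f} GB/h")
```

The data roughly doubled (68 GB to 128 GB) but the full-backup time went up about 6.7x, and the per-file rate also dropped between the two jobs, so per-file scan overhead is a big part of the story but the two jobs are clearly not running at the same speed per file either.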
Well, that depends on whether you are using the "archive bit" or "modified time" with the NTFS change journal (preferred). One is much faster than the other because it doesn't have to scan the entire volume at the file level.
You could try short-stroking your HDDs, provided you can predict your growth accurately enough not to run out of space. That would keep your files contained on the outer edges of the platters, giving you maximum disk throughput.
For your incr/diffs, use modified time with the change journal; that should be faster, provided we're talking NTFS.