09-04-2012 09:57 AM
We are using BE 12.5 to back up about 1 TB of data D2D using USB drives. About 60% of the time the backup is quick, 2,000 MB/min or faster, but occasionally it runs under 1,000 MB/min and doesn't complete in under 24 hours. I have found no correlation between the slow backups and which physical USB drive we use or which USB 3.0 port we use; the drives are all USB 3.0. During the most recent backup, bengine.exe was pegging CPU utilization on one core of the server, and kernel memory utilization also seemed high at around 300 MB. I have tried defragmenting the .bkf file, but that seems to make no difference. I checked a few drives, and natively I can copy large files to them well above the 2,000 MB/min we get on a quick backup, so it doesn't look like a drive throughput issue. Other than upgrading to a newer version of BE, is there something else I should try? Thanks,
Jim
Solved! Go to Solution.
09-04-2012 01:23 PM
Hi,
Add AV exclusions for the B2D folders and the BE services, and make sure your slow backups aren't running during some sort of maintenance (an AV scan, for example).
Also, I take it you defragged the whole drive, not just the B2D file? What size have you set them to, as well?
Thanks!
09-04-2012 01:44 PM
Hi Craig,
We intentionally do not have any anti-virus locally on the server we use for BE. I also checked to make sure there aren't any conflicting Scheduled Tasks, which there aren't. I have looked for patterns, such as day of the week, physical drive, USB 3.0 port or USB cable, but can't find a pattern.
I just recreated a B2D drive from scratch to make sure it isn't a fragmentation issue, and I am testing it currently. The files can grow up to 1,800 GB. The backup is typically just over 1 TB. Thanks,
Jim
09-04-2012 01:48 PM
...what's your B2D file size set to?
09-04-2012 01:51 PM
Total capacity is listed as 1.8 TB.
09-04-2012 01:56 PM
...not the capacity of the B2D folder, the size of each B2D file that BE creates. The standard is 4 GB, but this can be set. If you're creating one large 1,800 GB file, you're going to run into issues: if that file corrupts, you can kiss all your data goodbye.
Some recommend no larger than 50 GB for a B2D file before BE creates another...
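The trade-off Craig describes can be made concrete with simple division; for a job of roughly 1 TB (taken here as an assumed 1,100 GB for illustration), the .bkf count at each candidate size is:

```python
import math

BACKUP_GB = 1100  # illustrative size for a "just over 1 TB" job

# Candidate per-file sizes from the thread: the 4 GB default, plus
# the 25/50/100 GB values discussed later.
for bkf_gb in (4, 25, 50, 100):
    files = math.ceil(BACKUP_GB / bkf_gb)
    print(f"{bkf_gb:>4} GB per .bkf -> {files} files")
```

So the 4 GB default means hundreds of files per job, while 50-100 GB keeps the count in the tens; the corruption blast radius grows with file size in the same proportion.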
09-04-2012 02:01 PM
This BE server is set up to create a single file, typically 1 TB. If that is an issue, I will reconfigure one of the disks to test smaller B2D files. I inherited this server configured as it is. Thanks for the advice.
09-04-2012 09:47 PM
Hello Jim
The TN below explains how to find the B2D file size and how to change it:
http://www.symantec.com/docs/TECH35131
09-07-2012 05:20 AM
Hello Amol,
Thanks for the link!
I have completed 2 backups since limiting the file size to 100 GB. One backup averaged 1.4 GB/min and the one today averaged 2.5 GB/min. It is typical for the backup rate to vary this widely, and things like enabling or disabling compression have not made a difference. I have tried a number of things to resolve the highly variable backup rates, all without success.
I am not sure what to try next. Verification is always very speedy, so the problem seems to be either reading the files from the remote servers or writing them to the USB drive. I have seen pretty high disk queue lengths on the USB drive while a backup is taking place.
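The rates in this thread translate directly into wall-clock time for the ~1 TB job, which shows why the slow runs blow the backup window. A quick arithmetic sketch (treating 1 TB as 1,000,000 MB and ignoring the verify pass; the function name is made up for illustration):

```python
def hours_to_back_up(data_tb, rate_mb_per_min):
    """Hours needed at a steady rate; 1 TB taken as 1,000,000 MB."""
    return (data_tb * 1_000_000) / rate_mb_per_min / 60

# Rates observed in the thread, slowest to fastest.
for rate in (600, 1000, 2000, 2500):
    print(f"{rate:>5} MB/min -> {hours_to_back_up(1.0, rate):5.1f} h for 1 TB")
```

At 2,000+ MB/min the backup fits comfortably in a night, but at 600 MB/min it alone takes over 24 hours before verification even starts, which matches the missed windows described above.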
09-07-2012 05:27 AM
...any reason you chose a 100 GB B2D file size rather than trying something like 20/50/65 GB, for instance?
Drop the file size smaller and see if this resolves the issue.
Thanks!
09-07-2012 05:32 AM
With over 1 TB of data, I was trying to limit the number of files. I had heard 25 GB was a good upper limit, so I will try that this weekend and see how it works. Thanks.
09-07-2012 05:35 AM
...it's a very fine line to get this right. Make your size too big and you risk losing a lot of data if one *.bkf file is corrupt, and you MIGHT have speed issues; make the size too small and you'll build up a lot of small files, which might also impact speed.
Just ensure that your AV is also not scanning your dedupe folder!
09-07-2012 12:07 PM
Thanks, there is no AV running on this server.
09-11-2012 11:19 AM
25 GB has not worked well, so I will try increasing the file size. The backups are slower than ever, and even verification has slowed down significantly: it used to run around 4 GB/hr and is now around 2 GB/hr.
09-12-2012 05:44 AM
50 GB is also slow; I'm going to try increasing the file size further.
09-13-2012 07:57 AM
75 GB is more like normal, at least for this first backup/verify.
09-18-2012 06:01 AM
Speeds are back to "normal" once I go to a 75 GB or larger file size. Unfortunately, they still vary a lot: the lowest was 600 MB/min and the highest 2.5 GB/min.
09-18-2012 06:11 AM
...it could be the type of data being backed up, but if you're happy with the speeds you're getting, you can close this forum query off.
09-18-2012 07:30 AM
I would like to have consistent speeds. The data being backed up is relatively static, so these huge variations from one day to the next don't make sense.