
Slow Backup of Large files on LTO drive

GreedyGreen
Level 3
I'm running an IBM x255 with an Ultra320 SCSI card supporting 2 LTO3 drives, and I'm having a problem with backup speed. Strangely, it seems to slow down to a crawl when the backup encounters a large file and runs OK with smaller files (which is contrary to what I expected from reading other forum entries).

I've got Block size at 64K
Buffer size at 1024K
Buffer Count at 10
High Water Count at 0

Read single block mode - unchecked
Write single block mode - checked
Read SCSI pass-through mode - unchecked
Write SCSI pass-through mode - checked
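
To put those buffer settings in perspective, here is a minimal back-of-envelope sketch. It assumes a simple model in which the total in-flight data is just buffer count x buffer size, written to tape in block-size units; the software's real internals may well differ.

```python
# Back-of-envelope for the device settings above (assumed simple model,
# not confirmed behaviour of the backup software).

BLOCK_SIZE_KB = 64
BUFFER_SIZE_KB = 1024
BUFFER_COUNT = 10

blocks_per_buffer = BUFFER_SIZE_KB // BLOCK_SIZE_KB       # 16 blocks per buffer
total_buffered_mb = BUFFER_COUNT * BUFFER_SIZE_KB / 1024  # ~10 MB in flight

print(f"{blocks_per_buffer} blocks per buffer, ~{total_buffered_mb:.0f} MB buffered in total")
```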

If I take the Write single block mode check off, then I get an "invalid command was sent to the storage device" error when I try to back up.

I've also tried turning off compression and altering the buffer size to both 256K and 64K, with no effect.

Whenever I start the backup, it kicks off going through the C: drive at over 2GB/min, then hits a 2GB file and the rate plummets to about 200MB/min until the file has finished; it then picks up a bit for the smaller files.
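
To put rough numbers on that drop (illustrative arithmetic only, using the rates quoted above):

```python
# Illustrative only: how long one 2 GB file takes at the two observed rates.

FILE_GB = 2
GOOD_RATE_MB_MIN = 2048   # ~2 GB/min seen on the small files
SLOW_RATE_MB_MIN = 200    # ~200 MB/min seen on the large file

for rate in (GOOD_RATE_MB_MIN, SLOW_RATE_MB_MIN):
    minutes = FILE_GB * 1024 / rate
    print(f"At {rate} MB/min, {FILE_GB} GB takes about {minutes:.0f} minute(s)")
# => roughly 1 minute at full speed vs 10 minutes at the degraded rate.
```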

Any ideas?

The server is running Windows 2003 SP2 and Backup Exec is version 10d.
15 REPLIES

CraigV
Moderator
Partner    VIP    Accredited
I'd leave those settings as they are. If you increase the block size, you run the risk of restore problems; in fact, you're warned about this when you increase it.
Are you backing up a remote server, or is that drive locally attached to the server being backed up?
If it's a network server, you can look at hard-coding your NICs to 100/1000MB FULL (depending on what speed your switches are), and make sure the switch ports the server connects into are hard-coded to match.
Look into updating your device drivers for Backup Exec, as well as the drivers for your SCSI card. There could be fixes for something like this.

Ben_L_
Level 6
Employee
Also try running the same backup to a different device (a backup to disk folder) to rule out any problems with the device.
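
If you also want to rule out the disk side, a crude check is to time a large sequential write to the backup-to-disk folder yourself. A minimal sketch follows; the path is a placeholder and this is a generic throughput test, not a Backup Exec tool.

```python
# Crude sequential-write benchmark for a backup-to-disk folder.
# The path below is a placeholder; point it at the B2D folder you create.
import os, time

TARGET = r"D:\B2D-test\throughput.tmp"   # hypothetical test location
CHUNK = 64 * 1024                        # mimic the 64K block size
TOTAL_MB = 1024                          # write 1 GB

os.makedirs(os.path.dirname(TARGET), exist_ok=True)
buf = b"\0" * CHUNK
start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                 # make sure it actually hit the disk
elapsed = time.time() - start
print(f"~{TOTAL_MB / elapsed * 60:.0f} MB/min sequential write")
os.remove(TARGET)
```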

GreedyGreen
Level 3
All the backups I handle are local drive to a local tape unit.

Drivers for the LTO3 are loaded automatically by Windows Update via Device Manager. I may be really thick here, but can you load specific Backup Exec drivers that are better suited? If so, how?

GreedyGreen
Level 3
Doing a backup to disk is effectively copying data from the C: drive to the D: drive (one array to another) over the same RAID card, and it runs really slowly - much slower than backup to tape.

CraigV
Moderator
Partner    VIP    Accredited
In Backup Exec, click Tools --> Wizards --> Device Configuration Wizard.

Click Next twice, and select the option to install tape device drivers. This will load Symantec's drivers for the tape drive; Symantec recommends using their drivers for the device.

GreedyGreen
Level 3
Thanks for the "how to" - I'm trying that now.

GreedyGreen
Level 3
The Veritas drivers are loaded now, but it still grinds to a snail's pace when it hits the big file.

Ben_L_
Level 6
Employee
Just curious, what kind of file is that big file?
Also how fragmented is the drive?

GreedyGreen
Level 3
The big file is a RAID log (RaidDP.log), which I can delete, but for the moment it serves as a good test of backup speed. It seems symptomatic: if I watch the backup proceed, the throughput dips as it encounters a larger file. This particular file is enormous and the rate just continues to plummet for the 30 or 40 minutes it takes to back it up.

As to fragmentation, that could well be the issue - my C: drive is 25% fragmented and that particular file has over 849,000 fragments. This would then begin to make sense, as it would behave like many small files (effectively 849,000 files at an average of about 2.5KB each).
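
A rough estimate of what that level of fragmentation could cost, assuming a typical per-seek time (the seek figure is an assumption, not a measurement):

```python
# Why 849,000 fragments hurts: each fragment can force a head seek.
# The per-seek time below is an assumed typical figure, not measured.

FRAGMENTS = 849_000
AVG_FRAGMENT_KB = 2.5
ASSUMED_SEEK_MS = 3          # assumed average short-seek time

file_gb = FRAGMENTS * AVG_FRAGMENT_KB / 1024 / 1024
seek_overhead_min = FRAGMENTS * ASSUMED_SEEK_MS / 1000 / 60

print(f"File size: ~{file_gb:.1f} GB")
print(f"Seek overhead alone: ~{seek_overhead_min:.0f} minutes")
# ~2 GB of data, yet around 40 minutes of potential seek time,
# the same order as the 30-40 minutes observed for that file.
```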
I'm upgrading the D: drive (the larger data volume) over the weekend, so it will automatically be defragmented once I have restored the data - I can then see what difference it makes to the total backup time. The D: drive currently takes about 9 hours to back up 447GB, a rate of only 921MB/min, which could well be because that drive is at 37% fragmentation.

Thanks for that train of thought

Ken_Putnam
Level 6

Both single block settings should be UNCHECKED unless you are troubleshooting, since they effectively disable buffering.

See the Admin Guide or
http://seer.entsupport.symantec.com/docs/259030.htm
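
To illustrate the principle only (a generic demonstration of buffering, not Backup Exec's actual I/O path): forcing every small block to the device individually costs far more waits than filling a large buffer first.

```python
# Generic illustration of buffered vs. per-block ("single block") writes.
# Not Backup Exec's actual I/O code; file names are just scratch files.
import os, time

BLOCK = 64 * 1024           # 64K, matching the block size above
BUFFER = 1024 * 1024        # 1024K, matching the buffer size above
TOTAL = 64 * 1024 * 1024    # 64 MB test file keeps the demo quick
data = b"\0" * BLOCK

def timed_write(path, flush_every_block):
    start = time.time()
    with open(path, "wb", buffering=0 if flush_every_block else BUFFER) as f:
        for _ in range(TOTAL // BLOCK):
            f.write(data)
            if flush_every_block:
                os.fsync(f.fileno())   # force each block to the media
    os.remove(path)
    return time.time() - start

print("single-block style: %.1f s" % timed_write("t1.tmp", True))
print("buffered style:     %.1f s" % timed_write("t2.tmp", False))
```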

Hywel_Mallett
Level 6
Certified
I reckon it's simply down to the fragmentation of that file.
I find that large log files which receive lots of small writes can get heavily fragmented, and slow down your backups. Symantec's Storage Foundation HA for Windows is particularly guilty of this!

CraigV
Moderator
Partner    VIP    Accredited
If that file isn't needed, either exclude it from the selection list or delete it. The slowdown seems symptomatic of a problem with that file only; it might not be the case with another file.
If you have another large, static file (ISO, database, etc.), I'd copy that over, back it up, and measure the speed. That one file may be giving you the impression you have an issue when in fact you don't.

GreedyGreen
Level 3
OK - after defragmentation and getting rid of the problem file, the C: drive backup is now running fine. But I'm now having problems with my D: drive, which consists of about 750GB of Domino mail files. The D: drive used to hold about 300GB of normal F&P data plus about 50GB of Notes databases. That used to run in about 4 hours, but it now takes about a day. I've got other Notes servers backing up at about 2.5GB/min, but this one is now running below 0.5GB/min.
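
Putting rough numbers on those rates (illustrative arithmetic only):

```python
# Quick sanity check on the quoted rates (illustrative arithmetic only).

DATA_GB = 750
for rate_gb_min in (2.5, 0.5):
    hours = DATA_GB / rate_gb_min / 60
    print(f"At {rate_gb_min} GB/min, {DATA_GB} GB takes ~{hours:.1f} hours")
# => ~5 hours at the rate the other Notes servers manage, ~25 hours at
#    the current rate, consistent with the backup taking about a day.
```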

The things I'm about to try are moving the pagefile off the drive being backed up and installing SP4 onto Backup Exec 10d.

Any other ideas would be appreciated, though. Again, I'm puzzled, as mail files are generally large and so should back up faster than most.

GreedyGreen
Level 3
Aha! It looks like the Notes setup might be wrong: Full Text Indexing is turned on, which is using up a large part of the CPU. So my Notes team are going to get that turned off and I'll see if it affects backup speed.

GreedyGreen
Level 3
I at least have a workable backup now, as without the CPU being thrashed it manages a little under 1GB/min averaged over the 800GB backup. But that's still fairly slow compared to other servers I have running at 2.5GB/min.

One thing I have noticed is that when I load up Backup Exec, this server seems to spend a long time with the message "Connecting to local media server..." on screen before it's fully initialised. When I say a long time, I mean about 5 seconds, whereas on other servers, if that message pops up at all, it's too fast to notice. The other point is that if I try to do a tape inventory just after starting the Backup Exec services, the job sits in the queue for ages with a message saying it cannot connect to a media server. Eventually the inventories run, but it takes a while. Subsequent inventories run OK; it's only just after starting the services.

Given that everything is local, is there any reason for these delays that anyone can think of? Perhaps it's part of the speed issue.