D2T Duplicate jobs very slow....

Matt_Stanyon-Ta
Level 3
I have a Dell 2950 with an array attached via SCSI, and a Pv132T tape library. All hardware seems to be working fine, and I have been testing the system with Dell, all seems fine.

However, I have been experiencing some speed issues with the D2T Duplicate jobs.
If I set up some test jobs and just select a file on the local array, the tape drives perform as expected. However, when a duplicate job runs after a D2D job, the drives run at about 100MB/min, which is REALLY slow.

I have no idea what would be causing this. The duplicate job is just accessing local files, the same as my test jobs, so there should be no difference.

I have tried this with a number of jobs, and all with the same result.

PLEASE HELP ME!!!

Many thanks in advance.
Matt

Ken_Putnam
Level 6
Is the library on its own controller, or at least its own channel?

Matt_Stanyon-Ta
Level 3
It certainly is!

As I said, the hardware is functioning fine, the only issue is with the D2T portion of a job. If I create a new job which simply backs up data to the tape drives there are no issues at all.

I don't get it :o|

Ta
Matt

Deepali_Badave
Level 6
Employee
Hello,

Please try installing the latest drivers:

http://seer.support.veritas.com/docs/285593.htm

Regards,

Matt_Stanyon-Ta
Level 3
Hey there

OK, I updated to these drivers, and ran the following tests:

D2T job with a local source: tape speed about 2900MB/min.
D2D2T job (remote server; obviously the D2T portion is local): tape speed 500MB/min.

It seems as though BE is doing something different with the duplicate jobs. The hardware and drivers all seem fine, and the tape speeds are fine on everything but a duplicate job.

It's getting quite desperate now, as I am unable to perform verifications due to these speed constraints...

Any suggestions very welcome :o)

Thanks everyone
Matt

shweta_rege
Level 6
Hello,


Kindly change the SCSI ID of the medium changer to SCSI ID 2 and the drive to SCSI ID 3.

Stop the "Removable Storage" service and change its startup type to Manual.



Thank You,

Shweta

Matt_Stanyon-Ta
Level 3
Actually, I just looked at the fragmentation of this drive... I have never seen it report a whole screen of red! There wasn't a single contiguous file there.

So, I have reduced the size to about 500 gigs and am defragmenting now.

I will test once this is complete (and also schedule a weekly task!), and report back.

If not I will change the SCSI IDs, as suggested.

Thanks
Matt

Asma_Tamboli
Level 6
Hello Matt,

Do keep us updated on the issue.

Thanks!

David_Sanz
Level 6
Partner
Matt

When you duplicate a backup job, the original files are not accessed. A duplicate job is a backup of the backup, so it reads from your backup-to-disk folder.

Defragmenting it is a good idea. The way Backup Exec 10 performs B2D makes fragmentation an issue, as space is allocated only as it is needed. This is fixed in Backup Exec 11d, which has an option to allocate the bkf files at their full size from the beginning, losing some space but avoiding fragmentation. Consider an upgrade to version 11d.
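The difference between the two allocation strategies can be sketched in a few lines (an illustration only — the file names and sizes are made up, and this is ordinary Python file I/O, not the Backup Exec implementation):

```python
import os
import tempfile

def preallocate(path, size_bytes):
    """Create a file at its full size up front (as Backup Exec 11d
    optionally does for bkf files), so the filesystem can try to
    reserve one contiguous run for it."""
    with open(path, "wb") as f:
        f.seek(size_bytes - 1)  # jump to the last byte...
        f.write(b"\0")          # ...and write it, fixing the length

def grow_as_needed(path, chunks, chunk_size):
    """Grow a file append-by-append (the Backup Exec 10 behavior),
    which invites fragmentation on a busy volume."""
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(b"\0" * chunk_size)

tmp = tempfile.mkdtemp()
pre = os.path.join(tmp, "pre.bkf")
grown = os.path.join(tmp, "grown.bkf")
preallocate(pre, 1024 * 1024)
grow_as_needed(grown, 256, 4096)
print(os.path.getsize(pre), os.path.getsize(grown))  # → 1048576 1048576
```

Both files end up the same logical size; the difference is only in how the space was claimed over time, which is what drives fragmentation.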

Regards

Not applicable
Hi Matt - we have EXACTLY the same issue here, except we are certain that it is not fragmentation in our case, as we have Diskeeper running 24/7 on our BTD server, and all our BTD folders show little or no fragmentation.

We haven't tried changing the SCSI ID settings as mentioned in the previous post - have you had any luck resolving this yet?

Our BTD job takes approx. 6 hrs to do about 360GB (compressed into a 180GB bkf file), but the BTT duplicate jobs are taking about 12 hrs to copy that 180GB file to our tape device (Exabyte 1x10 VXA2 autoloader).


cheers
Dominic