
3x faster duplication throughput if I down half the tape drives - why?

IanB
Level 4

We run NetBackup 6.5 Enterprise Server on Windows 2003 x64, backing up first onto SAN disk, then duplicating the images onto two fibre-attached Dell ML6020 tape libraries, each with two IBM LTO4 tape drives.  Throughput when duplicating images with all tape drives up (two jobs running at once) is about 120 GB/hr (60 GB/hr to each library) - not too bad, some might say.  However, if I down one tape drive in each library, throughput rockets to about 400 GB/hr (200 GB/hr to each library), which is pretty spectacular.   Why would the 2nd tape drive be affecting performance so badly?  Any ideas?

Many thanks in advance.

Ian
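The arithmetic behind the numbers in the question is worth making explicit. A short Python sketch, using only the figures quoted in the post plus LTO4's published ~120 MB/s native transfer rate, shows the per-drive throughput in each configuration:

```python
# Per-drive duplication throughput, using the figures from the post:
# four drives up gives ~120 GB/hr total; two drives up gives ~400 GB/hr.

def per_drive_gb_hr(total_gb_hr: float, drives: int) -> float:
    """Aggregate throughput divided evenly across the active drives."""
    return total_gb_hr / drives

all_up = per_drive_gb_hr(120, 4)    # 30.0 GB/hr per drive
half_up = per_drive_gb_hr(400, 2)   # 200.0 GB/hr per drive

# LTO4's native rate is about 120 MB/s, i.e. roughly 420 GB/hr, so
# 30 GB/hr per drive is far below streaming speed: the drives stop and
# restart ("shoe-shine") while the disk array thrashes between readers.
lto4_native_gb_hr = 120 * 3600 / 1024   # ~421.9 GB/hr

print(f"per drive, all up:  {all_up} GB/hr")
print(f"per drive, half up: {half_up} GB/hr")
```

Downing half the drives cut the read contention on the disk array, letting each remaining drive run nearly seven times faster per drive.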

13 REPLIES

marekkedzierski
Level 6
Partner

What type of duplication job are you running? Disk staging relocation or standard duplication?

IanB
Level 4
Disk staging, from the images on the SAN disk arrays.

marekkedzierski
Level 6
Partner
If you are running disk staging relocation, NetBackup in its standard configuration uses all available tape drives and creates several duplication jobs. Each job reads images from disk, and sometimes reducing the number of read streams is the solution. The problem is probably your disk array, which slows down when several threads try to read from it at maximum throughput.

IanB
Level 4

Thanks, Marek - I think you've found the problem.  We have 3 DSSUs (RAID-3 arrays). One DSSU was on its last image, so I upped the other 2 tape drives and they started processing the queued jobs from the next DSSU - total throughput is now even more phenomenal, between 7.4 and 10.4 GB/min (i.e. 450-620 GB/hr).

I guess when the first DSSU is done and the 2 free drives start the next job for the second DSSU, I'll see performance drop right down.  It must be like multi-streaming single drives - i.e. don't do it, it reduces performance.

Now I need to find a way to force the tape drives to process jobs from separate DSSUs.  At worst, I could re-carve the SAN arrays into smaller LUNs (one for each of our servers' disk images) and create a DSSU for each server.  Hopefully there's a better way...

Thanks again

Ian

marekkedzierski
Level 6
Partner
You can create a new storage unit and specify, for example, "1" as the maximum number of tape drives, then change the storage unit in the disk staging schedule. The duplication jobs will then be queued and there will be only one process reading data from disk.

marekkedzierski
Level 6
Partner
... or create a special file for disk staging that will force NetBackup to create one duplication job instead of, for example, four.

IanB
Level 4

Thanks, Marek.  I can't see where to set the number of tape drives for a DSSU.  On the DSSU I could set the maximum concurrent jobs to 1, so the backups to disk will queue up and (hopefully) the disk staging will also queue as I want. 

What do you mean by a special file for disk staging - do you mean a script to control it precisely?

Thanks

Ian


IanB
Level 4

Reply to self:-

Setting maximum concurrent jobs = 1 for the DSSU seems only to apply to backup jobs - it still tries to multi-stream the duplication jobs.

I suppose I could create a Media Manager storage unit and specify the max number of tape drives as 1, but I still want to back up to disk (for lightning-fast restores).

marekkedzierski
Level 6
Partner

Ian,

Don't change maximum concurrent jobs in the DSSU properties, because this setting doesn't affect the speed of the relocation job. If you set it to "1", only one backup job can write to disk and other jobs will be queued. I don't think you want that.

 

So..

Create new Media Manager Storage Unit and set max number of tape drives to 1.

Then change storage unit in dssu schedules to newly created storage unit.

 

When relocation jobs from the 3 DSSUs start, only one job will be active (the others will be queued), so only one read thread will have access to the disk array.

If you want to run 3 relocation jobs, each from a different DSSU, create 3 Media Manager Storage Units, each limited to 1 tape drive. But if a disk image spans 2 DSSUs, 2 threads will read from the same DSSU, so try to reduce the size of disk images in the DSSU properties.

 

A different method to limit the number of DSSU relocation jobs is to increase the total size of images included in each relocation job:

 

A. How To Change The Maximum Amount of Data in a DSSU Duplication Job
By default, a DSSU relocation job will limit the size of duplication jobs to no more than 25 gigabytes of data, but NetBackup allows the user to override this default behavior.  

The bpbrmds process looks for a text file named STAGING_JOB_KB_LIMIT in the NetBackup directory on the media server that originally wrote to the DSSU. This text file should contain a single line with a single value: the desired data limit in kilobytes. This value can range from 1024 (1 megabyte) to 2147483647 (2 terabytes).

For example, to configure NetBackup to allow DSSU duplication jobs to occur in batches of up to 50 gigabytes per batch, the following command should be run:

(UNIX media server)
echo 52428800 > /usr/openv/netbackup/STAGING_JOB_KB_LIMIT

(Windows media server - note the quotes, since the path contains spaces)

echo 52428800 > "%systemdrive%\Program Files\VERITAS\NetBackup\STAGING_JOB_KB_LIMIT"
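A small Python sketch can take the unit conversion and range check out of the manual steps above. The helper functions are illustrative, not part of NetBackup; the file name, single-line format, and documented range come from the description above:

```python
import os

# Documented bounds for STAGING_JOB_KB_LIMIT, in kilobytes.
MIN_KB = 1024           # 1 megabyte
MAX_KB = 2147483647     # 2 terabytes

def staging_limit_kb(gigabytes: int) -> int:
    """Convert a batch limit in gigabytes to the kilobyte value
    NetBackup expects, rejecting values outside the documented range."""
    kb = gigabytes * 1024 * 1024
    if not MIN_KB <= kb <= MAX_KB:
        raise ValueError(f"{gigabytes} GB is outside the documented range")
    return kb

def write_staging_limit(netbackup_dir: str, gigabytes: int) -> None:
    """Write the single-line limit file into the NetBackup directory
    on the media server (e.g. /usr/openv/netbackup on UNIX)."""
    path = os.path.join(netbackup_dir, "STAGING_JOB_KB_LIMIT")
    with open(path, "w") as f:
        f.write(f"{staging_limit_kb(gigabytes)}\n")

# 50 GB batches, matching the echo examples above:
print(staging_limit_kb(50))   # 52428800
```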

 

 

 

marekk

 

 

 

IanB
Level 4

Many thanks for all the information, Marek.  I'll try your suggestions.

Your assistance is much appreciated.

Ian

J_H_Is_gone
Level 6

The manuals say to watch how many jobs you have running from the same disk at the same time, as it will cause thrashing and slow down the backups.

 

If all your duplication was coming from the same hard drives, it could have slowed down trying to read data from so many different areas of the same disk.

VipinP
Not applicable
Partner

Hi IanB,

I am about to put a similar architecture to yours in place and wanted to clarify a couple of things.

1) Are you using Fibre Channel LTO4 drives in the MSL6010?

2) Are you running Windows 2003 SE or EE for the NBU master server?

 

Can you please email me directly at {removed}?

 

many thanks

VipinP

 

[Edited: Removed personal information per the community rules and regulations.] 

Message Edited by Brad_C on 10-30-2008 08:09 PM