Forum Discussion

policeman51
12 years ago

NetBackup slow SQL jobs

We are running NetBackup 7.5.0.4, backing up SQL VMs to a Data Domain.

We are experiencing slow speeds with the SQL database and transaction log jobs. We basically have 2 types of SQL servers: some with a bunch (110) of small databases, and others with a few large (1.5TB) databases. The jobs run successfully; they just take forever.

I also see these messages in the jobs: Created VDI object for SQL Server instance <server_1>. Connection timeout is <300> seconds.

Two things:

Can someone tell me how to configure the script/job/etc to kick off all database backups at once?

Can someone explain MAXTRANSFERSIZE and NUMBUFS, and how they affect the SQL servers with small or large databases?

Here is an example of our scripts.

 

OPERATION BACKUP
DATABASE $ALL
SQLHOST "server_1"
NBSERVER "master_server"
MAXTRANSFERSIZE 6
BLOCKSIZE 7
NUMBUFS 2
ENDOPER TRUE
 
 
OPERATION BACKUP
DATABASE $ALL
EXCLUDE "model"
EXCLUDE "msdb"
EXCLUDE "master"
SQLHOST "server_1"
NBSERVER "master_server"
MAXTRANSFERSIZE 6
BLOCKSIZE 7
OBJECTTYPE TRXLOG
NUMBUFS 2
ENDOPER TRUE
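For anyone reading these scripts: per the NetBackup for SQL Server agent documentation (worth verifying against your version), MAXTRANSFERSIZE and BLOCKSIZE in a batch file are indices rather than byte counts. A quick sketch of the decoding:

```python
# Assumption based on the NetBackup for SQL Server admin guide:
# MAXTRANSFERSIZE n -> 64 KB * 2**n (valid indices 0..6)
# BLOCKSIZE n       -> 0.5 KB * 2**n (valid indices 0..7)

def max_transfer_bytes(index):
    """Decode a MAXTRANSFERSIZE index into bytes."""
    return 64 * 1024 * 2 ** index

def block_size_bytes(index):
    """Decode a BLOCKSIZE index into bytes."""
    return 512 * 2 ** index

print(max_transfer_bytes(6))  # 4194304 bytes = 4 MB, the maximum
print(block_size_bytes(7))    # 65536 bytes = 64 KB, the maximum
```

So the scripts above are already asking for the largest transfer size (4 MB) and block size (64 KB) the agent supports.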
 

 


6 Replies

  • Hello Policeman,

    Can someone tell me how to configure the script/job/etc to kick off all database backups at once?

    The examples you show above already back up all the databases in one operation. Do you mean you want to back up multiple databases simultaneously, with multiple streams?

    Can someone explain the MAXTRANSFERSIZE and NUMBUFS and how they effect the SQL servers with small or large databases.

    These parameters are passed through to SQL Server's backup interface. Here is a document on SQL Server backup tuning. Please have a look and try!

    http://www.symantec.com/docs/TECH33423
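A rule of thumb from that kind of tuning guidance (an approximation, to be checked against the document above): each backup stream allocates roughly NUMBUFS buffers of the maximum transfer size on the client, so raising NUMBUFS, MAXTRANSFERSIZE, or STRIPES multiplies client-side memory use:

```python
# Rough estimate (assumption, verify against the tuning doc):
# client buffer memory per job ~= NUMBUFS * max transfer size * STRIPES.
def approx_buffer_memory_mb(numbufs, maxtransfersize_index, stripes=1):
    max_transfer_bytes = 64 * 1024 * 2 ** maxtransfersize_index  # index decode
    return numbufs * max_transfer_bytes * stripes / (1024 * 1024)

# The poster's script: NUMBUFS 2, MAXTRANSFERSIZE 6 (4 MB), single stripe
print(approx_buffer_memory_mb(2, 6))             # 8.0 MB per job
print(approx_buffer_memory_mb(2, 6, stripes=4))  # 32.0 MB if striped 4 ways
```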

  • Perfect, thanks.

    Also, do you know the maximum values for BATCHSIZE and STRIPES?

    I can't find them in the docs.

  • I'm not sure about the hard limits, but there is no point in setting these parameters too high: many backup jobs will be created, but they just sit queued if you don't have enough free streams in the storage unit.

  • OK, that makes sense. We may just have too much data going. I'll also look at increasing the number of streams to the Data Domain.

  • A small suggestion: if the databases are small, as you mentioned, using the STRIPES parameter will not make much difference, and the backup may fail as well. I know this from my own experience.

    For the servers with a bunch (110) of small databases, try BATCHSIZE at 5, 10, or 20, then calculate the total backup time and decide whatever gives you better performance.

    For the 2 large (1.5TB) databases, try the STRIPES parameter at 2, 3, or 4, then check performance and backup time.
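In batch-file terms, that suggestion might look like the following sketch (server_2 is a hypothetical name for the large-database host, and BATCHSIZE 10 / STRIPES 4 are just starting points from the suggested ranges, to be tuned by measurement):

```
OPERATION BACKUP
DATABASE $ALL
SQLHOST "server_1"
NBSERVER "master_server"
MAXTRANSFERSIZE 6
BLOCKSIZE 7
NUMBUFS 2
BATCHSIZE 10
ENDOPER TRUE

OPERATION BACKUP
DATABASE $ALL
SQLHOST "server_2"
NBSERVER "master_server"
MAXTRANSFERSIZE 6
BLOCKSIZE 7
NUMBUFS 2
STRIPES 4
ENDOPER TRUE
```

With BATCHSIZE, up to that many database backups run concurrently from one operation; with STRIPES, a single database backup is split across that many parallel streams. Both consume streams from the storage unit, so check its maximum concurrent jobs setting first.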

  • Thanks, I appreciate the response.

    I think we're just backing up too much data in the timeframe the DBAs are giving us.  

    Something has to give.