Forum Discussion

chack22
Level 4
12 years ago
Solved

SQL Agent Transaction Log backups

What are people doing for transaction log backups? We have an environment with close to 1500 servers (a mix of physical and virtual), jobs running 24x7, and we have recently started using the SQL Agent instead of the dump-and-scoop method. Our DBAs insist on running a transaction log backup on all SQL databases every 10 minutes. Previously they dumped to a CIFS share that was scooped weekly. Since converting to the agent, we've added close to 600 jobs per hour with this configuration, and it seems to be causing some additional scheduling issues. It seems excessive to me, and array-based snapshots are not an option. Any tips, tricks, or suggestions that could be shared would be beneficial.
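For scale, the quoted figures can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, assuming roughly 100 databases at 10-minute intervals (the database count is my inference from the quoted "600 jobs per hour", not stated in the post):

```python
# Back-of-envelope check of the job volume described above.
# The database count (100) is inferred from "600 jobs per hour"
# at 10-minute intervals; it is not stated in the original post.
def jobs_per_hour(num_databases: int, interval_minutes: int) -> int:
    """Log-backup jobs launched per hour at a fixed interval."""
    return num_databases * (60 // interval_minutes)

dbs = 100                       # inferred, see comment above
hourly = jobs_per_hour(dbs, 10)
daily = hourly * 24
print(hourly, daily)            # 600 jobs/hour, 14400 jobs/day
```

Moving the same databases to hourly log backups would cut the daily count by a factor of six, which is why the interval is worth questioning.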

I tried to find a SQL Agent best practice guide, but all I kept stumbling across was an HP version.


10 Replies

  • You can find this in the NetBackup MS SQL guide. A sample script:

    OPERATION BACKUP
    OBJECTTYPE TRXLOG -- transaction log
    DATABASE Northwind
    MAXTRANSFERSIZE 7
    BLOCKSIZE 2
    SQLHOST "CADOO"
    SQLINSTANCE "SECOND"
    NBSERVER "CHISEL"
    BROWSECLIENT "CADOO"
    ENDOPER TRUE 
     
     
  • The question here is:

    Our DBA's insist on running a transaction log backup on all SQL databases every 10 minutes. 

    Now with converting to the agent, we've added close to 600 jobs per hour with this configuration, and it seems to be causing some additional scheduling issues. It seems excessive to me....
    ... 
    Any tips/tricks/suggestions ....?

     

    I agree - every 10 minutes seems to be excessive.
    Most of our customers run log backups every hour.

    The Schedule tab in the GUI also has a smallest selectable frequency of every 1 hour.
    It is possible to set a shorter frequency via the command line, but that is hardly ever necessary (the following will change the existing schedule to every 15 minutes; -freq takes seconds):

    bpplschedrep <POLICY_NAME> <SCHEDULENAME> -freq 900
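If many policies need the same frequency change, the bpplschedrep invocations can be generated in a loop. A minimal sketch of that approach - the policy and schedule names below are hypothetical, and -freq is in seconds as in the example above:

```python
# Build bpplschedrep command lines for a set of policies.
# Policy and schedule names here are hypothetical examples;
# -freq takes a value in seconds (15 minutes -> 900).
def freq_command(policy: str, schedule: str, minutes: int) -> str:
    return f"bpplschedrep {policy} {schedule} -freq {minutes * 60}"

for policy in ["SQL_PROD_01", "SQL_PROD_02"]:   # hypothetical names
    print(freq_command(policy, "TLOG", 15))
```

The generated lines can then be reviewed before running them, which is safer than editing many schedules by hand.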
     

  • Hi Chack22,

    To add to Marianne's comment, I guess setting your DBAs' expectations is equally important. You might need to educate them that they are sharing finite backup resources. I don't think the other server owners will be happy if the DB backups cause other backups to fail or miss their window.

    All the best!

     

  • Thank you for the replies. SQL transaction log backups using the agent are configured and working. These are all kicked off via a cmd script on the database server, not via NBU scheduling (the DBAs want to be able to control their own scheduling). I just thought that adding that many jobs would create additional problems and wanted to know what other people were doing.

  • The real problem comes if a job takes longer than the interval at which it is fired off, in which case you will get some overlap.

    NetBackup shouldn't have an issue - the workload is really all on your SQL servers
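The overlap concern above can be expressed directly: if a recurring job takes longer than its launch interval, copies pile up. A small sketch of that arithmetic:

```python
import math

def concurrent_jobs(duration_min: float, interval_min: float) -> int:
    """How many instances of a recurring job run at once when each
    takes duration_min and a new one starts every interval_min."""
    return math.ceil(duration_min / interval_min)

print(concurrent_jobs(8, 10))   # 1 -> finishes before the next starts
print(concurrent_jobs(25, 10))  # 3 -> jobs stack up
```

At a 10-minute interval, any log backup that regularly takes more than 10 minutes guarantees overlap, regardless of how much horsepower the master has.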

  • @Mark-

    So other than cluttering the Activity Monitor with an additional 15000+ jobs daily, NetBackup shouldn't really have problems with it? Our master server has plenty of horsepower behind it (we planned for a bunch of expansion), so from a physical standpoint I think we're fine. Filtering the Activity Monitor can fix the clutter; I just want to make sure we're not going to be taxing the application (the scheduling, really) with the additional application backups.

     

  • 15000 jobs per day - not great - and not great for the catalog and its file system, I guess!

    My largest client runs about 7000 jobs per day, every day, and that one copes, but it has been heavily tuned to cope with running out of desktop heap etc.

    It may cause you issues, but you will only know if you try - when desktop heap runs out it is easy to tell, as it just stops working with no event logs and nothing logged anywhere - it just dies! (Sometimes you are lucky and get a w2k error in the event logs.)

    The trouble with Windows is that you can have 128GB of RAM but desktop heap is still tiny!!

    I would be inclined to ask the SQL admins if they really need it - or maybe do a full backup nightly, let them dump to their own disk every 15 minutes, and back up those dumps every hour or two!

  • I didn't bring it up initially, since I didn't think it would matter (this was more of a best-practices query, other suggestions, etc.). The master server in this case is running Solaris, with a mix of Solaris, Windows, and Linux media servers scattered about (7.5.0.6 across the board). This environment does close to 35000 jobs a day now, with the catalog at about 700 GB. Luckily the retention for everything (with 1-2 small exceptions) is only 6 months. Another topic, but we're going to be looking at catalog archiving to get that number down to help with catalog backups.
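The retention math behind that catalog concern is easy to sketch: at a steady job rate, the number of images the catalog must track is roughly the daily rate times the retention period. Reusing the figures quoted above (35000 jobs/day, roughly 183 days of retention):

```python
# Rough steady-state image count in the catalog, using the
# figures quoted in the post above (35000 jobs/day, ~6 months).
def retained_images(jobs_per_day: int, retention_days: int) -> int:
    """Approximate number of backup images retained at steady state."""
    return jobs_per_day * retention_days

print(retained_images(35_000, 183))   # roughly 6.4 million images
```

This is why catalog archiving (or a lower log-backup frequency) pays off: every extra daily job is multiplied by the full retention window.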

  • OK - it looks like your system is set up to cope well then, and being Solaris eliminates the desktop heap issue.

    Good luck!