Forum Discussion

Julie_Barnes
15 years ago

Backup-to-Disk Settings

Can anyone give me some advice for setting up my backup-to-disk folder for the most efficient use? I'm currently backing up about 600 GB per night. I currently have it set to use a maximum of 5 GB for the backup-to-disk files. Would it be more efficient to create one large file? With my current settings, I'm backing up 600 GB in a little over 9.5 hours (backup and verify). The same data takes just under 13.5 hours to back up to tape. I plan on setting up a B2D2T job, but wanted to make sure I'm running my B2D job in the shortest amount of time possible.

Any help will be greatly appreciated.

Thanks!
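
As a quick sanity check on those figures, the Python sketch below (nothing Backup Exec specific, just arithmetic on the numbers quoted in the post) turns the job size and durations into average throughput and counts how many 5 GB BKF files one night would span.

# Back-of-the-envelope arithmetic on the figures quoted above; illustrative only.
JOB_GB = 600        # nightly backup size
BKF_GB = 5          # current maximum size per backup-to-disk (BKF) file
DISK_HOURS = 9.5    # backup + verify to disk
TAPE_HOURS = 13.5   # backup + verify to tape

def avg_throughput_mb_s(total_gb, hours):
    """Average throughput in MB/s over the whole job (backup plus verify)."""
    return total_gb * 1024 / (hours * 3600)

print(f"Disk average : {avg_throughput_mb_s(JOB_GB, DISK_HOURS):.1f} MB/s")
print(f"Tape average : {avg_throughput_mb_s(JOB_GB, TAPE_HOURS):.1f} MB/s")
print(f"BKF files per night at {BKF_GB} GB each: about {JOB_GB // BKF_GB}")
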
  • I should also say that I'm backing this 600 GB up to a RAID 5 with 800+ GB available. Therefore, I can only have one day's worth of data on disk at a time. I don't really mind that so much, but how do I need to set things up so that it will automatically overwrite the files within that B2D folder? I've set my media with 0 hours append and overwrite. My job ran fine last night, but I just performed a test run and was notified that the capacity check failed.

    Online append capacity     : 0.00 MB
    Online overwrite capacity  : 266,240.00 MB
    Total append capacity      : 0.00 MB
    Total overwrite capacity   : 942,080.00 MB

    So I've obviously got something screwed up somewhere. Maybe the Maximum number of backup sets per backup-to-disk file? I currently have it set to 150, and the B2D folder currently has 75 files in it. (The capacity figures above are unpacked in the sketch below.)

    I guess I'm just not grasping the concept. I know what I want the job to do, I'm just not able to correctly configure the system to do what I want.

    Thanks again!
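
Purely as arithmetic on the test-run output above, and not a diagnosis of why the capacity check failed, the sketch below converts the reported MB figures to GB and compares the immediately available (online) capacity with the 600 GB nightly job. The variable names are invented for illustration.

# Unit conversion of the capacity-check output quoted above; illustrative only.
MB_PER_GB = 1024

figures_mb = {
    "online append"   : 0.0,
    "online overwrite": 266_240.0,
    "total append"    : 0.0,
    "total overwrite" : 942_080.0,
}
job_mb = 600 * MB_PER_GB   # nightly backup size

for label, mb in figures_mb.items():
    print(f"{label:>16}: {mb / MB_PER_GB:8.1f} GB")

online_mb = figures_mb["online append"] + figures_mb["online overwrite"]
print(f"{'nightly job':>16}: {job_mb / MB_PER_GB:8.1f} GB")
print(f"{'online shortfall':>16}: {(job_mb - online_mb) / MB_PER_GB:8.1f} GB")
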
  • If you set the Append and/or Overwrite periods to 0, that means you can't append to or overwrite them. If you are writing one backup a day and want to be able to overwrite the previous file each day, you should set your overwrite period to less than 1 day (12 hours, 20 hours, whatever); there's a small sketch of this rule below.

    Are you backing up multiple systems each night to make this 600 GB, or is it one protected system? Personally, I keep each system separate so that they have their own BKF files, one per day. If it's a full backup of a 200 GB system, I will have a 200 GB BKF for that one day's single backup. It seems to work fine for me on my primary media server, which backs up almost all of my servers. I have another issue on another server, but that's going to be its own thread...
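
To make that rule concrete, here is a minimal model of overwrite eligibility, with invented names: media becomes reusable once the time since it was last written exceeds its overwrite protection period. It deliberately ignores the special-cased values (0, none, infinite) discussed in this thread and is not Backup Exec's actual logic.

# Simplified model of the overwrite rule described above; not Backup Exec code.
from datetime import datetime, timedelta

def overwritable(last_written: datetime, protection: timedelta, now: datetime) -> bool:
    """Media may be overwritten once its protection period has expired."""
    return now - last_written >= protection

last_night = datetime(2010, 6, 1, 22, 0)        # when last night's backup finished
tonight    = last_night + timedelta(hours=24)   # when tonight's job starts

# A 12-hour overwrite protection period lets tonight's job reuse last night's BKF:
print(overwritable(last_night, timedelta(hours=12), tonight))   # True

# A 36-hour period would block it, forcing the job onto new (scratch) media:
print(overwritable(last_night, timedelta(hours=36), tonight))   # False
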
  • Thanks, Scott.

    I'm backing multiple servers up with one job. I suppose I could split it up and make a separate job for each system. Hadn't really thought of doing that.
  • I currently have it set to use a maximum of 5 GB for the backup-to-disk files. Would it be more efficient to create one large file?

    The problem with one large BKF file is that if you ever exceed the capacity, a new one is created with a whopping amount of wasted space.

    but how do I need to set things up so that it will automatically overwrite the files within that B2D folder?

    Set the media set OPP to something like 12 hours and verify that the Global Overwrite option is set to "Use overwriteable media in the target media set before scratch media", and the next night's backup should overwrite the first.

    Maybe the Maximum number of backup sets per backup-to-disk file?

    This determines the number of different resources that can be placed into one BKF file. Each share, System State, and database counts as one resource. Again, if you have a 5 GB BKF file and only back up a 1 GB database, then you have wasted 4 GB (sketched below).
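
A quick sketch of that waste argument, assuming each BKF file is pre-allocated at its maximum size so any unused remainder in the last file of a backup set is dead space. The sizes are the ones used in the reply; the function is illustrative, not a Backup Exec calculation.

# Illustrative estimate of dead space when BKF files are pre-allocated at a
# fixed maximum size, per the waste argument above.
import math

BKF_MAX_GB = 5.0   # maximum backup-to-disk file size

def wasted_gb(backup_set_gb, bkf_max_gb=BKF_MAX_GB):
    """Unused space left in the last pre-allocated BKF file of a backup set."""
    files_needed = math.ceil(backup_set_gb / bkf_max_gb)
    return files_needed * bkf_max_gb - backup_set_gb

# The 1 GB database example from the reply: 4 GB of its 5 GB file goes unused.
print(wasted_gb(1.0))     # 4.0
# A backup set that happens to divide evenly wastes nothing in its last file:
print(wasted_gb(600.0))   # 0.0
# One that does not leaves the remainder of the final file empty:
print(wasted_gb(602.5))   # 2.5
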
  • If you have it set not to allocate the maximum size when creating a backup-to-disk file (it's on the backup-to-disk options page), it won't make the BKF any larger than it has to be. I'm not sure why you would want it to chunk out the max size anyway... maybe something about chunking out contiguous disk space?

  • I'm not sure why you would want it to chunk out the max size anyway... maybe something about chunking out contiguous disk space?

    Exactly.

    If you do not set it to allocate the entire segment size, B2D target fragmentation goes through the roof, with the associated slower disk access. Also, for larger backups it actually takes longer, because the backup has to stop, wait for the allocation, then write, then stop again, etc. (illustrated below).
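
A generic illustration of that trade-off, outside of Backup Exec: growing a file write by write makes the filesystem allocate extents piecemeal (the fragmentation and stop-and-wait behavior described above), while reserving the full size up front allocates once. os.posix_fallocate is Unix-only, so treat this as a sketch of the idea rather than how Backup Exec allocates BKF files.

# Illustrative comparison of grow-on-demand vs. pre-allocated file writes.
import os
import time

CHUNK = 64 * 1024 * 1024      # 64 MB per write
TOTAL = 1024 * 1024 * 1024    # 1 GB test file

def grow_on_demand(path):
    # The file grows as data arrives, so extents are allocated piecemeal.
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(b"\0" * CHUNK)

def pre_allocate(path):
    # Reserve the full size in one call where the OS supports it, then fill it in.
    with open(path, "wb") as f:
        if hasattr(os, "posix_fallocate"):       # Unix only
            os.posix_fallocate(f.fileno(), 0, TOTAL)
        for _ in range(TOTAL // CHUNK):
            f.write(b"\0" * CHUNK)

for fn in (grow_on_demand, pre_allocate):
    start = time.perf_counter()
    fn("test.bkf")
    os.remove("test.bkf")
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f} s")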