B2D file sizes and number of backup sets per file

bjorn_b
Level 6
Partner Accredited Certified

I have a question about file sizes and the number of backup sets per file.

I see three scenarios:

1. 4 GB / max 40 backup sets per file, defrag once or twice a month:

The file size is good, and the number of backup sets per file stays controllable. However, we need to defrag the volume once in a while.

2. 4 GB / allocate maximum space

Every file gets 4 GB, but we have no control over how many backup sets end up per file. No defragmentation needed, though.

3. 30 GB / 1 backup set per file

(This is a scenario I have discussed with a customer.) I don't like this configuration, mostly because if a backup fails, there is a high chance that you lose everything, whereas with 4 GB files you might still have some usable backup. Am I right?

The customer feels that this gives him good control over the backups and over which backup job is connected to which files. It's hard to argue with that, but I guess there is a good(!) reason why this is not the default way of doing it.
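To make the comparison concrete, here is a rough sketch of how many .bkf files each setup would need for a sample job. The job size, set count, and the simple "full when either limit is hit" model are my own assumptions for illustration; Backup Exec's actual file placement may differ.

```python
import math

# Rough, hypothetical comparison of the three setups above. The model
# assumes a .bkf file is "full" when either its size limit or its
# sets-per-file limit is reached -- a simplification for illustration.

def bkf_files_needed(job_gb, sets_in_job, max_file_gb, max_sets_per_file):
    """Lower bound on .bkf files for one job, limited by size and set count."""
    by_size = math.ceil(job_gb / max_file_gb)
    by_sets = math.ceil(sets_in_job / max_sets_per_file)
    return max(by_size, by_sets)

# Example: a 120 GB job containing 6 backup sets
print(bkf_files_needed(120, 6, 4, 40))    # scenario 1: 30 files
print(bkf_files_needed(120, 6, 4, 100))   # scenario 2: 30 files, pre-allocated
print(bkf_files_needed(120, 6, 30, 1))    # scenario 3: 6 files
```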

 

Any comments on these three solutions?

One interesting thing is that even with alternative 3 we see several files (a lot smaller than 30 GB) per backup job. How come?

 

/bjorn


6 REPLIES

bjorn_b
Level 6
Partner Accredited Certified
Really, no one has anything to say about this? Or maybe there's a TechNote?

pkh
Moderator
VIP Certified

Just to be clear on terminology: I will presume that by "per file" you mean "per .bkf file".

You have to be careful when referring to the parameter "maximum number of backup sets per file". A backup set here refers to a single resource that is backed up, e.g. C: drive, D: drive, SQL database, system state, etc. If your job backs up the C: drive, D: drive, SQL database, and system state from Machine A, and the C: drive and system state from Machine B, that counts as 6 backup sets, not 1.
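A quick illustration of that counting (hypothetical job selections, just to show the arithmetic):

```python
# Each selected resource on each machine is its own backup set.
job_selections = {
    "Machine A": ["C: drive", "D: drive", "SQL database", "system state"],
    "Machine B": ["C: drive", "system state"],
}

total_sets = sum(len(resources) for resources in job_selections.values())
print(total_sets)  # 6 -- six backup sets in one job, not one
```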

To put all the backup sets from a single job into one .bkf file, you need to set the max number of backup sets parameter high enough and then specify overwrite for each job.

if a backup fails, there is a high chance that you lose everything, whereas with 4 GB files you might still have some usable backup. Am I right?

This is not true. When a job fails, whatever was backed up before the failure is usually unusable. You probably would not be able to read the .bkf file, and even if you could, it would be incomplete. For example, suppose your backup job fails while backing up the system state: even if you can restore the C: drive, you are left with an unusable system.

bjorn_b
Level 6
Partner Accredited Certified
Thanks for the information. What about allocating the whole space at once, and what about the size of the backup file? (Yes, I mean the .bkf file.) The default is 4 GB and 100 sets, but I see a lot of fragmented file systems around. /bjorn

pkh
Moderator
VIP Certified

It is up to you. You can pre-allocate the .bkf files to decrease fragmentation, but you may waste space if you over-allocate. There is no specific rule or recommendation as to the size of the .bkf files.
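As a rough illustration of the over-allocation cost (made-up job sizes; assumes every .bkf is pre-allocated to the full 4 GB):

```python
import math

preallocated_gb = 4                # maximum .bkf size, pre-allocated in full
jobs_data_gb = [10.5, 6.2, 3.0]    # hypothetical amounts of data per job

# The last file of each job is only partially filled, so its remaining
# pre-allocated space sits unused on disk.
wasted_gb = sum(
    math.ceil(gb / preallocated_gb) * preallocated_gb - gb
    for gb in jobs_data_gb
)
print(f"{wasted_gb:.1f} GB allocated but unused")  # 4.3 GB
```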

Colin_Weaver
Moderator
Employee Accredited Certified

As pkh mostly covered, how you configure this is kind of up to you.

However, if you reduce the backup sets per file to a small number, increase the maximum BKF size, and enable the allocate maximum space option, then you are likely to run into a scenario where the BKF can't be written to even though it contains empty space, because the maximum sets per file limit is reached long before the file fills up (usually down to not understanding what counts as a backup set).
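A minimal sketch of that failure mode, with made-up values:

```python
max_file_gb = 30        # large maximum BKF size
max_sets_per_file = 2   # small maximum-sets-per-file limit

sets_written = 2        # two small backup sets already in the file
data_written_gb = 1.5

size_limit_hit = data_written_gb >= max_file_gb     # False: lots of room left
set_limit_hit = sets_written >= max_sets_per_file   # True: file is "full"

if set_limit_hit and not size_limit_hit:
    unused = max_file_gb - data_written_gb
    print(f"File closed to new sets with {unused:.1f} GB of capacity unused")
```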

Similarly, there is a performance-related part to what you are asking about that can only be answered by testing in your environment, as different hardware specifications affect the answer.

I.e., a fragmented disk as the B2D target can slow down backups and restores; however, creating BKF files with allocate maximum BKF size enabled takes a bit longer in the media mount phase as each new BKF is created, which slows down the job. As such, B2D performance is a balancing act between fragmentation, BKF file size, and whether or not it is a good idea to enable the maximum allocation setting.

You should also bear in mind that GRT-enabled backups (Exchange, SQL, AVVI, etc.) don't really use BKF files (except as a placeholder during the backup), as they create IMG folders instead. IMG folders are not subject to the allocate maximum setting, which only applies to BKF files. I haven't checked this next statement, but if you have a 30 GB maximum allocation enabled and do GRT backups, it might create a 30 GB placeholder file; as such, the recommendation would be not to allocate the maximum if GRT is used.

bjorn_b
Level 6
Partner Accredited Certified
Just a small correction: if you enable allocate maximum size, the max number of backup sets per file option is disabled, which makes sense. Thanks for clarifying. I have used 4 GB with allocate maximum size for a while (with good speed), but I tend to go for a default setup (4 GB / 100 sets) and enable automatic defrag at least once or twice a month. I have experienced 99% fragmentation on a 6 TB volume, and that was no fun at all (and duplication speed to tape was a mess).