05-31-2011 03:16 AM
I have a question about file sizes and the number of backup sets per file.
I see a couple of scenarios:
1. 4 GB / max 40 backup sets per file, defrag once or twice a month:
The file size is good, and the number of backup sets per file is manageable. However, we need to defragment the volume once in a while.
2. 4 GB / allocate maximum space:
Every file gets 4 GB, but we have no control over how many backup sets end up in each file. No defragmentation needed, though.
3. 30 GB / 1 backup set per file:
(This is a scenario I have discussed with a customer.) I don't like this configuration, mostly because if a backup fails, there is a big chance that you lose everything, whereas with 4 GB files you might still have some of the backup, am I right?
The customer feels that this gives him good control of the backups and of which backup job is connected to which files. It's hard to argue with that, but I guess there is a good(!) reason why this is not the default way of doing it.
Any comments on these three solutions?
One interesting thing is that even with alternative 3 we see several files (a lot smaller than 30 GB) per backup job. How come?
/bjorn
06-03-2011 07:51 PM
I don't understand your term "pr. file". I will presume that you are talking about "per .bkf file".
You have to be careful when referring to the parameter "maximum number of backup sets per file". A backup set here refers to one resource that is backed up, e.g. the C: drive, the D: drive, a SQL database, the system state, etc. If your job backs up the C: drive, D: drive, a SQL database and the system state from Machine A, plus the C: drive and system state from Machine B, this counts as 6 backup sets, not 1.
To put all the backup sets from a single job in one .bkf file, you need to set the maximum number of backup sets parameter high enough and then specify overwrite for each job.
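The set counting described above can be sketched in a few lines. This is only an illustration of the counting rule (each selected resource on each machine is one set); the numbers and the 4-set limit are made-up examples, not Backup Exec defaults:

```python
import math

# Hypothetical job selections: each resource on each machine is one backup set.
job_selections = {
    "MachineA": ["C:", "D:", "SQL database", "System State"],
    "MachineB": ["C:", "System State"],
}

total_sets = sum(len(resources) for resources in job_selections.values())
print(total_sets)  # 6 sets, not 1

# With "maximum backup sets per file" = 4 (an example value), this one job
# needs at least two .bkf files even if all six sets fit in 4 GB:
max_sets_per_file = 4
files_needed = math.ceil(total_sets / max_sets_per_file)
print(files_needed)  # 2
```

This is also why a single job can produce several .bkf files even when none of them is full.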
"if a backup fails, there is a big chance that you lose everything, but if you have 4 GB files, you might have some backup, am I right?"
This is not true. When a job fails, whatever was backed up before the failure is usually unusable. You probably would not be able to read the .bkf file, and even if you could, it would be incomplete. For example, suppose your backup job fails while backing up the system state: even if you can restore the C: drive, you are left with an unusable system.
06-04-2011 06:27 PM
It is up to you. You can pre-allocate the .bkf files to reduce fragmentation, but you may waste space if you over-allocate. There is no specific rule or recommendation for the size of the .bkf files.
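The over-allocation trade-off is easy to put numbers on. A back-of-envelope sketch, with entirely made-up figures (not a recommendation):

```python
# If jobs typically write far less than the pre-allocated size,
# the difference is space that sits reserved but unused on the volume.
preallocated_gb = 30     # assumed "allocate maximum size" setting
typical_job_gb = 12      # assumed actual data written per file
wasted_gb = preallocated_gb - typical_job_gb
waste_pct = 100 * wasted_gb / preallocated_gb
print(f"{wasted_gb} GB ({waste_pct:.0f}%) allocated but unused per file")
# -> 18 GB (60%) allocated but unused per file
```

Sizing the files close to what jobs actually write keeps this waste small while still limiting fragmentation.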
06-06-2011 04:17 AM
As pkh has mostly covered, how you configure this is largely up to you.
However, if you reduce the backup sets per file to a small number, increase the maximum BKF size, and enable the allocate-maximum-space option, you are likely to run into a scenario where a BKF cannot be written to even though it contains empty space, because the maximum-sets-per-file limit is reached first (usually the result of not understanding what a backup set is).
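The stranded-space scenario above can be shown with a toy model. This is not Backup Exec internals, just a sketch of a file with both a size cap and a sets-per-file cap, using the 30 GB / 1 set configuration from the original question:

```python
class Bkf:
    """Toy model of a pre-allocated BKF with a size cap and a set-count cap."""

    def __init__(self, size_gb, max_sets):
        self.free_gb = size_gb
        self.sets = 0
        self.max_sets = max_sets

    def append_set(self, set_gb):
        """Return True if the set fits, False if a new BKF would be needed."""
        if self.sets >= self.max_sets or set_gb > self.free_gb:
            return False
        self.sets += 1
        self.free_gb -= set_gb
        return True

bkf = Bkf(size_gb=30, max_sets=1)  # alternative 3: 30 GB, 1 set per file
print(bkf.append_set(0.5))         # True  - first set lands, file is now "full"
print(bkf.append_set(0.5))         # False - set limit hit despite free space
print(bkf.free_gb)                 # 29.5  - allocated yet unusable
```

The lower the set limit relative to the file size, the more allocated space can end up unreachable this way.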
Similarly, there is a performance aspect to what you are asking that can only be answered by testing in your environment, as different hardware specifications affect the answer.
For example, a fragmented disk as the B2D target can slow down backups and restores; on the other hand, creating BKF files with allocate maximum size enabled takes a bit longer in the media-mount phase as each new BKF is created, which slows down the job. As such, B2D performance is a balancing act between fragmentation, BKF file size, and whether or not it is a good idea to enable the maximum-allocation setting.
You should also bear in mind that GRT-enabled backups (Exchange, SQL, AVVI, etc.) don't really use BKF files (except as a placeholder during the backup); they create IMG folders instead. IMG folders are not subject to the allocate-maximum setting, which only applies to BKF files. I haven't verified this next statement, but if you have a 30 GB maximum allocation enabled and run GRT backups, it might create a 30 GB placeholder file; as such, the recommendation would be not to allocate the maximum if GRT is used.