
.img and .bkf sizes

suren424
Level 5

What I see from our .img folders is that they don't have a specific size limit. Is that how it works?

If so, what is the use of creating a B2D folder with size limits?

The attached screenshots show the properties of the B2D folder we created with the setup wizard. Could you help me understand the "backup-to-disk file management" and "concurrent operations" sections?

Do those screenshots look good, settings-wise?

We have a 5TB B2D folder. Can we split it into two and make full use of it? Please advise.

Thanks.

1 ACCEPTED SOLUTION

pkh
Moderator
VIP Certified

Each .img folder contains the image of the resource backed up by a GRT backup, so it is as big or as small as the resource itself. For example, if you have a 500GB VM and back it up with GRT, the .img folder will be around 500GB.

The .bkf files are serial media and can grow endlessly, so you need to specify a maximum size for each .bkf, after which a new .bkf file is created.

The max size has already been explained above. The allocate max size option specifies that each .bkf is created at the max size up front. This means that even if the max size is 5GB and you only need to write 5MB, BE will still allocate 5GB when this option is enabled.

The max number of backup sets is the number of resources a .bkf can contain before a new one is allocated. You should leave this parameter at its default of 100.

The concurrent jobs parameter specifies the number of jobs that can write to the B2D folder at the same time.
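The rollover rules above can be sketched as a toy simulation. This is a hypothetical illustration of the mechanics (new .bkf when the max size or max set count would be exceeded; "allocate max size" fixes the on-disk footprint), not Backup Exec's actual implementation; all names and values here are made up for the example.

```python
# Toy sketch of .bkf rollover (hypothetical; not Backup Exec code).
# A new .bkf file is started when adding the next backup set would
# exceed either the max file size or the max number of backup sets.

MAX_SIZE_GB = 5        # "Maximum size for backup-to-disk files"
MAX_SETS = 100         # "Maximum number of backup sets per .bkf"
ALLOCATE_MAX = False   # "Allocate the maximum size" up front

def place_backup_sets(set_sizes_gb):
    """Distribute backup sets across .bkf files; return the file list."""
    bkf_files = []
    current = {"sets": [], "used_gb": 0.0}
    for size in set_sizes_gb:
        # Roll over to a new .bkf if this set would exceed either limit.
        if current["sets"] and (
            current["used_gb"] + size > MAX_SIZE_GB
            or len(current["sets"]) >= MAX_SETS
        ):
            bkf_files.append(current)
            current = {"sets": [], "used_gb": 0.0}
        current["sets"].append(size)
        current["used_gb"] += size
    if current["sets"]:
        bkf_files.append(current)
    # With "allocate max size", each file occupies the full MAX_SIZE_GB
    # on disk no matter how much data it actually holds.
    for f in bkf_files:
        f["on_disk_gb"] = MAX_SIZE_GB if ALLOCATE_MAX else f["used_gb"]
    return bkf_files

# Three 2GB backup sets: the third would push the first file past 5GB,
# so it starts a second .bkf.
files = place_backup_sets([2.0, 2.0, 2.0])
print(len(files))              # 2 files
print(files[0]["on_disk_gb"])  # 4.0 (would be 5.0 if ALLOCATE_MAX were True)
```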


3 REPLIES


suren424
Level 5

"The max number of backup sets" : backup sets means . i am confused here.

If we don't do a GRT backup and if it's a  full/incremental backup without GRT there would be no IMG folders ...right ?

if concurrent jobs parameter =2 ,we can perform only 2 jobs to that B2d disk ..right ?

if we want to do more backup jobs can we increase the limit ...what should be taken into consideration before increasing the concurrent jobs parameter. which is recommended ?

thanks.

 

 

pkh
Moderator
VIP Certified

"The max number of backup sets" : backup sets means . i am confused here.

A backup set is a resource to be backed up, like the C: drive, system state, SQL database, etc.  If you backup a server's C: drive, system state and SQL database, then you would have 3 backup sets.  When the max is exceeded, a new .bkf file would be created even if the max size is not exceeded.

Don't worry about this parameter.  Leave it at the default of 100 and don't append to any disk media.

"If we don't do a GRT backup, and it's a full/incremental backup without GRT, there would be no .img folders, right?"

Yes.

"If the concurrent jobs parameter = 2, we can run only 2 jobs against that B2D folder at a time, right?"

Yes.  

You can increase this parameter and run more jobs, but more concurrent jobs does not mean the overall backup time will decrease. The jobs have to contend for resources like CPU, disk I/O, and network bandwidth.
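The behaviour of the concurrent-jobs limit can be pictured as a semaphore: at most N jobs write to the B2D folder at once, and any extra jobs queue until a slot frees up. This is a hypothetical sketch of that idea using Python threads, not Backup Exec's scheduler; the job names and timings are invented for the example.

```python
# Sketch of a concurrent-jobs limit (hypothetical; not Backup Exec code).
# A semaphore of size 2 means at most 2 "jobs" run at once; the other
# jobs block until one of the running jobs releases its slot.
import threading
import time

CONCURRENT_JOBS = 2
slots = threading.BoundedSemaphore(CONCURRENT_JOBS)
lock = threading.Lock()
active = 0   # jobs currently "writing"
peak = 0     # highest number of simultaneous jobs observed

def backup_job(name):
    global active, peak
    with slots:                  # blocks if CONCURRENT_JOBS are running
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # simulate writing to the B2D folder
        with lock:
            active -= 1

# Submit 5 jobs; only 2 ever run at the same time.
threads = [threading.Thread(target=backup_job, args=(f"job{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # at most 2
```

Raising CONCURRENT_JOBS lets more jobs run at once, but as noted above they then contend for CPU, disk I/O, and network bandwidth, so total elapsed time may not improve.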