
Wisdom of SLP.JOB_SUBMISSION_INTERVAL=1440

Ron_Cohn
Level 6

Per the documentation:

"By default, all jobs are processed before more jobs are submitted. Increase this interval to allow NetBackup to submit more jobs before all jobs are processed"

There are multiple ways to throttle NBU jobs.  For me, I restrict the number of concurrent streams at the device level.  What is the downside if I change the default value (which is 5 minutes) to some greater value - or even disable this parameter altogether?

Many thanks in advance!
 


5 REPLIES

RamNagalla
Moderator
Partner    VIP    Certified

If you are referring to SLP.JOB_SUBMISSION_INTERVAL, 5 minutes is the default value, and to disable it you would effectively need to set it to longer than the lifetime of the master server.

The minimum value you can give is 10 seconds, and the maximum is 2147483647 seconds (around 68 years).

This is essentially a pause, or break, that the SLP process takes to allow more backup jobs to come in before it starts new sessions.


Changing this value to more than 5 minutes works well when you have a large number of small images and would like to batch them into a single stream, using the parameter MAX_SIZE_PER_DUPLICATION_JOB.

For example:
Let's say you are creating 100 backup images in the span of 1 hour (60 minutes), and each image is 1 GB in size, so 100 GB all together.

When you keep SLP.JOB_SUBMISSION_INTERVAL at 5 minutes, it will trigger all 100 images across 12 sessions, and every job will be fairly small.
But when you keep SLP.JOB_SUBMISSION_INTERVAL at 60 minutes, it will trigger the 100 images in only one session, as one stream.

So you can have a look at these and do some testing to find which values fit best for your environment.
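
To put rough numbers on the example above, here is a toy Python sketch of the arithmetic only - the function name is made up and this is not NetBackup code, just the session math for a steady stream of images:

    import math

    def slp_sessions(total_images, window_minutes, interval_minutes):
        # Toy model: how many SLP duplication sessions get started for images
        # produced over window_minutes when jobs are submitted every
        # interval_minutes. Illustrative arithmetic only, not NetBackup code.
        sessions = math.ceil(window_minutes / interval_minutes)
        images_per_session = total_images / sessions
        return sessions, images_per_session

    # 100 images of 1 GB each, created over 60 minutes
    print(slp_sessions(100, 60, 5))    # -> 12 sessions, ~8.3 images per session
    print(slp_sessions(100, 60, 60))   # -> 1 session, 100 images in one stream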

Deeps
Level 6

This parameter lets you decide how long you want to defer SLP duplication for. A longer submission interval means a longer backlog, and when those jobs finally get submitted they may burden the available resources. You cannot disable this parameter; by default it takes the value of 5 minutes. Disabling it would mean disabling SLP job submission altogether.
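
To put a number on that backlog, here is a toy calculation reusing the rate from the example above (100 one-GB images per hour). The 1440-minute value from the thread title would hold back roughly a full day of images before each submission run - again, illustrative arithmetic only, not a NetBackup API:

    def backlog(images_per_hour, avg_size_gb, interval_minutes):
        # Rough size of the queue that builds up between SLP submission runs.
        images = images_per_hour * interval_minutes / 60
        return images, images * avg_size_gb

    for interval in (5, 60, 1440):
        imgs, gb = backlog(100, 1, interval)
        print(f"{interval:>5} min interval -> ~{imgs:.0f} images (~{gb:.0f} GB) queued per run")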

DeepS

Ron_Cohn
Level 6

Nagalla,

That is a wonderful explanation.  Now, can I ask you about one more parameter:

SLP.MAX_SIZE_PER_DUPLICATION_JOB

" The largest batch size that can run as a single duplication job"

I have images to replicate that are close to 2 TB in size, so could you please explain the nuances of this parameter?  Everything else makes sense (I think).

Many thanks - again...
 

RamNagalla
Moderator
Partner    VIP    Certified

MAX_SIZE_PER_DUPLICATION_JOB

is the one that applies when NetBackup is batching multiple small images, each smaller than the size specified in MAX_SIZE_PER_DUPLICATION_JOB.

But when an image is larger than the size specified in MAX_SIZE_PER_DUPLICATION_JOB, it will trigger a duplication job for that one image on its own.

Let's say

MAX_SIZE_PER_DUPLICATION_JOB is set to 100 GB

and you have 200 images that need a duplication job, of which 5 are more than 100 GB in size. Those 5 images each trigger a separate job, and the remaining 195 images get batched up based on their size until a batch reaches 100 GB (the maximum size from the parameter above), and then the job is triggered.

Hope that makes it clear...
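
To make the batching rule concrete, here is a small Python sketch of the grouping described above, assuming a simple greedy fill of each batch - the function name is invented and this is not NetBackup's actual implementation:

    def batch_duplication_jobs(image_sizes_gb, max_size_gb=100):
        # Images at or above the limit get their own duplication job;
        # smaller images are grouped until a batch reaches the limit.
        jobs, current, current_size = [], [], 0
        for size in image_sizes_gb:
            if size >= max_size_gb:          # oversized image -> its own job
                jobs.append([size])
                continue
            current.append(size)
            current_size += size
            if current_size >= max_size_gb:  # batch is full -> submit it
                jobs.append(current)
                current, current_size = [], 0
        if current:                          # leftover partial batch
            jobs.append(current)
        return jobs

    # 200 images: 5 of 150 GB and 195 of 1 GB
    jobs = batch_duplication_jobs([150] * 5 + [1] * 195)
    print(len(jobs))   # 7: five single-image jobs plus two batches of small images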

Ron_Cohn
Level 6

Perfect and Thank You!