NDMP backup not using all tape drives

chris86
Level 3

I have recently been having issues with NDMP backups: jobs queue up with "Limit has been reached for the logical resource" even though all drives are up and there is plenty of blank media ready to go.

I have read and performed the steps in the articles below, but with no success:
https://vox.veritas.com/t5/NetBackup/Limit-has-been-reached-for-the-logical-resource/td-p/770815

https://vox.veritas.com/t5/NetBackup/196-because-limit-has-been-reached-for-requested-resource/td-p/...

https://vox.veritas.com/t5/NetBackup/limit-has-been-reached-for-the-logical-resource-netbackup-7/td-...

I have also done the following:

- Shut down my library (Dell ML6000) and re-seated the drives.

- Increased network throughput by setting up NIC teaming, as I originally thought it was a network performance issue.

- Upgraded NetBackup from 7.7.3 to 8.2.

I'm not sure what else I should be looking at. I can't share most logs publicly due to my organisation's security procedures, but I can try to filter some things to share if applicable.

10 REPLIES

chris86
Level 3

Sorry, I should also add that I have inserted NEW_STREAM entries in the Backup Selections list between volume paths, but I'm not sure if I should be using that feature this way.
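
To illustrate, the Backup Selections list looks roughly like this (the volume paths below are made-up placeholders, not my real ones), with 'Allow multiple data streams' ticked in the policy attributes since, as I understand it, NEW_STREAM is only honoured when that option is enabled:

NEW_STREAM
/vol/svm_vol01
NEW_STREAM
/vol/svm_vol02
NEW_STREAM
/vol/svm_vol03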

Marianne
Moderator
Partner    VIP    Accredited Certified

@chris86 

All of the posts above are about Client Max Jobs:

"Limit has been reached for the logical resource master.NBU_CLIENT.MAXJOBS.client-name"

Can you show us details of your queued jobs? 
You can replace hostnames with generic names like I have done above. 

Have you checked for orphaned device allocations or performed a reset of nbrb?
Please post the 'MdsAllocation' section of 'nbrbutil -dump' output.
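
On the master server, something along these lines should capture that section (UNIX path shown; on Windows nbrbutil lives under install_path\NetBackup\bin\admincmd - I am writing this from memory, so please verify against the command reference):

/usr/openv/netbackup/bin/admincmd/nbrbutil -dump > /tmp/nbrb_dump.txt
grep MdsAllocation /tmp/nbrb_dump.txt

If you do find orphaned allocations and decide to reset nbrb, only run 'nbrbutil -resetAll' when no jobs are active, as it clears all Resource Broker allocations and requests.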

chris86
Level 3

Below are details of a queued job. As you can see, it stayed queued for many hours, and that whole time there were unused drives. It waits until another job from the same policy finishes and then moves on to the next one, one at a time. I do get at least 4 streams to one drive, but that's about it.

23/08/2019 2:00:09 PM - Info nbjm (pid=5912) starting backup job (jobid=90411) for client netapp.cluster, policy netbackup.policy, schedule Weekly-Full
23/08/2019 2:00:09 PM - Info nbjm (pid=5912) requesting STANDARD_RESOURCE resources from RB for backup job (jobid=90411, request id:{9EF89CF6-622E-416F-AFE1-968E3C4C2C5E})
23/08/2019 2:00:09 PM - requesting resource  __ANY__
23/08/2019 2:00:09 PM - requesting resource  netbackup.server.NBU_CLIENT.MAXJOBS.netapp.cluster
23/08/2019 2:00:09 PM - requesting resource  netbackup.server.NBU_POLICY.MAXJOBS.netbackup.policy
23/08/2019 2:00:10 PM - Info nbrb (pid=3876) Limit has been reached for the logical resource netbackup.server.NBU_CLIENT.MAXJOBS.netapp.cluster
24/08/2019 4:14:46 PM - granted resource  netbackup.server.NBU_CLIENT.MAXJOBS.netapp.cluster
24/08/2019 4:14:46 PM - granted resource  netbackup.server.NBU_POLICY.MAXJOBS.netbackup.policy
24/08/2019 4:14:46 PM - granted resource  RW4174
24/08/2019 4:14:46 PM - granted resource  IBM.ULTRIUM-TD6.002
24/08/2019 4:14:46 PM - granted resource  netbackup.servername-hcart3-robot-tld-0
24/08/2019 4:14:46 PM - estimated 3285696204 kbytes needed
24/08/2019 4:14:46 PM - Info nbjm (pid=5912) started backup (backupid=netapp.cluster_1566627286) job for client netapp.cluster, policy netbackup.policy, schedule Weekly-Full on storage unit netbackup.servername-hcart3-robot-tld-0
24/08/2019 4:14:47 PM - Info bpbrm (pid=18596) netapp.cluster is the host to backup data from
24/08/2019 4:14:47 PM - Info bpbrm (pid=18596) telling media manager to start backup on client
24/08/2019 4:14:47 PM - Info bptm (pid=17396) using 12 data buffers
24/08/2019 4:14:47 PM - Info bptm (pid=17396) using 65536 data buffer size
24/08/2019 4:14:51 PM - Info bpbrm (pid=29300) sending bpsched msg: CONNECTING TO CLIENT FOR netapp.cluster_1566627286
24/08/2019 4:14:51 PM - Info bpbrm (pid=29300) start bpbkar32 on client
24/08/2019 4:14:51 PM - Info bpbkar32 (pid=27988) Backup started
24/08/2019 4:14:51 PM - Info bpbrm (pid=29300) Sending the file list to the client
24/08/2019 4:14:51 PM - Info ndmpagent (pid=27988) PATH(s) found in file list = 1
24/08/2019 4:14:51 PM - Info ndmpagent (pid=27988) PATH[1 of 1]: /netbackup.policy/transport
24/08/2019 4:14:51 PM - Info ndmpagent (pid=27988) NDMP Remote tape
24/08/2019 4:14:51 PM - connecting
24/08/2019 4:14:51 PM - connected; connect time: 0:00:00
24/08/2019 4:14:52 PM - begin writing

MDS allocations in EMM:

MdsAllocation: allocationKey=8916 jobType=1 mediaKey=4000181 mediaId=RW4181 driveKey=2000016 driveName=IBM.ULTRIUM-TD6.001 drivePath={4,0,1,0} stuName=netbackup.servername-hcart3-robot-tld-0 masterServerName=netbackup.servername mediaServerName=netbackup.servername ndmpTapeServerName= diskVolumeKey=0 mountKey=0 linkKey=0 fatPipeKey=0 scsiResType=1 serverStateFlags=1

Marianne
Moderator
Partner    VIP    Accredited Certified

" Limit has been reached for the logical resource netbackup.server.NBU_CLIENT.MAXJOBS.netapp.cluster "

Your jobs are queued because the limit for Client Maxjobs has been reached.
So, if not enough streams for the client can be generated, there will be unused drives as a result. 

What is the value for  Maximum jobs per client on your master server? 
Host Properties.-> Select Master Server.
In the right pane, double-click the server icon.
Click Global Attributes.
What is the value of  Maximum jobs per client 

The next place to check is Master server Properties -> Client Attributes
Does 'netapp.cluster' exist under the list of client names?
 If not, ignore the following. 
If so, select the name.
Is there a value for Maximum data streams?
This value should not be less than the Global Attributes setting.
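
If you prefer the command line, the same setting appears in the master server configuration listing; on the master, something like this (UNIX path shown, quoted from memory, so check the command reference):

/usr/openv/netbackup/bin/admincmd/bpconfig -U

Look for the jobs-per-client line in the output and compare it to the number of streams you expect to run at once.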

chris86
Level 3

Maximum jobs per client = 6

Maximum data streams = not applicable; netapp.cluster does not exist under Client Attributes.

Marianne
Moderator
Partner    VIP    Accredited Certified

Are you not seeing 6 active jobs? 

chris86
Level 3

I see at most 5 active jobs from the NDMP policies, all writing to one tape drive while the other 3 drives do nothing. Then, when other policies that are VMware or client-based start, they use the other drives, and the NDMP backup continues to use just the one drive.

The VMware and client-based backups usually finish while NDMP is still running. Our full backups start on a Friday afternoon, run over the weekend and usually finish Sunday night, but they are now running into business hours through Monday and Tuesday.

Or have I misunderstood, and this is how it actually works, and the NDMP backup data from our NetApp is now just too big?


Marianne
Moderator
Partner    VIP    Accredited Certified

If memory serves me right, the parent job unfortunately counts as 1 job.
So, 5 jobs will actually write data.

If you want to activate more jobs, you need to increase Max Jobs Per Client.
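
As a rough example of the arithmetic (using your observation of about 4 streams landing on a drive): with Maximum jobs per client = 6 you get 1 parent job plus 5 writers, so to keep all 4 drives busy at roughly 4 streams each you would need something closer to 4 x 4 + 1 = 17. The value can be changed in the Global Attributes dialog, or, I believe, from the command line on the master with bpconfig's max-jobs-per-client option, for example:

/usr/openv/netbackup/bin/admincmd/bpconfig -mj 17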

chris86
Level 3

So I recreated the policy from scratch, split it in two and changed Max Jobs Per Client to 8, and I seem to have got it to use another tape drive, but it still wasn't fast enough to complete in time.

I am going to try upping Max Jobs Per Client to 12 and see how it goes.

Thanks so far for the advice @Marianne

chris86
Level 3

So I upped Max Jobs Per Client to 16 and all drives came into use! But... I have realised that I am now splitting the bandwidth across all the jobs, which may have made it even worse.

I now have a data size/bandwidth problem which is NetApp related. I discovered that the policies are set up to point at the NetApp cluster management port (1Gb) instead of directly at the SVMs, which have a larger network port (10Gb).
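
A rough back-of-the-envelope calculation (my own numbers, so only approximate), using the ~3,285,696,204 KB estimate from the job log above:

estimated image size : ~3.3 TB
1Gb management LIF   : ~125 MB/s at best   -> ~26,000 s, i.e. 7+ hours for that one image
10Gb SVM data LIF    : ~1,250 MB/s at best -> ~2,600 s, i.e. roughly 45 minutes

And every stream is currently sharing that same 1Gb port, so the more streams I add, the worse each one gets.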

I am working with our NetApp admins to rectify the issue and will post an update.