01-27-2020 05:19 AM
Hello everyone,
I'm facing an issue during a backup with a DB2-type policy.
The backup jobs are started by a script on the client (RHEL 6.6 and NBU 8.1.1).
The error is the following:
Jan 26, 2020 9:41:16 PM - Error bprd (pid=23436) Unable to write progress log </usr/openv/netbackup/logs/user_ops/dbext/logs/27107.0.1580071151> on client client_name. Policy=policy_name Sched=NONE
Jan 26, 2020 9:41:16 PM - Error bprd (pid=23436) CLIENT client_name POLICY policy_name SCHED NONE EXIT STATUS 61 (the vnetd proxy encountered an error)
Normally this kind of backup creates 5 separate jobs. In this case 4 jobs completed successfully, but the last one failed with the message above. I checked the path mentioned in the error and found a log file for each process, including the failed one, so I can't explain the error message.
This is the content of the log file of the failed job:
cat 27107.0.1580071151
Backup started Sun Jan 26 21:39:11 2020
21:41:16 INF - Server status = 61
Do you have any idea?
Thank you
01-27-2020 08:22 AM
Hello,
enable logging on the DB2 client side (bphdb, bpbkar, etc. - see the NetBackup for DB2 Admin Guide). Review the DB2 logs, too. I suspect the cause will be on the DB2 side.
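As a sketch, the legacy debug log directories can be created on the client like this. The demo path below is a placeholder; on a real client they live under /usr/openv/netbackup/logs, and NetBackup also ships a mklogdir script there that creates the full set.

```shell
# Sketch: create the legacy debug log directories NetBackup writes to.
# NBU_LOGS points at a demo directory here; on the real DB2 client use
# /usr/openv/netbackup/logs (or simply run /usr/openv/netbackup/logs/mklogdir).
NBU_LOGS="./netbackup-logs-demo"
for d in bphdb bpbkar bpcd vnetd; do
    mkdir -p "$NBU_LOGS/$d"
done
# Raising verbosity is a bp.conf change on the client, e.g.:
# echo "VERBOSE = 5" >> /usr/openv/netbackup/bp.conf
ls "$NBU_LOGS"
```

A process only starts writing its legacy log once its directory exists, so create these before the next backup run.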
Can you tell us what changed before this one job started failing, given that it was working earlier?
Regards
Michal
01-28-2020 12:24 AM
Hello @Michal_Mikulik1 ,
The bpbkar and bphdb directories already exist. The bpbkar directory only contains the log from yesterday, 27th January, so I can't see what happened on 26th January.
No changes were made beforehand. The backups completed successfully on 24th January; one of them failed on 26th January with status code 61 as explained above; yesterday they completed successfully again. I don't understand the cause of the issue.
Why do you suspect the issue is on the DB2 side?
Thank you
Regards
01-28-2020 12:56 AM
I would personally start with bprd log on the master server, since it is bprd that reported the errors.
Can you perhaps share all text in the Job Details?
The Status Code Manual says we should look for another error preceding status 61:
Message: the vnetd proxy encountered an error
Explanation: The command or job was not able to perform secure communication operations. NetBackup may report additional information in a related code in the 76xx range of codes.
Recommended Action: Examine one of the following for a 76xx code that precedes the status code 61, and then look up the explanation for that 76xx code:
For more information, review this technical article: https://www.veritas.com/support/en_US/article.100039945
You may want to create the vnetd log directory on the DB2 client as well.
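For example, once the Job Details or bprd log text is saved to a file, a quick grep can surface any 76xx code near the status 61. The sample lines below are made up purely for illustration; point the grep at your real saved output.

```shell
# Sketch: search saved job details / bprd log text for a 76xx code.
# The file content here is hypothetical, just to demonstrate the pattern.
cat > job_details_sample.txt <<'EOF'
Jan 26, 2020 9:41:15 PM - Error bprd (pid=23436) proxy returned status 7648
Jan 26, 2020 9:41:16 PM - Error bprd (pid=23436) CLIENT client_name EXIT STATUS 61
EOF
grep -E '76[0-9]{2}' job_details_sample.txt
```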
01-28-2020 04:42 AM
hello @Marianne , these are the job details:
Jan 26, 2020 9:41:16 PM - Error bprd (pid=23436) Unable to write progress log </usr/openv/netbackup/logs/user_ops/dbext/logs/27107.0.1580071151> on client client_name. Policy=policy_name Sched=NONE
Jan 26, 2020 9:41:16 PM - Error bprd (pid=23436) CLIENT client_name POLICY policy_name SCHED NONE EXIT STATUS 61 (the vnetd proxy encountered an error)
Unfortunately I no longer have the bprd log on the master for that date.
Intore
01-28-2020 04:50 AM
Really only 2 lines in the Job Details?
Without more info or bprd log, there is really nothing to help with troubleshooting.
You will have to ensure that all the log folders are in place and then collect logs directly after the next failure.
01-28-2020 05:02 AM
Hello,
several notes:
- your logging retention is probably very short; please extend it (see Host Properties\Master Server\Logging)
- consult your DB2 admin (if you have one) about possible issues
- I suspect the client side because only one job is failing, and only sometimes. A server-side problem would usually lead to all jobs failing
- what is in the failing job's file list? Some examples I can think of that could lead to this error: a LOGARCHMETH parameter change, a DB configuration change, a missing OFFLINE DB backup, a DROP/RECREATE of the DB, etc. But we have almost no information about the client/DB2 side, and we don't know the DB2 version either.
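As a rough sketch, the archive-log setting mentioned above can be checked from the DB2 side. DBNAME is a placeholder database name, `db2 get db cfg` must be run as the instance owner, and the block falls back to a message where no db2 CLI exists.

```shell
# Sketch: inspect the DB2 archive logging method; a change here (e.g. away
# from VENDOR) is one thing that can break NetBackup log backup jobs.
# DBNAME is a placeholder database name.
if command -v db2 >/dev/null 2>&1; then
    RESULT=$(db2 get db cfg for DBNAME | grep -i logarchmeth)
else
    RESULT="db2 CLI not found; run this on the DB2 client as the instance owner"
fi
echo "$RESULT"
```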
Regards
Michal
01-28-2020 06:14 AM
Thank you @Marianne @Michal_Mikulik1 ,
I'll check the log files if the issue occurs again.
Thanks for your support