
ERR - Unable to NFS mount the required file system.

marcusb12
Level 3

Hi,

We are trying to do an Exchange GRT backup. The environment is Exchange 2019 on Windows Server 2019, with a media server running RHEL and NetBackup 8.2 clients.

In doing so, our backups are only partially successful, with the following errors:

02/11/2020 2:49:49 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Unable to NFS mount the required file system.
02/11/2020 2:50:59 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 2:51:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB001\Logs_1604288407\
02/11/2020 2:51:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 2:53:09 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 2:53:47 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB002\Logs_1604288407\
02/11/2020 2:53:47 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:00:57 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 3:01:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB003\Logs_1604288407\
02/11/2020 3:01:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:04:27 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 3:05:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB004\Logs_1604288407\
02/11/2020 3:05:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:06:08 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 3:06:47 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB005\Logs_1604288407\
02/11/2020 3:06:47 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:07:18 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 3:07:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB006\Logs_1604288407\
02/11/2020 3:07:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:08:16 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 3:08:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB007\Logs_1604288407\
02/11/2020 3:08:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:09:16 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 3:09:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB008\Logs_1604288407\
02/11/2020 3:09:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.
02/11/2020 3:09:47 PM - Info bptm (pid=3513457) waited for full buffer 10933 times, delayed 32049 times
02/11/2020 3:09:48 PM - Info bptm (pid=3513457) EXITING with status 0 <----------
02/11/2020 3:09:48 PM - Info bpbrm (pid=3513266) validating image for client <mydag>
02/11/2020 3:09:48 PM - Info bpbkar (pid=36536) done. status: 1: the requested operation was partially successful
02/11/2020 3:09:48 PM - end writing; write time: 0:21:01
The requested operation was partially successful  (1)

I have followed the instructions in this document: https://www.veritas.com/content/support/en_US/article.100000686, and when doing so I receive the following error:

C:\Program Files\Veritas\NetBackup\bin>nbfs mount -server mymediaserver -cred <cred> *
connect to mymediaserver failed
EXIT_STATUS=25
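
For what it's worth, NetBackup status 25 generally means "cannot connect on socket", so a quick reachability check from the Exchange server toward the media server's NetBackup ports can help narrow this down. A minimal sketch from PowerShell, assuming the name mymediaserver resolves from the client:

# From PowerShell on the Exchange server: can we reach PBX (1556) and nbfsd (7394) on the media server?
Test-NetConnection -ComputerName mymediaserver -Port 1556
Test-NetConnection -ComputerName mymediaserver -Port 7394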

Any ideas?

ACCEPTED SOLUTION

Fixed by opening the following ports inbound on the media server's AWS security group:

1556 - NetBackup (PBX)

7394 - NetBackup nbfsd

111 - NetBackup GRT (portmapper / rpcbind)


11 REPLIES

Marianne
Moderator
Partner VIP Accredited Certified

Lowell_Palecek
Level 6
Employee

Assuming you have NFS configured on the Exchange server and media server per the documentation, I would look for the nbfsd log on the media server. This is a difficult log to read. Try to line up timestamps and look for requests received and results returned. Either you'll find an error, or you'll find that the request doesn't make it to nbfsd. In the latter case, get your system admins involved to troubleshoot NFS.
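
If the requests never make it to nbfsd, a rough first check (a sketch, assuming the standard NFS utilities are present on both sides) is to confirm that the portmapper and nbfsd are listening on the media server and reachable from the Exchange server while a GRT job or nbfs mount test is active:

# On the media server: confirm rpcbind (111) and nbfsd (7394) are listening
ss -ltn | grep -E ':111|:7394'

# From the Exchange server (or any host with NFS admin tools): query the media server's portmapper.
# While nbfsd is running it registers the nfs (100003) and mount (100005) programs, so they should appear here.
rpcinfo -p mymediaserver
showmount -e mymediaserver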

Hi Marianne,

Yep, the clients have Client for NFS installed.
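
A quick way to double-check that on the Exchange nodes (a sketch; the Client for NFS service name is typically NfsClnt, but verify on your build):

# From PowerShell on each Exchange server: is the NFS-Client feature installed and its service running?
Get-WindowsFeature NFS-Client
Get-Service NfsClnt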

Hi Lowell,

Thanks for the tip. I've had a look but can't spot any obvious errors in there.


cat root.110320_00001.log
09:46:04.189 [3603068] <2> debuglog: <2> common_init: nbfsd: starting...  @(#) NetBackup_8.2  mymediaserver  (2019062501, 1604357164 201103.094604)
09:46:04.189 [3603068] <2> debuglog: <2> common_init: nbfsd: unix - Linux, mymediaserver, 4.18.0-193.14.3.el8_2.x86_64, #1 SMP Mon Jul 20 15:02:29 UTC 2020, x86_64
09:46:04.189 [3603068] <2> logparams: nbfsd -d -v 5 -j 4 -B client01_123456789
09:46:04.190 [3603068] <2> debuglog: <2> ivfs_parse_use_options: bid client01_123456789, type , options
09:46:04.190 [3603068] <2> debuglog: <2> ivfs_parse_use_options: generic use type
09:46:04.190 [3603068] <2> debuglog: <2> ivfs_parse_use_options: returning useinfo 0x25a6f90 -
09:46:04.190 [3603068] <2> debuglog: <2> setup_pmap_method: IPV6_NBFSD is enabled. Selecting RPCB methods
09:46:04.190 [3603068] <2> debuglog: <2> init_bpconf_str : tested for nbfsd_codepage
09:46:04.190 [3603068] <2> debuglog: <2> init_bpconf_bool: tested for nbfsd_diag_mode
09:46:04.190 [3603068] <2> debuglog: <2> init_touchfiles: initial daemon timeout values - idle 10, inactivity 60
09:46:04.190 [3603068] <2> debuglog: <2> selfcheck_xdr: 32: 00001112 21222324 31323334 35363738 41424344 45464748 51525354 00006162
09:46:04.190 [3603068] <2> debuglog: <2> main: nbfsd port: 7394
09:46:04.191 [3603068] <2> debuglog: <4> my_rpcb_dump: done: rv 0, num_found 0/12
09:46:04.191 [3603068] <2> debuglog: <2> swprc_init: NFS max fileop datasize is 32768
09:46:04.191 [3603068] <2> debuglog: <2> swprc_init: NFS max dirop  datesize is 32768
09:46:04.191 [3603068] <2> debuglog: <2> swprc_init: selecting embedded RPC environment
09:46:04.191 [3603068] <2> debuglog: <2> rpc_init: FD_SETSIZE 1024
09:46:04.191 [3603068] <2> debuglog: <2> rpc_update_fdset: nfds 0, max 1
09:46:04.192 [3603068] <2> debuglog: <2> rpc_add_xp: adding ADDR=(none) 0  S=5 FAM=10 TCP=1 B=1 HIST=0/0 XP=0x25a8d80
09:46:04.192 [3603068] <2> debuglog: <2> rpc_update_fdset: nfds 1, max 6
09:46:04.192 [3603068] <2> debuglog: <2> rpc_reg_prog: registered prog 100005, ver 1 on 0x25a8d80 as 0x444094
09:46:04.192 [3603068] <2> debuglog: <2> rpc_reg_prog: registered prog 100005, ver 3 on 0x25a8d80 as 0x444094
09:46:04.192 [3603068] <2> debuglog: <2> rpc_reg_prog: registered prog 100003, ver 3 on 0x25a8d80 as 0x4453e2
09:46:04.192 [3603068] <2> debuglog: <2> rpc_add_xp: adding ADDR=(none) 0  S=6 FAM=10 TCP=0 B=1 HIST=0/10 XP=0x25a8e80
09:46:04.192 [3603068] <2> debuglog: <2> rpc_update_fdset: nfds 2, max 7
09:46:04.192 [3603068] <2> debuglog: <2> rpc_reg_prog: registered prog 100005, ver 1 on 0x25a8e80 as 0x444094
09:46:04.192 [3603068] <2> debuglog: <2> rpc_reg_prog: registered prog 100005, ver 3 on 0x25a8e80 as 0x444094
09:46:04.192 [3603068] <2> debuglog: <2> rpc_reg_prog: registered prog 100003, ver 3 on 0x25a8e80 as 0x4453e2
09:46:04.192 [3603068] <2> debuglog: <2> rpc_init: done
09:46:04.192 [3603068] <2> debuglog: <4> main: using IVFS
09:46:04.192 [3603068] <2> debuglog: <2> mount_init: preloaded 100 export names
09:46:04.192 [3603068] <2> debuglog: <2> ivfs_init_once: ivfs log mask set of 0x3e
09:46:04.192 [3603068] <2> debuglog: <4> ivfs_init_once: instance id 0xbe6b086 0xf416bed2
09:46:04.192 [3603068] <2> debuglog: <2> remove_tempdir: tempdir is /usr/openv/tmp/nbfsd_ivfs_tempfiles
09:46:04.192 [3603068] <2> debuglog: <2> remove_tempdir: no tempdir to remove
09:46:04.192 [3603068] <2> debuglog: <2> create_tempdir: created tempdir - /usr/openv/tmp/nbfsd_ivfs_tempfiles
09:46:04.192 [3603068] <2> debuglog: <2> NEW_xmiglist: setting on-demand image blk cache size to 20 MB
09:46:04.192 [3603068] <2> debuglog: <2> vfs::vfs: new vfs 0x25bcff0
09:46:04.192 [3603068] <2> debuglog: <2> vfs::_ialloc: new vino 0x25c5140, vfs 0x25bcff0, inum 1, name (root)
09:46:04.192 [3603068] <2> debuglog: <2> vfs::_init_rootdir: new root vip 0x25c5140, vfs 0x25bcff0
09:46:04.192 [3603068] <2> debuglog: <2> vfs::_ialloc: new vino 0x25c5270, vfs 0x25bcff0, inum 10, name backups
09:46:04.192 [3603068] <2> debuglog: <2> vfs::_ialloc: new vino 0x25c5490, vfs 0x25bcff0, inum 11, name inprogress
09:46:04.192 [3603068] <2> debuglog: <2> vfs::_ialloc: new vino 0x25c56b0, vfs 0x25bcff0, inum 12, name aif
09:46:04.192 [3603068] <2> debuglog: <4> ivfs_init_once: image options set to 0103
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:   VFS VFS { vfs 0x25bcff0, dev 1, options 0, nextinum 13, root 0x25c5140, temppath /usr/openv/tmp/nbfsd_ivfs_tempfiles }
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:       DIR VINO { vino 0x25c5140, vfs 0x25bcff0, DIR, inum 1/1, size 8192, parent (nil), name (root) }  /
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:           VINO { vino 0x25c5270, vfs 0x25bcff0, DIR, inum 1/10, size 8192, parent 0x25c5140, name backups }  backups
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:           VINO { vino 0x25c5490, vfs 0x25bcff0, DIR, inum 1/11, size 8192, parent 0x25c5140, name inprogress }  inprogress
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:           VINO { vino 0x25c56b0, vfs 0x25bcff0, DIR, inum 1/12, size 8192, parent 0x25c5140, name aif }  aif
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:           DIR VINO { vino 0x25c5270, vfs 0x25bcff0, DIR, inum 1/10, size 8192, parent 0x25c5140, name backups }  /backups
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:           DIR VINO { vino 0x25c5490, vfs 0x25bcff0, DIR, inum 1/11, size 8192, parent 0x25c5140, name inprogress }  /inprogress
09:46:04.192 [3603068] <2> debuglog: <2> dirdump:           DIR VINO { vino 0x25c56b0, vfs 0x25bcff0, DIR, inum 1/12, size 8192, parent 0x25c5140, name aif }  /aif
09:46:04.192 [3603068] <2> debuglog: <2> cred_decode_and_register: decode successful: issue 1604357164, pid 3603068, tmo 10, job_id 4, bid client01_123456789, media , use
09:46:04.192 [3603068] <2> debuglog: <2> ivfs_parse_use_options: bid client01_123456789, type , options
09:46:04.192 [3603068] <2> debuglog: <2> ivfs_parse_use_options: generic use type
09:46:04.192 [3603068] <2> debuglog: <2> ivfs_parse_use_options: returning useinfo 0x25c6e30 -
09:46:04.192 [3603068] <2> debuglog: <2> cred_decode_and_register: accepted new cred: acp 0x25c6f00
09:46:04.192 [3603068] <2> debuglog: <2> cred_add_new_clientname: added new client client01 to the list
09:46:04.192 [3603068] <2> debuglog: <2> ivfs_cred_add: client01_123456789 (0 0)
09:46:04.192 [3603068] <2> debuglog: <2> cred_dump1: ACP 0x25c6f00:  0 0 0  (nil) (nil)    0 0    1604357164 1604357164 1604357164 (0)  4  (none)  client01_123456789  (none)  (none)
09:46:04.192 [3603068] <2> debuglog: <4> cred_dump: 1 creds (1604357164)
09:46:55.327 [3603068] <2> debuglog: <4> handle_signal: signal 2 caught
09:46:55.327 [3603068] <2> debuglog: <4> daemon_exit: exiting 2
09:46:55.327 [3603068] <2> debuglog: <2> imglist::~imglist: this 0x25bcfc0
09:46:55.327 [3603068] <2> debuglog: <2> remove_tempdir: tempdir is /usr/openv/tmp/nbfsd_ivfs_tempfiles
09:46:55.327 [3603068] <2> debuglog: <2> remove_tempdir: removed tempdir - /usr/openv/tmp/nbfsd_ivfs_tempfiles
09:46:55.327 [3603068] <2> debuglog: <2> nbufs_user_tmpdir_cleanup: entering ...
09:46:55.327 [3603068] <2> debuglog: <2> nbufs_user_tmpdir_cleanup: no leftover user tmp files to remove
09:46:55.327 [3603068] <2> debuglog: <2> remove_remaining_staging_files: entering ...
09:46:55.327 [3603068] <2> debuglog: <2> daemon_exit: removed file </usr/openv/netbackup/bin/nbfsd_key>
09:46:55.327 [3603068] <2> debuglog: <4> daemon_exit: NBFSD EXIT_STATUS 2

marcusb12
Level 3

I realised that the NFS client was stopped on the media server. After starting it, the log looks better, but I'm still getting the same errors.
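
For anyone hitting the same thing, on a RHEL media server the relevant pieces can be checked roughly like this (a sketch; exact unit names and which services GRT actually needs can vary by version):

# On the RHEL media server: check that rpcbind (portmapper) is running
systemctl status rpcbind

# Start it and make it persistent across reboots
systemctl enable --now rpcbind

# nfs-utils provides the client-side NFS support NetBackup GRT relies on
rpm -q nfs-utils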

Actually, when I follow the commands in this document: https://www.veritas.com/content/support/en_US/article.100000686

I can never mount the drive; it gets stuck at this point indefinitely:

C:\Program Files\Veritas\NetBackup\bin>nbfs mount -server mymediaserver -cred BBCB97CF22B6C74CFA2738BC3C4F69E8CF7F31ECC069D672239D977B2FB3E2B288872549C0EC2F0FD83B540AA109000E373FCF7E1549FE8366D43A91BA4412BB *
BASEDIR=
MNTPATH=/NBUFS_A_EA9CC093F145072D_001
MOUNTPOINT=

Which looks like this on the media server:

16:45:36.605 [3637197] <2> debuglog: <2> common_init: nbfsd: starting...  @(#) NetBackup_8.2  mymediaserver (2019062501, 1604382336 201103.164536)
16:45:36.605 [3637197] <2> debuglog: <2> common_init: nbfsd: unix - Linux, mymediaserver, 4.18.0-193.14.3.el8_2.x86_64, #1 SMP Mon Jul 20 15:02:29 UTC 2020, x86_64
16:45:36.605 [3637197] <2> logparams: nbfsd -d -v 5 -j 5 -B client02_123456789
16:45:36.605 [3637197] <2> debuglog: <2> ivfs_parse_use_options: bid client02_123456789, type , options
16:45:36.605 [3637197] <2> debuglog: <2> ivfs_parse_use_options: generic use type
16:45:36.605 [3637197] <2> debuglog: <2> ivfs_parse_use_options: returning useinfo 0x3000f90 -
16:45:36.605 [3637197] <2> debuglog: <2> read_shared_memory: Failed to get shared memory segment
16:45:37.606 [3637197] <2> debuglog: <2> cred_load_keyfile: read nbfsd_key file
16:45:57.125 [3636296] <8> vnet_get_user_credential_path: [vnet_vxss.c:1620] status 35 0x23
16:45:57.125 [3636296] <8> vnet_check_user_certificate: [vnet_vxss_helper.c:4122] vnet_get_user_credential_path failed 35 0x23
16:45:57.125 [3636296] <2> ConnectionCache::connectAndCache: Acquiring new connection for host forge, query type 81
16:45:57.125 [3636296] <2> logconnections: BPDBM CONNECT FROM 10.66.4.50.43739 TO 10.66.4.50.13721 fd = 10
16:45:57.130 [3636296] <2> closeConnection: Caching connection for query type 81 for reuse
16:45:57.130 [3636296] <2> debuglog: <16> imgrec::refresh: failed to get image record for client02_123456789 (227)
16:45:57.130 [3636296] <2> debuglog: <16> imgrec::refresh: THROWING 22 failed to get image record

Is this a DAG? If so, do you have the DAG name mapped to your Exchange server names in the Distributed Application Restore Mapping? This is a host configuration setting for the master server.

The most recent error you're getting from your nbfsd mount attempt says that the master server is not providing the client with information about the backup image. Look in the bprd log to find out why. bprd is the request daemon on the master server. I think the request is bpclimagelist; I am running a GRT backup to confirm it.

In the bprd log, look for "process_request: imagelist". Copy the lines for that pid into a separate editor window. Look for a line like the following:

>is_redirected_restore_allowed: Component host <vm562dag> is validated for application host <rsvlmvc01vm562>

Here the component host is my DAG and the application host is the Exchange server where bpbkar32 ran. The function is_redirected_restore_allowed is the DARM lookup.

Check the exit status:

>process_request: EXIT STATUS 0

You may need to temporarily set logging level to 3 on the master server to find these log entries. I have mine set to 5.
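
To actually capture those entries, the legacy bprd log directory has to exist on the master and verbosity has to be turned up. A rough sketch for a UNIX master (paths differ on a Windows master):

# On the master server: create the legacy bprd log directory (bprd only writes this log if it exists)
mkdir -p /usr/openv/netbackup/logs/bprd

# Check the global logging level (e.g. VERBOSE = 5 in bp.conf); a change may need a daemon restart to take effect
grep VERBOSE /usr/openv/netbackup/bp.conf

# After re-running the nbfs mount test, search for the image list request and the DARM lookup
grep -i "process_request: imagelist" /usr/openv/netbackup/logs/bprd/*
grep -i "is_redirected_restore_allowed" /usr/openv/netbackup/logs/bprd/*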

Hi Lowell,

In response to your other post, yes I have the DAG mapped as a host mapping.

I can't see those lines in my bprd log, and I've turned the bprd logging up to 5 as well.

The only obvious errors I can see when manually testing a GRT backup are:

10:29:31.700 [3947558] <2> debuglog: <16> imgrec::refresh: failed to get image record for client01_123456789 (227)
10:29:31.700 [3947558] <2> debuglog: <16> imgrec::refresh: THROWING 22 failed to get image record
10:29:31.700 [3947558] <2> debuglog: <2> ivfs_cred_activate: caught from image refresh - 22 failed to get image record
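
Status 227 in NetBackup is "no entity was found", so this reads as though the master can't return an image record for that backup ID. A quick hedged check that the image exists and is visible for the DAG client name used in the backup ID:

# On the master server: list recent images for the DAG client
bpimagelist -client <mydag> -hoursago 24

# Or the client-side view that the GRT mount relies on
bpclimagelist -client <mydag>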

Well, bizarre fix. I started by going back to basics to try to rule out what might be causing the issue.

I opened up our AWS security group for the media server to allow any inbound / any outbound traffic and tried again: fixed. When I removed that and allowed only ports 1556 and 7394, it now works, despite that being the config before.

I will try again on Monday and see if it is still working. If it is, then I will consider this fixed, but I can't see how that might have fixed the problem when I changed it back to what it was before!

Fixed by opening the following ports inbound on the media server's AWS security group:

1556 - NetBackup (PBX)

7394 - NetBackup nbfsd

111 - NetBackup GRT (portmapper / rpcbind)
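
Port 111 is the portmapper (rpcbind) that the GRT NFS mount negotiates through, which would explain why opening it mattered. As a rough sketch (the security group ID and CIDR below are placeholders), the same inbound rules can be added with the AWS CLI:

# Allow NetBackup PBX, nbfsd and portmapper inbound to the media server's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1556 --cidr 10.66.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7394 --cidr 10.66.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 111 --cidr 10.66.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 111 --cidr 10.66.0.0/16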