NBE_CatImage::DataMatches: mode changed for inode 6366518 (old: 0100600, new: 0100664)

scottji
Not applicable

Hi,

For the past few days, NDMP backups of one partition containing user home directories have been failing.

In the BPDBM logs I find 'duplicate NODE' errors.
The reported inode is different every time, and it always belongs to a new file that was created during the backup window.


First attempt:

10:13:30.842 [3884.3072] <2> db_get_image_info: Job in progress, found image file D:\Program Files\Veritas\NetBackup\db\images\hostname\1302000000\tmp\NDMP-UserData_data01_1302075113_FULL
10:13:30.873 [3884.3072] <8> NBE_CatImage::nbdmp_addNode: duplicate NODE for inode 3011574
10:13:30.873 [3884.3072] <16> NBE_CatImage::DataMatches: mode changed for inode 3011574 (old: 0100664, new: 0100600)
10:13:30.982 [3884.3072] <32> add_files: File: nbe_cat_image.cpp Line: 1101: invalid input data, Function: NBE_CatImage::nbdmp_addNode
10:13:30.998 [3884.3072] <2> process_request: request complete: exit status 13 file read failed; query type: 78


Second attempt:

10:44:27.492 [4332.5312] <2> db_get_image_info: Job in progress, found image file D:\Program Files\Veritas\NetBackup\db\images\hostname\1302000000\tmp\NDMP-UserData_data01_1302078266_FULL
10:44:27.492 [4332.5312] <8> NBE_CatImage::nbdmp_addNode: duplicate NODE for inode 6366518
10:44:27.492 [4332.5312] <16> NBE_CatImage::DataMatches: mode changed for inode 6366518 (old: 0100600, new: 0100664)
10:44:27.649 [4332.5312] <32> add_files: File: nbe_cat_image.cpp Line: 1101: invalid input data, Function: NBE_CatImage::nbdmp_addNode
10:44:27.649 [4332.5312] <2> process_request: request complete: exit status 13 file read failed; query type: 78


I suspect the following happens:
- NBU backs up user X's files
- user X removes a file and its inode becomes available
- user Y creates a file, which is assigned the freed inode
- NBU backs up user Y's files and fails with the duplicate-inode error

Which checks are performed to decide that it is a duplicate rather than a newer file?
Is the decision based on the mtime of both files? Is the ctime compared as well?
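The race described above can be illustrated with a small simulation. This is purely a sketch: the catalog structure and the metadata comparison are inferred from the "duplicate NODE" / "mode changed" / "mtime changed" lines in the bpdbm log, not from NetBackup source code.

```python
# Simulate a backup catalog keyed by inode, as the bpdbm log messages suggest.
# If the same inode arrives twice with different metadata (mode or mtime),
# the add is rejected -- matching the "duplicate NODE ... mode changed" errors.

class CatalogError(Exception):
    pass

class Catalog:
    def __init__(self):
        self.nodes = {}  # inode -> (mode, mtime)

    def add_node(self, inode, mode, mtime):
        if inode in self.nodes:
            old_mode, old_mtime = self.nodes[inode]
            # DataMatches-style check (inferred): an identical duplicate is
            # tolerated, but any metadata difference is invalid input.
            if (old_mode, old_mtime) != (mode, mtime):
                raise CatalogError(
                    f"duplicate NODE for inode {inode} "
                    f"(old mode 0{old_mode:o}, new mode 0{mode:o})")
            return
        self.nodes[inode] = (mode, mtime)

cat = Catalog()
cat.add_node(3011574, 0o100664, 1302070000)   # user X's file, walked early
# user X deletes the file; the filer reuses inode 3011574 for user Y's file
try:
    cat.add_node(3011574, 0o100600, 1302075000)  # same inode, new metadata
except CatalogError as e:
    print(e)
```

Under this reading, the catalog keys on inode number alone, so a reused inode with different mode or mtime looks like corrupt input rather than a legitimately new file.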

The partition contains roughly 6,000,000 files (~500 GB total).
Backup server: NBU 6.5.6 on Windows 2003 Server (32-bit)
Client: EMC Celerra (NAS 5.6.40)


Thanks!
Kind regards,
...

4 REPLIES

puneeta
Level 3

I have also faced the same issue: an NDMP/NetBackup full backup job on a file system with a large directory tree fails with "NBE_CatImage::ndmp_addNode: duplicate NODE for inode -1833494262" in the bpdbm log.

Files are not being changed during the backup.

I am using NetBackup 7.0.1.

From the NetBackup console logs:

5/11/2011 12:04:23 AM - requesting resource Any
5/11/2011 12:04:23 AM - requesting resource win-qk9yt8yx248.NBU_CLIENT.MAXJOBS.ibrix-node1
5/11/2011 12:04:23 AM - requesting resource win-qk9yt8yx248.NBU_POLICY.MAXJOBS.policy_34
5/11/2011 12:04:23 AM - granted resource win-qk9yt8yx248.NBU_CLIENT.MAXJOBS.ibrix-node1
5/11/2011 12:04:23 AM - granted resource win-qk9yt8yx248.NBU_POLICY.MAXJOBS.policy_34
5/11/2011 12:04:23 AM - granted resource B46E51
5/11/2011 12:04:23 AM - granted resource HP.ULTRIUM3-SCSI.001
5/11/2011 12:04:23 AM - granted resource win-qk9yt8yx248-hcart3-robot-tld-0-pnode1
5/11/2011 12:04:23 AM - estimated 242 Kbytes needed
5/11/2011 12:04:23 AM - started process bpbrm (216)
5/11/2011 12:04:24 AM - connecting
5/11/2011 12:04:24 AM - connected; connect time: 00:00:00
5/11/2011 12:04:24 AM - mounting B46E51
5/11/2011 12:04:31 AM - mounted; mount time: 00:00:07
5/11/2011 12:04:32 AM - positioning B46E51 to file 1
5/11/2011 12:04:33 AM - positioned B46E51; position time: 00:00:01
5/11/2011 12:04:33 AM - begin writing
5/11/2011 12:05:30 AM - Error bpbrm(pid=216) db_FLISTsend failed: file read failed (13)       
5/11/2011 12:05:30 AM - Error ndmpagent(pid=5008) terminated by parent process         
5/11/2011 12:05:30 AM - Error ndmpagent(pid=5008) NDMP backup failed, path = /ifs2/nfstestdir       
5/11/2011 12:05:38 AM - end writing; write time: 00:01:05
file read failed(13)

 

 

From the bpdbm logs:

23:54:11.578 [5072.4372] <2> image_db: Q_IMAGE_ADD_FILES (locking)
23:54:11.578 [5072.4372] <2> db_get_image_info: Job in progress, found image file C:\Program Files\Veritas\NetBackup\db\images\ibrix-node1\1305000000\tmp\policy_34_1305096770_FULL
23:54:11.625 [5072.4372] <2> process_request: request complete: exit status 0  ; query type: 78
23:54:13.687 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55713
23:54:13.765 [604.3820] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55713 TO 10.3.12.50.1556 fd = 644
23:54:13.781 [604.3820] <2> image_db: Q_IMAGE_ADD_FILES (locking)
23:54:13.781 [604.3820] <2> db_get_image_info: Job in progress, found image file C:\Program Files\Veritas\NetBackup\db\images\ibrix-node1\1305000000\tmp\policy_34_1305096770_FULL
23:54:13.890 [604.3820] <2> process_request: request complete: exit status 0  ; query type: 78
23:54:15.484 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55715
23:54:15.547 [4712.2168] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55715 TO 10.3.12.50.1556 fd = 644
23:54:15.562 [4712.2168] <2> image_db: Q_IMAGE_ADD_FILES (locking)
23:54:15.562 [4712.2168] <2> db_get_image_info: Job in progress, found image file C:\Program Files\Veritas\NetBackup\db\images\ibrix-node1\1305000000\tmp\policy_34_1305096770_FULL
23:54:15.609 [4712.2168] <8> NBE_CatImage::ndmp_addNode: duplicate NODE for inode -1833494262
23:54:15.609 [4712.2168] <16> NBE_CatImage::DataMatches: mtime changed for inode -1833494262 (old: 1302731016, new 1302731007)
23:54:15.625 [4712.2168] <32> add_files: File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode
23:54:15.625 [4712.2168] <2> process_request: request complete: exit status 13 file read failed; query type: 78
23:56:22.312 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55804
23:56:22.359 [700.4152] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55804 TO 10.3.12.50.1556 fd = 644
23:56:22.359 [700.4152] <2> process_request: request complete: exit status 0  ; query type: 98
23:57:04.797 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55837
23:57:04.859 [1060.4540] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55837 TO 10.3.12.50.1556 fd = 644
23:57:04.875 [1060.4540] <2> process_request: request complete: exit status 0  ; query type: 91

Files are not being changed during the backup, and it is a 32-bit file system.
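A side note on the negative inode number in the log: -1833494262 is exactly what the unsigned value 2461473034 looks like when reinterpreted as a signed 32-bit integer. So the negative value may simply be a 32-bit overflow/display artifact for a large inode number, which would fit the "32-bit file system" observation. A minimal sketch of that reinterpretation, assuming signed 32-bit truncation somewhere in the logging path (an assumption, not confirmed against NetBackup internals):

```python
def as_signed32(n):
    """Reinterpret an unsigned value as a signed 32-bit integer."""
    n &= 0xFFFFFFFF  # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

print(as_signed32(2461473034))  # -1833494262, the inode seen in the bpdbm log
```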

The NAS filer is an HP X9000.

puneeta
Level 3

After adding the "set TYPE=tar" directive, the backup completed successfully.

But why does it fail with the default value, "dump"? With another backup solution (CommVault) I did not face this issue.
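For anyone trying the same workaround: in a NetBackup NDMP policy, "set" directives are placed in the policy's Backup Selections list ahead of the paths they apply to. A sketch of what that list might look like, using the path from the console log above (the directive is from this thread; the exact filer-side effect depends on the NDMP implementation):

```
set TYPE=tar
/ifs2/nfstestdir
```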

puneeta
Level 3

With a native backup I did not face this issue. It seems this issue occurs only with NDMP + NetBackup.

Android
Level 6
Partner Accredited Certified

First, by splitting the large stream into smaller streams, all of my volumes backed up successfully.

 

In order to avoid splitting them, and to back up at the root level of the volume, I did the following:

The "set TYPE=tar" directive worked for some of the volumes but not all.

For other volumes I had to use "set snapsure=yes" to make them work.
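Both directives go in the affected policy's Backup Selections list, each before the volume path it applies to. A hedged sketch (the volume path is a placeholder, not from this thread):

```
set snapsure=yes
/volume_root
```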

 

Hope this helps anyone who is running into this issue. If this fixes your issue, please let me know.