nbe_cat_image.cpp Line: 1075: invalid input data, Function:NBE_CatImage::ndmp_addNode

I get the following error while backing up data from a Linux system. It says the input data is invalid. The data in question is backup history; what exactly is invalid is not clear.

nbu-vm-03 998 998 0 minime-25 bptm begin writing backup id minime-25_1298969675, copy 1, fragment 1, destination path /opt/naba
1298969682 1 2 32 nbu-vm-03 0 0 0 minime-25 add_files File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode
1298969683 1 4 16 nbu-vm-03 998 998 0 minime-25 bpbrm db_FLISTsend failed: file read failed (13)


db_Flist err 13 <--- failed in ndmp_addNode

The issue is 64-bit inode numbers. On the NBU side the inode number gets truncated and then matches an inode number that was added to the catalog earlier. The attributes of the two files differ, so the attribute check fails.

 

23:25:42.351 [10403] <8> NBE_CatImage::ndmp_addNode: duplicate NODE for inode 2

23:25:42.351 [10403] <16> NBE_CatImage::DataMatches: mode changed for inode 2 (old: 040777, new: 0100644)  <----- this is actually for a different inode (1154047404513689602)

23:25:42.352 [10403] <4> db_error_add_to_file: File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode

23:25:42.353 [10405] <2> image_db: ?

23:25:42.356 [10403] <32> add_files: File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode

23:25:42.356 [10403] <2> add_files: 7 files added to /usr/openv/netbackup/db/images/minime-25/1300000000/tmp/full_1300170337_FULL.f

23:25:42.356 [10403] <2> db_ImgUnlock: db_ImgUnlock(0x6c9cd60) unlocked
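The truncation described above can be sketched with a minimal Python model (this is not NBU code, just an illustration of the assumed mechanism; the inode values are taken from the log lines above):

```python
# Sketch of the assumed failure mode: a 64-bit inode number is stored in a
# 32-bit catalog field, so only the low 32 bits survive.
def truncate_to_uint32(inode):
    """Model the catalog keeping only the low 32 bits of the inode."""
    return inode & 0xFFFFFFFF

big_inode = 1154047404513689602   # inode from the bpdbm log above
small_inode = 2                   # inode already present in the catalog

# Both collapse to the same 32-bit key, so the catalog sees a
# "duplicate NODE for inode 2" and then compares the attributes of two
# different files, producing the "mode changed" mismatch.
print(truncate_to_uint32(big_inode))    # -> 2
print(truncate_to_uint32(small_inode))  # -> 2
```

(1154047404513689602 is exactly 2^60 + 2^50 + 2, so its low 32 bits are 2, which matches the "duplicate NODE for inode 2" line.)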

More info please

What NBU version is running on your master server, media server, and Linux client?

Please share the OS versions as well.


Hi,

 

I have also faced the same issue: an NDMP/NetBackup full backup job fails on a file system with a large directory tree, with "NBE_CatImage::ndmp_addNode: duplicate NODE for inode -1833494262" logged in the bpdbm log.

The files are not being changed during the backup.

I am using NetBackup 7.0.1.

From the NetBackup console logs:

5/11/2011 12:04:23 AM - requesting resource Any
5/11/2011 12:04:23 AM - requesting resource win-qk9yt8yx248.NBU_CLIENT.MAXJOBS.ibrix-node1
5/11/2011 12:04:23 AM - requesting resource win-qk9yt8yx248.NBU_POLICY.MAXJOBS.policy_34
5/11/2011 12:04:23 AM - granted resource win-qk9yt8yx248.NBU_CLIENT.MAXJOBS.ibrix-node1
5/11/2011 12:04:23 AM - granted resource win-qk9yt8yx248.NBU_POLICY.MAXJOBS.policy_34
5/11/2011 12:04:23 AM - granted resource B46E51
5/11/2011 12:04:23 AM - granted resource HP.ULTRIUM3-SCSI.001
5/11/2011 12:04:23 AM - granted resource win-qk9yt8yx248-hcart3-robot-tld-0-pnode1
5/11/2011 12:04:23 AM - estimated 242 Kbytes needed
5/11/2011 12:04:23 AM - started process bpbrm (216)
5/11/2011 12:04:24 AM - connecting
5/11/2011 12:04:24 AM - connected; connect time: 00:00:00
5/11/2011 12:04:24 AM - mounting B46E51
5/11/2011 12:04:31 AM - mounted; mount time: 00:00:07
5/11/2011 12:04:32 AM - positioning B46E51 to file 1
5/11/2011 12:04:33 AM - positioned B46E51; position time: 00:00:01
5/11/2011 12:04:33 AM - begin writing
5/11/2011 12:05:30 AM - Error bpbrm(pid=216) db_FLISTsend failed: file read failed (13)       
5/11/2011 12:05:30 AM - Error ndmpagent(pid=5008) terminated by parent process         
5/11/2011 12:05:30 AM - Error ndmpagent(pid=5008) NDMP backup failed, path = /ifs2/nfstestdir       
5/11/2011 12:05:38 AM - end writing; write time: 00:01:05
file read failed(13)

 

 

From the bpdbm logs:

23:54:11.578 [5072.4372] <2> image_db: Q_IMAGE_ADD_FILES (locking)
23:54:11.578 [5072.4372] <2> db_get_image_info: Job in progress, found image file C:\Program Files\Veritas\NetBackup\db\images\ibrix-node1\1305000000\tmp\policy_34_1305096770_FULL
23:54:11.625 [5072.4372] <2> process_request: request complete: exit status 0  ; query type: 78
23:54:13.687 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55713
23:54:13.765 [604.3820] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55713 TO 10.3.12.50.1556 fd = 644
23:54:13.781 [604.3820] <2> image_db: Q_IMAGE_ADD_FILES (locking)
23:54:13.781 [604.3820] <2> db_get_image_info: Job in progress, found image file C:\Program Files\Veritas\NetBackup\db\images\ibrix-node1\1305000000\tmp\policy_34_1305096770_FULL
23:54:13.890 [604.3820] <2> process_request: request complete: exit status 0  ; query type: 78
23:54:15.484 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55715
23:54:15.547 [4712.2168] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55715 TO 10.3.12.50.1556 fd = 644
23:54:15.562 [4712.2168] <2> image_db: Q_IMAGE_ADD_FILES (locking)
23:54:15.562 [4712.2168] <2> db_get_image_info: Job in progress, found image file C:\Program Files\Veritas\NetBackup\db\images\ibrix-node1\1305000000\tmp\policy_34_1305096770_FULL
23:54:15.609 [4712.2168] <8> NBE_CatImage::ndmp_addNode: duplicate NODE for inode -1833494262
23:54:15.609 [4712.2168] <16> NBE_CatImage::DataMatches: mtime changed for inode -1833494262 (old: 1302731016, new 1302731007)
23:54:15.625 [4712.2168] <32> add_files: File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode
23:54:15.625 [4712.2168] <2> process_request: request complete: exit status 13 file read failed; query type: 78
23:56:22.312 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55804
23:56:22.359 [700.4152] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55804 TO 10.3.12.50.1556 fd = 644
23:56:22.359 [700.4152] <2> process_request: request complete: exit status 0  ; query type: 98
23:57:04.797 [3088.3092] <2> vnet_pbxAcceptSocket: Accepted sock[644] from 10.3.12.50:55837
23:57:04.859 [1060.4540] <2> logconnections: BPDBM ACCEPT FROM 10.3.12.50.55837 TO 10.3.12.50.1556 fd = 644
23:57:04.875 [1060.4540] <2> process_request: request complete: exit status 0  ; query type: 91
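The negative inode number in this log is consistent with the 64-bit truncation described earlier in the thread, assuming the truncated value is then printed as a *signed* 32-bit integer. A small Python sketch (the 64-bit inode below is hypothetical; only its low 32 bits are derived from the logged value -1833494262):

```python
import ctypes

def as_signed32(inode):
    """Keep the low 32 bits of the inode and interpret them as a signed
    32-bit integer, which is presumably how the log line formats it."""
    return ctypes.c_int32(inode & 0xFFFFFFFF).value

# Hypothetical 64-bit inode whose low 32 bits have the sign bit set:
# 2461473034 == 2**32 - 1833494262
inode = (1 << 35) | 2461473034
print(as_signed32(inode))  # -> -1833494262, as seen in the bpdbm log
```

Any 64-bit inode with those low 32 bits would collide with, and be reported as, the same "inode -1833494262".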

 

Has anyone figured out a solution or workaround?

We are experiencing the same problem with an NDMP (Celerra) device and a Windows 2003 server that is both the master and media server, on NBU 7.0.1.

11:46:15.151 [724.2864] <8> NBE_CatImage::ndmp_addNode: duplicate NODE for inode 4019923
11:46:15.151 [724.2864] <16> NBE_CatImage::DataMatches: mtime changed for inode 4019923 (old: 1305126398, new 1305127438)
11:46:15.151 [724.2864] <4> db_error_add_to_file: File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode
11:46:15.151 [724.2864] <32> add_files: File: ../nbe_cat_image.cpp Line: 1075: invalid input data, Function: NBE_CatImage::ndmp_addNode

The NDMP job fails with a status 13. I found another forum entry from April where someone discusses the same problem, but there is no solution or workaround there either:

http://news.support.veritas.com/connect/ja/forums/nbecatimagedatamatches-mode-changed-inode-6366518-...

Anyone have any input on this? It seems to be a common problem with NDMP backups as of 7.0.1. As a test, we may try splitting the backup into multiple smaller streams instead of one large stream, to see whether processing the data set in smaller chunks avoids the behavior.


The workaround is using "set TYPE=tar" in the Backup Selections tab; with that directive the backup completed successfully.
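For reference, an NDMP Backup Selections list using this directive looks roughly like the following (a sketch only; the path is taken from the failed job earlier in the thread, and your own paths will differ):

```
set TYPE=tar
/ifs2/nfstestdir
```

The directive lines precede the path entries they apply to in the Backup Selections list.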

3/6/2011 7:11:05 PM - requesting resource Any
3/6/2011 7:11:05 PM - requesting resource win-qk9yt8yx248.NBU_CLIENT.MAXJOBS.ibrix-node1
3/6/2011 7:11:05 PM - requesting resource win-qk9yt8yx248.NBU_POLICY.MAXJOBS.policy_34
3/6/2011 7:11:05 PM - granted resource win-qk9yt8yx248.NBU_CLIENT.MAXJOBS.ibrix-node1
3/6/2011 7:11:05 PM - granted resource win-qk9yt8yx248.NBU_POLICY.MAXJOBS.policy_34
3/6/2011 7:11:05 PM - granted resource 4BC1B3
3/6/2011 7:11:05 PM - granted resource HP.ULTRIUM3-SCSI.001
3/6/2011 7:11:05 PM - granted resource win-qk9yt8yx248-hcart3-robot-tld-0-pnode1
3/6/2011 7:11:05 PM - estimated 234 Kbytes needed
3/6/2011 7:11:05 PM - started process bpbrm (1396)
3/6/2011 7:11:06 PM - connecting
3/6/2011 7:11:06 PM - connected; connect time: 00:00:00
3/6/2011 7:11:06 PM - mounting 4BC1B3
3/6/2011 7:11:15 PM - mounted; mount time: 00:00:09
3/6/2011 7:11:16 PM - positioning 4BC1B3 to file 1
3/6/2011 7:11:17 PM - positioned 4BC1B3; position time: 00:00:01
3/6/2011 7:11:17 PM - begin writing
3/6/2011 8:16:32 PM - end writing; write time: 01:05:15  the requested operation was successfully completed(0)




What is the root cause of this duplicate node issue?

After using set TYPE=tar, the backup completed successfully.

But why does it fail with the default value "dump"? With another backup solution, i.e. CommVault, I did not face this issue.

A combination of workarounds

First, by splitting the large stream into smaller streams, all of my volumes were successful.

 

In order not to have to split them, and to keep backing up at the root level of the volume, I did the following:

The "set TYPE=tar" directive worked for some of the volumes.

For the other volumes I had to use "set snapsure=yes" to make them work.

 

Hope this helps anyone who is running into this issue. If this fixes your issue, please let me know.