NDMP Backup with NETAPP v9
Hi, I am trying to add a NetApp filer as an NDMP data server in Backup Exec 16 (fully updated on 2016-02-21). I always get the error message "permission denied" (see the attached screenshot). The filer hosts several SVMs, and ndmp node-scope-mode is disabled. I have enabled debug mode on the filer. The relevant log entries are:

00000010.00044551 023277ac Tue Feb 21 2017 10:36:54 +01:00 [kern_ndmpd:info:4145] A [src/rdb/SLM.cc 2511 (0x810404200)]: SLM 1000: current _sitelist is now: SLV <<1,1193>, 9b38433d-b29d-11e4-a313-00a0987d2136>.
00000010.00044552 023277ac Tue Feb 21 2017 10:36:54 +01:00 [kern_ndmpd:info:4145] [41423] ERROR: MD5 AUTH FAILED MD5 digest mismatch for 'backupexec' -- bad password

I have tried different password lengths (8 and 16 characters) without success. NDMP backup works fine with another backup solution (IBM TSM). Any ideas on this topic are welcome. Thank you for your support. Regards, Eric
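A common cause of this exact "MD5 AUTH FAILED" message on clustered ONTAP is supplying the account's login password instead of an NDMP-specific one: with SVM-scoped NDMP and CHALLENGE (MD5) authentication, ONTAP expects a password generated on the cluster for that user. A hedged sketch of the ONTAP CLI steps (the SVM name "svm1" and user "backupexec" are examples; verify the exact syntax against your ONTAP release):

```
# Run in the clustered ONTAP CLI. "svm1" and "backupexec" are examples.

# Confirm node-scoped NDMP is off (i.e. SVM-scoped mode is in effect):
system services ndmp node-scope-mode status

# Check that NDMP is enabled and CHALLENGE auth is allowed on the SVM:
vserver services ndmp show -vserver svm1

# Generate the NDMP-specific password for the backup user, and enter
# THIS string (not the account's login password) in Backup Exec:
vserver services ndmp generate-password -vserver svm1 -user backupexec
```

If the generated password is used and the error persists, re-check which LIF Backup Exec is connecting to, since the credentials are scoped to the SVM that owns that LIF.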
Restore NDMP Tape to Windows
I have Backup Exec 2010 R3 installed on Windows 2008 R2. This was our legacy tape backup solution before we moved to a disk-based system. I have hundreds of tapes and keep Backup Exec around for old restores. We used to have a NetApp with CIFS shares and backed them up to tape via NDMP. The NetApp has since been replaced with a new Nimble SAN. I am trying to restore files from the NetApp tapes. Since I no longer have a NetApp, I am trying to restore to a Windows box and get the error below. How can I restore NetApp NDMP tapes to a Windows system?

e0001315 - An invalid destination was specified for the redirected restore. NDMP backup sets can only be redirected to an NDMP server of the same brand.

Unimplemented error code (114)
Last week I migrated our master server to new hardware. After the restore, everything works fine except that one of the NDMP host backups keeps failing. Please see the error log below:

Aug 22, 2017 8:33:28 PM - Info nbjm (pid=12373) starting backup job (jobid=2197) for client 10.105.33.14, policy NDMP_ston-mykul-p3n4, schedule Daily_backup
Aug 22, 2017 8:33:28 PM - Info nbjm (pid=12373) requesting STANDARD_RESOURCE resources from RB for backup job (jobid=2197, request id:{194631FA-8736-11E7-BDB8-408725342EC5})
Aug 22, 2017 8:33:28 PM - requesting resource MPX-TAPE_TLD3-HCART_HCART2
Aug 22, 2017 8:33:28 PM - requesting resource nbms-mykul-p3n1.NBU_CLIENT.MAXJOBS.10.105.33.14
Aug 22, 2017 8:33:28 PM - requesting resource nbms-mykul-p3n1.NBU_POLICY.MAXJOBS.NDMP_ston-mykul-p3n4
Aug 22, 2017 8:33:28 PM - awaiting resource MPX-TAPE_TLD3-HCART_HCART2. Waiting for resources. Reason: Drives are in use, Media server: mykulsnbmd001, Robot Type(Number): TLD(3), Media ID: N/A, Drive Name: N/A, Volume Pool: NDMP, Storage Unit: mykulsnbmd001-hcart-robot-tld-3-mpx, Drive Scan Host: N/A, Disk Pool: N/A, Disk Volume: N/A
Aug 22, 2017 8:33:28 PM - granted resource nbms-mykul-p3n1.NBU_CLIENT.MAXJOBS.10.105.33.14
Aug 22, 2017 8:33:28 PM - granted resource nbms-mykul-p3n1.NBU_POLICY.MAXJOBS.NDMP_ston-mykul-p3n4
Aug 22, 2017 8:33:28 PM - granted resource TB1116
Aug 22, 2017 8:33:28 PM - granted resource HP.ULTRIUM5-SCSI.004
Aug 22, 2017 8:33:28 PM - granted resource mykulsnbmd001-hcart-robot-tld-3-mpx
Aug 22, 2017 8:33:28 PM - estimated 7986768 kbytes needed
Aug 22, 2017 8:33:28 PM - Info nbjm (pid=12373) started backup (backupid=10.105.33.14_1503405208) job for client 10.105.33.14, policy NDMP_ston-mykul-p3n4, schedule Daily_backup on storage unit mykulsnbmd001-hcart-robot-tld-3-mpx
Aug 22, 2017 8:33:29 PM - connecting
Aug 22, 2017 8:33:29 PM - connected; connect time: 0:00:00
Aug 22, 2017 8:33:29 PM - begin writing
Aug 22, 2017 8:42:35 PM - end writing; write time: 0:09:06
Aug 22, 2017 8:45:46 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: creating "/svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/../snapshot_for_backup.2340" snapshot.
Aug 22, 2017 8:45:46 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Using inowalk incremental dump for Full Volume
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Using snapshot_for_backup.2340 snapshot
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Date of this level 9 dump: Tue Aug 22 20:33:35 2017.
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Date of last level 8 dump: Tue Aug 15 20:03:08 2017.
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Dumping /svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/ to NDMP connection
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: mapping (Pass I)[regular files]
Aug 22, 2017 8:45:52 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: mapping (Pass II)[directories]
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: No available buffers.
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: DUMP IS ABORTED
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) read from socket returned -1 104 (Connection reset by peer)
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) MOVER_HALTED unexpected reason = 4 (NDMP_MOVER_HALT_CONNECT_ERROR)
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) NDMP backup failed, path = /svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Deleting "/svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/../snapshot_for_backup.2340" snapshot.
Aug 22, 2017 8:46:13 PM - Error ndmpagent (pid=26939) 10.105.33.14: DATA: Operation terminated (for /svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/).
Aug 22, 2017 8:54:27 PM - Info bpbrm (pid=25638) sending message to media manager: STOP BACKUP 10.105.33.14_1503405208
Aug 22, 2017 8:54:36 PM - Info bpbrm (pid=25638) media manager for backup id 10.105.33.14_1503405208 exited with status 0: the requested operation was successfully completed
unimplemented error code (114)

NetBackup version: 8.0
NDMP: NetApp Release 8.2.1P1 Cluster-Mode
Backup Method: MPX
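When triaging a long job detail like the one above, it helps to isolate the Error lines and the first fatal DUMP event (here, "No available buffers" immediately precedes the abort). A minimal grep sketch; the log path is an example, and the heredoc just stands in for a saved copy of the job detail:

```shell
# Hypothetical example: a few lines of the job detail saved to a file.
log=/tmp/job2197.log
cat > "$log" <<'EOF'
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: No available buffers.
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: DUMP IS ABORTED
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) read from socket returned -1 104 (Connection reset by peer)
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) MOVER_HALTED unexpected reason = 4 (NDMP_MOVER_HALT_CONNECT_ERROR)
EOF

# Count the NDMP-agent errors:
grep -c 'Error ndmpagent' "$log"          # -> 2

# Show the filer-side events that led up to the abort:
grep -En 'No available buffers|DUMP IS ABORTED' "$log"
```

Reading the errors in order matters: the socket reset and MOVER_HALTED are consequences of the filer aborting the dump, so the buffer message is the lead to chase first.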
BE16 NetApp Cluster Aware Backups
Hi All, it's been a while since I have been on here. Does anyone know whether BE16 supports NDMP CAB (Cluster Aware Backup), in other words the SVM NDMP options? I managed to get it working on the older products using node-scope mode; however, that does not support browsing of the volumes, which is only available in CAB setups, and previous BE products don't support CAB. I have looked through the docs that are available so far, but I have not seen anything specific other than "NDMP is supported". Thanks.

NDMP NetApp Backup Error - 0xe0001309
Hi Community, I have a problem with my NDMP NetApp backup. Every time the backup session runs, it backs up the data but finishes with an error:

NDMP Log Message: DUMP: Total Dir to FH time spent is greater than 15 percent of phase 3 total time. Please verify the settings of backup application and the network connectivity.

Can you help me with this problem?

Orchestrating NetApp cDOT Snapshots with NetBackup 7.7
With the release of NetBackup 7.7 last month, we introduced support for NetApp Clustered Data ONTAP (cDOT). While the cDOT environment provides a highly available, scale-out storage platform, it can be tricky to back up using storage-level technologies such as NDMP or NetApp Snapshot, SnapMirror, and SnapVault.

How do I rebuild a whole device's hash?
I am failing to retrieve volumes on an SVM:

ERROR: V-378-1339-4104: #{7340} [set_data_vserv_hash: 2232] Failed to get volumes info for VServer
ERROR: V-378-1339-4077: #{7340} [process_vservers: 3091] Failed to add VServer to allowed hash.
WARNING: V-378-1318-2118: #{5232} [get_cmod_shares: 4029] NetAPP ONTAP API Err Errstr: No records found for vserver

How do I get the classes to run via the command line to rebuild the hash?

Is it possible to use the binaries on the command line to reset the hash on the suspect SVM filer? The only options I am aware of are --rehash, which rehashes all indexes, and of course:

--dirhash Build directory hash (use with caution)
--filehash Build file hash (use with caution)

but these are not at the device scope, and since it is a device I would not know which index to aim at.

CLI discovery fails:

netapp_util.exe -i 3 –v <name>
2022-06-09 14:59:06 INFO: V-378-1318-1000: #{15148} [__initlog: 386] Logging Initialized successfully
2022-06-09 14:59:06 ERROR: V-378-1318-1055: #{15148} [main: 1259] Error in parsing cmdline args
2022-06-09 14:59:06 INFO: V-378-1318-1058: #{15148} [main: 1273] netapp_util exit code: 1

Reference: -i <discovery option> {-v <vserver name>}: Get list of CMod Shares and NFS exports for all VServers, or for a specific VServer if -v is provided. Discovery options:

1: Full discovery for both CIFS shares and NFS exports.
2: Test discovery for both CIFS shares and NFS exports.
3: Full discovery for CIFS shares alone.
4: Test discovery for CIFS shares alone.
5: Full discovery for NFS exports alone.
6: Test discovery for NFS exports alone.
Any other value results in full discovery of both CIFS & NFS.

There appears to be no such option on the new service, and the knowledge base results do not show anything of value in this regard. The Data Insight Hash Utility Tool is for encrypting or decrypting passwords, which is not what I am looking for.
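One thing worth ruling out in the failing discovery command: as pasted, it uses an en dash ("–v") rather than an ASCII hyphen ("-v"), and option parsers do not treat an en dash as an option prefix, which by itself can produce an "Error in parsing cmdline args". A small shell illustration (the parse function is hypothetical, not part of netapp_util):

```shell
# Demonstrate that "-v" is parsed as an option but "–v" (en dash) is not.
parse() {
  OPTIND=1                              # reset getopts state between calls
  while getopts "i:v:" opt "$@"; do :; done
  shift $((OPTIND - 1))
  echo "$#"                             # count of args NOT consumed as options
}
parse -i 3 -v svm1     # prints 0: both options recognised and consumed
parse -i 3 –v svm1     # prints 2: "–v svm1" left over, parsing stopped
```

The fix in that situation is simply retyping the command with a plain hyphen instead of pasting it from a formatted document.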
The Veritas article "High sustained latency after ONTAP upgrade to 9.7 P14 and above due to FPolicy EAGAIN" (veritas.com) talks about the buffer size we were told to disable in favor of the fix. We have upgraded to fix version Pix

NDMP, Netapp cDOT 8.2, and parent policy to coordinate snapshot creation?
Hi, I'm trying to coordinate the creation of backup Snapshots when I back up two separate but related volumes of data. Because we're running NBU 7.7.3 on Solaris and NetApp cDOT 8.2, we cannot do CAB (Cluster Aware Backup) until we upgrade to cDOT 8.3 sometime in the future, which is a pain. I've got two volumes, Foo and Bar, which need to be snapshotted at the same time but are on separate nodes. Would using a parent policy with two sub-policies be the way to make this happen? We don't need the actual NDMP backups to run in parallel at all. I've been looking at the docs, and we've written bpstart_notify.<POLICY> scripts in the past, but I'm a bit stumped on how I should do it now. If we put both volumes into one policy, it won't run properly because the policy is bound to a single node on the NetApp, and of course we have the volumes spread across multiple nodes. Thanks, John
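One common pattern for this is a bpstart_notify script on the parent policy that creates same-named snapshots on both volumes before the child NDMP policies run. A hedged, dry-run sketch, assuming SSH access to the cluster management LIF; all host, SVM, and volume names are examples, and the ONTAP command syntax should be verified against your release:

```shell
# Hypothetical bpstart_notify.<PARENT_POLICY> sketch. It only PRINTS the
# ONTAP commands it would run over ssh; swap echo for ssh to make it live.
pre_snapshot() {
  cluster=cluster-mgmt                  # example cluster management LIF
  stamp=$(date +%Y%m%d%H%M)             # one timestamp -> matching snapshot names
  for vol in Foo Bar; do                # the two related volumes from the post
    echo "ssh admin@$cluster volume snapshot create" \
         "-vserver svm1 -volume $vol -snapshot nbu_$stamp"
  done
}
pre_snapshot    # prints one command line per volume
```

In a live script, a non-zero exit from bpstart_notify fails the backup, so the real version should test each ssh's exit status and exit non-zero if either snapshot create fails; the child policies can then back up the pre-created snapshots without needing to run in parallel.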
Veritas at NetApp Insight Berlin 2017
This year, #NetAppInsight takes place on November 13th through 16th, and Veritas is a proud sponsor of the Berlin-based event. Please visit the Veritas booth at B6 in Insight Central on the event floor to learn more about our joint solutions with NetApp. Let's discuss how Veritas can help guarantee your data is compliant, protected, and visible.