Replication Director SLPs and NetApp cDOT
Hello, I have a question about the SLP behavior with Replication Director on cDOT NetApps. On 7-Mode NetApps it was possible to have multiple SLPs (daily, weekly, monthly, ...) with the same destination volume on a backup NetApp. For example, I have a daily SLP that replicates the VMs to a secondary site, and a monthly SLP that does the same but also does a tape-out once a month for long-term retention. In 7-Mode I had one destination volume for that. In cDOT, Replication Director now creates destination volumes based on the SLP names, so if I have 3 SLPs I need 3 times the space on the destination NetApp. I wonder if no one else has this problem?
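For reference, this is how I checked what gets created (a rough sketch only; the SVM and volume names are placeholders for my environment):

    # On the master server: list the configured SLPs
    /usr/openv/netbackup/bin/admincmd/nbstl -L

    # On the destination cluster shell: show the volumes Replication Director created
    volume show -vserver backup_svm -fields size,used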
Unimplemented error code (114)
Last week I migrated the master server to new hardware. After the restore everything works fine, except that one of my NDMP host backups keeps failing. Please see the error logs below:

Aug 22, 2017 8:33:28 PM - Info nbjm (pid=12373) starting backup job (jobid=2197) for client 10.105.33.14, policy NDMP_ston-mykul-p3n4, schedule Daily_backup
Aug 22, 2017 8:33:28 PM - Info nbjm (pid=12373) requesting STANDARD_RESOURCE resources from RB for backup job (jobid=2197, request id:{194631FA-8736-11E7-BDB8-408725342EC5})
Aug 22, 2017 8:33:28 PM - requesting resource MPX-TAPE_TLD3-HCART_HCART2
Aug 22, 2017 8:33:28 PM - requesting resource nbms-mykul-p3n1.NBU_CLIENT.MAXJOBS.10.105.33.14
Aug 22, 2017 8:33:28 PM - requesting resource nbms-mykul-p3n1.NBU_POLICY.MAXJOBS.NDMP_ston-mykul-p3n4
Aug 22, 2017 8:33:28 PM - awaiting resource MPX-TAPE_TLD3-HCART_HCART2. Waiting for resources. Reason: Drives are in use, Media server: mykulsnbmd001, Robot Type(Number): TLD(3), Media ID: N/A, Drive Name: N/A, Volume Pool: NDMP, Storage Unit: mykulsnbmd001-hcart-robot-tld-3-mpx, Drive Scan Host: N/A, Disk Pool: N/A, Disk Volume: N/A
Aug 22, 2017 8:33:28 PM - granted resource nbms-mykul-p3n1.NBU_CLIENT.MAXJOBS.10.105.33.14
Aug 22, 2017 8:33:28 PM - granted resource nbms-mykul-p3n1.NBU_POLICY.MAXJOBS.NDMP_ston-mykul-p3n4
Aug 22, 2017 8:33:28 PM - granted resource TB1116
Aug 22, 2017 8:33:28 PM - granted resource HP.ULTRIUM5-SCSI.004
Aug 22, 2017 8:33:28 PM - granted resource mykulsnbmd001-hcart-robot-tld-3-mpx
Aug 22, 2017 8:33:28 PM - estimated 7986768 kbytes needed
Aug 22, 2017 8:33:28 PM - Info nbjm (pid=12373) started backup (backupid=10.105.33.14_1503405208) job for client 10.105.33.14, policy NDMP_ston-mykul-p3n4, schedule Daily_backup on storage unit mykulsnbmd001-hcart-robot-tld-3-mpx
Aug 22, 2017 8:33:29 PM - connecting
Aug 22, 2017 8:33:29 PM - connected; connect time: 0:00:00
Aug 22, 2017 8:33:29 PM - begin writing
Aug 22, 2017 8:42:35 PM - end writing; write time: 0:09:06
Aug 22, 2017 8:45:46 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: creating "/svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/../snapshot_for_backup.2340" snapshot.
Aug 22, 2017 8:45:46 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Using inowalk incremental dump for Full Volume
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Using snapshot_for_backup.2340 snapshot
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Date of this level 9 dump: Tue Aug 22 20:33:35 2017.
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Date of last level 8 dump: Tue Aug 15 20:03:08 2017.
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Dumping /svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/ to NDMP connection
Aug 22, 2017 8:45:48 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: mapping (Pass I)[regular files]
Aug 22, 2017 8:45:52 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: mapping (Pass II)[directories]
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: No available buffers.
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: DUMP IS ABORTED
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) read from socket returned -1 104 (Connection reset by peer)
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) MOVER_HALTED unexpected reason = 4 (NDMP_MOVER_HALT_CONNECT_ERROR)
Aug 22, 2017 8:46:12 PM - Error ndmpagent (pid=26939) NDMP backup failed, path = /svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/
Aug 22, 2017 8:46:12 PM - Info ndmpagent (pid=26939) 10.105.33.14: DUMP: Deleting "/svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/../snapshot_for_backup.2340" snapshot.
Aug 22, 2017 8:46:13 PM - Error ndmpagent (pid=26939) 10.105.33.14: DATA: Operation terminated (for /svm01-mykul-p1/DEPTSHARE_CIFS_OP_VOL001/).
Aug 22, 2017 8:54:27 PM - Info bpbrm (pid=25638) sending message to media manager: STOP BACKUP 10.105.33.14_1503405208
Aug 22, 2017 8:54:36 PM - Info bpbrm (pid=25638) media manager for backup id 10.105.33.14_1503405208 exited with status 0: the requested operation was successfully completed
unimplemented error code (114)

NetBackup version: 8.0
NDMP: NetApp Release 8.2.1P1 Cluster-Mode
Backup Method: MPX
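Since the only recent change was the master server migration, I was planning to re-verify the NDMP credentials and the media server buffer settings. A rough sketch of what I intend to run (default UNIX install paths; the buffer values are only examples, not a confirmed fix):

    # On the media server: re-verify the NDMP host credentials after the migration
    /usr/openv/volmgr/bin/tpautoconf -verify 10.105.33.14

    # Check / set the data buffer touch files (values below are examples only)
    ls -l /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP
    echo 64 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS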
NetBackup 7.7 NetApp cDOT 9.1 NDMP Backup Schedules and Types
Ideally, should each NetBackup policy have three schedules?
1. Full Backup (NDMP level 0)
2. Cumulative Incremental Backup (NDMP level 1)
3. Differential Incremental Backup (NDMP last level + 1, < 9)
For example, I have an NDMP backup policy with a single volume. I run #1 the first week, #2 the second week, and #3 the third through ninth week. Does this make sense? We have NetApp cDOT 9.1 and NetBackup 7.7.3 (configured for CAB). I am also using this Veritas article: https://www.veritas.com/support/en_US/article.TECH143455
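This is roughly how I was going to script the three schedules from the command line (a sketch only; the policy name, schedule labels and frequencies are placeholders, and the windows would still be set afterwards):

    # Add the three schedule types to an existing NDMP policy
    /usr/openv/netbackup/bin/admincmd/bpplsched NDMP_vol01 -add Weekly_Full -st FULL -freq 604800
    /usr/openv/netbackup/bin/admincmd/bpplsched NDMP_vol01 -add Cumulative -st CINC -freq 604800
    /usr/openv/netbackup/bin/admincmd/bpplsched NDMP_vol01 -add Differential -st INCR -freq 86400

    # List the resulting schedules
    /usr/openv/netbackup/bin/admincmd/bpplsched NDMP_vol01 -L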
NDMP, NetApp cDOT 8.2, and parent policy to coordinate snapshot creation?
Hi, I'm trying to coordinate the creation of backup Snapshots when I back up two separate but related volumes of data. Because we're running NBU 7.7.3 on Solaris and NetApp cDOT 8.2, we cannot do CAB (Cluster Aware Backups) until we upgrade to cDOT 8.3 sometime in the future, which is a pain. I've got two volumes, Foo and Bar, which need to be snapshotted at the same time but are on separate nodes. Would using a parent policy with two sub-policies be the way to make this happen? We don't need the actual NDMP backups to run in parallel at all. I've been looking at the docs, and we've done bpstart_notify.<POLICY> scripts in the past, but I'm a bit stumped on how I should do it now. If we put both volumes into one policy, it won't run properly, because the policy is bound to a single node on the NetApp, and of course we have the volumes spread across multiple nodes. Thanks, John
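The direction I'm leaning towards is a bpstart_notify script that snapshots both volumes over SSH to the cluster management LIF before the dump starts. A rough, untested sketch (hostname, SVM, volume and snapshot names are placeholders):

    #!/bin/sh
    # bpstart_notify.NDMP_FOO_BAR -- untested sketch
    # Create a same-named Snapshot of both volumes via the cluster mgmt LIF
    # before NetBackup starts the NDMP dump. A non-zero exit fails the job.
    CLUSTER=cluster-mgmt.example.com   # placeholder
    SVM=svm01                          # placeholder
    SNAP=nbu_pre_`date +%Y%m%d%H%M`

    for VOL in Foo Bar
    do
        ssh admin@$CLUSTER "volume snapshot create -vserver $SVM -volume $VOL -snapshot $SNAP" || exit 1
    done
    exit 0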