Duplication: nbstlutil report numbers
Hello, I would like to ask for clarification on the command "nbstlutil report". What exactly is counted in the number of copies? Only "images to be duplicated", or "images to be duplicated together with images that are already duplicated and waiting, e.g., for expiration"? From nbstlutil report -lifecycle xxxx I got the following:

Backlog of incomplete SLP Copies In Process (Storage Lifecycle State: 2):
Number of copies: 4028
Total expected size: 22996802 MB
SLP Name: (state) Number of copies: 4028 Size: 22996802 MB

Because when I cancel the whole processing via "nbstlutil cancel -lifecycle xxxx" I get only 1876 images instead of the total 4028, I'm wondering what is included in the "total number". Thank you for any clarification.

Tom
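A likely explanation, worth verifying in your environment: "Number of copies" counts pending copies, while nbstlutil cancel operates on images, and one image can have more than one outstanding copy when the SLP has several duplication destinations. You can compare the two counts by parsing the listing from nbstlutil stlilist -image_incomplete -U; the sample output below is hypothetical (image and storage unit names are made up), but the counting logic is the point:

```shell
# Parse (hypothetical) `nbstlutil stlilist -image_incomplete -U` output and
# compare the image count with the pending-copy count. One image can carry
# several NOT_STARTED copies, so the two totals legitimately differ.
sample='Image clientA_1580000001 for Lifecycle xxxx is IN_PROCESS
 Copy to stu-tape of type DUPLICATE is NOT_STARTED
 Copy to stu-dr of type DUPLICATE is NOT_STARTED
Image clientB_1580000002 for Lifecycle xxxx is IN_PROCESS
 Copy to stu-tape of type DUPLICATE is NOT_STARTED'
images=$(printf '%s\n' "$sample" | grep -c '^Image')
copies=$(printf '%s\n' "$sample" | grep -c 'NOT_STARTED')
echo "images=$images copies=$copies"   # prints: images=2 copies=3
```

On a real master you would pipe the live command output into the same two greps instead of the sample text.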
Unable to modify a previous version of SLP on Netbackup 8.0
When trying to modify a previous version of an SLP to change the duplication residence, it fails at the command line. The previous storage unit name was very long and doesn't fully show up in the listing; the full name is "grs-nbus2-hcart2-robot-tld-0". I've created a shorter storage unit called "nbus2-hcart3", but the nbstl utility still doesn't work. All storage unit names are lower case only.

Successful outcomes would be:
- Find a way to change the residence in the older version of the SLP weekly-nbus2.
- Stop the duplication jobs from being processed (and subsequently failing due to the incorrect storage unit density parameter).

Regards, Wayne
-----------------------------
PS > .\nbstl weekly-nbus2-d2t -U -version 16
weekly-nbus2-d2t 16
  backup       stu-nbus2-msdp  Default_24x7_Win  Fixed
  duplication  grs-nbus2-hcar  Default_24x7_Win  Fixed
PS > .\nbstl weekly-nbus2-d2t -modify_version -version 16 -residence stu-nbus2-msdp,nbus2-hcart3
nbstl: unrecognized option nbus2-hcart3
Exit status: 20 (invalid command parameter)
USAGE: nbstl [storage_lifecycle_name] [options]
REF: https://www.veritas.com/support/en_US/article.100038486
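A possible cause, judging only from the PS > prompt in the transcript: the command was run from PowerShell, and PowerShell parses an unquoted a,b token as an array of two separate arguments, so nbstl receives nbus2-hcart3 as a stray extra option. Quoting the residence list keeps it as a single argument (a sketch; names taken from the thread, not verified against this build):

```shell
# In PowerShell, quote the comma-separated residence list so it reaches
# nbstl as one argument instead of two:
.\nbstl weekly-nbus2-d2t -modify_version -version 16 -residence "stu-nbus2-msdp,nbus2-hcart3"
```

From cmd.exe or a Unix shell the unquoted form would have worked, which is why the same syntax appears in documentation without quotes.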
Duplications failed with code 191 - NetBackup 8.1
Dear contributors, I hope you are well. I'm writing because I have a problem with duplications; I'm new to the NetBackup world, so maybe it is a simple mistake. Unfortunately, all duplications have begun to fail and I need help.
- NetBackup 5230 appliance, ver. 3.1
- NetBackup ver. 8.1

LOG:
_____________________________________________________________________
Jan 30, 2020 2:09:12 PM - requesting resource LCM_CLESCNBU20010-hcart-robot-tld-1
Jan 30, 2020 2:09:12 PM - Info nbrb (pid=281640) Limit has been reached for the logical resource LCM_CLESCNBU20010-hcart-robot-tld-1
Jan 30, 2020 2:58:55 PM - granted resource LCM_CLESCNBU20010-hcart-robot-tld-1
Jan 30, 2020 2:58:55 PM - started process RUNCMD (pid=304107)
Jan 30, 2020 2:58:56 PM - begin Duplicate
Jan 30, 2020 2:58:56 PM - requesting resource CLESCNBU20010-hcart-robot-tld-1
Jan 30, 2020 2:58:56 PM - requesting resource @aaaab
Jan 30, 2020 2:58:56 PM - reserving resource @aaaab
Jan 30, 2020 2:58:56 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. No drives are available.
Jan 30, 2020 2:58:56 PM - ended process 0 (pid=304107)
Jan 30, 2020 2:59:25 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. Waiting for resources.
 Reason: Drives are in use, Media server: CLESCNBU20010.dys.corp,
 Robot Type(Number): TLD(1), Media ID: N/A, Drive Name: N/A,
 Volume Pool: Pool_Semanal, Storage Unit: CLESCNBU20010-hcart-robot-tld-1, Drive Scan Host: N/A,
 Disk Pool: N/A, Disk Volume: N/A
Jan 30, 2020 2:59:26 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. No drives are available.
Jan 30, 2020 3:15:21 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. Waiting for resources.
 Reason: Drives are in use, Media server: CLESCNBU20010.dys.corp,
 Robot Type(Number): TLD(1), Media ID: N/A, Drive Name: N/A,
 Volume Pool: Pool_Semanal, Storage Unit: CLESCNBU20010-hcart-robot-tld-1, Drive Scan Host: N/A,
 Disk Pool: N/A, Disk Volume: N/A
Jan 30, 2020 3:15:23 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. No drives are available.
Jan 30, 2020 3:15:39 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. Waiting for resources.
 Reason: Drives are in use, Media server: CLESCNBU20010.dys.corp,
 Robot Type(Number): TLD(1), Media ID: N/A, Drive Name: N/A,
 Volume Pool: Pool_Semanal, Storage Unit: CLESCNBU20010-hcart-robot-tld-1, Drive Scan Host: N/A,
 Disk Pool: N/A, Disk Volume: N/A
Jan 30, 2020 3:15:39 PM - awaiting resource CLESCNBU20010-hcart-robot-tld-1. No drives are available.
Jan 30, 2020 3:26:28 PM - resource @aaaab reserved
Jan 30, 2020 3:26:28 PM - granted resource WC0179
Jan 30, 2020 3:26:28 PM - granted resource HP.ULTRIUM7-SCSI.001
Jan 30, 2020 3:26:28 PM - granted resource CLESCNBU20010-hcart-robot-tld-1
Jan 30, 2020 3:26:28 PM - granted resource MediaID=@aaaab;DiskVolume=PureDiskVolume;DiskPool=dp_disk_CLESCNBU20010;Path=PureDiskVolume;StorageServer=CLESCNBU20010.dys.corp;MediaServer=CLESCNBU20010.dys.corp
Jan 30, 2020 3:26:29 PM - Info bpduplicate (pid=304107) window close behavior: Suspend
Jan 30, 2020 3:26:29 PM - Info bptm (pid=314191) start
Jan 30, 2020 3:26:29 PM - started process bptm (pid=314191)
Jan 30, 2020 3:26:30 PM - Info bptm (pid=314191) start backup
Jan 30, 2020 3:26:30 PM - Info bpdm (pid=314250) started
Jan 30, 2020 3:26:30 PM - started process bpdm (pid=314250)
Jan 30, 2020 3:26:30 PM - Info bpdm (pid=314250) reading backup image
Jan 30, 2020 3:26:30 PM - Info bpdm (pid=314250) using 30 data buffers
Jan 30, 2020 3:26:30 PM - Info bpdm (pid=314250) requesting nbjm for media
Jan 30, 2020 3:26:30 PM - Info bptm (pid=314191) media id WC0179 mounted on drive index 1, drivepath /dev/nst0, drivename HP.ULTRIUM7-SCSI.001, copy 2
Jan 30, 2020 3:26:30 PM - Info bptm (pid=314191) INF - Waiting for positioning of media id WC0179 on server CLESCNBU20010.dys.corp for writing.
Jan 30, 2020 3:26:32 PM - begin reading
Jan 30, 2020 3:38:11 PM - end reading; read time: 0:11:39
Jan 30, 2020 3:38:11 PM - begin reading
Jan 30, 2020 3:49:03 PM - end reading; read time: 0:10:52
Jan 30, 2020 3:49:03 PM - begin reading
Jan 30, 2020 3:59:49 PM - end reading; read time: 0:10:46
Jan 30, 2020 3:59:49 PM - begin reading
Jan 30, 2020 4:11:49 PM - end reading; read time: 0:12:00
Jan 30, 2020 4:11:49 PM - begin reading
Jan 30, 2020 4:22:58 PM - end reading; read time: 0:11:09
Jan 30, 2020 4:22:58 PM - begin reading
Jan 30, 2020 4:36:42 PM - end reading; read time: 0:13:44
Jan 30, 2020 4:36:42 PM - begin reading
Jan 30, 2020 4:46:01 PM - end reading; read time: 0:09:19
Jan 30, 2020 4:46:01 PM - begin reading
Jan 30, 2020 4:56:42 PM - end reading; read time: 0:10:41
Jan 30, 2020 4:56:42 PM - begin reading
Jan 30, 2020 5:06:46 PM - end reading; read time: 0:10:04
Jan 30, 2020 5:06:46 PM - begin reading
Jan 30, 2020 5:06:46 PM - Critical bpdm (pid=314250) Storage Server Error: (Storage server: PureDisk:CLESCNBU20010.dys.corp) PdvfsRead: Failed to read from spoold (Connection reset by peer). Ensure storage server services are running and operational. V-454-19
Jan 30, 2020 5:06:46 PM - Critical bpdm (pid=314250) sts_read_image failed: error 2060019 error occurred on network socket
Jan 30, 2020 5:06:46 PM - Critical bpdm (pid=314250) image read failed: error 2060019: error occurred on network socket
Jan 30, 2020 5:06:46 PM - Error bpdm (pid=314250) cannot read image from disk, Invalid argument
Jan 30, 2020 5:06:48 PM - Error bptm (pid=314191) media manager terminated by parent process
Jan 30, 2020 5:06:51 PM - Info CLESCNBU20010.dys.corp (pid=314250) StorageServer=PureDisk:CLESCNBU20010.dys.corp; Report=PDDO Stats for (CLESCNBU20010.dys.corp): read: 460800078 KB, CR received: 342190618 KB, CR received over FC: 0 KB, dedup: 0.0%
Jan 30, 2020 5:07:19 PM - Error bpduplicate (pid=304107) host CLESCNBU20010.dys.corp backup id dysf-ex10mbx1.dys.corp_1580004201 read failed, media read error (85).
Jan 30, 2020 5:07:19 PM - Error bpduplicate (pid=304107) host CLESCNBU20010.dys.corp backup id dysf-ex10mbx1.dys.corp_1580004201 write failed, media manager killed by signal (82).
Jan 30, 2020 5:07:20 PM - Error bpduplicate (pid=304107) Duplicate of backup id dysf-ex10mbx1.dys.corp_1580004201 failed, media manager killed by signal (82).
Jan 30, 2020 5:07:20 PM - Error bpduplicate (pid=304107) Status = no images were successfully processed.
Jan 30, 2020 5:07:20 PM - end Duplicate; elapsed time 2:08:24
no images were successfully processed (191)
_____________________________________________________________________
Please indicate if you need any more details.
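Reading the log above, the first real failure is the PdvfsRead "Connection reset by peer" line: the MSDP storage server process (spoold) dropped the connection mid-read, and the 191 / media read error (85) on the tape side are only downstream consequences. A first diagnostic pass could look like the sketch below, run on the appliance as root (commands are standard NetBackup CLI, but exact output varies by release):

```shell
# Confirm the MSDP storage server is UP before retrying the duplication:
nbdevquery -liststs -stype PureDisk -U

# Check that the MSDP content router / metadata processes are running:
ps -ef | grep -E '[s]poold|[s]pad'
```

If spoold is restarting or the storage server shows DOWN, the MSDP logs on the appliance are the next place to look; retrying the SLP before the storage server is healthy will just reproduce the same 191.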
Vault duplication failing after we move the robot controller to other media server: EXIT STATUS 303
Hi All, I just moved our NetBackup robot controller from mserver11 to mserver12. The robot can inventory fine and all other operations are running fine so far, except the Vault duplication, which gives me:

"EXIT STATUS 303: error encountered executing Media Manager command"

Checking the log in /usr/openv/netbackup/logs/vault, I found that it still wants to connect to the old media server (mserver11) instead of the new robot-controller media server (mserver12):

...
00:59:32.680 [3890] <16> vltrun@BldRobotInventory()^39: command /usr/openv/volmgr/bin/vmcheckxxx -rt tld -rn 0 -rh mserver11 -h nbubck k -list 2>&- returned [42]
00:59:32.680 [3890] <16> vltrun@BldRobotInventory()^39: FAILed MM_EC=42 MM_MSG=cannot connect to robotic software daemon
00:59:32.680 [3890] <16> vltrun@BldRobotInventory()^39: FAILed NB_EC=303 NB_MSG=error encountered executing Media Manager command
...

Which configuration do I need to change so that Vault knows the new media server with the robot controller?

Thank you,
Iwan Tamimi
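Before changing anything, it can help to confirm that the same inventory Vault runs works against the new robot-control host. The sketch below mirrors the vmcheckxxx call from the log with the host swapped; <master-server> is a placeholder because the -h argument in the quoted log appears garbled:

```shell
# Run the inventory Vault would run, but against the new robot-control host:
/usr/openv/volmgr/bin/vmcheckxxx -rt tld -rn 0 -rh mserver12 -h <master-server> -list
```

If that works, the stale mserver11 reference most likely lives in the robot definition that the Vault profile points at; re-checking the robot entry under Vault Management and the device configuration on the new host (e.g., tpconfig -d) is usually where it gets corrected. This is a suggestion from the log evidence, not a confirmed fix.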
Disk staging to tape causes status 196 on regular policies
Weekly full backups are staged to an EMC Data Domain basic disk staging storage unit. Staging to a 6-drive LTO4 robot is triggered by a schedule at 3 PM, which takes more than the scheduled window to complete. Tape-only policies that are scheduled at 11 PM fail with status 196 because all drives are occupied by the staging. How can I ensure that the staging has lower priority than regular backup policies? Is this recommended? I want to avoid tweaking the policy priorities, which are all at 0. The staging schedule has a priority of 90000, which makes no sense to me.
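A note on the 90000: in NetBackup a larger number means higher priority, so the relocation (staging) schedule at 90000 outranks backup policies at 0 and wins the drives. Lowering the staging schedule's relocation-job priority below the backup policies' priority (or raising the tape policies') should invert the ordering; whether staging or the nightly backups should win is a policy decision for your environment. You can watch the effect while jobs are queued, as a diagnostic sketch (column layout varies by release):

```shell
# List active/queued jobs with all columns, which include the job priority,
# to confirm which jobs are outranking which for drive allocation:
bpdbjobs -report -all_columns | head -5
```

Pairing this with a check of the disk staging schedule's "priority of relocation jobs" setting shows whether the change took effect.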
SLP not duplicating
Hi. We have a problem with our SLP duplications. We are currently running NetBackup 7.6.1 with one master server and 3 media servers, all Windows 2008 R2. We run all duplications to tape, a Dell TL4000. We have 7 different SLPs, 2 of which duplicate to tape. Here is our problem: one of these two SLPs won't duplicate the data to tape; it just keeps putting the data in the backlog. The backup job runs fine, but when it's time for the SLP to run the duplication, it never starts.

nbstlutil stlilist -image_incomplete -U
Image fil4.hedmark.org_1510305239 for Lifecycle DAS102-ADV-Year is IN_PROCESS
 Copy to nb-media1-hcart-robot-tld-0 of type DUPLICATE is NOT_STARTED

The problem is that the backup job is done, but it says "IN_PROCESS". I've tried the following:
- Cancelled all jobs running on the SLP, then manually ran them again.
- Deleted the SLP and created it again.
- Rebooted all master and media servers.

When running the duplication manually from "Catalog" it works fine. And the other SLP runs just fine and duplicates to tape with no problem. Any ideas?
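Two more things worth trying from the master's CLI before deeper debugging, as a sketch (the lifecycle name is taken from the thread): bounce SLP processing for just that lifecycle and re-check the backlog, which forces the SLP manager (nbstserv) to re-evaluate the pending copies:

```shell
# Suspend, then resume, secondary operations for the affected SLP:
nbstlutil inactive -lifecycle DAS102-ADV-Year
nbstlutil active -lifecycle DAS102-ADV-Year

# Re-check whether the NOT_STARTED copy gets picked up:
nbstlutil stlilist -lifecycle DAS102-ADV-Year -image_incomplete -U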
"Postpone creation" function for SLP disappears after setting
Hi, first I would like to thank you all for this forum: you have saved my day so many times over the last years! This is, however, my first post, and it's about an odd behaviour of SLP definition and processing for which I haven't found any related report. Like many other customers I'm using NetBackup 7.7.2 on a Windows Server 2008 R2 cluster environment (as master) with two Windows Server 2008 R2 hosts as media servers, and a third Windows Server 2012 host as another media server with dedicated backup pools. Associated with the two "main" media servers are two MSDP disk pools of about 20 TB each, serving as a temporary staging area (from a minimum of 3 to a maximum of 8 days) before the defined SLPs duplicate the backup images to tape for the remaining retention periods. Almost all SLPs also perform a second duplication to a separate tape pool for each backup image, in order to copy data for disaster recovery purposes. So the workflow should run like this:
- Take the backup image and write it to the MSDP pool with retention time X => COPY1
- Immediately after that, as per the SLP window, duplicate the backup image to the DR tape pool with retention time Z (tapes are automatically counted and ejected during the next working day) => COPY3
- After time X, before primary copy expiration, duplicate the backup image to the PROD tape pool with retention time Z => COPY2
During the past few years (and incarnations) of NBU this strategy worked as intended: every SLP had the flag set on "Postpone creation of this copy until the source copy is about to expire" for every COPY2 duplication. Recently I noticed that all COPY2 duplications, inside all defined SLPs, had that flag cleared. Thinking someone had changed it by mistake, I personally set that option again (only for the COPY2 duplication jobs). After about ten minutes I checked the SLP definitions, and all those settings were gone... I repeated the operation, and after some time the setting was gone again.
The worst thing is that this is not only a "display" issue, because the SLPs are not working as intended either: without the "Postpone creation of this copy until the source copy is about to expire" option, duplications for COPY2 start in conjunction with duplications for COPY3, immediately after production of COPY1, resulting in a higher SLP backlog and drive/tape workload. Attached is a screenshot of one of those odd-acting SLPs, with evidence of the option that simply refuses to stay set... Any suggestions? Thanks again, and sorry for my bad English. M.
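One way to establish whether this is a GUI problem or a catalog change is to dump what is actually persisted for the SLP before and after setting the flag, bypassing the Admin Console entirely. The SLP name below is a placeholder, and -all_versions may not exist on every release (fall back to -U -version N per version if not):

```shell
# Dump all stored versions of the SLP definition from the catalog to see
# whether the deferred/"postpone" setting survives outside the GUI:
nbstl YOUR_SLP_NAME -L -all_versions
```

If the flag is already gone in this output ten minutes after setting it, something server-side is rewriting the SLP, which narrows the support case considerably.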
Backup Exec server cannot access the specified storage device (tape library) in CAS/MBES environment
We have many backup policies in our environment and almost all of them have the same configuration. It may sound odd, but we have one backup for each VM in this environment. All of them have a weekly full job and a daily incremental job. As the second stage, each of these jobs is linked to a duplication job which duplicates the corresponding backup set to a DR open storage device. Every first Monday of the month we take the last full backup set and duplicate it to LTO5 tapes, keeping them for long retention as monthly backups. The tape library is attached to one of the MBES servers. The problem is that a few jobs cannot see the tape library during the monthly tape duplications each month. The jobs that cannot access the tape library are completely random, so a backup policy that cannot see the tape library one month may work fine the next. Restarting the Backup Exec services on the CAS and the two MBES servers has not helped in our case. The only way we used to get the full backup sets duplicated to tape was to manually find the backup sets in the deduped OST and duplicate them to the tape library. Just recently I found a workaround to make the backup policies that have trouble accessing the library find the device and kick off the monthly duplication. However, after a few jobs they return to the same state and I have to repeat the process until all of the backup policies are duplicated to tape successfully. We also get an "ODBC access error. Possible lost connection to database or unsuccessful access to catalog index in the database" error message on the CAS. Apparently this happens when there is a communication dropout between the CAS and one of the MBES servers; I guess this might be causing the monthly duplication issue we are facing. This is already escalated to Veritas support and is still under investigation.
My question is: does anybody know a permanent fix for this issue?
DLM issues - Duplicated backup sets
Hi all, I have a complex issue regarding DLM and duplicated backup sets that I am hoping you may be able to help with. My current setup is as follows:

We have two sites, Production and DR. At each site we have a Data Domain set up as OST storage. The production BE implementation is a CASO server, and DR is the slave. Each night we back up the production servers to the Production OST device and then duplicate to the DR OST device.

The problem we are facing is that once a backup set has been duplicated to the DR OST device, both the production incrementals and DR incrementals become dependent on the last duplicated full backup. The full backups at the Production site display no dependent backup sets, even though incrementals have since been taken. The issue with this is that when full backups expire on the Production OST they are removed regardless of there being any incremental backups still on the Production OST which would be dependent on the full backup. Usually DLM would recognise that there are dependent incrementals and expire the set but not delete it. However, as the dependent sets are being associated with the duplicated copy on the DR OST, this does not happen. This means that in the event of a restore I would have to rely on the DR OST backup sets, which are not local to the site, meaning a slow restore.

I have a case raised with Veritas support, who have replicated this issue in the lab and said this is "by design". I have asked them for the logic behind this, but while I wait I was hoping I might get some insight here, and also any particular workarounds (other than making my incrementals expire before my full set). Thanks in advance!
Netbackup: Can you restrict the number of "read" drives of a duplication job
Hi, I have a NetBackup 7.7.3 environment with a media and master server, and am using a Quantum i500 tape library which has three drives dedicated to the NetBackup environment. All of our backup and duplication jobs go directly to tape (no intervening disk backup). The current problem is that when a duplication job kicks off, the job will occupy all three drives if more than one tape was used in the original backup job. Ultimately, what I'd like to do is restrict the duplication process to using only two drives concurrently, so that if any backup jobs need to run there is a drive available (note we aren't using multiplexing, so I understand that multiple backup jobs would queue up after the first started). Is there a way to restrict the number of drives used by a duplication job? Also, if there are two or more tapes in a backup job, are they read simultaneously and then written to the duplication tape? I looked at creating a separate storage unit for the duplications so that it is only allowed to use two drives, but that setting says two "write" drives, which doesn't indicate whether the rule also applies to drives used for reading. I have a ticket open with Veritas, but the engineer wasn't able to give me a straightforward answer and is researching the issue. Any help is greatly appreciated. Chris
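One indirect workaround worth testing (this is reasoning, not a documented guarantee): "Maximum concurrent write drives" only limits writers, but each tape-to-tape duplication stream needs one read drive plus one write drive, so capping the destination storage unit at 1 concurrent write drive holds a single duplication to two drives in total, leaving the third free for backups. The current limits can be inspected with:

```shell
# Show storage unit attributes, including the max concurrent write drives
# configured for each unit:
bpstulist -U
```

Whether the reads from a multi-tape source run in parallel then depends on how many duplication streams start; with one write drive allowed, they are serialized.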