How to identify the status of a backup using the NetBackup CLI?
I launch the 'bpbackup' command in my program and it only gives me a return code; there is no way to know the corresponding job ID. Before I launch the next backup, I want to check the status of the previous backup and make a decision based on it. The 'bpimagelist' command can be used with 'keyword' as a filter so that I can find my backup job uniquely; the problem is that it only lists successful backups. The 'bpdbjobs' command can list all jobs (successful, failed, in progress), but there is no way to uniquely find the backup job because it does not support filtering the results on the 'keyword' attribute. I also want to filter the status of an individual client, which I am still not able to do with bpdbjobs. Any help is highly appreciated.

Deb
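Not an authoritative answer, but two approaches may help. First, `bpbackup -w` makes the command wait for the backup to finish, so its exit code reflects the job's final status. Second, although bpdbjobs has no keyword filter, you can filter its parsable output on client and policy yourself. A minimal Python sketch, assuming `-most_columns` emits comma-delimited records beginning jobid, jobtype, state, status, policy, schedule, client (verify this column order against your version's bpdbjobs documentation before relying on it):

```python
import subprocess

# ASSUMED field positions in `bpdbjobs -report -most_columns` output
# (comma-delimited). Check your NetBackup version's bpdbjobs man page.
JOBID, JOBTYPE, STATE, STATUS, POLICY, SCHEDULE, CLIENT = range(7)

def parse_jobs(raw):
    """Split comma-delimited bpdbjobs output into field lists."""
    return [line.split(",") for line in raw.strip().splitlines() if line.strip()]

def latest_job_status(raw, client, policy):
    """Return (jobid, state, status) of the newest job matching client+policy.

    Job IDs increase monotonically, so the highest jobid is the newest.
    Returns None if no job matches.
    """
    matches = [f for f in parse_jobs(raw)
               if len(f) > CLIENT and f[CLIENT] == client and f[POLICY] == policy]
    if not matches:
        return None
    newest = max(matches, key=lambda f: int(f[JOBID]))
    return newest[JOBID], newest[STATE], newest[STATUS]

if __name__ == "__main__":
    out = subprocess.run(["bpdbjobs", "-report", "-most_columns"],
                         capture_output=True, text=True, check=True).stdout
    print(latest_job_status(out, "myclient", "MyPolicy"))
```

This avoids needing the job ID at launch time: after bpbackup returns, the highest-numbered matching job is the one just submitted (assuming no concurrent backups of the same client/policy pair).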
Reporting on user fails to bulk add users in Data Insight

When testing the addition in a lab where I would have both User1@production and User1@Lab, the user fails to add because it cannot be resolved:

2023-10-26 14:29:39 WARNING: #{421} [ServerUtils.getUserByLogonName] Unable to resolve user
java.lang.Exception: Unable to resolve user
    at com.symc.matrix.ui.server.ServerUtils.getUserByLogonName(ServerUtils.java:462)
    at com.symc.matrix.ui.server.ServerUtils.getUserByLogonName(ServerUtils.java:449)
    at com.symc.matrix.report.util.ReportsUtils.getUsers(ReportsUtils.java:6559)
    at com.symc.matrix.report.util.ReportsUtils.setCSVValues(ReportsUtils.java:7912)
    at com.symc.matrix.ui.server.servlets.FileUploadServlet.processReportsCsv(FileUploadServlet.java:776)
    at com.symc.matrix.ui.server.servlets.FileUploadServlet.processUpload(FileUploadServlet.java:264)
    at com.symc.matrix.ui.server.servlets.FileUploadServlet.doPost(FileUploadServlet.java:173)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:681)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
    at com.symc.matrix.ui.server.filters.LoginFilter.doFilter(LoginFilter.java:110)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
    at com.symc.matrix.ui.server.filters.CacheControlFilter.doFilter(CacheControlFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
    at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:359)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:889)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1735)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
2023-10-26 14:29:39 WARNING: #{421} [ReportsUtils.getUsers] Not able to resolve user/group

However, if I type the same string into the filter, both users are returned. Is there a flag to add all found users? Is this a bug, or an assumption that a large enterprise does not have more than one user with the same name?
Is there a known workaround to add the users to the report in bulk?

Pix
SQL-Query listing ArchiveName, ArchivedItems, ArchivedItemsSize for users in certain OU

Hi,

Since I'm not a genius with SQL queries, I'm asking for your help. I need a query that lists ArchiveName, ArchivedItems and ArchivedItemsSize for users in a certain OU. There are 3 Vault databases that this query has to take into account. I have no idea how to put this query together, please help.

Sani B.
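A sketch of one way to approach this, with heavy caveats: the table and column names below (EnterpriseVaultDirectory.dbo.Archive, <vault store DB>.dbo.ArchivePoint, VaultEntryId, ArchivePointId, and the size unit) are assumptions based on the commonly described Enterprise Vault schema and must be verified against your EV version's database documentation before use. The three vault store database names are hypothetical placeholders. The script just assembles one UNION ALL query across the three vault store databases, filtered to a list of archive names:

```python
# Hypothetical vault store database names -- replace with your own three.
VAULT_STORE_DBS = ["EVVSVaultStore1", "EVVSVaultStore2", "EVVSVaultStore3"]

# ASSUMED schema: Archive lives in the directory DB, item counts/sizes in
# each vault store DB's ArchivePoint table, joined on the archive's entry ID.
QUERY_TEMPLATE = """\
SELECT A.ArchiveName,
       AP.ArchivedItems,
       AP.ArchivedItemsSize AS ArchivedItemsSizeKB  -- verify the unit
FROM EnterpriseVaultDirectory.dbo.Archive AS A
JOIN {db}.dbo.ArchivePoint AS AP
  ON A.VaultEntryId = AP.ArchivePointId
WHERE A.ArchiveName IN ({users})"""

def build_query(dbs, user_archives):
    """Build one UNION ALL query covering every vault store database."""
    users = ", ".join("'{}'".format(u.replace("'", "''")) for u in user_archives)
    return "\nUNION ALL\n".join(
        QUERY_TEMPLATE.format(db=db, users=users) for db in dbs)

if __name__ == "__main__":
    print(build_query(VAULT_STORE_DBS, ["Alice Smith", "Bob Jones"]))
```

For the OU restriction, one pragmatic route is to export the user/archive names for that OU from Active Directory first (e.g. with a dsquery/PowerShell export) and feed that list into `build_query`, rather than trying to resolve the OU inside the EV databases.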
Low Scratch Script

Hello, I am looking for a script or a way to monitor our scratch pool and send an alert (email) when the number of remaining available tapes reaches a threshold of 8. We are running NB 7.6.0.1. I am new to the NetBackup world, so any advice or direction would be appreciated. Thanks
Monitoring unfinished replication progress

Hello

Has anybody found a way to monitor the progress of an unfinished replication? I know there are some reports in OpsCenter, but they all seem to be based on completed images. What I am looking for: kilobytes transferred, number of fragments transferred, or something similar to tell me how far along the in-progress image is. I have looked at the nbstlutil/nbstl list options, the support site, Connect and of course a Google search, without luck.

Regards
Michael
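One partial workaround, offered tentatively: the Activity Monitor (and therefore `bpdbjobs`) shows a running kilobytes counter for active jobs, including replications, so you can poll a specific job ID and compute throughput yourself. The index of the kbytes column in `-all_columns` output is an assumption here and varies between versions; check your bpdbjobs documentation before trusting it:

```python
import subprocess
import time

KBYTES_FIELD = 14   # ASSUMPTION: position of the kbytes column in
                    # `bpdbjobs -report -all_columns`; verify on your version.

def job_kbytes(raw):
    """Extract the kbytes-transferred counter from one job's record."""
    fields = raw.strip().split(",")
    return int(fields[KBYTES_FIELD] or 0)

def poll_progress(jobid, interval=60, polls=5):
    """Print KB transferred and rough throughput for an in-flight job."""
    prev = None
    for _ in range(polls):
        out = subprocess.run(
            ["bpdbjobs", "-jobid", str(jobid), "-report", "-all_columns"],
            capture_output=True, text=True, check=True).stdout
        kb = job_kbytes(out)
        if prev is not None:
            print(f"job {jobid}: {kb} KB total, "
                  f"{(kb - prev) / interval:.0f} KB/s")
        prev = kb
        time.sleep(interval)
```

This only tells you how much has moved, not how much remains; to estimate percent complete you would still need the source image's size (e.g. from bpimagelist on the source copy) to divide by.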
Opscenter/netbackup - SQL query to have a weekly report: Policy with start Time and End Time

Hello,

The goal is to have a Gantt diagram (made with Excel) for the last week that shows, for each policy, the start time and the end time. I found some reports in OpsCenter with this kind of information, but only per client. I would like a SQL query that gives me this information on a per-policy basis. Example:

Policy name    Start Time          End Time (or duration)
Policy 1       27.07.2015 22:00    27.07.2015 23:18
Policy 2       27.07.2015 22:30    27.07.2015 23:01
Policy 1       28.07.2015 22:00    28.07.2015 23:19
...

Do you have any idea of the best/easiest way to do that?

Regards
Jay
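A sketch of one possible query, grounded in the domain_JobArchive columns (policyName, startTime, endTime, isValid) that OpsCenter's own report SQL exposes: group jobs by policy and by calendar day, taking MIN(startTime) and MAX(endTime) as the policy's window for that day. The UTCBigIntToNomTime() helper for converting OpsCenter's bigint UTC timestamps is an assumption; verify its name in your OpsCenter version before running this:

```python
# Per-policy, per-day job window query for the OpsCenter database,
# intended for the "Run SQL Query" screen. UTCBigIntToNomTime() is the
# assumed timestamp-conversion helper; verify it exists on your version.
POLICY_WINDOW_SQL = """\
SELECT policyName,
       CAST(UTCBigIntToNomTime(MIN(startTime)) AS DATE) AS runDate,
       UTCBigIntToNomTime(MIN(startTime)) AS windowStart,
       UTCBigIntToNomTime(MAX(endTime))   AS windowEnd
FROM domain_JobArchive
WHERE isValid = '1'
  AND DATEDIFF(day, UTCBigIntToNomTime(startTime), GETDATE()) <= 7
GROUP BY policyName, CAST(UTCBigIntToNomTime(startTime) AS DATE)
ORDER BY policyName, runDate
"""

if __name__ == "__main__":
    print(POLICY_WINDOW_SQL)
```

Export the result to CSV from OpsCenter and the four columns map directly onto an Excel Gantt (policy, day, bar start, bar end). Note MIN/MAX per day glosses over policies that run more than once a day; split the grouping further if that matters to you.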
NetBackup "Status of Backups" report generating for 6 days only

Hi All,

We have a master server on the Solaris platform (version 6.5.6). I know we need to upgrade it to 7.x. :) We were trying to generate a month's backup status report, and it shows only 6 days of records; but when we run the "Client Backups" report, it generates all the expected data. Can someone please help me understand why this is and how to fix it? The "Status of Backups" report used to work fine.

Thanks.

Best Regards,
Dan
APTARE Consumption Reporting for NBU Accelerator-enabled Policies

We use NBU 8.1.2 and 8.2 as well as APTARE reporting, and we also use quite a few Accelerator-enabled policies. With NBU Accelerator, all incremental jobs are reported as a full: https://www.veritas.com/support/en_US/article.100038344

Thus, when APTARE picks up this data for consumption, it completely skews the numbers, since it treats every incremental as a full image size. And while APTARE certainly has the "accelerator bytes sent" field that I can add to a job summary, the canned consumption report does not take this into consideration, nor can you really modify it to use accelerator bytes sent as the consumption. So it counts every incremental job's size as a full, and the consumption is way overblown because of the use of Accelerator.

What SHOULD exist is a global option in NBU itself (under master server properties) to report Accelerator incrementals with the actual incremental size / bytes transferred (NBU would just convert the backup record to the real incremental size), so that APTARE or any other reporting tool would not have to care whether a policy is Accelerator-enabled or not. Sadly this does not exist, so you have to make the reporting tool do the conversion for you.

So, for any other APTARE users out there, how have you dealt with consumption for Accelerator-enabled policies? Anyone have a report they wouldn't mind sharing?
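The correction described above is simple arithmetic once the data is out of APTARE (for example in a custom report or a CSV export of the job summary): for Accelerator jobs, count "accelerator bytes sent" instead of the inflated full-image size. A sketch with hypothetical field names, since the actual export column names depend on how you build the APTARE report:

```python
def corrected_consumption(jobs):
    """Sum per-job consumption, preferring accelerator bytes sent.

    Each job is a dict with 'backed_up_bytes' (the inflated "full" size
    NetBackup reports for Accelerator jobs, per article 100038344) and,
    for Accelerator jobs, 'accel_bytes_sent'. Field names here are
    hypothetical placeholders for whatever your export calls them.
    """
    total = 0
    for job in jobs:
        sent = job.get("accel_bytes_sent")
        total += sent if sent is not None else job["backed_up_bytes"]
    return total
```

This does not fix the canned consumption report itself, which (as noted above) cannot be modified to use the accelerator-bytes-sent field; it only shows the adjustment to apply downstream.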
Opscenter Custom Report to exclude few error codes

Hi Team,

I already have an OpsCenter custom report with the columns below. Now I need to exclude a few error codes from this report; the very first tab of the edit-report section only lets me filter once on the status code parameter. Is it possible to achieve this?

Client Name
Policy Name
Schedule Name
Job Start Time
Job End Time
Job Duration
Job File Count
Post Deduplication Size(MB)
Status Code
Job Status

When I checked the SQL behind this report via "Show Report Query", it gave me a query which I then tried to run from the "Run SQL Query" section of report creation, but it throws the error:

OpsCenter-10881:Failed to execute specified SQL-- SQL Anywhere Error -143: Column 'connection_level_dst_global_query_for_bigint' not found.

I tried this because I thought I could exclude error codes through the query below, which is the edited original query:

select TOP 100 START AT 1
  domain_JobArchive.clientId as "domain_JobArchive.clientId",
  NOM_DateDiff(domain_JobArchive.startTime, domain_JobArchive.endTime) as "JobDuration",
  domain_JobArchive.filesBackedUp as "domain_JobArchive.filesBackedUp",
  domain_MasterServer.friendlyName as "domain_MasterServer.friendlyName",
  domain_JobArchive.policyName as "domain_JobArchive.policyName",
  domain_JobArchive.scheduleName as "domain_JobArchive.scheduleName",
  adjust_timestamp_dst(domain_JobArchive.startTime) as "domain_JobArchive.startTime",
  adjust_timestamp_dst(domain_JobArchive.endTime) as "domain_JobArchive.endTime",
  domain_JobArchive.bytesWritten as "domain_JobArchive.bytesWritten",
  (case when domain_JobArchive.state = 106 then 3
        when domain_JobArchive.state = 3 and domain_JobArchive.statusCode = 0 then 0
        when domain_JobArchive.state = 3 and domain_JobArchive.statusCode = 1 then 1
        when domain_JobArchive.state = 3 and domain_JobArchive.statusCode > 1 then 2
        else -1 end) as "jobExitStatus",
  domain_JobArchive.statusCode as "domain_JobArchive.statusCode"
from domain_JobArchive, domain_MasterServer
where domain_MasterServer.id = domain_JobArchive.masterServerId
  and ((domain_JobArchive.isValid = '1'))
  AND (((domain_JobArchive.endTime BETWEEN '136645617124190000' AND '136672437124190000'))
  AND ((domain_JobArchive.scheduleId NOT IN (-2147483633)))
  AND ((domain_JobArchive.statusCode NOT IN (174,288,196,50,811)))
  AND (((case when domain_JobArchive.state = 106 then 3
              when domain_JobArchive.state = 3 and domain_JobArchive.statusCode = 0 then 0
              when domain_JobArchive.state = 3 and domain_JobArchive.statusCode = 1 then 1
              when domain_JobArchive.state = 3 and domain_JobArchive.statusCode > 1 then 2
              else -1 end) NOT IN (-1)))
  AND (((domain_JobArchive.masterServerId IN (61,22437)))))
ORDER BY "domain_JobArchive.startTime" ASC

Please suggest.

Thanks
Sid
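The -143 error likely comes from helpers such as adjust_timestamp_dst() and NOM_DateDiff(), which appear to exist only in the report engine's own session, not in the raw "Run SQL Query" context (this is an inference from the error, not a documented fact). A sketch of a stripped-down version of the same query that keeps the statusCode NOT IN exclusion but avoids those helpers; UTCBigIntToNomTime() is assumed to be available for the bigint timestamp conversion, so verify it on your OpsCenter version:

```python
EXCLUDED_STATUS_CODES = (174, 288, 196, 50, 811)   # from the report above

# Raw-SQL variant of the report query without the report-engine-only
# helpers. bytesWritten is selected as stored; convert units per your schema.
RAW_REPORT_SQL = """\
SELECT j.clientId,
       m.friendlyName,
       j.policyName,
       j.scheduleName,
       UTCBigIntToNomTime(j.startTime) AS startTime,
       UTCBigIntToNomTime(j.endTime)   AS endTime,
       DATEDIFF(second, UTCBigIntToNomTime(j.startTime),
                        UTCBigIntToNomTime(j.endTime)) AS durationSec,
       j.filesBackedUp,
       j.bytesWritten,
       j.statusCode
FROM domain_JobArchive j
JOIN domain_MasterServer m ON m.id = j.masterServerId
WHERE j.isValid = '1'
  AND j.statusCode NOT IN ({codes})
"""

def build_report_sql(excluded=EXCLUDED_STATUS_CODES):
    """Render the query with the chosen status-code exclusion list."""
    return RAW_REPORT_SQL.format(codes=", ".join(str(c) for c in excluded))

if __name__ == "__main__":
    print(build_report_sql())
```

Re-add the endTime BETWEEN range and the masterServerId IN filter from the original query as needed; they were dropped here only to keep the sketch short.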
SMTP Email notifications on a non-standard SMTP port

My ISP recently blocked all SMTP traffic on port 25, so I no longer receive my BESR reports via e-mail. I need to change Backup Exec System Recovery 2010 to use SMTP port 587, but this is not an option in the software. I searched the registry to try to find a way to adjust it, but came up empty-handed. I tried using an SMTP server of mail.server.com:587, but that didn't work. I tried changing the default SMTP port for Windows (C:\Windows\System32\Drivers\etc\services) and that didn't work...

Symantec - HELP!
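Since BESR 2010 offers no port setting, the usual workaround is to keep BESR talking to port 25 on a local relay that smarthosts outbound mail to the ISP's submission port (587 with STARTTLS), or to send the report yourself with a script. For the scripted route, a minimal sketch of sending via port 587 with Python's standard smtplib; the host, account and password are hypothetical placeholders for your ISP's details:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical account details -- substitute your ISP's submission server.
SMTP_HOST, SMTP_PORT = "mail.example.com", 587
USER, PASSWORD = "me@example.com", "app-password"

def build_report_mail(subject, body, sender=USER, to=USER):
    """Assemble a plain-text report message."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, sender, to
    msg.set_content(body)
    return msg

def send_via_submission(msg):
    """Send on port 587 (the 'submission' port) using STARTTLS + auth."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as s:
        s.starttls()
        s.login(USER, PASSWORD)
        s.send_message(msg)

if __name__ == "__main__":
    send_via_submission(build_report_mail("BESR report", "Backup completed."))
```

The relay approach is less intrusive if you want BESR's own report content unchanged: the built-in IIS SMTP service on Windows can be configured with a smart host on port 587, and BESR never knows the difference.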