NetBackup incorrectly assigning MediaIDs at first, but corrects when deleted & re-scanned
NetBackup v10.2.0.1 - We're importing & cataloging (P1/P2) thousands of tapes, and we have a script that runs an inventory (vmupdate) when the tapes are loaded, for multiple media agents. The MediaID generation rules are applied, but for a few random tapes the initial characters are not added, e.g. NS is not added for tape NS1234 and it is instead recorded as 1234L6 (the default inventory naming), which fails the import jobs because they cannot find the tape. The same tapes [recorded as 1234L6], when deleted from the console and re-scanned manually from the console, get the correct barcode NS1234. It is random & intermittent (but happens at least once every 2 days), so I was hoping to get more details from the logs when it happens. It would be good to know which logs are required apart from bptm on the media agents, and whether there are any other robot/inventory-specific logs with a higher debug level? Thank you.
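For what it's worth, this is roughly how I plan to raise the volume manager debug level on the affected media agents before the next occurrence (a minimal sketch assuming default UNIX install paths; the robot type/number are placeholders):

# Create the legacy volume manager debug log directories (picked up on daemon restart)
mkdir -p /usr/openv/volmgr/debug/daemon
mkdir -p /usr/openv/volmgr/debug/reqlib
mkdir -p /usr/openv/volmgr/debug/robots
mkdir -p /usr/openv/volmgr/debug/tpcommand

# Add VERBOSE to vm.conf so ltid/vmd and the robotic daemons log at full detail
echo "VERBOSE" >> /usr/openv/volmgr/vm.conf

# Restart the device daemons so the settings take effect
/usr/openv/volmgr/bin/stopltid
/usr/openv/volmgr/bin/ltid -v

# Re-run the inventory the same way the script does, with barcode rules applied
/usr/openv/volmgr/bin/vmupdate -rt tld -rn 0 -use_barcode_rules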
Two different credentials for vCenter

Hi,

Using NetBackup 10, I back up my virtual hosts by connecting to the vSphere vCenter infrastructure. As we all know, it is not safe to use a user with full (write) permissions to create backups, while a write-enabled user is mandatory for restoring backups. The problem is that every time I need to do a restore, I have to change the credentials added to NetBackup to another user. Isn't it possible to add two different credentials for one vCenter and choose which one to use for backup and restore operations?

Regards,
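In the meantime, the swap can at least be scripted rather than done in the console. A rough sketch (the tpconfig flag syntax here is from memory, so verify it with tpconfig -help; the server name, user names, and passwords are placeholders):

# Point NetBackup at the write-enabled user just before a restore
/usr/openv/volmgr/bin/tpconfig -update -virtual_machine vcenter01.example.com \
    -vm_user_id nbu-restore -password 'RestorePa55w0rd'

# ... perform the restore ...

# Swap back to the least-privilege backup user afterwards
/usr/openv/volmgr/bin/tpconfig -update -virtual_machine vcenter01.example.com \
    -vm_user_id nbu-backup -password 'BackupPa55w0rd'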
[Snapshot Manager] Inconsistency between Cloud and Storage sections

Hello! Looking for help, please. My situation is the following: I was faced with an environment with an old CloudPoint server that failed on upgrading, resulting in the loss of the images and configuration. Upon fresh installation of a new Snapshot Manager 10.3 VM, I promptly configured the Cloud section of the primary server's Web UI and added the provider configuration (Azure). All the required Azure permissions have been granted to the Snapshot Manager. Protection plans were created and protected assets selected. The problem is, even though the jobs are completing with status 0, I am unable to find any recovery points for the assets.

Also, upon investigation, I found in the Storage -> Snapshot Manager section that the primary server is configured as a snapshot server, with the old version (10.0). This was done in the old configuration and I have no idea why it is still present there. Trying to connect does not work (error code 25), and neither does retrieving version information. Trying to add the new Snapshot Manager results in an "Entity already exists" error message. Could this storage configuration be related? If so, any suggestions as to how to fix it? (I am also unable to delete the old CloudPoint from the Web UI, but it is disabled.)

Primary server version is 10.3
New Snapshot Manager is 10.3
Old CloudPoint was 10.0, already decommissioned.

Thank you!
Oracle to NetBackup Copilot

Hello, I'm trying to implement Copilot for Oracle. I've set up the SLPs and registered the test instance, but NBU is unable to perform a backup, failing with the error: Unable to perform a manual backup with policy "test". The policy does not have a list of files to back up. The setup: Oracle Linux 7.7, NBU 10.2, StoreOnce 5260 (4.3.6), Catalyst 4.4.0. In short, I'm trying to implement the NetBackup Accelerator for faster backups. If there is another way, please point me to the relevant guide. Thank you in advance.
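For anyone hitting the same message, a quick sketch of how to check what the policy actually contains (policy name "test" as above; paths assume a UNIX primary server):

# List the policy's backup selections; for an Oracle policy this is the
# "list of files" the error complains about, so an empty result explains it
/usr/openv/netbackup/bin/admincmd/bpplinclude test -l

# Dump the full policy definition and check that the clients/instances are selected
/usr/openv/netbackup/bin/admincmd/bppllist test -U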
How to identify the status of a backup using the NetBackup CLI?

I launch the 'bpbackup' command from my program and it just gives the return code; there is no way to know the corresponding job ID. Before I launch the next backup, I want to check the status of the previous backup and make some decisions. The 'bpimagelist' command can be used with 'keyword' as a filter so that I can find my backup job uniquely; the problem is, it only lists successful backups. The 'bpdbjobs' command can be used to list all jobs (successful, failed, in progress), but there is no way to uniquely find the backup job because it does not support filtering the result by the 'keyword' attribute. I also want to filter the status of an individual client, which I am still not able to do using bpdbjobs. Any help is highly appreciated.

Deb
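A sketch of two approaches that may help, assuming UNIX paths and placeholder policy/client names (the comma-separated field positions in bpdbjobs output are from memory and can vary by release, so verify them on your system):

# Option 1: -w makes bpbackup wait for the job to finish; the exit code is
# then the NetBackup status code (0 = success), so no job ID is needed
/usr/openv/netbackup/bin/bpbackup -w -p MyPolicy -s FullSched -h client01 \
    -k myUniqueKeyword /data
echo "backup finished with status $?"

# Option 2: list recent jobs and filter by the client column with awk;
# here field 1 = job ID, 3 = state, 4 = status, 7 = client
/usr/openv/netbackup/bin/admincmd/bpdbjobs -report -all_columns |
    awk -F, '$7 == "client01" {print $1, $3, $4}'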
MS SQL VADP Application State Capture (ASC) Backups are Fully Recoverable in NetBackup 10.4

Starting with NetBackup 10.4, VMware VADP Microsoft SQL full backups can be configured for complete application restoration. Before this release, users had only copy-only full backups combined with differential and incremental transaction log backups to restore their MS SQL applications. Copy-only full backups perform ASC but do not allow for complete application recoveries, and transaction logs cannot be recovered on top of that style of backup the way they can with the new-style full backups. Now users have the option to configure their VMware MS SQL data protection two different ways, as shown in the policy and protection plan figures below:

- The new Microsoft Transact-SQL (T-SQL) full backups configured for full application recoveries. Subsequent differential backup benchmarks are reset so differential backups don't grow to become the size of fulls
- The traditional copy-only full backups with incremental and transaction log backups in separate schedules

NetBackup 10.4 onward makes this possible because MS SQL full backups are now cataloged as assets in the NetBackup Web UI. Users can now select MS SQL recovery points in the Web UI from either type of backup displayed in the MS SQL assets, as shown in the figure below. So restores work the same way they always have, just with the option of T-SQL full recovery points. Incremental and transaction log backups continue to use the NetBackup client agent.

There are a few important notes to consider when using the new fully recoverable MS SQL full backups:

- The new full backup takes advantage of the T-SQL snapshot feature introduced in MS SQL 2022, and does so in compliance with that feature's limitations: no earlier MS SQL versions are supported, and no more than sixty-four (64) databases can be included in a snapshot. Microsoft will most likely increase the sixty-four-database limit in the future
- The new full backup is an "opt in" feature, not the default full. The traditional copy-only full method is offered with the option to use the T-SQL snapshot as an alternative. Neither the "Truncate logs" nor the "Enable T-SQL snapshots" option is enabled by default
- Selecting the T-SQL option in a VMware (VADP) policy automatically sets snapshot handling to "Stop the backup if any snapshots exist." This is a Microsoft requirement to ensure that the database is not left in a suspended or inoperable state during the backup
- All NetBackup servers involved in T-SQL full backups or restores must be at version 10.4 or higher
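To make this concrete, one quick way to confirm which style of full a database actually received is to query the backup history in msdb via sqlcmd. A minimal sketch (the server name is a placeholder; the columns are standard SQL Server, though exactly how each NetBackup backup type is recorded should be verified in your environment):

# is_copy_only flags the traditional ASC copy-only fulls; is_snapshot flags
# snapshot-based fulls such as the new T-SQL snapshot backups
sqlcmd -S sqlvm01 -Q "SELECT TOP 10 database_name, type, is_copy_only, is_snapshot, backup_finish_date FROM msdb.dbo.backupset ORDER BY backup_finish_date DESC"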
NetBackup 10.4 Adds K8s Malware Scanning Support and New Malware Scanning Features

Previous versions of NetBackup offered great backup data malware scanning options. But version 10.4 adds a litany of great new malware scanning features you'll want to have. Just upgrade and they're yours today:

- Scanning of K8s unstructured namespace data at the file system level
- A new malware scanning job configuration validation tool. Pre-test job configurations with a test/dry run of the malware scanning job. A configuration validator screen lets you run a quick scan of a few files specified in the job to see if the configuration will work the way you want it to
- VMware single file restore (SFR) can now skip infected files, not just flag them
- Separate malware scan jobs now appear in Activity Monitor instead of being part of other jobs. This allows scan jobs to be managed separately with Activity Monitor controls
- Scan host failover. Teamed scan hosts can take over scanning jobs for each other if a scan job fails for any reason on the first host. A notification of failover is posted
- Additional fields are added in scan results and view details for those running security operations center (SOC) as a service (SOCaaS). SOC is a cloud-based subscription threat detection service. Host- and policy-specific messages are added so it's easier to identify which system(s) contain malware threats
- Past scanning results can now be deleted
- An automatic Ansible script for silent malware scanning configuration will be available on GitHub once the script is past Open Source Review Board (OSRB) approval
NetBackup 10.4 Standardizes Security Event Pushes to Common External Platforms Using the OCSF Schema

Standardizing on one digital disc format (the Philips standard) for audio and then video was the key to the great leap forward for the audio-video industry back in the 1980s. It saved the audio vendors an ongoing battle between disc formats, which would have meant massive headaches for recording artists and disc manufacturers. It also paved the way for the digital video disc (DVD) technology that followed.

NetBackup 10.4+ takes the same approach to pushing audit security message events to external products and platforms. Using the OCSF message schema/format, 10.4+ can update external security and monitoring products and platforms in one standardized format. The result is far fewer headaches and much better security and reporting compliance for your IT staff.

The OCSF is a big deal for the following reasons:

- Broad security partner integration potential: the OCSF project was initiated by a partnership between Splunk and AWS, which built on the ICD Schema developed at Symantec (now part of Broadcom)
- There are now 15 additional members, including some of the biggest names in technology and cybersecurity: Cloudflare, CrowdStrike, DTEX, IBM Security, IronNet, JupiterOne, Okta, Palo Alto Networks, Rapid7, Salesforce, Securonix, Sumo Logic, Tanium, Trend Micro, and ZScaler
- OCSF allows customers to avoid vendor lock-in by using a widely supported format

As a result, NetBackup 10.4+ users get the following benefits:

- Three log push format options, with only one usable at a time: generic NetBackup, OCSF, and Microsoft Sentinel Advanced Security Information Model (ASIM)
- NetBackup and OCSF pull formats are also available, again with only one usable at a time
- Security solutions that utilize the OCSF schema produce data in a consistent format while accurately capturing the full meaning and relevance of audit security message event information
- OCSF helps security teams simplify the ingestion and exchange of data between security tools, producing faster and more accurate threat detection and investigation
- Security vendors and other data producers adopt and extend the OCSF schema for their specific domains

Make your event monitoring and maintenance a big win by upgrading to NetBackup 10.4+ today.
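To make the schema concrete, here is an illustrative OCSF-style audit event, written to a file from a shell (hand-written for illustration, not captured from NetBackup; the field names follow the public OCSF core schema, but every value here is invented):

cat <<'EOF' > sample_ocsf_event.json
{
  "class_uid": 6003,
  "category_uid": 6,
  "activity_id": 1,
  "severity_id": 1,
  "time": 1717430400000,
  "metadata": {
    "version": "1.1.0",
    "product": { "name": "NetBackup", "vendor_name": "Veritas" }
  },
  "actor": { "user": { "name": "nbadmin" } },
  "message": "Audit event pushed in OCSF format"
}
EOF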
When it comes to SECRETS, how secure is your application?

Introduction

Enterprises running various heterogeneous workloads, ranging from on-prem applications to applications spread across various cloud service providers, often struggle to manage credentials securely. We've seen a lot of technical debates about how to find a perfect balance between security and flexibility, but there's no de facto standard that fits all. We've seen (sometimes radically) different opinions on "the right way" to manage secrets: "You should always use a vault", "You should encrypt creds", and the list is never-ending! To cope with these challenges, Veritas introduces Alta Recovery Vault short-lived token-based authentication. Your data's security is paramount to us.

Prior to short-lived tokens, Veritas provided the ability to connect to Alta Recovery Vault with standard credentials (access and secret keys), as shown below:

Diagram 1: Creating a credential with the storage account and traditional credentials (access key and secret) given by Veritas

Disadvantages of using Standard Credentials in Recovery Vault

These standard credentials are long-lived in nature. If compromised, they give attackers ample time to exploit the application. If they are stolen, it is a nightmare to discern which operations are legitimate. Thus, the only fail-safe choice is to cumbersomely rotate the keys and redistribute them to customers. This is an often overlooked action and adds extra pain for DevOps. (P.S.: It's not as happy as it seems in the adjacent picture.)

Solution

To help alleviate some of the above risks, Veritas has enhanced security by introducing short-lived token-based authentication. Beginning with NetBackup 10.2 for Azure and NetBackup 10.4 for AWS (GCP support is a work in progress), users have cloud storage accounts and a short-lived refresh token to connect securely to the Alta Recovery Vault storage. These new secrets are added as credentials in NetBackup Credential Management (as shown in diagrams 2a and 2b). Once the initial connection is established, the Veritas Credential Management API is solely responsible for renewing, refreshing, accessing, and sharing the access signature. Isn't it amazing? No more pain rotating keys and redistributing them! (I see the cyber security team looking happier and overjoyed.)

Diagram 2a: Creating a credential with the storage account and refresh token given by Veritas for Azure

Diagram 2b: Creating a credential with the refresh token given by Veritas for AWS

Solution Benefits

- Enhanced security: Short-lived tokens have a limited lifespan, reducing the exposure window for potential attacks. If a token is compromised, its validity period is short, minimizing the risk of unauthorized access. Regular token expiration forces users to re-authenticate, ensuring better security.
- Mitigating token abuse: Tokens are often used to authorize access to resources. By making tokens short-lived, we limit the time an attacker has to abuse a stolen token, minimizing the risk window significantly.
- Better management of permissions: When permissions change (e.g., user roles or access levels), short-lived tokens automatically reflect the updates upon renewal. Long-lived tokens may retain outdated permissions, leading to security risks.
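Conceptually, the flow is the familiar OAuth-style refresh grant: the long-lived secret never leaves the credential store, and callers only ever hold a short-lived access token. A minimal sketch of that pattern (the endpoint and parameter names are generic OAuth 2.0 conventions, not the actual Recovery Vault API):

# Exchange a refresh token for a short-lived access token
curl -s -X POST https://auth.example.com/oauth2/token \
    -d grant_type=refresh_token \
    -d refresh_token="$REFRESH_TOKEN" \
    -d client_id="$CLIENT_ID"
# The response carries an access_token that expires in minutes, not months;
# when it lapses, the credential management layer simply repeats this call.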
Conclusion

The introduction of Alta Recovery Vault short-lived token authentication adds another layer of ransomware protection, making applications more secure than ever before. At Veritas, your data's security is paramount to us, and this blog is just one simple example of the challenges Veritas short-lived tokens can help solve. Further, Veritas is always looking for and working on better ways to secure your data. Here are some additional helpful links:

Veritas Alta Recovery Vault Technical White Paper
Veritas Alta Recovery Vault Security Guide
Veritas Alta Recovery Vault Azure ExpressRoute Overview Guide
Veritas Alta™ Recovery Vault AWS Direct Connect Overview Guide

Please feel free to give feedback, and we can answer any queries! We appreciate everyone's time. :)