NetBackup 10.1 - New PaaS Workload Protection
Starting with NetBackup 10, Veritas began expanding support for PaaS workloads. In NetBackup 10.1, Veritas built an extensive framework designed to accelerate adoption of PaaS workload protection. As a testament to that framework, NetBackup 10.1 adds support for the following 13 new PaaS workloads:

Azure Workloads: Azure PostgreSQL, Azure MySQL, Azure Managed SQL, Azure SQL, Azure MariaDB
AWS Workloads: Amazon RDS Postgres, Amazon RDS MySQL, Amazon RDS MariaDB, Amazon Aurora SQL, Amazon Aurora PostgreSQL, Amazon DynamoDB
GCP Workloads: Google MySQL, Google PostgreSQL

The process of protecting and recovering PaaS workloads is streamlined via the NetBackup Web UI. NetBackup Snapshot Manager needs to be configured to facilitate discovery of the supported PaaS workloads, and a media server with an MSDP Universal Share is also required. After NetBackup Snapshot Manager and the cloud provider credentials are configured, the discovery process is triggered automatically or can be started manually. Once discovery completes successfully, the supported workloads are populated on the Web UI PaaS tab.

Add PaaS credentials as required for the workloads to be protected. Credentials can be created in advance and applied to a workload later, or created as new during configuration. In this example, the credential is created in advance using the Credential Management tab.

Add the credential to the PaaS workloads to be protected. Please note the "validation host" is the media server hostname that will be used to communicate with the cloud provider and the PaaS workload. The media server needs to be able to resolve the PaaS services to validate credentials (a quick way to sanity-check this is shown in the sketch at the end of this article).

After that, it is just a matter of creating a Protection Plan as usual. The following two prompts are specific to PaaS workloads:

1) The Protection Plan is for Cloud, the same type used to protect virtual machines in the cloud, for example. Check "Protect PaaS assets only" to invoke the correct workflow and framework for PaaS.

2) In step 4 (Backup options), the storage path is the previously configured Universal Share mount point.

Just complete the Protection Plan workflow and that's it! The Protection Plan will run according to its schedule configuration, and recoveries are fully managed from the NetBackup Web UI as well.

Veritas NetBackup 10.1 now makes it easier to protect PaaS workloads, with a streamlined process guided by the Web UI that leverages the benefits of the NetBackup deduplication service (MSDP) and RBAC (role-based access control) to empower workload owners and administrators as needed.

Here are some good references for more information about PaaS workload protection with NetBackup 10.1:

NetBackup 10.1 Web UI Cloud Administrator's Guide - Protecting PaaS objects
NetBackup 10.1 Web UI Cloud Administrator's Guide - Recovering PaaS assets
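Since credential validation depends on the validation host (the media server) being able to resolve the PaaS service endpoints, a quick name-resolution check run on that host can save a failed validation attempt. The following is a minimal sketch, not a NetBackup utility, and the endpoint names in it are placeholders for illustration only; substitute the actual endpoints of the PaaS instances you plan to protect.

```python
# Minimal DNS sanity check to run on the validation host (media server).
# Endpoint names below are hypothetical placeholders, not real instances.
import socket

ENDPOINTS = [
    "mydb.postgres.database.azure.com",            # hypothetical Azure PostgreSQL endpoint
    "mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # hypothetical Amazon RDS endpoint
]

def can_resolve(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        return bool(socket.getaddrinfo(hostname, None))
    except socket.gaierror:
        return False

for endpoint in ENDPOINTS:
    status = "OK" if can_resolve(endpoint) else "FAILED"
    print(f"{endpoint}: {status}")
```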
How important is it to verify the replication of (already verified) disk-based backup sets to cloud?

My company is a little bit behind on the adoption of cloud technology; there hasn't been a compelling need to change anything. Anyway, I was hoping to at least modify my disk-to-disk-to-tape backup scheme to a disk-to-disk-to-cloud one. Having to change tapes is a pain, especially now that I have to make a special trip into the office to do it. There's no way to budget anything until I have some sense of how much this would all cost. Amazon has cost calculators, but they ask for metrics I have no way of getting at.

So as a test, I created an AWS S3 bucket and ran some test duplicate jobs from existing backup sets to the bucket. The first ~15.5 GB backup set cost $1.59 to duplicate to One Zone-IA. A full 3 TB backup would therefore scale up to approximately $320 every week. Plus, with a 4-week retention period, 12 TB of data storage would be another $120/month. That would be a nonstarter with management.

Then I dug into the Amazon Cost Explorer and found that most of this ($1.38) was data transfer out. That cost appeared because the duplicate job did a verify, and to do that, the Backup Exec server had to read all that data back in from AWS. I verify the duplicate to tape because there's no cost to doing so. If I skip the verify, the weekly cost for AWS backups would drop a lot: a subsequent 70 GB duplicate, without the verify, cost about 25 cents.

What would I lose if I skipped the verify? The backups to local disk would still be verified as part of the job that creates them. There's some error correction built into Ethernet; would that remove the need?

Getting rid of tapes, I could restructure the backups to do incremental-forever, so I wouldn't need to keep 4 full backups in local storage or in the cloud. But I'd like to first estimate costs using the closest structure to existing practice.
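For anyone doing the same back-of-the-envelope math, here is a small sketch that reproduces the extrapolation above. The per-GB rates are assumptions derived from the figures quoted in the post (the two observed duplicate jobs and the ~$0.01/GB-month One Zone-IA storage rate implied by $120/month for 12 TB), not official AWS pricing; check the AWS pricing pages or Cost Explorer before budgeting anything.

```python
# Back-of-the-envelope extrapolation from the two observed duplicate jobs.
# Rates are linear scalings of the observed costs, NOT official AWS pricing.
GB_PER_TB = 1024

# Observed: 15.5 GB duplicate WITH verify cost $1.59 ($1.38 of it data transfer out).
cost_per_gb_with_verify = 1.59 / 15.5
# Observed: 70 GB duplicate WITHOUT verify cost about $0.25.
cost_per_gb_without_verify = 0.25 / 70
# Assumed One Zone-IA storage rate, consistent with $120/month for 12 TB.
storage_per_gb_month = 0.01

weekly_full_gb = 3 * GB_PER_TB        # ~3 TB weekly full
retained_gb = 4 * weekly_full_gb      # 4-week retention

print(f"Weekly duplicate with verify:    ${weekly_full_gb * cost_per_gb_with_verify:,.2f}")
print(f"Weekly duplicate without verify: ${weekly_full_gb * cost_per_gb_without_verify:,.2f}")
print(f"Monthly storage (4 fulls):       ${retained_gb * storage_per_gb_month:,.2f}")
```

Running this prints roughly $315 per week with the verify, about $11 per week without it, and about $123/month for storage, which lines up with the figures in the post.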
BackupExec + S3 [A backup storage read/write error has occurred]

Hi,

We have Backup Exec 20.4 with StoreOnce on premises and use Amazon S3 with Storage Gateway for a virtual tape library (VTL). My jobs create backups onsite on StoreOnce and then push them to the cloud via AWS S3 with a duplicate job. I only get this error from time to time, and I have already checked with my ISP and VPN/network team and opened a ticket with AWS. Can anyone help me out with these failures?

Job ended: Friday, 19 June 2020 at 02:27:11
Completed status: Failed
Final error: 0xe00084c7 - A backup storage read/write error has occurred. If the storage is tape based, this is usually caused by dirty read/write heads in the tape drive. Clean the tape drive, and then try the job again. If the problem persists, try a different tape. You may also need to check for problems with cables, termination, or other hardware issues. If the storage is disk based, check that the storage subsystem is functioning properly. Review any system logs or vendor specific logs associated with the storage to help determine the source of the problem. You may also want to check any vendor specific documentation for troubleshooting recommendations. If the storage is cloud based, check for network connection problems. Run the CloudConnect Optimizer to obtain a value for write connections that is suitable for your environment and use this value to run the failed backup job. Review cloud provider specific documentation to help determine the source of the problem. If the problem still persists, contact the cloud provider for further assistance.
Final error category: Backup Media Errors

Duplicate - VMVCB::\\XXXXX\VCGuestVm\(DC)XXXX(DC)\vm\XXXX
An unknown error occurred on device "HPE StoreOnce:3".
V-79-57344-33991 - A backup storage read/write error has occurred. If the storage is tape based, this is usually caused by dirty read/write heads in the tape drive. Clean the tape drive, and then try the job again. If the problem persists, try a different tape. You may also need to check for problems with cables, termination, or other hardware issues. If the storage is disk based, check that the storage subsystem is functioning properly. Review any system logs or vendor specific logs associated with the storage to help determine the source of the problem. You may also want to check any vendor specific documentation for troubleshooting recommendations. If the storage is cloud based, check for network connection problems. Run the CloudConnect Optimizer to obtain a value for write connections that is suitable for your environment and use this value to run the failed backup job. Review cloud provider specific documentation to help determine the source of the problem. If the problem still persists, contact the cloud provider for further assistance.
V-79-57344-65072 - The connection to target system has been lost. Backup set canceled.

I can't try the CloudConnect Optimizer because it's an iSCSI connection. Any help would be great.

Thank you,
Federico Pieraccini
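Since the error text points at intermittent network problems between the media server, the Storage Gateway, and S3, a simple reachability probe run from the Backup Exec server during the backup window can help narrow down whether the link drops. This is only a rough sketch, not a Veritas or AWS utility; the host names below are placeholders for this environment, and the ports assume the gateway's iSCSI target (3260) and the regional S3 HTTPS endpoint (443).

```python
# Rough connectivity probe: check TCP reachability from the Backup Exec server
# to the Storage Gateway's iSCSI target and to the regional S3 endpoint.
# Host names are placeholders -- substitute your own.
import socket

TARGETS = [
    ("storage-gateway.example.local", 3260),   # hypothetical Storage Gateway iSCSI target
    ("s3.eu-west-1.amazonaws.com", 443),       # regional S3 endpoint used by the gateway
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} NOT reachable ({exc})")
```

Scheduling something like this to run every few minutes while the duplicate job runs would show whether the "connection to target system has been lost" errors line up with actual network drops.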
S3 Cloud Storage setup across AWS accounts

We are currently using NetBackup to back up servers in a single AWS account to S3 using an IAM user with access keys. This works fine for backing up to S3 buckets in that account, but there is another AWS account that the NetBackup IAM user has cross-account access to, and we want our master server to manage its backups as well.

What I'm trying to figure out is how to configure NetBackup cloud storage to use the IAM user in account #1 to access and back up to the S3 buckets in account #2. The IAM user works just fine, and I can access and manage the S3 buckets in account #2 through the CLI from the master server in account #1, but when configuring cloud storage there is no option to use account #2's S3 bucket. It only loads the buckets in account #1, or gives me an Add Volume option, which seems to try to create the bucket in account #1.

Is it possible to point it directly to an existing bucket in a different account when the IAM user has cross-account permissions, or is an IAM user needed for each AWS account being accessed?

Thanks
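Part of what's going on here is an S3 API behavior rather than anything NetBackup-specific: a bucket listing only returns buckets owned by the calling account, so a cross-account bucket never shows up in an enumerated list even though it can be used directly by name when the permissions are in place. The sketch below illustrates that difference with boto3; the profile and bucket names are placeholders, and this is not a statement about how NetBackup's cloud storage wizard works internally.

```python
# Illustrates why the account #2 bucket does not appear when buckets are
# enumerated with account #1 credentials, even though direct access works.
# Profile and bucket names are placeholders.
import boto3

session = boto3.Session(profile_name="netbackup")   # IAM user in account #1
s3 = session.client("s3")

owned = [b["Name"] for b in s3.list_buckets()["Buckets"]]
print("Buckets owned by account #1:", owned)        # the account #2 bucket won't be listed here

# Direct access by bucket name still works with cross-account permissions.
resp = s3.list_objects_v2(Bucket="account2-netbackup-bucket", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"])
```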
AWS Tape Gateway Wiping Tapes

We are currently playing with Backup Exec and the AWS Tape Gateway to back up to a VTL in S3/Glacier. We have an issue where backups are failing, and also, once a week (sometimes every other week), when one backup job begins it first wipes all tapes. Has anyone seen this before?
Video - Migrating your assets to different availability zones of AWS

Using Veritas Resiliency Platform you can migrate your assets to the Amazon Web Services (AWS) cloud. The following video covers how to migrate your assets to different availability zones of AWS. Some of the reasons for migrating to different availability zones are high availability and load balancing, or retaining the VMware cluster configuration of the source data center on the target data center.

The video shows how to group your assets to form a resiliency group and configure this resiliency group for disaster recovery. While configuring, you need to choose the replication technology, select the Replication Gateways that are mapped to the different availability zones, and map the networks to successfully protect the assets or virtual machines.
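Before mapping Replication Gateways to availability zones, it can help to confirm which zones are actually available in the target region. The following is a small standalone helper, not part of Resiliency Platform; the region name is an assumption, so use the region of your recovery data center.

```python
# List the availability zones currently available in the target AWS region.
# Region name is a placeholder -- substitute your recovery region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
zones = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["AvailabilityZones"]

for zone in zones:
    print(zone["ZoneName"], "-", zone["State"])
```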
Evacuating your assets using Veritas Resiliency Platform

The Evacuation Plan feature of Veritas Resiliency Platform lets you evacuate all the assets from your production data center to your recovery data center. Instead of moving your assets to the recovery data center individually, you can save time by adding them to an evacuation plan and then executing the plan with a single click. Resiliency Platform supports evacuation to various recovery data centers, namely Azure, Amazon Web Services, vCloud Director, and even your on-premises data center.

Use the evacuation plan template to define the sequence in which the virtual business services (VBS) should be migrated from the production data center to the recovery data center. Resiliency groups that do not belong to any VBS are appended at the end of the evacuation plan workflow, after the VBSs. If you have not configured any VBSs, the evacuation plan is created using only the resiliency groups; having a VBS is not compulsory.

An evacuation plan has priorities. You can add the VBSs to different priority levels, but the ordering of resiliency groups is done by Resiliency Platform. You can also define the VBS priority within a priority group. When you have a large number of VBSs, up to 5 VBSs within a priority group are migrated in parallel to the recovery data center. Similarly, if you have a large number of resiliency groups, up to 10 resiliency groups are migrated in parallel.

Continue on failures is another option of an evacuation plan. If an asset within a VBS or a resiliency group fails to recover, the evacuation plan skips the asset and continues the process for the remaining assets. You can select this checkbox while creating the evacuation plan; if you choose not to select it, the evacuation process stops until you have fixed the problem.

For a VBS or a resiliency group to be successfully evacuated to the target data center, it should meet the following criteria:
■ A VBS or resiliency group that belongs to the evacuation plan must be configured for disaster recovery.
■ A VBS can contain resiliency groups, some of which are configured for disaster recovery and some using the service objective with data availability as Copy.
■ A resiliency group must belong to only one VBS.

After successfully creating an evacuation plan, you can perform operations such as Rehearse evacuation, Cleanup rehearse evacuation, Evacuate, or Regenerate the plan. It is recommended that you perform a rehearsal to ensure that the assets are evacuated properly. After the rehearsal you can run the cleanup rehearse operation to delete all the temporary objects that were created during rehearsal. If at a later point in time you make changes to the VBSs or the resiliency groups, an alert is raised asking you to regenerate the plan. Refer to About evacuation plan for more information on the situations when an alert is raised.

To create and run an evacuation plan, you need the Manage Evacuation Plans permission. You can also watch the videos here.
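To make the scheduling semantics concrete, here is a toy model of the behavior described above: priority groups run in order, up to 5 VBSs within a priority group migrate in parallel, and the continue-on-failures setting decides whether a failed asset stops the plan or is skipped. This is an illustrative sketch only, not Resiliency Platform code, and the VBS names are made up.

```python
# Toy model of evacuation-plan ordering: priority groups in sequence,
# bounded parallelism within a group, optional continue-on-failures.
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_VBS = 5  # up to 5 VBSs per priority group migrate in parallel

def migrate_vbs(vbs: str) -> bool:
    """Placeholder for migrating one VBS; returns True on success."""
    print(f"migrating {vbs}")
    return True

def run_evacuation(priority_groups: list[list[str]], continue_on_failures: bool) -> None:
    for level, vbs_list in enumerate(priority_groups, start=1):
        print(f"-- priority {level} --")
        with ThreadPoolExecutor(max_workers=MAX_PARALLEL_VBS) as pool:
            results = list(pool.map(migrate_vbs, vbs_list))
        if not all(results) and not continue_on_failures:
            print("failure detected; stopping until the problem is fixed")
            return

# Hypothetical plan: two priority levels, continue past individual failures.
run_evacuation([["erp", "crm"], ["web", "reporting", "test"]], continue_on_failures=True)
```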