BackupExec + S3 [A backup storage read/write error has occurred]
Hi,

We have Backup Exec 20.4 with an on-premises HPE StoreOnce appliance, and we use Amazon S3 with AWS Storage Gateway as a Virtual Tape Library (VTL). My jobs back up onsite to StoreOnce and are then pushed to the cloud via AWS S3 with a duplicate job. I only get this error from time to time, and I have already checked with my ISP and VPN network team and opened a ticket with AWS. Can anyone help me with these failures?

Job ended: Friday, June 19, 2020 at 02:27:11
Completed status: Failed
Final error: 0xe00084c7 - A backup storage read/write error has occurred.

If the storage is tape based, this is usually caused by dirty read/write heads in the tape drive. Clean the tape drive, and then try the job again. If the problem persists, try a different tape. You may also need to check for problems with cables, termination, or other hardware issues. If the storage is disk based, check that the storage subsystem is functioning properly. Review any system logs or vendor-specific logs associated with the storage to help determine the source of the problem. You may also want to check any vendor-specific documentation for troubleshooting recommendations. If the storage is cloud based, check for network connection problems. Run the CloudConnect Optimizer to obtain a value for write connections that is suitable for your environment and use this value to run the failed backup job. Review cloud-provider-specific documentation to help determine the source of the problem. If the problem still persists, contact the cloud provider for further assistance.

Final error category: Backup Media Errors

Duplicate - VMVCB::\\XXXXX\VCGuestVm\(DC)XXXX(DC)\vm\XXXX
An unknown error occurred on device "HPE StoreOnce:3".
V-79-57344-33991 - A backup storage read/write error has occurred. (The same tape/disk/cloud guidance text as above is repeated here.)
V-79-57344-65072 - The connection to target system has been lost. Backup set canceled.

I can't try the CloudConnect Optimizer because it's an iSCSI connection. Any help would be great.

Thank you,
Federico Pieraccini
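Since the error is intermittent and the VTL is reached over iSCSI, one practical first step is to rule out transient network drops between the Backup Exec server and the Storage Gateway. The sketch below is a minimal, hypothetical probe (the gateway address is a placeholder; 3260 is the standard iSCSI port) that repeatedly checks TCP reachability and logs any failures, so that a drop can be correlated with the 0xe00084c7 timestamps in the job log:

```python
import socket
import time
from datetime import datetime

# Placeholder: replace with your Storage Gateway's iSCSI target address.
GATEWAY_HOST = "10.0.0.10"
ISCSI_PORT = 3260          # standard iSCSI port
INTERVAL_SECONDS = 30      # how often to probe
TIMEOUT_SECONDS = 5        # per-connection timeout

def probe(host: str, port: int, timeout: float) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Run alongside the duplicate job; press Ctrl+C to stop.
    while True:
        ok = probe(GATEWAY_HOST, ISCSI_PORT, TIMEOUT_SECONDS)
        stamp = datetime.now().isoformat(timespec="seconds")
        print(f"{stamp} iSCSI {GATEWAY_HOST}:{ISCSI_PORT} "
              f"{'reachable' if ok else 'UNREACHABLE'}")
        time.sleep(INTERVAL_SECONDS)
```

If the probe logs unreachable intervals that line up with the failed duplicate jobs, that points at the VPN or ISP path rather than StoreOnce or the gateway itself.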
NetBackup 10.1 - New PaaS Workload Protection

Starting with NetBackup 10, Veritas began expanding support for PaaS workloads. In NetBackup 10.1, Veritas built an extensive framework designed to accelerate adoption of PaaS workload protection. As a testament to that framework, NetBackup 10.1 adds support for the following 13 new PaaS workloads:

Azure workloads: Azure PostgreSQL, Azure MySQL, Azure Managed SQL, Azure SQL, Azure MariaDB
AWS workloads: Amazon RDS Postgres, Amazon RDS MySQL, Amazon RDS MariaDB, Amazon Aurora SQL, Amazon Aurora PostgreSQL, Amazon DynamoDB
GCP workloads: Google MySQL, Google PostgreSQL

The process of protecting and recovering PaaS workloads is easy and streamlined via the NetBackup Web UI. NetBackup Snapshot Manager must be configured to facilitate discovery of the supported PaaS workloads, and a Media Server with an MSDP Universal Share configuration is also a requirement. After NetBackup Snapshot Manager and the cloud provider credentials are configured, the discovery process is triggered automatically or can be started manually. Once discovery runs successfully, the supported workloads are populated on the Web UI PaaS tab.

Add PaaS credentials as required for the workloads to be protected. Credentials can be created ahead of time and applied later to the workload to be protected, or created new during configuration. In this example, the credential is created ahead of time using the Credential Management tab.

Add the credential to the PaaS workloads to be protected. Note that the "validation host" is the hostname of the Media Server that will communicate with the cloud provider and the PaaS workload. The Media Server needs to be able to resolve the PaaS service endpoints in order to validate credentials.

After that, it is just a matter of creating a Protection Plan as usual. The following two prompts are specific to PaaS workloads:

1) The Protection Plan type is Cloud, the same one used to protect virtual machines in the cloud, for example. Check "Protect PaaS assets only" to invoke the correct workflow and framework for PaaS.

2) On step 4 (Backup options), the storage path is the previously configured Universal Share mount point.

Just complete the Protection Plan workflow and that's it! The Protection Plan will run according to its schedule configuration, and recoveries are fully managed from the NetBackup Web UI as well.

Veritas NetBackup 10.1 now makes it easier to protect PaaS workloads, with a streamlined process guided by the Web UI and leveraging the benefits of the NetBackup deduplication service (MSDP) and RBAC (role-based access control) to empower workload owners and administrators as needed.

Here are some good references for more information about PaaS workload protection with NetBackup 10.1:

NetBackup 10.1 Web UI Cloud Administrator's Guide - Protecting PaaS objects
NetBackup 10.1 Web UI Cloud Administrator's Guide - Recovering PaaS assets
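To illustrate the kind of cloud-side inventory that the discovery step surfaces on the PaaS tab, here is a minimal boto3 sketch (not NetBackup's own code; it assumes AWS credentials are already configured in your environment) that lists RDS instances of the engine types NetBackup 10.1 now supports:

```python
import boto3

# Engines corresponding to the new AWS PaaS workloads listed above.
SUPPORTED_ENGINES = {"postgres", "mysql", "mariadb",
                     "aurora-mysql", "aurora-postgresql"}

def list_supported_rds_instances(region: str = "us-east-1"):
    """Yield (identifier, engine, status) for RDS instances whose
    engine matches one of the supported PaaS workload types."""
    rds = boto3.client("rds", region_name=region)
    paginator = rds.get_paginator("describe_db_instances")
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            if db["Engine"] in SUPPORTED_ENGINES:
                yield (db["DBInstanceIdentifier"], db["Engine"],
                       db["DBInstanceStatus"])

if __name__ == "__main__":
    for identifier, engine, status in list_supported_rds_instances():
        print(f"{identifier}: engine={engine}, status={status}")
```

In the product itself, this discovery is performed for you by NetBackup Snapshot Manager once the provider credentials are in place; the sketch just shows what is being enumerated under the hood on the AWS side.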
Backup Exec 16 - AWS S3 or Tape Gateway

I currently have the AWS Tape Gateway set up to do weekly backups for our servers. It's working well, apart from the tapes not auto-ejecting after the job is over (I will see if this issue still happens with Backup Exec 16; I just upgraded yesterday). I noticed Backup Exec 16 has the ability to add AWS S3 as a storage device. Can someone please explain the advantages/disadvantages of S3 versus Tape Gateway? Thanks
Evacuating your assets using Veritas Resiliency Platform

The Evacuation Plan feature of Veritas Resiliency Platform lets you evacuate all the assets from your production data center to your recovery data center. Instead of individually moving your assets to the recovery data center, you can save time by adding these assets to the evacuation plan and then executing the plan with a single-click operation. Resiliency Platform supports evacuation to various recovery data centers, namely Azure, Amazon Web Services, vCloud Director, and even your on-premises data center.

Use the evacuation plan template to define the sequence in which the virtual business services (VBS) should be migrated from the production data center to the recovery data center. Resiliency groups that do not belong to any VBS are appended at the end of the evacuation plan workflow, after the VBSs. If you have not configured any VBSs, the evacuation plan is created using only the resiliency groups. Having a VBS is not compulsory.

An evacuation plan has priorities. You can add the VBSs to different priority levels, but the ordering of resiliency groups is done by Resiliency Platform. You can also define the VBS priority within a priority group. When you have a large number of VBSs, up to 5 VBSs within a priority group are migrated in parallel to the recovery data center. Similarly, if you have a large number of resiliency groups, up to 10 resiliency groups are migrated in parallel (a toy sketch of these scheduling semantics follows at the end of this post).

Continue on failures is another evacuation plan capability. If an asset within a VBS or a resiliency group fails to recover, the evacuation plan skips the asset and continues the process for the remaining assets. You have the option to select this checkbox while creating the evacuation plan. If you choose not to select the checkbox, the evacuation process stops until you have fixed the problem.

For a VBS or a resiliency group to be successfully evacuated to the target data center, it should meet the following criteria:
■ A VBS or resiliency group that belongs to the evacuation plan must be configured for disaster recovery.
■ A VBS can contain resiliency groups, some of which are configured for disaster recovery and some using the service objective with data availability as Copy.
■ A resiliency group must belong to only one VBS.

After successfully creating an evacuation plan, you can perform operations such as Rehearse evacuation, Cleanup rehearse evacuation, Evacuate, or Regenerate the plan. It is recommended that you perform a rehearsal to ensure that the assets are evacuated properly. After a rehearsal you can run the cleanup rehearse operation to delete all the temporary objects that were created during the rehearsal. If at a later point in time you make changes to the VBSs or the resiliency groups, an alert is raised asking you to regenerate the plan. Refer to About evacuation plan for more information on the situations in which an alert is raised.

To create and run an evacuation plan, you need the Manage Evacuation Plans permission. You can also watch the videos here.
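To make the scheduling semantics above concrete, here is a small illustrative sketch. It is purely a toy model, not Resiliency Platform's API; names like `evacuate_vbs` are hypothetical. It drains priority groups in order, migrates up to 5 VBSs per priority group in parallel, and uses a continue-on-failures flag to decide whether a failure stops the plan:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_PARALLEL_VBS = 5  # per the behavior described above

def evacuate_vbs(vbs_name: str) -> None:
    """Hypothetical stand-in for migrating one VBS to the recovery site."""
    print(f"migrating {vbs_name} ...")

def run_evacuation_plan(priority_groups: list[list[str]],
                        continue_on_failures: bool) -> None:
    """Drain priority groups in order; within a group, migrate up to
    MAX_PARALLEL_VBS VBSs in parallel."""
    for level, group in enumerate(priority_groups, start=1):
        print(f"-- priority {level} --")
        with ThreadPoolExecutor(max_workers=MAX_PARALLEL_VBS) as pool:
            futures = {pool.submit(evacuate_vbs, vbs): vbs for vbs in group}
            for future in as_completed(futures):
                vbs = futures[future]
                try:
                    future.result()
                except Exception as err:
                    if continue_on_failures:
                        print(f"{vbs} failed ({err}); skipping, continuing")
                    else:
                        raise  # stop the plan until the problem is fixed

if __name__ == "__main__":
    run_evacuation_plan(
        priority_groups=[["payroll-vbs", "crm-vbs"], ["test-vbs"]],
        continue_on_failures=True,
    )
```

The same pattern, with a pool of 10 workers, models how resiliency groups without a VBS are processed after the prioritized VBSs.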
Video - Migrating your assets to different availability zones of AWS

Using Veritas Resiliency Platform you can migrate your assets to the Amazon Web Services (AWS) cloud. The following video covers how to migrate your assets to different availability zones of AWS. Some of the reasons for migrating to different availability zones are high availability and load balancing, or retaining the VMware cluster configuration of the source data center on the target data center. The video shows how to group your assets to form a resiliency group and configure this resiliency group for disaster recovery. While configuring, you need to choose the replication technology, select the Replication Gateways that are mapped to different availability zones, and map the networks to successfully protect the assets or virtual machines.
Backup Exec 15 to AWS Tapes

Hey all, I've set up Backup Exec to run our 2 weekly jobs to AWS tapes. It's been working fine for the last few weeks, with each backup going to its own tape. This past weekend, both jobs went to the same tape. I would like to set it up so that each job uses an individual tape, so that if I need to recover in the future I don't have to pull down the entire week's data, and I save on download costs. Can you please explain how I can set this up? Thanks
Backup Exec: EC2 and On-Premises CAS/MBES configuration

We are working with a client who wants to move all of his on-premises infrastructure to the AWS cloud. There is a limitation with his WAN bandwidth, which is 1 Gbps, and the total data size is 30 TB. Since the data size is large and the WAN bandwidth is limited, they are going with a transportable disk, which will take the initial full backup to the cloud; after that, only incremental backups will be sent to the cloud.

My query is: has anyone done a configuration where there is a CAS and an MBES on EC2 virtual machines, and another MBES on premises? Can we have some pointers on how to achieve this setup? How will the communication between the CAS Backup Exec server on EC2 and the on-premises MBES server happen? What VPN configurations need to be done to get this configuration working? I believe that if the communication between all the servers is working, then this setup should work and the customer can achieve what he is looking for. Any pointers regarding this will help a lot. Thanks in advance!
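Since the whole design hinges on the CAS and MBES servers being mutually reachable across the VPN, a quick way to validate the tunnel is to probe the Backup Exec service ports in both directions. The sketch below is a minimal, assumption-laden example: the host names are placeholders, and the port list (e.g., TCP 10000 for NDMP control) is an assumption that should be replaced with the full port list documented for your Backup Exec version.

```python
import socket

# Placeholder endpoints: the EC2-hosted CAS and the on-premises MBES.
HOSTS = ["cas.ec2.example.internal", "mbes.onprem.example.internal"]

# Assumed ports; verify against the official Backup Exec port list
# for your version before relying on this check.
PORTS = [10000, 6101, 3527]

def check(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Run this from each site so both directions of the VPN are exercised.
    for host in HOSTS:
        for port in PORTS:
            state = "open" if check(host, port) else "blocked/unreachable"
            print(f"{host}:{port} -> {state}")
```

If every required port is reachable in both directions (and name resolution works across the tunnel), the CAS-to-MBES communication should behave the same as it would on a flat LAN, just subject to the WAN latency.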