[Snapshot Manager] Inconsistency between Cloud and Storage sections
Hello! Looking for help, please. My situation is the following: I inherited an environment with an old CloudPoint server that failed during an upgrade, resulting in the loss of its images and configuration. After a fresh installation of a new Snapshot Manager 10.3 VM, I configured the Cloud section of the primary server's Web UI and added the provider configuration (Azure). All required permissions have been granted to the Snapshot Manager in Azure, protection plans have been created, and protected assets selected.

The problem: even though the jobs complete with status 0, I am unable to find any recovery points for the assets. On investigation, I found under Storage -> Snapshot Manager that the primary server is configured as a snapshot server, with the old version (10.0). This was done in the old configuration and I have no idea why it is still present. Trying to connect fails with error code 25, as does retrieving version information. Trying to add the new Snapshot Manager results in an "Entity already exists" error. Could this storage configuration be related? If so, any suggestions on how to fix it? (I am also unable to delete the old CloudPoint entry from the Web UI, but it is disabled.)

Primary server version is 10.3. The new Snapshot Manager is 10.3. The old CloudPoint was 10.0, already decommissioned. Thank you!
Snapshot Manager - Someone use it on production?
Hello, I would like to know whether users have tested and installed the Snapshot Manager solution in the cloud (Azure or other) and what their feedback on it is. Did you encounter any problems with the configuration? Do you manage to back up many workloads (VMs)? What does your configuration look like, roughly? This topic is mainly about discussing the product. Thanks for your feedback!
migrate On-Premise Solution to Hybrid Environment
Hello everyone, I am currently working on a migration project for a NetBackup solution (software/hardware) in which 95% of the hardware is obsolete and out of support, with licensing by capacity. I am studying the option of migrating to a hybrid solution that supports both on-premises and cloud services and is flexible and scalable. I am looking into the Veritas Resiliency Platform, but it is unclear to me whether it is the best option for my scenario. Could you provide me with information on requirements, architecture, etc., or suggest another Veritas solution that would allow me to perform this migration? Kind regards, Jose Sumoza
NetBackup 10.1 - New PaaS Workload Protection
Starting with NetBackup 10, Veritas began expanding its support for PaaS workloads. In NetBackup 10.1, Veritas built an extensive framework designed to accelerate adoption of PaaS workload protection. As a testament to that framework, NetBackup 10.1 adds support for the following 13 new PaaS workloads:

Azure workloads: Azure PostgreSQL, Azure MySQL, Azure Managed SQL, Azure SQL, Azure MariaDB
AWS workloads: Amazon RDS Postgres, Amazon RDS MySQL, Amazon RDS MariaDB, Amazon Aurora SQL, Amazon Aurora PostgreSQL, Amazon DynamoDB
GCP workloads: Google MySQL, Google PostgreSQL

Protecting and recovering PaaS workloads is easy and streamlined via the NetBackup Web UI. NetBackup Snapshot Manager must be configured to facilitate discovery of the supported PaaS workloads, and a media server with an MSDP Universal Share configuration is also required. After NetBackup Snapshot Manager and the cloud provider credentials are configured, the discovery process is triggered automatically or can be started manually. Once discovery runs successfully, the supported workloads are populated on the Web UI PaaS tab.

Add PaaS credentials as required for the workloads to be protected. Credentials can be created ahead of time (for example, in the Credential Management tab) and applied later, or created as new during configuration.

Add the credential to the PaaS workloads to be protected. Note that the "validation host" is the media server hostname that will be used to communicate with the cloud provider and the PaaS workload. The media server needs to be able to resolve the PaaS service endpoints in order to validate credentials.

After that, it is just a matter of creating a protection plan as usual. Two prompts are specific to PaaS workloads:

1) The protection plan type is Cloud, the same type used to protect virtual machines in the cloud, for example. Check "Protect PaaS assets only" to invoke the correct workflow and framework for PaaS.
2) In step 4 (Backup options), the storage path is the previously configured Universal Share mount point.

Complete the protection plan workflow and that's it! The protection plan runs according to its schedule configuration, and recoveries are fully managed from the NetBackup Web UI as well. NetBackup 10.1 makes it easier to protect PaaS workloads, with a streamlined process guided by the Web UI that leverages the benefits of NetBackup's deduplication service (MSDP) and RBAC (role-based access control) to empower workload owners and administrators as needed.

Here are some good references for more information about PaaS workload protection with NetBackup 10.1:
NetBackup 10.1 Web UI Cloud Administrator's Guide - Protecting PaaS objects
NetBackup 10.1 Web UI Cloud Administrator's Guide - Recovering PaaS assets
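Since credential validation depends on the validation host (the media server) being able to resolve the PaaS service endpoint, a quick DNS pre-check from that host can save a failed validation attempt. A minimal sketch, using a hypothetical Azure Database for PostgreSQL hostname as the example:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if this host can resolve the given endpoint name."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

# Hypothetical endpoint -- substitute the actual server name of the
# PaaS instance you intend to protect.
host = "mydb.postgres.database.azure.com"
print(host, "->", "resolvable" if can_resolve(host) else "NOT resolvable")
```

Run this on the media server named as the validation host; if the name does not resolve there, fix DNS (or proxy/firewall egress) before retrying credential validation in the Web UI.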
Veritas System Recovery adding cloud storage issue
Hi all, I would like to add S3-compatible cloud storage as my backup destination. First, I created a cloud instance as per https://www.veritas.com/support/en_US/doc/38007533-136670227-0/v132418412-136670227. Since I don't have a Certificate Authority (CA)-signed certificate, I have to use SSL: 0 (disabled). However, when I tried to add the cloud backup destination, the packet received from the cloud library contained this message: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." Unfortunately, the configuration cannot be changed on the cloud library (NetApp) side. Do you think there is any workaround in such a situation? Thanks for your opinions.
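That error means the endpoint only accepts requests signed with AWS Signature Version 4; the client is evidently falling back to an older signing scheme. Whether System Recovery can be forced to SigV4 in this configuration is version-dependent, but for reference, this is the signing-key derivation that SigV4 mandates (per the public AWS specification), sketched with placeholder credentials:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key via chained HMAC-SHA256."""
    def h(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
    k_date = h(("AWS4" + secret_key).encode("utf-8"), date)  # e.g. "20240115"
    k_region = h(k_date, region)                             # e.g. "us-east-1"
    k_service = h(k_region, service)                         # "s3" for object storage
    return h(k_service, "aws4_request")                      # fixed terminator string

# Placeholder secret -- never a real credential.
key = sigv4_signing_key("EXAMPLE_SECRET", "20240115", "us-east-1", "s3")
print(len(key))  # HMAC-SHA256 digest: 32 bytes
```

The request's string-to-sign is then HMAC'd with this key; a V2-signed request carries a completely different Authorization header, which is why the NetApp endpoint rejects it outright.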
Cloud Upload Throttle not Working
I have a Cloud Catalyst BYOD media server on NetBackup 8.3. We upgraded from 8.2 to fix cloud upload issues that were consistently failing. After the upgrade to 8.3, all upload issues went away and uploads work fine, but the server is no longer honoring the network throttle and runs at full utilization. How can we fix this? The sampling interval is 5 seconds, and the server has been rebooted.
Cloud Storage Server configuration
Hello, I have a basic default install of NBU on a single server, with a backup job running successfully that writes to local disk. I am trying to add an S3 object store as an archive tier or secondary location for backup files, but I can't get the Cloud Storage Server wizard to complete successfully. I am trying to use RStor.io cloud storage (endpoint: s3.demo.rstorcloud.io), which is not currently listed in the drop-down list of vendors/providers. If anyone is familiar with configuring and using this feature, I would very much appreciate your help. Thank you!
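Before fighting the wizard, it can help to confirm that the endpoint actually behaves like an S3-compatible service. One common heuristic: S3 endpoints answer unauthorized requests with an XML document whose root element is <Error> and which contains a <Code> such as AccessDenied. A small sketch of that check (the live probe against the poster's endpoint is left commented out, since it needs network access):

```python
import xml.etree.ElementTree as ET

def looks_like_s3_error(xml_text: str) -> bool:
    """Heuristic: does this response body look like an S3 error document?"""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    return root.tag == "Error" and root.find("Code") is not None

# Sample of the kind of body an S3-compatible endpoint typically returns:
sample = "<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>"
print(looks_like_s3_error(sample))

# Live probe (uncomment to run against the real endpoint):
# import urllib.request
# with urllib.request.urlopen("https://s3.demo.rstorcloud.io/") as resp:
#     print(looks_like_s3_error(resp.read().decode()))
```

If the endpoint returns HTML or a non-S3 error format instead, the wizard failure is likely an endpoint/protocol mismatch rather than a NetBackup configuration problem.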
Creating a Azure Germany Storage account for Backup Exec - anyone successful?!
Has anyone used Azure Germany cloud storage successfully and could you provide some simple steps on how to create a working storage account? I tried all types of storage accounts (V2, Blob, Classic). I used the storage account name with key1 or key2. I created a firewall rule to allow all traffic from the backup server to the internet, with no firewall interference or traffic inspection. BE 20.2 will not connect to Azure Germany. Thanks, Jürgen
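One common pitfall with sovereign clouds is the endpoint DNS suffix: Azure Germany used core.cloudapi.de rather than the global core.windows.net, so a product that assumes the global suffix never reaches the account at all. Whether BE 20.2 exposes a suffix override is a separate question, but the difference itself can be sketched as:

```python
def blob_endpoint(account: str, cloud: str = "global") -> str:
    """Build the blob-service endpoint URL for an Azure storage account.
    Sovereign clouds use different DNS suffixes than global Azure."""
    suffixes = {
        "global": "core.windows.net",
        "germany": "core.cloudapi.de",      # Azure Germany (Microsoft Cloud Deutschland)
        "china": "core.chinacloudapi.cn",
        "usgov": "core.usgovcloudapi.net",
    }
    return f"https://{account}.blob.{suffixes[cloud]}"

# Hypothetical account name for illustration:
print(blob_endpoint("mybackupacct", "germany"))
```

A quick way to confirm the theory is to check (e.g. with a packet capture or proxy log) which hostname the backup server actually tries to contact: if it is account.blob.core.windows.net, the Germany account can never answer.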
How to Configure Synthetic Backups to Cloud with Full and Incremental Backup to Disk?
I currently run full backups to disk at the end of the work week, with incremental backups to disk at the end of each work day. The full backups are retained on disk for a month; the incrementals for a week. What I am hoping to do is also duplicate these backups to the cloud. However, a regular duplicate of the full to the cloud would be both time- and cost-prohibitive. To minimize the amount of data transferred, synthetic backups look like the way to go for cloud backup. So what I am hoping to do is:

Full weekly backups to disk, retained for one month on disk.
Incremental daily backups to disk, retained for one week on disk.
Synthetic full backup to the cloud, retained perpetually.
Duplicate of the incremental daily backups to the cloud, retained for one week.

How would I configure this in Backup Exec? A synthetic backup is defined in Backup Exec as a baseline full, incrementals, and a synthetic full backup. So when defining the backup job: do I configure the full and incrementals to go to disk and then configure the synthetic full to go directly to the cloud? Or do I configure the full, incrementals, and synthetic full to go to disk and then add a separate duplicate stage to send the synthetic full to the cloud? Or do I have to configure it a different way? Best regards.
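Whichever job layout turns out to be correct in Backup Exec, the bandwidth motivation behind the synthetic approach can be checked with a rough back-of-envelope calculation. The numbers below (2 TB full, 2% daily change, 5 incrementals per week over 4 weeks) are assumptions for illustration only:

```python
def monthly_upload_gb(full_gb: float, daily_change: float,
                      weeks: int = 4, incrementals_per_week: int = 5) -> dict:
    """Compare monthly cloud upload volume for two strategies:
    duplicating every weekly full vs. uploading one baseline full and
    only incrementals thereafter (synthesizing fulls from them)."""
    inc_gb = full_gb * daily_change                    # size of one incremental
    incrementals = weeks * incrementals_per_week * inc_gb
    return {
        "duplicate_fulls": weeks * full_gb + incrementals,  # every full re-uploaded
        "synthetic_fulls": full_gb + incrementals,          # baseline once, then deltas
    }

print(monthly_upload_gb(full_gb=2000, daily_change=0.02))
```

With these assumed figures the duplicate-everything approach uploads over three times as much data per month, which is exactly the cost the poster is trying to avoid; real savings depend on the actual change rate and on whether deduplication is in play.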