Veritas System Recovery adding cloud storage issue
Hi all, I would like to add S3-compatible cloud storage as my backup destination. First, I created a cloud instance as described in https://www.veritas.com/support/en_US/doc/38007533-136670227-0/v132418412-136670227. Since I don't have a Certificate Authority (CA)-signed certificate, I have to use SSL: 0 (disabled). However, when I try to add the cloud backup destination, the packet received from the cloud library contains this message: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." Unfortunately, it is not possible to change this configuration on the cloud library (NetApp). Do you think there is any workaround in this situation? Thanks for your opinions.
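The error means the NetApp endpoint only accepts requests signed with AWS Signature Version 4. For background on what "AWS4-HMAC-SHA256" refers to, the SigV4 signing key is derived by chaining HMAC-SHA256 over the date, region, and service. A minimal sketch, using the worked example inputs published in the AWS SigV4 documentation (not real credentials):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key by chaining HMAC-SHA256 over
    date, region, service, and the fixed string "aws4_request"."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Inputs below are the published example from the AWS SigV4 docs.
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20120215", "us-east-1", "iam")
print(key.hex())
```

Any client that sends this style of signature satisfies the endpoint; clients using the older Signature Version 2 trigger exactly the error quoted above.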
BackupExec + S3 [A backup storage read/write error has occurred]
Hi, we have Backup Exec 20.4 with StoreOnce on premises and use Amazon S3 with Storage Gateway for a Virtual Tape Library (VTL). My jobs back up on site to StoreOnce and are then pushed to the cloud via AWS S3 with a duplicate job. I only get this error from time to time, and I have already checked with my ISP and VPN network team and opened a ticket with AWS. Can anyone help me with these failures?

Job ended: Friday, 19 June 2020 at 02:27:11
Completed status: Failed
Final error: 0xe00084c7 - A backup storage read/write error has occurred.
If the storage is tape based, this is usually caused by dirty read/write heads in the tape drive. Clean the tape drive, and then try the job again. If the problem persists, try a different tape. You may also need to check for problems with cables, termination, or other hardware issues. If the storage is disk based, check that the storage subsystem is functioning properly. Review any system logs or vendor-specific logs associated with the storage to help determine the source of the problem. If the storage is cloud based, check for network connection problems. Run the CloudConnect Optimizer to obtain a value for write connections that is suitable for your environment and use this value to run the failed backup job. Review cloud provider specific documentation to help determine the source of the problem. If the problem still persists, contact the cloud provider for further assistance.
Final error category: Backup Media Errors
Duplicate - VMVCB::\\XXXXX\VCGuestVm\(DC)XXXX(DC)\vm\XXXX
An unknown error occurred on device "HPE StoreOnce:3".
V-79-57344-33991 - A backup storage read/write error has occurred.
V-79-57344-65072 - The connection to the target system has been lost. Backup set canceled.

I can't try the CloudConnect Optimizer because it's an iSCSI connection. Any help would be great. Thank you, Federico Pieraccini
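Since the failure is intermittent and the duplicate path runs over iSCSI to the Storage Gateway VTL, one low-effort check is to probe the relevant ports during the duplicate job window and log any drops. A minimal sketch; the hostnames are placeholders for your environment, and 3260 is the standard iSCSI port:

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # covers DNS failure, refusal, and timeout
        return False

# Placeholder addresses -- substitute your Storage Gateway appliance.
targets = [
    ("storagegateway.example.local", 3260),   # iSCSI path to the VTL
    ("storagegateway.example.local", 443),    # gateway HTTPS path to AWS
]
for host, port in targets:
    print(host, port, "reachable" if tcp_probe(host, port) else "UNREACHABLE")
```

Running this in a loop (e.g. once a minute from Task Scheduler) during the job window shows whether the V-79-57344-65072 "connection lost" errors line up with transient network drops.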
VERITAS NETBACKUP WITH RED HAT CEPH STORAGE
We are using NetBackup with Ceph storage, where Ceph is presented to the media servers as an MSDP pool. Since MSDP is limited to one pool per media server with a sizing limit of 96 TB, we are planning to use Ceph as an S3 backup target for the new media servers we have to configure. I want to know: if we configure Ceph as S3 cloud storage, can NetBackup perform deduplication? Will all dedup happen on the client using Accelerator, or can we have target-side dedup by NetBackup as well? Or will NetBackup send all the backup data it receives to Ceph without any dedup, leaving all data reduction to Ceph? Can Ceph as cloud storage perform dedup on its own? With Ceph 4 we will have erasure coding, so the total storage used will be less, but in terms of dedup, what advantages can I get by using Ceph as cloud storage instead of as an MSDP pool? If anyone is using NetBackup with Ceph, could you please share your approach?
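For context on what target-side dedup buys you: MSDP-style deduplication fingerprints data segments and stores each unique segment only once. A toy sketch of that idea below, using fixed-size segments and SHA-256 fingerprints; real MSDP uses variable-length segmentation and its own fingerprint database, so this only illustrates the principle:

```python
import hashlib

def dedup_ratio(data: bytes, segment_size: int = 128 * 1024) -> float:
    """Split data into fixed-size segments; report total bytes vs. bytes
    stored when each unique segment (by SHA-256 fingerprint) is kept once."""
    seen = set()
    total = stored = 0
    for i in range(0, len(data), segment_size):
        seg = data[i:i + segment_size]
        total += len(seg)
        fingerprint = hashlib.sha256(seg).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            stored += len(seg)
    return total / stored if stored else 1.0

# Four repeats of the same two 128 KiB blocks dedupe 4:1.
sample = (b"A" * 131072 + b"B" * 131072) * 4
print(dedup_ratio(sample))  # prints 4.0
```

A plain S3 target receives whatever the backup stream contains; the fingerprint-and-skip step only happens if something in the path (client-side dedup, a dedup pool, or the storage itself) implements it.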
Backup Exec 16 - AWS S3 or Tape Gateway
I currently have the AWS Tape Gateway set up to do weekly backups for our servers. It's working well, except for the auto-eject after the job is over (we'll see if this issue still happens with Backup Exec 16; we just upgraded yesterday). I noticed Backup Exec 16 has the ability to add AWS S3 as storage. Can someone please explain the advantages and disadvantages of S3 versus Tape Gateway? Thanks
Netbackup integration with S3 ...
Hi All, I am evaluating a scenario with NetBackup 8.0. I have integrated NetBackup with AWS S3 as a backup target. In S3, I applied a lifecycle rule that moves data to Glacier after 1 day (NetBackup writes data to S3; after 1 day, AWS automatically transitions the data to the Glacier storage class). Now, when restoring from NetBackup, I am unable to get that data, since it is stored in Glacier and NetBackup cannot talk to Glacier directly. I just want to verify whether anyone has tested such a scenario.
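For reference, a lifecycle rule of the kind described looks like the sketch below (AWS S3 lifecycle configuration in JSON; the rule ID is made up). Once an object has transitioned to GLACIER, GET requests against it fail until it is explicitly restored (for example with `aws s3api restore-object`), which is exactly why NetBackup's restore breaks after day one:

```json
{
  "Rules": [
    {
      "ID": "nbu-images-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 1, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

NetBackup issues plain S3 GETs during restore and has no step that waits for a Glacier retrieval to complete, so any image transitioned out of a directly readable storage class becomes unreadable to it until restored back.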
Configuring cloud storage using CLI
Has anyone successfully configured S3 cloud storage (from start to finish) using the CLI? The csconfig documentation (from the reference guide) is not that helpful. It's unclear how the access key and secret are entered, how the disk pool is configured (will nbdevconfig work with cloud storage?), and so on. Almost every document references the GUI, but there are statements indicating that the csconfig command can be used, with no decent examples.
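Not a csconfig walkthrough, but before feeding credentials into any CLI configuration it can save time to confirm the endpoint itself answers. A minimal sketch (the URL is a placeholder): an unauthenticated request to a live S3 endpoint typically comes back with an HTTP error such as 403 AccessDenied, which still proves DNS, TCP, and HTTP are all working before you start debugging the storage-server setup:

```python
import urllib.error
import urllib.request

def s3_endpoint_alive(url: str, timeout: float = 5.0) -> bool:
    """True if the endpoint answered at the HTTP level (even with an error
    status); False on DNS, TCP, or TLS failure."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True                    # anonymous access allowed
    except urllib.error.HTTPError:
        return True                    # server responded (e.g. 403 AccessDenied)
    except (urllib.error.URLError, OSError):
        return False

print(s3_endpoint_alive("https://s3.example.internal"))  # placeholder endpoint
```

If this returns False, no amount of csconfig or nbdevconfig syntax will help until the network path is fixed.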
BE 15 FP5 backup to a private S3
I'm currently trying to test a configuration that backs up to our private S3, which is compatible with Amazon S3, but unfortunately I am unable to complete it. Here is the list of my actions:
1) Configure the cloud instance using BEMCLI according to the BEMCLI *.chm help. It finishes successfully.
2) Configure cloud storage using the BE GUI. All configuration wizard steps complete successfully; the wizard even retrieves the list of S3 buckets from the S3 service and finishes the configuration with no error.
3) At the end of the configuration, BE offers to restart its services to finalize the cloud storage.
4) After the restart, the device is still shown as offline. There is an alert at service start time: "Unable to connect to the OpenStorage device. Ensure that the network is properly configured between the device and the Backup Exec server. For more information, see V-275-1017 on the Symantec Knowledge Base." However, I was unable to find any useful reference in the KB to solve this particular issue.
The main question is to understand what is missing, either from the configuration steps or from the S3 service itself. Does anyone know how this can be debugged?
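One thing worth ruling out with a private S3 target is the transport itself, independently of Backup Exec. A hedged sketch (the hostname is a placeholder) that exercises DNS, TCP, and the TLS handshake in one go; certificate verification is disabled here because private S3 deployments often use self-signed certificates:

```python
import socket
import ssl

def tls_handshake(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Connect and complete a TLS handshake; return the negotiated TLS
    version. Raises OSError on DNS/TCP failure or handshake failure."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # private S3 often has a self-signed cert
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() or "unknown"

for host in ("s3.example.internal",):   # placeholder private S3 endpoint
    try:
        print(host, "TLS OK:", tls_handshake(host))
    except OSError as exc:
        print(host, "failed:", exc)
```

If this succeeds from the Backup Exec server but the device still shows offline after a service restart, the problem is more likely in the instance/bucket configuration than in basic connectivity.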
Configuring Amazon S3 in China (Blog)
(This isn't a question, just documentation of how we fixed an issue I couldn't find an answer to online. I have no doubt all of this is in the Cloud Admin Guide; I just decided to take the long route and skip that part.)

My company has a handful of remote offices around the world, with two of them in different sites in China. In each of those China offices we run a 7.7.3 NetBackup master server (Windows 2008) to back up local data to disk, and we use SLP to replicate those images across the WAN to the other office for offsite protection. This has worked successfully for 3 years, but we've slowly been adding more data to these offices and can no longer do local backups AND cross-site replication without filling up the disk targets. We're using "borrowed storage" from another server to protect our offsite copies at the moment while we decide where to go next.

One of our new initiative tests was to replicate to S3, but Amazon AWS has a separate China environment, amazonaws.com.cn, distinct from the amazonaws.com that the rest of the world can access. Our cloud admin set up a new S3 instance in China without an issue, but the default Amazon cloud instance in NetBackup 7.7.3 is not customizable and does not include the China Amazon region. The problem is that the cloudprovider.xml file (C:\Program Files\Veritas\NetBackup\db\cloud) is locked, and there are no commands available to add the China region to the Amazon plugin. I was able to get a new cloudprovider.xml from NetBackup support, but only because the tech working my case happened to have one on his desktop from helping another customer. The actual solution to this, and to any other provider that isn't a default cloud option, is to contact the vendor and request the plugin directly from them. (You may need help combining customized instances with the new .xml, but I didn't experience that, so I don't know the procedure.) Also, my device mappings were about a year old, so I had to update that file as well.
(https://sort.veritas.com/checklist/install/nbu_device_mapping)

After replacing cloudprovider.xml and upgrading the device mappings, I was able to see the amazonaws.com.cn instance and (having already opened the firewall) connected on the very first attempt.

Firewall: source to s3-cn-north-1.amazonaws.com.cn, bidirectional TCP ports 5637, 80, and 443.
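A small sanity check after swapping in a replacement cloudprovider.xml is to confirm the China endpoint string is actually present in the file. The sketch below searches every element's text and attribute values rather than assuming anything about Veritas's XML schema (which isn't documented here); the path is the Windows default from the post:

```python
import os
import xml.etree.ElementTree as ET

def xml_mentions(path: str, needle: str) -> bool:
    """True if `needle` appears in any element text or attribute value."""
    root = ET.parse(path).getroot()
    for elem in root.iter():
        if needle in (elem.text or ""):
            return True
        if any(needle in value for value in elem.attrib.values()):
            return True
    return False

CLOUDPROVIDER = r"C:\Program Files\Veritas\NetBackup\db\cloud\cloudprovider.xml"
if os.path.exists(CLOUDPROVIDER):  # only meaningful on the master server itself
    print("China endpoint present:", xml_mentions(CLOUDPROVIDER, "amazonaws.com.cn"))
```

If the endpoint string is missing, the file you were given doesn't actually include the China region and you're back to requesting the correct plugin from the vendor.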