[Snapshot Manager] Inconsistency between Cloud and Storage sections
Hello! Looking for help, please. My situation is the following: I inherited an environment with an old CloudPoint server that failed during an upgrade, resulting in the loss of its images and configuration. After a fresh installation of a new Snapshot Manager 10.3 VM, I promptly configured the Cloud section of the primary server's web UI and added the provider configuration (Azure). All the required Azure permissions have been granted to the Snapshot Manager, protection plans have been created, and the assets to protect have been selected.

The problem is that even though the jobs complete with status 0, I am unable to find any recovery points for the assets. Also, upon investigation, I found under Storage -> Snapshot Manager that the primary server is configured as a snapshot server with the old version (10.0). This was done in the old configuration and I have no idea why it is still present. Trying to connect does not work (error code 25), and neither does retrieving version information. Trying to add the new Snapshot Manager results in an "Entity already exists" error message.

Could this storage configuration be related? If so, any suggestions on how to fix it? (I am also unable to delete the old CloudPoint from the web UI, but it is disabled.) The primary server is 10.3, the new Snapshot Manager is 10.3, and the old CloudPoint was 10.0, already decommissioned. Thank you!
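A diagnostic starting point, offered only as a hedged sketch: it assumes the primary server runs on Windows (on Linux the same binaries live under /usr/openv/netbackup/bin/admincmd) and it only lists what the primary server still has registered, so you can check whether a stale 10.0 CloudPoint entry is behind the "Entity already exists" message. It deletes nothing, and whether the old registration actually appears in this output depends on how it was originally added.

```powershell
# Minimal sketch, assuming a Windows primary server; paths are the default
# install location and any host names in the output are your own.
$admincmd = 'C:\Program Files\Veritas\NetBackup\bin\admincmd'

# List every host registered with the primary server (names, types, versions)
# and look for the decommissioned CloudPoint server in the output.
& "$admincmd\nbemmcmd.exe" -listhosts -verbose

# nbemmcmd also offers -deletehost (with -machinename and -machinetype), but
# the machine type that applies to a stale CloudPoint/Snapshot Manager entry
# is not obvious -- confirm with Veritas support or the documentation before
# removing anything from the EMM database.
```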
BackupExec + S3 [A backup storage read/write error has occurred]
Hi, we have Backup Exec 20.4 with an on-premises StoreOnce appliance and use Amazon S3 with Storage Gateway as a Virtual Tape Library (VTL). My jobs back up onsite to the StoreOnce and are then pushed to the cloud via AWS S3 with a duplicate job. I only get this error from time to time, and I have already checked with my ISP and VPN/network team and opened a ticket with AWS. Can anyone help me with these failures?

Job ended: Friday, 19 June 2020 at 02:27:11
Completed status: Failed
Final error: 0xe00084c7 - A backup storage read/write error has occurred. If the storage is tape based, this is usually caused by dirty read/write heads in the tape drive. Clean the tape drive, and then try the job again. If the problem persists, try a different tape. You may also need to check for problems with cables, termination, or other hardware issues. If the storage is disk based, check that the storage subsystem is functioning properly. Review any system logs or vendor specific logs associated with the storage to help determine the source of the problem. You may also want to check any vendor specific documentation for troubleshooting recommendations. If the storage is cloud based, check for network connection problems. Run the CloudConnect Optimizer to obtain a value for write connections that is suitable for your environment and use this value to run the failed backup job. Review cloud provider specific documentation to help determine the source of the problem. If the problem still persists, contact the cloud provider for further assistance.
Final error category: Backup Media Errors

Duplicate - VMVCB::\\XXXXX\VCGuestVm\(DC)XXXX(DC)\vm\XXXX
An unknown error occurred on device "HPE StoreOnce:3".
V-79-57344-33991 - A backup storage read/write error has occurred. (Same tape/disk/cloud recommendation text as above.)
V-79-57344-65072 - The connection to target system has been lost. Backup set canceled.

I can't try the CloudConnect Optimizer because it's an iSCSI connection. Any help would be great. Thank you, Federico Pieraccini
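Since the final error (V-79-57344-65072, "connection to target system has been lost") points at the iSCSI session to the Storage Gateway VTL dropping, one low-effort check is to log connectivity to the gateway during the duplicate window and correlate any gaps with the failed jobs. A minimal sketch, assuming the gateway address 10.0.0.50 (a placeholder) and the standard iSCSI port 3260:

```powershell
# Probe the Storage Gateway's iSCSI port once a minute and log any failures,
# so outages can be matched against the duplicate job's failure time.
$gateway = '10.0.0.50'      # placeholder for the Storage Gateway appliance
$logFile = 'C:\Temp\vtl-connectivity.log'

while ($true) {
    $ok = Test-NetConnection -ComputerName $gateway -Port 3260 -InformationLevel Quiet
    if (-not $ok) {
        "$(Get-Date -Format s)  iSCSI port 3260 on $gateway unreachable" |
            Out-File -FilePath $logFile -Append
    }
    Start-Sleep -Seconds 60
}
```

If the log shows drops, the iSCSI initiator timeout settings and the VPN path (MTU, tunnel re-keys) are the usual suspects; if it stays clean, the read side on the StoreOnce is the more likely source.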
Netbackup 7.7.3 save backups to storage
Hi everyone, I need to save backups to a Dell EMC Isilon storage array. Is this possible, and what is the best recommendation for doing it? I tried "Configure disk storage server", but the only option offered for the storage type is "NetApp". Currently we only use this storage to take backups (NDMP).
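One common approach, offered here only as a hedged sketch: present a share or export from the Isilon to a media server and point a BasicDisk storage unit at it (the "Configure disk storage server" wizard covers OST/AdvancedDisk devices, which is why only NetApp shows up there). All labels, paths, and host names below are placeholders, and the exact bpstuadd options should be checked against the 7.7.3 command reference.

```powershell
# Minimal sketch for a Windows media server where the Isilon share is already
# presented at E:\isilon_backups (a mounted path is simpler for BasicDisk than
# a UNC path). Commands live in the NetBackup admincmd directory.
$admincmd   = 'C:\Program Files\Veritas\NetBackup\bin\admincmd'
$backupPath = 'E:\isilon_backups'   # placeholder path backed by the Isilon

# Create a BasicDisk storage unit on the media server pointing at that path...
& "$admincmd\bpstuadd.exe" -label isilon_stu -path $backupPath -host MEDIASERVER01

# ...and confirm it exists before referencing it in a policy.
& "$admincmd\bpstulist.exe" -label isilon_stu -U
```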
Backup exec 2016 & Dell Compellent
Is there a way to back up Dell Compellent volumes directly? I heard a while ago that you could do this by copying/mounting the volume or snapshot to the backup server and backing it up from there. I think it is all PowerShell scripting, but I wouldn't know where to start.
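The Compellent-side steps (creating a replay/snapshot and mapping a view volume to the backup server) are done with Dell's own tools or their Storage Center PowerShell SDK and are not shown here. Once the view volume is mapped, the Windows side is ordinary Storage-module work, sketched below; the offline-disk selection and the read-only choice are assumptions to adapt.

```powershell
# After a Compellent snapshot view volume has been mapped to the backup server,
# the new disk usually arrives offline. Bring it online (read-only, since it is
# a point-in-time copy) so a Backup Exec selection or a file copy can read it.
Import-Module Storage    # built into Windows Server 2012 R2 and later

# Review the offline disks before changing anything.
$candidates = Get-Disk | Where-Object { $_.OperationalStatus -eq 'Offline' }
$candidates | Format-Table Number, FriendlyName, SerialNumber, Size -AutoSize

foreach ($disk in $candidates) {
    Set-Disk -Number $disk.Number -IsOffline $false   # bring the disk online
    Set-Disk -Number $disk.Number -IsReadOnly $true   # keep the snapshot immutable
}

# Show the volumes that are now mountable and can be added to a backup job.
$candidates | Get-Partition | Get-Volume
```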
Fujitsu selects Veritas Access as their next-gen storage infrastructure to support K5
Have you read today's news? Fujitsu Limited (Minato-ku, Tokyo) has selected Veritas Access, a software-defined scale-out network-attached storage (NAS) solution, as the premier storage infrastructure for the Fujitsu Cloud Service K5. Fujitsu selected Veritas Access due to the following key benefits:
- Scale-out architecture that enables the cluster's hardware to be added or replaced without taking systems offline
- Flexibility in supporting multiple protocols and the ability to seamlessly interface with different types of storage
- Control and efficiency
- Advanced technology with a proven record of supporting global mission-critical systems
Read the press release and share your thoughts by commenting below.
Pause or Disable External Array?
Hi. I have a B2D setup where the array is a JBOD RAID 5 virtual disk (VDisk) on a Dell PE510 with a PERC H800. I need to swap out a dead drive in the array. I will power off the array to do so but intend to keep the server running. Should I pause or disable the VDisk within Backup Exec before taking it offline, or will Backup Exec automatically take the VDisk offline when I power it down and bring it back online when power is restored? Thanks.
Moving Backup Sets to Another Disk Storage Location
We're replacing a NAS used for disk storage with a new, larger one, and I need to transfer the backup sets from the old NAS to the new one. I've read some posts stating that I need to run duplicate jobs on all the backup sets to move them to the new storage, but that seems awfully time consuming. Is it possible to robocopy everything over, including the .cfg and BEControl files? Or perhaps copy the .bkf files over and re-catalog them?
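The officially supported route is duplicate jobs, but if the copy-and-recatalog approach is attempted, a sketch along these lines keeps the folder structure (including the .cfg and BEControl files) intact; the share names and log path below are placeholders. After the copy, the new share still has to be added as disk storage in Backup Exec and inventoried/cataloged before the copied sets show up as restorable.

```powershell
# Mirror the old disk-storage folder to the new NAS, preserving data,
# attributes, and timestamps, with a log to review for skipped files.
$source = '\\old-nas\BackupExec'    # placeholder: current disk storage share
$dest   = '\\new-nas\BackupExec'    # placeholder: new disk storage share

robocopy $source $dest *.* /E /COPY:DAT /R:2 /W:5 /LOG:C:\Temp\nas-migration.log

# Robocopy exit codes 0-7 indicate success; 8 or higher means failures.
if ($LASTEXITCODE -ge 8) { Write-Warning "robocopy reported failures -- check the log." }
```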
Survey Participants Needed: Long Term Retention Storage Appliance
The purpose of this survey is to gather feedback on future appliance concepts. We'll be asking questions about storage use cases for long-term retention. SURVEY LINK
We look forward to your participation and feedback on this topic!
Setting Active Backups Per Storage Unit Group
NetBackup version: 7.7.3. Windows version: 2012 R2.
I have two storage unit groups (SUGs): I send one group of policies to one SUG and the other group of policies to the other SUG. For various reasons related to network speed, disk speed, and time, I would like each SUG to be active with a maximum of two jobs. How do I make sure each SUG runs no more than two jobs, so that when I look at the Activity Monitor I see at most four write operations: two to one SUG and two to the other? Of course, if no policy is scheduled for a particular SUG at that time, I would only see the two active jobs on the SUG that is running, not four. At no time should any SUG have more than two active jobs. Is this even possible?
What I basically have is two NetApp devices with defined disk pools. These disk pools are in storage units, and the storage units are in storage unit groups. A simple diagram is attached. Thank you for any insight or guidance you might have. Kind regards.
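One caveat worth stating up front: NetBackup applies the "Maximum concurrent jobs" limit per storage unit (and disk pools carry their own I/O stream limits), not per storage unit group, so a hard cap of two jobs per SUG is only straightforward when each group contains a single storage unit; with several member STUs the per-STU limits add up. A hedged sketch for reviewing the current values on a Windows master server follows (STU_NAME is a placeholder; on Linux the commands live under /usr/openv/netbackup/bin/admincmd).

```powershell
# Inspect each member storage unit of the group and note its
# "Maximum concurrent jobs" value.
$admincmd = 'C:\Program Files\Veritas\NetBackup\bin\admincmd'

& "$admincmd\bpstulist.exe" -label STU_NAME -U

# To enforce "two jobs per group", set Maximum concurrent jobs = 2 on each
# member STU (Administration Console > Storage > Storage Units, or bpsturep)
# and keep one STU per group if the cap must apply to the group as a whole.
```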
Backup Exec 15 does not consider overwritable media in storage capacity
Hi, I'm having a small issue with the Backup Exec Storage screen. As shown below, the capacity of the tape library is reported as almost full. However, the media in the tape library are now outside of their protection period and are marked as overwritable. Should Backup Exec not treat this overwritable media as available capacity? I should also mention that backups are running to tape correctly; it would just be nice to see usable capacity accurately on the Storage screen, rather than having to drill down into the 'Slots' screen.