How can I find out the actual size of a backup after compression and deduplication?
Hi everyone, I am using Veritas NetBackup version 11 to back up Oracle databases and all the virtual machines in a fairly large environment. The experience has been smooth, but my boss wants me to show him the exact actual size of each backup after it completes successfully, meaning the actual size of a backup after deduplication. How can I find this statistic? The Activity Monitor in the NetBackup Web UI shows the logical size and the deduplication ratio for each job ID, but summing them up manually is painful =))) So what I really want to know is: is there a faster way to get the actual size of a backup after deduplication? And if manual calculation is the only option, what is the correct formula given the logical size and the deduplication ratio?
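Not an official NetBackup formula, but assuming the deduplication rate shown in the Activity Monitor is the percentage of data saved, the stored size of a job works out to logical size × (1 − rate/100). A minimal Python sketch of that math, using made-up job figures:

```python
# Estimate the post-deduplication ("actual") stored size of backup jobs,
# assuming the dedup rate NetBackup reports is the percentage of data saved:
#   actual_size = logical_size * (1 - dedup_rate / 100)

def actual_size_gb(logical_gb: float, dedup_rate_pct: float) -> float:
    """Return the estimated stored size in GB after deduplication."""
    return logical_gb * (1.0 - dedup_rate_pct / 100.0)

# Hypothetical job data (job_id, logical size in GB, dedup rate in %),
# as it might be copied out of the Activity Monitor.
jobs = [
    (101, 500.0, 95.0),   # 500 GB logical at 95% dedup -> 25 GB stored
    (102, 1200.0, 88.0),  # 1200 GB logical at 88% dedup -> 144 GB stored
    (103, 80.0, 0.0),     # no dedup -> 80 GB stored
]

total_stored = sum(actual_size_gb(logical, rate) for _, logical, rate in jobs)
print(f"Total stored after dedup: {total_stored:.1f} GB")  # 249.0 GB
```

Rather than copying numbers out of the Web UI by hand, the per-job figures can also be exported in bulk with the `bpdbjobs` command line and fed into the same calculation.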
BackupExec deduplication manager service failed to start
Hi all, I have a Backup Exec 3600 R4 appliance with BE 15. This appliance is EOX, I know, but our client maintains this old infrastructure and now I have a problem. The appliance had 2 failed disks and the RAID was offline. Our client bought 2 replacement disks and we installed them in the appliance. The RAID rebuild apparently completed successfully, and the appliance web UI now shows all disks online, as well as the RAID. But when I try to start the Backup Exec services, the Backup Exec deduplication manager fails with an error, and I can see the detailed status in the Windows Event Viewer. However, when I try to verify the D:\Appliance_Dedupe\etc\pdregistry.cfg file, I cannot access the folder (D:\Appliance_Dedupe is the appliance's deduplication folder). Can anyone help me with this? Thanks in advance. Regards, Toño
VERITAS NETBACKUP WITH RED HAT CEPH STORAGE
We are using NetBackup with Ceph storage, where Ceph is presented to the media servers as MSDP. Since MSDP has a limitation of one pool per media server and a sizing limit of 96 TB, we are planning to use Ceph as an S3 backup target for the new media servers we have to configure. I want to know: if we configure Ceph as S3 cloud storage, can NetBackup still perform deduplication? Will all dedup happen only on the client side using Accelerator, or can we have target-side dedup by NetBackup as well? Or will NetBackup send all the backup data it receives to Ceph without any dedup, leaving all data reduction to be handled by Ceph? Can Ceph as cloud storage perform dedup on its own? With Ceph 4 we will have erasure coding, so the total storage used will be lower, but in terms of dedup, what advantages do I gain by using Ceph as cloud storage instead of MSDP? If anyone is using NetBackup with Ceph, could you share your approach, please?
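As a quick illustration of the one-pool-per-server, 96 TB constraint described above, a back-of-the-envelope sketch (the 500 TB capacity figure is hypothetical) of how many MSDP media servers a given back-end capacity would require:

```python
import math

MSDP_POOL_LIMIT_TB = 96  # per-media-server MSDP pool limit cited in the post

def media_servers_needed(backend_capacity_tb: float) -> int:
    """Minimum number of media servers, at one MSDP pool per server."""
    return math.ceil(backend_capacity_tb / MSDP_POOL_LIMIT_TB)

# e.g. 500 TB of post-dedup back-end capacity:
print(media_servers_needed(500))  # 6
```

This scaling overhead is exactly what makes a single large S3 target attractive compared with stacking more MSDP pools.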
Backup Exec 16 Exchange GRT backup error with deduplication
After an in-place Windows upgrade from Server 2012 R2 to 2016, the BE services started normally and all jobs run successfully, except that Exchange and VMware GRT backups fail with the following error:

Final error: 0xe0001203 - An error occurred while creating or accessing the directory specified for this operation. Check to make sure that the B2D folder and/or the temporary path specified for this operation are valid.
V-79-57344-4611 - An error occurred creating or accessing the directory \\.pdvfs\servername\2\BEData specified for this operation. Ensure that the directory is accessible and can be written to.

When I changed the storage from the deduplication disk to a tape drive, the job succeeded. FP 2 and the latest hotfix (HF 128051) are installed.
Duplicate Jobs stuck in Active Queued
Veritas support has been worse than useless. We have a CAS/MBS environment; duplicate jobs are scheduled to run every week from the MBS to deduplication storage on the CAS, but they get stuck in the active/queued state for hours without any data transfer. No failure, and no events in the logs. Many weeks ago there was an alert about robotic libraries, but it has been cleared and has not appeared since. Veritas support had me run an inventory on the dedupe storage, recreate the storage, update all servers to 20.5, and generate several different logs, all to no avail. Any ideas are welcome.
What you can store and transmit with deduplication seems to be very sensitive to deduplication rates.
The charts in the PDF are theoretical; it would be interesting to see whether others are getting results like these. From what I can see, the ability to dedupe data can have a significant impact on storage and transmission.
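To make that sensitivity concrete: stored size is logical size divided by the dedup factor, so the savings curve flattens quickly at higher ratios. A small illustrative Python sketch (the 100 TB figure and the list of factors are arbitrary):

```python
# Stored size for a fixed 100 TB logical data set across dedup factors.
# stored = logical / factor; note the diminishing returns at high ratios.
logical_tb = 100.0
for factor in (1, 2, 5, 10, 20, 50):
    stored_tb = logical_tb / factor
    saved_pct = 100.0 * (1.0 - 1.0 / factor)
    print(f"{factor:>2}:1 -> {stored_tb:6.1f} TB stored ({saved_pct:5.1f}% saved)")
```

Going from 1:1 to 2:1 saves 50 TB here, while going from 10:1 to 20:1 saves only another 5 TB, which is why storage and transmission outcomes are so sensitive to small changes at the low end of the dedup-rate range.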