BackupExec deduplication manager service failed to start
Hi all, I have a Backup Exec 3600 R4 appliance with BE 15. I know this appliance is EOX, but our client maintains this old infrastructure and now I have a problem. The appliance had 2 failed disks and the RAID was offline. Our client bought 2 replacement disks and we installed them in the appliance. The RAID rebuild was apparently successful, and the appliance web UI now shows all disks and the RAID as online. But now, when I try to start the Backup Exec services, the Backup Exec deduplication manager fails with an error, and the Windows Event Viewer shows a detailed status for the failure. When I try to verify the D:\Appliance_Dedupe\etc\pdregistry.cfg file, I can't access the folder (D:\Appliance_Dedupe is the appliance's deduplication folder). Can anyone help me with this? Thanks in advance. Regards, Toño
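Not a fix, but a quick way to narrow down whether this is a permissions problem or missing data: a minimal Python sketch (my own illustration, not from the post) that probes the deduplication folder and pdregistry.cfg and reports what the OS returns. Only the paths come from the post.

```python
import os

# Paths taken from the post; adjust if your dedupe folder differs.
DEDUPE_DIR = r"D:\Appliance_Dedupe"
REGISTRY_CFG = os.path.join(DEDUPE_DIR, "etc", "pdregistry.cfg")

for path in (DEDUPE_DIR, REGISTRY_CFG):
    print(f"--- {path}")
    print("exists:  ", os.path.exists(path))
    print("readable:", os.access(path, os.R_OK))
    try:
        # Actually touching the object surfaces the real Windows error:
        # PermissionError (ACLs lost/changed during the rebuild) vs.
        # FileNotFoundError (contents did not survive the RAID failure).
        if os.path.isdir(path):
            os.listdir(path)
        else:
            with open(path, "rb") as fh:
                fh.read(1)
        print("access:   OK")
    except OSError as exc:
        print("access:  ", exc)
```

An "access denied" result points toward NTFS ACLs that changed during the rebuild (the dedupe services typically run under a dedicated service account), while "file not found" suggests the folder contents were lost with the array.

VERITAS NETBACKUP WITH RED HAT CEPH STORAGE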
We are using NetBackup with Ceph storage, where Ceph is presented to the media servers as an MSDP. Since MSDP is limited to one pool per media server and a sizing limit of 96 TB, we are planning to use Ceph as an S3 backup target for the new media servers we have to configure. I want to know:

- If we configure Ceph as S3 cloud storage, can NetBackup perform deduplication?
- Will all dedup happen only on the client side using Accelerator, or can NetBackup do target-side dedup as well?
- Or will NetBackup send all the backup data it receives to Ceph without any dedup, leaving all data reduction to be handled by Ceph?
- Can Ceph as cloud storage perform dedup on its own?

With Ceph 4 we will have erasure coding, so the total storage used will be less, but in terms of dedup, what advantages do I get by using Ceph as cloud storage instead of an MSDP? If anyone is using NetBackup with Ceph, could you share your approach please.
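Whatever the dedup behavior turns out to be, a practical first step is confirming the new media servers can reach the Ceph RADOS Gateway as plain S3. A minimal boto3 sketch; the endpoint, bucket, and credentials below are placeholders of my own, not anything from the post:

```python
import boto3
from botocore.client import Config

# All values below are hypothetical, for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.internal:8443",  # Ceph RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    config=Config(signature_version="s3v4"),
)

bucket = "netbackup-target"  # hypothetical bucket

# Round-trip a small object to prove read/write access to the gateway.
s3.put_object(Bucket=bucket, Key="nbu-smoke-test", Body=b"hello")
obj = s3.get_object(Bucket=bucket, Key="nbu-smoke-test")
assert obj["Body"].read() == b"hello"
s3.delete_object(Bucket=bucket, Key="nbu-smoke-test")
print("RGW S3 round-trip OK")
```

This only validates connectivity; whether dedup happens client-side, on the media server, or not at all depends on how the storage server is defined in NetBackup, which is exactly the question above.

Backup Exec 16 Exchange GRT backup error with deduplication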
After an in-place Windows upgrade from Windows Server 2012 R2 to 2016, the BE services started normally and all jobs run successfully, except that Exchange and VMware GRT backups fail with the following error:

Final error: 0xe0001203 - An error occurred while creating or accessing the directory specified for this operation. Check to make sure that the B2D folder and/or the temporary path specified for this operation are valid.
V-79-57344-4611 - An error occurred creating or accessing the directory \\.pdvfs\servername\2\BEData specified for this operation. Ensure that the directory is accessible and can be written to.

When I changed the storage from the deduplication disk to a tape drive, the job succeeded. FP2 and the latest HF 128051 are installed.

Duplicate Jobs stuck in Active Queued
Veritas support has been worse than useless. We have a CAS/MBS environment; duplicate jobs are scheduled to run every week from the MBS to deduplication storage on the CAS, but they get stuck in the active queued state for hours without any data transfer. No failure, and no events in the logs. Many weeks ago there was an alert about robotic libraries, but it has been cleared and has not reappeared. Veritas support had me run an inventory on the dedupe storage, recreate the storage, update all servers to 20.5, and generate several different logs, all to no avail. Any ideas are welcome.

What you can store and transmit with deduplication seems to be very sensitive to deduplication rates.
The charts in the PDF are theoretical; it would be interesting to see whether others are getting results like these. From what I can see, the ability to dedupe data can have a significant impact on both storage and transmission.
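The sensitivity is easy to see with back-of-the-envelope numbers (my own illustration, not taken from the PDF): physical footprint is logical data divided by the dedup ratio, so savings flatten quickly as the ratio climbs.

```python
# Physical footprint of 100 TB of logical backup data at various
# deduplication ratios. Going from 2:1 to 4:1 frees 25 TB, while
# going from 10:1 to 20:1 frees only 5 TB.
logical_tb = 100.0

for ratio in (1, 2, 4, 10, 20, 50):
    physical_tb = logical_tb / ratio
    saved_pct = 100.0 * (1 - physical_tb / logical_tb)
    print(f"{ratio:>3}:1  ->  {physical_tb:6.1f} TB stored/transmitted "
          f"({saved_pct:4.1f}% saved)")
```

The same division applies to transmission: a job that covers 100 TB logical at 10:1 only moves 10 TB over the wire, which is why small changes in the achieved ratio swing these charts so much at the low end.

Windows agent migration to new server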
I'm running BE 20 for Windows and backing up my iSCSI data volumes to a Dell OST deduplication device. I'm getting ready to migrate several Windows 2008 R2 servers running the BE agent to new Windows 2016 servers. Each migrated agent will keep its host name and IP, and I want to make sure the new backups continue to be deduplicated against the previous backups. Do I need to do anything special to prepare for that? I've found docs about migrating the BE server, but not the remote agents. Here's my plan:

1. Shut down the old server; bring up the new server, rename it with the old server's name, and rejoin the domain.
2. From the BE server, establish a trust relationship with the new server and install the agent.
3. Install the OST dedup plugin on the agent.

Is there anything else I need to do for deduplication to recognize the previous backups? Thanks.
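One thing that could be scripted alongside that plan: since the dedup association rides on the host identity, it may be worth verifying that the replacement box really answers to the old name and address before the first job runs. A small sketch; the host name and IP below are hypothetical placeholders, not from the post:

```python
import socket

# Placeholder identity the migrated server is supposed to keep.
EXPECTED_NAME = "fileserver01"
EXPECTED_IP = "10.0.0.25"

resolved_ip = socket.gethostbyname(EXPECTED_NAME)
reverse_name = socket.gethostbyaddr(EXPECTED_IP)[0].split(".")[0]

print(f"{EXPECTED_NAME} resolves to {resolved_ip}")
print(f"{EXPECTED_IP} reverses to {reverse_name}")

if resolved_ip != EXPECTED_IP or reverse_name.lower() != EXPECTED_NAME:
    print("WARNING: DNS does not match the old identity yet; fix the "
          "forward/reverse records before running the first backup.")
else:
    print("Identity looks consistent with the old server.")
```

Access 7.4.2 GAs on October 1 2018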
Today (10/1/18) marks the release of version 7.4.2 of Access. The new release has several new features, but by far the most important is support for NetBackup deduplication on the Access Appliance. Deduplication targets cost-of-ownership reduction (a key focus for the Access Appliance) with features like support for client-direct data deduplication and multi-domain global deduplication. Most of the Access collateral has been rewritten for this new functionality. Check out the blog here, or visit the Access web page here.

Duplicate deduped NDMP backup (block-level) to cloud?
Hello, we have a Backup Exec (16 FP2) job that backs up data (mostly VM images) from a NetApp to deduplication storage via NDMP. Because it is a file-level backup and the data inside the VM images changes rapidly, no real incremental or differential backups are possible. Is it possible to duplicate this deduped backup set to cloud storage (as an offsite backup) at the block level, so that incremental backups can be used and the amount of data sent is reduced? And can I use this duplicated backup to restore single files directly back to the NetApp, or do I have to restore it to local storage first and then copy it back to the NetApp? Thomas

How to backup deduplication storage to external drive?
What is the proper procedure for backing up an entire deduplication storage directory to external storage? In the backup job setup, there is both the raw directory structure of the drive letter and directory name "E:\BackupExecDeduplicationStorageFolder\", and an additional section under "Shadow copy components" with "Configuration" and "Storage" as possible selections. So which do I choose to back up? Both of these? Or only the shadow copy components? The target of the backup is a large external USB drive with backup compression enabled. Also, is this really enough for the dedupe data to be usable if restored in the future, or do I need to back up some additional SQL database associated with the deduplication disk directory?

Exchange backup to deduplication storage or not?
I'm wondering what your thoughts are about this process. We currently back up our Exchange 2010 server like so: a full backup (mailbox database and logs) once a week, sent to our deduplication storage, then duplicated to another deduplication storage offsite and also duplicated to tape; an incremental every day to simple disk, duplicated to another simple disk offsite. When the duplicate job for the tape runs, the speed is very slow; yes, I know, it's because of the rehydration process. The total size of the data is almost 1 TB, so the job takes, on average, 2 days to run, roughly 48 hours. During that time, the tape drive is unavailable for other, critical jobs. I need that tape drive for other jobs; 2 days is too long for me. I want to propose to my boss, the architect of the infrastructure, that we store the full backup on simple disk storage instead of the deduplication store, so that the duplication to tape is much faster. The point he will make is that we save a lot of space by storing the full Exchange backup on the deduplication storage. But I'm wondering whether we really save that much space by sending the backup to the dedup storage; I can't find out how much space the Exchange backup actually occupies on the dedupe itself. What are your experiences and thoughts about this process? Thanks for sharing.
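For what it's worth, the numbers in the post already make the case: 1 TB rehydrated over roughly 48 hours works out to single-digit MB/s, far below what a tape drive can stream. A quick calculation using the figures from the post; the tape speed is a ballpark LTO-class assumption of mine, not from the post:

```python
data_tb = 1.0          # full backup size from the post
duration_h = 48.0      # observed duplicate-to-tape runtime from the post

data_mb = data_tb * 1024 * 1024            # TB -> MB
throughput_mbps = data_mb / (duration_h * 3600)
print(f"Effective rehydration throughput: {throughput_mbps:.1f} MB/s")

# Assumption: an LTO-6 class drive streams roughly 160 MB/s native.
tape_native_mbps = 160.0
ideal_hours = data_mb / tape_native_mbps / 3600
print(f"The same 1 TB fed from plain disk could take ~{ideal_hours:.1f} h")
```

So the trade is roughly 46 hours of tape-drive time per week against whatever space the dedupe pool actually saves on that 1 TB, and that saved-space figure is the number worth digging out before the meeting with the boss.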