Deduplication - PDVFS errors
Hi All, I'm struggling with an issue that has occurred with deduplication storage (or that's what it appears to be to me) on one of our managed media servers. All of a sudden most of the jobs run on this server are not completing. They start and go into the "Active: Running - Backup" state, but the average job rate for 75% of the jobs is only 1-2 MB/min, or simply stops at 0 MB/min. I checked the event logs today and can see that PDVFS errors have appeared as well. Here is what we are getting (oldest first):

Log Name: Application
Source: PDVFS
Date: 15/08/2016 22:00:53
Event ID: 50000
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: RESBE03.RES.TSS.NHS.UK
Description: An error occurred in the PDVFS driver: <ERR> PdvfsOpen: O_CREAT not set

The same event (ID 50000, same source and computer) then recurred with these descriptions:

- 15/08/2016 22:00:53 - <ERR> PdvfsOpen: failed for "/RESBE03/2/BEData/_BE101_.TMP": No such file or directory (2)
- 15/08/2016 22:12:23 - <ERR> PdvfsPread: pdvfs_io_pread_fake failed: No such file or directory (2)
- 15/08/2016 22:39:17 - <ERR> PdvfsPread: fd->dep or fp->dep->stat NULL

I have tried accessing the PDVFS file path and it is present; I have been able to navigate (very slowly) to the directory mentioned in the alerts, \\.pdvfs\RESBE03\2\BEdata. I'm still waiting for it to open to see whether the mentioned files are present as well.
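Not a fix, but when sifting a longer Application-log export for patterns, a small script can group the PDVFS failures by the driver call that failed. This is only a sketch: it assumes a plain-text export shaped like the entries pasted above, and the sample text embedded here is just those four events.

```python
# Toy triage of a saved Application-log text export: count which PDVFS
# driver calls are failing (PdvfsOpen, PdvfsPread, ...). The LOG_TEXT
# sample below reproduces the event descriptions quoted in this post.
import re
from collections import Counter

LOG_TEXT = """\
Description: An error occurred in the PDVFS driver: <ERR> PdvfsOpen: O_CREAT not set
Description: An error occurred in the PDVFS driver: <ERR> PdvfsOpen: failed for "/RESBE03/2/BEData/_BE101_.TMP": No such file or directory (2)
Description: An error occurred in the PDVFS driver: <ERR> PdvfsPread: pdvfs_io_pread_fake failed: No such file or directory (2)
Description: An error occurred in the PDVFS driver: <ERR> PdvfsPread: fd->dep or fp->dep->stat NULL
"""

def summarize(log_text: str) -> Counter:
    # Pull out the PDVFS function named after each "<ERR>" marker and
    # count how often each one appears.
    return Counter(re.findall(r"<ERR> (Pdvfs\w+):", log_text))

if __name__ == "__main__":
    for func, count in summarize(LOG_TEXT).items():
        print(f"{func}: {count}")
```

On the events above this reports two PdvfsOpen and two PdvfsPread failures, which at least shows both the create path and the read path of the dedupe file system are affected, not a single bad file.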
While I'm confident this is pointing to a storage error, I'm confused as to the extent of the issue, as we are still seeing jobs completing without any issues. The server itself is running Backup Exec 2015 FP4 on Windows 2012 R2. Dedupe storage is a VNX5500. Thanks in advance for any insight anyone can provide. P.S. Other media servers we are running are not affected by this issue.

Backup Exec 16 Exchange GRT backup error with deduplication
After a Windows in-place upgrade from Windows Server 2012 R2 to 2016, the BE services started normally and all jobs run successfully, except the Exchange and VMware GRT backups, which fail with the following error:

Final error: 0xe0001203 - An error occurred while creating or accessing the directory specified for this operation. Check to make sure that the B2D folder and/or the temporary path specified for this operation are valid.
V-79-57344-4611 - An error occurred creating or accessing the directory \\.pdvfs\servername\2\BEData specified for this operation. Ensure that the directory is accessible and can be written to.

I changed the storage from the deduplication disk to a tape drive and the job succeeded. FP2 and the latest HF 128051 are installed.

Deduplication stream size vs disk cluster size
In a few older applications that rely on file system-based databases, like old Usenet news spool servers, it was a good idea to match the storage volume's cluster size to the smallest file size if possible. For that old Usenet server, 1 KB clusters were a good fit for the sub-1 KB posts. This made the best use of the space, though it made the file system hellacious to manage when things went wrong. Now fast-forward to today. I have a new 20 TB volume to put a dedupe store on, and NTFS on Server 2008 R2 has a minimum cluster size of 8 KB for a volume that size, though I can make the clusters larger. Is there any benefit to matching the dedupe store volume's cluster size to the data stream size, to maximize data deduplication and minimize storage waste? For instance, a cluster size of 32 KB with a data stream size of 32 KB? (By the way, there's no OS selection for Server 2008 or 2008 R2. It says "2008 (not applicable).")

Disk deduplicate to disk
Hi, my company wants to change our current backup system, so I am trying the latest Backup Exec version, but I have a problem achieving what I want. I have configured a backup job that saves to a deduplication disk (an NFS volume attached to the VM), and it works. Next, in order to protect this data, I want to create a copy of this volume on a remote share, so I created a new "disk to disk" job. But when I launch a full backup, it copies nothing and reports "success", and I don't understand why. I hope my English is correct. Best regards, Max

Access 7.4.2 GAs on October 1 2018
Today (10/1/18) marks the release of the 7.4.2 version of Access. The new release has several new features, but by far the most important is support for NetBackup deduplication on the Access Appliance. Deduplication targets cost-of-ownership reduction (a key focus for the Access Appliance) with features like support for client-direct data deduplication and multi-domain global deduplication. Most of the Access collateral has been rewritten for this new functionality. Check out the blog here, or visit the Access web page here.

Duplicate deduped NDMP backup (block-level) to cloud?
Hello, we have a Backup Exec (16 FP2) job which backs up data (mostly VM images) from a NetApp to a deduplication storage via NDMP. As it is a file-level backup and the data in the VM images changes rapidly, no real incremental or differential backups are possible. Is it possible to duplicate this deduped backup set to a cloud storage (as an offsite backup) on a block-level basis, so that incremental backups can be used and the amount of data sent is reduced? And can I use this duplicated backup to restore (single files) directly back to the NetApp, or do I have to restore it first to local storage and then back to the NetApp? Thomas

Reinstalling Backup Exec 16 and maintaining deduplication storage
Hello, we have a Windows Server 2008 R2 machine with Backup Exec 16 FP1 and the deduplication option installed. We have to format the server, install Windows Server 2012 R2, and reinstall Backup Exec 16 while maintaining the settings and catalogs. I will follow this technote: https://www.veritas.com/support/en_US/article.TECH62744 But I am worried about this part: "Do not follow this procedure if any of the following are true: If the deduplication option is installed." Has anyone already tried this? Will I get the deduplication storage back after the reinstall?

VERITAS NETBACKUP WITH RED HAT CEPH STORAGE
We are using NetBackup with Ceph storage, where Ceph is presented to the media servers as an MSDP. Since MSDP has a limitation of one pool per media server and a sizing limit of 96 TB, we are planning to use Ceph as an S3 backup target for the new media servers we have to configure. I want to know: if we configure Ceph as S3 cloud storage, can NetBackup perform deduplication? Will all dedup happen only on the client using the Accelerator, or can we have target-side dedup by NetBackup as well? Or will NetBackup send all the backup data it receives to Ceph without any dedup, leaving all data reduction to be handled by Ceph? Can Ceph as a cloud storage perform dedup on its own? With Ceph 4 we will have erasure coding, so the total storage used will be less, but in terms of dedup, what advantages can I get by using Ceph as cloud storage instead of an MSDP? If anyone is using NetBackup with Ceph, could you share your approach please?

Exchange backup to deduplication storage or not?
I'm wondering what your thoughts are about this process. We currently back up our Exchange 2010 server like so: a full backup (mailbox database and logs) once a week, sent to our deduplication storage, then the job is duplicated to another deduplication storage offsite and also duplicated to tape. An incremental runs every day to simple disk and is duplicated to another simple disk offsite. When the duplicate job for the tape runs, the speed is very slow; yes, I know, it's because of the rehydration process. The total size of the data is almost 1 TB, so the job takes, on average, 2 days (roughly 48 hours) to run. During that time, the tape drive is unavailable for other, critical jobs. I need that tape drive for other jobs, and 2 days is too long for me. I want to propose to my boss, the architect of the infrastructure, that we store the full backup on simple disk storage instead of the deduplication store, so that the duplicate-to-tape job runs much faster. The point he will make is that we save a lot of space by storing the full Exchange backup on the deduplication storage. But I'm wondering if we really save that much space by sending the backup to the dedup storage; I can't find the amount of space the Exchange backup takes on the dedup store itself. What are your experiences and thoughts on this process? Thanks for sharing.

Recover Dedup Backup Sets From MMS when CASO is offline
I have Backup Exec 2014.1 V-Ray CAS installed at my main data center (site 1), and I have a deduplication storage system set up there. I also have an MMS Backup Exec server at my colo (site 2) with a second deduplication storage system. After each backup job at site 1, the backup sets are duplicated to site 2. I was told when I purchased this system that I would be able to restore at site 2 in a DR situation where my CAS at site 1 had failed. I have conducted tests by disconnecting the CAS at site 1 from the network and trying to restore data sets at site 2; I was met with errors saying there was no communication to the CAS and restores would not be possible. I have since contacted support and was given documentation on how to convert the MMS to a CAS to attempt restores. However, from my research this isn't possible, because the catalog information is stored on the CAS the way mine is set up. My questions:

1. With everything I have here, is it possible to make the MMS restore backup sets without a connection to the CAS?
2. Do I have to change the way that the backup sets are cataloged?
3. Will running a device inventory and catalog on the MMS server's deduplication drive, when it's disconnected from the CAS and converted to a CAS, let me restore backup sets?
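The "no communication to the CAS" error described in this last post follows from where the catalog lives. The dependency can be pictured with a toy model (purely illustrative, not Backup Exec's actual implementation): the MMS holds the deduplicated data, but every restore must first look the backup set up in the catalog, which in a centralized-catalog CASO setup sits only on the CAS.

```python
# Toy model (NOT Backup Exec's real implementation) of why an MMS cannot
# restore on its own under centralized cataloging: the data sits on the
# MMS, but the set-to-storage mapping lives only in the CAS catalog.

class Cas:
    """Central Administration Server: owns the catalog."""
    def __init__(self):
        self.catalog = {}    # backup set name -> media server holding it
        self.online = True

    def lookup(self, backup_set):
        if not self.online:
            raise ConnectionError("no communication to the CAS")
        return self.catalog[backup_set]

class Mms:
    """Managed media server: holds data, but no catalog of its own."""
    def __init__(self, name, cas):
        self.name = name
        self.cas = cas
        self.data = set()    # deduplicated backup sets stored locally

    def restore(self, backup_set):
        location = self.cas.lookup(backup_set)   # fails if the CAS is down
        assert location == self.name and backup_set in self.data
        return f"restored {backup_set}"

cas = Cas()
site2 = Mms("site2", cas)
site2.data.add("exchange_full")              # duplicated set lives at site 2
cas.catalog["exchange_full"] = "site2"       # but the catalog entry lives on the CAS

print(site2.restore("exchange_full"))        # works while the CAS is reachable
cas.online = False                           # simulate the site 1 outage
# site2.restore("exchange_full") now raises ConnectionError, even though the
# deduplicated data itself is sitting on site 2's storage.
```

In this picture, running an inventory and catalog against the site 2 device (question 3) amounts to rebuilding a local catalog from the media, which is why that step is required before the converted server can restore on its own.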