Deduplication Option in Backup Exec

Gurvinder
Moderator
Employee Accredited Certified

This is a troubleshooting article describing some of the issues one might encounter when working with Deduplication storage, how to go about solving them, and what information to collect to help Veritas Technical Support solve them more quickly for you.

Deduplication Storage Folder Issues:

A. Deduplication folder is offline

Possible Causes: 
Dedupe service not starting, Dedupe credentials not correct, Dedupe folder corruption

It is important to start your checks by making sure the disk is healthy.

  1. Run a CHKDSK to ensure the drive on which the Deduplication Folder is hosted is free from any corruption (a command sketch follows this list).
  2. Ensure Windows indexing is unchecked for this drive.
  3. Ensure an AV exclusion is set for the drive on which Deduplication Folder is created.
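A minimal command sketch for the disk check in item 1, assuming the Deduplication Folder is hosted on drive E: (substitute the actual drive letter; the /f pass may require the volume to be dismounted or scheduled for the next reboot):

  chkdsk E:
  chkdsk E: /f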

Deduplication folder in Backup Exec may show offline due to any of the following:

  • Deduplication service not starting
  • Deduplication folder credentials in Backup Exec (BE) have changed and were not updated in the Deduplication SPA database
  • Corruption inside the dedupe folder

Ensure all Dedupe services can be started. The Deduplication Engine, Deduplication Manager, and PostgreSQL services all run under the Local System account. If any of them does not start, check the following:

1. PostgreSQL or any other deduplication service may fail to start because a registry value is set incorrectly. Confirm that the ImagePath value is accurate; it can be compared with another working system if available:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\postgresql-8.3 -> ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\spoold -> ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\spad -> ImagePath
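The ImagePath values can be read from an elevated command prompt, for example:

  reg query "HKLM\SYSTEM\CurrentControlSet\Services\postgresql-8.3" /v ImagePath
  reg query "HKLM\SYSTEM\CurrentControlSet\Services\spoold" /v ImagePath
  reg query "HKLM\SYSTEM\CurrentControlSet\Services\spad" /v ImagePath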

2. Logs that can be referred to, apart from the errors in the Windows Event Viewer, for each Dedupe service startup issue:

Note: The error description in Event Viewer is fairly accurate and helps in most cases.
 
Spoold startup issues   - Drive:\BackupExecDeduplicationStorageFolder\log\spoold\spoold.log
Spad startup issues     - Drive:\BackupExecDeduplicationStorageFolder\log\spad\spad.log
Postgres startup issues - Drive:\BackupExecDeduplicationStorageFolder\log\pddb (refer to the latest timestamped logs)
                          Drive:\BackupExecDeduplicationStorageFolder\databases\pddb\data\pg_xlog (http://www.veritas.com/docs/000085620)
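To locate the most recent pddb log quickly, the folder can be listed newest-first from a command prompt (substitute the actual drive letter for Drive:):

  dir /o-d "Drive:\BackupExecDeduplicationStorageFolder\log\pddb"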

Known Issue with Postgres startup:
A file named 'Backup' under Program Files\Symantec\ or under the folder where Backup Exec is installed can prevent the PostgreSQL service from starting and cause a failure during deduplication folder creation/re-creation. Remove or rename such a file if it exists at the said path.
Refer to these technotes for Postgres startup issues: 000014031, 000008261, 000094757
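A minimal sketch of checking for and renaming such a stray file, assuming the default install root (adjust the path to the actual Backup Exec install location, and confirm it is the stray file described above and not a folder that is in use):

  if exist "C:\Program Files\Symantec\Backup" ren "C:\Program Files\Symantec\Backup" Backup.old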

Deduplication Service Startup -
There can be various reasons why the Deduplication Engine does not start. Running BE Install Path\spoold.exe --test tests the contentrouter.cfg file in Dedupe\etc\Puredisk. If this file has been tampered with, the Deduplication Engine will not start (call support to rectify the file).
The Backup Exec Deduplication Engine may also fail to start if any bin or bhd file goes missing from the Dedupe\Data folder. The Data folder contains all the containers (bin and bhd files) which house the unique segments. Refer to http://www.veritas.com/docs/000008062 (the AV may have quarantined these files, or there may have been disk issues or abrupt reboots). The container number is reported in the Event Viewer (Application section). Please contact Tech Support if unable to find that bin or bhd file, since a tool will then need to be run to bring dedupe into a consistent state so that the Deduplication Engine service can start. Even after the tool is run, it is important to investigate why these files went missing so that the issue does not recur in future. Some root causes are already known (AV quarantined the files, disk issues). The Event Viewer should be checked to determine when the error started and what issue was encountered on that day; this helps determine the root cause of why these files went missing.
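A minimal sketch of the configuration test mentioned above, assuming a default Backup Exec install location (adjust the path as needed):

  cd /d "C:\Program Files\Symantec\Backup Exec"
  spoold.exe --test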

3. If the services start up, it is also important to verify that queue processing in dedupe is working. If tlogs (inside Dedupe\queue) are older than a couple of days and are not going away, there may be something wrong with the internal queue process.

They should be committed automatically and under no circumstances be deleted by anyone (manually deleting tlog files can badly affect the dedupe folder and backup sets).

Note: Any errors for queue process will be recorded in Dedupe\log\spoold\storaged.log
See http://www.veritas.com/docs/000087645 - A known case where queue processing does not run.

Queue processing can be manually triggered by running crcontrol.exe --processqueue from a command prompt (run it from the BE install path, and run the command 2 times) to see if the tlogs inside the queue folder get cleared.
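A minimal sketch, assuming a default Backup Exec install path (substitute the actual path):

  cd /d "C:\Program Files\Symantec\Backup Exec"
  crcontrol.exe --processqueue
  crcontrol.exe --processqueue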
Note: Read the log from the bottom as the latest entries are added at the very end.
If errors are still being reported in storaged.log then contact Veritas Technical Support.

4. If the dedupe folder is still offline even though all Deduplication services are running, restart the Backup Exec services. If the deduplication folder is still offline, refer to the adamm.log present in the Backup Exec Install Path\logs folder. Review the adamm.log from the bottom.

Example: See the adamm.log snippet below. In this case, the dedupe user's password was not correctly updated.
See: How to change password for logon account used for dedupe storage folder

Here's how to look for a similar section in your adamm.log.

  • Open adamm.log.
  • Go to the bottom of the log.
  • Search for the string "adamm log started".
  • When you find it, start reading the log downwards from there. Review these sections and note any errors:

[04596] 02/24/16 03:09:24.300 Read OST Server Records - start 
[04596] 02/24/16 03:09:24.345 OST Server: PureDisk:BE-CAS:PureDiskVolume
[04596] 02/24/16 03:09:24.345 Read OST Server Records - end
[04596] 02/24/16 03:09:24.345 DeviceIo Discovery - start
[04596] 02/24/16 03:09:26.431 DeviceIo: STS: Critical: (Storage server: PureDisk:BE-CAS) PdvfsRegisterOST: Failed to register with SPA on storage server BE-CAS. Check to make sure the server is on and that the services are running. (Permission denied) V-454-25
[04596] 02/24/16 03:09:26.432 DeviceIo: sts_open_server PureDisk:BE-CAS dedup 2060029
[04596] 02/24/16 03:09:26.432 DeviceIo: ostaspi: sts_open_server PureDisk:BE-CAS as dedup error 2060029
[04596] 02/24/16 03:09:26.432 DeviceIo: ostaspi: authorization with server PureDisk:BE-CAS has failed

SGMON.exe can also be used to debug a Deduplication Folder offline issue. If all deduplication-related services start without any problem, shut down only the Backup Exec services, launch SGMON with Device and Media verbose logging (verbose can be enabled from the SGMON settings), and then start all Backup Exec services.
Note: SGMON.exe is present in Backup Exec Install Path

Filter SGMON or any other log with the string "ERR" as shown below. The SGMON log file may be named differently, so check the log location to confirm the name of the log file.

C:\Program Files\Symantec\Backup Exec\Logs>findstr /C:"ERR" BE-CAS-SGMon.log > SGMON_ERR.log

PVLSVR:   [02/24/16 03:18:32] [2436]     DeviceIo: STS: Error: [ERROR] PDSTS: pd_register: PdvfsRegisterOST(BE-CAS) failed (13:Permission denied)
PVLSVR:   [02/24/16 03:18:32] [2436]     DeviceIo: STS: Error: [ERROR] PDSTS: add_mount: PdvfsMount() failed for mount point:<BE-CAS#1> (13:Permission denied)
PVLSVR:   [02/24/16 03:18:32] [2436]     DeviceIo: STS: Error: [ERROR] PDSTS: open_server: pd_mount() failed (2060029:authorization failure)
PVLSVR:   [02/24/16 03:18:32] [2436]     DeviceIo: STS: Error: [ERROR] PDSTS: impl_open_server: open_server(PureDisk:BE-CAS) failed (2060029:authorization failure)
PVLSVR:   [02/24/16 03:18:32] [2436]     DeviceIo: STS: Error: [ERROR] PDSTS: pi_open_server_v7: impl_open_server(PureDisk:BE-CAS) failed (2060029:authorization failure)

There could be multiple reasons for dedupe being offline even when the services are started; this was just one example. One thing to note is that deleting the dedupe folder from the UI and re-importing it will not bring the dedupe folder online for the reasons discussed above.

Note: While creating or re-importing a Deduplication Storage Folder in Backup Exec, refer to pdde-config.log if an error is seen during creation/re-import. The log file is present in DedupeFolder\log.
See Deduplication Folder creation or recreation fails with "An Error Occurred while Creating the dedupli....

Sometimes it may be required to delete the EtcPath and ConfigFilePath String Value from the following Windows Registry location:
HKEY_LOCAL_MACHINE\SOFTWARE\Symantec\PureDisk\Agent.
Note: This only applies to re-import cases.
Incorrect use of the Windows registry editor may prevent the operating system from functioning properly. Great care should be taken when making changes to a Windows registry. Registry modifications should only be carried-out by persons experienced in the use of the registry editor application. It is recommended that a complete backup of the registry and workstation be made prior to making any registry changes.
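A minimal sketch of backing up the key and removing the two values with reg.exe (re-import cases only; run from an elevated command prompt):

  reg export "HKLM\SOFTWARE\Symantec\PureDisk\Agent" PureDiskAgent-backup.reg
  reg delete "HKLM\SOFTWARE\Symantec\PureDisk\Agent" /v EtcPath /f
  reg delete "HKLM\SOFTWARE\Symantec\PureDisk\Agent" /v ConfigFilePath /f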

5. After upgrading to Backup Exec 2014, the Deduplication Manager (SPAD) may crash on startup. See http://www.veritas.com/docs/000022713.



B. Backup fails when targeted to Deduplication folder and works when targeted to normal disk storage

It is important to narrow down the issue if the backup problem only occurs when the backup is directed to a deduplication folder. It may be caused by the kind of resource being backed up, by something outside of the deduplication storage that affects the backup, or by something within dedupe itself.
 

  1. The deduplication folder is online and the backup is queued.
One of the known reasons is mentioned in this article: http://www.veritas.com/docs/000014980. Another reason could be the presence of faulty media which cannot be used by Backup Exec to perform the backups. Perform an Inventory to make sure all OST media are readable.

There could also be other discrepancies; refer to another issue: http://www.veritas.com/docs/000088005

  2. Backups to dedupe may not run and show the status "ready; no idle devices available" even when there are no active jobs.
Restart all Backup Exec and Deduplication services. Ensure \\.pdvfs\servername\2\BEOST\BEOST_SDrv\ is empty when no backups are running on the deduplication device. If the drives do not get dismounted, then as a workaround the dedupe concurrency can be increased, but if this issue reoccurs then Veritas Technical Support should be contacted to check on it.

Note: Nothing should be deleted or touched in the above path. Just view the contents and attempt to restart the Backup Exec services to dismount the logical drives if there are any. The server name in the path should be replaced by the hostname of the server where the deduplication folder is located.
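A minimal sketch of viewing (not modifying) that path from a command prompt, with servername as a placeholder for the deduplication server's hostname (if the path cannot be browsed this way, use Windows Explorer instead):

  dir "\\.pdvfs\servername\2\BEOST\BEOST_SDrv\"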

  3. Client-side backups failing with read/write errors.
A backup using media server-side deduplication can be tested to check if the backup job still fails. If media server-side deduplication works, there may be a network issue (i.e. client to BE server) causing these errors. This is not the only possible reason, so test both client-side and media server-side deduplication backups. The setting to choose between client-side and media server-side deduplication can be edited from Backup Job -> Edit -> Storage section.

On the Backup Exec server, a KeepAliveTime registry DWORD can be created and set to decimal 5000 (note: the server needs to be rebooted) to test if it helps.
Refer to http://www.veritas.com/docs/000076435 for more information.
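A sketch of creating the value with reg.exe; the key path below is an assumption based on the standard Windows TCP/IP parameters location, so confirm it against the technote above before applying:

  reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v KeepAliveTime /t REG_DWORD /d 5000 /f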

If the client-side backup still fails, an SGMON log with Job Engine, Backup Exec Server, and Device and Media selected (from settings, select verbose for Device and Media) can be captured while the backup is running. Also get the SGMON log from the client server. More details about logging can be found here - http://www.veritas.com/docs/000005927.

There are some additional logs created which can also be collected for analysis by Veritas Technical Support staff: DedupeLocation\log\spoold\remoteclientserver\beremote\store

The above log location can be interpreted as spoold connecting to the remote server and to its process (the Beremote process), and Beremote having made a store connection (i.e. a backup) with the dedupe server. The logs inside this folder can be checked for connection reset errors in case the connection is being aborted. Antivirus software can sometimes cause these aborts, so in order to isolate this, if possible, uninstall the antivirus software and see how it goes. It is sometimes observed that disabling the antivirus application does not help, hence the recommendation to test by uninstalling it (note: this is not a solution, but doing so helps isolate the problem).

  4. Network issues may cause optimized duplication between the backup servers or LSUs to fail with these read/write errors - See http://www.veritas.com/docs/000103306

SGMON.exe is a utility which helps to debug many issues in Backup Exec. SGMON can be used to debug this issue while the optimized duplication is running. This needs to be done on the Backup Exec server which is running the optimized duplication operation.
Additionally, also look at the replication.log located at DedupeFolderLocation\log\spad on the primary deduplication backup server (i.e. the 1st copy). On the target side, i.e. where the optimized duplication is targeted to, check DedupeDrive:\log\spoold\primarydedupeBEservername\spad.exe\Store\ and review the largest logs for clues.

Also review Windows Event Viewer and ADAMM log from the Backup Exec server for more information.

Opt dupe between hardware OST appliances will need the SGMON and vendor OST plugin logs captured during the opt dupe failure to narrow down the issue.

  5. Client-side deduplication is enabled for this job, but it could not be used (http://www.veritas.com/docs/000041175). Review adamm.log in BE Install Path\logs. Open that log, search upwards from the bottom for the string "DeviceIo Discovery - start", and then read downwards. For the remote server for which the client-side exception is seen, check for any errors reported there, for example: [08792] 04/21/16 09:26:51.801 DeviceIo: Discover: configure remote OST server PureDisk:BE-CAS for dbclient1.lab.symc failed, error=7. More about the error can be found by running net helpmsg 7 in a command prompt; in this case the error is "The storage control blocks were destroyed." This could be related to DNS, or to a firewall disrupting the connections that the Backup Exec remote agent is trying to make to the Deduplication Engine and Deduplication Manager service ports. Updating the firewall rules, or adding correct IP and hostname entries to the hosts file on each server (the BE server and the remote server for which client-side dedupe is not working), may do the trick. After making the corrective change, restart the Backup Exec Device and Media service and check adamm.log again to see whether the same error appears for the same client. If the error no longer appears for the remote client, test the client-side backup again.



C. Restore from Deduplication folder does not work

  1. It is important that the backup set is verified, i.e. from within the deduplication store in the Backup Exec console -> Storage -> Dedupe Folder details -> Backup Set -> highlight the backup set that is being restored -> right-click on it and select Verify.
    1. If the Verify job fails: there may be an issue with that backup set, in which case check whether the backup was successful for that resource.
    2. If the Verify job is successful: duplicate the backup set to a normal disk storage (B2D) and then test the restore.
  2. Test the restore from other backup sets of the same resource to narrow down whether this is an issue with a specific backup set.
  3. Perform a backup of the same resource to a normal disk (i.e. new backup to B2D) and then perform a test restore.
Note: This may help to narrow down Backup Exec credential issues in some situations.
  4. One other thing that can be tested is to disable Client-Side Deduplication from the Deduplication Folder properties in the Backup Exec console -> Storage (again, this is just for testing, to narrow down and work around the issue). This will prompt a restart of the Backup Exec services. Once the services are restarted, attempt the restore and see how it goes. If this does not help, please contact Veritas Technical Support to investigate further.



D. Deduplication Storage Folder is full

a. Check if the deduplication device is nearing full capacity:
The Capacity column available in the Storage tab of the Backup Exec console shows the usage of a disk storage device. If it is RED in color, it is a warning that the deduplication device may be nearing its full capacity.
Note: 95% usage is the highest we have seen a deduplication folder get to. This is the time to reclaim space within the deduplication storage so that newer backups can run. It is recommended that "The percentage of disk space to reserve for non-Backup Exec operation" value never be lowered below 5%.

b. Check the deduplication statistics:
The following command can be run from a command prompt in the BE install path to check the real deduplication statistics:

Run crstats.exe --verbose --convert-size from the Backup Exec install path on the Backup Exec media server which hosts the Deduplication Storage Folder.

The parameters that need to be looked at are as follows:
  • Use Rate - should be less than 95, preferably in the 70-80 percent range, so that future backups can run without worrying about space on dedupe.
  • Catalog Logical Size - this is the front-end data (the uncompressed, original size) that is currently residing on the deduplication folder.
  • Space Allocated For Containers - this is the space taken by the dedupe containers, i.e. the content inside the Dedupe\data folder.
  • Space Used Within Containers - if this is near or the same as Space Allocated For Containers, then newer backups (if more unique data is to be backed up) will create new containers, hence more space will be needed. At this point, if the use rate is high and there is no space available within containers, the dedupe disk volume may need to be extended or space needs to be reclaimed by deleting existing backup sets.
  • Space Available Within Containers - each container within the Dedupe\data folder is 256 MB. Some containers might fill up completely and some may not. The leftover space from all containers is called space available within containers. If there is ample space within containers then backups can run, but again be cautious of the use rate.
  • Space Needs Compaction - this is the deleted/dereferenced space which is still taking up space within the dedupe containers (i.e. the bin/bhd files within Dedupe\data). It is not counted while calculating the used percentage, but if this is high (in TBs), run crcontrol.exe --compactstart 100 0 1 from an elevated command prompt in the BE install path (it may take a while to complete; a command sketch follows this list). To monitor the status, crcontrol.exe --dsstat can also be run to check the Deduplication Folder statistics.
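A minimal sketch of the statistics and compaction commands above, assuming a default Backup Exec install location (run from an elevated command prompt):

  cd /d "C:\Program Files\Symantec\Backup Exec"
  crstats.exe --verbose --convert-size
  crcontrol.exe --compactstart 100 0 1
  crcontrol.exe --dsstat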

c. Manual space reclamation:
If space needs to be reclaimed, follow this technical article: Manual space reclamation for Deduplication Storage Folder in Backup Exec 2012 and above.

Note: It is recommended to stop backups (i.e. back up to another storage device) while attempting to manually reclaim space within the dedupe folder, since it may be difficult to identify how much space has been reclaimed if data is constantly being added while the reclaim process is being carried out.


Points to Remember when Reclaiming Space Manually from Deduplication Storage:

  1. From the dedupe stats, review the deduplication ratio. If it is high, i.e. you are getting a very good dedupe ratio, you may need to expire more sets to release some space; this is because dedupe does not release the space for a chunk until the last reference to that chunk is gone (refer to the reclaim technote for more details).
  2. When backup sets are manually expired from the Backup Exec console, the core BE services delete the media and catalog references for that backup set from within BE (BEDB, catalogs, etc.). This activity is recorded in the BE Audit Log (go to Backup Exec console -> Configuration and Settings -> Audit Log -> choose the "Backup Set Retention" category from the drop-down to check whether the media used for that backup set was deleted or not). For assistance in identifying which media was used by the backup set, see point no. 2 under the solution in article number 000107956.
  3. If point 2 has worked (if not, contact Technical Support), the BE Deduplication Manager is notified that these media references need to be deleted from within the deduplication folder. The Deduplication Manager then delegates the responsibility to the Deduplication Engine service to delete these references and update the Deduplication Engine database (to update the dedupe engine database, tlogs are created; tlogs are a way to perform updates to the deduplication database). To commit the tlogs, queue processing (as the manual reclaim technote mentions) needs to be performed a couple of times, and that is when the dedupe stats need to be checked to decide whether more backup sets need to be expired to lower the use rate (see the sketch after this list).
  4. Space available within containers increases after the manual reclaim process is followed. Space needs compaction may also be high after this process. crcontrol.exe --compactstart 100 0 1 can be run to give the space available within containers and the space needing compaction back to the file system. This command may need to be run a few times (use a command prompt and run it from the BE install path).
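As a sketch of the queue-processing step referred to in point 3, assuming a default Backup Exec install path (expire backup sets from the console first, as the reclaim technote describes, then commit the tlogs and re-check the stats; see the compaction sketch earlier for --compactstart):

  cd /d "C:\Program Files\Symantec\Backup Exec"
  crcontrol.exe --processqueue
  crcontrol.exe --processqueue
  crstats.exe --verbose --convert-size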

Important:

  • DO NOT delete any files from within the deduplication storage folder unless it has been validated by Veritas Technical Support.
  • It is highly recommended to be on Backup Exec 2014 or 15, which use deduplication version 7 and run a lot faster and smoother compared to previous versions.