We have been using BE 2012 for the past 2 months and it's been quite good, but all of a sudden my jobs simply stay in a queued state when targeted at the deduplication store.
I have come across others who have the same issue, and it's due to "bad backup sets" on media; it seems to sort itself out after a week or so of cleanups and deleting the bad backup sets...
Has anybody else come across this, and could it be a known issue?
I have been doing that for most of today and have now just given our server a restart; I have managed to get the following levels:
************ Data Store statistics ************
Data storage Raw Size Used Avail Use%
9.1T 8.7T 2.4T 6.3T 28%
Number of containers : 14472
Average container size : 183005833 bytes (174.53MB)
Space allocated for containers : 2648460417671 bytes (2.41TB)
Space used within containers : 2572606575016 bytes (2.34TB)
Space available within containers: 75853842655 bytes (70.64GB)
Space needs compaction : 2297756075 bytes (2.14GB)
Reserved space : 450554834944 bytes (419.61GB)
Reserved space percentage : 4.5%
Records marked for compaction : 1576111
Active records : 31641814
Total records : 33217925
Use "--dsstat 1" to get more accurate statistics
That looks like it's pretty much cleared out, so I am going to run some jobs tonight and see if it continues.
It's currently running a dummy job, and I seem to be getting a lot of the following errors in SGMON:
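For anyone reading the dump above, the derived figures do hang together; a quick sanity check (just arithmetic on the numbers copied from the stats output, nothing vendor-specific):

```python
# Sanity-check the derived figures from the "--dsstat" output above.
allocated      = 2648460417671  # Space allocated for containers (bytes)
used           = 2572606575016  # Space used within containers (bytes)
containers     = 14472          # Number of containers
total_records  = 33217925
marked_records = 1576111        # Records marked for compaction

available     = allocated - used             # "Space available within containers"
avg_container = allocated // containers      # "Average container size"
active        = total_records - marked_records  # "Active records"

print(available)      # 75853842655 bytes (~70.64 GB)
print(avg_container)  # 183005833 bytes (~174.53 MB)
print(active)         # 31641814
```

So the store really does report about 70 GB free inside already-allocated containers, with roughly 4.7% of records still flagged for compaction.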
ERROR = 0xE0008214 (E_CHG_SOURCE_ELEMENT_EMPTY)
PVLSVR: [11/29/12 17:17:01]  11/29/12 17:17:01.515 PvlSession::MountOverwriteMedia() - mount error ERROR = 0xE0008214 (E_CHG_SOURCE_ELEMENT_EMPTY)
PVLSVR: [11/29/12 17:17:01]  11/29/12 17:17:01.517 PvlSession::MountOverwriteMedia() - qualified drive Deduplication disk storage 0001:3 slot 0718
It's patched up to date excluding hotfix 194470, as it doesn't fix any issues I'm having.
The link below in the thread describes the same issue; check whether you still have any bad media left, as that could be causing this.
I have already had a look to try and find the troublesome backup sets and have run an inventory on the dedupe store, but I end up with 200+ errors which say something along the lines of the following:
Inventory Device 00065 -- Media Label: OST00000799-4B5CEDACE87C9821 -- A backup storage read/write error has occurred.
If the storage is tape based, this is usually caused by dirty read/write heads in the tape drive. Clean the tape drive, and then try the job again. If the problem persists, try a different tape. You may also need to check for problems with cables, termination, or other hardware issues.
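With 200+ of those, picking out which media are actually implicated is tedious by hand. One way is to export the error text and pull the labels out with a quick script; a rough sketch, which only assumes the lines look like the inventory error above:

```python
import re

# Sample line in the shape of the inventory errors above (assumed export format).
log = ("Inventory Device 00065 -- Media Label: OST00000799-4B5CEDACE87C9821 -- "
       "A backup storage read/write error has occurred.")

# Collect the unique OST media labels mentioned in the errors.
labels = sorted(set(re.findall(r"Media Label:\s*(OST\S+)", log)))
print(labels)  # ['OST00000799-4B5CEDACE87C9821']
```

Feed it the whole exported log instead of one line and you get a deduplicated list of suspect media to cross-check against the hidden media view.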
I have enabled the ShowHiddenMedia reg key and have looked at the dedupe store media, and the OST file in question is either not there or it is scratch media.
So I don't quite understand what's going on behind the scenes if that's the case... I did come across this, which may be a bug:
The link you have given does indeed show an issue; I would request that you open a formal case with Symantec to investigate whether it matches and to get an update on it.
Jobs over the weekend worked, but very slowly; they seem to lock up in the queued state while they are trying to mount and erase media.
I don't understand why it's doing this, and it's incredibly frustrating. What's the point of backing up data if the jobs never finish so the others can run?
I will raise a support request with Symantec.
We had a similar issue; I ended up dismounting and remounting the deduplication pool with tech support from Symantec. My jobs would sit queued forever or start to work, only for the BE engine to crash.
Basically you will disable and delete the disk storage and then create a new disk storage, pointing it to the deduplication folder.
You will need to know the username/password it uses to connect to that folder though.
Also, double-check that you have used the default folder name; my coworkers had not, and I ended up creating a second deduplication folder.
I would, however, verify with Symantec that this is what is causing your issue before going ahead and doing it.