We have had the issue that our Backup Exec dedup disk (approx 14 TB) was close to running out of space.
After some investigation I found that running the "crcontrol.exe --processqueue" command could do some cleanup.
The first time I ran it, it discovered about 44,000 tlog files in the queue folder, so I disabled all backup jobs to let it work through them.
Now, a week later, files are still being created in the folder. It has cleaned the folder out several times, but it constantly seems to generate more tlog files to work on, between 100 and 2,000.
Running "crcontrol.exe --processqueueinfo" keeps reporting Busy=yes and Pending=yes.
A few times I have caught Pending=no, but the scheduled runs at 12:20 and 00:20 seem to reset Pending back to "yes".
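If you want to watch for the Pending=no window without staring at the console, one option is to poll the command and parse its output. This is only a sketch: the parsing below is based on the "Busy=yes" / "pending=yes" lines quoted above, and the exact output format of crcontrol.exe may differ between Backup Exec versions, so adjust the pattern to what your installation actually prints.

```python
import re

# Parse status text like "Busy=yes\npending=yes" into a dict of booleans.
# The key=value format here is an assumption taken from the output quoted
# in this thread; verify it against your own crcontrol.exe output.
def parse_processqueue_status(text):
    status = {}
    for match in re.finditer(r"(\w+)\s*[=:]\s*(yes|no)", text, re.IGNORECASE):
        status[match.group(1).lower()] = match.group(2).lower() == "yes"
    return status

sample = "Busy=yes\npending=yes"
print(parse_processqueue_status(sample))  # {'busy': True, 'pending': True}
```

In a real script you would feed this function the captured stdout of "crcontrol.exe --processqueueinfo" (e.g. via subprocess) inside a loop with a sleep, and log a timestamp whenever pending flips to no.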
The "crcontrol.exe --queueinfo" reports:
total queue size: 12440670369
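For scale, that queue size appears to be a raw byte count (the unit is my assumption; crcontrol prints the number without a label), which works out to roughly 11.6 GiB:

```python
# Convert the reported queue size to GiB. Treating the figure as bytes
# is an assumption, since the command output gives no unit.
queue_bytes = 12440670369
queue_gib = queue_bytes / 1024**3
print(f"{queue_gib:.1f} GiB")  # → 11.6 GiB
```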
Am I supposed to stay patient and pray nothing happens, or should I just start the backup jobs again?
So far the cleanup has freed about 3 TB of space.
We have seen some very extended timeframes when running various command line utilities against deduplication storage, and usually you have to let them run. These extended timeframes are usually a combination of the size of the deduplication data and issues that have built up within deduplication over time.
However, if you are concerned you should log a formal support case, because when deduplication folders do go wrong the issues are typically not something that can be fixed without formal assistance.
Some technotes that might help you reclaim the space manually:
https://www.veritas.com/support/en_US/article.000005365 (this article is for NetBackup, but the deduplication process is similar)