Omar@ This has been invaluable!!
We've been fighting with an insane backlog for months with no easy solution in sight.
After just a couple of days of running this script, we've already identified multiple areas for improvement.
For starters, we had 825 TB of "unmanaged" images in the Queue ... which was about 80% of our images.
I found a tech note on how to clean that part up.
In addition, we have over 125,000 images under 100 MB in size.
We are using LTO4s and have the following configured:
MIN_GB_SIZE_PER_DUPLICATION_JOB = 200
MAX_GB_SIZE_PER_DUPLICATION_JOB = 800
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB = 120
DUPLICATION_SESSION_INTERVAL_MINUTES = 15
... any advice on how to tweak it further to account for the large number of tiny images?
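For context on the scale of the small-image problem, here is a rough back-of-envelope sizing sketch against the settings above. The 50 MB average image size is an assumption for illustration only; the real average should come from the script's report.

```python
# Back-of-envelope sizing for the small-image backlog.
# AVG_SMALL_IMAGE_GB is a hypothetical average (50 MB); replace with
# the measured average from the actual report.

SMALL_IMAGE_COUNT = 125_000
AVG_SMALL_IMAGE_GB = 0.05                 # assumed 50 MB per image
MIN_GB_SIZE_PER_DUPLICATION_JOB = 200
MAX_GB_SIZE_PER_DUPLICATION_JOB = 800

# Total payload of the small images at the assumed average size.
total_gb = SMALL_IMAGE_COUNT * AVG_SMALL_IMAGE_GB

# How many tiny images must accumulate before a job reaches MIN size.
images_per_min_job = MIN_GB_SIZE_PER_DUPLICATION_JOB / AVG_SMALL_IMAGE_GB

# Fewest jobs needed if every job fills to MAX size.
jobs_at_max = total_gb / MAX_GB_SIZE_PER_DUPLICATION_JOB

print(f"Total small-image payload: {total_gb:,.0f} GB")
print(f"Images needed to reach MIN job size: {images_per_min_job:,.0f}")
print(f"Jobs if each fills to MAX size: {jobs_at_max:.1f}")
```

Under that assumed average, roughly 4,000 images have to queue up before a job even hits the 200 GB minimum, which is why the 120-minute force timer ends up driving most small-image jobs.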