Data Domain pools getting filled up, backups failing
Hi experts,
Our setup has a Data Domain as the destination storage (OS: 5.5.3.1-509919, Model: DD890).
We are facing a serious concern with disk space utilization.
The high water mark is configured to 95% and the low water mark to 85%.
Full backups kick off on Friday evenings, and some of them fail due to storage-full errors.
The DD has currently reached 96.4% disk utilization, but I don't see any duplicated images being deleted. It has been almost 48 hours and we don't see any change in the available space numbers.
Environment:
NetBackup master server 7.7.1
STU - DD890 (primary STU; duplications to tape library)
My understanding is that once the high water mark is reached, images that have already been duplicated should be deleted from the disk automatically.
In our case this does not seem to be happening, which could be one of the reasons for the failures.
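For reference, the water mark settings and the current fill level as NetBackup sees them can be checked from the master server with nbdevquery (assuming the disk pool is registered with the DataDomain OST server type):

# show the disk pool(s) with their high/low water mark settings
nbdevquery -listdp -stype DataDomain -U
# show the disk volumes and how full they currently are
nbdevquery -listdv -stype DataDomain -U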
Can someone help me solve this issue?
Cheers.
You can make a list of the image IDs with timestamps older than 2 weeks (roughly 1452816000 and below).
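A rough sketch of how to pull that list together (the date range and file path are just examples, and the exact output columns should be checked before reusing them):

# list images whose SLP processing is still incomplete
nbstlutil stlilist -image_incomplete -U
# list backup IDs (with timestamps) created in a given date range
bpimagelist -idonly -d 01/01/2015 -e 01/15/2016

From the bpimagelist output, keep only the backup-ID column and save it to a file (e.g. /tmp/bidlist.txt) for later use with -Bidfile.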
Your management will need to make a decision here: if you cancel the SLPs, the normal expiration date will be applied and the images on the DD will expire without being duplicated to tape.
You can delay the older images by setting them to Inactive. This will give newer images a chance to be submitted.
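A minimal sketch of that, with a placeholder backup ID:

# suspend SLP processing for one image
nbstlutil inactive -backupid client01_1452816000
# resume it later, once the backlog is under control
nbstlutil active -backupid client01_1452816000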
Images with newer timestamps can be cancelled and manually duplicated using bpduplicate with a -Bidfile that contains the image list.
Again, care must be taken that the images can be duplicated before the 2-week expiration is reached.
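Something along these lines, assuming /tmp/bidlist.txt contains one backup ID per line and the tape storage unit name is a placeholder:

# take an image out of SLP control (repeat per backup ID)
nbstlutil cancel -backupid client01_1452816000
# then duplicate the listed images to the tape storage unit
bpduplicate -Bidfile /tmp/bidlist.txt -dstunit tape_stu_01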
We cannot make this decision for you, but as long as your backlog is bigger than the number of backups added per day, image expiration and cleanup is not going to happen.
You can only manually duplicate images if they are no longer under SLP control.
The problem with a backlog is that it becomes more and more difficult to catch up.
Every time a duplication is attempted and does not complete successfully, the retry interval is pushed out further and further.
SLP will always run the oldest outstanding jobs before newer ones. The only way around this is to cancel older jobs or set them to inactive.
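To keep an eye on whether the backlog is actually shrinking, a simple check run once or twice a day should be enough (a sketch, not the only way):

# summary of outstanding SLP work
nbstlutil report
# detailed list of images still awaiting SLP processing
nbstlutil stlilist -image_incomplete -U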
PLEASE PLEASE PLEASE read through the Best Practice Guide....