12-21-2014 04:00 PM
The appliance was R2 and was upgraded to R3 twice, once in September and once again at the end of October. OpsCenter crashed on the first appliance.
3 months ago I was sitting at 2.3 to 2.5TB of dedupe storage at a 16.1:1 ratio; this was directly before the first upgrade, so I had about 37.03-40.25TB of hydrated data.
Since then I upgraded the appliance around Sept 16. The adamm.log is where you can really see the storage start to grow from that point on: within 2 weeks it climbed a full TB. Now I am at 4.88TB and about to fill up the storage, and I am manually deleting the oldest backup sets trying to reclaim space. My dedupe ratio is now at 10:1.
Since I have only added 7-8TB of hydrated data, yet have doubled the amount of space taken up by the dedupe store, something seems wrong.
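To sanity-check that math with a quick script (the 2.4TB midpoint is my own pick from the range above; hydrated size is just stored size times the dedupe ratio):

```python
# Back-of-the-envelope check of the dedupe numbers in this thread.
# Hydrated (front-end) data = stored size * dedupe ratio.
def hydrated_tb(stored_tb, ratio):
    return stored_tb * ratio

before = hydrated_tb(2.4, 16.1)   # pre-upgrade: ~38.6 TB (matches the 37-40 TB range)
after = hydrated_tb(4.88, 10.0)   # now: ~48.8 TB

print(round(before, 1), round(after, 1), round(after - before, 1))
# Hydrated data grew by roughly 10 TB, but the stored footprint went
# from 2.4 TB to 4.88 TB, i.e. it roughly doubled.
```

So a ~25% increase in hydrated data should not double the deduped footprint unless the new data is nearly all unique.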
I have only added 3 servers to the mix since Sept 16th, which each house about 50GB of data.
All jobs follow a basic template, with a few exceptions: one full a month with differentials in between. The fulls are kept for 95 days and the differentials for 32 days.
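For reference, that retention template works out to roughly this many sets on disk per server at steady state (a rough sketch; one differential per day between fulls is my assumption, the actual cadence may differ):

```python
import math

# Steady-state backup sets per server under the template above.
full_interval_days = 30        # one full a month
full_retention_days = 95       # fulls kept 95 days
diff_retention_days = 32       # differentials kept 32 days

# A new full arrives every 30 days and each lives 95 days,
# so up to ceil(95/30) fulls coexist.
fulls_on_disk = math.ceil(full_retention_days / full_interval_days)
diffs_on_disk = diff_retention_days  # assuming one differential per day

print(fulls_on_disk, diffs_on_disk)  # 4 fulls, ~32 differentials
```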
I have a ticket open on this, but the tech doesn't believe anything is wrong. We looked in the audit log and it shows media being reclaimed and deleted, but I don't think this is happening consistently.
Right now I have a lot of expired backup sets that are not being deleted. Some, I think, are affected by the known bug that fails to delete backup sets that were taken with 2012. Others, though, I expired yesterday morning and they are still there; others are deleted right away.
My environment is all Windows: almost half 2012 R2, almost half 2003, a couple of 2008, and 2 Windows 7 machines. Only 2 or 3 of the 20 servers being backed up have more than 100GB on them, and only 10 of them have more than 50GB. One of the servers with more than 100GB has 800GB of data on it, a quarter of it pictures, which I know do not dedupe well, but this server was being backed up pre-upgrade to 2012 with no issues. The other server with more than 100GB is a file server with 200-300GB on it, which resides on Windows deduplicated storage that takes it down to 150-200GB.
To me there is no way, in my environment, that going from 37-40TB of hydrated data (2.3-2.4TB deduped at a 16.1:1 ratio), where it sat for the 4 months leading up to the 2014 upgrade, to 47-48TB of hydrated data (4.88TB deduped at a 10:1 ratio) within 2 months of the upgrade is correct. The only reason I haven't filled up my dedupe storage yet is that I am manually expiring backup sets. But it is still climbing, the dedupe ratio is going down, and I won't be able to fight it off much longer without doing more drastic expirations.
Is anyone else experiencing something similar? Is there any more information I can give that would help?
12-27-2014 01:50 AM
...BE 2014 SP2 is out, so it might be worthwhile upgrading to this if you haven't already.
01-05-2015 02:07 AM
This can be caused by almost anything, to be honest. Are you sure there isn't a server pushing out unique files somehow?
Maybe you have a job set up the wrong way and you are backing up duplicate data?
I can understand that the TSE says there is nothing wrong; maybe the expirations that don't work could be called wrong, but that is still hard to prove.
01-05-2015 05:47 AM
I was able to find some data to delete that cleared up 1TB. This was due to some .tar files (compressed files, basically). A tech assured me that they would dedupe; well, that was crap, because a full backup is only 220GB and then a few differentials later it went up a TB. The day after I deleted them it dropped from 4.8 to 3.8, and I don't think that was a coincidence.
My plan is to install SP2, if I can get help, that is. I have an appliance and it isn't out on Live Update for it yet. According to Backup Exec support I am not allowed to install anything on my own.
We definitely have some unique files, but it's not likely that is the issue if we sat at 2.3-2.5ish, possibly 2.6ish every now and then, for 5 straight months, and then shot up to double that in a span of 1-2 months. Only 2-3 servers were added, and they only house around 50GB of data apiece.
I just set up fulls and then differentials on all jobs, besides some where I do a full every day because it is faster for some reason. So I am not sure how I could be backing up duplicate data anyway. According to one tech everything can be deduped, so in theory it wouldn't matter if I was backing up duplicate data; it would only take up one spot anyway.
Hopefully those expirations will be solved by SP2.
01-05-2015 10:43 PM
Stuff like tar files and file servers in general don't dedupe well.
If you back up the same tar file then yes, but not 500 different ones.
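The reason is easy to demonstrate: change one byte in the input and the compressed stream diverges, so two near-identical archives share very few blocks once compressed. A toy illustration with zlib (not Backup Exec's actual chunking, just the general principle):

```python
import zlib

# Two "archives" whose sources differ by a single byte in the middle.
src_a = b"x" * 50000 + b"A" + b"y" * 50000
src_b = b"x" * 50000 + b"B" + b"y" * 50000
comp_a, comp_b = zlib.compress(src_a), zlib.compress(src_b)

# Find where the two compressed streams first differ.
first_diff = next((i for i, (p, q) in enumerate(zip(comp_a, comp_b)) if p != q),
                  min(len(comp_a), len(comp_b)))
print(len(comp_a), first_diff)
```

The sources are 99.99% identical, yet the compressed streams stop matching partway through, so a block-level dedupe engine sees mostly unique chunks.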
Deleting data won't always clear up storage; it seems you were very lucky :)