Backup Exec deduplication storage size keeps growing

Nishant_p
Level 4
Partner Accredited

The issue is with the Backup Exec 2012 deduplication folder: it keeps growing. Two months ago I had the same problem and manually reclaimed space; for a month the data size was fine, but now it is suddenly growing again, into the terabytes.

Any idea why?

11 REPLIES

Rapgg
Level 3

What's the retention set to on the backup jobs? Deduplication is set to reclaim space automatically, but you will find that some backup sets are older than their expiration. The reason behind this is that it is using some of the data from this set to make up data in the newer and still current media sets as the resource being backed up hasn't changed since the initial backup. Deleting these old media sets could put you at risk of losing current backup data, so I can't recommend it. I also believe that the next time a backup runs it will see that the set is missing and back up the missing data again, filling up the space you had reclaimed, but I could be wrong on that... anyone else want to verify?

 

If you are running out of space on your deduplication drive, I would recommend changing the retention/expiration date of your backup jobs first and foremost. Maybe start with a week's allowance, get a feel for the usage after that week, and build up from there.

 

Also, can you post back the current deduplication compression ratio and the rough amount of data being backed up over, say, a week? It might help us see where all the space is being used.

pkh
Moderator
   VIP    Certified

The reason behind this is that it is using some of the data from this set to make up data in the newer and still current media sets as the resource being backed up hasn't changed since the initial backup. 

This is not true.  When dedup reclaims space from expired backup sets, the chunks that are still referenced by some newer backup sets will not be reclaimed.

Rapgg
Level 3

Hi PKH, thanks for the clarification. That's roughly what I was getting at, but you said it much better. My point was that the old expired media set will still be listed in the media sets and not be fully reclaimed (because of the reason you mentioned: the data is still being referenced by a newer/current backup set).

pkh
Moderator
   VIP    Certified

No. The expired backup sets will be deleted, and any of their chunks which are no longer referenced will be reclaimed. Only the chunks which are still referenced will remain.
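In other words, this amounts to reference counting on chunks. A minimal Python sketch of the idea (hypothetical; this is not Backup Exec's actual code, and the class and method names are made up):

```python
# Hypothetical sketch of reference-counted chunk reclamation: deleting an
# expired backup set only frees chunks that no other set still references.

class DedupStore:
    def __init__(self):
        self.refcount = {}   # chunk id -> number of backup sets referencing it
        self.sets = {}       # set name -> list of chunk ids

    def add_set(self, name, chunks):
        self.sets[name] = list(chunks)
        for c in chunks:
            self.refcount[c] = self.refcount.get(c, 0) + 1

    def expire_set(self, name):
        """Delete a set; reclaim only chunks whose refcount drops to zero."""
        reclaimed = []
        for c in self.sets.pop(name):
            self.refcount[c] -= 1
            if self.refcount[c] == 0:
                del self.refcount[c]
                reclaimed.append(c)
        return reclaimed

store = DedupStore()
store.add_set("week1", ["a", "b", "c"])
store.add_set("week2", ["b", "c", "d"])   # shares chunks b and c with week1
print(store.expire_set("week1"))          # only "a" is freed; b and c remain
```

The expired set disappears from the catalog immediately, but chunks shared with a current set stay on disk until their last reference expires.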

Rapgg
Level 3

Not to disbelieve you, but I had logged a case with Symantec regarding the two appliances I manage not removing old media sets (they seemed to do it eventually, some weeks after they had expired). These were also media sets not linked to any other job (incremental/differential/etc.). The tech informed me the old sets still had data in them being referenced by the dedupe store. I found this odd myself, as what you're explaining makes more sense, but I figured that if it's coming from the Symantec tech, it "should" be correct.

 

I might look into this further, as I want to be sure of the answer. Thanks PKH for bringing that to my attention :)

Nishant_p
Level 4
Partner Accredited

I just checked the backup sets under deduplication in the Backup Exec console. There are a number of duplicate copies of one backup set, with the same name and the same backup date. I don't know why.

Rapgg
Level 3

How large are the backups and how frequently are the jobs meant to be running?

 

Regarding the multiple sets of the same job: how many are there of that one job, and are they all the same size? How much data in total should be getting backed up by that job in particular?

Example: are there 6x 50GB media sets and 1x 23.5GB set? Is that backup job backing up roughly 50GB of data, or should it be 323.5GB, etc.?

Nishant_p
Level 4
Partner Accredited

1.7 TB x 40. They have the same data, time, and size, i.e. identical duplicate copies.

 

I have checked the backup history; it is a weekly backup only.

Rapgg
Level 3

Hi Nishant, a few more questions just to get a good scope of what's happening.

 

What's the total size of the deduplication drive?

What's being backed up (virtual environment, 1x physical server, data/CIFS share, etc.)?

How much data is meant to be getting backed up? (Example: 1 server being backed up, with a C: drive that has 15GB of used space and a D: drive with 150GB of used space, totalling 165GB of backed-up data.)

What would be the consequence of deleting all of those backup sets? (Not that I'm suggesting this at all; just trying to get a feel for the repercussions in case someone's suggestion results in a backup media set being lost altogether.)

Nishant_p
Level 4
Partner Accredited

I have around 10 TB of deduplication storage. It is a SQL database; initially, if I run the full backup, it is around 1.7 TB.

 

Yesterday, for testing purposes, I deleted almost all the duplicate backup sets for that particular server and ran the full backup job. Today I checked, and it is the same issue: under the deduplication backup sets there are around 23 copies of one backup set, each set taking 1.7 TB.

Since there are 23 copies, the backed-up data size should be around 39 TB, but because of deduplication it is not that much. So now I am sure my deduplication storage is getting full because of these particular backup jobs.
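Sanity-checking the arithmetic on those figures (assuming exactly 23 retained sets of 1.7 TB each in the 10 TB store):

```python
# Assumed figures from the thread: 23 retained full-backup sets, 1.7 TB each,
# held in a 10 TB deduplication store.
copies = 23
full_backup_tb = 1.7
logical_tb = copies * full_backup_tb       # front-end size if nothing deduped
store_tb = 10
min_ratio = logical_tb / store_tb          # dedup ratio needed just to fit

print(f"logical size: {logical_tb:.1f} TB")        # ~39.1 TB
print(f"needs at least {min_ratio:.1f}:1 dedup")   # ~3.9:1
```

So even with every set retained, the store only survives because the copies dedupe against each other at roughly 4:1 or better.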

 

My next plan is to run the backup to tapes and monitor it. If you have any further suggestions, please let me know; I would like to hear them.

Rapgg
Level 3

Thanks for the update, Nishant. Any chance I can get you to delete the backup job and set it up again? If it's backing up SQL databases, can I get you to set up the new backup job backing up one database of a reasonable size (50-100GB maybe) and see what the results are? If that works, include a few more databases (maybe half of the databases on that server) and again check the results. If that works (and there's still no weird duplication), do a full SQL backup and check the results.

Just to confirm, you said it was a SQL backup that you're running. Is it a full server backup (C: drive, system state, etc.) or just a SQL full backup (the only option selected being the "Microsoft SQL Server Instance")?

 

Also, your deduplication store is 10TB of total storage, and the SQL backup you are making is 1.7TB total?

 

Can you tell us what your deduplication compression ratio is? If you go to the storage view, it should be shown in the column beside the capacity column. The two appliances I manage seem to hover around a 4:1 ratio, but this will vary a lot from site to site depending on the jobs, frequency of backups, etc.
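For anyone unsure what that ratio expresses: it is just front-end (protected) data divided by the space actually consumed on disk. A tiny example with made-up numbers:

```python
# Made-up numbers to show what a dedup ratio expresses: the total data
# protected by the store divided by the disk space it actually occupies.
protected_gb = 4000.0   # hypothetical: front-end data protected by the store
used_gb = 1000.0        # hypothetical: space consumed in the dedup folder
ratio = protected_gb / used_gb
print(f"dedup ratio: {ratio:.0f}:1")   # 4:1
```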