
5230 - partition usage has exceeded warning threshold and will soon reach full capacity

Stanleyj
Level 6

For several months now I have been getting the following warning every single day.

  • The partition usage has exceeded warning threshold and will soon reach full capacity. Cleanup the partition and re-check status. If the issue is not resolved, contact Veritas Technical Support for assistance.
    • Time of event: 2017-05-01 16:32:03 (-04:00)
    • UMI Event code: V-475-103-1001
    • Component Type: Partition
    • Component: MSDP
    • Status: 88%
    • State: WARNING

The Status has been steadily climbing by 1% every week and I can't seem to figure out why.  I contacted support and they said my retentions are too long and to shrink them.  I cut all of them except the monthlies by 75% of the retention period and then closed the ticket.  Several weeks have gone by and it still continues to grow.

This appliance is only used for replicating data offsite, and the policies match except for the monthly backups.  Monthlies are kept 2 years on my primary appliance and 7 years on this one.  But all other jobs match or have shorter retentions.

We are only 3 years into this model of appliance, so 12 more monthly backups shouldn't be putting me in the almost-full range, considering I am keeping a quarter less of the weeklies, dailies, and incrementals.

When I look in the web console it shows I have 74% used with a 96% deduplication ratio.  Has anyone run into this recently with the newest code release, or have some suggestions on how I can see what is actually consuming all the space?  Thanks

NBU 8.0

5230 - 3.0 (MSDP - 35 TB)

14 REPLIES

elanmbx
Level 6

Are you using all/most of your disk?  Or is there available space on the array that hasn't yet been allocated?  Your mention of "74% used" has me wondering...

I did discover that I had about 7 TB on the array that had not been allocated, so I added it right after I created this post.  This only drove the warnings down to say that I was exceeding 81% of capacity.  Maybe since changing the policy retentions I need to give it a few weeks to see if they purge out correctly, because cutting my retentions on all jobs by 75% should make a huge difference, I would think.

Mouse
Moderator
Partner    VIP    Accredited Certified

Hi mate,

Think about the 7-year retention one more time. Every day your backups accumulate more and more unique data which cannot be deduplicated. That accumulation of unique data is what is likely driving your disk usage: the more data you store at 7-year retention, the more space it will consume.
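This effect can be sketched with a toy model. The function and all numbers below are purely illustrative assumptions (a constant amount of non-deduplicable data per daily backup), not appliance measurements:

```python
# Toy model of how retention length drives MSDP pool growth.
# All figures here are hypothetical illustrations.

def pool_size_gb(daily_unique_gb: float, retention_days: int, days_elapsed: int) -> float:
    """Unique data retained in the pool after `days_elapsed` of daily backups.

    Each day adds `daily_unique_gb` of new, non-deduplicable data; a backup's
    unique chunks are only reclaimed once it expires after `retention_days`.
    """
    retained_backups = min(days_elapsed, retention_days)
    return daily_unique_gb * retained_backups

# With a hypothetical 10 GB/day of unique data, after 3 years (1095 days):
one_year = pool_size_gb(10, 365, 1095)        # pool stopped growing after year 1
seven_year = pool_size_gb(10, 7 * 365, 1095)  # still growing; nothing has expired yet

print(f"1-year retention: {one_year:,.0f} GB")    # 3,650 GB, flat
print(f"7-year retention: {seven_year:,.0f} GB")  # 10,950 GB and climbing
```

The point of the sketch: a 1-year pool reaches steady state once the oldest backups start expiring, while a 7-year pool grows monotonically for seven years before anything is reclaimed.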

You may want to start checking individual client dedupe rates - if you have databases or systems which contain significant amounts of compressed and/or encrypted data, these can wreak havoc with the dedupe rates.

I have been picking away at our poor-dedupe clients for over a year.  It does make a difference...

I was expecting the 7-year retention to come into play at some point, but I wasn't expecting it this soon, considering this particular appliance is only for AIR data and not all of it is being replicated.  My primary appliance, which is identical, is backing up a lot more data that is not being sent offsite.  Yes, the retention on the monthlies is only 1 year, but my dailies and weeklies are kept far longer.  Unfortunately I can't change the 7-year portion, and I was hoping to phase out tape, so I guess I just need to keep whittling away at other jobs or possibly look into more storage.  Sometimes you just have to pay up.  :)

Thank you all for the help and suggestions.  I truly appreciate it.

 

Sometimes, yes, you do, but in watching this thread a thing occurs to me -- it's a long shot but thought I'd ask.  At any time have you worked with support on this box and they suspended or modified the cleanup schedule?  If so, it could be that the objects aren't being fully cleaned up and you've got either orphans or garbage in the pool that hasn't been cleaned up properly.

Like I say, it's a super long shot, but something to consider.

Charles
VCS, NBU & Appliances

t_jadliwala
Level 4
Partner Accredited

If you have OpsCenter installed in your environment, then you can find the pre- and post-deduplication ratios by policy type, and it will further segregate them by client.

t_jadliwala
Level 4
Partner Accredited

Also, for your NetBackup environment, try running the pogather utility to find out whether there are any orphan images lying on your appliance; it will help you clean up space.

Sorry for the long overdue response, but how do I get my hands on this pogather utility?  And how would I also check the cleanup schedules?  Because I have changed all the retentions on the appliance down to 1 week except for the monthlies, and it's still growing.

I was hoping at the very least to see a slight drop in consumption to know that changing a retention did something, but nope.  It's been almost 2 weeks and the storage has grown another 2%.
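For what it's worth, a linear back-of-the-envelope projection from the rough figures in this thread (~81% used, ~2 percentage points of growth over ~2 weeks) gives a sense of the runway left before the pool fills:

```python
# Linear projection of pool fill-up from the growth rate reported above.
# Figures are the rough ones mentioned in this thread, not measurements.

used_pct = 81.0                  # current capacity warning level
growth_pct_per_day = 2.0 / 14.0  # ~2 points of growth over ~2 weeks

days_to_full = (100.0 - used_pct) / growth_pct_per_day
print(f"At this rate the pool hits 100% in about {days_to_full:.0f} days")
```

About four and a half months, assuming growth stays linear, which it may not once large images start expiring.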

Marianne
Moderator
Partner    VIP    Accredited Certified

You can get the tool by logging another Support call with Veritas. 

Hopefully your response time to them will be a bit better than here.... :P

andrew_mcc1
Level 6
   VIP   

No problem running pogather, but I would strongly urge you to examine what is actually stored in the dedupe pool, as advised earlier. OpsCenter or bpimagelist can help you do this and estimate expected back-end storage usage. Also compare this with the original sizing from 3 years ago (this was done back then, right?).

Finally 7 years is a very long retention for a dedupe pool; dedupe rates for backups decrease as they age as other backups taken around the same time expire...

Andrew
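To make the estimate Andrew describes concrete, here is a rough sketch. The client names, front-end sizes, and dedupe rates are invented examples standing in for the kind of per-client figures OpsCenter or bpimagelist reporting can supply:

```python
# Rough back-end usage estimate from per-client front-end sizes and
# observed dedupe rates. All client names and numbers are made up.

clients = [
    # (client, protected front-end TB, observed dedupe rate)
    ("fileserver01", 40.0, 0.96),
    ("sqlhost01",    12.0, 0.05),  # compressed DB dumps barely dedupe
    ("pgbackup01",    8.0, 0.04),
]

for name, front_end_tb, rate in clients:
    back_end_tb = front_end_tb * (1 - rate)
    print(f"{name:14s} {front_end_tb:6.1f} TB front-end -> {back_end_tb:6.2f} TB back-end")

total = sum(tb * (1 - r) for _, tb, r in clients)
print(f"estimated back-end total: {total:.2f} TB")
```

Note how the two poorly deduplicating hosts, despite protecting half the data of the file server, account for the overwhelming majority of the estimated back-end consumption.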

Thanks.  I opened a ticket again to run the pogather just to see what it tells us.  I do understand that 7 years is a long disk retention time, but unfortunately it's the hand I've been dealt.  If storing on tape is what it comes down to, then it is what it is.

I had OpsCenter set up at one time, but I used it so little that when the server hosting it came off lease I didn't bother reinstalling it.  Maybe I will set it up again.  I really appreciate everyone's info on here and the quick responses; I have been the one slow to reply.

These forums have solved many an issue for me over the years.

 

Use the admin console to sort your clients over the last few days by dedupe rates.  You need to nip the biggest offenders in the bud ASAP.

I've often found *large* systems that have poor dedupe rates, and generally there is something you can do about those...

I continue to find offending clients with poor deduplication - in our environment they are generally database backups in which the DBAs have compressed their backups to save space.  Well - this saves space on the primary storage, but will wreak absolute HAVOC with your dedupe rates.

I've got a particular PostgreSQL backup host that continues to be a thorn in my side, as well as a Microsoft SQL host that is regularly getting 4-5% dedupe rates.  This sort of thing will RAPIDLY kill your MSDP.
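The compression problem described above can be demonstrated directly: a tiny edit to the source data leaves almost every fixed-size chunk of the raw stream intact, but after compression nearly every chunk changes, leaving the dedupe engine nothing to share between days. This is a simplified model using fixed-size SHA-256 chunk hashes; real MSDP fingerprinting differs, but the principle is the same:

```python
# Demonstration of why pre-compressed backups dedupe poorly: a 4-byte edit
# barely changes the raw stream, but changes almost all of the compressed
# stream. Fixed-size SHA-256 chunking is a simplified stand-in for real
# chunk-level dedupe fingerprinting.
import hashlib
import random
import zlib

random.seed(7)
base = bytes(random.randrange(32, 127) for _ in range(200_000))
edited = base[:1000] + b"EDIT" + base[1004:]  # 4 bytes changed near the start

def chunk_hashes(data: bytes, size: int = 4096) -> set:
    """Hash fixed-size chunks, mimicking chunk-level dedupe fingerprints."""
    return {hashlib.sha256(data[i:i + size]).digest()
            for i in range(0, len(data), size)}

def shared_fraction(a: bytes, b: bytes) -> float:
    """Fraction of a's chunks that also appear in b."""
    ha, hb = chunk_hashes(a), chunk_hashes(b)
    return len(ha & hb) / len(ha)

raw_shared = shared_fraction(base, edited)
comp_shared = shared_fraction(zlib.compress(base), zlib.compress(edited))

print(f"raw streams share        {raw_shared:.0%} of chunks")
print(f"compressed streams share {comp_shared:.0%} of chunks")
```

The raw streams share nearly all their chunks, while the DEFLATE outputs diverge from the edit point onward, since the compressor's codes and bit alignment shift for everything that follows.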