Any thoughts on where to look for why some users' Virtual Vault cache grows far beyond the 1GB limit? All desktop policies in our org set a 1GB limit, but some users' caches grow to 5GB or well beyond, while most grow just over 1GB. Any thoughts or ideas why that might happen? This is based on looking at the large files in %appdata%\local\kvs\enterprise vault\user ID.
It is possible that the users modified the Vault Cache settings on their end. Are you using Full or Light EVC client functionality?
You could roll out the following registry via GPO to enforce the Vault Cache size:
I checked that we have the policy enabled, and I have the registry value set to '1024', which appears to be MB. My peers have that set as well, and while mine is being honored, one peer's is not: his cache grows to 5GB and beyond, sometimes corrupting. I can't find anything documenting that 1024 means MB, but it seems to. Any other ideas?
You could try resetting the vault cache for a problematic user and monitor it for a while.
If the problem persists, I would suggest logging a case.
One possibility is that the Metadata Cache itself is greater than the size limit you have configured. This can happen with a very large or complex mailbox.
Consider that the Vault Cache feature consists of two main parts, the Metadata Cache (or "MDC") and the Content Cache (or "CC," or sometimes ".DB" because of its file extension). Both are basically PST files, each with a different subset of the data in the archive. The MDC contains the archive's layout, with the folder hierarchy and a small stubbed version of each item. The CC contains the full item content for each item, but it's organized in a different, date-based folder structure that makes it very quick for the Outlook Add-in to locate the data for any given item. A user navigating the archive via Virtual Vault is browsing through the MDC with the structure and stubs, and when he opens an item, the Outlook Add-in opens the full copy of that item from the more efficiently organized CC.
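To make the division of labor concrete, here is a minimal conceptual sketch of that lookup flow. This is purely illustrative: the real MDC and CC are PST-format files as described above, not Python dictionaries, and all names and values here are invented for the example.

```python
# Conceptual model only -- the real MDC/CC are PST-based files on disk.

# MDC: the archive's layout -- folder hierarchy plus a small stub per item.
metadata_cache = {
    "Inbox/Projects": [
        {"id": "item-001", "subject": "Q3 report", "date": "2013-07-14"},
    ],
}

# CC: full item content, organized in date-based buckets for fast lookup.
content_cache = {
    "2013/07": {"item-001": "<full MIME content of the message>"},
}

def open_item(stub):
    """Browse the MDC for the stub, then pull full content from the CC.

    Returns None when the content is not cached locally, which is when
    the add-in would fall back to retrieving it from the EV server.
    """
    year, month, _day = stub["date"].split("-")
    bucket = content_cache.get(f"{year}/{month}", {})
    return bucket.get(stub["id"])

stub = metadata_cache["Inbox/Projects"][0]
print(open_item(stub))
```

The key point the sketch captures is that browsing only ever touches the MDC; the CC is consulted solely when an item is actually opened.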
The point of all this explanation is to get to the next point: the size limit settings only control the CC. The MDC must always be a full representation of the archive as folders-and-stubs, while the CC can be limited by size or by age. This was almost never a concern given the generally smaller size and complexity of archives at the time the Vault Cache feature was developed. However, with a large enough archive and a small enough size limit setting, it is definitely possible to create a situation where just the folders-and-stubs representation of the archive is greater than the limit you have configured.

The resultant behavior in this situation is that the MDC file grows to its full size, regardless of what size limit is configured. When that is complete, the CC begins to synchronize but immediately finds that the size limit has already been reached, and so it just stops. The end result is a Vault Cache that is all MDC, and where every time a user attempts to open an item, the full content must be retrieved from the EV server rather than from the local cache on the workstation. This basically defeats the main value proposition of the Vault Cache feature, and I wouldn't recommend it.
From your post I don't have a great idea of your environment or typical archive sizes, but in general I would say that a 1GB size limit on Vault Cache seems like it would be too low for all but the smallest archives. I suspect that if you check the Vault Cache folder location on the workstations of the users who are exceeding the size limits, you will find that the MDC alone (it's got a .MDC file extension) is responsible for the lion's share of the size. Really the only remedy here is to raise the size limit in your policy so that it's a value that is reliably greater than the size of these MDC files.
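To check this quickly on an affected workstation, something like the following sketch sums file sizes by extension under the cache folder, so you can see at a glance whether the .MDC accounts for the lion's share. The path shown is an assumption based on the location mentioned earlier in the thread; adjust it to the actual user-ID folder in your environment.

```python
import os
from collections import defaultdict

def cache_size_by_extension(cache_dir):
    """Sum file sizes (in bytes) under cache_dir, grouped by extension."""
    totals = defaultdict(int)
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            try:
                totals[ext] += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file may be locked or deleted mid-scan; skip it
    return dict(totals)

if __name__ == "__main__":
    # Assumed location -- point this at the user's actual cache folder.
    path = os.path.expandvars(r"%LOCALAPPDATA%\KVS\Enterprise Vault")
    report = cache_size_by_extension(path)
    for ext, size in sorted(report.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{ext:8} {size / 1024**2:10.1f} MB")
```

If the .mdc entry dominates the output, that confirms the metadata alone has outgrown the configured limit, and raising the policy value is the fix.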
If you can identify a specific subset of users for whom this is a problem, you could separate them into their own Provisioning Group and give them their own more generous policy while keeping the 1GB policy for everybody else.
I hope that helps shed some light on what is probably going on here.