Enterprise Vault scalability.... "forever" retention?

Matthew_J
Level 4

I need some technical wisdom from those of you using Vault at scales above what we currently use it for.  Management is talking about potentially scrapping our upcoming "start of expiration" date (7 years from time of archival) in favor of simply keeping everything forever.  I'm focused on the technical aspects of what it really means to keep archiving everything and never expire anything; I'm letting others discuss the business impacts regarding discovery searches and the like.  

While it's not within my realm to tell the business one way or the other what decision to make here, I would like them to be informed of any technical pitfalls we are going to run into should they decide to just "keep everything".  

We currently archive about 1000 mailboxes on a 30-days-from-modified basis, and have about 2.7 TB of existing data (this includes PSTs that were imported at the beginning of the project).  We do a quarterly partition rollover and are currently at about 130 GB per partition.  Our EV server is a single VM with 16 GB of RAM and 4 vCPUs.  This is probably small potatoes, but my concern is future scalability for the vault system.
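For a rough sense of scale, the figures above can be turned into a back-of-the-envelope growth estimate. This is just arithmetic on the numbers already stated (130 GB/quarter, 2.7 TB existing); nothing here is EV-specific:

```python
# Back-of-the-envelope growth estimate from the figures in this post.
quarterly_partition_gb = 130     # typical closed-partition size
existing_data_tb = 2.7           # current archive, incl. imported PSTs

annual_ingest_gb = quarterly_partition_gb * 4
years_to_double = existing_data_tb * 1024 / annual_ingest_gb

print(f"annual ingest ~{annual_ingest_gb} GB/yr")
print(f"archive doubles in ~{years_to_double:.1f} years")
```

So at the current rate the archive roughly doubles every five years or so, which frames how urgent the "keep forever" question really is.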

Two things I can think of right off the top of my head: hitting a limit on the number of RDMs we can attach to the VM for quarterly partition rollovers, and database growth.  Does it make sense to continue with quarterly rollovers, or would a yearly rollover suffice?  Is there a point where the size of the indexes and EV databases is going to become an issue?  

Are there any whitepapers I can read about scaling the vault system for this kind of thing, or should we just keep plugging along as-is until we run into problems?  Help a new vault admin out with some wisdom; I'm sure there have got to be people who have gone down the "keep it all forever" road. :D

 

4 REPLIES

GertjanA
Moderator
Partner    VIP    Accredited Certified

Hello Matthew,

My personal 2 cents!

From a legal/compliance point of view, keeping everything forever might be risky. Depending on the business you're in, it might not be wise to be able to deliver items older than a certain number of years. I always tell my customers, 'If you have it, you have to deliver it'. For internal discovery this might be fine, but for outside legal cases it might be a goldmine (for them, not for you).

As for storage/index/database limits, there is not really a hard limit, more a best practice. If you really want to keep everything, can you use business-class storage (NetApp, Centera or something similar)? You also need to take into account that you need to be able to back up the data!

Adding storage to a VM will end when you hit 23 disks, if you use drive letters :). In large environments, I tend to use mount points. Create a folder called EVData, and inside it a folder called Partition1. Mount a disk to that empty folder. When nearing capacity, add a new disk, create a new empty folder (Partition2), mount the new disk to it, and close Partition1. This way, you can keep going indefinitely.

Indexes are tied to archives. The default index settings are sufficient for your environment. For index storage, the same approach as above applies: mount a disk in an empty folder, create a new index location there, and close the old one. (As of EV 10, closing an index location means no new data is added to it.)

For both, try to keep about 10% of the disk space free.

As for the databases: these can grow reasonably large, but it is good to do the required SQL maintenance, as described in the performance and maintenance guides.

You also might want to think about what to do with the archives of leavers. Do you keep those too, or do you delete them? Deleting archives does cause a hit on storage and SQL, so be careful not to delete too many.

Start here:

https://www.veritas.com/support/en-us/

Look for EV for Exchange, then look for Performance.

Somewhere there is a doc called 'Recommended steps to optimize performance for EV/DA/CA'. That is a good start...

As an FYI, I am currently in an environment doing only Journal Archiving. It is also an 'old' environment whose archives are to be migrated; it has over 250 TB of data, 20 TB of indexes, and some 5 TB of databases. No issues... (using Centera and NetApp for data, indexes local). All VMs.

Your server (btw) is more than capable of processing the number of users you have. If you use Journal Archiving (which is not clear from your post), I suggest doing that with a separate server, so you can use a separate fingerprint database, vault store database, and separate indexes and storage.

Greetings,

 

Regards. Gertjan

WiTSend
Level 6
Partner

My current environment is fairly large.  I have data for +90K archives going back up to 9 years now.  The rollover and management is more dependent on your back-up/DR environment than it is on EV itself.  Some of my Index partitions are over 5TB, with several vault stores over 75TB.  My SQL dbs are over 1GB as well.  Scalability with EV is not an issue.

I would determine the growth rate of the index volumes and the vault store partitions and base my rollover on what is manageable in the backup environment.
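That rollover decision can be sketched as a quick calculation. The growth rate below is taken from the figures earlier in this thread (~130 GB/quarter); the backup-window capacity is purely an illustrative assumption you would replace with your own number:

```python
import math

# Illustrative only: pick a rollover cadence so each closed partition
# stays within what the backup environment can comfortably handle.
growth_gb_per_month = 45     # assumed: ~130 GB/quarter, as in this thread
max_backup_gb = 500          # assumed: largest partition your backup window absorbs

# Longest rollover interval (whole months) that keeps partitions under the cap.
rollover_months = math.floor(max_backup_gb / growth_gb_per_month)
print(f"roll over partitions every ~{rollover_months} months")
```

With those example numbers, a roughly yearly rollover would keep partitions backup-friendly; with a tighter backup window, the interval shrinks accordingly.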

Matthew_J
Level 4

Thank you to both of you for the insight.  Mostly you've confirmed that we are not really (currently) pushing the limits of the "vault" system overall.  GertjanA: I agree that keeping it all is risky for legal.  Ultimately that will be their decision.  We have a records management officer who is pushing very hard for that NOT to happen, so fingers crossed. 

Storage-wise, we are exporting individual volumes from our 3PAR SAN to our VMware environment and attaching them as disks to our EV server VM.  Is this what you mean by "business-class" storage, or is there another level to look at for the long run?  I don't think we are near the point where we would dedicate a whole SAN to this purpose, but if there are EV plugins or some other method to better utilize storage, that would be of interest.  Right now we roll over mail store partitions quarterly; they usually reach about 125-135 GB in size, and we attach them to mount points, so there's no issue there with drive letter limitations... but I see that VMware has a limit of 4 SCSI controllers per VM x 15 disks per controller.  We might need to move to a yearly or semi-yearly partition rollover. 
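The SCSI-controller limit mentioned above caps the VM at 4 x 15 = 60 virtual disks, so the headroom at each rollover cadence is simple arithmetic. The four disks reserved for non-partition volumes below are an assumption for illustration:

```python
# Disk-count headroom under the 4-controller x 15-disk VMware limit.
controllers = 4
disks_per_controller = 15
max_disks = controllers * disks_per_controller   # 60 virtual disks per VM

reserved_disks = 4   # assumed: OS, page file, index, and SQL volumes
available = max_disks - reserved_disks

years_quarterly = available / 4   # quarterly rollover: 4 new disks per year
years_yearly = available / 1      # yearly rollover: 1 new disk per year

print(f"quarterly rollover: ~{years_quarterly:.0f} years of headroom")
print(f"yearly rollover:    ~{years_yearly:.0f} years of headroom")
```

Even quarterly rollovers leave over a decade of headroom on disk count alone, so the partition size that backups can handle (per WiTSend's point) is the more pressing constraint.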

As for the indexes, that is good to know.  I will have to see how large our index drive is now and contemplate when to add a new one if we decide to keep things.

We do have the regular SQL maintenance plans set up, and we are currently keeping ALL e-mail which includes archives of leavers... since there's no one who has time to sit down and sort through what to save and what not.  Fun.

We are currently doing journal archiving, and it's on the same server, goes to the same storage partition/disk as daily archives but a different mailstore partition. 

Thank you WiTSend for your input too.  I assume you mean over 1 TB for some of your databases?  Our database storage for all vault-related DBs is currently about 300 GB. 

GertjanA
Moderator
Partner    VIP    Accredited Certified

That's indeed what I mean by 'business-class storage'.

 

Regards. Gertjan