FSA and large File Servers

mbietz711
Level 4

Hi everyone,

I am still in the testing phase of rolling out a comprehensive FSA deployment.

The main goals I want to achieve are:

1. Archive every object that has not been changed for one year.
2. Free up primary storage on the file servers.


Technically, this would be no problem.

Here is the environment I have to deal with:

About 10 file servers and 1000 users.
Every file server has one share, under which are huge folder structures.
Data growth per server is 2 TB per year.
The archive storage is EMC Centera.

The first idea was to use one ArchivePoint for every server.
This means that one archive represents one file server's structure,
and that one archive will grow by at least 1 TB per year.
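
To put rough numbers on this, here is a quick back-of-the-envelope calculation in Python. The 2 TB/year growth figure is from above; the assumption that about half of each year's new data ages past the one-year threshold is mine, purely for illustration:

# Rough sizing for one file server. Growth figure as stated above;
# the 0.5 eligibility ratio is an illustrative assumption only.
DATA_GROWTH_TB_PER_YEAR = 2.0  # stated data growth per file server
ARCHIVE_SHARE = 0.5            # assumed fraction that becomes archive-eligible

def projected_archive_size(years):
    """Cumulative archive size in TB after a number of years."""
    return years * DATA_GROWTH_TB_PER_YEAR * ARCHIVE_SHARE

for year in range(1, 6):
    print(f"Year {year}: ~{projected_archive_size(year):.1f} TB in the archive")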

So, after some testing I ran into problems.
Are large archives of many terabytes problematic with Enterprise Vault?
I found out that rebuilding an index would take a very long time.
Also, if something has to be repaired, reported on, or verified and you have to deal with the evsvr.exe utility or other tools, large archives are very time-consuming to handle, if it is possible at all.

So the first plan was:


One EVault-Server
  One VaultStore-Group
     One VaultStore for every FileServer
       One Partition for the Archive which represents the share.


This leads to very large Archives -> large Partition -> large VaultStore -> large VaultStore-Group.

So my first idea was to limit archives to 100 GB: once an archive reaches the 100 GB limit, I delete the ArchivePoint and create a new one, and so on. Over time this gives multiple archives for one share, but each archive stays easy to handle.
The downside: in Archive Explorer you would see multiple archives for one share and could get lost (but users are not meant to use Archive Explorer; IT support would do that).
Is this solution a way to go, is it unsupported, or is it a totally bad idea?
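
To make the idea concrete, here is a minimal sketch of the rotation logic in Python. This is not an Enterprise Vault API; the two helpers just stand in for the manual administration steps (delete the ArchivePoint, create a new one), and the naming scheme is made up:

# Sketch of the proposed size-based rotation policy. The helpers are
# hypothetical placeholders for manual EV administration steps.
SIZE_LIMIT_GB = 100  # proposed cap per archive

def needs_rotation(archive_size_gb):
    """True once the current archive for a share has hit the cap."""
    return archive_size_gb >= SIZE_LIMIT_GB

def next_archive_name(share, generation):
    """Made-up naming scheme for successive archives of one share."""
    return f"{share}-part{generation:03d}"

# Example: the share 'FS01-Data' has filled its second archive.
if needs_rotation(102.4):
    print("Rotate to:", next_archive_name("FS01-Data", 3))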

Another option would be to split things up a little and still limit archives to 100 GB:

One EVault-Server
 One VaultStore-Group for every FileServer
    One VaultStore for every FileServer
      One Partition for the Archive(s) which represents the share.

manageable archives -> not as large a VaultStore as before ...


Or should I split everything up completely, again limiting archives to 100 GB:

One Evault-Server per FileServer
 One VaultStore-Group for every FileServer
    One VaultStore for every FileServer
      One Partition for the Archive(s) which represents the share.


Well, I hope there are some users here who have experience with such situations.
Perhaps there is an even better way to go that I don't see.

Many thanks,

Michael



 

8 REPLIES

WiTSend
Level 6
Partner

Were this my environment, I would have one Vault Store Group, one Vault Store per EV server, and rollover partitions at 500 GB. I would set archive points where they make logical sense, such as using the -s (subfolder) option on the users and/or "departments" folders, rather than one big archive point (archive). Each archive point would then have its related archive, which makes for a more logical presentation.
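
As a small illustration of what those archive points would map to, a first-level listing of a share's subfolders (plain Python; the UNC path is made up) gives you the candidate list:

# Each first-level folder (user/department) becomes its own archive
# point instead of one big archive at the share root. Path is made up.
import os

SHARE_ROOT = r"\\FS01\Data"  # hypothetical share

for name in sorted(os.listdir(SHARE_ROOT)):
    path = os.path.join(SHARE_ROOT, name)
    if os.path.isdir(path):
        print("Archive point candidate:", path)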

mbietz711
Level 4

Thank you for your proposal, maxwits.

Unfortunately, rollover partitions are not possible with Enterprise Vault and Centera.

Rob_Brenner
Level 5
Employee

Michael,

I can see that you are managing the environment and the testing in a very organized manner. This is extremely important and valuable, as it allows you to stay in full control and perform a post-mortem on any unexpected issues.

I would just add that, as maxwits suggested, you should consider having archives which correspond to business units, projects, users, or whatever criteria are most appropriate to the business. The archives would also be a way to control user access.

If you create a larger number of archives, the corresponding indexes will also allow for better performance when searches are required. Considering the overall processing of all the archive data and how references are managed within the EV systems, you will get better performance if you avoid having a single archive for a large amount of data.

As you have noticed, any additional administrative operations, such as using EVSVR or FSAUtility, rebuilding indexes, etc., would be severely impacted by the size of the data set associated with a single archive.

In the end, the best solution will be a compromise between performance in production and the administrative burden of configuring and managing the environment.

mbietz711
Level 4

Well, I do see the point.

But there would be hundreds of archives (which would mean a huge amount of additional administration),

and even if I go this way, archives will still grow in size;
the problem of growing archives is not solved, it will only show up somewhat later.

If it is possible and supported, I would go this way:

One archive (ArchivePoint) per file server.
Collect items for perhaps one year, or any other appropriate time range.
Then start a new archive (ArchivePoint) for this file server.
You can easily name those archives by year, for example (a tiny sketch of that follows below).

This would lead to the following result:

Multiple archives for one structure (ArchivePoint).
Every single archive stays at a reasonably manageable size.
No archive grows out of line.
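
As a tiny illustration of the naming idea (plain Python; the server name and the pattern are just examples, not anything EV prescribes):

# Year-based archive names per file server, so each archive only
# holds one year's worth of data. The pattern is an assumption.
from datetime import date

def archive_name(server, year=None):
    """E.g. 'FS01-2009' for the archive collecting that year's items."""
    return f"{server}-{year or date.today().year}"

print(archive_name("FS01", 2009))  # a closed, no-longer-growing archive
print(archive_name("FS01"))        # the currently active archive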

Well, in my test lab it is possible to remove an ArchivePoint from a folder and then set a new one for that folder.
I get a new archive, and the old one does not grow anymore.
You can access data from both archives, even via the placeholders or Archive Explorer.

So this is what I wanted.

But my question here is: is this supported?

Regards,

Michael

Liam_Finn1
Level 6
Employee Accredited Certified

I do what you are proposing. I have an archive for each file server and a task for each share on the file server, and all the tasks roll into one archive. Each year I create new archives, so each archive holds only one year's worth of archived data for each server.

 

I also do the same for journaling. The admin overhead is not so bad; once they are set up, they pretty much take care of themselves.

 

I too store on a Centera, and it works fine. To get the benefit of the Centera, it is best to keep everything in one pool on the Centera, as that provides the best single-instance storage (SIS): when you archive to Centera, it is the Centera that does the SIS, not EV.

 

The biggest issue you will find is the initial load of data. My suggestion is to take one server at a time and test how long the initial archive run takes. When you archive with FSA, the archiving task will crawl the whole directory structure, and this takes time. In EV 9 this process has been improved, but it will still take time to run. Given the size of the file systems you describe, your initial load will be a long process, and the deeper the directory structure, the longer the crawl, and so the longer the FSA initial load will take.
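
One way to gauge that up front is a dry-run crawl that counts and sizes the files which would qualify under the one-year rule. A minimal sketch in Python, assuming read access to the share (the UNC path is made up); it makes no EV calls, it just walks the tree the way any crawl would:

# Dry-run crawl: estimate the FSA initial load by finding files that
# have not been modified for a year. Walking the whole tree also gives
# a feel for how long the real archiving crawl will take.
import os
import time

CUTOFF = time.time() - 365 * 24 * 3600  # one year ago

def estimate_initial_load(root):
    """Count and total the size of files untouched for a year."""
    count = total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable entries
            if st.st_mtime < CUTOFF:
                count += 1
                total += st.st_size
    return count, total

files, size = estimate_initial_load(r"\\FS01\Data")  # made-up share
print(f"{files} files, {size / 1024.0**3:.1f} GB eligible for the initial run")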

mbietz711
Level 4

Hi Scanner, thank you for your input.

That helps me a lot; I think I am not on the wrong track.

Thank you.

Liam_Finn1
Level 6
Employee Accredited Certified

I recommend that you don't go at this alone. Work with Symantec shared services, or with a provider recommended by Symantec, to design this correctly. Many make the mistake of sizing their EV solution themselves, only to find out later that they have hurt their business and their end users.

mbietz711
Level 4

Hi Scanner,

A date with a Symantec professional is set.

It is always good to be well prepared and able to ask the right questions.
Also, it is much easier for the professional to get to grips with our systems if I have as much knowledge as necessary to talk with him on the same level.

Regards,

Michael