EV12 and Building Blocks

CadenL
Moderator

Hi

Has Building Blocks changed at all with EV12? I need to create a DR solution for EV across two sites and had the following thoughts - but just wanted them sanity checked as I've not done it this way previously.

This is quite a small requirement - Exchange mailbox archiving only, for fewer than 500 users. Exchange is configured in a two-node DAG with one node on each site; both DAG nodes host active Exchange databases and passive copies of the other node's databases.

The EV servers need to be physical with internal storage only; they will run EV 12.x and be configured to archive from the local DAG node (in BAU).

Failover doesn't need to be automated and can take up to 2 hours to achieve, so Building Blocks seems the best way to go.

Please assume that the SQL databases will be replicated as necessary using SQL functionality, and that I'll use a SQL alias to assist with failover of the SQL databases between SQL servers.
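For the alias, I'm thinking of something along these lines - just a rough PowerShell sketch, where the alias name 'EVSQL' and the SQL host names are placeholders rather than anything EV-specific:

# Create a TCP client alias named "EVSQL" (placeholder) pointing at the current primary SQL instance.
# Repeat under HKLM:\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo if 32-bit components also connect.
$key = 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'EVSQL' -Value 'DBMSSOCN,sqlnode1.example.com,1433' -PropertyType String -Force | Out-Null
# At failover time, rerun this on both EV servers with the DR instance (e.g. sqlnode2) instead.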

Replication of the data can, I think, be done via something like Robocopy, RichCopy, Double-Take or similar (a rough Robocopy sketch follows the volume list below).

So - can I do the following:

Create an E:\ on EVServer01 for the Archives and replicate this to E:\ on EVServer02

Create an F:\ on EVServer02 for the Archives and replicate this to F:\ on EVServer01

Create a J:\ on EVServer01 for the Indexes and replicate this to J:\ on EVServer02

Create a K:\ on EVServer02 for the Indexes and replicate this to K:\ on EVServer01
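To make the replication concrete, something along these lines per pair - a minimal Robocopy sketch where the folder names under each drive and the log paths are examples only:

# Mirror EVServer01's archive partition root to the same drive letter and path on EVServer02.
# /MIR keeps the target identical (including deletions); short /R and /W values stop a locked file stalling the run.
robocopy "E:\EV_Storage" "\\EVServer02\E$\EV_Storage" /MIR /COPY:DAT /DCOPY:T /R:2 /W:5 /NP /LOG+:"C:\Logs\EV_E_to_EVServer02.log"

# The same pattern, run as a scheduled task, covers the other three pairs, e.g. the indexes:
robocopy "J:\EV_Index" "\\EVServer02\J$\EV_Index" /MIR /COPY:DAT /DCOPY:T /R:2 /W:5 /NP /LOG+:"C:\Logs\EV_J_to_EVServer02.log"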

When doing a BB failover and updating the DNS aliases and service locations, will the fact that the surviving EV server has the failed server's EV index and archive data visible on the same volume letter and path as on the failed server be sufficient for everything to work as expected?
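For the DNS alias part of the failover, this is roughly what I have in mind - a sketch using the Windows DnsServer PowerShell module, where the zone, alias and host names are all placeholders:

# Re-point the failed EV server's alias (CNAME) at the surviving server,
# then run the usual EV Update Service Locations step.
$zone  = 'example.com'
$alias = 'evserver01-alias'
Remove-DnsServerResourceRecord -ZoneName $zone -Name $alias -RRType CName -Force
Add-DnsServerResourceRecordCName -ZoneName $zone -Name $alias -HostNameAlias 'evserver02.example.com'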

thanks in advance.

Accepted Solution

The information about the Storage Queue (SQ) and safety copies is not quite correct.

It is true that if you do not store safety copies on the SQ, you should not have a whole lot of items just hanging out in the SQ long-term. However, if you happen to lose the primary server during an archiving window, when items are being actively added to and processed from the queue, you could end up in a situation where EV has written the items to the SQ and the database, but StorageArchive has not processed the item out of the queue yet. When you fail over to the DR server, the database will still contain records for these savesets and .EVSQ files, so the StorageQueueBroker will try to assign them for ingestion, but they will not exist on the disk and that will fail.

Depending on the type of target and certain policy settings (Pending Shortcut Timeout, for example), this might not be a huge problem, as the original items will be reverted after sitting in a pending state for the timeout period and will be rearchived later. Still, you'll have to wait out that timeout before this happens, and in the meantime you'll see a bunch of errors related to the missing files from the SQ. Better practice is simply to replicate the SQ location, just as you are doing for the partition and index locations.

 

--Chris


9 Replies

VirgilDobos
Moderator

Hi mate,

That looks good. Remember that you need to replicate Vault Cache, Storage queue, etc. as well.

Once you decide to do a failover, you just need to mount the replicated drives and follow the normal failover steps.

--Virgil

CadenL
Moderator

Hi Virgil, thanks for the quick reply.

I've done this in the past with shared storage (both EV indexes and EV archives on a shared, fast NAS box) and I didn't need to replicate anything like the Vault Cache or the storage queues. I can see that if the storage queues are used for the safety copy then that would become a requirement, but is there a need to do it otherwise? So long as both the Vault Cache and storage queue locations exist on both servers in the same place, shouldn't it just work? I plan to leave safety copies on the Exchange server until after the backups.

Also, the volumes I'm replicating to will already be mounted - I'm just replicating from (for example) the E:\ on one server to the E:\ on the second server, so all volumes will always be mounted.

Have I mis-understood what you're saying?

thanks again

VirgilDobos
Moderator

Hi Caden,

Right, if you keep safety copies till the backup completes, there is no need to replicate them. Also, Vault Cache is optional as it is merely used as a buffer area. Hence, your approach is OK.

Similarly, since you are going to use robocopy or other software replication, you should keep the drives mounted.

--Virgil

GertjanA
Moderator

To add: keep the Vault Server Cache location either identical on both servers or specific to each server, and make sure the cache folders exist on each server.

I would also replicate the indexing metadata somehow; that speeds up getting back online.

When performing the replication (I used robocopy at a past assignment), make sure that the index and/or vault store locations are in backup mode. If at all possible, run a robocopy verify operation after a few days to see if you are indeed in sync. When configuring robocopy, use a switch to ignore the *.archcab/archdvs/archdvssp and archdvssc files. These are temporary files used for item retrieval and can add up to many GBs to copy.
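As a rough illustration of the exclude switches and the verify pass (paths are examples only; put the vault store and index locations into backup mode first, via the Admin Console or the EV Management Shell backup-mode cmdlets):

# Regular sync, skipping the temporary retrieval files mentioned above.
robocopy "E:\EV_Storage" "\\EVServer02\E$\EV_Storage" /MIR /XF *.archcab *.archdvs *.archdvssp *.archdvssc /R:2 /W:5 /LOG+:"C:\Logs\EV_E_sync.log"

# Periodic verification pass: /L lists the differences without copying anything.
robocopy "E:\EV_Storage" "\\EVServer02\E$\EV_Storage" /MIR /L /XF *.archcab *.archdvs *.archdvssp *.archdvssc /LOG:"C:\Logs\EV_E_verify.log"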

I assume you noted E: and F: etc. as examples. You will most likely use subfolders (right?).

As for the storage queue, I have had some issues whilst being on EV11 (01CHF5). If performing a controlled DR, set the tasks to report mode and restart them. That should clear out the storage queue.

Regards. Gertjan

CadenL
Moderator

great - thanks

So the main thing I wanted to check: if EVServer01 has the EV indexes on (for example) 'E:\EV_Index\index1' (through to \index8) and I replicate this data to EVServer02, keeping the drive letter and path exactly the same, so that on EVServer02 the replicated index data from EVServer01 is also under 'E:\EV_Index\index1' etc., then once the Building Blocks failover has completed there won't be any errors saying it can't find the index locations?

Clearly I'd need to do the same for the Archive locations.....

GertjanA
Moderator

Correct. As long as the folder structure on both servers is identical, there are no issues.

You might want to name the folders specifically, to make sure you know what is what:

On EV001 -> e:\ev001\index\index1 to index8

f:\ev002\index\index1 to index8

Regards. Gertjan

CadenL
Moderator

Thanks Gertjan

If I have different Vault Cache locations for each EV server, are you saying that I should then replicate the content between servers, whereas if I have an identical location I don't need to replicate the content?

Also, with the index metadata - can I replicate that without overwriting the index metadata on the target server? E.g. are the files inside this directory server-specific, so that I'm effectively just 'merging' the two directories?

hope that makes sense 

GertjanA
Moderator

Hi Caden.

It all depends on how you use EV. If you use (for instance) Vault Cache, or perform many PST exports or something else that fills the cache, then I suggest using separate cache folders and replicating them. If nothing like that is in use (and the folder is therefore empty), use a single location with no replication. (The official advice, by the way, is to use the same location on each server.)

Index metadata is not unique to a server. If you don't replicate it and you fail over, you will see in the event log that it takes a bit longer to start the Indexing service, because it first needs to create the metadata files for indexes that have no metadata on that server.

For both, I withdraw my earlier statements :) Use the same cache folder, and don't worry about the index metadata.

See https://www.veritas.com/support/en_US/article.000101412.html and for the metadata this one https://www.veritas.com/support/en_US/article.000064598.html

 

Regards. Gertjan
