JournalArchive table and trigger file

TimHudson
Level 3

I have discovered that my vault store databases have very large WatchFile and JournalArchive tables (three databases totaling about 300 million rows).  The reason for this is that my backup software doesn't reset archive bits, and I was not advised to use the IgnoreArchiveBitTrigger.txt file to clear these records.  As I understand it, I can clean up these records by creating this trigger file, which will flip a flag in each row from false to true (indicating that the item has been backed up).  It is also my understanding that, since my stores have sharing enabled amongst them, I have to do this on all partitions at once.

So my concern is that if I run this process, it will have to modify 300 million rows, which I imagine will take a long time and might bring my SQL server to its knees.  Symantec Support doesn't know how much of an impact it will have, other than saying there will be a performance hit.  I don't really want to risk either crashing SQL Server or making Enterprise Vault unbearably slow for three days (or weeks, or however long it takes).

That said, is there any way this can be done in smaller batches (only modifying some of the records in the tables)?  One thought I had was to use a trigger file with an old created date, but the support tech said that the file's created date has to be the current date in order for it to work (whereas my understanding was that EV used that date to determine which records were safe to purge).

Any advice would be much appreciated.  I can't really move forward with upgrading from 8.4 to 9 until this is resolved.


9 REPLIES

JesusWept3
Level 6
Partner Accredited Certified

Yeah, it will update BackupComplete to 1 on each record and remove the respective WatchFile record.

What I would suggest is setting the Vault Store databases to Simple recovery mode for the transaction logs instead of Full, then just putting the IgnoreArchiveBitTrigger.txt file in place and letting it go. The biggest hammering you would normally see is the transaction logs growing larger and larger; it won't be that bad in Simple mode though.
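For what it's worth, here's a minimal sketch of flipping the recovery model before and after the trigger file run; "yourVaultStore" is just a placeholder, so substitute your actual vault store database name:

USE master
-- Placeholder database name; substitute your actual vault store database.
-- Switch to Simple recovery before dropping the trigger file in, so the
-- mass of BackupComplete updates doesn't balloon the transaction log.
ALTER DATABASE yourVaultStore SET RECOVERY SIMPLE

-- ...let the trigger file processing run...

-- Switch back to Full afterwards and take a full backup so the log backup chain restarts.
ALTER DATABASE yourVaultStore SET RECOVERY FULL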

One thing that you *could* do, to process things in batches, is manipulate the created date of the trigger file so that it only "secures" items that were created on or before that date.

So for instance, let's say you have EV going back to 2009 and you've never had a trigger file.
If you create your IgnoreArchiveBitTrigger.txt file today with a created date of 05/31/2011, it will do every file up until today.

However, if you were to modify the created date to be 01/01/2010, it would only do everything up until January 1st, 2010; then if you modify the file to have a created date of 07/01/2010, it would do everything up until July, and so on.

Also remember that the IgnoreArchiveBitTrigger.txt file is per partition, not all-or-nothing.

So if you have 7 partitions in your vault store, you could do the IgnoreArchiveBitTrigger.txt on one partition at a time....

Also, in addition to this, make sure that your Vault Store databases' compatibility level is 90 (SQL 2005) or 100 (SQL 2008) and not 80 (SQL 2000); otherwise the rows will just climb and climb and climb in the table and never be released, regardless of if or when the items were backed up.
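If you want to check, here's a quick sketch (again, "yourVaultStore" is a placeholder for your real database name):

-- Show the current compatibility level of every database on the instance
SELECT name, compatibility_level
FROM sys.databases

-- Raise a vault store database that is still at 80 (SQL 2000 mode).
-- On SQL 2008 and later:
ALTER DATABASE yourVaultStore SET COMPATIBILITY_LEVEL = 100
-- On SQL 2005 use the older procedure instead:
-- EXEC sp_dbcmptlevel 'yourVaultStore', 90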

https://www.linkedin.com/in/alex-allen-turl-07370146

JesusWept3
Level 6
Partner Accredited Certified

I wrote more about the backup changes here:
https://www-secure.symantec.com/connect/articles/changes-backup-procedures-enterprise-vault-8

https://www.linkedin.com/in/alex-allen-turl-07370146

TimHudson
Level 3

Thanks for your quick response.  Both of your suggestions are exactly what I was hoping we could do; however, Symantec Support says that neither will work:

I asked this:

"We do have sharing enabled, so I assume that means we cannot do the partitions one at a time. 

 What if we use a txt file that was created in the past?  For example, find a txt file dated in Dec of ’09 and delete everything in it and rename it to IgnoreArchiveBitTrigger.txt

Would that then only modify the records up to the created date (dec ’09)? "

 

And received this response:

" You are correct. Because you have sharing enabled, it would have to be done for all.  With the method you are proposing, I am not sure that can be done.  We base our process off the creation date of the file. However, the creation date must be the current date. That is why a script runs prior to backups, which creates the txt file, then the backup runs, and our process renames the file  to .old.  If the file is sitting in the location with .txt and properly named everything, it still will not run, because the creation date is not current.  "

 

So I'm not sure if we are miscommunicating, or if it's just that the support tech is uncertain and doesn't want to tell me to do this and have it fail.  Is there any risk in attempting either of your proposed methods?

JesusWept3
Level 6
Partner Accredited Certified

OK, so yes: technically, with sharing enabled, for an item to truly be secured, any items it is shared with in other partitions would also have to be "backed up" before BackupComplete is set to 1.

However, that being said, if an item is below 20 KB then it won't be shared, and won't rely on anything outside of its own partition.

The IgnoreArchiveBitTrigger.txt file is placed at the root of a partition, so if it resides in just that one partition it will only secure items it finds in that partition alone; for items that are shared across partitions, it will only secure the parts that live there.

As for the creation date, well, just try it and see what happens.
So let's do something like the following:

1. Get a count of how many items in the journal archive are awaiting backup and were created before 1st January 2011

USE yourVaultStore
SELECT COUNT(TransactionId)
FROM JournalArchive
WHERE BackupComplete = 0
AND RecordCreationDate < '2011-01-01'

2. Get a count of how many items are in the journal archive below the sharing threshold

USE yourVaultStore
SELECT COUNT(TransactionId)
FROM JournalArchive
WHERE BackupComplete = 0
AND RecordCreationDate < '2011-01-01'
AND ItemSize < (SELECT SISPartSizeThreshold_KB FROM EnterpriseVaultDirectory.dbo.VaultStoreGroup)

3.  Get the oldest dates of the items awaiting backup

SELECT TOP 10 TransactionId, ItemSize, BackupComplete, RecordCreationDate
FROM JournalArchive
WHERE BackupComplete = 0
AND RecordCreationDate < '2011-01-01'
AND ItemSize < (SELECT SISPartSizeThreshold_KB FROM EnterpriseVaultDirectory.dbo.VaultStoreGroup)
ORDER BY RecordCreationDate

4. Note that the first records displayed are the oldest; so let's say some of the oldest are from 01-01-2009.
Download a utility called FileTouch from here: http://www.softtreetech.com/24x7/extras/FileTouch.exe

5. Next, go to the root of your partition and create IgnoreArchiveBitTrigger.txt in Notepad
6. Open a command prompt in the folder where you extracted FileTouch.exe
7. Run the following command, using the date you determined previously

FileTouch /C /D 01-01-2010 /T 13:00:00 E:\Enterprise Vault Stores\Partition1\IgnoreArchiveBitTrigger.txt

8. Afterwards, on the EV server itself, open a command prompt and CD to your \Program Files\Enterprise Vault directory
9. Type "Dtrace" and press Enter
10. Type "set StorageFileWatch v" and press Enter
11. Type "log C:\TriggerFileTest.txt" and press Enter
12. Minimize the command prompt
13. Either restart the Storage service or wait for the scan time to come around (each partition has a Backup tab that covers the trigger file and, if enabled, a scan every 60 minutes)
14. Wait for the file to be renamed to .OLD
15. Go back to the DTrace command prompt and exit
16. Go back to SQL Management Studio and run the first three queries again; you should now see that the counts have gone down

Also, this is what I see in my DTrace:

 

(StorageFileWatch) <7244> EV:L CWatchFileTimer::CheckTriggerFileExists (Entry) |
(StorageFileWatch)<7244> EV:L CWatchFileTimer::CheckTriggerFileExists|Trigger file E:\Enterprise Vault Stores\Partition1\PartitionSecuredNotification.xml not found so searching for .txt file
(StorageFileWatch) <7244> EV:L CWatchFileTimer::CheckTriggerFileExists|Trigger file E:\Enterprise Vault Stores\Partition1\IgnoreArchiveBitTrigger.txt found
(StorageFileWatch) <7244> EV:L CWatchFileTimer::CheckTriggerFileExists|PartitionSecuredDate = 2009-01-01 19:00:00 TZ|

 

 

https://www.linkedin.com/in/alex-allen-turl-07370146

MichelZ
Level 6
Partner Accredited Certified

Hi

 

Use the "PartitionSecuredNotification.xml" instead of the legacy IgnoreArchiveBitTrigger.txt.

You can set a "save date" in the XML file, which allows you to do it in batches:

http://www.symantec.com/business/support/index?page=content&id=TECH67559

 

Example:

The following XML shows a valid example of the content in PartitionSecuredNotification.xml:

<?xml version="1.0" encoding="utf-8" ?>
<PartitionSecuredNotification
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="PartitionSecuredNotification.xsd">
<VendorName>required free form string</VendorName>
<VendorAppType>required free form string</VendorAppType>
<PartitionSecuredDateTime>
2008-03-25T09:30:10+05:30
</PartitionSecuredDateTime>
<VendorTransactionId>optional string</VendorTransactionId>
</PartitionSecuredNotification>
 
Cheers

cloudficient - EV Migration, creators of EVComplete.

FreKac2
Level 6
Partner Accredited Certified

Or I guess you could leave EV on the archive bit setting for now, and use e.g.:

Attrib -A /S

To change the "granularity" just go down one step in the path:

e.g. "e:\mbxptn1\2009\01-30" instead of "e:\mbxptn1\2009"

TimHudson
Level 3

JW2's method works; however, the behavior was not what I expected.  I got the impression that the load would be entirely on the SQL server (basically EV giving SQL a command to change a bunch of records from 0 to 1).  What really seems to be happening is that EV goes through each item in the vault and marks the DB records from 0 to 1 as appropriate.  There is no noticeable impact on SQL, but substantial CPU utilization on the EV server.  This is taking a very long time, especially with small batches (using older trigger files).

Am I correct about the way this is working?  Will it go through all items in the vault partition every time I run it and check whether each has been secured?  My concern is this: let's say I start the process on 100 million items in the vault store, and it gets halfway through them (so it has cleared 50 million records in the tables) before we have a service restart, which would kill the process.  If I start the process again, will it have to go through all 100 million items again to clear out the remaining table entries?  Or will it know that it has already gone through 50 million of those items, and therefore only need to go through the items it hasn't yet identified as secured (the remaining 50 million)?  The reason I ask is that I'm afraid it will take weeks to go through all of these items, and if I have to restart services (this happens fairly often), I really don't want it to have to start over.

Again, thanks for your advice.  By the way, the XML method failed for me, saying that it was invalid -- I probably just didn't format it correctly.  So I used the .txt trigger file and it worked fine.

JesusWept3
Level 6
Partner Accredited Certified

What it should be doing is loading up the entries in WatchFile, which hold the full path to each item, and then checking them.

For instance, if you have your IgnoreArchiveBitTrigger.txt set to 01/01/2010, it should basically be doing a SELECT WHERE RecordCreationDate <= '2010-01-01'; it then gets back a list of items to check from WatchFile, with locations something like

E:\Enterprise Vault Stores\Ptn1\0E\1\2010-01\01\201001030.....dvs

EV will then go to that location, make sure the file is there, delete the WatchFile record, and set BackupComplete to 1; after 14 days (or however long your retain-history setting is in the Site Properties) the row should be deleted from the JournalArchive table.
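If you want a rough way to watch the backlog shrink (and to confirm after a service restart that already-secured items aren't being revisited), here's a sketch of a progress query using the same JournalArchive columns as the counts above:

USE yourVaultStore
-- Items still awaiting backup, grouped by the month they were created;
-- months the trigger file has already cleared should drop towards zero
SELECT DATEADD(month, DATEDIFF(month, 0, RecordCreationDate), 0) AS MonthStart,
       COUNT(TransactionId) AS ItemsAwaitingBackup
FROM JournalArchive
WHERE BackupComplete = 0
GROUP BY DATEADD(month, DATEDIFF(month, 0, RecordCreationDate), 0)
ORDER BY MonthStart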

https://www.linkedin.com/in/alex-allen-turl-07370146

TimHudson
Level 3

ok, great, that tells me that it doesn't have to go through vault items that it has already secured (and cleared from the watchfile table.  It seems like if that's the case, running this process with an old date (small number of items to update) would be quite fast.  In reality it's running much slower with older dates (1,000 items per hour for 12/18/2009 vs. 10,000 per hour for 3/01/2010).  I'll figure out a balance between speed and performance.