07-22-2013 08:46 AM
Hello,
What is the best way to address the issue with items that are waiting to be indexed...
Log Name: Symantec Enterprise Vault
Source: Enterprise Vault
Date: 7/22/2013 11:32:37 AM
Event ID: 41021
Task Category: Monitoring
Level: Warning
Keywords: Classic
User: N/A
Computer: MYEVSERVER.MYDOMAIN.com
Description:
There are 524628 archived items in Vault Store 'MYVAULT' that are waiting to be indexed.
There may be a problem with Enterprise Vault indexing.
You can use the System Status feature in the Administration Console to help you resolve this issue.
For more information, see Help and Support Center at http://entced.symantec.com/entt?product=ev&language=english&version=10.0.2.0&build=10.0.2.1112&error...
I would like to understand better why this might be happening... We are on EV v10.0.2.
Thank you,
Victor
07-22-2013 09:01 AM
This issue can have several causes. One of them could be that the index locations are in backup mode. There is a technote that can help you troubleshoot this issue initially:
http://www.symantec.com/docs/TECH50268
07-22-2013 09:17 AM
Thank you, GabeV.
I think we have been through the obvious troubleshooting steps; the indexing locations are not in backup mode.
Thank you for posting the article. Unfortunately, it does not go beyond the basic troubleshooting steps.
Any ideas are very much appreciated.
07-22-2013 09:27 AM
Do you see any index volume failures or error messages in the event log? Any recent changes to the server such as hotfixes? You can also collect a dtrace for the following processes:
set IndexBroker v
set IndexServer v
set EVIndexAdminService v
set EVIndexQueryServer v
set EVIndexVolumesProcessor v
set EVIndexVerifyTask v
set EVIndexMoveTask v
set w3wp v
set StorageCrawler v
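For what it's worth, a typical capture session looks roughly like the sketch below (command names are from memory of the EV DTrace utility, so double-check them against the tool's built-in help; the log path is just an example):

```
C:\Program Files (x86)\Enterprise Vault>DTrace.exe
DT>set EVIndexAdminService v
DT>set EVIndexVolumesProcessor v
DT>log c:\temp\indexing-dtrace.log
DT>mon
(restart the Indexing service, reproduce the problem, then press a key to stop monitoring)
DT>exit
```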
07-22-2013 10:22 AM
If you dtrace all of those, you're more likely than not going to get buffer overflows in the trace.
Your best bet really is to dtrace EVIndexAdminService and EVIndexVolumesProcessor, then restart the Indexing service and see what it does.
Look for events around the startup, both in the System and in the Enterprise Vault event logs.
I have seen issues where Indexing, for whatever reason, uses up all the WCF ports that indexing allows for, and you end up getting flooded with errors in the System log.
I have also seen it, for one reason or another, just sit and do absolutely nothing; in that case, if you go into Manage Index Tasks and run a Synchronization on the affected indexes, it kicks back to life.
It would be important, though, to find out which archives are falling behind, so you'd need a query like:
SELECT A.ArchiveName "Archive Name",
       CE.ComputerName "Index Server",
       (IRP.IndexRootPath + '\' + IV.FolderName) "Folder Path",
       COUNT(JA.IndexCommited) "Items Awaiting Index",
       IV.Failed "Index Failed",
       IV.ReadOnly "Index ReadOnly",
       IV.Offline "Index Offline",
       IV.Rebuilding "Index Rebuilding",
       IV.WorkPending "Work Pending"
FROM EnterpriseVaultDirectory.dbo.Archive A,
     EnterpriseVaultDirectory.dbo.Root R,
     EnterpriseVaultDirectory.dbo.IndexVolume IV,
     EnterpriseVaultDirectory.dbo.IndexRootPathEntry IRP,
     EnterpriseVaultDirectory.dbo.IndexingServiceEntry ISE,
     EnterpriseVaultDirectory.dbo.ComputerEntry CE,
     EVVSYourVaultStore_1.dbo.ArchivePoint AP,
     EVVSYourVaultStore_1.dbo.JournalArchive JA
WHERE JA.ArchivePointIdentity = AP.ArchivePointIdentity
  AND AP.ArchivePointId = R.VaultEntryId
  AND R.RootIdentity = A.RootIdentity
  AND R.RootIdentity = IV.RootIdentity
  AND IV.IndexRootPathEntryId = IRP.IndexRootPathEntryId
  AND IRP.IndexServiceEntryId = ISE.ServiceEntryId
  AND ISE.ComputerEntryId = CE.ComputerEntryId
  AND JA.IndexCommited = 0
GROUP BY A.ArchiveName, CE.ComputerName,
         (IRP.IndexRootPath + '\' + IV.FolderName),
         IV.Failed, IV.ReadOnly, IV.Offline, IV.Rebuilding, IV.WorkPending
ORDER BY "Items Awaiting Index" DESC
This should give you a list of archives that have items awaiting indexing, so you can concentrate on the larger ones first and determine whether the numbers go up or down.
07-22-2013 02:33 PM
In addition to what JesusWept said, you can also run the following SQL queries to find the top contributing archives with items awaiting indexing.
Waiting to be indexed
------------------------------------
Use <vaultStoreDatabase>
select Records, ArchiveName
FROM
(select ArchivePoint.ArchivePointId, count(*) Records
from journalarchive
inner join ArchivePoint on ArchivePoint.ArchivePointIdentity = JournalArchive.ArchivePointIdentity
where indexcommited=0
group by JournalArchive.ArchivePointIdentity, ArchivePoint.ArchivePointId) SQ
INNER JOIN EnterpriseVaultDirectory.dbo.ArchiveView ON ArchiveView.VaultEntryId = SQ.ArchivePointId
order by Records desc
WAITING TO UPDATE
------------------------------
Use <vaultStoreDatabase>
select Records, ArchiveName
FROM
(select ArchivePoint.ArchivePointId, count(*) Records
from journalupdate
inner join ArchivePoint on ArchivePoint.ArchivePointIdentity = JournalUpdate.ArchivePointIdentity
where indexcommitted=0
group by JournalUpdate.ArchivePointIdentity, ArchivePoint.ArchivePointId) SQ
INNER JOIN EnterpriseVaultDirectory.dbo.ArchiveView ON ArchiveView.VaultEntryId = SQ.ArchivePointId
order by Records desc
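If you just want the single total that event 41021 reports for a vault store, a minimal count (a sketch assuming the same JournalArchive schema as the queries above; substitute your own vault store database name) would be:

```sql
-- Overall indexing backlog for one vault store; <vaultStoreDatabase> is a
-- placeholder for your vault store database name, as in the queries above
Use <vaultStoreDatabase>
select count(*) Records
from journalarchive
where indexcommited = 0
```

Running this periodically is a cheap way to see whether the backlog is draining or still growing.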
You can also take a dtrace of EVIndexVolumesProcessor and compare it against the process flow tech note http://www.symantec.com/docs/HOWTO59168 to see what is NOT happening in your traces.
I have seen cases where DCOM or firewall-related problems create index backlog issues.
07-23-2013 08:03 AM
Hello,
Thanks to all; you led me to the solution.
I triple-checked the logs and found earlier index failure events for the most recent volume of the index in question.
Even though at the time of event ID 41021 the index volume appeared to be healthy, it apparently was not.
We rebuilt just the most recent volume of the index; it completed within 12 hours, and all the problems cleared.
Thanks to all once again.
07-23-2013 08:07 AM
Just for future reference, rebuilds should be a last resort; a Synchronization should be sufficient.
You may get to the point where you have so many items pending on a really large index that a rebuild could take two months or more to complete.
07-23-2013 08:12 AM
Synchronization was looping:
15/07/2013 09:21:03 Checking index volume for failed state.
15/07/2013 09:21:03 The failed reason on the index volume may be caused by a temporary problem. Will continue synchronizing the volume.
Failed volume reason: Indexing engine error
15/07/2013 09:21:03 Clearing the failed status of the volume.
15/07/2013 09:21:04 Failed status of the volume cleared.
15/07/2013 09:21:05 Performing initial verification for the index volume.
15/07/2013 09:21:06 Completed initial verification for the index volume.
15/07/2013 09:21:06 Retrying failed items.
15/07/2013 09:26:51 The synchronize paused as it encountered an error.
15/07/2013 09:26:56 The synchronize has resumed its processing.
15/07/2013 09:32:10 The synchronize paused as the index server processing the task is overloaded.
15/07/2013 09:33:11 The synchronize has resumed its processing.
15/07/2013 09:38:23 The synchronize paused as the index server processing the task is overloaded.
15/07/2013 09:39:24 The synchronize has resumed its processing.