Enterprise Vault Error On Multiple Servers

Ryan_Dong1
Level 4

Event Type: Error
Event Source: Enterprise Vault
Event Category: Storage Crawler
Event ID: 6760
Date:  11/25/2010
Time:  8:00:59 PM
User:  N/A
Computer: <EDITED>-SEV03
Description:
Error from EMC Centera FPLibrary
Function call: FPXMLTag.GetAttributeValue(AttachmentId)<FPTag.GetStringAttribute (AttachmentId)<_FPTag_GetStringAttribute(-,AttachmentId,768,-,128,-)<FPTag_GetStringAttributeW(-,AttachmentId,-,0)
Status: FP_ATTR_NOT_FOUND_ERR (-10018)
Reason: Attribute with that name not found
Pool Address: ###.##.###.144,###.###.###.145?\\<EDITED>\H$\Centera\Email_Archive_MERGED.pea
 

11 REPLIES

MichelZ
Level 6
Partner Accredited Certified

Hi

How often do you have that error?

When do you have that error? (e.g. during archiving?)

What version of Enterprise Vault do you have?

We need to know more than just the error message.

 

Cheers


cloudficient - EV Migration, creators of EVComplete.

JesusWept3
Level 6
Partner Accredited Certified

Doing some research, it looks like it may be an error communicating with the Centera itself. You are going to have to enable Centera SDK logging.

But here's an excerpt from another issue, from someone else, separate from EV:

"As you can see, -10018 happened while getting a primary cluster information for writing. It means there is a missing reference attribute of cluster information from Centera due to the different version of SDK. It's not real error and no impacts on a write request."

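In EV's case it's the same shape of call. Roughly, against the FPLibrary C API, the attribute read looks like this (just a sketch based on the standard Centera SDK C signatures, so double-check against the SDK version EV is actually loading; "AttachmentId" is the attribute from the event above):

#include <stdio.h>
#include "FPAPI.h"   /* EMC Centera SDK C header */

/* Sketch: read an optional string attribute from a tag, treating
 * FP_ATTR_NOT_FOUND_ERR (-10018) as "attribute not present" rather
 * than as a fatal failure. Signatures follow the Centera SDK C API;
 * verify against your installed SDK version. */
static int get_optional_attr(FPTagRef tag, const char *name,
                             char *buf, FPInt buflen)
{
    FPInt len = buflen;

    FPTag_GetStringAttribute(tag, name, buf, &len);

    FPInt err = FPPool_GetLastError();
    if (err == FP_ATTR_NOT_FOUND_ERR) {
        buf[0] = '\0';   /* same -10018 as in the event log */
        return 0;        /* caller decides if a missing attribute matters */
    }
    if (err != 0) {
        fprintf(stderr, "FPLibrary error %ld reading '%s'\n",
                (long)err, name);
        return -1;       /* a real error, e.g. a connection problem */
    }
    return 1;            /* attribute present, value is in buf */
}

The point being, -10018 on its own doesn't tell you whether the caller treats it as fatal; the SDK log shows the calls around it.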
https://www.linkedin.com/in/alex-allen-turl-07370146

AndrewB
Moderator
Partner VIP Accredited

Do you have more than one Centera? Does EV have proper access to both?

Ryan_Dong1
Level 4

We are receiving this error almost daily.  We have an 8-node environment: 7 active, 1 passive.  This error message shows up on all 7 active nodes at different times.  It is happening during our archiving task windows.  We are running EV 8 SP4.  If you need any other additional information, please let me know.

Ryan_Dong1
Level 4

We have 2 Centeras; EV has proper access to both.  One is active, the other is for offloading (at least that's my understanding).

JesusWept3
Level 6
Partner Accredited Certified

Get a Centera SDK log and post it here if you can.

https://www.linkedin.com/in/alex-allen-turl-07370146

Korbyn
Level 5
Partner Accredited

JW2, are you referring to modifying the environment variable to get the Centera log?

http://www.symantec.com/business/support/index?page=content&id=TECH43501

JesusWept3
Level 6
Partner Accredited Certified

Yup. So before the archiving window takes place, put it on, and take it off after you hit the first error. These logs grow big and they grow fast; if you forget to take the environment variable off, then expect that after several days your EV will shut down because all drive space has been consumed :)

But basically we want to see that 10018 error and see exactly where it's erroring, as the previous examples I've seen haven't exactly failed on that call; it's a call before that (a connection call).
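If you want to sanity-check the logging setup outside the EV services first, you can set the variable per-process for a one-off repro run, something like this (the variable name and path below are placeholders from memory, so take the exact name and format from the technote Korbyn linked):

#include <stdio.h>
#include <stdlib.h>

/* Sketch: set the Centera SDK logging variable for this process only,
 * so a single repro run gets logged without touching the system-wide
 * environment. FP_LOG_STATE_PATH and the file path are placeholders
 * -- use the exact variable name and value from TECH43501. The EV
 * services themselves need the system-wide variable (plus a restart). */
int main(void)
{
    if (_putenv("FP_LOG_STATE_PATH=C:\\Temp\\centera-sdk-log.txt") != 0) {
        perror("_putenv");
        return 1;
    }
    printf("logging variable set: %s\n", getenv("FP_LOG_STATE_PATH"));

    /* ...run the failing FPLibrary operation here so the SDK writes
     * its trace, then exit so the log stops growing... */
    return 0;
}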

https://www.linkedin.com/in/alex-allen-turl-07370146

Ryan_Dong1
Level 4

Here are 2 more sets of errors that I have attached:

JesusWept3
Level 6
Partner Accredited Certified
Accepted Solution

OK, so it's either a PEA error or a communication error to the Centera (possibly because of mismatched Centera SDKs, etc.).

You need to get the log, repro it, and if the cause isn't obvious, you'll most likely need to talk to your Centera admins or get a case opened with Centera.
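Both of those failure modes surface at the pool open, which uses the same "IPs?PEA file" address format as the Pool Address line in your original event. A sketch (placeholder address and UNC path, untested; signatures from the standard Centera SDK C API):

#include <stdio.h>
#include "FPAPI.h"   /* EMC Centera SDK C header */

/* Sketch: open a Centera pool with an "IPs?PEA file" address string.
 * A bad PEA and a cluster communication problem both fail here,
 * before any clip or tag calls, and they come back as different
 * FP_* codes, which is what narrows the diagnosis down. */
int main(void)
{
    const char *address =
        "10.0.0.144,10.0.0.145"                /* placeholder IPs */
        "?\\\\evserver\\H$\\Centera\\Email_Archive_MERGED.pea";

    FPPoolRef pool = FPPool_Open(address);
    FPInt err = FPPool_GetLastError();

    if (err != 0) {
        fprintf(stderr, "FPPool_Open failed: %ld\n", (long)err);
        return 1;
    }

    printf("pool opened OK\n");
    FPPool_Close(pool);
    return 0;
}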

https://www.linkedin.com/in/alex-allen-turl-07370146

Ryan_Dong1
Level 4

Due to strict change management, I'll have to wait until Thursday to gather and post some of those logs.  Please stay tuned.