
eDiscovery Clearwell 7.1.2 ERROR production exceptions

bc1410
Level 5

Hello

 

Has anyone seen the following Clearwell exceptions from a production export?

 

com.teneo.esa.production.job.ProductionJobException: [#416227] Production job failed with exception: com.teneo.esa.production.job.ProductionJobException: [#416224]

3 REPLIES

Daly
Level 5
Partner Employee Accredited Certified

Hello bc1410, are you able to provide the full error?

bc1410
Level 5

Hello Daly -

 

Here is the error we got:

2016-xxxxxx 21:30:15,948 INFO  [production.xxxxxxmon.ProductionSlipsheetBitmaps] (RPC-thread-1 - clearwell.XXXXXX.xxxxxx:2595:414195208951026303:) CaseName:[XXXXXX]  UserName:[ XXXXXX -880339586] Production Slipsheet Search Bitmaps -- Completed bitmap/label generation, now retrieving information from t_productionslipsheetreport
2016-xxxxxx 21:30:15,963 INFO  [production.xxxxxxmon.ProductionSlipsheetBitmaps] (RPC-thread-1 - clearwell- XXXXXX.xxxxxx:2595:414195208951026303:) CaseName:[ XXXXXX]  UserName:[ XXXXXX -880339586] Production Slipsheet Search Bitmaps -- Retrieved information from t_productionslipsheetreport, now populating bitmaps.
2016-xxxxxx 21:30:15,979 INFO  [production.xxxxxxmon.ProductionSlipsheetBitmaps] (RPC-thread-1 - clearwell- XXXXXX .xxxxxx:2595:414195208951026303:) CaseName:[ XXXXXX]  UserName:[ XXXXXX -880339586] Production Slipsheet Search Bitmaps -- Done populating bitmaps.
2016-xxxxxx 21:30:15,979 INFO  [production.xxxxxxmon.ProductionSlipsheetBitmaps] (RPC-thread-1 -  clearwellxxxxxx.xxxxxx:2595:414195208951026303:) CaseName:[xxxxxx]  UserName:[xxxxxx-880339586] Production Slipsheet Search Bitmaps -- Now storing bitmaps.
2016-xxxxxx 21:30:16,166 WARN  [resource.lease.expiry] (resourcemgr.resourcemgr:) [#165005] Lease 0.4.2.1399450 used to access database was dereferenced without being released
2016-xxxxxx 21:30:16,665 INFO  [production.xxxxxxmon.ProductionSlipsheetBitmaps] (RPC-thread-1 -  clearwellxxxxxx.xxxxxx:2595:414195208951026303:) CaseName:[xxxxxx]  UserName:[xxxxxx-880339586] Production Slipsheet Search Bitmaps -- Completed generating production slipsheets for slipsheet search.
2016-xxxxxx 21:30:16,665 ERROR [production.job.ProductionJobProcessor] (RPC-thread-1 -  clearwellxxxxxx.xxxxxx:2595:414195208951026303:) CaseName:[xxxxxx]  UserName:[xxxxxx-880339586] [#416223] Exception while {0} - {1}
2016-xxxxxx 21:30:16,665 INFO  [utilitynodes.resourcemanager.RMRequestManager] (RPC-thread-1 -  clearwellxxxxxx.xxxxxx:2595:414195208951026303:) CaseName:[xxxxxx]  UserName:[xxxxxx-880339586] Releasing all pending requests from the Resource Manager.
2016-xxxxxx 21:30:16,665 INFO  [jobmanager.remote.lifecycle] (RPC-thread-1 -  clearwellxxxxxx.xxxxxx:2595:414195208951026303:) CaseName:[xxxxxx]  UserName:[xxxxxx-880339586] Remote Processing ended with error for job id 1.5.130.6771972349756445218
com.teneo.esa.production.job.ProductionJobException: [#416227] Production job failed with exception: com.teneo.esa.production.job.ProductionJobException: [#416224] Could not determine bates count for DocPart .05.05.xxxxxx:BODY: for Folder xxxxxx, source
    at com.teneo.esa.production.job.ProductionJobException.runFailed(ProductionJobException.java:74)
    at com.teneo.esa.production.job.ProductionJobProcessor.execute(ProductionJobProcessor.java:66)
    at com.teneo.esa.production.job.ProductionJobProcessor.access$000(ProductionJobProcessor.java:27)
    at com.teneo.esa.production.job.ProductionJobProcessor$1.run(ProductionJobProcessor.java:90)
    at com.teneo.esa.jobmanager.remote.RemoteJobCapsule.call(RemoteJobCapsule.java:73)
    at com.teneo.esa.jobmanager.remote.RemoteJobCapsule.call(RemoteJobCapsule.java:26)
    at com.teneo.esa.xxxxxxmon.remote.ServerSideRemoteExecutor$CancelHandler.callNested(ServerSideRemoteExecutor.java:507)
    at com.teneo.esa.xxxxxxmon.remote.ServerSideRemoteExecutor$CancelHandler.call(ServerSideRemoteExecutor.java:483)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: com.teneo.esa.production.job.ProductionJobException: [#416224] Could not determine bates count for DocPart .05.05.xxxxxx:BODY: for Folder xxxxxx, source
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at com.teneo.esa.production.job.Engine.run(Engine.java:130)
    at com.teneo.esa.production.job.ProductionJobProcessor.execute(ProductionJobProcessor.java:51)
    at com.teneo.esa.production.job.ProductionJobProcessor.access$000(ProductionJobProcessor.java:27)
    at com.teneo.esa.production.job.ProductionJobProcessor$1.run(ProductionJobProcessor.java:90)
    at com.teneo.esa.jobmanager.remote.RemoteJobCapsule.call(RemoteJobCapsule.java:73)
    at com.teneo.esa.jobmanager.remote.RemoteJobCapsule.call(RemoteJobCapsule.java:26)
    at com.teneo.esa.xxxxxxmon.remote.ServerSideRemoteExecutor$CancelHandler.callNested(ServerSideRemoteExecutor.java:507)
    at com.teneo.esa.xxxxxxmon.remote.ServerSideRemoteExecutor$CancelHandler.call(ServerSideRemoteExecutor.java:483)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:139)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:909)
    ... 1 more
Caused by: com.teneo.esa.production.job.ProductionJobException: [#416224] Could not determine bates count for DocPart .05.05.xxxxxx:BODY: for Folder xxxxxx, source
    at com.teneo.esa.production.job.ProductionJobException.zeroPagesProduced(ProductionJobException.java:64)
    at com.teneo.esa.production.job.Engine$ProductionBlockingTask.call(Engine.java:430)
    at com.teneo.esa.production.job.Engine$ProductionBlockingTask.call(Engine.java:416)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    ... 1 more
2016-xxxxxx 21:30:16,696 DETAIL [admin.shell.Lifecycle] (RMI TCP Connection(2890)-10.10.128.144:) Graceful shutdown requested for reason : Job Finished
2016-xxxxxx 21:30:16,806 DETAIL [admin.shell.Lifecycle] (RMI TCP Connection(2890)-10.10.128.144:) broadcasting exit status STARTING
2016-xxxxxx 21:30:16,806 DETAIL [admin.shell.Lifecycle] (RMI TCP Connection(2890)-10.10.128.144:) finished sending exit status STARTING
2016-xxxxxx 21:30:16,930 DETAIL [admin.shell.Lifecycle] (Shell : shutdown thread:) Waiting for services to exit (normally=true)
2016-xxxxxx 21:30:17,211 DETAIL [admin.shell.Lifecycle] (Shell : shutdown thread:) broadcasting exit status SERVICES_DESTROYED
2016-xxxxxx 21:30:17,211 DETAIL [admin.shell.Lifecycle] (Shell : shutdown thread:) finished sending exit status SERVICES_DESTROYED
2016-xxxxxx 21:30:17,227 DETAIL [admin.shell.Lifecycle] (Shell : shutdown thread:) *** EXITING NORMALLY NOW! ***
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor] Memory usage report:
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor] Young Collections:
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of collections = 3271.
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     total promoted =        714123889 (size 44708325176).
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     max promoted =          4217593 (size 276765888).
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     total YC time =         68.802 s (total paused 67.783 s).
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     mean YC time =          21.034 ms (mean total paused 20.722 ms).
2016-xxxxxx 21:30:17,227 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     maximum YC Pauses =     213.933 , 214.603, 240.214 ms.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor] Old Collections:
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of collections = 265.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     total promoted =        184012497 (size 11435325520).
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     max promoted =          3923462 (size 246893472).
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     total OC time =         185.579 s (total paused 32.155 s).
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     mean OC time =          700.299 ms (mean total paused 121.340 ms).
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     maximum OC Pauses =     1255.188 , 1256.858, 4620.307 ms.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of emergency parallel sweeps  = 43.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of internal compactions = 124.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]       15 of these were aborted because they timed out.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of internal compactions skipped because pointer storage overflowed = 7.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of external compactions = 128.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]       12 of these were aborted because they timed out.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]     number of external compactions skipped because pointer storage overflowed = 6.
2016-xxxxxx 21:30:17,243 INFO  [STDOUT] (remotejob:46010) [INFO ][gcrepor]
2016-xxxxxx 21:30:17,570 INFO  [STDOUT] (remotejob:46010) [End Of File]

 

Daly
Level 5
Partner Employee Accredited Certified

Based on the error, it reads as though some documents are not converting, or are producing 0 pages, when they go through the production process.
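If it helps to narrow that down, here is a minimal sketch (Python) that pulls the failing DocPart and Folder identifiers out of any [#416224] lines in a saved copy of the log - the file name passed in at the bottom is just a placeholder, not an actual Clearwell log name:

import re

# Matches the [#416224] message seen in the trace above, e.g.
# "[#416224] Could not determine bates count for DocPart .05.05.xxxxxx:BODY: for Folder xxxxxx, source"
PATTERN = re.compile(
    r"\[#416224\] Could not determine bates count for "
    r"DocPart (?P<docpart>\S+) for Folder (?P<folder>[^,\s]+)"
)

def failing_docparts(log_path):
    """Yield (docpart, folder) pairs for every [#416224] line in the log."""
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                yield match.group("docpart"), match.group("folder")

# "production-worker.log" is a placeholder file name - use your saved log.
for docpart, folder in failing_docparts("production-worker.log"):
    print(f"DocPart {docpart} in Folder {folder} produced 0 pages")

Each pair it prints corresponds to a document that produced 0 pages and so is a candidate to exclude from the production run.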

Are you able to locate the case logs folder? It should be d:\cw\v712\data\case-logs\<case-name>, and it should contain a log for this production run. That log may provide a little more insight into which files are having issues; you could exclude those from the production to let the rest complete, then work on them afterwards.
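On that note, a quick sweep of the whole case-logs folder for the error codes from the trace above could look like the sketch below - the directory is the path just mentioned (with <case-name> still to be filled in), and the *.log file pattern is an assumption rather than a confirmed Clearwell naming convention:

from pathlib import Path

# Folder from the reply above; replace <case-name> with the real case name.
CASE_LOGS = Path(r"d:\cw\v712\data\case-logs\<case-name>")

# Error codes and level seen in the trace above; extend as needed.
MARKERS = ("[#416223]", "[#416224]", "[#416227]", "ERROR")

for log_file in sorted(CASE_LOGS.rglob("*.log")):
    with open(log_file, encoding="utf-8", errors="replace") as log:
        for line_no, line in enumerate(log, start=1):
            if any(marker in line for marker in MARKERS):
                print(f"{log_file.name}:{line_no}: {line.rstrip()}")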

If this doesn't help, please raise a support case, as it will be a matter of working out which files are causing a problem for the production - the relevant entries might be spread over a few log files.

Hope this helps.