Optimizing duplication of SLPs
Hi Team,
I have an SLP that usually carries an enormous backlog of data. I understand that a single backup image of the client within it can hold several TB of data. How can I get these duplications to run successfully? Many of them fail with error code 50 after replicating around 800 GB.
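For reference, the backlog can be listed from the master server with nbstlutil (exact output varies by NetBackup version); every image whose SLP copies are not yet complete shows up here:

nbstlutil stlilist -image_incomplete -U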
Here is the LIFECYCLE_PARAMETERS content:
MIN_GB_SIZE_PER_DUPLICATION_JOB 50
MAX_GB_SIZE_PER_DUPLICATION_JOB 200
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB 60
DUPLICATION_SESSION_INTERVAL_MINUTES 30
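As I understand these parameters (set in /usr/openv/netbackup/db/config/LIFECYCLE_PARAMETERS on UNIX): MIN_GB_SIZE_PER_DUPLICATION_JOB and MAX_GB_SIZE_PER_DUPLICATION_JOB bound how much data is batched into one duplication job, MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB starts a job even if the minimum size has not been reached, and DUPLICATION_SESSION_INTERVAL_MINUTES is how often nbstserv starts a new duplication session. My understanding is also that a single image larger than the MAX value is never split - the whole image goes into one job - which would explain jobs moving ~800 GB despite the 200 GB cap. If that's right, values like the following (illustrative only, not a recommendation for every environment) would only shape the small-image jobs, not the multi-TB ones:

MIN_GB_SIZE_PER_DUPLICATION_JOB 25
MAX_GB_SIZE_PER_DUPLICATION_JOB 100
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB 30
DUPLICATION_SESSION_INTERVAL_MINUTES 15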
Please suggest.
Thanks
Sid
Do you have a firewall installed, Sid1987?
Mark is right that nearly 2 hours elapsed before the SLP failed - however, it seems that more than 2 hours can elapse without errors:
03/31/2013 14:35:13 - end reading; read time: 2:27:08
I also believe you should try to reduce the fragment size - an image read time in the range of 2 hours is very long. What about disk performance - have you verified that it's any good?
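A rough sketch of how to check both points (commands assume a UNIX master/media server; <storage_unit> and the file path are placeholders, and the options should be verified against your NetBackup version):

# show current storage unit settings, including the maximum fragment size
bpstulist -label <storage_unit> -U

# reduce the maximum fragment size; value is in megabytes (20480 MB = 20 GB, illustrative)
bpsturep -label <storage_unit> -mfs 20480

# rough sequential read test of the source disk (reads 10 GB)
dd if=/path/to/disk_pool/some_fragment of=/dev/null bs=1M count=10240

If the ~800 GB and the 2:27:08 read time refer to the same job, that works out to roughly 90 MB/s, which gives you a baseline to compare the dd result against.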