Slow duplication between two sites, each with a 5240 Appliance
I would like to better understand how optimized duplication works between two 5240 Appliances.
I have a case of slow duplication between two sites, each with a 5240. A case was opened with Veritas, and it turned out that the pipe between the sites is not big enough to accommodate the heavy duplication traffic. That part is understood and more or less accepted by the customer. However, one thing does not make sense: Backline's explanation of how data is transferred from the source appliance to the target.

Backline said that in an SLP the first job is the Backup, followed by the Duplication. Deduplication on the first appliance happens while the Backup job runs. Once the Backup job completes, the entire backup image is transferred to the target appliance, and only then is it deduplicated on that remote appliance. For example, if the backup image is 1 TB and has to be duplicated to the remote appliance, the full 1 TB passes over the link between the sites, and only after arriving is it deduplicated by the remote appliance. In other words, 1 TB of data crosses the link, consuming that much bandwidth on an already small pipe (imagine). I was surprised, but they said that's how it is designed.
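To show why this surprised me: my understanding was that deduplicated replication is generally described as a fingerprint exchange, where the source sends small segment fingerprints first and ships only the segments the target does not already hold. Here is a toy Python sketch of that expected behavior; the segment size, hash choice, and store layout are purely illustrative and are not the actual MSDP implementation:

```python
import hashlib

SEGMENT_SIZE = 4  # tiny segments for the demo; real dedupe engines use far larger ones


def segment(data: bytes):
    """Split a backup image into fixed-size segments."""
    return [data[i:i + SEGMENT_SIZE] for i in range(0, len(data), SEGMENT_SIZE)]


def fingerprint(seg: bytes) -> str:
    """Identify a segment by its content hash."""
    return hashlib.sha256(seg).hexdigest()


def optimized_duplicate(image: bytes, target_store: dict) -> int:
    """Replicate an image into the target's dedupe store, sending only
    segments whose fingerprints the target does not already have.
    Returns the number of data bytes that crossed the 'wire'."""
    bytes_sent = 0
    for seg in segment(image):
        fp = fingerprint(seg)
        # The fingerprint check is tiny metadata traffic, not the data itself.
        if fp not in target_store:
            target_store[fp] = seg  # only unique segments cross the link
            bytes_sent += len(seg)
    return bytes_sent
```

Under this model, a second image that shares most of its segments with the first would send only the changed data, not the full image size. That is why the "transfer everything, then dedupe at the target" description did not match what I expected.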
My question is: is this really how optimized duplication works?