
AIR replication restart

yguilloux
Level 4
Partner

Hello

I have a question regarding deduplication and AIR: when a replication is interrupted for any reason (link failure, server reboot, ...), are the blocks already sent to the other site totally lost? In other words, when the replication is restarted (from 0, of course), is it really not optimized (supposing it is the first time it runs), or can the first part of the replication take advantage of the already-sent blocks?

I imagine the blocks received by the remote MSDP are stored and could potentially be reused... but maybe they are flagged "to be deleted" when the replication process is broken and then cannot be used...
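Conceptually, the behaviour being asked about can be sketched as follows. This is a toy model only, not NetBackup's actual replication protocol; the names (TargetMSDP, replicate) are invented for illustration. It shows why retaining the containers from a failed attempt lets a retry that "restarts from 0" still skip the blocks the target already holds:

```python
# Illustrative sketch only -- NOT NetBackup's actual wire protocol.
import hashlib

def fingerprint(block: bytes) -> str:
    """Content hash used as the dedup key for a block."""
    return hashlib.sha256(block).hexdigest()

class TargetMSDP:
    """Toy remote dedup pool: fingerprint -> stored block ('container')."""
    def __init__(self):
        self.containers = {}

    def has(self, fp: str) -> bool:
        return fp in self.containers

    def store(self, fp: str, block: bytes):
        self.containers[fp] = block

def replicate(blocks, target, fail_after=None):
    """Send blocks to the target, skipping ones it already holds.
    Returns the number of blocks actually transferred."""
    sent = 0
    for i, block in enumerate(blocks):
        if fail_after is not None and i >= fail_after:
            raise ConnectionError("link failure mid-replication")
        fp = fingerprint(block)
        if not target.has(fp):   # dedup check: only ship unknown blocks
            target.store(fp, block)
            sent += 1
    return sent

blocks = [bytes([b]) * 128 for b in range(10)]  # 10 distinct blocks
target = TargetMSDP()

# First attempt fails after 6 blocks; if those 6 containers are
# retained on the target, the fingerprints are not lost.
try:
    replicate(blocks, target, fail_after=6)
except ConnectionError:
    pass

# The retry restarts "from 0", but only the 4 missing blocks travel.
resent = replicate(blocks, target)
print(resent)  # 4
```

If instead the target deleted the partial containers on failure, the retry would transfer all 10 blocks again, which is exactly the difference the question is probing.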

If anybody has knowledge or ideas on this point...

Best regards


5 REPLIES

sdo
Moderator
Partner    VIP    Certified

*bump*   (...because I like this question... does anyone know?)

Michael_G_Ander
Level 6
Certified

The only way we have found is to run another backup that also has replication; this seems to restart the broken replication.

Please vote for the replication retry idea here:

https://www-secure.symantec.com/connect/ideas/retry-failed-scheduled-replications-failed-scheduled-b...

 

The standard questions: Have you checked 1) what has changed, 2) the manual, 3) if there are any tech notes or VOX posts regarding the issue?

Michal_Mikulik1
Moderator
Partner    VIP    Accredited Certified

Hello,

refer to the GarbageCheckRemainDCCount parameter in the Dedup Guide:

"The number of containers from failed jobs not to check for garbage. A failed backup or replication job still produces data containers. Because failed jobs are retried, retaining those containers means NetBackup does not have to send the fingerprint information again. As a result, retried jobs consume less time and fewer system resources than when first run."
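For anyone looking for where this tunable lives: on an MSDP media server it is set in contentrouter.cfg (under the storage path's etc/puredisk directory on typical installs). The fragment below is illustrative only; check your release's Dedup Guide for the actual default and supported range.

```ini
# contentrouter.cfg (MSDP media server) -- illustrative fragment only.
# GarbageCheckRemainDCCount: how many data containers from failed jobs
# are exempted from garbage checking, so a retried replication can
# reuse fingerprints already sent. Value shown is an example, not
# necessarily the default for your release.
GarbageCheckRemainDCCount=60
```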

Rgds

yguilloux
Level 4
Partner

Thanks a lot for this information !

 

Does this mean that failed/retried jobs keep some DCs, but cancelled ones do not?

sdo
Moderator
Partner    VIP    Certified

A cancelled job is a failed job.  I would expect the same DC / garbage rules to apply to any failed job, no matter what the status is.  I would also expect the garbage DCs to be retained only for a specific length of time.

What is not clear to me is...

1) Are all garbage DCs deleted when garbage DC expiry runs no matter how old they are?

2) Or are garbage DCs aged, so that when garbage DC expiry runs, only those garbage DCs over a certain age are deleted?