AIR replication restart

Hello

I have a question regarding deduplication and AIR: when a replication is interrupted for any reason (link failure, server reboot, ...), are the blocks already sent to the other site totally lost? In other words, when the replication is restarted (from zero, of course), is it really unoptimized (assuming it is the first time it runs), or can the first part of the replication take advantage of the blocks that were already sent?

I imagine the blocks received by the remote MSDP are stored and could potentially be reused... but maybe they are flagged "to be deleted" when the replication process is broken and then cannot be used...

If anybody has knowledge or ideas on this point...

Best regards

5 Replies

*bump*   (...because I like this question... does anyone know?)

The only way we have found is to run another backup that also replicates; this seems to restart the broken replication (a quick sketch of that workaround is below).

Please vote for the replication retry idea here:

https://www-secure.symantec.com/connect/ideas/retry-failed-scheduled-replications-failed-scheduled-b...
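A minimal sketch of that workaround, assuming a UNIX master server with the default install path and a policy whose schedule writes to an SLP that includes the replication step; the policy and schedule names are placeholders for your environment:

# Start an immediate manual backup of the SLP-controlled policy:
/usr/openv/netbackup/bin/bpbackup -i -p AIR_Policy -s Full_Schedule

# The SLP then queues a new replication job. Because containers from the
# failed attempt are retained for retried jobs (see the accepted solution
# below), NetBackup should not have to send the fingerprint information
# for the already-replicated blocks again.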

 

The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue
Accepted Solution!

Hello,

refer to the GarbageCheckRemainDCCount parameter in the Dedup Guide:


"The number of containers from failed jobs not to check for garbage. A failed backup or replication job still produces data containers. Because failed jobs are retried, retaining those containers means NetBackup does not have to send the fingerprint information again. As a result, retried jobs consume less time and fewer system resources than when first run."
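If you want to see what this is set to on your own MSDP pool, here is a hedged example; I am assuming the parameter lives in contentrouter.cfg with the other content-router settings, the storage path below is a placeholder, and the line may simply be absent if the default value is in use:

# On the MSDP storage server (storage path is a placeholder):
grep -i GarbageCheckRemainDCCount /msdp_storage/etc/puredisk/contentrouter.cfg
# Illustrative output only, not a recommended value:
# GarbageCheckRemainDCCount=3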

Rgds

Thanks a lot for this information!

 

Does this mean that failed/retried jobs keep some DCs, but cancelled ones do not?

A cancelled job is a failed job.  I would expect the same DC / garbage rules to apply to any failed job no matter what the status is.  I would expect the garbage DCs to only exist (be retained) for a specific length of time.

What is not clear to me is...

1) Are all garbage DCs deleted when garbage DC expiry runs no matter how old they are?

2) Or, are garbage DCs aged, and when garbage DC runs is it only those garbage DCs that are over a certain age which are deleted?
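Not a definitive answer, but one hedged way to observe this in practice is the MSDP crcontrol utility on the storage server: compare data-store usage before and after queue processing, which (as far as I know) is what actually reclaims space from garbage containers. The default UNIX path is shown; please verify the options against the Dedup Guide for your version:

/usr/openv/pdde/pdcr/bin/crcontrol --dsstat            # data store / space usage
/usr/openv/pdde/pdcr/bin/crcontrol --processqueueinfo  # is queue processing currently busy?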