Forum Discussion

3 Replies

  • AFAIK, there isn't a public whitepaper detailing the DeDupe process, but in a nutshell it works like this:

    When a backup is DeDupe-enabled, the client initializes the DeDupe Engine, which works in tandem with the DeDupe DB (for checking and comparing hash values, etc.). The first time a new file is backed up, the engine breaks the file into chunks, which are stored in the DeDupe storage location, and reference pointers (metadata) are stored in the NUDF. During the next backup, if the file has changed, only the "delta" (incremental) portion of the file is stored in the NUDF and the reference pointers (metadata) are updated. (See the minimal sketch after these replies for the general idea.)


  • Deduplication occurs across the storage for all users, not just within a single user's backups.
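
This is not the vendor's actual implementation, just a minimal Python sketch of the general idea described in the first reply: split a file into chunks, look each chunk's hash up in an index (standing in for the DeDupe DB), store only chunks that aren't already present, and keep per-file reference pointers so the file can be rebuilt. All names here (`chunk_store`, `file_index`, `backup`, `restore`) and the fixed-size chunking are assumptions for illustration only.

```python
import hashlib

CHUNK_SIZE = 4  # tiny chunk size for illustration; real engines use KB/MB-sized chunks

# Shared chunk store: hash -> chunk bytes. Stands in for the DeDupe
# storage location; its keys play the role of hashes in the DeDupe DB.
chunk_store = {}

# Per-file metadata: file name -> ordered list of chunk hashes
# (the "reference pointers" described above).
file_index = {}

def backup(name, data):
    """Split data into chunks and store only chunks not already present."""
    hashes = []
    new_chunks = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in chunk_store:      # hash lookup ~ DeDupe DB check
            chunk_store[h] = chunk    # unique chunk stored exactly once
            new_chunks += 1
        hashes.append(h)              # always record the reference
    file_index[name] = hashes         # update the file's pointers
    print(f"{name}: {len(hashes)} chunks referenced, {new_chunks} newly stored")

def restore(name):
    """Rebuild a file by following its chunk references."""
    return b"".join(chunk_store[h] for h in file_index[name])

# First backup of a file: every chunk is new.
backup("userA/report.txt", b"AAAABBBBCCCCDDDD")

# The same content backed up by another user adds no new chunks,
# illustrating deduplication across storage for all users.
backup("userB/report.txt", b"AAAABBBBCCCCDDDD")

# The file changes slightly: only the modified ("delta") chunk is stored.
backup("userA/report.txt", b"AAAABBBBCCCCEEEE")

assert restore("userA/report.txt") == b"AAAABBBBCCCCEEEE"
```

Running this prints 4 newly stored chunks for the first backup, 0 for the second user's identical file, and 1 for the changed file, which mirrors the chunk/pointer behaviour described above.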