Forum Discussion

StefanosM
Level 6
9 years ago

NTFS deduplication best practice

Hello to all.

I do not have much experience with backups of NTFS deduplicated volumes, so I'm asking if anyone has experience with big volumes.

I have a customer with 4 TB of NTFS deduplicated data. The overall space reduction is 30%.

My question is whether it is better to back up this NTFS volume to a basic disk using the “Enable optimized backup of Windows deduplicated volumes” option and a scheme of weekly fulls and daily incrementals

or

use an MSDP pool with Accelerator, to reduce the impact of full backups and the need for a huge basic disk? (non-optimized backup)

Furthermore, can I use the Windows change journal with Accelerator to back up NTFS deduplicated volumes (non-optimized backup)?

 

thanks

Stefanos

 

 


7 Replies

Replies have been turned off for this discussion
  • 1) My question is whether it is better to back up this NTFS volume to a basic disk using the “Enable optimized backup of Windows deduplicated volumes” option and a scheme of weekly fulls and daily incrementals

    No dedupe, but you could enable compression; ultimately, the required storage expands with retention (see the rough sizing sketch below).

    2) use an MSDP pool with Accelerator, to reduce the impact of full backups and the need for a huge basic disk? (non-optimized backup)

    This is what I would do, especially if you have on-disk retention of over a week.

    3) Furthermore, can I use the Windows change journal with Accelerator to back up NTFS deduplicated volumes (non-optimized backup)?

    Yes, good idea. Combine it with client-side dedupe to move even less data over the LAN.
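
    To put rough numbers on point 1 versus point 2, here is a back-of-the-envelope sizing sketch in Python. Only the 4 TB front-end size comes from the original post; the daily change rate, retention, and dedup behaviour are assumptions, so plug in your own figures.

    ```python
    # Back-of-the-envelope sizing sketch for the two options discussed above.
    # Only front_end_tb comes from the thread; everything else is an assumption.

    front_end_tb  = 4.0    # protected data (from the original post)
    daily_change  = 0.02   # assumed 2% daily change rate
    retention_wks = 4      # assumed on-disk retention of 4 weeks

    # Option 1: basic disk, weekly full + 6 daily incrementals, no dedupe.
    # Every retained full stores the whole volume again.
    basic_disk_tb = retention_wks * (front_end_tb + 6 * front_end_tb * daily_change)

    # Option 2: MSDP with Accelerator. The first full lands once; after that,
    # roughly only the changed blocks are new to the pool, full or incremental.
    msdp_tb = front_end_tb + retention_wks * 7 * front_end_tb * daily_change

    print(f"Basic disk, no dedupe: ~{basic_disk_tb:.1f} TB")
    print(f"MSDP pool            : ~{msdp_tb:.1f} TB")
    ```

    With those made-up numbers that is roughly 17.9 TB versus 6.2 TB, and the basic disk figure keeps growing linearly as you extend retention.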

  • Thank you for your answer.

    I agree with you that MSDP backup is the best option.

    My only concern is that I have not found any document that clearly states that I can use the change journal with deduplicated volumes. I assume that it is supported, but I prefer to see it written in a Microsoft/Veritas/Veeam/NetWorker/... document.

     

  • I think you should refer to this official post:

     

    https://www.veritas.com/community/blogs/frequently-asked-questions-netbackup-accelerator

     

    Also note this statement:

    Change journal in NTFS would work only when the NetBackup client sees it as an NTFS file system. If you are NFS/CFS mounting that file system somewhere else and backing it up, NetBackup Accelerator cannot take advantage of NTFS. However, the track log is capable of tracking changes even without file-system-level change journals. In my opinion, that is the true value of NetBackup Accelerator.

  • Thanks nbutech, but this is not what I need.

    What I need is something that clearly says that I can use the change journal on a Windows Server 2012 NTFS deduplicated volume.

    I suppose that it is supported, but it would be better to have it in a document.

     

  • There is a section in this:

    https://msdn.microsoft.com/en-us/windowsembedded/hh769304(v=vs.80)

    ...which, by implication, would seem to suggest that USN and NTFS de-duplication can co-exist.

    .

    Remember that the NetBackup Client Change Journal, when used on Windows, is the NTFS USN journal.  The USN journal is file-name based, not block based and not bitmap based, so one can reasonably assume that USN would not be expected to tie in, in any way, with NTFS de-duplication, which is a layer below the file system proper and just above the volume layer.
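
    Not a support statement either, but as a practical sanity check you can see both features reported side by side on the volume in question. A small Python sketch (assuming Python on the file server and an elevated prompt; it only shells out to the built-in fsutil tool and the Deduplication cmdlets):

    ```python
    # Sanity check: does the volume have Data Deduplication enabled AND an
    # active USN change journal? Run elevated on the file server itself.
    import subprocess

    VOLUME = "D:"   # adjust to the deduplicated volume in question

    def run(cmd):
        p = subprocess.run(cmd, capture_output=True, text=True)
        return p.returncode, (p.stdout or p.stderr).strip()

    # USN change journal state -- fsutil errors out if no journal exists
    rc, out = run(["fsutil", "usn", "queryjournal", VOLUME])
    print("USN journal:", "active" if rc == 0 else "not found")
    print(out, "\n")

    # Data Deduplication state for the same volume
    rc, out = run(["powershell", "-NoProfile", "-Command",
                   f"Get-DedupVolume -Volume '{VOLUME}' | Format-List Enabled,SavingsRate"])
    print("Dedup status:")
    print(out)
    ```

    If fsutil reports an active journal on the deduplicated volume, that at least demonstrates co-existence on your own server, even if it is not the written statement you are after.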

  • Found an MSDN doc via:

    https://social.msdn.microsoft.com/Search/en-US?query=USN%20deduplication&beta=0&rn=Ask++Premier+Field+Engineering+%28PFE%29++Platforms&rq=site:http://blogs.technet.com/b/askpfeplat&ac=5#refinementChanges=-0&pageNumber=1&showMore=false

    ...which can be downloaded from:

    http://download.microsoft.com/download/8/8/F/88FB4BBF-8ADC-41E0-A64F-489CEEF5218E/WindowsServer2012R2DeduplicatedVDIDeployment.docx

    ...for a Word doc with a file name of:

    WindowsServer2012R2DeduplicatedVDIDeployment.docx

    ...and a title of:

    Large scale Virtual Desktop Infrastructure deployment using Windows Server 2012 R2 with Storage Tiers and Data Deduplication

    ...which says on page 10:

    2. Enabling “Partial File Optimization,” which allows the Data Deduplication jobs to set up USN range tracking on the volume and not reoptimize recently accessed hot data within a file.

    .

    Which definitely implies that the optimization feature of NTFS Data Deduplication can leverage USN range tracking, and therefore implies not only co-existence but also co-operation.
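
    For completeness, if you wanted to try the “Partial File Optimization” setting mentioned in that quote, my understanding is that it maps to the -OptimizePartialFiles switch on Set-DedupVolume (do verify with Get-Help Set-DedupVolume on your build). A small Python sketch that just drives PowerShell:

    ```python
    # Sketch: enable Partial File Optimization (USN range tracking) on a volume,
    # as described in the quoted MSDN document. The -OptimizePartialFiles switch
    # is my reading of the knob involved -- verify against your server's help.
    import subprocess

    VOLUME = "D:"   # the deduplicated volume

    subprocess.run(["powershell", "-NoProfile", "-Command",
                    f"Set-DedupVolume -Volume '{VOLUME}' -OptimizePartialFiles; "
                    f"Get-DedupVolume -Volume '{VOLUME}' | Format-List *"])
    ```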