
Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

I would like recommendations for increasing the speed of file system backups of file servers running Windows Server 2012 and newer that use Windows deduplicated volumes.  We currently have many file servers ranging in size from a few TB to 12+ TB.

We are using several options to try to speed up the backups:

  • NB Client version is 8.1.1
  • Client property "Use Change Journal" is enabled
  • Client-side deduplication is enabled
  • Accelerator is enabled
  • Multiple streams are enabled
  • Backup Selections is set to ALL_LOCAL_DRIVES

The main problem is the Shadow Copy Components portion of the client backups which can take multiple days to complete and is limited to one stream.

I often see messages like these:

Jul 12, 2020 12:01:35 PM - Info bpbkar (pid=6616) not using change journal data for <Shadow Copy Components:\>: not supported for non-local volumes / file systems
Jul 12, 2020 12:40:16 PM - Info bpbkar (pid=6616) not using change journal data for <Shadow Copy Components:\>: forcing rescan, each file will be read in order to validate checksums
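When a lot of clients produce these messages, it can help to pull them out of the job details mechanically rather than eyeballing each log. A small sketch, assuming only the bpbkar message format quoted above (the regex is my own, not a NetBackup tool):

```python
import re

# Scan bpbkar job-detail text for "not using change journal data" messages,
# collecting the affected object and the reason NetBackup gives.
# The message layout is assumed from the two log lines quoted above.
PATTERN = re.compile(
    r"not using change journal data for <(?P<obj>[^>]+)>:\s*(?P<reason>.+)$"
)

def change_journal_fallbacks(log_text):
    """Return a list of (object, reason) pairs found in the log text."""
    hits = []
    for line in log_text.splitlines():
        m = PATTERN.search(line)
        if m:
            hits.append((m.group("obj"), m.group("reason").strip()))
    return hits

sample = """\
Jul 12, 2020 12:01:35 PM - Info bpbkar (pid=6616) not using change journal data for <Shadow Copy Components:\\>: not supported for non-local volumes / file systems
Jul 12, 2020 12:40:16 PM - Info bpbkar (pid=6616) not using change journal data for <Shadow Copy Components:\\>: forcing rescan, each file will be read in order to validate checksums
"""
for obj, reason in change_journal_fallbacks(sample):
    print(obj, "->", reason)
```

Running this over a batch of job logs makes it easy to see which objects consistently fall back to a full rescan.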
10 Replies

Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

1. Are you using DFSR? There are special configurations needed to efficiently back up DFSR servers.

2. There is an option in the policy Attributes tab called "Enable optimized backup of Windows deduplicated volumes". You'll want to ensure that's checked.

3. Why do you have client-side deduplication enabled? Are the servers located outside of your data center/in remote locations? 

4. How many drives are mapped to these file servers? How often is the data on the servers changing? Are any of the drives mapped over the network?

5. If you're using Accelerator, you need to make sure you're running Accelerator Forced Rescan backups semi-regularly (I recommend using your Monthly backups for this, but it can be 3-6 months if you're not having any issues). 
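For point 5, the cadence decision is simple enough to automate alongside whatever scheduling you already do. A minimal sketch (the function and cadence values are illustrative, not a NetBackup feature):

```python
from datetime import date, timedelta

# Decide whether an Accelerator "forced rescan" full is due, given when the
# last one ran and the cadence you've chosen -- monthly per the
# recommendation above, stretched to 90-180 days if backups are healthy.
def forced_rescan_due(last_rescan: date, today: date, cadence_days: int = 30) -> bool:
    return (today - last_rescan) >= timedelta(days=cadence_days)

# Example: last forced rescan ran 45 days ago on a monthly cadence.
print(forced_rescan_due(date(2020, 6, 1), date(2020, 7, 16)))
```

The actual forced rescan is configured on the schedule itself; this only tracks whether one is overdue.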



Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

Thank you for the response.

  1. DFSR is in use
  2. Due to past bugs that made data from the file server unrecoverable when this option was used, it is against company policy to enable it.
  3. Client-side deduplication is desirable to improve the efficiency of backups so that media servers do not become a bottleneck.  I work with many major backup technologies, and client-side deduplication is a standard best practice these days.
  4. There are anywhere from 2 to 13 drives that I know of on these file servers.  They are not mapped over the network; they are local drives.
  5. Noted

Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

My condolences on needing to back up DFSR data. Please read the following technote for the correct procedures: How to perform backup and restore of Microsoft DFSR data using NetBackup https://www.veritas.com/support/en_US/article.100038589

The last time I had a DFSR setup that required backups, I got to have several long, "fun" conversations with the customer about what they would be in for, starting with every full taking 7 days total because of the single-stream nature of the job. Most of the shared data ran fairly quickly, but there's always that one folder with millions and millions of 2k files, and combined with DFSR it got ugly: one particular DFSR folder took 3 days by itself to back up.

You will probably want to have a conversation with the customer before starting your reconfiguration. Maybe one or more of those DFSR shares doesn't actually change and can just be backed up monthly; maybe they have 4 separate boxes hooked into the setup and you can spread the backup load out between them to minimize runtime (nodeA backs up folder1, nodeB backs up folder2, etc.). You are probably going to have to learn a fair amount about their setup in order to protect it properly and maximize your aggregate throughput, though. Best of luck!
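The load-spreading idea (nodeA backs up folder1, nodeB backs up folder2, etc.) is essentially greedy bin packing. A rough sketch, with folder names and runtime estimates purely illustrative:

```python
import heapq

# Greedily assign each DFSR folder (weighted by estimated size or last-full
# runtime) to the node with the least work so far, so no single node ends
# up running for days while the others sit idle.
def spread_folders(folders, nodes):
    """folders: list of (name, estimated_hours); nodes: list of node names.
    Returns {node: [folder names]}, largest folders placed first."""
    heap = [(0.0, n) for n in nodes]  # (assigned hours, node)
    heapq.heapify(heap)
    assignment = {n: [] for n in nodes}
    for name, hours in sorted(folders, key=lambda f: -f[1]):
        load, node = heapq.heappop(heap)  # least-loaded node
        assignment[node].append(name)
        heapq.heappush(heap, (load + hours, node))
    return assignment

# One 3-day monster folder plus a few quick ones, spread across two nodes.
folders = [("folder1", 72.0), ("folder2", 6.0), ("folder3", 5.0), ("folder4", 4.0)]
print(spread_folders(folders, ["nodeA", "nodeB"]))
```

With estimates like these, the slow folder gets a node to itself while the quick ones share the other, which is exactly the outcome you want from the split.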

Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

On the forced rescan message: there's a bug in NetBackup 8.1.1 where the archive bit doesn't get cleared on the backed-up files. This may cause Accelerator backups to behave like full backups. Ask for EEB 3989637, or upgrade to 8.1.2.

We also have an 8.1.1 EEB in verification regarding DFSR performance. You can check out EEB 4003077 to see if it will help you. Version 2 of 4003077 includes 3989637.


Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

Besides those two previous helpful replies, you may also want to be careful with deduplication enabled at the OS level because, unless I'm mistaken, you may have problems with the restore (it fails). Please re-verify the compatibility list for your OS and NetBackup versions.
Good luck

Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

@richardw2 how many active streams (backup jobs reading data from your file servers) do you keep on the client? I would suggest 3-4 active streams per client; this will help the backup jobs complete sooner with better overall performance.


Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

The policies allow for multiple streams.  Since the primary concern here is regarding DFSR servers with deduplicated volumes, that is the focus.  The backup streams for the non-DFSR data (up to around 4 streams - 1 per drive) complete very quickly - say 15 to 20 minutes or so.  The stream for the Shadow Copy Components takes a very long time.  It can take many days at times.  I am trying to find any ways possible to speed up the backup process for the Shadow Copy Components.


Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

@hha_mea I do not understand this comment.  What do you mean about deduplicated data activated on the OS?


Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

Hello @richard,
Please take a look at the supported file systems in the Software Compatibility List (SCL).
Snip:

Regarding the Microsoft Windows Server data deduplication feature:
- NTFS deduplication volume can be backed up to any type of storage unit which does not further deduplicate the data.
- Optimized Backup occurs for NTFS deduplication volumes when possible. Per Microsoft design, any restore from Optimized Backup is non-optimized. This means, after restore, files are in non-optimized form until the next optimization is run by the OS schedule. Be sure adequate space is available for restore.
- By design, TIR is not supported on NTFS deduplication volumes.
- FlashBackup is not supported with NTFS deduplication volumes.
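The "adequate space" point above is worth checking before any restore, since a restore from an optimized backup lands in non-optimized form: the target needs room for the logical size of the files, not their deduplicated on-disk size. A quick pre-restore sanity check (function name, paths, and headroom are illustrative):

```python
import shutil

# True if the target volume can hold the fully rehydrated (non-optimized)
# data plus a safety margin. logical_bytes is the un-deduplicated total
# size of the files being restored.
def restore_space_ok(target_path, logical_bytes, headroom=0.10):
    free = shutil.disk_usage(target_path).free
    return free >= logical_bytes * (1 + headroom)

# e.g. restore_space_ok("D:\\", 4 * 1024**4)  # 4 TB of logical data
```

After the restore completes, files stay non-optimized until the OS dedup optimization job next runs on its schedule.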

Re: Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

The DFSR data is *in* the Shadow Copy Components, which is why it takes so long. As others have suggested, your best solution will be to split the backups, per this TechNote: https://www.veritas.com/support/en_US/article.100038589.

You can split the backups into multiple policies, and specify the exact DFSR folders you want to be backed up within each policy. You will need to work with the appropriate sysadmin to identify the best solution to backing these servers up. Some of them may be small enough to only need one policy with a backup selection of ALL_LOCAL_DRIVES. This will protect DFSR, System State, and any other drives on the server.

For the ones with more DFSR data, you will need to split them into separate policies. One policy will protect the System State and the local drives, and another policy will specify the exact path to the DFSR folders that your sysadmin wants backed up. If there are 10 DFSR folders, and 8 of them are small, they could all go in one policy and the remaining 2 could be in a separate policy. You can test out performance/backup speeds of the folders and then determine which ones are taking the longest to back up.
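The grouping step at the end (time each folder, then decide which ones share a policy) can be sketched as a simple threshold split. Folder names, durations, and the threshold are all illustrative:

```python
# After measuring each DFSR folder's backup duration, group the quick ones
# into a shared policy and give the slow ones their own policy (or policies).
def split_by_duration(folder_hours, slow_threshold=4.0):
    """folder_hours: {folder: measured hours}. Returns (fast, slow) lists."""
    fast = sorted(f for f, h in folder_hours.items() if h < slow_threshold)
    slow = sorted(f for f, h in folder_hours.items() if h >= slow_threshold)
    return fast, slow

# 8-small / 2-slow situations like the one described above fall out directly.
timings = {"Share01": 0.5, "Share02": 0.7, "Share03": 12.0, "Share04": 36.0}
print(split_by_duration(timings))
```

The fast list becomes one policy's backup selections; each entry in the slow list is a candidate for its own policy, possibly spread across DFSR member servers as discussed earlier in the thread.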