Forum Discussion

richardw2
Level 3
5 years ago

Recommendations for backing up Windows 2012 and newer file servers with deduplicated volumes

I would like recommendations for increasing the speed of file system backups of Windows file servers running Windows Server 2012 and newer that use Windows deduplicated volumes.  Currently, we have many file servers ranging in size from a few TB to 12+ TB.

We are using several options to try to speed up the backups:

  • NB Client version is 8.1.1
  • Client property "Use Change Journal" is enabled
  • Client-side deduplication is enabled
  • Accelerator is enabled
  • Multiple streams are enabled
  • Backup Selections is set to ALL_LOCAL_DRIVES

The main problem is the Shadow Copy Components portion of the client backups, which can take multiple days to complete and is limited to one stream.

I often see messages like these:

Jul 12, 2020 12:01:35 PM - Info bpbkar (pid=6616) not using change journal data for <Shadow Copy Components:\>: not supported for non-local volumes / file systems
Jul 12, 2020 12:40:16 PM - Info bpbkar (pid=6616) not using change journal data for <Shadow Copy Components:\>: forcing rescan, each file will be read in order to validate checksums
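
For reference, a rough PowerShell sketch for checking what is actually on these volumes (it assumes the Windows Data Deduplication feature and its PowerShell module are installed on the file server; property names may vary slightly by OS version):

  # Which volumes have Windows deduplication enabled, and how much space is it saving?
  Get-DedupVolume | Select-Object Volume, Enabled, SavedSpace, SavingsRate

  # Per-volume optimization status (optimized file count, last optimization run)
  Get-DedupStatus | Format-List Volume, OptimizedFilesCount, LastOptimizationTime

  # Which VSS writers contribute to the Shadow Copy Components backup on this client?
  vssadmin list writers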

10 Replies

  • 1. Are you using DFSR? There are special configurations needed to efficiently back up DFSR servers. (One quick way to see which replicated folders a server hosts is sketched after this list.)

    2. There is an option in the policy Attributes tab called "Enable optimized backup of Windows deduplicated volumes". You'll want to ensure that's checked.

    3. Why do you have client-side deduplication enabled? Are the servers located outside of your data center/in remote locations? 

    4. How many drives are mapped to these file servers? How often is the data on the servers changing? Are any of the drives mapped over the network?

    5. If you're using Accelerator, you need to make sure you're running Accelerator Forced Rescan backups semi-regularly (I recommend using your Monthly backups for this, but it can be 3-6 months if you're not having any issues). 
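
    As a rough sketch of what I mean for point 1, this is one way to list the DFSR replicated folders and their local paths (run it locally on the file server; the WMI namespace belongs to the DFS Replication service, and the commented alternative needs the DFSR management module installed):

      # List DFSR replicated folders and their local root paths on this member
      Get-WmiObject -Namespace "root\MicrosoftDFS" -Class DfsrReplicatedFolderConfig |
          Select-Object ReplicatedFolderName, RootPath

      # Alternative (AD-based view), if the DFSR PowerShell module is available:
      # Get-DfsReplicatedFolder | Select-Object GroupName, FolderName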

     

    • richardw2
      Level 3

      Thank you for the response.

      1. DFSR is in use
      2. Due to past bugs that made data from the file server unrecoverable when this option was used, it is against company policy to enable it.
      3. Client-side deduplication is desirable to improve the efficiency of backups so that media servers do not become a bottleneck.  I work with many major backup technologies, and client-side deduplication is standard best practice these days.
      4. There are anywhere from 2 to 13 drives that I know of on these file servers.  They are not mapped over the network; they are local drives.
      5. Noted
      • Lowell_Palecek
        Level 6

        On the forced rescan message, there's a bug in NetBackup 8.1.1 where the archive bit doesn't get cleared on the backed-up files. This may cause Accelerator backups to behave like full backups (a quick spot-check for this is sketched below). Ask for EEB 3989637, or upgrade to 8.1.2.

        We also have an 8.1.1 EEB in verification regarding DFSR performance. You can check out EEB 4003077 to see if it will help you. Version 2 of 4003077 includes 3989637.
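
        If you want to spot-check whether the archive bit is actually being cleared (assuming your Windows client is set to base incrementals on the archive bit), something like this after a successful backup gives a quick count; the path is just an example:

          # Count files that still have the archive bit set under a sample path
          $sample = 'D:\Shares\ExampleFolder'
          (Get-ChildItem -Path $sample -Recurse -File -ErrorAction SilentlyContinue |
              Where-Object { $_.Attributes -band [IO.FileAttributes]::Archive }).Count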

    • Hamza_H
      Moderator
      Besides those two previous helpful replies, you may also want to be careful with deduplicated data activated on the OS because, unless I'm mistaken, you may have problems with the restore (it fails), so please re-verify the compatibility list for the OS and NBU versions.
      Good luck
      • richardw2
        Level 3

        Hamza_H I do not understand this comment.  What do you mean about deduplicated data activated on the OS?

  • richardw2, how many active streams (backup jobs reading data from your file servers) do you keep on the client? I would suggest 3-4 active streams per client; this will help the backup jobs complete sooner with better overall performance.
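
    If you ever move away from ALL_LOCAL_DRIVES, the streams can also be spelled out explicitly in the Backup Selections with NEW_STREAM directives, roughly like this (drive letters are just placeholders):

      NEW_STREAM
      C:\
      NEW_STREAM
      D:\
      NEW_STREAM
      E:\
      NEW_STREAM
      Shadow Copy Components:\

    How many of those streams actually run at once is still governed by "Maximum jobs per client" and the policy's job limit.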

    • richardw2
      Level 3

      The policies allow for multiple streams.  Since the primary concern here is DFSR servers with deduplicated volumes, that is the focus.  The backup streams for the non-DFSR data (up to around 4 streams, 1 per drive) complete very quickly, say 15 to 20 minutes or so.  The stream for the Shadow Copy Components takes a very long time, sometimes many days.  I am trying to find any way possible to speed up the backup of the Shadow Copy Components.

      • EthanH
        Level 4

        The DFSR data is *in* the Shadow Copy Components, which is why it takes so long. As others have suggested, your best solution will be to split the backups, per this TechNote: https://www.veritas.com/support/en_US/article.100038589.

        You can split the backups into multiple policies, and specify the exact DFSR folders you want backed up within each policy. You will need to work with the appropriate sysadmin to identify the best way to back these servers up. Some of them may be small enough to only need one policy with a backup selection of ALL_LOCAL_DRIVES. This will protect DFSR, System State, and any other drives on the server.

        For the ones with more DFSR data, you will need to split them into separate policies. One policy will protect the System State and the local drives, and another policy will specify the exact path to the DFSR folders that your sysadmin wants backed up. If there are 10 DFSR folders, and 8 of them are small, they could all go in one policy and the remaining 2 could be in a separate policy. You can test out performance/backup speeds of the folders and then determine which ones are taking the longest to back up.
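
        As a very rough sketch of what the DFSR-only policy's Backup Selections could look like (the paths are placeholders; your sysadmin supplies the real replicated-folder roots, and the TechNote above has the exact configuration details):

          NEW_STREAM
          D:\DFSRoots\Projects
          NEW_STREAM
          E:\DFSRoots\Archive
          NEW_STREAM
          E:\DFSRoots\Scans

        With "Allow multiple data streams" enabled on that policy, each NEW_STREAM group becomes its own job, so you can regroup the folders once you see which ones run longest.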