Frequently Asked Questions on NetBackup Accelerator

NetBackup Accelerator is an exciting feature introduced in NetBackup 7.5 and NetBackup Appliances software version 2.5. This blog is not a substitute for NetBackup documentation. NetBackup Accelerator transforms the way organizations do backups, so I am compiling a list of frequently asked questions from the NetBackup Community forums and providing answers. If you have follow-up questions or feedback, post them as comments; I shall try to answer them, or someone else in the community can jump in. Fasten your seat belts! You are about to get accelerated!

What is NetBackup Accelerator?

NetBackup Accelerator provides full backups for the cost of an incremental backup.

cost reduction in full backups = reduction in backup window, backup storage, client CPU, client memory, client disk I/O, network bandwidth etc.

NetBackup Accelerator makes this possible by using a platform- and file-system-independent track log to intelligently detect changed files (there are a number of intellectual properties associated with this technology) and send only the changed segments from those files to the media server. These changed segments are written to a supported storage pool (currently available only on NetBackup appliances, NetBackup Media Server Deduplication Pools and PureDisk Deduplication Option Pools), and an inline optimized synthetic backup is generated.

What is NetBackup Accelerator track log?

The track log is a platform- and file-system-independent change tracking log used by NetBackup Accelerator. Unlike file-system-specific change journals (e.g. the Windows NTFS change journal), there are no kernel-level drivers that run all the time on production clients. The track log comes into action during the backup, when it is populated with entries that NetBackup Accelerator uses to intelligently identify changed files and the changed segments within them.

The size of the track log is a function of the number and size of the files in the file system. It does not grow with the data change rate of the file system.
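Conceptually, the track log can be thought of as a table of per-segment hashes that lets the backup read only the segments that actually changed. Here is a minimal Python sketch of that idea; the segment size and hash algorithm are my own illustrative assumptions, not NetBackup internals:

```python
import hashlib
import os

SEGMENT_SIZE = 128 * 1024  # illustrative segment size, not NetBackup's actual value

def segment_hashes(path, segment_size=SEGMENT_SIZE):
    """Hash fixed-size segments of a file, similar in spirit to track log entries."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(segment_size)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_segments(old_hashes, new_hashes):
    """Return indices of segments whose hash differs from the previous backup (or are new)."""
    changed = []
    for i, h in enumerate(new_hashes):
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed.append(i)
    return changed
```

Only the segments flagged by `changed_segments` would need to be read and sent to the media server; unchanged segments are referenced from the previous backup image.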

So, NetBackup Accelerator is good for full backups. How about incremental backups?

The primary benefit from NetBackup Accelerator is for full backups. However, NetBackup Accelerator also reduces a subset of costs in running incremental backups. 

cost reduction in incremental backups = reduction in client CPU, client memory, client disk I/O

Since I can get full backups for the cost of doing an incremental backup, should I simply delete my incremental schedules and increase the frequency of full backups? 

Not recommended, unless catalog growth is not a concern for you. Note that full backups have larger catalog files (the "*.f" files in the NetBackup image database) than incremental backups. Running full backups in place of your current incremental backups would therefore increase your image catalog size. A larger catalog requires more space on your master server and takes longer to back up.

As I mentioned in the answer to the previous question, NetBackup Accelerator does help with incremental backups as well by significantly reducing the impact of backups on client's resources. Stick with your current schedules and take advantage of NetBackup Accelerator. 
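To make the catalog-growth argument concrete, here is a back-of-envelope calculation. All numbers (file counts, change rate, bytes per catalog entry) are illustrative assumptions, not measured NetBackup values:

```python
# Back-of-envelope catalog growth estimate (all numbers are illustrative
# assumptions, not measured NetBackup values).
files_total = 10_000_000        # files in the file system
daily_change_rate = 0.02        # 2% of files change per day
bytes_per_catalog_entry = 200   # rough size of one ".f" file entry

# A full backup catalogs every file; an incremental catalogs only changed files.
full_catalog = files_total * bytes_per_catalog_entry
incr_catalog = int(files_total * daily_change_rate) * bytes_per_catalog_entry

# A week of "full every day" vs the usual "one full + six incrementals":
daily_fulls = 7 * full_catalog
weekly_full_plus_incr = full_catalog + 6 * incr_catalog

print(f"daily fulls: {daily_fulls / 1e9:.1f} GB of catalog per week")
print(f"weekly full + incrementals: {weekly_full_plus_incr / 1e9:.1f} GB per week")
```

With these assumptions, replacing incrementals with daily fulls grows the catalog by roughly 6x per week, which is exactly why keeping your incremental schedules is recommended.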

What is NetBackup Client Side Deduplication?

NetBackup Client Side Deduplication deduplicates data on the client and sends the unique segments directly to the storage server; the media server is not involved in this data path. For example, if your storage target is a NetBackup 5020 appliance sitting behind a media server, the NetBackup client sends the unique segments directly to the NetBackup 5020 appliance. This design makes it possible for a media server to support many storage pools and clients (scale-out on the front end as well as the back end).

If your storage pool is a Media Server Deduplication Pool (MSDP) or a NetBackup 52xx appliance, the storage server and media server co-exist on the same physical system. Even in this case, NetBackup Client Side Deduplication sends unique segments directly to the storage server (which happens to be a media server as well), so you still get both front-end and back-end scale-out. For example, it is possible for a NetBackup 5220 to host a Media Server Deduplication Pool while also serving as a media server for another NetBackup appliance or media server.

What is NetBackup Optimized Synthetic Backup?

NetBackup Optimized Synthetic Backup is a feature where a full backup is synthesized on the storage server from the previous full backup and subsequent incremental backups, without reading those component images back and writing out a new image. This technology has been in NetBackup since 6.5.4. It is available on all NetBackup appliances, Media Server Deduplication Pools and PureDisk Deduplication Option Pools. Recently some OpenStorage partners have also announced support for this feature.

Can you compare and contrast NetBackup Accelerator and NetBackup Optimized Synthetic backup?

NetBackup Accelerator provides all the values you get from NetBackup Optimized Synthetic backup and a lot more. In fact, if you have a supported storage pool for NetBackup Accelerator, there is really no need for you to use NetBackup Optimized Synthetic backup.

NetBackup Optimized Synthetic backup is a post-backup synthesis: you need a separate schedule that generates the synthetic backup after the incremental backups are done. NetBackup Accelerator generates the full backup inline, while data is being sent from the client.

NetBackup Optimized Synthetic backup requires you to run traditional incremental backups, so all the limitations of a traditional incremental backup apply. For example, those incremental backups require the NetBackup client to enumerate the entire file system to identify changes. NetBackup Accelerator makes it possible to intelligently identify changes and read just the changed files.

I can list a lot more, but you get the point by now. Bottom line is… if your storage pool supports NetBackup Accelerator, there is no need to use the older NetBackup Optimized Synthetic backup schedules.

Can NetBackup Accelerator and NetBackup Client Side Deduplication co-exist?

Of course! In fact, these two features are like milk and cookies. They are tasty by themselves but delicious when eaten together!

NetBackup Accelerator reduces the cost of doing a full backup (see the question “What is NetBackup Accelerator?”  for the definition of the cost). When you combine it with NetBackup Client Side Deduplication, some of the advantages are…

  • Global deduplication without a deduplication processing penalty on the client. For our competitors, turning on source-side deduplication implies resource consumption on the production client system to support dedupe fingerprinting. Because of NetBackup Accelerator, the resources needed on the client are significantly lower in NetBackup. In fact, it is fair to say that NetBackup Accelerator lets you dedupe at the source with confidence.
  • The ability to write directly to the storage server. If Client Side Deduplication is not enabled, Accelerator sends changed segments to the media server first. With Client Side Deduplication enabled, the changed segments go directly to storage. The result is scalability (front-end and back-end scale-out): a media server can support hundreds of clients and tens of storage pools.
  • Support for in-flight encryption and compression (configurable in pd.conf on clients).
  • No incremental licensing cost. Both Accelerator and NetBackup Deduplication are on the same SKU, so you already have both capabilities if you paid for one or the other. Turning these features on or off takes just a click of the mouse, so try it anyway!

Are there any workloads where NetBackup Accelerator cannot be used?

The graphical user interface is designed to grey out the Accelerator check box for policy types where NetBackup Accelerator is not currently supported. Furthermore, if you happen to choose a storage unit that does not support NetBackup Accelerator, policy validation is designed to fail when you try to save changes to the policy.

Are there any design considerations when not to use NetBackup Accelerator when NetBackup Client Side Deduplication is used? 

No! NetBackup Accelerator does not have any negative effects on NetBackup Client Side Deduplication.

Are there any design considerations when not to use Client Side Deduplication when NetBackup Accelerator is used? 

No functional limitations. But there are a couple of situations where you do not currently get the advantage of NetBackup Client Side Deduplication. The good news is that there is nothing you need to do during design or implementation: NetBackup knows not to attempt Client Side Deduplication in these scenarios. I am listing them for the sake of awareness.

  • Remember that NetBackup Client Side Deduplication is available for a subset of NetBackup client platforms; refer to the NetBackup Deduplication documentation for more info. NetBackup Accelerator is available for ALL supported client platforms with the exception of OpenVMS.
  • If your storage pool is NetBackup Cloud (to Nirvanix, AT&T Synaptic, Rackspace, Amazon etc.), NetBackup Client Side Deduplication is not currently available. NetBackup Accelerator is supported.

Note: Thank you for so many follow up questions! I have made sincere attempts to answer all of them down below. Furthermore, I would like to bring your attention to two additional blogs you may be interested in


Thank you for this informative post;
g-force was felt.



A very good synthesis, thanks.

Can you explain how to work with the Accelerator track log? (for troubleshooting, tuning, ...)


Hi Stephane, 

   I shall get back to you on this after consulting with Support to see if there are any plans to publish a TechNote. Typically troubleshooting steps are documented in TNs and in Troubleshooting guides.

Interesting, but what I would like to know is whether my deduplicating storage units are supported. I have a Data Domain 690 with Boost in use, and also an HP B6200 with Catalyst. Can you please let me know if these can be used with NBU Accelerator? If not, when will they be supported? Thx!


NBU Accelerator is available through the Open Storage API to all Symantec OST partners. EMC and HP will need to implement this feature as part of their OST implementation with Data Domain and the B6200 respectively. Once they have done this, you can use accelerator on these devices.

Please note, use of NBU accelerator requires the NBU Data Protection Optimization Option license.

Hope this helps!


Hi Fred,

  As Mayur had already mentioned, the APIs required for Accelerator are made available to all OpenStorage vendors and you could follow up with EMC or HP (in your case) to see where they are with updating their OST plugins to support NetBackup Accelerator. 

  Some of the OST features have been quite difficult for partners to implement on account of architectural limitations of the backend device. For example, it took almost 3 years for partners to build a plugin that can support Optimized Synthetic Backups. With that in mind, what Symantec has done with NetBackup Accelerator is to provide NetBackup Deduplication at no additional cost (both features are in the same SKU). Hence, if you have some decent storage (for instance, depreciated production storage that you can repurpose), simply attach it to your media server and make a Media Server Deduplication Pool (MSDP) for now. That way you can start using NetBackup Accelerator today for the workloads that could really use the performance improvement. Once the vendors provide the plugin, you can migrate backups (bpduplicate) to their storage and transfer the license.

  Just a thought. In any case, do talk to your vendors about their plan. 

Helps somewhat, thank you! I have no idea who to ask how far they are with implementing this (they probably wouldn't tell me anyway!). Don't EMC/HP have to work together with Symantec to get the APIs working on their machines? It would be nice if we could all see somewhere which hardware manufacturers are working on getting it implemented on their systems. It could give certain hardware an edge over a competitor's hardware if one supports it and the other doesn't. For that we would need an overview of who has it, who doesn't, and who is still working on it. Someone in Symantec must know this; it would be great if it could be made public (if at all possible).
I have 9 TB of SATA disk in RAID on each media server, currently not in use. Would that be suitable for use with Accelerator? Also, you mention this is a separate license option. We have a back-end TB license for our company. Does this license encompass the Accelerator option? Thanks!


You would be able to use it. Please read the media server sizing guidelines in this document, starting from page 15. You need to make sure that the RAID you have supports a write speed of 200 MB/sec or more. Since you are running Boost and Catalyst, it looks like you may have media servers with decent processing power and memory; those requirements are also given in the document.

The platform per terabyte license does not cover data protection optimization add-on you need to use NetBackup Accelerator. Talk to your sales rep for an evaluation key and try it out. 


Hi Fred,

   You are absolutely right. The OpenStorage partners do work with Symantec to develop these plugins. Symantec needs to qualify the plugins for OpenStorage API conformance.

  Note that partners develop these plugins to gain a competitive advantage over devices without OpenStorage support. Further, the more OpenStorage features a partner supports (e.g. Optimized Duplication, Optimized Synthesis, Auto Image Replication, NetBackup Accelerator etc.), the better its competitive advantage over partners with fewer OpenStorage features. Hence Symantec offers NDAs to partners so that their development and release plans for the plugins are protected until they formally make their announcements.

   Thus, Symantec cannot share anything about where a particular partner is in terms of supporting a specific OpenStorage feature. We encourage customers to contact the respective partner for their backend device. I hope this helps explain why we cannot speak for upcoming features in partner OST plugins.

This is one of the key benefits of SYMC integrated appliances: all of the capabilities work together with the backup software and the storage. This helps eliminate some of the lag between when the software provider delivers a feature like Accelerator and when the backend storage provider (EMC in this case) supports it.

LOL!

If everybody keeps it a secret, it is very hard for customers to decide whether to invest in Accelerator...

I do understand the problem, but for us it is really impossible to find out whether a vendor is working on a technology or not:

All our contacts are sales people and not the 'core' developers. They have no idea what OST is, except that 'it' is supported (Whatever 'supported' entails is not clear to most).

EMC & HP support OST, but what specific features are supported is harder to find out...

Some extra questions:

1) Does Accelerator work with ALL policy types?
2) Is there an advantage to using Accelerator with full VMware backups?
3) If so, how much runtime advantage can one generally expect?





Hi Fred, 

   Unfortunately, I cannot speak for the partners' road maps. Sorry for not being able to help on that matter. All I can say is that you do have NetBackup Dedupe in the same license, and hence you could use it until partners offer support. That way you are not stuck in case things do not happen quickly enough with a partner.

   1. Currently NetBackup Accelerator supports Standard and Windows policies

   2. Are you referring to NetBackup for VMware? That is a true offhost backup solution. NetBackup Accelerator is not currently supported with NetBackup for VMware. 

      If you are referring to running a NetBackup Client within a VM (be it VMware, Hyper-V, XenServer, Solaris Zones.... ) NetBackup Accelerator helps significantly reduce the resource overhead (client CPU, client memory, disk I/O etc. See the "cost of doing backups" section in the FAQ) and hence recommended. 

   3. It is inversely proportional to the data change rate. The lower the daily rate of data change, the better the runtime advantage.
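The relationship above can be illustrated with a toy model of the backup window. All figures here (data size, read throughput, metadata-walk time) are assumptions for intuition only, not NetBackup measurements:

```python
# Toy model of an Accelerator full-backup window (assumed numbers, for intuition only).
data_size_gb = 1000.0
read_throughput_gb_per_hr = 360.0   # ~100 MB/s sequential read, an assumption
metadata_walk_hr = 0.5              # assumed time to walk the file-system metadata

def accel_backup_hours(change_rate):
    """Only changed data is read; unchanged data costs nothing beyond the metadata walk."""
    return metadata_walk_hr + (data_size_gb * change_rate) / read_throughput_gb_per_hr

traditional_full_hr = data_size_gb / read_throughput_gb_per_hr  # reads everything

for rate in (0.01, 0.05, 0.20):
    print(f"{rate:.0%} change rate: {accel_backup_hours(rate):.2f} h "
          f"vs {traditional_full_hr:.2f} h traditional full")
```

The lower the change rate, the closer the accelerated full backup gets to the fixed metadata-walk cost, which matches the "inversely proportional" answer above.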

I have seen that Data Domain has a sizing tool, and EMC guarantees that the product it recommends will not be the performance bottleneck for the defined data type and mix.

Symantec's TechNote TECH77575 seems like a good place to start.

Is there an automated tool (even an Excel sheet) from Symantec to calculate a stable solution given the input mix of data, size and backup window?



Do you foresee any potential problems running Optimized backups on clients on a NetBackup version less than 6.5.4?

Our master servers are at 7.1 with NBU5020 appliances at 1.4.2, but with several Solaris clients running NetBackup version 6.5.3.

Thoughts ???



Sorry, I had been away for a while. 

Yes, there are calculators and other tools available for partners. Please talk to your channel/partner account manager to get this for you. 

Hi Luis, 

   Optimized Synthetics is an OpenStorage API. It requires NetBackup 6.5.4 or higher at the media server. This feature is not really client version dependent. But the amount of testing done for older versions will be minimal. I would recommend opening a Support case. 

Policy validation doesn't appear to succeed with Accelerator enabled and the policy storage configured to a storage unit group containing PureDisk disk storage units.

Hi Nick, 

Did you upgrade from an earlier version of NetBackup? You are entitled to use Accelerator since you have the NetBackup Deduplication Option/Add-on license. However, the older license keys do not turn on the specific bit needed for NetBackup Accelerator. Please work with the sales team for a new Data Protection Optimization Add-on/Option license for the same quantity of dedupe licenses you currently have. The new key (which you get at no additional cost) will resolve the issue.

Hi All :


We used to add new tape drives to keep the full-backup window within 24 hours for Direct-NDMP.

We want to shorten the backup window in an efficient way.

Does Accelerator work with Direct-NDMP or Remote-NDMP backups?

How can Accelerator work in an NDMP environment with large data?

Adding more tape drives does not seem like a good solution.

We need some suggestions and a solution.

Hey Abdul, what about Shadow Copy Components and Accelerator, especially thinking about Microsoft DFS-R servers, which keep files in the Shadow Copy Components area? I am just thinking that it may not be able to index files in Shadow Copy Components, or can it?

I plan on testing this, but if you had some insight before attempting, that would be great.

Hi Rasheed,


We too have per-TB licensing. We were expecting to have the Accelerator option included, but it's greyed out in the policies. Our license includes everything else, so I'm a bit surprised we need to purchase the Accelerator option, or have I misunderstood your comment?

After some testing, I think the conclusion is that either I did something wrong or Accelerator does not work on "Shadow Copy Components". This is a daily backup with backup selection "Shadow Copy Components:\" and approx. 350 GB of DFS data:

28-09-2012 11:30:15 - Info nbjm(pid=3612) starting backup job (jobid=573055) for client remoteDFSserver.domain, policy DFS_Backup, schedule Daily  
28-09-2012 11:30:15 - estimated 368500807 Kbytes needed
28-09-2012 11:30:15 - Info nbjm(pid=3612) started backup (backupid=remoteDFSserver.domain_1348824615) job for client remoteDFSserver.domain, policy DFS_Backup, schedule Daily on storage unit MSDP01
28-09-2012 11:30:16 - started process bpbrm (9792)
28-09-2012 11:30:17 - using resilient connections
28-09-2012 11:30:27 - Info bpbrm(pid=9792) remoteDFSserver.domain is the host to backup data from     
28-09-2012 11:30:27 - Info bpbrm(pid=9792) reading file list from client        
28-09-2012 11:30:28 - Info bpbrm(pid=9792) accelerator enabled           
28-09-2012 11:30:43 - connecting
28-09-2012 11:30:44 - Info bpbrm(pid=9792) starting bpbkar32 on client         
28-09-2012 11:30:44 - connected; connect time: 00:00:01
28-09-2012 11:30:48 - Info bpbkar32(pid=8044) Backup started           
28-09-2012 11:30:48 - Info bptm(pid=11856) start            
28-09-2012 11:30:49 - Info bptm(pid=11856) using 524288 data buffer size        
28-09-2012 11:30:49 - Info bptm(pid=11856) setting receive network buffer to 2098176 bytes      
28-09-2012 11:30:49 - Info bptm(pid=11856) using 256 data buffers         
28-09-2012 11:30:49 - Info msdpserver.domain(pid=11856) Using OpenStorage client direct to backup from client remoteDFSserver.domain to msdpserver.domain  
28-09-2012 11:30:56 - begin writing
29-09-2012 09:18:23 - Info bpbkar32(pid=8044) accelerator sent 318017739264 bytes out of 317158541312 bytes to server, optimization 0.0%
29-09-2012 09:18:26 - Info bpbkar32(pid=8044) bpbkar waited 863647 times for empty buffer, delayed 1701842 times.   
29-09-2012 09:18:31 - Info msdpserver.domain(pid=11856) StorageServer=PureDisk:msdpserver.domain; Report=PDDO Stats for (msdpserver.domain): scanned: 309740188 KB, CR sent: 755547 KB, CR sent over FC: 0 KB, dedup: 99.8%, cache hits: 0 (0.0%)
29-09-2012 09:18:32 - Info msdpserver.domain(pid=11856) Using the media server to write NBU data for backup remoteDFSserver.domain_1348824615 to msdpserver.domain
29-09-2012 09:18:33 - Info bptm(pid=11856) EXITING with status 0 <----------        
29-09-2012 09:18:33 - Info msdpserver.domain(pid=11856) StorageServer=PureDisk:msdpserver.domain; Report=PDDO Stats for (msdpserver.domain): scanned: 2 KB, CR sent: 0 KB, CR sent over FC: 0 KB, dedup: 100.0%
29-09-2012 09:18:33 - Info bpbrm(pid=9792) validating image for client remoteDFSserver.domain        
29-09-2012 09:18:35 - end writing; write time: 21:47:39
the requested operation was successfully completed(0)
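For anyone curious how that 0.0% figure falls out of the byte counts in the log above, here is a quick sketch that recomputes it (this is not a NetBackup tool; the regex simply matches the wording of the bpbkar32 summary line):

```python
import re

# The accelerator summary line from the job log above.
line = ("29-09-2012 09:18:23 - Info bpbkar32(pid=8044) accelerator sent "
        "318017739264 bytes out of 317158541312 bytes to server, optimization 0.0%")

m = re.search(r"accelerator sent (\d+) bytes out of (\d+) bytes", line)
sent, total = (int(x) for x in m.groups())

# Optimization is the fraction of the backup image NOT sent over the wire;
# here everything (and a little overhead besides) was sent, hence 0.0%.
optimization = max(0.0, 1.0 - sent / total)
print(f"optimization {optimization:.1%}")
```

Since the bytes sent actually exceed the image size, nothing was skipped by the track log, which supports the conclusion that Accelerator was not optimizing the Shadow Copy Components backup.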

Let me do some digging and get back to you on this, Morten. 

Hi River, 

   NDMP backups are not currently supported with NetBackup Accelerator. However, Accelerator can help you with your situation in a different way at this time. 

   Your NAS volumes can be mounted on a NetBackup client, NetBackup media server or NetBackup 5220 appliance and you can make use of NetBackup Accelerator from there. Your first backup will be slow as it needs to read all the data in your NAS volumes. After that your full backups will be much faster. You also have the ability to scale out this kind of backup processing. You can have different volumes mounted on different clients (or media servers, appliances) in case of very large NAS devices and concurrently back them up.  

Hi Lee, 

   NetBackup Platform per Terabyte license does not include Data Protection Optimization Add-on. The latter is needed for using NetBackup Intelligent Deduplication and NetBackup Accelerator. 

  Having said that, I am wondering if you already have a dedupe license (since you mentioned 'Our license includes everything else'). If you already have the dedupe license bit turned on, the problem could be that you do not have the correct license key to turn on the Accelerator bit, although you are already entitled to it. If dedupe is already turned on with your current license, please talk to customer care or your sales rep; they can provide another key that will turn on Accelerator at no additional cost.

Thanks Rasheed,


We did have the Data Protection Optimization option. We had to get a new license re-generated, and the option is no longer greyed out.

Did you find out anything? I haven't been able to get any answers through my channels...

It looks to me like this is causing an issue for this customer, as their backups are extremely slow, and I suspect this is due to thousands of queries to the MSDP storage server.

Hi River, 

  RE: Does Accelerator work in Direct-NDMP or Remote-NDMP backup?

No, NetBackup Accelerator currently does not support NDMP method of backup 

 RE: How can Accelerator works in NDMP environment with large data?

So your question really is how NetBackup Accelerator helps where you have a NAS system with lots of data, right?

  NetBackup Accelerator can indeed help here. You would need to mount the volumes on a NetBackup client or media server (or a NetBackup 5220 appliance), and then you can turn on NetBackup Accelerator for backups. After the initial backup (the first backup might be a bit painful!), subsequent full backups are much faster. The performance gain depends on the data change rate, but in general most NAS workloads have a lot of static data, so it is certainly worth trying. Furthermore, for a very large NAS system with multiple exported file systems, you can scale out performance by mounting different file systems on different NetBackup clients. You just need to make sure that the same file system is mounted on the same client across backups.






This is worth trying.

But Avamar got the first step in the POC.

How can I convince my boss to stay with the NBU solution?


Hi River, 

   Invite your boss to look at the big picture. What are we trying to solve? You have the business need to protect ever-growing data on your NAS systems. 

   Historically NDMP backups were the only solution to protect data on NAS devices. While it is still widely in use, it is not scaling to meet the growing demands. See this blog from one of my colleagues:

  Your environment is a classic example of a case where old-school NDMP backups do not fit the business needs. What Avamar is going to offer is what they call an NDMP Accelerator. These are really dedicated Avamar nodes (extra $$$) in an attempt to increase performance. The idea is based on doing incremental backups forever. NetBackup Accelerator is the third generation of that idea (the first generation was synthetic backups, the second was optimized synthetic backups, and the third is NetBackup Accelerator).

   If you are going to use NetBackup Accelerator, your boss could save his IT budget by reducing TCO as follows.

   1. For an environment of your size, Avamar would require a huge Data Store: Heavy capital expenditure as EMC would also require professional services to install this

   2. You do not have a good way to store data with heavy retention requirements within Avamar. They may talk about a media access node, however it requires additional nodes (more $$$) and would need additional maintenance tools. 

  3. NetBackup Accelerator lets you make use of your existing resources in media servers or clients to scale out performance for your NAS backups. (save $$$)

  4. You also have the option to consider NetBackup Replication Director if your NAS devices are from NetApp. This goes back to the blog I was referring to earlier (future ready, investment protection, stop adding short-term steroids to boost NDMP performance temporarily).

  5. Talk about additional operational expenditure and overhead if you also need to maintain Avamar. A short-term bandaid is likely to cost more $$$ overall. 


   Send me a note with your contact info if you like, I can arrange for a sales rep to visit you guys and provide a briefing on NetBackup Accelerator and NetBackup Replication Director. He/she has access to tools to provide an estimate on TCO so that your boss could compare and contrast his/her IT budget with a long term vision for investments in backups.  

  Disclaimer:  Symantec's policy is to respect competitors in social media. The opinions expressed in the comment section should not be treated as those of Symantec. 



I totally missed this during my travel. My apologies. Let me check on this. 

Do you mind sending me the case number if you had already worked with technical support on this? You can e-mail me through Connect. It will help if we already have logs/data about your environment. 

Hi Rasheed,

Do you have any information on what the track log stores, and on how it compares the file system state so quickly? Are the headers of every file on the file system checked in the same way as in an incremental?



This blog provides very useful information about the backups. Thank you for increasing my knowledge.

How do you solve extremely high random reads with client-mounted shares when performing NBU Accelerator-type backups?

In lieu of doing NDMP, I would mount the shares on a client, and these clients would scan for changes and back up the changes via the "Accelerator" technology. That's a lot of random reads. A lot! Not only am I reading each file for the archive bit, but I'm probably doing it across multiple clients to improve throughput and shrink my backup window.

I don't see how a journaled file system would work in this case of CIFS/NFS-mounted shares. As an example, the NTFS change journal doesn't work this way across shares, only on NTFS.

Or am I not understanding how files are scanned and backed up?

Hi Rasheed,


We have a per-TB base license and the deduplication option, also per TB. Would I be required to purchase another license for Accelerator?

Hi Chris, 

  I won't be able to reveal the actual IP behind the processes. NetBackup Accelerator is designed such that it works on any file system. In addition, it is also architected such that it can make use of an existing change journal mechanism* built into the file system (this is user configurable).

  The track log keeps the essential information needed to verify whether a file has changed since a previous point in time. This includes the file metadata collected. In addition, it includes hashes for the various segments of a file, so that we only need to read the changed blocks when a file has changed. At run time, we do a file-system metadata walk to detect the current status of files. There is IP involved in this area to make the process quicker. (You may recall that FlashBackup, V-Ray technology etc. also have similar processes to optimally detect changes without a full scan.)

*Currently NTFS change journals are supported

The 'random read' overhead is mainly for the data blocks of a file. Note that most of the metadata for a file (needed in an incremental backup) comes from the directory. Although we are used to thinking of a directory as a 'folder' containing a bunch of files, the directory itself is a special file in the file system that associates file metadata (owner, group, various time stamps, data block addresses and offsets etc.) with each file name. Random reads have overhead when file blocks are scattered across disk segments. With NetBackup Accelerator there is no (or minimal) impact from such fragmentation, because it needs to seek the actual data blocks only when a file has changed, which it knows from reading the directory (the directory file, to be precise).

Thus, with the exception of the very first backup, NetBackup Accelerator can help you with file systems where there is huge random read overhead. However, note that your mileage depends on how frequently files change. 

The change journal in NTFS works only when the NetBackup client sees the file system as NTFS. If you are NFS/CIFS-mounting that file system somewhere else and backing it up, NetBackup Accelerator cannot take advantage of NTFS. However, the track log is capable of tracking changes even without file-system-level change journals. In my opinion, that is the true value of NetBackup Accelerator.



You have everything needed to make use of NetBackup Accelerator! Do you know if the license keys you have were issued during pre-NetBackup 7.5 days? If yes, contact your sales rep or customer care center to issue new license keys for use with NetBackup 7.5. You are not paying anything more. You need the new keys to turn on Accelerator bit. 

Thank you Rasheed,

I understand that not everything can be in the public domain!


Hi Rasheed,


I have not seen an answer about Shadow Copy Components. I am doing some tests now, and it looks like Accelerator will not improve SCC backups: on my small clients, Accelerator optimizes only 40% of my backup.

I investigate now but may be you have the answer.



PS: I have NetBackup; the master and clients are under Windows 2008 R2.


Useful info... clears up many points... Thx

I want to ask whether I can use Accelerator with a clustered file server.

I'm thinking of moving the track log to a shared disk and configuring both nodes to "see" the same track log file.

Do you think this will work? Is it a supported configuration?

I may have missed this, but where is the Accelerator track log stored?

Would we need to put in a backup exclusion? And if so, what is the path on UNIX and Windows?

And one more: I understand that the size of the file depends on file count and file size, but is there some type of scale for estimating the Accelerator track log?


Hi Rasheed,

I have just upgraded from version 7.0.1, and I have updated the keys to allow me to select the Accelerator attribute.

I have got it to work on a new media server (a fresh install), but it does not work on any of the other media servers where I enabled the option on the backup policies. All the policies are using MSDP...

is this a known issue from an upgrade?






I am trying to get Accelerator working to a Data Domain storage unit. I have upgraded the DD OS version and the DD Boost plugin on the media server (which happens to be a NetBackup 5220 appliance in this environment). To get the policy to accept the "use accelerator" setting, I had to ensure the device mappings file version was 1.114.

When I try to run an accelerator backup the job fails immediately with error 154 (storage unit characteristics mismatched to request).

Surely the fact that the policy accepts the "use accelerator" setting means that the storage unit "checks out".....?

The appliance is still running NetBackup; is this the issue?

Hi Gautler, 

   Let me investigate this with our development team. Please note that the optimization you get from NetBackup Accelerator depends on how low the data change rate is. On Windows systems, Shadow Copy Components are quite dynamic and change often. That could be the reason.

  In any case, I shall get back to you if our developers have a different opinion. 

So long as the track log and file system are on the same node, and the node's identity in the policy uses a virtual name, this will work just fine. In fact, we have customers backing up large NAS volumes (similar to a shared file system) using this method.

In the worst-case scenario, if the track log cannot be located, NetBackup will do a traditional full backup. Although it may take longer to finish, you are not losing any data.

Managing the mechanism to co-locate and fail over the track log is your responsibility, and technical support may not be able to help with that, but the capability itself is supported.

Hi Gautler, 

  I did hear back from our engineering team. What you see for SCC at this time is normal, mainly because of the way system files are returned from API calls. The good news is that our team has found a way to optimize this significantly, but it will require extensive testing cycles before we can include it in a release update. Stay tuned, we've got you covered! I am unable to state a date, as it is against Symantec policy to talk about road map items.