
NDMP backup - Netbackup

Level 4

How does NetBackup read data from NDMP clients (e.g. Oracle ZFS / NetApp filers)? Is it a block read or a file read?

Can the number of files be a factor in slow NDMP backup performance? Are there any parameters for performance tuning?


Thanks !


Level 6
Partner    VIP    Accredited Certified

Please help me to understand the following:

NDMP clients (e.g Oracle ZFS ...

What is the relationship?

Are we talking NetApp NAS (Network Attached Storage) mounted as NFS?
Or SAN (a LUN mounted directly on SUN/Oracle server with ZFS filesystem)?

Level 4

I am backing up the storage directly using NDMP with NetBackup.

Level 6
Employee Accredited Certified

Technically, with NDMP, NetBackup is not doing any of the reading, the filer is. This is also file level and will cause rehydration if the volume has any sort of compression or deduplication.

There are five passes.

Pass I is where the filer maps the files.

Pass II is where the filer maps the directories.

Pass III dumps the directories (backs up to the storage unit)

Pass IV dumps the regular files

Pass V dumps the ACLs

Passes I and II can take a while and potentially reach NetBackup's NDMP progress timeout (can be changed; default is 8 hours, max 1 week).

Things that can slow this portion down include a large number of files and/or fragmentation (as the inodes are being mapped).

The second bottleneck can be the storage location. The filer can dump to direct-attached tape, in which case we are looking at tape drive speed. The faster the better.

3-Way NDMP sends traffic over the LAN to another filer, in which case the LAN or the other filer's tape drives can be a bottleneck.

Lastly, with remote NDMP, the traffic is once again across the LAN, this time to the NetBackup media server and its storage unit. Standard buffer tuning applies here.

In addition to slowing down Pass I and/or II, a large number of inodes can affect performance in another way. The filer sends the file history information through the media server to the master server. This metadata is the same size regardless of the size of the file, so millions of small files will increase the amount of metadata. What I have seen here is that setting the MAX_FILES_PER_ADD touch file to 100k (max of 50k in previous versions, default 5k) actually helps with the load when passing this information to the master server's processes.
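As a concrete sketch of the touch files mentioned above: on a default UNIX media server install these single-value files live under /usr/openv/netbackup/db/config. The script below uses a temp directory in place of that path, and the values shown are illustrative starting points, not recommendations for your environment.

```shell
# Illustrative only: a temp dir stands in for the NetBackup config
# directory (/usr/openv/netbackup/db/config on a default UNIX install).
CONF_DIR="$(mktemp -d)"

# 262144 bytes = 256 KB per shared-memory data buffer for NDMP backups
echo 262144 > "$CONF_DIR/SIZE_DATA_BUFFERS_NDMP"

# Number of shared-memory buffers per stream (example value)
echo 64 > "$CONF_DIR/NUMBER_DATA_BUFFERS"

# Batch size for file-history metadata sent to the master server
# (default 5k; capped at 50k in older versions)
echo 100000 > "$CONF_DIR/MAX_FILES_PER_ADD"

# Each touch file holds a single number that the media server
# reads at backup time
cat "$CONF_DIR/SIZE_DATA_BUFFERS_NDMP" \
    "$CONF_DIR/NUMBER_DATA_BUFFERS" \
    "$CONF_DIR/MAX_FILES_PER_ADD"
```

Restarting the backup is enough for the values to take effect; the touch files are read per job.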


Beyond this, any other issues that arise generally become one-off situations that result in a support call.


P.S. An additional performance enhancer when running local NDMP backups is to utilize the maximum number of streams the filer can support: break the selections up into smaller amounts by enabling "Allow multiple data streams" with the NEW_STREAM directive, and have enough drives available locally to the filer to support the maximum number of streams (as filer hardware allows; memory and such items).
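For illustration, a backup selections list for an NDMP policy split into four streams might look like this (the volume paths are hypothetical; "Allow multiple data streams" must also be checked in the policy attributes):

```
NEW_STREAM
/vol/vol1
NEW_STREAM
/vol/vol2
NEW_STREAM
/vol/vol3
NEW_STREAM
/vol/vol4
```

Each NEW_STREAM directive starts a separate job, so with four local drives on the filer all four volumes can dump in parallel.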

Level 6

Consider carefully what's going on; you might find it tricky to recover files from your backups, which is why Marianne is asking those questions. In other words, does your implementation enable you to satisfy your requirements? Jim

Level 4

Thanks for your replies. They are really helpful.

I'm still working on this issue.

What is this?

"Info ndmpagent (pid=4922) XXXXXXXXXX: Tape record size: 64512"


I have set the buffer touch files to 256KB, including the NDMP one, so why am I getting this?



Level 6
Partner    VIP    Accredited Certified

This seems to be a filer setting.

You may want to look for documentation for your make/model filer.

I found an example of filer settings in an online manual.

See 'ndmpd probe' output on page 49/50:

mover.recordSize: 64512

So - clearly not an NBU setting...
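As a quick sanity check on that log line (simple arithmetic, nothing NetBackup-specific assumed):

```shell
# 64512 bytes is the filer's NDMP mover record size: exactly 63 KB,
# which is unrelated to (and smaller than) a 256 KB data-buffer setting
# on the NetBackup side.
echo $((64512 / 1024))   # prints 63
```

So the message is simply the filer reporting its own tape record size, not an indication that the NetBackup buffer touch files were ignored.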

As per mnolan's excellent post above:

Technically, with NDMP, NetBackup is not doing any of the reading, the filer is.