NDMP Incremental backup on Netapp filer

Drama
Level 3
Certified

Good morning everyone.

Do any of you run NDMP incremental backups of a filer or Celerra on your platform?


We are trying to switch from full backups to incremental backups, due to media consumption, but we are having some problems.

The share we are testing with holds around 15 million files and close to 1 TB, so incremental and full backups take almost the same time to complete.

The reason is that the incremental backup remains in the "Begin writing" state for hours while it retrieves metadata for the files in the share.

I'd be very grateful if you could help me reduce the time of this incremental backup, either by modifying the NBU configuration or the filer configuration.

 

Thanks in advance!!!

ACCEPTED SOLUTION

Andy_Welburn
Level 6

We only carry out one full save per month anyway.

Could you possibly break your save down into more manageable "chunks"?  I know NDMP saves do not allow the use of wildcards, so this would only be viable if you have a reasonable number of "folders" within the volume you are backing up. [EDIT You'd also need to utilise more drives if these streams were to run concurrently, due to the inability (at the moment?) to multiplex NDMP saves to a single drive. /EDIT]

e.g.

NEW_STREAM
/vol/vol1/dir1
/vol/vol1/dir2
NEW_STREAM
/vol/vol1/dir3
NEW_STREAM
/vol/vol1/dir4
/vol/vol1/dir5
/vol/vol1/dir6

The only other way we've got around this lack of NDMP wildcard functionality is to NFS-mount the volume on our Solaris master server and utilise the wildcards there - obviously you lose the speed of NDMP in such a configuration, but it enabled us to reduce the backup window in the long run.
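If the volume's top-level directory list is long, writing the stream groups by hand gets tedious. A minimal sketch of generating that file list automatically - the directory names below are hypothetical placeholders, and in practice you'd enumerate them from an NFS mount or the filer console:

```python
# Sketch: build a NetBackup policy file list that splits a volume's
# directories into NEW_STREAM groups, as described above.
# Directory names are hypothetical examples, not real paths.

def build_streams(dirs, dirs_per_stream):
    """Group backup paths into NEW_STREAM blocks for a policy file list."""
    lines = []
    for i in range(0, len(dirs), dirs_per_stream):
        lines.append("NEW_STREAM")          # start a new backup stream
        lines.extend(dirs[i:i + dirs_per_stream])
    return "\n".join(lines)

dirs = ["/vol/vol1/dir%d" % n for n in range(1, 7)]
print(build_streams(dirs, 2))
```

Remember that each concurrent stream ties up its own drive, since NDMP saves can't be multiplexed onto a single drive.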


4 REPLIES 4

Andy_Welburn
Level 6

(See accepted solution above.)

Drama
Level 3
Certified

First of all, thanks for your answer, Andy.

We have actually agreed with the client to run two full backups of their mailboxes, which are stored in volumes vol1 through vol32, spread across two filers.

Each share is over 1.5 TB, and we can't compress it because the data is already compressed by the application itself, so with our SDLT600 tapes we use around 5 tapes to back up each share.
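As a quick sanity check on those figures (assuming SDLT600 native capacity of 300 GB, with no hardware compression gain since the data is already compressed):

```python
import math

# Hypothetical sanity check of the media figures above.
share_gb = 1500        # ~1.5 TB per share
tape_native_gb = 300   # SDLT600 native capacity, no compression assumed
tapes_needed = math.ceil(share_gb / tape_native_gb)
print(tapes_needed)    # 5 tapes, matching the observed usage
```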

Due to drive limitations, we have set up one STU dedicated to each filer, and each STU is allowed to use only ONE drive, so we have only two drives to back up these filers.

All these constraints are what push us to move our full backups to incremental ones.

What you're proposing is a very reasonable configuration, but given the limitation on drive use, we can't apply it.

Thanks again!!!!

watsons
Level 6

I second Andy's suggestion; at the very least it would tell us which volume holds the millions of files, and we could separate that volume out of the main policy to avoid the long backup time. Well... unless all your volumes have more or less the same amount of data and files.

I don't think you have much flexibility here with NDMP backups, since by nature the backup is performed by the filer, not NetBackup. An external mount would be the alternative, and you could then try a FlashBackup-type backup (to address the millions-of-files performance issue). However, there is a tradeoff, as Andy mentioned.

Maybe an off-host backup (snapshot the data and mount it on another host for the backup, which offloads the filer) would help as well, but I don't know much about this type - it's more complex in terms of configuration.

Drama
Level 3
Certified

In the end we had to abandon the idea of incremental backups, due to a compatibility issue between NBU (6.5.5) and the filer's ONTAP version.


We opened a support case with NetApp on this subject, but they closed the case, stating that our ONTAP version was not supported with NBU 6.5.5.

As for mounting the NFS share on some host, that would overload our service network, which is why we decided to use the SAN to perform this backup.

 

So we are forced to remain with a full backup configuration :(

 

Thanks to all anyway.