
Backing up big data through NDMP

Hi Guys,

We have a 42 TB volume of data to back up with a local NDMP backup going to a tape library, and it is taking more than 20 days to complete. Luckily we don't have many network issues.

I was wondering if someone could suggest how to get a successful backup in the minimum time? Any other methods?

 

NetBackup master: 7.1.0.3

Media: 7.1.0.3

NetApp filer: ONTAP 8.1.2P2

I am taking the backup from a snapshot, e.g. /vol/TP_MYDOCS/.snapshot/weekly.0

We have SnapMirror configured from the Production to the DR filer, and the NDMP backups are taken from the DR filer.

Please let me know if you need any additional details.

 

Thank you so much in advance.

Sagar

 

31 Replies

How many tape drives are you using?

What type of tape drive (LTO-4 or LTO-5)?

What is the average speed you are getting when sending to tape?


One thing that can help is to tune the buffer size for NDMP backup, SIZE_DATA_BUFFERS_NDMP. I have used the same value as for normal tape backup in SIZE_DATA_BUFFERS.
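On a UNIX media server these buffer values are plain one-line touch files in the NetBackup config directory. A minimal sketch, assuming the standard install path (the mktemp fallback is only there so the commands run outside a real media server):

```shell
# NetBackup reads these tuning values from one-line files in its config dir.
CFG=${CFG:-/usr/openv/netbackup/db/config}
# Fall back to a scratch dir when not run on a real media server (illustration only).
mkdir -p "$CFG" 2>/dev/null || CFG=$(mktemp -d)

echo 262144 > "$CFG/SIZE_DATA_BUFFERS_NDMP"   # same value as SIZE_DATA_BUFFERS
echo 32     > "$CFG/NUMBER_DATA_BUFFERS"

# The new values take effect for backups started after the files are written.
cat "$CFG/SIZE_DATA_BUFFERS_NDMP"
```

No daemon restart should be needed; the values are read per backup job.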

But the NDMP process on NetApp is a low-priority process, so other things can slow the backup down.

Another way to minimize the backup time could be to split the backup into multiple streams, if possible.

Hope this helps

    

The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue

Hi Nagalla/Michael,

Thanks for your reply. I am using LTO-5 tape drives, and the maximum speed was 30 MB/s.

We have 11 tape drives, and tape drive availability is not an issue; a backup will always get a free tape drive.

Below are the buffer values which are already set:

NUMBER_DATA_BUFFERS - 32

SIZE_DATA_BUFFERS - 262144

SIZE_DATA_BUFFERS_NDMP - 65536

I can't split the backup as it is a single volume; it is actually a user profile share, which is something that can't be changed.

 

Any further suggestion will be great help.


FYI - just some simple calculations:

Actual:
42 TB = 43,008 GB = 44,040,192 MB
20 days = 1,728,000 seconds
=> ~25 MB/s

Desired:
100 MB/s
=> 440,402 seconds = ~122 hours = ~5.1 days
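The figures above can be reproduced with plain arithmetic (nothing NetBackup-specific assumed):

```shell
# 42 TB pushed over 20 days, and the elapsed time at a sustained 100 MB/s
awk 'BEGIN {
    mb   = 42 * 1024 * 1024        # 44,040,192 MB
    secs = 20 * 24 * 3600          # 1,728,000 seconds
    printf "actual:  %.1f MB/s\n", mb / secs
    printf "desired: %.1f days at 100 MB/s\n", mb / 100 / 86400
}'
# prints: actual:  25.5 MB/s
#         desired: 5.1 days at 100 MB/s
```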

 

Even if the NetApp were able to send the data 100% consistently at near the wire speed of a 1 Gb/s interface, it would still take 5 days.

I'd start by looking at the source system, and seeing whether you have any performance issues, i.e. an inability to sustain traffic.

How many active NICs do you have on the media server that is ingesting the data?

Does the media server have to perform any other backups?


Are you using direct NDMP from NetApp to tape? Or three-way NDMP from NetApp to NetBackup to tape?

Nagalla's question about average speed would be good to check, especially during times when the NetBackup environment is quiet and nothing else is running. Does the intermittent average speed rise above the overall average of 25 MB/s? Michael's suggestion of splitting the backup into smaller pieces is also worth looking at, especially if you have qtrees within the snapshot - if so, you could multi-stream and multiplex the NDMP backup streams (assuming that the NDMP data is being ingested by the NetBackup media server, i.e. three-way NDMP, and not direct NDMP from NetApp to SAN-attached tape).

It is direct NDMP from NetApp to tape.


Thanks Nagalla, I have 11 LTO-5 tape drives, and the max speed I saw was 25 MB/s.


Thanks Michael,

Below are the buffer sizes already set; please let me know if any changes are required.

NUMBER_DATA_BUFFERS-32
SIZE_DATA_BUFFERS - 262144
SIZE_DATA_BUFFERS_NDMP - 65536

Unfortunately, we can't split the backup as it is a single volume; it is actually a user profile shared volume, and it is not easy to break it into multiple volumes/streams.


Thanks, I have 2 NICs teamed at 10 Gb speed. I don't have much experience with NetApp; I will ask my storage team to verify any performance issues on the filer.

Yes, we do have other backups running for other servers, such as Windows, FlashBackup and Standard types, but they are not running all the time; they are scheduled to run for 12 hours.

We have 5 media servers sharing 2 libraries: 1 library dedicated to NDMP backups, and the other for client backups.

 


Thanks. As per my earlier discussion with the storage guy, qtrees are not present in the volume.

As mentioned earlier, it is a user profile shared volume; there are about 5,000 profiles (folders with user names) in the volume. How can I split them?


Another thought I had is using NetBackup Accelerator.

But you cannot use this feature with tapes.

You also need to back it up as a CIFS or NFS share.

You need to have a disk target in place. The initial backup might take a long, long time, but later on you will see the full backups completed in much less time.

See the blog below from AbdulRasheed with a benchmark test on this:

https://www-secure.symantec.com/connect/blogs/frequently-asked-questions-netbackup-accelerator-part-ii


I have no magic solution to add here....

Just wish that sys admins could understand the backup nightmare they create with these massive volumes...


I would try SIZE_DATA_BUFFERS_NDMP 262144; I have seen around 100 MB/s with this on LTO-5 connected to a NetApp filer.

As for splitting, I think you can choose the folder level below the snapshot,

e.g. /vol/TP_MYDOCS/.snapshot/weekly.0/<folder>

If splitting is possible, you can also utilize more tape drives, but it depends on how well the filer handles multiple sequential reads on the same volume.

A problem on NetApp has been that the internal Kahuna process was single-threaded; I don't know if that is still the case.

 


Wow, first time ever that you don't have a solution or at least guide him in the right direction.

And that is actually a compliment, not a negative comment.

Sorry for off topic, just had to say that.


I have this same issue. I was told that with NB 7.6 there should be the ability to use wildcards, so you could set it up to back up /vol/TP_MYDOCS/.snapshot/weekly.0/* with each subfolder as a separate stream. This way, most of the smaller directories would be done quickly, so you will soon find which is the "long pole"...

 

You may find you can then split that particular home directory up as a separate policy.

 

 

NetBackup 8.1.2 on Solaris 11, writing to DataDomain 9800 6.0.2.50
duplicating via SLP to LTO5 in SL8500 via ACSLS

Hello,

I have already seen 400 MB/s from one head of a FAS3140 (15K disks) to a virtual library (SAN tape, 4 Gb/s).

You must have an issue on your SAN tape!


With 7.5.0.7 and 7.6.0.2 you can adjust MAX_FILES_PER_ADD to 100000 from the previous maximum of 50000.

I have seen an increase in metadata processing speed, and a reduction in backup time, from this.
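If I understand the mechanism correctly, this is set the same way as the other tuning values - a one-line touch file on the media server. A sketch, with the standard UNIX path assumed (the mktemp fallback just lets the commands run off a media server):

```shell
CFG=${CFG:-/usr/openv/netbackup/db/config}
mkdir -p "$CFG" 2>/dev/null || CFG=$(mktemp -d)  # scratch dir when not on a media server

# Raise the per-ADD metadata batch size from the old 50000 ceiling.
echo 100000 > "$CFG/MAX_FILES_PER_ADD"
cat "$CFG/MAX_FILES_PER_ADD"
```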

 

I do believe that NDMP data buffer tuning was taken out starting with a bundle for 7.0.1:

http://www.symantec.com/business/support/index?page=content&pmv=print&impressions=&viewlocale=&id=TECH147938

Definitely use the fastest drives you can with a local NDMP backup and, as recommended above, split the selection into streams to utilize the maximum available streams/devices.


If it is user home drives on NetApp, then it will definitely be one qtree - but let's not worry about that right now. Because it is user home drives, the sub-folders are addressable under the primary qtree - so you should be able to split your backup into, say, 52 streams of a* through z* plus A* through Z*. If you multi-stream, then you should be able to get 11 streams going. But if you changed to three-way NDMP then you could multiplex too - though you would need a very powerful media server, or several, and some careful max-concurrent-drives configuration of the 11 drives across specially created additional tape storage units. I reckon at maximum you could run 4*11=44 concurrent streams, but this is probably not practical. At least you should be able to get between 5 and 11 concurrent streams.

Ok - the important stuff now... What model of NetApp? Please describe the aggregates, RAID groups, volume layout and disk spindle type that this data set resides on. It could be that the NetApp just isn't up to it - and never will be - in which case any amount of NetBackup tuning and reconfiguration might be pointless.
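Generating the 52 per-letter selections is easy to script. A sketch, assuming your policy type honors the NEW_STREAM directive (with "Allow multiple data streams" enabled, each selection entry may already become its own stream, in which case drop the NEW_STREAM lines):

```shell
# Emit a backup-selections list: one stream per leading letter of the profile folders.
VOL=/vol/TP_MYDOCS/.snapshot/weekly.0
for letter in a b c d e f g h i j k l m n o p q r s t u v w x y z \
              A B C D E F G H I J K L M N O P Q R S T U V W X Y Z; do
    printf 'NEW_STREAM\n%s/%s*\n' "$VOL" "$letter"
done
```

Paste the output into the policy's Backup Selections; profiles whose folder names start with digits or other characters would still need a catch-all entry.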