
Backing up big data through NDMP

Sagar_K
Level 3
Partner

Hi Guys,

We have a 42TB volume to back up with a local NDMP backup going to a tape library, and it is taking more than 20 days to complete. Luckily we don't have many network issues.

I was wondering if someone could suggest how to get a successful backup in the minimum time? Any other methods?


NetBackup master: 7.1.0.3

Media server: 7.1.0.3

NetApp filer: ONTAP 8.1.2P2

I am taking the backup from a snapshot, e.g. /vol/TP_MYDOCS/.snapshot/weekly.0

We have SnapMirror configured from the production to the DR filer, and NDMP backups are taken from the DR filer.

Please let me know if you need any additional details.


Thank you so much in advance.

Sagar


31 REPLIES

Michael_G_Ander
Level 6
Certified

Pretty sure the block size still showed up in ndmpd probe <session-number> with later versions of NetBackup.

The article mentions reducing SIZE_DATA_BUFFERS_NDMP, which would suggest to me that it has not been taken out.
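For reference, roughly what both checks look like (the session number is just an example, and the path assumes a UNIX media server; on Windows the touch file goes under install_path\NetBackup\db\config). The touch file contains only the buffer size in bytes:

filer> ndmpd status        (lists the active NDMP sessions)
filer> ndmpd probe 11      (per-session detail, including the record/block size)

media# echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP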

The standard questions: Have you checked 1) what has changed, 2) the manual, 3) whether there are any tech notes or VOX posts regarding the issue?

mnolan
Level 6
Employee Accredited Certified

Ah yes, you're right. I have my wires crossed about when it went into effect for non-remote backups only.

Sagar_K
Level 3
Partner

Thanks Nagalla. I have version 7.1.0.3, so no NetBackup Accelerator. We are in the process of upgrading to 7.6, but it will take some time, and as you mentioned, the Accelerator feature can't be used with tape.

Do you really suggest backing up a 42TB volume to disk? We don't have disk-based backup currently, and my boss/company will not be happy to buy it :)


Sagar_K
Level 3
Partner

Thank you so much Marianne. I understand that there is no magic solution, but I still want to improve the backup speed at least.

Yes, it is really a nightmare for us to back up that big a volume. The admins didn't plan it well in the beginning and the data grew day by day; it was 20TB when the volume was created.

This is actually my new project, and I am still trying to find an alternate solution from the NetApp end.


Cheers

Sagar

Sagar_K
Level 3
Partner

Thanks Michael. I will try SIZE_DATA_BUFFERS_NDMP 262144. The backup for this volume is still running (30TB so far), and I don't think changing the buffer size will help the running backup?

Please correct me if I am wrong.

I will set the buffer size before the next run, but that will take a few more days. I will update you on this.

Sagar_K
Level 3
Partner

Thank you so much Genericus.

We have 7.1.0.3 right now and are thinking of upgrading to 7.5 very soon; we can't go to 7.6 since we have a few Windows 2003 media servers that will not support it.

About using a wildcard in the backup selection: I have around 9,100 user profiles in the volume and it would be difficult to manage that many streams :( I have attached a screenshot of it.


sdo
Moderator
Partner    VIP    Certified

What is the volume's "minra" setting?

See: https://communities.netapp.com/thread/12698
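If you need to check or change it, something along these lines on a 7-mode filer (the volume name is just taken from the original post):

filer> vol options TP_MYDOCS              (look for minra in the output)
filer> vol options TP_MYDOCS minra off    (off = normal read-ahead, which generally suits big sequential NDMP reads)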

watsons
Level 6

Sagar_K, back to your first post - something caught my attention: you mentioned that SnapMirror is configured between the production and DR NetApp boxes. How frequent is the SnapMirror?

I've heard from a customer who set up a SnapMirror every 15 minutes between the boxes, and they kept having NDMP backup errors (slightly different from your case, which is just a slow backup). Once they increased the SnapMirror interval to 2 hours (less frequent), the backups started to work.
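If it is classic volume SnapMirror, the schedule is the last four fields (minute hour day-of-month day-of-week) of the entry in /etc/snapmirror.conf on the destination filer. A sketch with made-up filer names:

prod:TP_MYDOCS dr:TP_MYDOCS - 0,15,30,45 * * *                        (every 15 minutes)
prod:TP_MYDOCS dr:TP_MYDOCS - 0 0,2,4,6,8,10,12,14,16,18,20,22 * *    (every 2 hours)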

jim_dalton
Level 6

There are some very good points made here, and I can add some support for the pointers as my setup is nigh on the same.

Direct attached: you can use a wildcard and then multistream, one stream per drive. So if your NAS can drive one tape at X MB/s and there is no storage (i.e. read) bottleneck, then you should be able to scale up 2X, 3X, 4X... until you run out of drives or the filer performance hits a plateau. I'd go here first. Once you've tuned the params set out above, this is the only thing you can experiment with. Wildcard support was introduced with the 7.6 master; you only need to upgrade the master to get it. Might be your only option. If there is a bottleneck, then sdo has asked a bunch of questions about the underlying infrastructure of the filer that are relevant.

I use netapp/ndmp/wildcard to do this very thing.
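To make that concrete, a sketch of the policy I mean (NDMP policy type on a 7.6 master; the volume name is taken from the original post):

Attributes:        Allow multiple data streams = enabled
                   Limit jobs per policy = number of tape drives
Backup Selections: /vol/TP_MYDOCS/*

Each directory the wildcard matches becomes its own stream; the streams queue up, and only as many run at once as the job limit and drive count allow.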


3-way: OK, so the data won't be going direct to tape and you'll likely need to reconfigure the setup, but you can run MPX with this config, so you can do multistreaming and multiplexing together. You still need more than one stream to achieve that, so wildcard again.

Jim

jim_dalton
Level 6

You could probably work out whether the bottleneck is reading your data by running some dump jobs and timing them. You'd need to run them on subsets of the data to make it a valid test. dump is the ONTAP command. You might even be able to use the null device, so you can take your tape drives out of the picture for the purposes of this test. There certainly is a null device, and there's an ONTAP YouTube video on it.

Jim

jim_dalton
Level 6

e.g.:

ontap> dump 0f null /vol/yourvol/userx    (level-0 dump, written to the null device)

It's not entirely clear if wildcards can be used in the last argument; I guess they can, as NetBackup can. An ONTAP chap may clarify.
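If the console output doesn't give you timings, on 7-mode the filer should also log per-dump statistics to /etc/log/backup (worth verifying on your release):

ontap> rdfile /etc/log/backup    (start/end times and statistics for each dump session)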

Jim

By setting MAX_FILES_PER_ADD to 100000 and SIZE_DATA_BUFFERS_NDMP to 262144, we went from 20 MB/s to 70 MB/s on a 10TB volume.
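For anyone finding this later: both of those are plain touch files containing just the number. Roughly, on UNIX (double-check the exact paths in the Veritas technotes for your release; Windows paths differ):

master# echo 100000 > /usr/openv/netbackup/MAX_FILES_PER_ADD
media#  echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP

New values are picked up when the next job starts, not by a backup that is already running.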