
Pre Seeding Backup data before enabling SLP Replication

Moose100
Level 3

Hi All, I’m in a bit of a predicament here and I am appealing for a bit of advice and help. I have inherited a rather lovely Netbackup issue.

 

We have a NetBackup 7.6 server at a customer site. The data it backs up doesn't actually change that much, but it currently holds about 2.8 TB of data (mostly BMR data). We are obliged to hold a copy of this backup data as an "offsite" backup, but for reasons I can't go into, the link between the sites wasn't set up for over a year.

 

Once the link was in place, we initially set up an SLP policy to get the data to our organisation, but this maxed out the 10 Mbps link 24/7 for about 10 days and our customer asked us to throttle the traffic (which I believe can't be done with SLPs) or turn it off. We turned it off.

 

Our problem is this initial "first sync" of the data, which I guess is rather large at 2 TB+. Once that has been done, I guess the performance should be OK thanks to NetBackup's clever dedupe technology. We were wondering if we could perform some sort of pre-seeding of that customer backup data to our site.
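For context, a quick back-of-envelope check of that first sync (a rough sketch in Python; the helper function is my own illustration, uses decimal units, and ignores dedupe savings and protocol overhead):

```python
def transfer_days(size_tb: float, link_mbps: float) -> float:
    """Days needed to push size_tb terabytes over a link_mbps link at line rate."""
    bits = size_tb * 1e12 * 8           # TB -> bits (decimal units)
    seconds = bits / (link_mbps * 1e6)  # bits / (bits per second)
    return seconds / 86400

# 2.8 TB over a saturated 10 Mbps link:
print(round(transfer_days(2.8, 10), 1))   # ~25.9 days at line rate
```

So even saturating the link, an un-deduped 2.8 TB sync is on the order of weeks, which is consistent with it running flat out for 10 days before being switched off.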

 

I was wondering if this is achievable using the following method.

 

  • Install NetBackup 7.6 on a laptop with Server 2008 R2 and a large USB hard drive (3-4 TB).
  • Go to the customer site and add the NetBackup laptop as a trusted master server / replication target.
  • Duplicate data from the customer NetBackup server to the NetBackup laptop using an SLP (change backup policies back to standard from SLP once finished).
  • Ship the NetBackup laptop back to our site.
  • Somehow get the data from the NetBackup laptop into our NetBackup system (unsure about this part).
  • Configure SLPs on policies at the remote customer site to replicate to our NetBackup server; in theory this shouldn't hammer the link 24/7 as it will only be replicating data that has changed.

 

Perhaps I'm overthinking this, perhaps there is another way. I'm open to any suggestions; surely I can't be the only one who has been in this situation.

 

Unfortunately, beefing up the link is out of the question (yes, I have asked).

Traffic shaping using network equipment is also out (due to the way the VPN tunnel works, apparently).

The server at the customer site is running NBU 7.6.03, and so is the NetBackup server we are running at our site. We are only using disk storage; no tapes are involved.

Thanks in advance for any input.



9 REPLIES

Nicolai
Moderator
Moderator
Partner    VIP   

You can throttle network bandwidth in NetBackup.

Documentation: Is there a way to throttle or limit VERITAS NetBackup (tm) so it will not use all available bandwidth on the network?

http://www.symantec.com/docs/TECH30467

More documentation can be found in the NetBackup admin guide:

http://www.symantec.com/docs/DOC6488

Pre-seeding of fingerprint cache has the following technote: 

NetBackup deduplication client WAN backup: how to seed the fingerprint cache to speed up the initial backup

http://www.symantec.com/docs/TECH144437

A whole other thing: is this MSDP-to-MSDP replication? Pre-seeding will only work with MSDP disk pools, meaning basic disk or an OST-based device like Data Domain does not use any seeding technology.

sdo
Moderator
Moderator
Partner    VIP    Certified

Sounds as if what you've inherited is not a NetBackup issue, but an infrastructure issue.

Some questions, answers to which may help some of us steer your planning:

1) Is the 2.8 TB pre-dedupe (i.e. the size of a full backup of everything), or post-dedupe (the size of the MSDP pool)?

2) What is the LAN latency between sites?

3) You really should run iperf or some other network tool to test and confirm exactly what the actual usable/achievable/sustainable bandwidth between the sites is.

4) What is the daily rate of change? This figure is possibly something similar to the size of a differential backup of everything.

5) NetBackup MSDP at each site?

When you've determined the above, it will help you calculate whether this will ever be achievable, even after pre-seeding somehow, or after completing the first successful full AIR copy. Without the above you will never know whether it should work.

6) Have you thought about gradually/incrementally introducing AIR, e.g. one or two backup policies at a time?

Moose100
Level 3

Hi guys, thanks very much for the input. It's very helpful.

 

@sdo, you have hit the nail on the head. This is making do with a badly thought-out solution. I have more info on the link between the customer's NetBackup server and the NBU server we have here.

 

The connection from the customer site to our infrastructure is handled by two VPN boxes that only work with a single network port. As NetBackup uses multiple ports, a further OpenVPN connection was set up within this VPN tunnel, between a server at our site and the NetBackup server at the customer's site. (Yes, a tunnel within a tunnel!)

 

To make matters worse, this connection uses the same link our customer uses to access the internet, and the VPN connection on our side of the network is shared with 8-9 other customer sites! It's fair to say you could not class this as a reliable connection.

 
I have since found out the customer has other data centres, and one can only assume the connection between the customer's data centres is reliable (I'm led to believe it is). I'm going to suggest we install another NetBackup master/media server in another data centre to host an offsite copy of the backup data, as the best solution going forward.

 

If not, it's plan B: make do with what I have.

 
In answer to your questions

 

  1. I'm a bit unsure if I'm answering you correctly, but the Backup Data folder on the customer NetBackup server is 2.8 TB in size, so this is what I'm basing my estimate on.

  2. LAN latency is very difficult to measure, according to our network engineer; we don't have access to the network equipment at the customer site, and those VPN boxes are rather limited with regard to measurement, I'm told.

  3. Thanks for the tip, I plan to download iperf and have a play with it later.

  4. On average the server backs up about 40 GB every night; at the weekend the BMR jobs run, which generates an extra 152 GB on top of the 40 GB (one of these jobs is a server that averages 100 GB+ per backup job).

  5. Erm, showing my ignorance here. All SLP policies were deleted from the customer server. Both servers are certainly licensed to duplicate to each other (full Enterprise licensing); is there anywhere I can check?

  6. Doh, this has never been tried. Certainly will try this if I have to go to plan B.
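To put those change rates against the link, a worst-case sketch in Python (the helper is my own illustration; it assumes the full nightly volume crosses the wire with no dedupe savings, decimal units):

```python
def hours_to_send(gb: float, mbps: float) -> float:
    """Hours to send gb gigabytes at mbps megabits per second (decimal units)."""
    return gb * 1e9 * 8 / (mbps * 1e6) / 3600

# 40 GB nightly change, at full link speed and at a 5 Mbps throttle:
print(round(hours_to_send(40, 10), 1))   # ~8.9 hours
print(round(hours_to_send(40, 5), 1))    # ~17.8 hours
```

In practice the deduplicated replication traffic should be far smaller than the raw backup size, which is what makes steady-state SLP/AIR plausible on this link at all.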

 
@Nicolai,

 

Poking around, I have noticed you can throttle network bandwidth (by subnet) in NetBackup, but other posts I have read suggest this throttling does not apply to SLP replication data. Do you know if this is the case? I'll have a look at the admin guide to try and clarify.

 

I have chanced upon that pre-seeding guide before, but isn't that for clients performing a first-time backup to a master server? The clients are fine with that. My problem is SLP data between two NetBackup servers, or have I missed the point?

 

Last point: I'm unsure how to check this regarding MSDP disk pools, do you know any quick way to check? I guess I need to read through the admin guide, the dedupe guide and the best practice for SLPs. :(

 
sdo
Moderator
Moderator
Partner    VIP    Certified

A fair set of infrastructure constraints.  :\

.

This may help regarding setting a throttle for NetBackup AIR optimized duplication/replication:

https://support.symantec.com/en_US/article.HOWTO89099.html

.

I think you may end up spending as much time planning and executing a pre-seed as you would with gradually implementing SLP based NetBackup AIR.

.

7) Is the 2.8 TB on just one NetBackup client, or the total across multiple NetBackup clients? If it is just one NetBackup client, then judicious use of 'folder paths' across/between two backup policies could be used to gradually implement SLP-based AIR replication (and thus pre-seeding). But this could get complex if there is just one main top folder with hundreds of second-level folders.

8) If implementing SLP gradually will take too long, or you would like to explore actual pre-seeding in more detail, then it may be possible to do it more simply without having to build a 'master server' installation on a laptop, and instead use a media server with a basic disk storage unit on a laptop + USB drive. However, the suitability of this depends on what form this 2.8 TB is in. Sorry, but it's still not clear to me whether the 2.8 TB is pre- or post-dedupe. Can you say exactly where/how you are getting the figure of 2.8 TB from?

Moose100
Level 3

Hiya, apologies for the lack of clarity with my answers

 

7) I'm with you here. We are only talking about 4-5 servers and a handful of policies. I believe this is do-able; if I split the SQL and file jobs up they will be much smaller and more manageable to duplicate via SLP.

There is only one BMR job that is likely to cause me problems, averaging about 110 GB each time it runs. This is the only snag.

 

 

8)

OK, the 2.8 TB is the size of the data folder stored on the customer NetBackup server (it's the one full of numbered folders containing .bhd and .bin files). This is one server with both master and media roles.

This 2.8 TB of data is the sum of backup data from 5 NetBackup clients; it's a mixture of file, SQL and BMR backup jobs. File and SQL run every day, BMR jobs run at the weekend.

Am I right in assuming this data stored on the NetBackup server is deduped?

 

Thanks for the link to that guide, I'll take a look.

 

sdo
Moderator
Moderator
Partner    VIP    Certified

7a) Sounds good - as you could always break 4/5 clients into probably just a few more than 4/5 policies, i.e. have a one-to-one relationship between 'client_name + backup_selection' to 'policy_name' and then implement 'SLP AIR' gradually on a per policy basis.   BUT - remember to do any policy splits just before the next 'full' backup is due to run - because, for example...

...creating a new policy, or adding a client to policy, or amending the selection of a backup policy during mid-week before an incremental/differential is due to run will result in the incremental/differential being an effective full - because NetBackup thinks that it hasn't seen that combination before.

7b) The 110GB, is that from one backup client only, or the total from several backup clients?

7c) The 110GB, is that from a weekly(/monthly) full only, or does the daily (cumulative? differential?) incremental generate 110 GB?

7d) If the 110GB comes from weekly/weekend full backups - when in the weekend do the 'full' schedules trigger?  Friday night?  Saturday morning?  Saturday night?

7e) Depending upon the answers to the above, you may have enough time to replicate the de-duped 110GB - IF the backup schedules start and complete early enough - but then, is the 'company' active at the weekend and thus in need of uncongested WAN links?

8a) Ok - 2.8TB is the size post de-dupe.  And not the total size of backup data stored.

8b) My thinking re pre-seeding was to perhaps duplicate/re-hydrate from MSDP to DSU (Disk Storage Unit - aka 'basic disk'), then transport this to the other site/NetBackup domain, then import, and then duplicate/ingest (this time the other way around, from DSU to MSDP). But the re-hydrated size of the backup data (what appears to be 2.8TB) will very likely be much larger than the portable USB drives, which rules out this idea. And, more worrying, the source environment will think that it has still got a copy on DSU, and so will the target environment - so catalog entry manipulation/expiry is fraught with risk... Thus, your idea of a portable master/media+MSDP is a better idea - and so maybe some scripts could be written to 'reverse AIR' (i.e. 'push' NetBackup AIR) replication from one environment to another - along the lines of:

source master -> laptop master

laptop master -> target-master

(Apologies for using the term 'master' in the two '->' pushes just above - I need to make it a bit clearer that, whilst NetBackup AIR appears to be between 'master servers', it is in fact between 'MSDP storage servers'. The apparent 'master' involvement is purely a 'trust' configuration, i.e. the master servers are configured to allow NetBackup MSDP storage servers (usually media servers with MSDP, but also sometimes master/media servers with MSDP) to transfer (replicate) data between one another.)  HTH.

Moose100
Level 3

Hiya, thanks again for the info.

 

7)a - Yep, I already know how I can break the backups up; we are only talking about 5 servers and about 9 policies, shouldn't be too much drama. Thanks for the confirmation about changing the policy just before an incremental... I learnt this lesson the hard way on another NetBackup server a few months ago :)

Anyway it doesn’t apply here, all the jobs on this particular server are full backups, not an incremental in sight.

7)b - This 110 GB weekly BMR job is from one client, a great big management server. The other 4 production servers have much smaller footprints for their BMR jobs.

7)c - It's a full BMR job - no incremental or differential jobs are in place on this NetBackup system.

7)d - Every night a full backup of some SQL databases and file locations runs (this is the 40 GB per night); at the weekend the BMR jobs run for all 5 servers, 2 on a Saturday and 3 on a Sunday (inc. the large 110GB job).

8) I'm not ruling out this copy/SLP/seeding laptop routine, but in the meantime I have proposed that we install another NetBackup server in another data centre with a decent usable link. In the grand scheme of things it makes more sense, as this customer data is VERY important to the company. Obviously I can't say who the customer is, but this system is super critical.

 

If not, I plan to take time splitting up the policies and gradually introducing the AIR replication (as you have suggested above). I think this is possible, with the exception of the huge 110 GB management server backup. The customer will just have to accept the risk on this, as that backup will never replicate over that link.
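A quick worst-case estimate supports this view of the 110 GB job (a sketch in Python; the helper is my own illustration and assumes the full backup crosses the wire un-deduped, decimal units):

```python
def hours_to_send(gb: float, mbps: float) -> float:
    """Hours to send gb gigabytes at mbps megabits per second (decimal units)."""
    return gb * 1e9 * 8 / (mbps * 1e6) / 3600

# The weekly 110 GB BMR backup, at line rate and at a 5 Mbps throttle:
print(round(hours_to_send(110, 10), 1))  # ~24.4 hours
print(round(hours_to_send(110, 5), 1))   # ~48.9 hours
```

Over a day even with the link saturated, so unless dedupe cuts the replicated bytes dramatically, the weekend window isn't enough for that job.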

 

Thanks again for all the input, it's very helpful.

 
Nicolai
Moderator
Moderator
Partner    VIP   

Sorry for the late reply.

I don't know if the bandwidth limitation applies to SLPs. I would test it out.

 

 

Andrew_Madsen
Level 6
Partner

If you are using MSDP in both sites then throttling applies to the SLP; we do it all the time with a few of our customers. The Deduplication Administrator's Guide http://www.symantec.com/docs/DOC6466 outlines how to set the bandwidth. My recommendation is to use the agent.cfg setting to attain the throttle limit. Look on page 119 and you will see reference to agent.cfg and its location (<storage location>\etc\puredisk); edit it and set bandwidthlimit to a kilobyte value equal to the amount of bandwidth you want to use. In your case, if the customer wanted you to use only 5 Mbps (megabits per second), then you would convert that to KBps (kilobytes per second): 5*1024/8=640.

This setting affects the duplication/replication as a whole, not each job individually, so as the replication job count goes up each job uses less and less bandwidth, and as the job count decreases each job uses more and more. Kind of like a tunnel in a tunnel in a tunnel, in your case.
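The conversion above can be sketched as a one-liner (the helper name is my own; it follows the 1 Mb = 1024 Kb convention used in the worked example):

```python
def agent_cfg_bandwidthlimit(mbps: float) -> int:
    """Convert a megabit/s target into the KB/s value for the agent.cfg
    bandwidthlimit setting, per the Mbps * 1024 / 8 rule quoted above."""
    return int(mbps * 1024 / 8)

print(agent_cfg_bandwidthlimit(5))   # 640, matching the worked example
```

So a 5 Mbps cap means bandwidthlimit=640, and e.g. a 10 Mbps cap would be 1280.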