SharePoint 2013/2010 backup over WAN

_Johan
Level 3

Hi

I need some advice on protecting SharePoint 2010/2013 farms located at remote sites, with fairly slow links between them and the NetBackup servers.

Today we have a central SharePoint 2010 solution, and the problem for some of our remote sites is that larger files are slow to open and uploads sometimes fail.

My thought is to set up a new SharePoint farm at one of our remote sites for local sharing of files and sites, but then we have a backup/restore challenge for that site.

Are client-side deduplication and Accelerator supported for SharePoint 2010/2013? Both GRT (not for 2013 at the moment) and non-GRT backups.

Even if you can back up that SharePoint farm, restores would be a problem, as the whole file, database or whatever has to be sent over the WAN link.

Our NetBackup environment (7.5.0.6) is located at our main site: one virtual master server (Windows 2008 R2) and two NetBackup 5220 appliances (media servers), one at our main site and one at our DR site (fiber connection) that holds a "copy" of all data with the help of SLPs and optimized deduplication.

 

Any suggestions on how to best solve this scenario without having to buy and set up a new appliance at the remote site?

thanks

 

/Johan

3 Replies

RLeon
Moderator

Both SharePoint 2010 and 2013 support client-side dedup, but not Accelerator. GRT works with SP2010 but not SP2013, as you said, with or without client-side dedup.

Even without Accelerator, client-side dedup alone should help lower network overhead over your WAN link during backups. Its effectiveness would of course depend primarily on the data's rate of change at the remote site. If that is reasonably low, and your WAN link has reasonably good capacity and throughput, then backing up this remote SharePoint farm should be fine.
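
Just to put some rough numbers on that, here is a quick back-of-the-envelope sketch (Python, with entirely made-up figures; plug in your own change rate, dedup ratio and link speed) of what the nightly WAN transfer could look like with client-side dedup:

```
# Rough estimate of the nightly WAN transfer for a client-side-deduplicated
# SharePoint backup. All figures are illustrative assumptions only.

farm_size_gb = 500          # assumed total size of the remote SharePoint farm
daily_change_rate = 0.03    # assumed: 3% of the data changes per day
dedup_send_ratio = 0.10     # assumed: only ~10% of the changed data is unique
                            # and actually has to cross the WAN after dedup
wan_mbps = 20               # assumed usable WAN bandwidth, megabits per second

changed_gb = farm_size_gb * daily_change_rate
sent_gb = changed_gb * dedup_send_ratio      # what client-side dedup actually sends
hours = sent_gb * 1024 * 8 / wan_mbps / 3600

print(f"Changed data per day : {changed_gb:.1f} GB")
print(f"Sent over WAN        : {sent_gb:.1f} GB (after client-side dedup)")
print(f"Approx transfer time : {hours:.2f} h at {wan_mbps} Mbit/s")
```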

 

Restoring large volumes of data to remote clients has always been a problem, since NetBackup rehydrates deduplicated data before sending it during a restore.

There is a new feature in NetBackup 7.6 called Client-Direct Restore. It's like client-side dedup in reverse: the dedup storage server bypasses the media server and restores data in dedup'd form directly to the client. To take advantage of this you would have to upgrade your master server to 7.6.x and your Appliances to 2.6.x. I'm not sure how well it would work with SharePoint data though; I've never tested it.
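
Again purely as an illustration with assumed numbers (including the fraction of a restore that would still cross the WAN in dedup'd form), the same kind of arithmetic shows why rehydration hurts on a slow link:

```
# Compare a conventional restore, where the full rehydrated size crosses the
# WAN, with a (hypothetical) Client-Direct Restore where data stays dedup'd.
# Sizes, ratio and bandwidth below are assumptions for illustration only.

restore_size_gb = 200        # assumed size of a content database to restore
dedup_transfer_ratio = 0.35  # assumed fraction sent when data stays dedup'd
wan_mbps = 20                # assumed usable WAN bandwidth

def transfer_hours(size_gb: float, mbps: float) -> float:
    """Hours needed to push size_gb across a link of mbps megabits/second."""
    return size_gb * 1024 * 8 / mbps / 3600

rehydrated = transfer_hours(restore_size_gb, wan_mbps)
client_direct = transfer_hours(restore_size_gb * dedup_transfer_ratio, wan_mbps)

print(f"Conventional restore (rehydrated): ~{rehydrated:.1f} h")
print(f"Client-Direct Restore (dedup'd)  : ~{client_direct:.1f} h")
```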

 

If that doesn't work out, then you could always create a 'token' media server at the remote site. It doesn't necessarily have to be another Appliance; an MSDP media server in a VM works fine too. That is unless we are talking about some huge amount of data, in which case a NetBackup Appliance could be a better option, all things considered, including its built-in WAN optimization feature. You would still need a license for this media server, of course, unless you're on the Platform-based licensing model.

_Johan
Level 3

Thanks for the answer RLeon!

I will check a little bit more on "Client-Direct Restore" and see if that could be something to consider.

Also, I'm a little bit interested in, and have thought about, the solution with a "local" media server at the site.

Regarding the license for a new server, I think (I have asked our license partner to confirm with Symantec) that I am licensed to set up as many as I want, as we pay per protected TB.

But as I only have appliance media servers today, I have some questions that you maybe could also answer.

If I set up a new NetBackup media server at the remote site, could I use optimized duplication from that site to our other two sites with SLPs?

If the media server is a virtual server in Hyper-V with, for example, 1 TB disks, how is the deduplication handled on that media server? Is it the same as on the appliances?

 

My thought was then:

1. Back up to the remote site's local media server and use deduplication.

2. Use optimized duplication to "copy" all data backed up at the remote site to our two sites, main and DR.

 

With the above, we could back up and restore data at the remote site fast, and we would also get protection if the media server "disappears" or whatever, with only a minimal "loss" of bandwidth.

I may also want the data backed up on the local media server at the remote site to have, let's say, 2 weeks' retention, while at our main sites it may be different, e.g. 2 years. Would that be a problem?

 

Also, what kind of NetBackup "options" do I need to accomplish this?

Could you point me in the right direction, or if I'm thinking really wrong here, light that up for me ;)

 

thanks

/Johan

RLeon
Moderator

You got everything, there is nothing more to add! No, seriously.

It appears you are on the Platform-based licensing model (a.k.a. capacity-based), so you are free to deploy additional NetBackup servers, self-installed or Appliance.

Unless you want to "test the waters" with 7.6 and Client-Direct Restore for SharePoint, a local MSDP media server at the remote site would be the best option in terms of simplicity.
The remote media server would still be part of the "NetBackup domain" under the master server at the main site. Optimized duplication happens between MSDP media servers within the same NBU domain, regardless of their respective geographical locations. And yes, you do this with SLPs; and yes, data at the remote site can have different retentions from the copies sent to the main site, and this is also done with SLPs.
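
To make that concrete, the lifecycle you described could be sketched roughly like this (plain data for illustration only, not actual SLP syntax; the storage unit names and retentions are placeholders based on your example):

```
# Sketch of the intended storage lifecycle: one backup step to the remote
# MSDP pool plus two optimized-duplication steps, each with its own retention.
# Storage unit names and retention lengths are hypothetical placeholders.

from datetime import timedelta

slp_remote_sharepoint = [
    # step type     target storage unit         retention
    ("backup",      "remote_msdp_stu",          timedelta(weeks=2)),    # fast local restores
    ("duplication", "main_site_appliance_stu",  timedelta(weeks=104)),  # ~2-year copy
    ("duplication", "dr_site_appliance_stu",    timedelta(weeks=104)),  # ~2-year DR copy
]

for step, stu, retention in slp_remote_sharepoint:
    print(f"{step:12s} -> {stu:26s} retention {retention.days} days")
```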

Deduplication, as well as optimized duplication, works the same way on a "self-installed" MSDP media server as on an Appliance-based MSDP media server. But the Appliance does have a built-in WAN optimization feature, which makes data transfers "faster" between two Appliances.

Running a media server as a VM is supported, but do take into consideration the processor, memory and disk storage throughput requirements for deduplication. Tape connectivity is not supported for a VM-based media server. More info is available in the NBU Dedup guide and the NBU-in-a-VM document.
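
If it helps, a trivial sizing sanity check could look like the sketch below; the minimum values in it are made-up placeholders, so take the real figures from the Dedup guide for your release:

```
# Rough sanity check for a VM-based MSDP media server. The minimums below are
# illustrative assumptions only; the authoritative values are in the
# NetBackup Deduplication Guide for the release you deploy.

ASSUMED_MINIMUMS = {
    "cpu_cores": 4,               # assumed minimum cores for a small MSDP pool
    "memory_gb": 16,              # assumed minimum RAM
    "disk_throughput_mb_s": 200,  # assumed sustained MB/s for the dedup pool disks
}

planned_vm = {
    "cpu_cores": 4,
    "memory_gb": 16,
    "disk_throughput_mb_s": 250,  # e.g. measured on the planned 1 TB Hyper-V volume
}

for key, minimum in ASSUMED_MINIMUMS.items():
    status = "OK" if planned_vm[key] >= minimum else "TOO LOW"
    print(f"{key:22s} planned={planned_vm[key]:>5} min={minimum:>5}  {status}")
```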

Another option would be for the remote site to have its own NetBackup domain, i.e., its own master server and possibly some media servers. In this case you use A.I.R. to replicate data between the two sites/NBU domains. A.I.R. also sends data in dedup'd form, much like optimized duplication within the same NBU domain. This approach may be better for some, because each site would be self-contained and self-sufficient. But you'd effectively have to manage/administer two separate NetBackup deployments, in some sense.
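
Side by side, the trade-off looks roughly like this (purely illustrative, not NetBackup configuration; the host names are placeholders):

```
# Compact comparison of the two designs described above; the host names are
# hypothetical placeholders, and this is not configuration syntax.

single_domain = {
    "masters": ["main-site-master"],
    "media_servers": ["main-5220", "dr-5220", "remote-msdp-vm"],
    "remote_copy_mechanism": "SLP optimized duplication (same domain)",
    "administration": "one NetBackup domain to manage",
}

two_domains_air = {
    "masters": ["main-site-master", "remote-site-master"],
    "media_servers": ["main-5220", "dr-5220", "remote-msdp"],
    "remote_copy_mechanism": "Auto Image Replication between the two domains",
    "administration": "two self-contained domains to manage",
}

for name, design in (("Single domain + SLP", single_domain),
                     ("Two domains + A.I.R.", two_domains_air)):
    print(name)
    for key, value in design.items():
        print(f"  {key}: {value}")
```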