Maximum distance between client and Master/Media Server

Kai_8z
Level 3

Hi all,

We are in the design-planning phase and looking for ideas (ultimately, to save money...).

We have NetBackup 7.6.0.4 in our data center at Location A, with a lot of master/media servers and tape robots.

In the future we will have a second location (B), about 600 km away, where several clients are located.
These clients' backups should be stored at a different location than the clients themselves.

To avoid deploying NBU servers in Location B, our idea is, instead of backing up the clients locally in Location B and replicating the data to Location A, to connect the clients in Location B directly to the NBU servers in Location A.

I suspect the problem could be the 600 km distance.
Is this possible at all?
Would it work, but slowly/badly?

Or something else entirely...

thanks

Kai

Any experiences?

3 REPLIES

vtas_chas
Level 6
Employee

There are some variables missing here, like what your link speed will be, how much data you have in site B, and what the change rate is, but just on its face I probably wouldn't do this.  You're not likely to have a link fast enough, with low enough latency, to push large amounts of data.
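
As a rough sanity check, here's a back-of-envelope estimate of how long a full backup would take across such a link; all numbers are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope: hours needed to push a backup of a given size over
# a WAN link. Plug in your real link speed and site B data volume.

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to move data_gb over a link_mbps link.

    efficiency is a rule-of-thumb allowance for protocol overhead and
    contention on a shared link (an assumption, not a NetBackup figure).
    """
    data_megabits = data_gb * 8 * 1000        # GB -> megabits
    return data_megabits / (link_mbps * efficiency) / 3600

# Assumed example: a 5 TB full backup over a 100 Mbit/s WAN link
print(f"{transfer_hours(5000, 100):.0f} hours")   # ~159 hours -- no backup window survives that
```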

If I were to try it, it wouldn't be without client-side dedupe and Accelerator turned on.  But you're still painting yourself into a corner you don't want to be in -- your only option for restore is dragging the data back across the wire.  
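
To see why those two features matter so much here, compare what actually crosses the wire with and without them. The change rate and dedupe ratio below are assumptions you'd have to validate against your own data:

```python
# After the initial full, client-side dedupe + Accelerator mean only
# changed, unique segments cross the WAN. All ratios here are assumed.

def daily_wan_gb(data_gb: float, change_rate: float, dedupe_factor: float) -> float:
    """Approximate GB actually sent per day after the first full backup."""
    return data_gb * change_rate / dedupe_factor

naive     = daily_wan_gb(5000, 1.0, 1.0)   # a plain full every day: 5000 GB
optimized = daily_wan_gb(5000, 0.02, 4.0)  # 2% daily change, 4:1 dedupe: 25 GB
print(naive, optimized)
```

Note that a restore gets none of this help, though: the restored image comes back across the wire at raw link speed.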

If you're dealing with a small amount of data, have you considered using the virtual appliance?  It would be simple to set up, you'd have a local recovery copy, and you could use AIR, which would make this kind of design straightforward to execute.  The VA currently supports only 2TB of configured storage, so it's limited in scope, but depending on what site B holds, it may be enough.

Charles
VCS, NBU & Appliances

Hi Charles

thanks for your answer.
Yes, you are right, some variables are missing.
At the moment we are looking for solutions "different" from the usual, well-known ones.  In the end we want a solution that is easy to manage and has low costs.
The question we had was whether this is a feasible approach or not. If I understand you correctly, the distance will kill the throughput because of the latency, right?
What is unclear to me is which part of the "backup stream" has the most problems with the latency:
is it the communication between client and master, between master and media server, or something else?

Kai

vtas_chas
Level 6
Employee

Latency is one issue; overall bandwidth is another.  Your firewall capabilities and your VPN throughput could be others.  

You're streaming a potentially large amount of data over a long-distance WAN link; everything is going to be sensitive to latency, noise on the link, whatever else has to traverse that network link, etc., not just one specific area.  There are lots of pieces with the potential for issues.  In this kind of situation, if you can, test it and see what you get.  I know that's not a great answer, but it's the only reliable way to know what the impacts are going to be.  I've helped customers design in similar situations only to find that all our assumptions were wrong because of a QoS setting for a specific application we couldn't change.  Weeks of calculation and modeling went out the window because of this one thing the backup team didn't know.  If we'd tested before we went down the assumptions path, we'd have had that information up front.
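
One cheap test worth doing first: measure the round-trip time (ping) and raw throughput (e.g. iperf3) between the sites, because latency alone caps what a single TCP stream can carry.  A sketch of the arithmetic, with assumed RTT values for a 600 km path:

```python
# A single TCP stream can never exceed window_size / round-trip time,
# no matter how fat the pipe is. RTT values below are assumptions for
# a 600 km path; measure your real RTT before designing anything.

def max_tcp_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Theoretical per-stream ceiling in Mbit/s: window / RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# 600 km of fiber gives roughly 6 ms RTT as a physical floor (light in
# glass travels ~200,000 km/s); routers, firewalls, and VPN overhead
# usually push the real figure far higher.
for rtt_ms in (6, 15, 30):
    print(f"{rtt_ms} ms RTT -> {max_tcp_mbps(64 * 1024, rtt_ms):.1f} Mbit/s max per stream")
```

With a 64 KB window, 30 ms of RTT caps each stream at about 17 Mbit/s, which is why TCP window scaling, multiple streams, or both can matter more than the nominal link speed.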

Client-to-media-server communication can be sensitive to latency and incur timeouts, which can be addressed with some tweaking of timeout values -- but as you tweak to work around network issues, what are you doing to your backup window?  
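
For reference, the usual knobs for those timeouts are the CLIENT_READ_TIMEOUT and CLIENT_CONNECT_TIMEOUT entries in bp.conf on the server side.  The values below are placeholders to illustrate the mechanism, not recommendations:

```
# /usr/openv/netbackup/bp.conf (Unix path; on Windows these settings
# live in the registry). Placeholder values -- size them to your
# measured latency and retry behavior.
CLIENT_READ_TIMEOUT = 900      # default 300 s; raise if reads time out over the WAN
CLIENT_CONNECT_TIMEOUT = 600   # default 300 s; raise if initial connects are slow
```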

At the end of the day, with a lot more detail you could design something that will likely work, depending upon the constraints you have around backup window and amount of data.  It's just not a great way to design a solution, IMO, because as I said, you're basically ignoring restorability of the data when you design this way.  

Charles
VCS, NBU & Appliances