Design consideration, please share your thoughts

Level 3

We are going to build a backup environment where a Solaris server running NetBackup 6.5 will be the application (master) server, with 200 clients. This will be a SAN environment using SSO. I would like your thoughts on the following design questions, and I will be most thankful for any you can share.

1. How many media servers are required? What metric is used to decide when another media server is needed?
2. How many tape drives are required to share among the media servers? (An IBM library with LTO-3 drives is planned.)
3. How do we overcome resource contention?
4. What guidelines should we follow when creating policies in order to avoid resource contention?
5. The estimated daily backup size is 5 to 6 TB, with a minimum of 10 TB on weekends. Please advise on guidelines to follow for the design process.

Thank you

Level 6
We have the same setup you describe, and back up between 20 and 30 TB per day.
We use three media servers for network backups and round-robin them.
If a client's storage is over 2 TB we make it a SAN media server and back up over fibre.

We have 24 tape drives, and we ended up getting a Virtual Tape Library to handle the need for more drives than we had available during our peak loads - although you will also see "backup to disk, then copy to tape" or deduplication presented as viable options.

How fast can your servers 'push' the data? Even with multiplexing, you will find backups taking as long as the slowest server - you get one Windows box that outputs at 128 KB/s and it ties up a drive all day. Things like that make your question tough to answer.

Level 4
This is generalized to any backup product but should help answer your questions.

1. First you need to get the estimated size of each client's backup and then add 20%.
Next, divide that number by the length of the backup window to get the throughput (MB/s) needed to meet the window even if the client grows by up to 20%.
For network backups you can expect to get about 90% of a Gig-E connection's 125 MB/s throughput, or around 112 MB/s. Using the computed MB/s needed per client and each client's backup window, you can chart out when each client will run its backups, which gives you a good estimate of how many media servers you need to meet those windows. Of course, multiple bonded NICs can be used to increase the bandwidth of a single media server, provided the media server OS and your network hardware support it.

When you figure out the number of media servers from the estimates above, you should add 1 or 2 additional media servers, both for high availability and for unforeseen growth of the environment.
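The sizing arithmetic above can be sketched as follows. The client sizes, window length, and spare count here are hypothetical; the 20% growth margin and the ~112 MB/s per-server network throughput come from the estimates in the post:

```python
import math

# Hypothetical per-client backup sizes in GB (illustration only).
client_sizes_gb = [500, 1200, 300, 800, 2000]

GROWTH_MARGIN = 1.20   # add 20% headroom per client, as above
WINDOW_HOURS = 8       # assumed nightly backup window
NET_MBPS = 112         # ~90% of a Gig-E link's 125 MB/s

window_seconds = WINDOW_HOURS * 3600

# Aggregate MB/s needed to finish every client inside the window.
total_mb = sum(client_sizes_gb) * 1024 * GROWTH_MARGIN
required_mbps = total_mb / window_seconds

# Each network media server can sustain roughly NET_MBPS.
media_servers = math.ceil(required_mbps / NET_MBPS)

# Add spares for high availability and unforeseen growth.
media_servers_with_spares = media_servers + 2

print(f"Required throughput: {required_mbps:.1f} MB/s")
print(f"Media servers (minimum): {media_servers}")
print(f"Media servers (with spares): {media_servers_with_spares}")
```

In a real design you would do this per backup window rather than for one aggregate, since clients with different windows do not compete for the same media server hours.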

2. For the tape drives, I would suggest mapping all of them to the master and media servers with full access to every drive. NetBackup is really good now at juggling drives as needed. If you do have an issue with one or more media servers hogging too many drives, you can decrease the number each can use at one time, but they will still be able to reach all of them.

3. Resource contention will always be a problem unless you get a really big budget to heavily overbuild. When weighing options, do not forget to consider restore speeds as well. Multiplexing sounds wonderful from a time-compression and tape-utilization standpoint on backups, but when restoring you have to take the backup time and multiply it by the number of multiplexed streams to get the restore time. It is much like reading a fully defragmented file versus a heavily fragmented one: to read the image back, the tape has to be repositioned many times to find the next blocks of data, which are interleaved with the other backups written to that tape.
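The restore penalty described above is simple arithmetic, sketched here as a worst-case rule of thumb from the post (not an exact model; the backup time and stream count are assumed values):

```python
# Worst-case restore estimate for a multiplexed backup image,
# per the rule of thumb above: restore time scales with the
# number of streams interleaved on the tape.

backup_minutes = 90    # time the original backup took (assumed)
mpx_streams = 4        # multiplexing factor on the tape (assumed)

restore_minutes = backup_minutes * mpx_streams
print(f"Estimated worst-case restore: {restore_minutes} minutes")
```

This is why a disk staging unit or deduplication appliance, as described below, pays off: it lets you skip multiplexing entirely while still meeting the backup window.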

One thing I found to help a lot was the use of a local disk storage unit that let me run backups to disk first and replicate them off to tape later. This allowed multiple backups that might otherwise have required multiplexing on tape to meet the backup window to complete without it. The replication was easily scheduled to run in a window with less tape usage, and once it was done the disk images were expired.

Another option is a deduplication appliance and/or software. These also take data in to disk, but store pointers to repeated data blocks already on the appliance. This not only gives you a very fast restore time compared to tape, but also saves media cost for backups that do not really need to go to tape immediately. Just as with local disk backups, processes can be put in place to move these backups to tape later, during low tape-usage periods.

4. Before creating a policy, it is best to look at when you are able to back up a client and how much data is involved. Larger amounts of data may need multiple streams to complete within the window. The frequency of backups makes a big impact as well. When I plan backups, I plot out what is running on each day and what resources I will need to make it all happen, then use that in the policy to make sure I have enough tape drive and disk resources in that window.

Another thing I have used in the past was to assign specific backups to specific media servers instead of using the round-robin assignment method. Knowing exactly which media server and resources will always handle a given backup helps a lot, because I can rebalance if one server is getting hit harder than the others.

5. As noted above, to split up the amount of data being backed up you need to look down at the client level. Bigger clients might be good candidates for being their own Enterprise Client (what I believe used to be called a SAN Media Server).

Check the sizes of the backups as well, since you can use multiple streams to get the data off bigger clients faster, up to the point where additional streams begin to negatively impact performance.

Your biggest challenge will be making sure you have the network, SAN, and media server resources to spread out your load without oversaturating your infrastructure.
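As a closing sanity check on the original question's numbers: LTO-3 drives have a native (uncompressed) transfer rate of roughly 80 MB/s, so you can bound the minimum drive count for the stated 6 TB weekday volume. The 8-hour window here is an assumption for illustration:

```python
import math

DAILY_TB = 6         # weekday estimate from the original question
WINDOW_HOURS = 8     # assumed backup window
LTO3_MBPS = 80       # LTO-3 native (uncompressed) drive speed

total_mb = DAILY_TB * 1024 * 1024
required_mbps = total_mb / (WINDOW_HOURS * 3600)
drives_needed = math.ceil(required_mbps / LTO3_MBPS)

print(f"Aggregate throughput needed: {required_mbps:.0f} MB/s")
print(f"LTO-3 drives kept streaming: {drives_needed}")
```

In practice you will need more drives than this lower bound, since slow clients keep drives from streaming at their full rate, exactly the problem the first reply describes.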