Our Background:

As an ASP/MSP company that focuses strictly on the Microsoft front, finding a functional yet cost-effective backup solution can certainly be a challenge. While there are many, many choices when it comes to backup software, our research led us to conclude that most backup suites offer advanced features that are either not fully effective, difficult to implement, or costly to license because of the granular way the add-on features are broken up. It seemed a hopeless case to find a product that would suit our narrow niche while remaining cost effective and covering all of the options we needed to provide a broad spectrum of protection features to our customers. Historically, we had simply been providing network shares for our users to store data, but with the growing demand for managed services, along with stringent requirements for data retention, encryption, and compression, we knew that finding a centralized platform that met these needs was mandatory.

Our Requirements:

Since we are an MSP, our requirements revolved around being able to offer our customers a number of products from a single software suite, as well as being able to meet the requirements and restrictions placed on our customers regarding storage of their data. Our requirement list at the time was:
  • Deep data compression to reduce backend storage requirements
  • Backup “resume” in the event of a failure
  • Server-side job throttling to avoid network and disk congestion
  • Backup of Network Drives and Mount Points
  • Encryption of data storage
  • Hostname validation to avoid data poaching
  • Multiple defined schedules
  • Integration with VSS provider to provide Open File access
  • Integration with SQL to provide active database backups
  • Ability to define excludes from the client side
  • An endpoint client to provide our customers with the ability to manage data selection and recover data
  • Server resiliency: a method to recover the server in the event of a failure
  • Extensive logging to facilitate troubleshooting
  • Easy agent deployment
  • Easy location of failed backups, including email notification on failure
  • Integration into our central customer control panel for backup metrics
Evaluating Software:

With a long list of mandatory requirements, we found ourselves evaluating lots of software that did not meet our needs. There were many software suites that came close, but it seemed there was always one critical function missing. Then we came across NetBackup. At first glance through the whitepapers, it seemed that NetBackup would meet a lot of our needs. The sheer amount of third-party support for NetBackup alone made it worth looking into (meaning that if our needs couldn't be met with the product alone, there was the potential of meeting them with third-party components). Our testing went as normal: we installed a NetBackup server in our lab to begin testing with. One of the first hurdles we hit was the reverse DNS validation that NetBackup performs (validating the hostname of the machine creating the inbound request against the IP's PTR record and matching the two). At the time of testing, our environment consisted of close to 800 servers running in workgroups, and our DNS was not configured to facilitate reverse DNS lookups for those IPs. We circumvented this issue by implementing a policy of using the hosts file on the NetBackup server to create one-to-one server-name-to-IP mappings and controlling it that way. Since then, we have moved to a centralized management domain, but to this day we still use the hosts file to control reverse-lookup security.
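
For anyone fighting the same battle, the workaround amounts to ordinary Windows hosts-file maintenance on the master server. A minimal sketch (the hostnames and addresses below are made up for illustration):

    # C:\Windows\System32\drivers\etc\hosts on the NetBackup master server
    # One entry per protected client so reverse validation resolves predictably.
    10.10.1.21    CUSTSRV01
    10.10.1.22    CUSTSRV02
    10.10.1.23    CUSTSRV03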

Another issue we ran across was agent deployment. One of our requirements was an idiot-proof way of deploying agents and keeping those agents up to date as new releases of the product came out. Since we were not using a centralized domain, we had no easy method of deploying software from a central point. We ended up creating a deployment batch file that used set values depending on certain parameters, such as backup policy, and used global Windows environment variables to set other parameters of the installation. This was not ideal, as it still required engineer interaction with the endpoint, but it met our needs.
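
To give a flavor of the approach (not our exact script), the wrapper looked roughly like the batch sketch below. The installer path and silent-install switches are placeholders, since those vary by NetBackup release, and BACKUP_POLICY stands in for the kind of global Windows environment variable we keyed off of:

    @echo off
    rem Hypothetical NetBackup client deployment wrapper (illustrative only).
    rem BACKUP_POLICY is set machine-wide by the engineer beforehand.
    if "%BACKUP_POLICY%"=="" (
        echo BACKUP_POLICY is not set - aborting.
        exit /b 1
    )
    rem Pick the master server based on the policy the client belongs to.
    set MASTER=nbumaster01
    if /i "%BACKUP_POLICY%"=="SQL_DAILY" set MASTER=nbumaster02
    rem Placeholder silent install; real switches depend on the NBU release.
    \\fileshare\nbu\client\setup.exe /s MASTERSERVER=%MASTER%
    echo Installed NetBackup client for policy %BACKUP_POLICY% against %MASTER%.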

After getting past the basic deployment and operation issues, we began testing some of the feature sets. High on our list was server resiliency and recovery. Our environment uses remote storage as a data store for our customers. We found that by using this remote storage to hold catalog backups, we could essentially lose the entire NetBackup server and still completely recover from the hardware failure by rebuilding the machine, setting up the fiber-attached storage with the same drive names as before, and restoring the catalog. While it was not a true failover environment, it protected us against hardware failure by combining highly redundant storage with remote catalog backups.
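
In rough terms, the rebuild ran like the outline below. We are paraphrasing from memory, and bprecover options differ between releases, so treat this as the shape of the procedure rather than the exact commands:

    rem 1. Rebuild the OS and reinstall NetBackup at the same version/patch level.
    rem 2. Re-attach the fiber storage using the same drive names as before.
    rem 3. Recover the catalog from the latest catalog backup on remote storage.
    cd /d "C:\Program Files\Veritas\NetBackup\bin\admincmd"
    bprecover -wizard
    rem 4. Verify policies, storage units, and media state before resuming backups.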

As we continued to add more and more machines to our test NetBackup environment, we found ourselves approaching maximum capacity on our ISL (Inter-Switch Link) into the backup environment. After some research, we found that by throttling the number of active connections to a storage unit, we could effectively control network saturation without having to resort to actually throttling throughput. Since all of our equipment was in the same datacenter on a 100 Mb/s backbone, raw network speed itself was not an issue; the real problem was saturation, which became especially apparent when testing multiple streams (which we later decided against using due to performance issues).
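
The knob in question is the storage unit's "Maximum concurrent jobs" setting. A hedged command-line sketch follows; bpstulist and bpsturep are standard NetBackup admin commands for inspecting and modifying storage units, but the exact flag for concurrent jobs varies by storage unit type and release, so the -cj switch shown is an assumption to illustrate the idea:

    cd /d "C:\Program Files\Veritas\NetBackup\bin\admincmd"
    rem Show current storage unit settings, including concurrent job limits.
    bpstulist -U
    rem Cap concurrent jobs on the unit feeding the congested ISL
    rem (-cj is our assumed flag for max concurrent jobs; verify per release).
    bpsturep -label disk-stu-01 -cj 4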

Implementation and Acceptance:

After putting the product through the wringer, we decided to roll it out on a small scale to gauge potential problems and acceptance from our customer base, which is the ultimate judge and jury for new services. We began to move customers over and found almost instant adoption. The end-user experience provided just enough control to allow customers to define their backup experience while remaining secure and not over-exposing features. Also, by simply using the Activity Monitor in the Administration Console, we found it very easy to pinpoint issues and troubleshoot them quickly and proactively, providing a seamless service. On the back end, we found that by segregating data storage into multiple storage units, we could meet the security concerns of most of our customers without having to implement encryption (which later proved to be a great thing when BMR was introduced into the product). We found ourselves removing archaic backup methods and replacing them with NetBackup, which allowed us to monitor protection not only for our customers, but for our internal equipment as well.
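
For spotting failures in a scripted way outside the GUI, the bpdbjobs command pairs well with the Activity Monitor. A small sketch, with the caveat that the report's column layout differs between releases, so the findstr filter here is purely illustrative:

    cd /d "C:\Program Files\Veritas\NetBackup\bin\admincmd"
    rem Dump the job database, then pull out anything that did not succeed.
    bpdbjobs -report > %TEMP%\nbu_jobs.txt
    findstr /i "failed" %TEMP%\nbu_jobs.txt > %TEMP%\nbu_failed.txt
    rem A notification step (mailer of your choice) would consume the failed list.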

Ongoing Use of a Great Product:

The implementation of our NetBackup environment happened several years ago, and the product has gone through many changes since the initial install. Since then, we have been actively using BMR for easy hardware migration and physical-to-virtual (P2V) conversion on both internal and customer equipment. We have also found ourselves heavily tweaking the exit script, through the use of the job variables it is passed, to integrate reporting and metrics into the control panel for our customers.
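
The hook here is backup_exit_notify.cmd in the NetBackup bin directory, which the server calls at the end of each backup job with job details as arguments. Below is a stripped-down sketch of the idea; the argument order and the metrics file path are assumptions for illustration (a real integration would post into the control panel instead of a flat file):

    @echo off
    rem backup_exit_notify.cmd - invoked by NetBackup after each backup job.
    rem Assumed argument order: client, policy, schedule, schedule type, status.
    set CLIENT=%1
    set POLICY=%2
    set SCHED=%3
    set SCHEDTYPE=%4
    set STATUS=%5
    rem Append a one-line metric record for the control panel importer to read.
    echo %DATE% %TIME%,%CLIENT%,%POLICY%,%SCHED%,%STATUS% >> D:\metrics\nbu_jobs.csv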

Since testing, our organization has grown to over 2,400 servers spread across three geographic locations, protecting close to 30 TB of data. We have since started using other products in conjunction with NetBackup to meet some of the growing needs of our business, but none of them have worked as predictably and reliably as NetBackup. NetBackup has been a rock-solid platform for us to build our business on, thanks to its flexibility, continued development of great features, awesome support backed by the deep knowledge of a well-established product, and a no-nonsense approach to providing backups that work every time.