This story goes back to 2004. The customer was a bank with 322 branches and 3 datacenters spread across India. There were 200+ servers across these sites, and one of the major tasks was to ensure that data was safeguarded for all the banking applications.
Core banking system
And the list of applications would go on. For the core banking system, before the batch process started, we were supposed to take file-system and application-level backups, just to ensure that the entries posted by the user group and the financial transactions made were intact and the data was safe. That way we had the best data available before starting the batch process, which performed all financial calculations and accruals for both corporate and retail customers.
At the time, India was not yet open to NetBackup-style implementations and believed in a distributed way of backing up: tools like native tar for Unix, NTBackup for Windows-based servers and applications, and cold backups at the database level for both Oracle and SQL Server. But growing technology and business needs, together with audit requirements to safeguard data for 7 years, forced management to opt for a centralized, automatically managed backup system. One of the favorites of that era was Symantec, and management, without a second thought, opted for that solution across all 3 datacenters.
There were various challenges with the native OS backup tools that pushed us toward a centralized backup management tool, in our case Symantec:
Site-specific backups, driven mainly by available resources rather than by application or business requirements
No tracking of media, even though the banking applications required 7 years of retention and traceability for audits
Long turnaround times to identify the right media for restore requests
No way to verify that a native-tool backup had succeeded without performing a tape-read activity, which was time-consuming and resource-intensive
Taking all of this into consideration, management evaluated multiple backup products: Veritas Backup Exec, NetBackup DataCenter 4.5, and other competing enterprise software. Comparisons were based on ease of migration from native to tool-based backups, future growth, and application compatibility. Veritas NetBackup was the clear leader in terms of its vision on both technology and customer reach, and its capability to handle our needs.
We had zero experience working with NetBackup. To overcome this, the vendor Veritas (now known as Symantec) provided classroom sessions. The GUI was user-friendly, and the technical concepts and policy management were easy to pick up; we ensured that the core banking applications were backed up with immediate effect.
Centralized management and ease of operations were the key considerations at the time, with minimal knowledge of what the future would hold.
Moving to tool-based backup instead of natively backing up each server to its own internal drive
Consolidating backups onto 3 master and 3 media servers across the sites, thus retiring 200+ internal DAT drives
Migrating media from 8 GB and 16 GB tapes to the DLT media type
Ensuring media were safeguarded for 7 years as per business needs, which was one of the key aspects of the solution
Managing backups centrally, thus reducing the staffing requirement across datacenters and for each of the servers
Continuous Data Protection
System recovery within 15 minutes
Management reporting, so as to give the business comfort that the data was safeguarded
Once the product was decided, it was a straightforward migration, since the data was already being backed up through native tools. We planned to retain the tapes and internal DAT drives for all servers for the first 3 months, until we had a full backup in NetBackup for each of the servers across locations.
For implementation we made a list of servers with OS, sizes, and application and database details. Once everything was populated, we planned to proceed in phases: first build the NetBackup master server, then each week move a set of clients from native backup to NetBackup before firing a full backup.
Since the product was new to us and we were learning on the job, we hit one of our main issues: one day all jobs started failing with error code 84 (media write error). We went into a panic, as this was happening on the media server that backed up the core banking application. I still remember the CTO of IT being on a call with us, raising a business-critical escalation asking why the backups were failing with this status code. We raised a support call, and after a very long diagnosis (in those days), support identified that the media had simply become full, hence the error. This taught us the process of creating a scratch media pool, so that NetBackup could take media from it and perform the backups, and the lesson of keeping spare tapes managed in the scratch pool. Funny when I look back at it now.
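The scratch-pool fix can be sketched as a toy model (this is illustrative Python, not actual NetBackup code; the pool and tape names are made up): when a volume pool runs out of writable media, the media manager borrows a tape from a shared scratch pool instead of failing the job.

```python
# Toy model of scratch-pool media allocation, in the spirit of
# NetBackup volume pools. Pool and tape names are illustrative.

class MediaManager:
    def __init__(self):
        # pool name -> list of tape IDs with free space
        self.pools = {
            "CoreBanking": [],                      # dedicated pool, currently full
            "scratch_pool": ["DLT001", "DLT002"],   # shared spare tapes
        }

    def allocate(self, pool):
        """Return a writable tape for `pool`, borrowing from scratch if empty."""
        if self.pools.get(pool):
            return self.pools[pool].pop()
        scratch = self.pools.get("scratch_pool", [])
        if scratch:
            # Tape is reassigned from the scratch pool to the requesting pool,
            # so the backup proceeds instead of failing for lack of media.
            return scratch.pop()
        raise RuntimeError("no writable media available")

mm = MediaManager()
print(mm.allocate("CoreBanking"))  # borrows DLT002 from the scratch pool
```

Without the scratch pool (the situation we started in), the dedicated pool filling up would have raised the error immediately, which is essentially what our failing jobs were reporting.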
From my experience, planning is very important for any migration.
Collect customer environment data and analyze.
Ensure sufficient DLT tapes are available across sites.
Identify the servers with huge database sizes and make them part of the last phase of migration.
Divide the complete project into phases; be ready for initial troubles and allocate extra days for learning the product and troubleshooting.
Run a pilot test before moving to production and verify the functionality, as product performance varies with the environment; this gives you a chance to fine-tune it.
Take maximum help from support; support plays a big role in smooth transitions.
The NetBackup implementation eased most of the limitations we had with the native backup tools:
Centralized backup management, giving the servers spread across locations a single view for ease of administration and operations
Backup retention implemented as per banking needs
With retention customized and standardized, the restore failure rate dropped drastically, as data was consolidated onto a limited set of media (DLT tapes)
Controlled space utilization, thanks to retention settings and the backup types used (differential and cumulative incrementals, DINC/CINC)
Ability to distribute software and software updates from a centralized host to servers on all platforms as they joined the environment
Ease of use and troubleshooting
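The DINC/CINC distinction mentioned above can be illustrated with a toy sketch (illustrative Python, not NetBackup code; file names and timestamps are made up): a differential incremental copies files changed since the last backup of any kind, while a cumulative incremental copies files changed since the last full backup.

```python
# Toy illustration of differential (DINC) vs cumulative (CINC) incrementals.
# Files map to last-modified timestamps; values are illustrative.

def differential_inc(files, last_backup_time):
    """DINC: back up files modified since the last backup of ANY type."""
    return sorted(f for f, mtime in files.items() if mtime > last_backup_time)

def cumulative_inc(files, last_full_time):
    """CINC: back up files modified since the last FULL backup."""
    return sorted(f for f, mtime in files.items() if mtime > last_full_time)

files = {"ledger.db": 50, "rates.cfg": 20, "audit.log": 35}
last_full, last_any = 10, 30  # full backup at t=10, latest incremental at t=30

print(differential_inc(files, last_any))   # ['audit.log', 'ledger.db']
print(cumulative_inc(files, last_full))    # ['audit.log', 'ledger.db', 'rates.cfg']
```

Differential incrementals keep each backup small but need the full plus every subsequent incremental to restore; cumulative incrementals use more tape per run but restore from just the full plus the latest cumulative, which is how the space-versus-restore-time trade-off was tuned.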
Looking back, the implementation was done on the Veritas DataCenter version, and I managed the migration and ongoing operations on the tool for close to 2 years, after which I left the organization for better prospects. That implementation gave me huge learnings on both the tool and the technical aspects of the product. I hope you liked the experience I shared; it is carved into me as an unforgettable experience I will cherish for a long time.
This article is a generic account and is not intended to misrepresent any particular brand or its features. The author has not permitted any company/entity/person/organization to use, copy, print, or disseminate the information presented in this article.