11-05-2014 04:12 PM
We are getting ready to do a DR test. Our production NetBackup master server is running 7.5.0.6 on Red Hat Linux. The catalog and all data tapes will be shipped to our DR site, where we will recover the catalog to a Solaris 10 machine that has NetBackup 7.5.0.6 installed, and we will run restores from the Solaris box to hosts on the DR network.
Will this work? Do I have to do anything special before or after the catalog recovery to make all of my images usable, since the servers run different operating systems, or should this just be a straightforward catalog recovery?
11-05-2014 08:06 PM
Hello,
Please see the process below for migrating between hardware. This is usually not done as a DR process, but technically it's going to be the same.
http://www.symantec.com/docs/TECH77448
11-05-2014 09:34 PM
I agree with the TN that Riaan posted above.
Just take note of requirements such as the exact same NBU version and patch level, as well as the same installation path. The default installation path on Solaris is /opt, while on Linux the default is /usr.
11-06-2014 05:58 AM
Thank you. The funny thing about this setup is that we have done the DR several times. When we upgraded the production Linux server from 7.1.0.3 to 7.5.0.6 we had no issues. We were notified of a DR a month later, I upgraded the DR server (Solaris) from 7.1.0.3 to 7.5.0.6, and everything worked fine. The catalog migration went flawlessly, I could see all my images, and the DR was a success.
Fast forward 8 months: we had another DR exercise, and the catalog migration finished successfully, but I couldn't see any of my images. I ended up having to import tapes manually in order to get the DR exercise to complete successfully.
Unfortunately I am only granted access to the DR site for three days at a time during the DR exercise, so I can't do a whole lot of troubleshooting.
Thank you for the link. I'm wondering if the reason I couldn't see any images in the catalog the last time I did the catalog migration was that I didn't perform this step:
"If the operating system has changed (for example from Solaris to AIX) run the command "nbemmcmd -updatehost -machinename <master server name> -machinetype master -operatingsystem <new O/S>". Running this command updates the operating system field in the EMM master server record to reflect the new operating system. A list of valid operating systems can be found in the on-line help for the nbemmcmd -updatehost command."
If that is the case, it raises the question: why did it work fine the first time, right after I upgraded the Solaris server?
Anyway I appreciate the help. Thanks Again!
11-06-2014 06:08 AM
Hi,
Please elaborate on your "migration process". You said, "Fast forward 8 months: we had another DR exercise, and the catalog migration finished successfully, but I couldn't see any of my images."
The catalog format changed from 7.1 to 7.5, so a simple copy of the /db/images folder won't work anymore.
11-06-2014 06:34 AM
The migration process I used was basically uploading the Production catalog DR image file to the DR server and restoring the catalog from tape.
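For readers unfamiliar with that flow, it roughly corresponds to the sketch below. The path to the DR file is illustrative; the recovery wizard prompts for it interactively.

```shell
# On the DR master: make the production DR image file available
# (e.g. copied to a local path), then launch the command-line catalog
# recovery wizard, which prompts for the DR file and the catalog
# backup media.
/usr/openv/netbackup/bin/admincmd/bprecover -wizard
```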
11-06-2014 07:47 AM
Ok, there should be no issue with that.
11-06-2014 10:24 AM
The catalog restore with NBU 7.1 worked fine as far as the image database was concerned, because all parts of the image catalog were under the images folder.
As of NBU 7.5, the image headers were moved to the EMM database. So, without changing the OS type after catalog recovery, the EMM database would not be in a usable state. This is why you had a problem finding images: the headers in the EMM db were inaccessible.
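A quick way to confirm this symptom after a recovery is to query the image catalog directly. The time window below is illustrative, and the path assumes a standard install:

```shell
# List backup images from the last 30 days (720 hours). If the EMM
# record still carries the old operating system, this can come back
# empty even though the images folder itself was restored from tape.
/usr/openv/netbackup/bin/admincmd/bpimagelist -hoursago 720
```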
11-06-2014 11:12 AM
I probably don't need to bring this up, but in case anyone else reads this thread:
From a DR standpoint, I would highly recommend that your DR hardware/OS/etc. exactly duplicate your production hardware. In a true disaster situation, the last thing you would want is to encounter the problem you just did, where you were unable to recover the production data immediately. Your DR tests are a great way to ensure recovery is still possible, but any delay in being able to recover the rest of your environment could be a problem for management. By duplicating the important parts of your backup and recovery infrastructures, you reduce the recovery time for the rest of your systems.
11-06-2014 04:00 PM
I agree 100%, Ron. The only issue here is that the customer these tests are done for owns all of the DR hardware and does not want to upgrade it. We've been trying to convince them, and we are always told that the DR environment is "going away soon".
I won't have access to the DR environment until next Wednesday, so I can't yet see whether that link and changing the OS with nbemmcmd resolve the problem. When I get to try it, I will mark a solution. Thank you, everybody!