Full Linux and Windows restore without BMR option

Hello,

I am not sure if I need to open two separate discussions for this or not. I have doubts about both.

 

WINDOWS

I was going through the following tech note:

http://www.symantec.com/business/support/index?page=content&id=TECH56473

It says to restore everything from the master server in step A (all the drives that contain anything related to the program files). My doubt is: won't the restore stop once it starts overwriting files that belong to the NetBackup client? I am asking this after what happened to me with the Linux restore explained below.

 

UNIX

What needs to be done if we want to restore a full Linux system, including everything? The system is live now. I tried to restore everything on one running Linux system. The restore stopped in between, and then I lost SSH access to the host; it messed up the system so badly that we are still trying to bring it back up.

I know one way is to restore everything to a separate disk and then boot the system from that separate disk, but what if I want to restore everything to the original location, not to a separate disk? Is that possible?

Let me know if I have not explained my problem well enough.

Thanks in advance!

1 Solution

Accepted Solutions
Accepted Solution!


If you can use BMR for this, it is all the better. But if you are not set up for a BMR restore and need to do a full system recovery, then the TECH article for Windows will serve you well. It has been used by many and it is a working solution. The behavior of the Windows kernel during an overwrite differs significantly from a Linux/Unix restore with overwrite. Windows recognizes when a file has already been loaded into memory and will not disturb the running system; it will do the restore and flag the file. In the details of the restore job you will see messages stating, in effect, that the file will only become active upon a reboot.

For Unix/Linux, this is definitely not the case. An overwrite of a critical system file, such as libc, will crash the system with a kernel panic. It is not a good thing. Recovery of such clients without the use of BMR requires a few things to be done as part of the process. This would make a nice TECH article for me to write. Anyway, here are the needed steps:

1. All system files need to be restored to an alternate disk. As such, you will need at least two disks for this to work. One disk will be the working OS disk with the needed NBU client software installed. It will need to have the credentials of the restoring client, as seen by the NBU Master and any Media Server in use. The second disk will be the target of the restore actions.

2. The source client system can be as minimal as you care to have it. It needs a working OS at the same release as the backup client image. Kernel versions do not have to match exactly, but it helps. The host name and IP address information needs to resolve properly on the NBU servers. If the source machine of the backup needs to remain on the network during the recovery phase, you will need to "spoof" the Master Server by assigning an unused IP address to the host name for the duration of the restore. Add a temporary entry in the /etc/hosts file that has the needed host-name-to-IP-address resolution. Clean this up after recovery completes.
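Assuming, for example, that the client is named restoreclient and 192.0.2.50 is a free address on the network (both hypothetical values), the temporary entry on the recovery host would look like:

```
# /etc/hosts -- temporary entry for the duration of the restore;
# remove this line again once recovery completes.
192.0.2.50    restoreclient
```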

3. For the sake of an example, let us assume that the two disks have device paths /dev/sda and /dev/sdb. /dev/sda is the disk of the running recovery server; /dev/sdb is the target of the image restore.

4. On the target /dev/sdb disk, allocate all of the partitions needed for the backup image. The partitions need to be as large as, or larger than, the amount of data that will be recovered into them. They do not have to match the original server's partition allocations.

5. Create the appropriate file system on each partition. 
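The two steps above can be sketched as commands. A cautious approach is to write them to a script first so they can be reviewed before touching the disk; the partition-to-file-system mapping below follows the /dev/sdb example layout used in the mount commands further down, and mkfs.ext3/mkswap are assumptions based on the ext3 file systems in that example:

```shell
# Write the disk-prep commands to a script for review before running.
# The /dev/sdb partition numbering is the example layout, not a given.
cat > /tmp/prep_target_disk.sh <<'EOF'
#!/bin/sh
mkfs.ext3 /dev/sdb1      # will hold /boot
mkswap    /dev/sdb2      # swap for the recovered client
mkfs.ext3 /dev/sdb3      # will hold /  (root)
mkfs.ext3 /dev/sdb4      # will hold /usr
mkfs.ext3 /dev/sdb5      # will hold /export
EOF
chmod +x /tmp/prep_target_disk.sh
```

Reviewing the script before execution is worthwhile here, since a wrong device path would destroy the running recovery OS disk.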

6. Create a mount point directory on the source server. It can have any name you desire. Here is example information:

Assume the following file systems noted in the backup image:

  /boot
  /  (the root file system)
  /usr
  /export

Create the top-level mount point and mount the target root file system on it first, then create the sub-directory mount points inside it, so that they live on the target disk rather than on the running OS disk:

  mkdir /RECOVER
  mount -o rw -t ext3 /dev/sdb3 /RECOVER
  mkdir /RECOVER/boot
  mkdir /RECOVER/usr
  mkdir /RECOVER/export

7. Mount the remaining file system allocations from the /dev/sdb disk on these mount points:

mount -o rw -t ext3 /dev/sdb1 /RECOVER/boot
mount -o rw -t ext3 /dev/sdb4 /RECOVER/usr
mount -o rw -t ext3 /dev/sdb5 /RECOVER/export

/dev/sdb2 will be the swap for the recovered client.

8. Create a rename file to be used for the restores of the file systems. This redirects the files from their original paths to the corresponding /RECOVER paths on the /dev/sdb disk. As an example:

change /usr to /RECOVER/usr
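A complete rename file for the example layout could be built like this (path names follow the example above; as a precaution, the more specific paths are listed before the bare "/" entry so they are not swallowed by the root mapping):

```shell
# Build the bprestore rename file for the example file systems.
# More specific paths come before the bare "/" entry.
cat > /tmp/rename_file <<'EOF'
change /boot to /RECOVER/boot
change /usr to /RECOVER/usr
change /export to /RECOVER/export
change / to /RECOVER
EOF
```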
 

9. Initiate the restore to the client using the "bprestore" command. Use the "-R" option flag to designate the rename file. You can restore file systems individually from the command line, or use a list file to get them all included in one action; the "-f" flag is used for that. I also recommend the use of the "-w" flag so that the command waits for the restore to complete.

See the NBU Commands reference for more information on the named options above.
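Put together, the restore step might look like the sketch below. The client name, list-file path, and log path are hypothetical; writing the command to a script first allows it to be reviewed before running:

```shell
# Review before running: restores the paths listed in /tmp/restore_list
# into the /RECOVER tree via the rename file (-R), waiting for
# completion (-w) and logging progress (-L).
cat > /tmp/run_restore.sh <<'EOF'
#!/bin/sh
bprestore -w -C restoreclient \
          -R /tmp/rename_file \
          -f /tmp/restore_list \
          -L /tmp/restore.log
EOF
chmod +x /tmp/run_restore.sh
```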

10. Once all of the files have been restored, you will need to install a boot image/GRUB loader onto the recovery disk to make it bootable. Make sure that the mount table (/etc/fstab) is correct on the recovery disk.
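Assuming the example partition layout, and assuming the recovered disk enumerates as /dev/sda once the original disk is removed in step 11, the mount table on the recovered disk (/RECOVER/etc/fstab) would look something like:

```
/dev/sda3   /         ext3   defaults   1 1
/dev/sda1   /boot     ext3   defaults   1 2
/dev/sda4   /usr      ext3   defaults   1 2
/dev/sda5   /export   ext3   defaults   1 2
/dev/sda2   swap      swap   defaults   0 0
```

Note that the device names here are what the client will see after the original /dev/sda disk has been pulled; if the recovered disk ends up on a different controller slot, adjust accordingly.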

11. Shut down the recovered client and remove the original source (/dev/sda) disk.

12. Reboot using the newly recovered disk as the boot disk.


If all went well, it should come up normally. Log in and verify settings. Make changes as desired.


8 Replies


TECH56473 is the correct one to follow for Windows.
This doc recommends installing NBU to another path. This will preserve logs as well as prevent the binaries being overwritten during the restore.

The NBU Troubleshooting Guide says that OS partitions/filesystems should be restored to a different physical drive.
This will prevent OS and NBU binaries from being overwritten during restore.
See 'Disaster Recovery' chapter in NetBackup Troubleshooting Guide  

You cannot restore Linux to the original location - it will destroy the OS and the NBU installation.


Hello Marianne,

It is kind of conflicting. The tech note says that first we need to restore everything related to the OS, and then the System State, to the original hardware only.

The Troubleshooting Guide you pointed to says that it needs to be restored to a different location.

 

So, for Linux, do we need to use the BMR option if we want to restore the full system?


The TN (TECH56473) is for Windows.

The section in Troubleshooting Guide that I have referred to is for Unix/Linux system disk restore.

Best to use BMR for both - Windows and Unix/Linux.



IMO, it has to be BMR - it's designed to do what you want.  Any tech notes that cover laying an OS on top of, or to the side of, a running OS (and all subsequent steps) are (notwithstanding the skill and experience of others), IMO, stabs in the dark.  I know there are some quality tech notes out there that have taken a lot of sweat and tears to lay down - but they all share one common feature: some kind of caveat implying that no responsibility is taken.

Ok, so there may be bugs in BMR yet to be discovered, but I'd rather use a product that is supposed to 'do what it says on the tin', as opposed to a tech note where the implication is 'it may work for you'.  BMR isn't difficult to set up, or configure, or test - I wouldn't waste time on anything else.  But then that's easy for me to say when I don't currently have a broken system to recover.

Hey - I even once reviewed and compared in thorough detail the three tech notes I could find on complete Windows system recovery without using BMR - and quickly discovered that one of them was only slightly different, which left two - and there were several conflicts between them even when referring to the same OS version.  It's actually frightening to be relying on tech notes that 'may work' when you need to recover business-critical data.  Stick with BMR.  If you do your own comparison of the tech notes for Windows non-BMR recovery, and take some time to lay the two next to each other and match up the sections, you will find that each TN has bits missing from the other, plus several conflicts - so your chances of success are also limited by which TN you follow, and/or whether you hop between the two TNs.  A rocky path, and you're already in a stumbling situation.

To be fair, I have worked with people who swore blind that it was OK to restore RHEL over a running RHEL instance - but I never saw them get it to work.  That told me something at least.


Jamie,

Thanks for the explanation. When we tried the restore, libc got replaced, and then we had to boot using a live CD and copy libc from another working host; only then could we boot the host normally. Good to know that this was expected. Now I am clear on why it worked for Windows and why it did not work on Linux.

We use the same recovery method for our Linux DR and it works. But this time it was in production, and the user didn't want to add a new disk for some unknown reason. Anyway, we are good now - thanks for your support.


Sdo,

Lesson learned. I will start re-configuring policies to collect BMR info during backups. :)


Re: If you can use BMR for this,

For what it is worth, we just recovered a Linux system using this method, and it worked pretty well; however, we did come across one issue.

The system we were restoring was the inactive node of an application cluster.  In order for me to get logged into the altboot disk that was set up on the failed server, the Linux administrator had copied the /etc/passwd file from the active node of the cluster to the system we were working on.  When restoring the home directories, we noticed that the UID of every restored home directory was being inherited from the /etc/passwd stored on the working OS (altboot disk).  So, say John Smith's UID on the active node was 1 and his UID on the failed node was 2: rather than the directory being restored with a UID of 2, it was coming up as 1.

We ended up removing the current /etc/passwd file; he moved the restored /etc/passwd file (restored to /mnt/etc) onto the working OS, and then we restored the home directory again, and the subdirectories came up with the correct UIDs.

We're still noticing some other strange things though that we are currently working out.