BMR for Solaris 11 in Virtual machine LDOM failed

enguessan
Level 4
Partner Accredited

Hello all,

I'm trying to run BMR on an Oracle VM LDOM running Solaris 11.

I get the following error saying that there is no /cdrom.

On the LDOM, the cdrom is mounted as a disk on /.cdrom.

Below is the message.

Can someone help me resolve this issue?

Enter the NetBackup client name : client
Enter the client's netmask (dotted decimal form) : 192.168.200.8

Enter the client's netmask (dotted decimal form) : 255.255.255.0
Enter the NetBackup master server name : master
Enter the NetBackup master server IP Address (dotted decimal form) : 192.168.200.1
Enter the client's default gateway (dotted decimal form) : 192.168.200.250
add net default: gateway 192.168.2.250
add persistent net default: gateway 192.168.2.250
Hostname: client
Apr 21 17:42:33 netbackup[1353]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:33 netbackup[1353]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1420]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1420]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1423]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1423]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1426]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1426]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1429]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:35 netbackup[1429]: OID:128:V-1-6 Unable to load configuration settings.
Reconfiguring /dev and /devices for Bare Metal Restore
Apr 21 17:42:37 netbackup[1439]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:37 netbackup[1439]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:37 netbackup[1442]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:37 netbackup[1442]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:38 netbackup[1446]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:38 netbackup[1446]: OID:128:V-1-6 Unable to load configuration settings.
ERROR: Shared Resource Tree /cdrom/ not mounted.
ERROR: Please boot with correct Media Boot CD or network boot image.
Apr 21 17:42:38 netbackup[1450]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:38 netbackup[1450]: OID:128:V-1-6 Unable to load configuration settings.
Apr 21 17:42:38 netbackup[1456]: OID:122:V-1-6 Unable to load configuration settings.
Apr 21 17:42:38 netbackup[1456]: OID:128:V-1-6 Unable to load configuration settings.

Bare Metal Restore has failed.
Now, you will be dropped to a shell prompt.
You may reboot the system when you have completed diagnosis.

 df -h
Filesystem             Size   Used  Available Capacity  Mounted on
/devices/ramdisk-root:a
                       233M   142M        91M    62%    /
/devices                 0K     0K         0K     0%    /devices
/dev                     0K     0K         0K     0%    /dev
ctfs                     0K     0K         0K     0%    /system/contract
proc                     0K     0K         0K     0%    /proc
mnttab                   0K     0K         0K     0%    /etc/mnttab
swap                   5.3G   2.0M       5.3G     1%    /system/volatile
objfs                    0K     0K         0K     0%    /system/object
sharefs                  0K     0K         0K     0%    /etc/dfs/sharetab
/devices/virtual-devices@100/channel-devices@200/disk@0:a
                       630M   630M         0K   100%    /.cdrom

/dev/lofi/1            1.1G   1.1G         0K   100%    /usr
/dev/lofi/2            105M   105M         0K   100%    /mnt/misc
/mnt/misc/opt          105M   105M         0K   100%    /mnt/misc/opt
swap                   5.3G    16K       5.3G     1%    /root
swap                   5.3G    64K       5.3G     1%    /jack
swap                   5.3G   280K       5.3G     1%    /tmp


sdo
Moderator
Partner    VIP    Certified

NetBackup version?  For each of master, media, and client?

Exact patch/update/service-pack level of Solaris 11 on BMR client?

List *ALL* of the volume managers (VxVM, LVM, others?) and their versions - *AND* - all of the file-system types (VxFS, ZFS, etc.) and their versions - used within the BMR client.

.

No one can begin to help you without exact detail for all of the above.

enguessan
Level 4
Partner Accredited

NetBackup versions:

Master: RHEL 6.5: NBU 7.6.1.1

Media: RHEL 6.5: NBU 7.6.1.1

Client source: LDOM Solaris 11 with ZFS: NBU 7.6.1.1

The restore failed while checking the directory /cdrom, which does not exist.

Events log on the master server:

 EMPTY_TASK_STATE=0
+ export EMPTY_TASK_STATE
+ QUEUED_TASK_STATE=1
+ export QUEUED_TASK_STATE
+ ACTIVE_TASK_STATE=2
+ export ACTIVE_TASK_STATE
+ DONE_TASK_STATE=3
+ export DONE_TASK_STATE
+ WAITING_TASK_STATE=4
+ export WAITING_TASK_STATE
+ EMPTY_OPERATIONAL_STATE=0
+ export EMPTY_OPERATIONAL_STATE
+ READY_OPERATIONAL_STATE=1
+ export READY_OPERATIONAL_STATE
+ INITIALIZING_OPERATIONAL_STATE=2
+ export INITIALIZING_OPERATIONAL_STATE
+ MAPPING_OPERATIONAL_STATE=3
+ export MAPPING_OPERATIONAL_STATE
+ MAPPED_OPERATIONAL_STATE=4
+ export MAPPED_OPERATIONAL_STATE
+ FORMATING_OPERATIONAL_STATE=5
+ export FORMATING_OPERATIONAL_STATE
+ RESTORING_OPERATIONAL_STATE=6
+ export RESTORING_OPERATIONAL_STATE
+ FINALIZING_OPERATIONAL_STATE=7
+ export FINALIZING_OPERATIONAL_STATE
+ WAITING_FOR_REBOOT_OPERATIONAL_STATE=8
+ export WAITING_FOR_REBOOT_OPERATIONAL_STATE
+ PREFORMAT_EP_OPERATIONAL_STATE=9
+ export PREFORMAT_EP_OPERATIONAL_STATE
+ PRERESTORE_EP_OPERATIONAL_STATE=10
+ export PRERESTORE_EP_OPERATIONAL_STATE
+ POSTRESTORE_EP_OPERATIONAL_STATE=11
+ export POSTRESTORE_EP_OPERATIONAL_STATE
+ FIRSTBOOT_EP_OPERATIONAL_STATE=12
+ export FIRSTBOOT_EP_OPERATIONAL_STATE
+ DISCOVERY_EP_OPERATIONAL_STATE=13
+ export DISCOVERY_EP_OPERATIONAL_STATE
+ FAILED_OPERATIONAL_STATE=14
+ export FAILED_OPERATIONAL_STATE
+ biName=srtcd
+ bmrDir=/usr/openv/netbackup/baremetal/
+ bootServerAddress=192.168.2.203
+ bootServerName=bbgcidev
+ clAddressHex=C0A80210
+ clName=bbgcidev
+ clNodeName=bbgcidev
+ clOs=sol
+ clOsLevel=5.11
+ configName=restore
+ configType=3
+ importNonRootVgs=NO
+ onEpError=1
+ restoreNonRootVgs=YES
+ runEp=NO
+ runMode=RESTORE
+ serverAddress=192.168.1.167
+ serverName=srv-nbumaster
+ srtFakeRoot=Solaris_11/Tools/Boot
+ srtName=srtcd
+ listOfExt3Devices=''
+ tune2fsResult=''
+ export biName
+ export bmrDir
+ export bootServerAddress
+ export bootServerName
+ export clAddressHex
+ export clName
+ export clNodeName
+ export clOs
+ export clOsLevel
+ export configName
+ export configType
+ export importNonRootVgs
+ export onEpError
+ export restoreNonRootVgs
+ export runEp
+ export runMode
+ export serverAddress
+ export serverName
+ export srtFakeRoot
+ export srtName
+ export listOfExt3Devices
+ export tune2fsResult
+ [ -f /utilityFunctions ]
+ utilityFunctions=/utilityFunctions
+ export utilityFunctions
+ . /utilityFunctions
+ DebugUtilityFuncs=''
+ export DebugUtilityFuncs
+ defaultNetmask=255.255.255.0
+ export defaultNetmask
+ BMRC=/usr/openv/netbackup/bin/bmrc
+ export BMRC
+ BMRSAVECFG=/usr/openv/netbackup/bin/bmrsavecfg
+ export BMRSAVECFG
+ : /dev/console
+ export CONSOLE
+ echo '\c'
+ Echo_C='\c'
+ Echo_N=''
+ [ -z /utilityFunctions ]
+ exportFunctions='EpBail ExternalProcedure BmrcEpPull EpError EpInfo SetClientState EchoToConsole RunDiscovery SetVxSSDirs DoAuth AskAuth TestAuth ConfirmAuthFailure'
+ typeset -fx EpBail ExternalProcedure BmrcEpPull EpError EpInfo SetClientState EchoToConsole RunDiscovery SetVxSSDirs DoAuth AskAuth TestAuth ConfirmAuthFailure
+ EMPTY_TASK_STATE=0
+ export EMPTY_TASK_STATE
+ QUEUED_TASK_STATE=1
+ export QUEUED_TASK_STATE
+ ACTIVE_TASK_STATE=2
+ export ACTIVE_TASK_STATE
+ DONE_TASK_STATE=3
+ export DONE_TASK_STATE
+ WAITING_TASK_STATE=4
+ export WAITING_TASK_STATE
+ EMPTY_OPERATIONAL_STATE=0
+ export EMPTY_OPERATIONAL_STATE
+ READY_OPERATIONAL_STATE=1
+ export READY_OPERATIONAL_STATE
+ INITIALIZING_OPERATIONAL_STATE=2
+ export INITIALIZING_OPERATIONAL_STATE
+ MAPPING_OPERATIONAL_STATE=3
+ export MAPPING_OPERATIONAL_STATE
+ MAPPED_OPERATIONAL_STATE=4
+ export MAPPED_OPERATIONAL_STATE
+ FORMATING_OPERATIONAL_STATE=5
+ export FORMATING_OPERATIONAL_STATE
+ RESTORING_OPERATIONAL_STATE=6
+ export RESTORING_OPERATIONAL_STATE
+ FINALIZING_OPERATIONAL_STATE=7
+ export FINALIZING_OPERATIONAL_STATE
+ WAITING_FOR_REBOOT_OPERATIONAL_STATE=8
+ export WAITING_FOR_REBOOT_OPERATIONAL_STATE
+ PREFORMAT_EP_OPERATIONAL_STATE=9
+ export PREFORMAT_EP_OPERATIONAL_STATE
+ PRERESTORE_EP_OPERATIONAL_STATE=10
+ export PRERESTORE_EP_OPERATIONAL_STATE
+ POSTRESTORE_EP_OPERATIONAL_STATE=11
+ export POSTRESTORE_EP_OPERATIONAL_STATE
+ FIRSTBOOT_EP_OPERATIONAL_STATE=12
+ export FIRSTBOOT_EP_OPERATIONAL_STATE
+ DISCOVERY_EP_OPERATIONAL_STATE=13
+ export DISCOVERY_EP_OPERATIONAL_STATE
+ FAILED_OPERATIONAL_STATE=14
+ export FAILED_OPERATIONAL_STATE
+ umask 002
+ MNT=/tmp/mnt
+ export MNT
+ CONSOLE=/dev/console
+ export CONSOLE
+ BMRSHARE=/tmp/bmr
+ export BMRSHARE
+ BMRBINDIR=/tmp/bmr/sol/bin
+ export BMRBINDIR
+ PATH=/usr/bin:/usr/sbin:/bin:/etc:/tmp/bmr/sol/bin
+ export PATH
+ /sbin/uname -m
+ PLATFORM=sun4v
+ export PLATFORM
+ I386=i386
+ export I386
+ SUN4U=sun4u
+ export SUN4U
+ SUN4US=sun4us
+ export SUN4US
+ SUN4V=sun4v
+ export SUN4V
+ cd /
+ [[ ! -d /cdrom/ ]]
+ print 'ERROR: Shared Resource Tree /cdrom/ not mounted.'
+ tee -a /dev/console
ERROR: Shared Resource Tree /cdrom/ not mounted.
+ Bail 'ERROR: Please boot with correct Media Boot CD or network boot image.'
+ test -n ''
+ EchoToConsole 'ERROR: Please boot with correct Media Boot CD or network boot image.'
+ [ 'XERROR: Please boot with correct Media Boot CD or network boot image.' '=' X-n ]
+ _c=''

+ _n=''
+ [ -t 1 ]
+ tee -a /dev/console
+ echo 'ERROR: Please boot with correct Media Boot CD or network boot image.'
ERROR: Please boot with correct Media Boot CD or network boot image.
+ unset _c _n
+ SetClientState 3 14
+ test -n ''
+ [ 2 -ne 2 ]
+ t=restoretask
+ /usr/openv/netbackup/bin/bmrc -op change -res restoretask -client bbgcidev -state 3 -progress 14
+ return 0
+ EchoToConsole $'\nBare Metal Restore has failed.\nNow, you will be dropped to a shell prompt.\nYou may reboot the system when you have completed diagnosis.\n'
+ [ $'X\nBare Metal Restore has failed.\nNow, you will be dropped to a shell prompt.\nYou may reboot the system when you have completed diagnosis.\n' '=' X-n ]
+ _c=''
+ _n=''
+ [ -t 1 ]
+ tee -a /dev/console
+ echo $'\nBare Metal Restore has failed.\nNow, you will be dropped to a shell prompt.\nYou may reboot the system when you have completed diagnosis.\n'

Bare Metal Restore has failed.
Now, you will be dropped to a shell prompt.
You may reboot the system when you have completed diagnosis.

+ unset _c _n
+ /bin/sh
+ 0< /dev/console 1> /dev/console 2>& 1
Restoration log termination time: 2015/04/21 19:57:58

sdo
Moderator
Partner    VIP    Certified

Solaris 11 - no patch level?

Sparc or Intel?

Which version of ZFS?

sdo
Moderator
Partner    VIP    Certified

And absolutely and definitely no other volume managers and no other file system types present within the original backup client?

Marianne
Level 6
Partner    VIP    Accredited Certified

Pity you did not check compatibility before you started:

NetBackup Support for Oracle Solaris Virtualization
http://www.symantec.com/docs/TECH162994 

Extract:

All NetBackup components supported with Solaris 11 SPARC physical servers are supported in a Solaris 11 LDoms Control Domain and I/O Domain with the exception of Bare Metal Restore (server or client). Guest domain support is limited to standard client, database agents, master server and disk media server.
 

enguessan
Level 4
Partner Accredited

Solaris 11.1

Sparc

This system is currently running ZFS pool version 34.

sdo
Moderator
Partner    VIP    Certified

Marianne beat me to it - my next step was to check compatibility matrices...

.

And finally, to bring the lesson home regarding checking compatibility lists, you need to answer this - if not for us, then at least for yourself...

Q) And absolutely and definitely no other volume managers and no other file system types present within the original backup client?

.

...and so determine all volume managers (and versions) and all file systems (and versions) and check the compatibility matrices... because... BMR usually has very specific volume manager and file system compatibility requirements and exclusions.

enguessan
Level 4
Partner Accredited

Hi Marianne,

I read that.

But in that document, Symantec says that it can work.

I have done that on an LDOM Solaris 10 under the same conditions.

But now, on Solaris 11.1, the boot ISO CDROM is mounted at /.cdrom, and the BMR process checks for /cdrom.

BR

enguessan
Level 4
Partner Accredited

The only volume manager in use is ZFS.

There are also the default Solaris slices.

Is it possible to modify the script executed by BMR so that it accepts /.cdrom, or to create a link between /.cdrom and /cdrom?
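For example, something like this (untested, and it assumes the ramdisk root is writable from the shell prompt the failed restore drops into):

# point the path the BMR script checks at the mount point Solaris 11 actually uses
ln -s /.cdrom /cdrom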

 

BR

 

sdo
Moderator
Partner    VIP    Certified

Apologies, I'm unable to answer that.  Hopefully one of the BMR gurus will step in...

Marianne
Level 6
Partner    VIP    Accredited Certified

I read that.

But in that document, Symantec says that it can work.

Really?

That is not what I see.

It clearly states that BMR is not supported for Solaris 11 LDoms.

 

Is it possible to modify the script executed by BMR so that it accepts /.cdrom, or to create a link between /.cdrom and /cdrom?

You will need to submit a request to NetBackup Engineering team... 

mnolan
Level 6
Employee Accredited Certified

I believe we have documentation that states you can BMR the entire server, and that restoring the entire server will, by default, restore all of the LDOMs.

 

Jaime_Vazquez
Level 6
Employee

I need clarification of your contention that this worked OK for a Solaris 10 LDOM environment. Did the BMR recovery run to completion without error for the Solaris 10 environment? Does this mean that for the Solaris 10 LDOM partition, the virtual CDROM is mounted at "/cdrom", while on Solaris 11 it is mounted at "/.cdrom"? Is the generated mount point something that is configurable from the definition of the LDOM itself?

 

 

Jaime_Vazquez
Level 6
Employee

OK, for starters, I am going under the assumption that this is a guest domain and as such it is supported as a standard client. But there is further clarification of support in another article that goes directly against what you are trying. This is the article and here are the salient sections:

Statement of Support for NetBackup 7.x in a Virtual Environment (Virtualization Technologies)  
http://www.symantec.com/docs/TECH127089

On page 11, under the section header Bare Metal Restore, it states:

Due to the inherent physical dependencies in the Bare Metal Restore (BMR) option, BMR is not covered by the “General guidelines for support” section. Instead, BMR is explicitly qualified and supported within specific virtual environments, as listed below.

It further states on the same page:

Solaris Logical Domains (LDOMs): The BMR master server is supported in the LDOM Control Domain and on guest domains. The BMR client is supported on an LDOM guest domain that has UFS attached.

See “BMR client support details for Oracle VM Server for SPARC” on page 13.

And going to page 13, the information in Table 7 states:

BMR client support details for Oracle VM Server for SPARC (Solaris 10 LDOM)

The table specifies that:

You can back up and restore a BMR client that is installed on a LDOM guest domain that has UFS attached.

Note that there is no mention of Solaris 11 LDOM on SPARC in the article. And, based on your input, this is a ZFS system.
Of interest to me is that BMR does support Solaris 11 and ZFS for physical server restores (non-LDOM guest domains).

The level of supported functionality you need is Solaris 11 LDOM support, which by its nature would also add ZFS support. That would be an enhancement to the current releases, and there are avenues open for requesting it.

 

 

Jaime_Vazquez
Level 6
Employee

Yes, it is possible to manually edit the restore script for recovery of a Unix/Linux client under very special circumstances. However, the process required is one reserved for support use only, as the modification of the script renders it technically not supported. I do this as part of my diagnostic work for customer situations in order to insert debug instructions as well as to test a work-around. This is done as part of my support activity and under the umbrella of a customer support case. It is not a topic I feel comfortable disclosing in this venue.

enguessan
Level 4
Partner Accredited

Hi Vazquez,

As explained, I successfully did a BMR restore of a Solaris 10 LDOM with NBU 7.5 using UFS and ZFS file systems about 3 years ago.

I noted that on Solaris 10, the SRT is created with a normal Solaris CD. I don't remember whether it contains /cdrom or not.

But for Solaris 11, we use Solaris 11 AI.

The discovery works, but the restore failed on the /cdrom issue.

I know you can help me.

BR


enguessan
Level 4
Partner Accredited

Do you have a step-by-step procedure for how to boot over the LAN with BMR on Solaris 11?

I tried it, but the Prepare To Restore failed with the error "V-126-53 Could not contact the Bare Metal Restore Boot Server daemon (bmrbd)".

I checked and noted that bmrbd was running on the boot server.

 

BR

Jaime_Vazquez
Level 6
Employee

This is a rather old TECH article but it should still be relevant.

 

Methodology used for the network boot processing of a Solaris Bare Metal Restore (BMR) client.
http://www.symantec.com/docs/TECH54765

 

Back to the previous posting. You state that a discovery boot (Prepare To Discover) followed by a CD boot worked for Solaris 11? This is contrary to a failure for a restore. Both discovery and restore have the same basic outline of work. The Prepare To Discover and the Prepare To Restore use the same module and both create a boot script. The only major difference is that the discover process does not look to verify a valid backup image. The mount information is the same for both, as each needs to make use of the same mounted SRT to do the work. It knows the difference between RESTORE and DISCOVER by the "runMode" environment variable setting ("runMode=RESTORE" in the failure case).

For the network boot "Prepare To Restore" issue, you need to check two things:

1. The SRT definition on the Master (in the BMRDB) will show the host name of the Boot Server hosting the SRT and the file system path of the NFS-based SRT on that server. The 'bmrd' process "talks" to the 'bmrbd' process on that server over IP and makes use of the IP address that it "knows" the Boot Server by. Ensure that information is correct and that the Master can contact/connect to the Boot Server.

2. The Boot Server will answer back to the Master to indicate it has an SRT at the noted location and it is ready to work. It will reply to the server it "knows" is the BMR Master, based on the IP address of that host name. Ensure that the value is correct. One quick test to try is to run the command "bmrsrtadm" on the Boot Server. If all is well, the command should display a list of options to run. Use option 7 to list the SRT information it is supposed to be hosting. This should match the listing on the Master Server in the Admin Console (BMR Administration -> Resources -> Shared Resource Trees) or the output of the command "bmrs -o list -res SRT" as run on the Master. A quick sketch of these checks follows.
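A minimal sketch, with placeholder host names, assuming the NetBackup bin directories are already in the PATH on each host:

# On the Master: list the SRTs the BMRDB knows about, with the Boot Server and path for each
bmrs -o list -res SRT

# On both the Master and the Boot Server: confirm each side resolves the other's name and address
bpclntcmd -hn boot-server-name
bpclntcmd -hn master-server-name

# On the Boot Server: the bmrsrtadm menu (option 7) lists the SRTs it believes it is hosting
bmrsrtadm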

You can do some debug log checking by setting "DebugLevel=5" for the "bmrsrtadm" function on the Master (ID=126), retrying the PTR, and, if you get the error, looking at the log with "vxlogview -i 126". On the Boot Server, the "bmrbd" process logs to the 119 ID.
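For example (a sketch; the vxlogcfg line assumes the standard unified-logging syntax and the NetBackup product ID 51216, so verify it against your release):

# On the Master: raise the debug level for originator 126, then retry the Prepare To Restore
vxlogcfg -a -p 51216 -o 126 -s DebugLevel=5
vxlogview -i 126

# On the Boot Server: review the bmrbd unified log (originator 119)
vxlogview -i 119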

The boot image information for a network boot uses NFS to mount the SRT file system on the Boot Server.
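A quick way to sanity-check that export side (a sketch; the host name is a placeholder, and if showmount is not present in the boot image, run 'share' directly on the Boot Server instead):

# From the client or another host: list the Boot Server's NFS exports and confirm the SRT path appears
showmount -e boot-server-name

# On a Solaris Boot Server itself: list what is currently shared
share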