oprd returned abnormal status (96)
Hi, I am not able to get the Media Manager daemons running after bouncing the NetBackup daemons, and I got this error:

# ./vmoprcmd -d
oprd returned abnormal status (96)
IPC Error: Daemon may not be running

Logs of the daemon, reqlib and ltid are below.

ltid:
01:21:23.746 [23332] <4> CheckShutdownWhileInit: Pid=1, Data.Pid=25157, Type=100, Param1=0, Param2=-5056, LongParam=-23490544
01:21:24.281 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-01 is ACTIVE
01:21:24.281 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-02 is ACTIVE
01:21:24.281 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-03 is ACTIVE
01:21:24.281 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-06 is ACTIVE
01:21:24.282 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-07 is ACTIVE
01:21:24.282 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-08 is ACTIVE
01:21:24.282 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-10 is ACTIVE
01:21:24.282 [23332] <16> InitLtidDeviceInfo: Drive DLT8000-12 is ACTIVE
01:21:24.282 [23332] <16> InitLtidDeviceInfo: Drive IBM_ULTRIUM2_Drv07 is ACTIVE

reqlib:
01:00:01.207 [23580] <2> EndpointSelector::select_endpoint: performing call with the only endpt available!(Endpoint_Selector.cpp:431)
01:00:01.220 [23580] <2> EndpointSelector::select_endpoint: performing call with the only endpt available!(Endpoint_Selector.cpp:431)
01:01:40.430 [23805] <4> vmoprcmd: INITIATING
01:01:40.488 [23805] <2> vmoprcmd: argv[0] = vmoprcmd
01:01:40.488 [23805] <2> vmoprcmd: argv[1] = -d
01:01:40.488 [23805] <2> vmdb_start_oprd: received request to start oprd, nosig = yes
01:01:40.527 [23805] <2> vnet_vnetd_service_socket: vnet_vnetd.c.2046: VN_REQUEST_SERVICE_SOCKET: 6 0x00000006
01:01:40.527 [23805] <2> vnet_vnetd_service_socket: vnet_vnetd.c.2060: service: vmd
01:01:40.623 [23805] <2> getrequestack: server response to request: REQUEST ACKNOWLEDGED 650
01:01:40.649 [23805] <2> getrequeststatus: server response: EXIT_STATUS 0
01:01:40.649 [23805] <4> vmdb_start_oprd: vmdb_start_oprd request status: successful (0)
01:02:57.220 [23805] <2> wait_oprd_ready: oprd response: EXIT_STATUS 278
01:02:57.221 [23805] <2> put_string: cannot write data to network: Broken pipe (32)
01:02:57.221 [23805] <16> send_string: unable to send data to socket: Broken pipe (32), stat=-5

daemon:
01:07:42.537 [24144] <4> rdevmi: INITIATING
01:07:42.537 [24144] <2> mm_getnodename: cached_hostname tcppapp001, cached_method 3
01:07:42.583 [24144] <2> mm_ncbp_gethostname: GetNBUName <tcppapp001-bip>
01:07:42.583 [24144] <2> mm_getnodename: (5) hostname tcppapp001-bip (from mm_ncbp_gethostname)
01:07:42.584 [24144] <2> rdevmi: got CONTINUE, connecting to ltid
01:08:02.755 [24098] <16> oprd: device management error: IPC Error: Daemon may not be running
01:08:03.241 [23355] <2> vmd: TCP_NODELAY
01:08:03.242 [23355] <4> peer_hostname: Connection from host tcppvmg265-bip, 172.16.6.7, port 6735
01:08:03.335 [23355] <4> process_request: client requested command=43, version=4
01:08:03.335 [23355] <4> process_request: START_OPRD requested
01:08:03.369 [23355] <4> start_oprd: /usr/openv/volmgr/bin/oprd, pid=24164
01:08:03.638 [23355] <2> vmd: TCP_NODELAY
01:08:03.638 [23355] <4> peer_hostname: Connection from host tcppvmg265-bip, 172.16.6.7, port 18394
01:08:03.685 [23355] <4> process_request: client requested command=43, version=4
01:08:03.685 [23355] <4> process_request: START_OPRD requested

Please help me get rid of this issue and understand what it is about.

Netbackup AIR - restricting REPLICATION jobs
I have a situation whereby the link between two NBU sites / domains has been inactive for some time; the link has now been upgraded and is available again. I suspended secondary operations on the SLPs in the primary site during this period, and as a result now have a couple of thousand images awaiting replication to the secondary site.

Before I re-enable the secondary operations on the SLPs I want to ensure that I don't have hundreds of replication jobs starting at once, as I have seen this in the past and the system becomes unresponsive. How can I limit the number of replication jobs that will be submitted? I want to cap this at a manageable number (say 20 concurrent). I know I can limit the number of streams on the Storage Server / Disk Pool, but I am afraid that the replications will all kick in and the backups will not have any available streams to work with.

Bottom line - I want to continue to allow my backups to run but also have a limited number of replication jobs running concurrently alongside them. Any input appreciated. AJ

new media servers not showing in Storage Unit selection list
Hi, we're on NetBackup 7.5 using Data Domain storage. The master server is Windows 2008. We just had 6 Unix servers installed as media servers; the DD Boost plugin has been installed and the following commands run (we have successfully set up media servers for Data Domain in exactly this way before):

nbdevconfig -creatests -stype DataDomain -storage_server [data_domain_box] -media_server [media_server]
tpconfig -add -storage_server [data_domain_box] -stype DataDomain -sts_user_id ostuser -password password

I can see the new servers registered on Data Domain in the Enterprise Manager under Data Management / DD Boost / Activities - they are listed along with the previously enabled ones. But they are not showing up as available media servers in the Storage Units. I assume that as they have registered with Data Domain, DD Boost must be up and active on the servers. Can anyone advise? How do I check for what's not happening but should be? Thanks

View Volume Groups From CLI
Hello, I am trying to figure out how to view the contents of a particular volume group from the CLI. Alternatively, I would like to view which volume group a particular tape is assigned to. Are either of these possible, or is this only available in the GUI? Thanks, Daniel

Automated OKM key destruction
We are upgrading to LTO5 drives, with keys managed by Oracle Key Manager (OKM). Since each tape will get its own key certificates, destroying those key certificates effectively scratches the tape.

I would like to implement a process so that when I run my vault and it generates a list of scratch tapes to return from Iron Mountain, it takes that list of tapes and uses it as input to destroy the keys - effectively prohibiting me from ever importing the tapes. It would be like formatting every tape as it comes back, but without the drive time.

Does anyone either already have such a script, or see the need for one? Should this be something Oracle and NetBackup team up to provide? NB 7.0.1 on Solaris 10, SL8500, moving to LTO5!
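To frame what such a script might look like: below is a minimal dry-run sketch. It assumes the vault report of returning tapes has been exported to a plain text file with the media ID in the first column, and it uses `okm_destroy_cmd` purely as a placeholder name - OKM's real key-destruction interface would have to be substituted, and nothing here reflects actual OKM CLI syntax.

```python
#!/usr/bin/env python
"""Sketch: turn a vault scratch/return report into per-tape key-destruction calls.

Assumptions (not from the original post):
  - media IDs have been extracted to a text file, one per line, first column;
  - 'okm_destroy_cmd' is a placeholder for whatever CLI or API your OKM
    deployment actually exposes for destroying a tape's keys.
"""
import subprocess
import sys


def read_media_ids(path):
    """Return the non-empty, non-comment media IDs from a report file."""
    ids = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                ids.append(line.split()[0])  # first column is the media ID
    return ids


def destroy_keys(media_ids, dry_run=True):
    """Issue one key-destruction command per tape; only print when dry_run."""
    issued = []
    for mid in media_ids:
        cmd = ["okm_destroy_cmd", "--media-id", mid]  # placeholder CLI
        issued.append(" ".join(cmd))
        if dry_run:
            print("DRY RUN:", " ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return issued


if __name__ == "__main__":
    report = sys.argv[1]
    dry = "--execute" not in sys.argv
    destroy_keys(read_media_ids(report), dry_run=dry)
```

Running it without `--execute` only prints the commands, so the list can be reviewed before any keys are irreversibly destroyed.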
Error 2060022: software error Backing up to Disk
Hello people: here's my situation. We have a Quantum DXi for OST backup to disk. All my clients back up through my master server to the Quantum DXi, but many of those clients fail with error 84. Here is the error from 2 of the clients; the error was the same on the others:

Client 1
2/10/2011 7:36:40 PM - Critical bptm(pid=9304) image write failed: error 2060022: software error
2/10/2011 7:36:40 PM - Critical bptm(pid=9304) sts_close_handle failed: 2060017 system call failed
2/10/2011 7:37:31 PM - Error bptm(pid=9304) cannot write image to disk, Invalid argument
2/10/2011 7:37:41 PM - end writing; write time: 00:39:08 media write error(84)

Client 2
2/10/2011 7:36:24 PM - Critical bptm(pid=8396) image write failed: error 2060022: software error
2/10/2011 7:36:24 PM - Critical bptm(pid=8396) sts_close_handle failed: 2060017 system call failed
2/10/2011 7:36:50 PM - Error bptm(pid=8396) cannot write image to disk, Invalid argument
2/10/2011 7:36:50 PM - Critical bpbrm(pid=5204) from client CLIENTNAME: FTL - tar file write error (5301376)
2/10/2011 7:36:57 PM - end writing; write time: 01:41:26 media write error(84)

Our environment: all my servers run Windows 2003 SP2 STD 32-bit. Some of the clients are virtual (VMware). The master server is an HP G5 (very good specs). The policies don't create any checkpoints on any client, have the Allow Multiple Data Streams option enabled, and are limited to 2 concurrent jobs per policy.

Any help or ideas? Thanks

NetApp NFS and CIFS backup
Hi, a quick overview of how we take NetApp backups, and why we ended up doing it this way.

Initially our plan was to back up NetApp directly to tape by utilizing the NetApp plugin and Replication Director. Backup was very slow, so we ended up with a staging disk to speed up backups. The downside is that Replication Director relies on SLPs to operate, and SLPs do not support a staging disk, so we ended up excluding Replication Director. Later on we moved the backups to an MSDP pool.

The way we back up CIFS shares today is by mounting the CIFS share on a Windows client and setting up Windows\File policies on the master server, using the UNC path as the backup selection for these CIFS shares. Our Linux team has raised a concern that this method would require a lot of time and manual steps to achieve something similar for NFS shares mounted on a Linux server.

My question is as simple as follows: is there a way to back up NetApp volumes without involving a client which has the CIFS or NFS shares mounted? Would I be able to achieve this by implementing Replication Director? My understanding is that NDMP is a way around that, but the problem is that it is very slow, and Accelerator is only available in NetBackup version 7.7. Is there any other way?

Requirements for my Master Server (RAM memory and processors)
Hi all. I have a big question. I am implementing a new platform in which there are 15 clients (Sun X4270). The questions are: how much memory (RAM) should the master server that manages these clients with NetBackup have? Does the RAM of the master server depend on the clients? And regarding processors, do they also depend on the clients? This is obviously already taking into account the requirements of the installed applications. Thank you and good day.

kernel: st2: Error 18 (driver bt 0x0, host bt 0x0)
Hello, I didn't find any solution on the forums, so I would like to open a topic about sharing tape drives between 2 media servers. I understand that a reservation conflict is normal behaviour, but a few times per day, or at least once daily, we get Error 18 or "Control daemon connect or protocol error". On the SAN switch, failover is set for the tape drives. The media servers are RedHat with the newest kernel, and NBU is 7.5.0.7. The tape drives are never down and are working properly. Does anybody have a suggestion?

Mar 6 05:00:05 gcs-amsdc1-nbmed02 kernel: st 1:0:1:0: reservation conflict
Mar 6 05:00:05 gcs-amsdc1-nbmed02 kernel: st 1:0:2:0: reservation conflict
Mar 6 05:00:05 gcs-amsdc1-nbmed02 kernel: st2: Error 18 (driver bt 0x0, host bt 0x0).
Mar 6 05:00:05 gcs-amsdc1-nbmed02 kernel: st 1:0:3:0: reservation conflict
Mar 6 05:00:05 gcs-amsdc1-nbmed02 kernel: st 1:0:3:0: reservation conflict

# more messages | grep "Error 18"
Mar 2 05:00:04 gcs-amsdc1-nbmed02 kernel: st3: Error 18 (driver bt 0x0, host bt 0x0).
Mar 4 05:00:05 gcs-amsdc1-nbmed02 kernel: st0: Error 18 (driver bt 0x0, host bt 0x0).
Mar 4 05:00:05 gcs-amsdc1-nbmed02 kernel: st2: Error 18 (driver bt 0x0, host bt 0x0).
Mar 5 05:00:05 gcs-amsdc1-nbmed02 kernel: st0: Error 18 (driver bt 0x0, host bt 0x0).
Mar 5 05:00:05 gcs-amsdc1-nbmed02 kernel: st1: Error 18 (driver bt 0x0, host bt 0x0).
Mar 6 05:00:05 gcs-amsdc1-nbmed02 kernel: st2: Error 18 (driver bt 0x0, host bt 0x0).

# more messages | grep DOWN
Mar 2 13:56:33 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 13:56:34 gcs-amsdc1-nbmed02 ltid[14446]: Request for media ID D177L5 is being rejected because mount requests are disabled (reason = robotic daemon going to DOWN state)
Mar 2 13:56:34 gcs-amsdc1-nbmed02 ltid[14446]: Request for media ID D146L5 is being rejected because mount requests are disabled (reason = robotic daemon going to DOWN state)
Mar 2 13:56:34 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 14:00:11 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 14:00:17 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error

# more messages | grep error
Mar 2 13:56:33 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 13:56:34 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 14:00:11 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 14:00:17 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) going to DOWN state, status: Control daemon connect or protocol error
Mar 2 14:02:11 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) unavailable: initialization failed: Control daemon connect or protocol error
Mar 2 14:07:42 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) unavailable: initialization failed: Control daemon connect or protocol error
Mar 2 14:13:13 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) unavailable: initialization failed: Control daemon connect or protocol error
Mar 2 14:18:44 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) unavailable: initialization failed: Control daemon connect or protocol error
Mar 2 14:24:16 gcs-amsdc1-nbmed02 tldd[14472]: TLD(1) unavailable: initialization failed: Control daemon connect or protocol error
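One way to get a clearer picture before chasing the root cause is to tally which st devices are hitting conflicts and how often. A minimal sketch, assuming /var/log/messages lines in exactly the kernel-message format shown above (the tldd/ltid entries are deliberately ignored):

```python
#!/usr/bin/env python
"""Sketch: tally SCSI reservation conflicts / Error 18 per tape device
from syslog, given kernel lines like:
  'Mar 6 05:00:05 host kernel: st 1:0:1:0: reservation conflict'
  'Mar 6 05:00:05 host kernel: st2: Error 18 (driver bt 0x0, host bt 0x0).'
"""
import re
import sys
from collections import Counter

# Capture the device ('st2' or 'st 1:0:1:0') and the event type.
EVENT = re.compile(r"kernel: (st ?[\d:]+): (reservation conflict|Error 18)")


def tally(lines):
    """Count matching events per st device across an iterable of log lines."""
    counts = Counter()
    for line in lines:
        m = EVENT.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts


if __name__ == "__main__":
    # e.g. python tally_st_errors.py /var/log/messages
    with open(sys.argv[1]) as fh:
        for dev, n in tally(fh).most_common():
            print(dev, n)
```

If the counts cluster on the same drives at the same times of day (here, around 05:00), that points at a scheduled job on the other media server grabbing the shared drives rather than a hardware fault.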