Backup failing with error 13 for Windows 2008 R2
Hi all,

I am facing a backup failure with multiple Windows 2008 R2 servers; the backups fail with status 13.

Master server: HP-UX, NBU version 7.5.0.6
Media server: HP-UX, NBU version 7.5.0.6
Client: Windows 2008 R2, NBU version 7.5.0.4
Storage destination: DDBoost

Issue details: If the jobs run with multistreaming enabled, they complete successfully, but the same jobs run without multistreaming fail with status 13, which I find strange. With multistreaming the bpbkar log on the client is written; without multistreaming bpbkar does not write a single byte and the job is stuck in the connecting state.

Connectivity is working fine; I have verified it from the master and media servers: ping, telnet, bptestbpcd, and bpclntcmd from the client all succeed.

One more strange thing: if I run the backups over the production IP, jobs run fine both with and without multistreaming.

Below is the job log; kindly review it and suggest anything that can be done.

04/22/2015 14:47:51 - estimated 0 kbytes needed
04/22/2015 14:47:51 - Info nbjm (pid=14586) resumed backup (backupid=xxxxxx) job for client A, policy yyyy, schedule Daily_Incremental on storage unit zzdd-stu-disk
04/22/2015 14:47:53 - started process bpbrm (pid=27195)
04/22/2015 14:47:54 - connecting
04/22/2015 14:47:58 - Info bpbrm (pid=27195) starting bpbkar on client
04/22/2015 14:47:58 - connected; connect time: 0:00:00
04/22/2015 14:52:58 - Error bpbrm (pid=27195) socket read failed: errno = 232 - Connection reset by peer
04/22/2015 14:52:59 - Info bpbkar (pid=0) done. status: 13: file read failed
04/22/2015 14:52:59 - end writing
file read failed (13)
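One detail worth checking in a log like the one above: the connection is established at 14:47:58 and the socket read fails at exactly 14:52:58, a gap of exactly five minutes. That matches NetBackup's default CLIENT_READ_TIMEOUT of 300 seconds, and 300 seconds is also a common idle-connection timeout on firewalls between network segments, which would fit the observation that backups over the production IP succeed. A small script to confirm the gap from the timestamps (a hypothetical helper, not part of NetBackup):

```python
from datetime import datetime

def gap_seconds(log_lines):
    """Return seconds between the 'connected' event and the socket read error."""
    connected = reset = None
    for line in log_lines:
        stamp = " ".join(line.split()[:2])  # e.g. "04/22/2015 14:47:58"
        when = datetime.strptime(stamp, "%m/%d/%Y %H:%M:%S")
        if "connected;" in line:
            connected = when
        elif "socket read failed" in line:
            reset = when
    return (reset - connected).total_seconds()

log = [
    "04/22/2015 14:47:58 - connected; connect time: 0:00:00",
    "04/22/2015 14:52:58 - Error bpbrm (pid=27195) socket read failed: "
    "errno = 232 - Connection reset by peer",
]
print(gap_seconds(log))  # -> 300.0
```

A gap that lands exactly on a round timeout value is usually a timer firing, not a data error, so comparing it against CLIENT_READ_TIMEOUT and any firewall idle timeout in the path is a reasonable next step.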
5220 appliance IPMI confusion and mystery

I have a 5220 NetBackup appliance which acts as master and media server, recently upgraded to the latest software, 2.6.1.1. I also purchased additional memory and another storage shelf, which were successfully installed.

Everything was running along just fine when I got a call from one of our "networking guys". While tracing packets for an unrelated problem, he saw an awful lot of traffic from a certain MAC address, one off from what's listed as the main interface on my 5220. Not mine, I said. Well, he assigned the MAC address a name and an IP address, and now, when I browse to it over HTTPS, voila: the IPMI interface for my 5220.

How did this happen? I looked at the back, and there is an Ethernet-looking connector labelled IPMI, but there's no cable in it. It isn't causing any trouble, but I would like to know whether this is something new, or something that's been broadcasting a while and someone just noticed. Besides solving the mystery, I have another appliance on which I might like to get IPMI going, and would really like to know what happened here.
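A plausible explanation for the "one-off" MAC: on many server boards the baseboard management controller (BMC) that serves IPMI can share a physical LAN port with the host NIC ("sideband" mode), so it answers on the network even with nothing plugged into the dedicated IPMI jack, and its MAC address is commonly assigned adjacent to the host NIC's. This is vendor-specific behavior, not something the source confirms for the 5220; the exact offset varies. A quick way to compute the candidate BMC MAC from a host MAC (a generic sketch):

```python
def adjacent_mac(mac: str, offset: int = 1) -> str:
    """Return the MAC address `offset` above the given one, colon notation."""
    value = int(mac.replace(":", ""), 16) + offset
    raw = f"{value:012x}"  # 12 hex digits, zero-padded
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

# Hypothetical host NIC MAC, for illustration only:
print(adjacent_mac("00:1e:67:12:34:ff"))  # -> 00:1e:67:12:35:00
```

If the mystery MAC equals the host NIC's MAC plus a small offset, that strongly suggests a sideband BMC rather than a rogue device.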
SAN Client doesn't use FC to backup to 5230 Appliance

Hello everybody, and thank you in advance for any help or clarification.

We just zoned one SAN client to a new 5230 appliance. After the zoning was done, we did the following:

- Enabled SAN Client Fibre Transport on the media server [use FT for backups to this appliance] and rebooted the appliance.

When I ran "Show FC", it showed the two ports I zoned on the appliance (slot 5 and slot 6) as Targets (that's what I wanted) and their status as "Fabric". I tried to run a test backup, but it used the LAN, not the SAN. What am I missing?

Here is the result of the Show command on the appliance:

FC HBA card(s) are configured correctly.

**** FC HBA Cards ****
07:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
07:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
08:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
08:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
81:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
81:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
82:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
82:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)

**** Drivers ****
qla2xxx is loaded
windrvr6 is loaded

**** Ports ****
Bus ID   Slot   Port WWN                 Status    Mode       Speed     Remote Ports
07:00.0  Slot5  21:00:00:24:FF:4F:E6:FC  Fabric    Target     8 gbit/s  ------------
07:00.1  Slot5  21:00:00:24:FF:4F:E6:FD  Online    Initiator  8 gbit/s  0x500104f000baf899 0x500104f000baf83f 0x500104f000baf809
08:00.0  Slot6  21:00:00:24:FF:4F:E5:9C  Fabric    Target     8 gbit/s  ------------
08:00.1  Slot6  21:00:00:24:FF:4F:E5:9D  Online    Initiator  8 gbit/s  0x500104f000baf85d 0x500104f000baf875 0x500104f000baf84e
81:00.0  Slot4  21:00:00:24:FF:4F:E8:6C  Online    Initiator  8 gbit/s  0x500104f000baf854 0x500104f000baf80c 0x500104f000baf84b
81:00.1  Slot4  21:00:00:24:FF:4F:E8:6D  Linkdown  Initiator  8 gbit/s
82:00.0  Slot2  21:00:00:24:FF:4F:E6:CE  Online    Initiator  8 gbit/s  0x500104f000baf878 0x500104f000baf86c 0x500104f000baf860
82:00.1  Slot2  21:00:00:24:FF:4F:E6:CF  Linkdown  Initiator  8 gbit/s

**** Devices ****
Device     Vendor ID  Type            Remote Port
/dev/sg10  HP         Ultrium 4-SCSI  0x500104f000baf875
/dev/sg11  HP         Ultrium 4-SCSI  0x500104f000baf84e
/dev/sg12  HP         Ultrium 4-SCSI  0x500104f000baf899
/dev/sg13  HP         Ultrium 4-SCSI  0x500104f000baf85d
/dev/sg14  HP         Ultrium 4-SCSI  0x500104f000baf80c
/dev/sg15  HP         Ultrium 4-SCSI  0x500104f000baf84b
/dev/sg16  HP         Ultrium 4-SCSI  0x500104f000baf854
/dev/sg17  HP         Ultrium 4-SCSI  0x500104f000baf860
/dev/sg18  HP         Ultrium 4-SCSI  0x500104f000baf86c
/dev/sg19  HP         Ultrium 4-SCSI  0x500104f000baf878
/dev/sg8   HP         Ultrium 4-SCSI  0x500104f000baf83f
/dev/sg9   HP         Ultrium 4-SCSI  0x500104f000baf809

**** Remote Appliances over FC ****
Please scan for remote appliances over FC first

**** Notes ****
(NOTE: Ports in mode "Initiator*" are configured for target mode when SAN Client FT Media Server is active, however, are currently running in initiator mode, i.e. SAN Client is disabled or inactive.)
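One clue is visible in the output itself: both target-mode ports (slot 5 and slot 6) show status "Fabric" with no entries under Remote Ports, meaning no SAN client initiator has actually logged in to them. That points at the zoning, or at the SAN Client Fibre Transport service on the client not seeing the targets, rather than at the appliance configuration. A small parser that flags such "orphan" target ports (a sketch; the column layout is assumed from the Ports table above):

```python
def target_ports_without_initiators(port_rows):
    """From 'Show FC' Ports rows, return WWNs of target-mode ports
    that have no remote initiator ports logged in."""
    orphans = []
    for row in port_rows:
        fields = row.split()
        # Columns: Bus ID, Slot, Port WWN, Status, Mode, Speed (two tokens), Remote Ports...
        wwn, mode = fields[2], fields[4]
        remotes = [f for f in fields[7:] if f.startswith("0x")]
        if mode == "Target" and not remotes:
            orphans.append(wwn)
    return orphans

rows = [
    "07:00.0 Slot5 21:00:00:24:FF:4F:E6:FC Fabric Target 8 gbit/s ------------",
    "07:00.1 Slot5 21:00:00:24:FF:4F:E6:FD Online Initiator 8 gbit/s 0x500104f000baf899",
    "08:00.0 Slot6 21:00:00:24:FF:4F:E5:9C Fabric Target 8 gbit/s ------------",
]
print(target_ports_without_initiators(rows))
# -> ['21:00:00:24:FF:4F:E6:FC', '21:00:00:24:FF:4F:E5:9C']
```

When a target port is healthy and a SAN client is logged in, its client's initiator WWN should appear under Remote Ports; until then, backups for that client will fall back to the LAN.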
Can Netbackup Appliance (NBU5220) be used to backup app_cluster with virtual_name?

We have a 2-node VCS cluster, with both nodes installed as NBU SAN media servers:

db1 (NBU 6.5.4)
db2 (NBU 6.5.4)
NBU_master (7.6.0.3)
nbu5220 (2.6.0.3)

I used the nbemmcmd command to create an app_cluster, db-cluster:

nbemmcmd -addhost -machinename db-cluster -machinetype app_cluster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename DB1 -machinetype media -netbackupversion 6.5 -clustername db-cluster -masterserver NBU_master

To use the virtual_name of the app_cluster as the NBU client, a storage unit (STU) using the virtual_name as its media server needs to be created. But if I want to use the MSDP pool of the nbu5220 to back up the app_cluster under its virtual_name, no STU associated with the virtual_name can be created.

Is there any way to use the NetBackup Appliance (NBU5220) to back up an app_cluster under its virtual_name?

1) Would transforming db1/db2 from SAN media servers into SAN clients be feasible?
2) Since we still need to use the tape library, we don't want to make them SAN clients. Is there any other way?

Thanks.