NB 7.5 Remote Backup Procedure after Upgrading and Changing the Master Server
Environment: NetBackup 7.5 media server with PureDisk and a tape library attached. I need to back up the remote site "XYZ" over a WAN link.

History: Previously we were backing up XYZ using synthetic backups to media server "B", with master server "A" in another location. We then moved to a new master server "A1" and media server "B1". Before moving, we replicated the data from B to B1 using an SLP. We used the following tech note to seed: http://www.symantec.com/business/support/index?page=content&id=TECH144437, in particular this section:

"For example, assume two new remote clients, remote_client1 and remote_client2, are being backed up for the first time. Data for both clients has been copied via a transfer drive and backed up locally to the media server media1, using a policy called "transfer_drive". Run the following commands on the media server to set up a special seeding directory using the transfer_drive backup images for each client:

$ seedutil -seed -sclient media1 -spolicy transfer_drive -dclient remote_client1
$ seedutil -seed -sclient media1 -spolicy transfer_drive -dclient remote_client2

Verify the seeding directory has been populated for each client:

$ seedutil -list_images remote_client1
$ seedutil -list_images remote_client2

Run backups for remote_client1 and remote_client2, then clean up the special seeding directory:

$ seedutil -clear remote_client1
$ seedutil -clear remote_client2

Clearing the special seeding directory is important. The source backup images referenced in the special seeding directory will not be expired until they are no longer referenced. To help with this, the special seeding directory for a client will automatically be cleared whenever an image is expired by NetBackup for that client. That being said, it is good practice to explicitly clean up the special seeding directory when it is no longer needed."

Now: We attempted to run backups but they continually failed with error 14 (file write failed). We had the Accelerator unticked. Because of the way our VPNs and WAN links are set up, we decided to point the client to another media server, C1, which connects to master server A1.

How should I progress with backing up the client? Should I use the Accelerator, enable client-side dedupe, and just run a full backup over the link? Or should I somehow replicate from media server B to C1, or from B1 to C1, then follow the seeding tech note above and run a full backup?

I'm a little confused about how remote backups are actually supposed to work, whether the Accelerator works on the first backup, and whether it will use the existing dedupe data. Any help is appreciated.
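For reference, this is the seeding-plus-first-backup sequence I have in mind against C1, based on the tech note above. The client, policy, and schedule names below are placeholders for our real ones, and it assumes a local copy of the client's data already exists in C1's dedup pool to seed from. As far as I understand, the Accelerator cannot speed up the very first backup anyway, since it needs an initial full to build its track log; seeding only helps client-side dedup find matching segments so the first full sends less over the WAN:

$ # On media server C1: populate the special seeding directory from the local copy
$ seedutil -seed -sclient C1 -spolicy seed_copy_policy -dclient remote_client
$ # Confirm the seeding directory references the images
$ seedutil -list_images remote_client
$ # From master A1: kick off the first full backup of the remote client
$ bpbackup -i -h remote_client -p remote_policy -s full_schedule
$ # After the first backup completes, clear the seeding directory
$ seedutil -clear remote_client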
Client side dedup issue

Hi there, first of all let me explain our backup infrastructure. We have a NetBackup 5230 appliance (2.6.1.1) with a 25 TB shelf at the main site. Copies within that site work with no issues.

We also have a remote site with a VMware ESX 5.1 host. It runs a Windows 2012 file server (fileserver) and a Windows 2008 R2 VMware backup host (BHserver), both connected to the same datastore on an HP SAS enclosure attached to the server. The sites are connected by a 100 Mb/s point-to-point circuit.

The issue is that deduplication appears to be happening on the appliance rather than on the client side. We have set "Always use client-side deduplication" on fileserver, and BHserver is selected as the VMware backup host in the job policy.

I am testing with a boot disk (40 GB). The first copy took 77 minutes (log1.txt). We expected the second copy to be faster because far less data should be sent, but it took the same time (log2.txt). As you can see, 444151 KB were sent, with a theoretical dedup rate of 98.9%. How is it possible that it takes the same time?

I was checking the network with Traffic 10 0 and it was sending at 80 Mb/s the whole time (and 40765163 KB x 8 / 1024 / 80 / 60 = 66 minutes, close to the 73 minutes it took). Because of this I think dedup is being done on the target side, not the source. The rest of the tries take approximately the same time (log3.txt).

How can we verify or enforce client-side deduplication? Many thanks for your help. Regards.
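From what I have read since posting: for VMware policies, client-direct deduplication is performed by the VMware backup host (BHserver in our case), not by the guest, so the setting on fileserver may not be the one that matters. Here is a sketch of what I plan to check on the master, assuming the -client_direct values map as I read the docs (0 = always use the media server, 1 = prefer client side, 2 = always use client side):

$ # Show the current client attributes for the backup host
$ bpclient -client BHserver -L
$ # Force "always use client-side deduplication" for the backup host
$ bpclient -client BHserver -update -client_direct 2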
Unable to create MSDP

Hello, I am currently unable to create a Media Server Deduplication Pool. I am using Windows Server 2012 R2 for my master/media server; the NetBackup version is 7.6.1.1.

I am trying to create a deduplication disk pool using the Storage Server Configuration Wizard, but it ends with an error saying "cannot connect on socket. RDSM has encountered an issue with an STS CORBA exception: getStorageServerInfo".

The license I am using is the following: SYMC NETBACKUP PLATFORM BASE COMPLETE EDITION FLEX PACK 7.6 WIN/LNX/SOLX64 UP TO 9 FRONT END TBYTE BNDL MULTI LIC PARTNER ESSENTIAL 12 MONTHS REWARDS BAND E

The credentials I am entering when creating the deduplication pool are those of the administrator user of the master/media server. Below is a screenshot of what I get in the end. I am sorry it is a Japanese version of NetBackup, but the error basically says "cannot connect on socket. RDSM has encountered an issue with an STS CORBA exception: getStorageServerInfo".

Additionally, below is the output of nbemmcmd -listhosts -verbose and nbemmcmd -getemmserver, translated from Japanese. The host "abtyh1mp" is the master/media server.

What am I doing wrong, and why can't I create a deduplication pool? Any help would be appreciated. I have attached what I am doing and the error I get in an Excel sheet.

--------------------------------------------------------------
C:\Program Files\Veritas\NetBackup\bin\admincmd>nbemmcmd -listhosts -verbose
NBEMMCMD, version: 7.6.1.1
The following hosts were found:
abtyh1mp
    MachineName = "abtyh1mp"
    FQName = "abtyh1mp"
    MachineDescription = ""
    MachineNbuType = server (6)
abtyh1mp
    ClusterName = ""
    MachineName = "abtyh1mp"
    FQName = "abtyh1mp"
    GlobalDriveSeed = "VEND:#.:PROD:#.:IDX"
    LocalDriveSeed = ""
    MachineDescription = ""
    MachineFlags = 0x17
    MachineNbuType = master (3)
    MachineState = active for disk jobs (12)
    NetBackupVersion = 7.6.1.1 (761100)
    OperatingSystem = windows (11)
    ScanAbility = 5
abtyh1en.honsha.jfe-steel.co.jp
    MachineName = "abtyh1en.honsha.jfe-steel.co.jp"
    FQName = "abtyh1en.honsha.jfe-steel.co.jp"
    MachineDescription = ""
    MachineNbuType = virtual_machine (10)
abtyh1mk.honsha.jfe-steel.co.jp
    MachineName = "abtyh1mk.honsha.jfe-steel.co.jp"
    FQName = "abtyh1mk.honsha.jfe-steel.co.jp"
    MachineDescription = ""
    MachineNbuType = virtual_machine (10)
The command completed successfully.

C:\Program Files\Veritas\NetBackup\bin\admincmd>nbemmcmd -getemmserver
NBEMMCMD, version: 7.6.1.1
The following hosts were found in this domain: abtyh1mp
Checking host "abtyh1mp"...
Server Type    Host Version    Host Name    EMM Server
master         7.6             abtyh1mp     abtyh1mp
The command completed successfully.
--------------------------------------------------------------
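In case it helps, this is what I plan to try from the command line before re-running the wizard; it assumes the wizard may have left a half-created storage server behind (the storage server name here is our master/media server):

C:\Program Files\Veritas\NetBackup\bin\admincmd>nbdevquery -liststs -stype PureDisk -U
(if a broken PureDisk storage server is listed, remove it so the wizard can start clean)
C:\Program Files\Veritas\NetBackup\bin\admincmd>nbdevconfig -deletests -storage_server abtyh1mp -stype PureDisk

I will also verify in Services that the NetBackup Deduplication Engine (spoold) and NetBackup Deduplication Manager (spad) are running before retrying.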
Storage Unit Initiating Parent Jobs

Hello, I have an odd situation with a storage unit group containing 4 storage units: 2 PureDisk and 2 AdvancedDisk. The PureDisk STUs are numbers 1 and 2 in the list, and we are using 'Failover' for storage unit selection.

We discovered that if the 1st storage unit was busy, the backup would be written to STU number 3 in the list, which is AdvancedDisk, skipping STU number 2 altogether. This is not expected behaviour in itself, because with failover it should only go to the next available STU if the first is down or out of media, but I understand that NetBackup may consider an STU down if it is busy?

I logged this with support and they said that you cannot use PureDisk volumes in storage unit groups [which is not helpful in itself], so that may explain why the storage unit group is not behaving as it should.

We also discovered that 'On demand only' was selected, and we did not want this, since no policy or schedule uses any individual STU; all policies point to the storage unit group. Since 'On demand only' was de-selected, we are now seeing that STU number 2 (the 2nd in the list) is initiating backup parent jobs, but the child jobs are writing the data to STU number 1 (the first in the list).

Can anyone explain why this may be happening? Thanks in advance, Tracy
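For what it is worth, the relevant settings can at least be inspected and toggled from the CLI; a sketch, with a hypothetical storage unit label, and assuming -odo is the on-demand-only flag as per the bpsturep documentation:

$ # List all storage units with full details, including the On Demand Only setting
$ bpstulist -U
$ # Turn On Demand Only off (0) for an individual storage unit
$ bpsturep -label stu_pd_01 -odo 0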
NetBackup Deduplication and Front End Terabyte (FETB)

Hi, I am hoping for some guidance on how Symantec calculates capacity licensing (FETB, Front End Terabyte) when data is initially deduplicated to disk and then duplicated to tape after several months.

Query: If we create an SLP (storage lifecycle policy) that continually dedupes data from the client machine and then duplicates the deduped data from disk to tape after x months, what will the total FETB be? Will the calculation be based on the initial "full" backup of the client, or on the eventual deduped amount of data (as the deduped data is cut over to tape over a period of time)?

Hypothetical scenario: Client A = 500 GB (full backup) = 50 GB of deduped data on disk (total after 3 months).

What is the correct calculation when executing the nbdeployutil tool? Will the total FETB be 50 GB or 500 GB? Any information is much appreciated. Thanks!
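For anyone wanting to reproduce the numbers, the capacity report is generated in two steps; the output directory below is a placeholder, and the exact report flags may vary slightly by release. My understanding is that FETB is measured against the front-end, pre-deduplication size of the protected data, so in the scenario above it would be 500 GB, not 50 GB:

$ # Step 1: gather licensing data on the master server
$ nbdeployutil --gather --output /tmp/nbdeploy
$ # Step 2: build the capacity (FETB) report from the gathered directory
$ nbdeployutil --report --capacity /tmp/nbdeploy/<gathered-dir>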
Duplication incrementals from MSDP to Tape

Hi, I have an SLP that backs up to MSDP and then duplicates to tape. When I run a full, it of course duplicates a full to tape as well. But when I run incrementals, I have the feeling it is always duplicating full images to tape. Is this correct?
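A way to check rather than guess: list the images and compare the schedule type against the size written to tape. Note that duplication from MSDP to tape rehydrates the data, so even a genuine incremental is written to tape at its full logical (pre-dedup) size, which can look surprisingly large. The client name and date below are placeholders:

$ # Show recent images for a client, with schedule type (Full vs Incr) and size
$ bpimagelist -client myclient -d 06/01/2015 -U
$ # Narrow the listing to incrementals only (-st takes a schedule type such as INCR)
$ bpimagelist -client myclient -d 06/01/2015 -st INCR -U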
HP StoreOnce OST Plug-in for NetBackup on Windows/Linux/Solaris & HPUX

There have been several queries on this forum about downloading the HP StoreOnce OST plug-in for NetBackup, which seem either out of date or don't give the actual information, so here is the latest info.

An article from Dec 2012 on HP's Support Center, "HP D2D/B6000 StoreOnce Backup Systems - OST Software Information with Download Details", directs you to http://www.software.hp.com/kiosk, but with the incorrect username and password! Please use the following:

User Name: STOREONCE_KIOSK
Password: STOREONCEAPPS

Once there, the following HP StoreOnce OST plug-ins are available for download:

HP StoreOnce Catalyst Plugin for Oracle RMAN Windows v1.0
HP StoreOnce Catalyst Plugin for Oracle RMAN Linux v1.0
HP StoreOnce Enterprise Manager V1.2 for Windows
HP StoreOnce D2D OST Plugin v1.2 for Windows/Linux/Solaris & HPUX
HP OST Plugin v2.0 for NBU for Solaris Sparc v10/HPUX 11.31i
HP OST Plugin v2.0.1 for BE for Windows
HP OST Plugin v2.1 for NBU for Windows/Linux

Release notes and installation guides are usually included with each plug-in's tar/zip file. Please ensure that your environment is compatible with the latest Symantec and HP matrices, including the HP Enterprise BURA Solutions (EBS) compatibility matrix on SPOCK (requires creating an HP Passport account): http://www.hp.com/storage/spock [It will be updated for OST v2.1 shortly!]
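After installing, it is worth confirming that NetBackup can actually see the plug-in. A quick check on a UNIX media server (the paths are the usual defaults; adjust for Windows installs):

$ # OST plug-in libraries are normally installed here on UNIX media servers
$ ls /usr/openv/lib/ost-plugins/
$ # Ask NetBackup to enumerate the storage server plug-ins it can load
$ /usr/openv/netbackup/bin/admincmd/bpstsinfo -pi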
AIR with Data Domain

Geeks, good day! We have set up AIR using a DD Boost association, and the SLPs are running fine. The issue we are facing is very low throughput; we also tried manually duplicating the images to the target DD, and even those run slowly.

sysadmin@source# ddboost association show
Local Storage Unit   Direction      Remote Host   Remote Storage Unit
------------------   ------------   -----------   -------------------
air_source-psu       replicate-to   dest_host     air_dest-rsu
------------------   ------------   -----------   -------------------

So my question is: do we also need to set up a trust relationship between the two masters? We have not done so yet; we simply configured AIR using the Data Domain association. The storage units are created correctly, using disk pools on the source and target Data Domain storage units. What parameters should we check, as everything looks fine on the network side?

Thanks, PranavB
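Before digging into DD Boost settings, it may be worth proving what the WAN link itself can sustain between the two sites, so slow replication can be separated from a slow network. A quick sketch with iperf3, assuming it is available on a host at each site (the hostname is a placeholder):

$ # At the target site: start an iperf3 server
$ iperf3 -s
$ # At the source site: run a 30-second test with 4 parallel streams
$ iperf3 -c target-host -t 30 -P 4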
Backup performance

How can I improve backup performance? For example, if a backup operation is taking 10 hours, I need it to finish within 5 hours or less, as early as possible. Is this possible or not? If it is possible, how? Please give me an example.
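One common starting point is tuning the media server data buffers; larger and more numerous buffers often improve tape throughput considerably. A sketch for a UNIX media server follows; the values are illustrative starting points rather than universal recommendations, and each drive stream consumes (buffer size x buffer count) of shared memory. Beyond buffers, the usual levers are multiplexing, multiple data streams per client, and checking whether the client read speed or the network is the real bottleneck:

$ # Raise the tape data buffer size to 256 KB and the buffer count to 128
$ echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
$ echo 128 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
$ # Confirm what a subsequent job actually used, via the bptm log
$ grep -i "io_init" /usr/openv/netbackup/logs/bptm/log.*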
Advice for my SQL backup schemes - what are you doing?

So all of my SQL servers have now been moved to VMware (along with most of my other servers). Currently, SQL-native backups are dumped from every server to a share on my backup server: \SQLDumps$\SERVERNAME\DB_NAME.

Also, since moving to NetBackup 7.5, I use the VMwarev2 type of backup, with no SQL Server integration. I believe I am correct in thinking that as long as I do a VMwarev2-style backup, it will always pull all files, and because it quiesces, SQL would be consistent if I restored the entire VM at once.

I have tried using the SQL integration before, but it seemed like it was going to be a massive job, as many servers were giving me different errors to chase down, plus I'd still be backing up the files twice. Since I have no need for anything more granular than a night's backup, the SQL dumps are for individual database restores, and the full VMwarev2 backups are for full emergency restores.

I suppose this works, but it feels strange backing up the database files twice every time, which I believe won't dedupe well. It's about 600 GB of databases, so a total of 1.2 TB per night, since they essentially back up twice.

What do you think, does my plan as-is make sense? Are you doing something similar or different?
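For the dump half of the scheme, here is a minimal sketch of the nightly native dump via sqlcmd; the server, database, and UNC path are placeholders matching the share layout above. One detail relevant to the double-backup concern: SQL's WITH COMPRESSION makes the .bak smaller, but compressed dumps generally dedupe very poorly, so if the dumps land on dedup storage an uncompressed dump is often the better trade:

$ # Windows authentication (-E); WITH INIT overwrites last night's dump file
$ sqlcmd -S SERVERNAME -E -Q "BACKUP DATABASE [DB_NAME] TO DISK = N'\\backupserver\SQLDumps$\SERVERNAME\DB_NAME.bak' WITH INIT"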