How to identify the status of a backup using the NetBackup CLI?
I launch the 'bpbackup' command from my program and it only gives me a return code; there is no way to know the corresponding job ID. Before I launch the next backup, I want to check the status of the previous backup and make a decision based on it. The 'bpimagelist' command can be used with a 'keyword' filter so that I can find my backup job uniquely, but the problem is that it only lists successful backups. The 'bpdbjobs' command lists all jobs (successful, failed, in progress), but there is no way to uniquely identify my backup job because it does not support filtering the output by the 'keyword' attribute. I also want to filter the status of an individual client, which I still cannot do with bpdbjobs. Any help is highly appreciated. Deb
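One possible approach (a rough sketch, not something from the thread: the policy, schedule, file list, and client names below are placeholders, and flag behaviour should be checked against the bpbackup and bpdbjobs man pages for your NetBackup version) is to tag each job with a unique keyword and let bpbackup itself wait for completion, then fall back to filtering bpdbjobs output with standard text tools:

#!/bin/sh
# Sketch: submit the backup with a unique keyword and wait for it to finish,
# so the exit status reflects the final job status rather than "submitted".
KEYWORD="nightly_`date +%Y%m%d%H%M%S`"   # hypothetical unique tag
CLIENT="myclient"                        # hypothetical client name

bpbackup -p MyPolicy -s UserSched -k "$KEYWORD" -w -f /tmp/filelist
STATUS=$?

if [ "$STATUS" -ne 0 ]; then
    # bpdbjobs has no keyword filter, so narrow the job report down to the
    # client of interest with grep instead.
    bpdbjobs -report | grep "$CLIENT"
fi

This does not recover the job ID directly, but it at least gives a per-job success/failure decision point before the next backup is launched.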
Oracle RMAN incremental to MSDP very slow
We are running RMAN backups to MSDP. The full and archive log backups are very fast and deduplicate pretty well using filesperset 1. When the incremental backups run, it can take as much as 9 minutes to back up a 288 KB file, which makes no sense. It looks like bptm is waiting on something, but I cannot understand what it could be. Oracle is 10g and block change tracking is enabled. The DBAs set filesperset to a higher number and it is faster, but it hardly deduplicates, so that's not going to work. Has anyone else seen anything like this? I don't think it's due to deduplication - it seems to be something on the Oracle side. All the other jobs that run against the MSDP pool run very well. The behavior is the same with client-side and server-side dedupe. The master and media servers are on 7.5.0.4; the Oracle boxes are on 7.1.0.4 due to the old kernel they run.
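For reference, filesperset is set on the RMAN BACKUP command itself. A minimal sketch of the incremental being discussed follows (the channel name and the choice of a single SBT channel are illustrative, not taken from the post, and the syntax should be checked against your RMAN version):

#!/bin/sh
# Sketch: level 1 incremental through the NetBackup SBT interface with
# FILESPERSET 1 (dedupe-friendly: one data file per backup set). Raising
# FILESPERSET speeds the job up but mixes files within a set, hurting dedupe.
rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE sbt;
  BACKUP INCREMENTAL LEVEL 1 DATABASE FILESPERSET 1;
  RELEASE CHANNEL ch1;
}
EOF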
Why is verify speed 10 times faster than write speed on LTO8 when backing up ESXi?
We experience very slow write speeds when backing up from ESXi hosts (about 2,500 MB/min), but when the verify runs, the speed jumps to 20,000 MB/min. This is not a problem with the write head of the LTO8 tape drive, because if I write a big file to the LTO8 drive from the local BackupExec server, the speed is 10,000 MB/min. Why does the verify speed for ESXi hosts differ so much from the write speed?
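A quick unit conversion puts those figures side by side (the ~360 MB/s number is the published LTO-8 native streaming rate, not something taken from the post):

awk 'BEGIN {
    # Reported rates, converted from MB/min to MB/s.
    split("2500 10000 20000", rates, " ")
    for (i = 1; i <= 3; i++)
        printf "%6d MB/min = %3.0f MB/s\n", rates[i], rates[i] / 60
    # LTO-8 native streaming is roughly 360 MB/s, so verify (~333 MB/s) runs
    # close to the drive limit while the ESXi write phase (~42 MB/s) does not.
}'

Read that way, the drive itself looks healthy; the write phase appears to be starved by whatever feeds it (VMware transport mode, network path, or source datastore throughput) rather than limited by the tape hardware.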
Shutdown and restart cluster with VEA
Hi all, I have to shut down our hardware for a while and I am looking for the best way to do it. I have two storage SANs connected to two Microsoft cluster nodes (Windows Server 2003) running VEA 3.2. I plan to shut the systems down in this order:
1. passive cluster node
2. active cluster node
3. first SAN controller
4. second SAN controller
5. both storage arrays
6. FC switches
and then restart them in this order:
1. FC switches
2. both storage arrays
3. both SAN controllers
4. both cluster nodes
Is there anything else I need to watch out for? Will VEA start a resync after the restart that runs for hours, or will it simply reconnect? I hope you can help me. Thank you!
Jobs stuck indefinitely in Queued status
We have had an ongoing issue for about two months now, across three clean builds of Backup Exec 2012. We have opened numerous cases with Symantec to resolve this, and the first time they claimed that HotFix 209149 (see http://www.symantec.com/business/support/index?page=content&id=TECH204116 ) corrects the issue. The issue also shows up as robotic element errors stating that the OST media is full, which brings the "virtual" slot for the PureDisk or OST storage device offline. Restarting the services in BE only makes the problem worse and causes a snowball effect whereby jobs constantly error in the ADAMM log files. Essentially, the jobs can never get a concurrency/virtual slot and they stay queued forever.

I have seen others on this forum with this problem, and while the forum administrator seems to mark them as "Solved", they are not - the threads drop off with no resolution identified. Are other people having this problem? If so, how are you overcoming it? Once it starts, the environment is essentially dead in the water because the jobs never start (they sit queued forever) - save for one concurrency, which for an environment of our size is only a quarter of what we need.

We use a CAS and 4 MMS servers on Windows 2008 R2 with all patches applied, a 32 TB PureDisk volume on each MMS, a Data Domain OST connection to a DD-670 with OST plug-in 2.62 (28 TB), replicated catalogs, and duplicate jobs for optimized deduplication between the MMSs. We run BE 2012 SP3 clean - we reinstalled with SP3 slipstreamed because Symantec said this problem could be fixed either by a manual database repair on their side or by a reinstall. We chose the reinstall (even though they did offer to "fix" the issue with a database repair) to validate whether SP3 truly fixes the issue. It is clear to us that it does not. We are looking for anyone else who has had this problem to report in to this forum. Thank you, Dana
How to Merge Archive PST Files in Outlook 2013?
Hi, I want to merge archive PST files in Outlook 2013, but I don't know the right way to merge multiple Outlook PST files into one. If anybody knows how, please clear up my doubts - and which tool would you recommend for it?
Slow backup to tape
Hi, We're having some trouble backing up virtual and physical servers to tape. We're not getting the speed we should, and it takes a long time to back up the environment: 2.63 TB of data takes more than 24 hours.

Information about the job, tape and tape drive:
Tape type: HPE LTO Ultrium 6 6.25TB MP RW
Tape drive: MSL 2024
Compression: Hardware
Encryption: None
Verify: Yes

These backups go over the internal network within the domain. The average backup speed is 4,300 MB/min, which is far too slow. When we back up locally on the backup server (for example the D:\ drive, where the incremental disk backups are stored), it reaches 10,500 MB/min, which is understandable considering it is a direct connection to the tape drive.

What we have done to try to resolve it:
* Checked the compression
* Checked encryption
* Replaced the tapes with new tapes
* Reset the tape drive
* Updated the tape drive firmware
* Checked the network for inconsistencies (none found; file transfers via other applications run at around 126 MB/s from server to server)
* Job from a VM on the hypervisor to tape (around 2,900 MB/min)
* Local disk to tape (between 10,500 MB/min and 13,500 MB/min)
* VM to disk on the backup server (around 6,200 MB/min)

Help would be appreciated.
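A rough sanity check on those numbers, reading the server-to-server figure as ~126 MB/s (roughly gigabit line rate) and using the published ~160 MB/s LTO-6 native rate, neither of which comes from the post itself:

awk 'BEGIN {
    # Reported rates converted to MB/s for comparison.
    printf "backup over network : %3.0f MB/s\n", 4300 / 60
    printf "local disk to tape  : %3.0f MB/s\n", 10500 / 60
    printf "single gigabit link : %3.0f MB/s\n", 1000 / 8
    # An LTO-6 drive streams at ~160 MB/s native, so even a fully utilised
    # gigabit path (~125 MB/s) cannot keep it streaming, and the observed
    # network backup rate (~72 MB/s) is well below even that.
}'

That would point the investigation at the path between the remote servers and the media server (link speed, duplex, number of concurrent sources) rather than at the tapes or the drive.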
How to accelerate backups for big file servers and SQL servers
I have 3 very big file servers with more than a billion files. At present we use MS-Windows policies to back them up. I was reading that I could use a FlashBackup policy (block level) for this kind of file server. If I go that route, should I use a FlashBackup or a FlashBackup-Windows policy if my server is Windows 2008? I don't fully understand the difference between the MS-Windows, FlashBackup, and FlashBackup-Windows policy types. We use FlashBackup-Windows to back up our VMs on ESX servers. Thanks
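If it helps while deciding, the policy type of what is already configured can be read from the command line and compared (a sketch; 'FileServer_Policy' and 'ESX_VM_Policy' are placeholder names, and the exact output layout varies by NetBackup version):

#!/bin/sh
# List all policies on the master, then pull the "Policy Type" attribute out
# of the detailed listing for two of them to compare side by side.
bppllist

for pol in FileServer_Policy ESX_VM_Policy; do
    echo "=== $pol ==="
    bppllist "$pol" -U | grep -i "policy type"
done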
Abnormal growth of the catalog file system after upgrade
Hi friends, I recently upgraded my NetBackup environment from NetBackup 6.5 to NetBackup 7.5.0.4, and since then my catalog file system has been growing by about 10 GB every week. Catalog compression is already in place but is not helping much. Is anyone else facing this issue after the upgrade? Any idea why this is happening? My environment: master and media servers running Solaris 10, number of clients: 350.
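To narrow down which part of the catalog is actually growing, one option is to snapshot directory sizes and diff them week over week (a sketch using the default UNIX install paths; adjust if your catalog is relocated):

#!/bin/sh
# Record per-directory sizes in KB under the usual catalog locations; -k is
# used because the stock Solaris du may not support -h.
for dir in /usr/openv/netbackup/db /usr/openv/db; do
    du -sk "$dir"/* 2>/dev/null
done | sort -rn > /var/tmp/catalog_sizes.`date +%Y%m%d`

# A week later, diff the two snapshots to see which area is growing, e.g.:
#   diff /var/tmp/catalog_sizes.<old> /var/tmp/catalog_sizes.<new>

Whether the growth is in the image catalog, the job data, or the relational database under /usr/openv/db changes what the fix looks like, so this is worth checking before pruning anything.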