Duplication of Backup sets to be scheduled
Hello all, I am trying to find a way in BE 2014 to schedule duplication of backup sets. I am duplicating one server (backup set) this way: https://www.veritas.com/support/en_US/article.HOWTO98918 I have about 10 servers and I need to create a monthly backup, which should consist of the last full backup set of all servers backed up to virtual disk (the link above works for only one server). Duplicating backup sets is the only way I can see to create monthly tapes. I do not want to do it manually each month; is it possible to automate this somewhere? Thank you, Robert

Oracle backup job failed - Unexpected error - E0001005
Hello, I am having difficulty backing up an Oracle instance. It worked fine for more than a month, but it started to fail a few weeks back. We are using a Symantec Backup Exec Appliance - Symantec Backup Exec 2014 on a Windows 2008 R2 server. All credentials are tested and OK. I went through the recommended Oracle Agent setup on the Windows server, but I am still getting the following error message: Unfortunately no RMAN log is attached to the backup job log. Thank you in advance for any information on this. Vladimir

Dedupe doesn't seem to be working properly 3600 R3 was R2
The appliance was R2 and was upgraded to R3 twice, once in September and once again at the end of October. Ops Center crashed on the first appliance. Three months ago, directly before the first upgrade, I was sitting at 2.3 to 2.5 TB of dedupe storage at a 16.1:1 ratio, so I had about 37.03-40.25 TB of hydrated data. I upgraded the appliance around Sept 16, and looking in the adamm.log you can really tell it starts to grow from that point until now; within 2 weeks it climbed a TB. Now I am at 4.88 TB and about to fill up the storage, and I am manually deleting the oldest backup sets to try to reclaim some space. My dedupe ratio is now at 10:1. Since I have only added 7-8 TB of hydrated data, yet have doubled the amount of space taken up by the dedupe store, something seems wrong. I have only added 3 servers to the mix since Sept 16th, each housing about 50 GB of data. All jobs follow a basic template, with a few exceptions: one full a month with differentials in between, the fulls kept for 95 days and the differentials kept for 32 days. I have a ticket open on this but the tech doesn't believe anything is wrong. We looked in the audit log and it shows media being reclaimed and deleted, but I don't think this happens consistently. Right now I have a lot of expired backup sets that are not being deleted. Some, I think, are affected by the known bug that prevents deletion of backup sets taken with 2012; others, though, I expired yesterday morning and they are still there, while others are deleted right away. My environment is all Windows: almost half 2012 R2, almost half 2003, a couple of 2008, and 2 Windows 7 machines. Only 2 or 3 of the 20 servers being backed up have more than 100 GB on them, and only 10 of them have more than 50 GB.
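As a quick sanity check on the numbers above, the hydrated figure is just the stored dedupe size multiplied by the reported ratio. A minimal sketch (the TB figures come straight from the post; the helper name is mine):

```python
def hydrated_tb(stored_tb, dedupe_ratio):
    """Estimate protected (hydrated) data from stored size and dedupe ratio."""
    return stored_tb * dedupe_ratio

# Pre-upgrade: 2.3-2.5 TB stored at 16.1:1 -> roughly 37.03-40.25 TB hydrated
low, high = hydrated_tb(2.3, 16.1), hydrated_tb(2.5, 16.1)
# Post-upgrade: 4.88 TB stored at 10:1 -> about 48.8 TB hydrated
now = hydrated_tb(4.88, 10.0)
print(round(low, 2), round(high, 2), round(now, 2))
```

In other words, stored data roughly doubled (2.3-2.5 TB to 4.88 TB) while hydrated data grew only about 20-30%, which is the core of the complaint.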
One of the servers with more than 100 GB has 800 GB of data on it, a quarter of it pictures, which I know do not dedupe well, but this server was being backed up pre-upgrade to 2012 with no issues. The other server with more than 100 GB is a file server with 200-300 GB on it, which resides on a Windows dedupe volume that takes it down to 150-200 GB. To me there is no way, in my environment, that going from 37-40 TB of hydrated data and 2.3-2.4 TB deduped at a 16.1:1 ratio (where it sat for the 4 months leading up to the 2014 upgrade) to 47-48 TB of hydrated data and 4.88 TB deduped at a 10:1 ratio within 2 months of the upgrade is correct. The only reason I haven't filled up my dedupe storage yet is that I am manually expiring backup sets. But it is still climbing, the dedupe ratio is going down, and I won't be able to fight it off much longer without more drastic expirations. Is anyone else experiencing something similar? Is there any more information I can give that would help?

Best Practices for Implementing Two Backup Exec Appliances
Greetings to the well of knowledge... I recently purchased two BE3600 R2 appliances running BE 2012. My intent is to have one appliance running locally in our main office for our regular backups and another located at one of our remote sites for replication and disaster recovery purposes. I am looking for best practices for implementing such a structure. My environment currently backs up four file servers and two SQL Server instances. For the file servers I am doing weekly full backups on Saturdays and incremental backups on weeknights. For SQL Server I am doing weekly full backups on Saturdays, transaction log backups every hour from 8 AM to 5 PM, and nightly differentials at 6 PM. How can I best take advantage of the second appliance to ensure that I have reliable DR backups? Regards, Ken Carter, TILL, Inc.

Duplicate secondary image on CAS directly to attached tape library
We have implemented Backup Exec 3600 R2 appliances at remote sites as Managed Backup Exec Servers (MBES), with a separate Backup Exec 2012 SP3 server at the main site as the Central Admin Server (CAS). We have the following backup definition created from the MBES: 1. Take local backups to the BE appliance dedupe pool. 2. Duplicate these backup sets (optimized duplication to the shared dedupe pool of the CAS) from the remote site to the main site (2nd stage). We then need another duplicate job for the monthly/yearly images on the CAS to go directly to the attached tape library. How can we satisfy this requirement? Your valuable input is very much appreciated. Thank you in advance,

PDVFSService runs a single thread?
Hello, I've been taking a look at the Backup Exec 3600 appliances, and I found that (apparently?) PDVFSService runs a single thread: although the appliance is quad core, the service sometimes peaks at 25% in Task Manager but never crosses that boundary, which suggests a single-threaded process. I'm currently looking deeper into this behaviour, but it seemed to peak during the "updating catalogs" phase. Although most of the time the service is not even peaking, it does seem like a waste if it is running in a single thread, correct? Updating catalogs is heavy, particularly on systems with huge amounts of info (Exchange, etc.), so it looks to me like there would be a lot to gain at that stage if this were improved/corrected. Again, this is not an "I need a fix" question, but it does seem (to me at least) like a clear case of something to improve, the sooner the better. Let me know your thoughts on this. [], RC

Backup Exec 2012 sp2 Deduplication
Hi, I bought 2 Backup Exec 3600 appliances to replace my current Backup Exec server. For the first time I will use deduplication. One box will be on the main site and the other one on the remote site for replication, and I will use tapes for long-term data retention. What is your advice on the frequency of the incremental/differential and full backups? How will deduplication work for the DAG - will it be client-side deduplication or not? Do you have any recommendations? My environment is composed of SQL Server, an Exchange 2010 SP3 DAG, an Enterprise Vault server, Windows file servers, SharePoint, a Windows 2008 Active Directory server, and VMware 5.1 servers. A few servers have volumes (drives) connected to an iSCSI EqualLogic SAN; those are backed up with Backup Exec and I want to be able to use deduplication on them. Thanks for your answers.

Client side dedupe for Linux machines
For all our Linux machines, physical and virtual, we get the following when backing up: "Client-side deduplication is enabled for this job, but it could not be used." The link points us to KB article V-79-18273. I have tried following the steps outlined in the article, but continue to experience the same issue. Any suggestions?
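Client-side dedupe generally requires the Linux agent to resolve the media server's name and reach its deduplication engine directly; when it can't, jobs silently fall back to server-side dedupe with exactly this message. A quick reachability check run from one of the Linux clients can help narrow it down. This is only a sketch: the hostname is a placeholder for your media server, and ports 10082/10102 are the commonly cited Backup Exec dedupe engine ports - verify them against your own environment.

```python
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "beserver.example.com" is a hypothetical media-server name; 10082/10102 are
# assumed dedupe engine ports -- check your firewall rules for both.
for port in (10082, 10102):
    print(port, reachable("beserver.example.com", port))
```

If either check fails from the client (or the name does not resolve at all), fixing DNS/hosts entries and firewall rules is the usual first step before revisiting the agent configuration.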