Deduplication folder stays offline
Forum,

We upgraded Backup Exec from 2010 R3 to 2012 SP1; we have the dedupe option for both versions. The upgrade was done two weeks ago and we had been running dedupe jobs successfully. Now the deduplication folder will not come online. We have about 10 TB of backups extending back 6 months.

Under "Storage" this message is displayed next to the dedupe folder: "This device has not been discovered correctly. Cycle the services on _CoSRV-BEX to retry device discovery." This alert is also logged: "Backup Exec was unable to initialize and communicate with the device [SYMANTECOstLdr MACF] (The handle is invalid.). Click the link below for more information on how to diagnose the problem. http://www.symantec.com/business/support/index?page=answers&startover=y&question_box=V-275-1017" Note that the above link is for 2010, not 2012.

Actions taken:
- Using the BE Services manager, the services were recycled (many times) - no improvement.
- The server was rebooted - no improvement.
- The services, including the dedupe services, were recycled again - no improvement.
- The server and drive array were powered off and powered back on; everything came up normally. We recycled the services with and without the dedupe services after this power-up - no improvement.

FYI: we have a Dell PowerEdge R710 and an MD1200 array for the local D: drive, which holds the dedupe folder and nothing else. The server runs Windows Server 2008 R2 and has 64 GB RAM. There are no hardware errors, the physical array is normal, and drive D: can be browsed. LiveUpdate shows we are up to date.

Some Google searches suggest solutions for 2010, not 2012: remove the target servers from the Devices window and disable client-side dedupe. How can that be done in 2012?

I have opened a support ticket with Symantec, but I cannot get them to call me back. Symantec advises that the Deduplication support team will call me back. I was promised a call back 3 ½ hours ago, but that hasn't happened.
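For what it's worth, here is the recycle sequence we used, written up as a dry-run script. The service display names are my assumption from a typical BE 2012 install (verify yours in services.msc), and the echo prefix only prints the commands instead of running them:

```shell
# Dry-run sketch of cycling the Backup Exec services with the dedupe
# services included. Display names are assumed; check services.msc.
RUN=echo   # remove "echo" to actually stop/start the services

stop_order="Backup Exec Job Engine
Backup Exec Agent Browser
Backup Exec Server
Backup Exec Device & Media Service
Backup Exec Deduplication Manager
Backup Exec Deduplication Engine"

# Stop everything, dedupe services last...
printf '%s\n' "$stop_order" | while read -r svc; do
  $RUN net stop "$svc"
done

# ...then start in the reverse order, so the dedupe engine is already up
# when the device & media service retries device discovery.
printf '%s\n' "$stop_order" | tac | while read -r svc; do
  $RUN net start "$svc"
done
```

Still no improvement for us, but at least the order is reproducible.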
I have called back twice with the ticket number and been shunted to the voicemail of the engineer who owns the ticket. Is there any hope for this? Should I look for a replacement for Backup Exec? ... Frustrated and hoping we don't have to restore anything.

Jobs stuck indefinitely in Queued Status
We have had an ongoing issue for about two months now, despite three clean rebuilds of Backup Exec 2012. We have opened numerous cases with Symantec to resolve it, and they claimed at first that Hotfix 209149 (see http://www.symantec.com/business/support/index?page=content&id=TECH204116) corrects the issue.

The issue also shows up as Robotic Element errors stating that OST media is full, which brings the "virtual" slot for the PureDisk or OST storage device offline. Restarting the services in BE only makes the problem worse and causes a snowball effect whereby jobs constantly error in the ADAMM log files. Essentially, the jobs can never get a concurrency/virtual slot and they stay Queued forever. I have seen others on this forum with this problem, and while the forum administrator seems to mark them as "Solved", they are not - I see the threads drop off with no resolution identified.

Are other people having this problem? If so, how are you overcoming it? Once it starts, the environment is essentially dead in the water because the jobs never start (they sit Queued forever) - save for one concurrency, which for an environment our size is only a quarter of what we need.

We use a CAS and 4 MMS servers on 2008 R2 with all patches applied, a 32 TB PureDisk volume on each MMS, a Data Domain OST connection to a DD-670 with OST-Plug 2.62 (28 TB), replicated catalogs, and duplicate jobs for optimized deduplication between MMSs. We run BE 2012 SP3 clean - we reinstalled with SP3 slipstreamed because Symantec said this problem could be fixed either by a manual database repair on their end or by a reinstall. We chose reinstall to validate whether SP3 truly fixes the issue. It is clear to us it does not. We are looking for anyone else who has had this problem to report into this forum.
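To put numbers on the "one concurrency" problem, here is some back-of-envelope window math (job count and durations are made up for illustration, not our actual schedule):

```shell
# Illustrative only: how the backup window stretches when the dedupe
# device only ever grants one virtual slot. Assume 8 jobs of 2 hours each.
jobs=8
hours_per_job=2
for slots in 1 4 8; do
  # passes needed = ceil(jobs / slots), via integer ceiling division
  window=$(( (jobs + slots - 1) / slots * hours_per_job ))
  echo "$slots slot(s): ${window}h window"
done
```

With one slot the same eight jobs take 16 hours instead of 2, which is exactly the "dead in the water" behaviour we see.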
Thank you, Dana

BE2012 WAN optimized deduplication with a communication or Central Administration Server (CAS) failure
Version 14.0 1798

Problem: I'm having trouble getting past some issues with this structure. It comes down to this: either optimized deduplication works but I can't use the MBES/MMS when the CAS is down, or both sites are fault tolerant but I can't set up optimized deduplication between them.

Goal: My goal is to use optimized/shared deduplication to replicate server backups between my two sites, and to be able to restore those backups and local backups during a VPN failure or site failure. If site A is down, I want to recover site A servers to site B. If site B is down, I want to recover site B servers to site A.

Details: I thought that I could place a Backup Exec 2012 server at each site and use optimized deduplication for this, but I ran into a few problems. Optimized deduplication (or shared deduplication, as it is now called) seems to require the Central Administration Server Option, which has the following configuration choices for its Managed Backup Exec Servers:

1. Centrally managed Backup Exec server
1a. Centrally managed Backup Exec server with unrestricted access to catalogs and backup sets for restore
2. Locally managed Backup Exec server

Options 1 & 1a: With the "Centrally managed Backup Exec server" option chosen for the Managed Backup Exec Server, having a CAS with a deduplication folder at site A and an MBES with a deduplication folder at site B works, and staged jobs will run from one deduplication folder to the other. The problem is that if the CAS is down or the VPN is down, I can't even bring up the BE2012 console on the MBES, let alone run jobs or restores. The MBES is totally dependent on a persistent communication link with the CAS under the "Centrally managed Backup Exec server" configuration.

Option 2: An MBES that is a "Locally managed Backup Exec server". While the CAS and MBES can still function without each other under this scenario, I cannot share the MBES deduplication folder with the CAS.
So I can't do optimized deduplication between the two. Am I wrong about optimized deduplication requiring a CAS plus a centrally managed Backup Exec server? If so, please explain how to set it up. Is there a way to use a centrally managed Backup Exec server when the CAS is down? I don't understand how I'm supposed to recover if the CAS is gone.

Very slow job rate
Hi,

We are using the new Backup Exec 2012. The server has 24 GB RAM and 4 processors with 6 cores each. Deduplication storage is located on an HP StorageWorks array with 12 10K disks.

Sometimes everything is OK. But on weekends during full backups, 7 to 10 jobs may run at the same time, and things can become painfully slow. Most of the time the job rate is 1000 to 2000 MB/min, but sometimes on weekends all jobs run at 8 to 20 MB/min and take forever; I need to stop them and launch them again.

What is going on? CPU usage is under 5%, memory seems to be under 25% used, disk queue is under 0.05, and the network is at 10% of a 1 Gb card. I really don't know what I need to do...

Duplicate to tape at DR site
Hi,

If I set up a single job definition to first back up to the local dedupe disk at site 1 and then duplicate this to the dedupe disk at site 2 using optimized deduplication, would another stage that duplicates to tape at site 2 use the data stored at site 2 and not site 1? Thanks.

NIC Teaming Slow Throughput, any ideas?
Hey guys,

We currently have 2 Gb set up through NIC teaming for our Backup Exec setup. When we have backups running, it barely reaches 35-40% of the bandwidth. I am using Backup Exec 2010. Has anyone else here set up NIC teaming before? What are some things that might cause slow performance? Are there any specific settings in Backup Exec that may help? Thanks!

Disk-to-tape backup taking progressively longer for single job
We have about a dozen backup jobs and they all follow the same pattern: back up to a deduplication disk, then later duplicate the job out to tape. This has been working fairly well, but I've noticed an issue over the past 2+ weeks with a single job. This job backs up our Exchange databases to the dedupe disk, then later out to tape. The disk-to-disk portion has stayed consistent: 250 GB transferred at 2500 MB/min on average. The disk-to-tape part is where it has gotten out of whack. It started out taking an hour and a half to two hours to copy, but has gradually gotten longer and longer, until now it is taking over 6 hours to do the same amount of data. We haven't changed anything since this job was created: same dedupe disk, same tape library, same type of tapes, same encryption, etc. This is the only job doing this; all the others are running just fine in a fairly consistent time frame. Any ideas?

BE2014 Dedup storage device keeps jobs queued, only 1 job runs at a time
Backup Exec 2014 with dedupe storage on a local disk. Concurrent operations has been set to 8, and the BE media server has been rebooted several times. Yet when a number of jobs kick off at the same time, all with the dedupe disk as the destination, only one runs; the others sit at Queued. When the first job finishes, one of the queued jobs starts processing, and so on. It's as if the concurrent operations value is being ignored and treated as 1 when it isn't. These jobs have GRT enabled and are all using the Windows agent. What could be missing? With the jobs running in sequence instead of in parallel, this is obviously stretching our backup window. Thanks.

NB7.5 Remote Backup Procedure after upgrading & changing Master
Environment: NetBackup 7.5 media server with PureDisk and a tape library attached. I need to back up remote site "XYZ" over a WAN link.

History: Previously we were backing up XYZ using synthetic backups to media server "B", with master server "A" in another location. We then moved to a new master server "A1" and media server "B1". Before moving, we replicated data using SLP from B to B1. We used the following link to seed: http://www.symantec.com/business/support/index?page=content&id=TECH144437

In particular this section:

For example, assume two new remote clients, remote_client1 and remote_client2, are being backed up for the first time. Data for both clients has been copied via a transfer drive and backed up locally to the media server media1, using a policy called "transfer_drive". Run the following commands on the media server to set up a special seeding directory using the transfer_drive backup images for each client:

$ seedutil -seed -sclient media1 -spolicy transfer_drive -dclient remote_client1
$ seedutil -seed -sclient media1 -spolicy transfer_drive -dclient remote_client2

Verify the seeding directory has been populated for each client:

$ seedutil -list_images remote_client1
$ seedutil -list_images remote_client2

Run backups for remote_client1 and remote_client2, then clean up the special seeding directory:

$ seedutil -clear remote_client1
$ seedutil -clear remote_client2

Clearing the special seeding directory is important. The source backup images referenced in the special seeding directory will not be expired until they are no longer referenced. To help with this, the special seeding directory for a client will automatically be cleared whenever an image is expired by NetBackup for that client. That being said, it is good practice to explicitly clean up the special seeding directory when it is no longer needed.

Now: We attempted to run backups but continually failed with Error 14. We had the Accelerator unticked.
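A scripted version of that seeding sequence, as a dry run (it only prints the commands; drop the "echo" to execute them on the media server; the media server, policy, and client names are the ones from the article):

```shell
# Dry-run wrapper around the TECH144437 seeding steps. Only prints the
# commands; remove "echo" to execute them for real on the media server.
MEDIA=media1
POLICY=transfer_drive
CLIENTS="remote_client1 remote_client2"

for client in $CLIENTS; do
  echo seedutil -seed -sclient "$MEDIA" -spolicy "$POLICY" -dclient "$client"
  echo seedutil -list_images "$client"   # verify the seeding directory
done

# After the first real backups for each client succeed, clear the
# seeding directory so the source images can expire normally.
for client in $CLIENTS; do
  echo seedutil -clear "$client"
done
```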
Due to the way our VPNs and WAN links are set up, we decided to point the client to another media server, C1, which connects to the A1 master server. How should I progress with backing up the client? Shall I use the Accelerator, enable client-side dedupe, and just run a full backup over the link? Or shall I somehow replicate from media server B to C1 (or from B1 to C1), then follow the above link on seeding, and then run a full backup? I'm a little confused about how remote backups are actually supposed to work, whether the Accelerator works on the first backup, and whether it will use the existing dedupe data. Any help is appreciated.

5220 Slow Backup Performance
Hi everyone. We recently invested in two 5220 36 TB appliances to replace the 7.1 Windows media servers we were using. We are moving to (almost) tapeless backups and the appliances seemed a good bet. We will be running the two in alternate sites using AIR, backing up 95% of our data with VADP over SAN.

We are currently in a transitional period where we are seeding the 5220 that will eventually be based in the remote site, so it is sitting alongside the other appliance in the same datacentre. We are still writing all backups to 6 LTO-3 drives and need to continue this until we physically move the second appliance. AIR is in operation and both appliances are currently protecting around 170 TB. The NBU version is 7.5.5 on the masters and 2.5.1 on the 5220s.

If I run an isolated backup (i.e. no other activity on the appliance) it flies, and I mean seriously flies: I'm getting in excess of 120 MB/s in some cases. Our nightly CINCs also run OK; nowhere near the above, but acceptable throughput nevertheless. The issue is our full backups at the weekend. We have 420 VMs to process (approx 20 TB of data), and I'm lucky if I get more than 5 MB/s over SAN; the backup window is really starting to creak. We do use query-based VM selection and a maximum of 2 connections per datastore.

I've been sweating over this for weeks. We've been updating Brocade switch firmware, swapping out fibre, and playing with buffer sizes/numbers, none of which has made any great difference. We have even had a Symantec appliance engineer on site to check things out (array batteries, hardware errors, etc.), which was interesting but largely unproductive. Symantec support over email/phone has been beyond disappointing.

After this weekend's slowness, I'm almost certain that it is the AIR replication and tape duplications creating heavy I/O and severely impacting the backup write performance.
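For scale, some rough arithmetic on why 5 MB/s per stream can't close the window. Per-VM sizes below are just the 20 TB total averaged over 420 VMs, and the wave count assumes the ~20 concurrent streams we typically see:

```shell
# Back-of-envelope window math: 420 VMs, ~20 TB total, ~20 concurrent
# streams. Compare the two observed per-stream rates: 5 MB/s under SLP
# contention vs 120 MB/s isolated. Figures are illustrative averages.
awk 'BEGIN {
  vms = 420; total_gb = 20 * 1024; streams = 20
  gb_per_vm = total_gb / vms                     # ~48.8 GB average per VM
  split("5 120", rates, " ")
  for (i = 1; i <= 2; i++) {
    rate = rates[i]                              # MB/s per stream
    vm_hours = gb_per_vm * 1024 / rate / 3600    # hours per VM
    waves = int((vms + streams - 1) / streams)   # ceil(vms / streams)
    printf "%3d MB/s per stream: %.1f h per VM, ~%.0f h for all fulls\n",
           rate, vm_hours, vm_hours * waves
  }
}'
```

At 5 MB/s per stream that works out to roughly 2.8 h per VM and about 58 h for a full pass, versus about 2 h total at the isolated 120 MB/s rate, which matches what I see when the dups/reps are cancelled.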
I had considered this before, but I was under the illusion that the appliance was built to handle more than I could probably throw at it. I'd say there are probably no more than 20 or so backup streams hitting it at any one time, plus the 6 tape duplications, plus the replications to the remote master. What I saw was a considerable increase in backup throughput when the SLP was suspended and the dups/reps cancelled, and then steady degradation when they were re-enabled. I'm surprised and massively disappointed by this finding.

I note a lot of issues reported with slow rehydration to tape, and indeed I suffered this on our Windows media servers with SAN-based MSDPs. In the case of my appliances the rehydration performs great... just at the expense of seriously slow backups!

My question: has anyone experienced similar behaviour on the appliances, and how do you handle the I/O? Should I be limiting I/O to the disk pool? What would be the recommended setting? Any advice greatly appreciated. Thanks for your time.

Ed