
Job to duplicate B2D sets to removable drive scatters data all over the place and kills Agent

xilog
Level 3
Hello.

I hope you good people will be able to help me; I'm at my wits' end.

Apologies for the lengthy post; the devil is probably in the detail with this problem.

I'm attempting to create a backup server to enable a two-stage backup, first stage being a nightly backup from servers to the local drive array and then duplicating that backup to a removable drive.  Then at the end of the week the removable media is swapped and the local backup deleted, leaving the removable device as the "live" backup and the local array clean for another week's backups.  Full backup runs on Friday night with diffs every Sat-Thu night.

I have created what I think are suitable devices, media sets and jobs and indeed everything works fine until the duplicate jobs start.

There are three devices, a B2D folder called "Servers" for daily backups, a B2D folder called "backupsvr" which contains a backup of the backup server immediately after Backup Exec installation and a removable B2D device called "Disk_1" (set up as drive F:) which is the destination for duplication.  All these use the default settings of 1GB files and 100 backup sets.

There are just two media sets, "Daily Backups" and "Daily Duplicates".  Daily Backups has an OPP of 2 weeks and indefinite append, Daily Duplicates has an OPP of just 1 day since its disks will be ejected at the end of the week and not reinserted for at least 5 weeks and also has infinite append.

There are two main backup jobs, Friday Full and Daily Diff, both targeted at the Servers device and putting its media into the Daily Backups pool.  Each of those has an associated Duplicate task targeted at Disk_1, the idea being that after the main backup is complete, it's then duplicated on the removable device.

To my way of thinking, this should all work, yes?

Well what actually happens is this:

The "Full Backup" runs properly, backing up to the Servers B2D device and putting the media in the Daily Backups pool.
The "Duplicate full backup" task starts, initially putting files onto Disk_1 as expected.  When it's put 20GB of data onto the disk (it's a 1TB drive) it goes very, very slowly and then kills the BE agent, finally it gets round to verifying this 20GB and when it starts writing again it starts putting the B2D files in the Servers device and the backupsvr device (seemingly at randon) and putting the media in any pool it seems to feel like; it's put some in Daily Backups, some in Daily Duplicates, some in Imported media.  Some even ends up in Scratch Media.

What the blazes can be going on?  It's driving me nuts trying to figure out what's gone wrong.

Any advice you can offer?

Thanks,

Kevin.

26 REPLIES

Not applicable
Hello Xilog, I guess you have selected "All Devices" in the job properties.  I would suggest you target that job at the B2D device you want it to write to.  You can change that setting by right-clicking the job | Properties | Device and Media | Device (dropdown), then selecting the specific B2D device for the job.

xilog
Level 3
Hi Masti,

I wish it was that simple; the job was explicitly targeted at Disk_1 already.

Deatheye
Level 5
Interesting.  We've got the same problem with data being scattered all over the place.  Tried it out with two different media servers... also defined the target device...
Looking forward to an answer to this ^^'

RahulG
Level 6
Employee
Are there any error-handling rules which you have configured?  If there aren't, then it might be some corruption in the database causing this issue.
Let me know if this happens only on duplicate backup jobs or on all backup jobs.

Deatheye
Level 5
Could you check what you configured as the source device for the duplication job? I just noticed that my setting there was wrong. Still trying to find out how to repeat the duplication job for verification.

Ben_L_
Level 6
Employee
I have a couple questions on what you are doing.

1. "Daily Duplicates has an OPP of just 1 day"  - So with this configuration the Tuesday duplicate job could overwrite the Monday duplicate job.  Is this how you wanted it configured?  If you want to keep all the duplicates for the week you will need to increase this to atleast 5 days to not have any problems with previous duplicates getting overwritten.

2. "When it's put 20GB of data onto the disk (it's a 1TB drive) it goes very, very slowly and then kills the BE agent" - Explain kills the BE Agent.  This could be the root of all your problems. Please explain this with as much detail as you can provide.

xilog
Level 3
Hi folks,

I'll try to be as detailed as possible:
  1. I chose 1 day just so that if the initial backup takes a very long time then the later part of the backup wouldn't overwrite the first part.  The duplicates for Sat-Thu are all "Append, fail if no appendable media" type. So there's no chance that there will be an overwrite.
  2. OK, 20GB gets written, and then the icon bottom right that shows the status of the media server services changes from the green "play" arrow to the red stop square, and in services.msc the agent is shown as stopped and there are error log entries showing that beagent caused an exception (0xc0000005 I think from memory; I'm at home atm) and terminated.  At this point there is still a huge amount of activity on the local RAID but nothing is apparently being written to the removable drive.  After about 45mins, the job switches to "verify" status briefly and then carries on backing up but to a wrong target device.
  3. No error handling rules are set up yet.
Since I first started this thread I've experimented by creating new B2D and removable B2D devices, media pools and jobs, and a quick test seems to be working.  It's possible there was a problem with the database, I guess, or that I mismanaged the initial setup (which would be annoying, but if so at least I know it's not a serious problem!).  I'll check it after tonight's jobs and see what happens.

Ben_L_
Level 6
Employee
1. For a test, change the OPP of your duplicate media set to 5 days. Then change the duplicate job to "Append to media, Overwrite if no appendable media is available." 

2. beagent is not a part of Backup Exec 11.x or higher.  This is an older agent in 10.x and lower. You may have some pieces of an old install still running on the server that need to be removed.

xilog
Level 3
OK Ben, OPP to 5 days, should I leave append period as infinite?

Also, beagent was a slip of the fingers (I was writing from memory); it was beremote.exe that was faulting, with a c0000005 error.

xilog
Level 3
Came in this morning and saw that there was a pattern to the scattering of data across devices.  The full backup worked okay as before, then the duplicate started.  It initially used its targeted device Disk_1 for the first 10GB chunk, and then part way through the second 10GB chunk the following errors appear in the Windows event log:

07:23:44, Application Error: Faulting application beremote.exe, version 11.0.7170.0, faulting module EDBProv.dll, version 11.0.7170.0, fault address 0x00014d31

07:23:48, DrWatson: The application, C:\Program Files\Symantec\Backup Exec\beremote.exe, generated an application error.  The error occurred on 07/08/2009 @ 07:23:45.287. The exception generated was c0000005 at address 0C114D31 (edbprov!InitEdbProv)

After that, the duplicate job continues but puts data sequentially into the Disk_1, Backupsvr and Server B2D locations but is putting it into the correct media set, Daily Duplicates.  The job is also running very, very slowly at this point, presumably because the agent is not working.
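In case it helps anyone compare notes, a small sketch for pulling the faulting module and fault address out of "Application Error" entries like the ones pasted above (this just parses the log text as written here; a proper query would go through the Windows Event Log itself):

```python
import re

# Matches Application Error entries of the shape pasted above.
FAULT_RE = re.compile(
    r"Faulting application (?P<app>\S+), version (?P<app_ver>[\d.]+), "
    r"faulting module (?P<module>\S+), version (?P<mod_ver>[\d.]+), "
    r"fault address (?P<addr>0x[0-9a-fA-F]+)"
)

def parse_fault(line):
    """Return a dict of fault details, or None if the line doesn't match."""
    m = FAULT_RE.search(line)
    return m.groupdict() if m else None

entry = ("07:23:44, Application Error: Faulting application beremote.exe, "
         "version 11.0.7170.0, faulting module EDBProv.dll, "
         "version 11.0.7170.0, fault address 0x00014d31")
print(parse_fault(entry)["module"])  # EDBProv.dll
```

The faulting module being EDBProv.dll (the Exchange database provider) lines up with the crash happening while the Exchange duplicate runs.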

It appears that the error is occurring as it's duplicating the backup of the Exchange server as the following errors also appear in the BE job log at the same time:

V-79-57344-65069 - WARNING: "\\{removed by poster}\Microsoft Information Store\First Storage Group\Staff" is a corrupt file.
This file cannot verify.
V-79-57344-65069 - WARNING: "\\{removed by poster}\Microsoft Information Store\First Storage Group\Students" is a corrupt file.
This file cannot verify.
V-79-57344-65072 - The connection to target system has been lost. Backup set canceled.

The original backup of these information stores was successful.

Does this help pinpoint the problem any better?

Thanks,

Kevin.

xilog
Level 3
So this time, having run LiveUpdate to solve the agent problem, the agent didn't fault.  The job still behaves in the same way, though; it fails at the same place.  The next run will exclude the Exchange server to see whether it's duplicating the Exchange data that's causing the problem, as that's about where it started misbehaving.

This is driving me insane; everything is set up correctly but BE just seems to want to scatter my data all over the place.

xilog
Level 3
Searching the forums and knowledgebase yielded a few clues where GRT was failing duplicate to tape backups so I wondered if it might be the culprit here.  It is.  Last night a backup ran just as before but with just the GRT option disabled.  The duplicate job worked perfectly.

How does one report a bug?  Is there a bug report tracker/system here?

Kevin.

nn
Level 2
I used to have all the media set to 64GB and only started having the problem when I reduced it to 16GB (so that I could pre-allocate and avoid fragmentation without wasting too much space).  I will try this weekend with my offsite media set to 64GB and see if it solves the problem.  Maybe the problem comes when the GRT backup doesn't fit into a single file in the duplicate set?  I am on BE 12 with all service packs and hotfixes.  No errors from the agent.  I do have a GRT backup of Exchange 2007.
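That hypothesis is easy to sanity-check with some quick arithmetic on how many B2D files a given backup set would span (the sizes below are just the figures from this thread):

```python
import math

def bkf_files_needed(backup_gb, max_file_gb):
    """How many B2D files a backup set of backup_gb would span,
    given the device's maximum file size."""
    return math.ceil(backup_gb / max_file_gb)

# A ~30GB Exchange GRT set fits in one 64GB file...
print(bkf_files_needed(30, 64))  # 1
# ...but is split across files once the media size drops to 16GB:
print(bkf_files_needed(30, 16))  # 2
```

If the failures only appear when the count goes above 1, that would support the "GRT set split across files" theory.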

xilog
Level 3
I'd be interested to hear your results.  Initially my disk sets were set to 100GB, more than twice the size required for the Exchange backup, but the duplicate was failing at only 20GB into the backup (the size of my first information store).

Ben_L_
Level 6
Employee
How does one report a bug? Is there a bug report tracker/system here?

Best way would be to open a case with support to have them look at it.  The issue you are facing may already be resolved with a later version of the product.
http://www.symantec.com/business/support/contact_techsupp_static.jsp

xilog
Level 3
Thanks for the link Ben, I'll give them a shout.

nn
Level 2
The duplicate job has nearly finished and all the media has been allocated from the correct B2D device so far (680GB backup with 30GB Exchange GRT and SQL but no SharePoint).  One thing I find strange is that it takes over half an hour to allocate each new 64GB file on the external USB RAID array (these are new files, as I had to wipe everything when starting with the new media size).  Maybe it will be faster when it can overwrite existing ones.

Ken_Putnam
Level 6
Maybe it will be faster when it can overwrite existing ones.

Should definitely speed up when you no longer need to create/format new BKF files

Deatheye
Level 5
Hmmm, interesting.  Could you please keep us posted if you learn anything new, xilog?

We have a case open with Symantec support for the data-scattering problem.  So far we haven't got anywhere with it.  The job the Symantec support engineer created worked fine over several tries, but no job we created ourselves ever did, on 3 different Backup Exec servers.
I'll try to check whether I can confirm the GRT connection too.


EDIT: Where exactly should I deactivate GRT?  I found several places to activate/deactivate GRT inside the job.