Duplicate Failed V-79-57344-37956

Ryan_White
Not applicable
We are backing up 700 GB of data to disk using Backup Exec 10. The total storage space of our disks is 557 GB, and the retention is set to 2 days. The jobs are set up so that data is backed up to the disks and then duplicated off to tape. Error V-79-57344-37956 shows up on the duplicate job: 0xe0009444 - The requested source duplicate backup sets catalog record could not be found. Perhaps the media containing the source backup sets was previously deleted.
Final error category: Job Errors

Can you give me some insight into why this is happening? Is there enough drive space before the data is overwritten on the disks?
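Here's my rough arithmetic on the space question (a sketch only, assuming each nightly run writes the full 700 GB):

```python
# Rough capacity check: with a 2-day retention, two nights' worth of
# backup data must coexist on disk before the oldest set can be
# overwritten. The 700 GB figure assumes every run is a full backup.

daily_backup_gb = 700      # data backed up per run
retention_days = 2         # overwrite protection period
disk_capacity_gb = 557     # total B2D storage

required_gb = daily_backup_gb * retention_days   # sets that must coexist
fits = required_gb <= disk_capacity_gb

print(f"Need {required_gb} GB, have {disk_capacity_gb} GB -> fits: {fits}")
```

If that's right, two days of data can't coexist on 557 GB of disk, so the older backup sets would be overwritten before the duplicate job gets to read them.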
40 REPLIES

Ross_Smith_2
Level 4
You only need to delete the templates that are failing; you only need to re-create all 5 if all of them are failing with this error.

Sharvari_Deshmu
Level 6
Hello,

Please keep us updated.


Thanks,

NOTE: If we do not receive your reply within two business days, this post will be marked as assumed answered and moved to the answered questions pool.

Joshua_McKenzie
Level 3
So this issue can occur on even one job out of nine created by the policy?

Trying the duplicate template deletion/recreation right now to see if that addresses the issue... I just find it very inconsistent that, in an environment where a program duplicates template data across jobs with the only variation being the selection list input, seemingly random jobs become "corrupt" and stop working.

Sure enough, that fixed the problem. Now to see if any of the other 8 jobs are arbitrarily corrupt tonight...

~Josh

Ross_Smith_2
Level 4
Hi Josh,

Thanks for getting back to us and reporting that the fix worked for you. I agree it's an annoying issue, I'm just glad to have a workaround.

This is the last problem I have with BackupExec now. Over the past 9 months I've been able to report and get fixed every other bug I've found. Just waiting for Veritas to work out what's going wrong here and issue the fix.

Now, if only they'd document the problem and the work around on their knowledgebase...

Wolfgang_Bruchh
Level 4
Hello, I'm glad that I could finally "somehow" fix the issue. Recreating the template worked for me as well. Still, it's not acceptable to have had this interim solution for such a long time. I dread managing 42 BE media servers and recreating templates and the corresponding jobs because of this known bug. VERITAS HAS TO FIX IT, no question!

BTW: the initial error message is confusing, saying "Catalog query failed." The affected media couldn't be re-catalogued; whenever I tried, I got error 0xe0000900 (V-79-57344-23-4). None of the described tasks worked except the template recreation, which I found after several hours in this forum.
After that fix I could also successfully catalog the media again.

Hope a hotfix will be available soon.
Thanks to the Forum Members who addressed this.

Ross_Smith_2
Level 4
It's now 6 months since I first reported this to Veritas. I've had another job fail, and the 'delete and re-create' workaround doesn't appear to have helped, so I've yet again reported this issue to Veritas.

Just to keep this in the public eye, I'm reporting this here, with a copy below of my e-mail to Veritas.

And yes, I have had enough problems with BackupExec that I'm on first name terms with half of the UK advanced support team...

Ross

------------------------------------------------------
Case ref: 240-139-205
------------------------------------------------------

Hi Gareth,

Here's the log. I've reported the exact same problem before, and it's been replicated by Colin. Colin found me a workaround of deleting & re-creating the template within the policy to re-create the job. That's worked in the past, but hasn't worked for this job - it's failed 3 weeks in a row now despite being re-created.

The last I heard, Colin was looking into this and trying to find the cause so it could be fixed.

The previous case logs of mine referring to this problem are 240-109-275 and 240-109-513. The problem was first reported by me in April; six months on, I really do want a solution. Oh, and I'm not the only person experiencing this problem: 9 other people reported it on just one thread on the discussion forums. I was able to solve their problem, but the two Veritas staff who tried to answer were completely unaware of the issue.
http://forums.veritas.com/discussions/thread.jspa?threadID=49934&start=15&tstart=0

All the best,

Ross

Nick_Terry
Level 3
Partner
I too am having this problem on a couple of servers and deleting and recreating the duplicate template seems the only way to temporarily fix it. Can anyone give me an update as to the situation regarding this issue? I've installed SBE 10d on one of the servers and VBE 10 Service Pack 3 on the other but the problem still occurs on both.

Ross_Smith_2
Level 4
Hi Nick,

I've got a case open with Veritas regarding this. It sounds like it will be a long while before it is fixed, but there is some good news: Veritas have now found the cause of the problem. They also have another workaround for BackupExec build 5520 onwards which can help prevent this problem occurring.

They are in the process of writing this all up as an official knowledgebase article, but I know that's likely to take a while so I'll do my best to explain what I know here:

The workaround in brief is:
For any job where this error has already occured, delete and re-create the affected templates. In the future, any time you have a problem with a duplicate job, fix the problem and then right-click on the duplicate job in the job history and click 'Run Now' to repeat it.

And the full explanation:
Whenever a duplicate job fails, it retains its list of targets. Veritas have told me this was by design, to ensure that data would not be missed and would be included with the next run of the job. The problem is that they don't seem to have considered what to do if the source data is no longer available: if that happens, this error is generated and the only solution is to delete and re-create the template.

Now, if your source data will still be available the next time your duplicate job is scheduled you should never see this problem, but for those of us running disk-to-tape backups, we don't have a lot of space on the disks and data is often overwritten.

If you had limited disk space there was no way around this problem until build 5520 of BackupExec, in which Veritas quietly fixed a problem that prevented many duplicate jobs being re-run manually. Now that this is fixed, we can repeat duplicate jobs while the source data is still there and prevent this problem recurring.

So, as long as you can solve problems with duplicate jobs and re-run them before the source data is overwritten, you should never see this problem again.

Of course, if you get a problem you can't fix before the data is lost, you'll still have to delete and re-create the template, but at least we know what's happening now.
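As a rough sketch of the behaviour described above (this is just a conceptual model of what Veritas told me, not anything from the Backup Exec code; all names are made up for illustration):

```python
# Conceptual model of the failure mode: a duplicate template keeps its
# pending source-set list across failed runs, so once a pending set's
# media has been overwritten, every later run fails until the template
# (and with it the pending list) is deleted and re-created.

class DuplicateTemplate:
    def __init__(self):
        self.pending_sets = []          # retained across failed runs

    def schedule(self, backup_sets):
        self.pending_sets.extend(backup_sets)

    def run(self, media_on_disk):
        missing = [s for s in self.pending_sets if s not in media_on_disk]
        if missing:
            # Corresponds to 0xe0009444: source catalog record not found.
            raise RuntimeError(f"source sets missing: {missing}")
        duplicated = list(self.pending_sets)
        self.pending_sets.clear()       # only a clean run resets the list
        return duplicated

tpl = DuplicateTemplate()
tpl.schedule(["set_monday"])
try:
    tpl.run(media_on_disk=[])           # first run fails (e.g. bad tape)
except RuntimeError:
    pass
tpl.schedule(["set_tuesday"])
# "set_monday" has since been overwritten on disk, so this run fails
# too, and will keep failing until the template is re-created.
try:
    tpl.run(media_on_disk=["set_tuesday"])
except RuntimeError as e:
    print(e)
```

The key point is that only a clean run clears the pending list; any set that goes missing while still pending poisons every subsequent run.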

Ross

Nick_Terry
Level 3
Partner
Hi Ross,

Thanks for the reply. Very helpful.

Nick

Kurt_Glore
Level 3
What is the status on resolution for this issue? I have been using the workaround long enough. I still have dup jobs that fail after a few weeks, and then I have to delete the jobs and recreate the policy they belong to. I don't have time to put band-aids on something that is supposed to be automatic.

Ross_Smith_2
Level 4
Kurt, if you read my post above you should find that you shouldn't always need to delete and re-create the templates.

You should only need to delete a template if a duplicate job tries to run when the source data is missing.

This normally means the job has to run twice before this error happens: an error on the first run will generate this error on the next. If you follow my advice above and fix the problem after the first failure and re-run the job, not only will you prevent this problem recurring, but you will also ensure that you have the duplicate of the data that should have been generated by the first run.

I agree this is still a problem and Veritas are aware that I only consider this a workaround and expect this to be fixed, but they have also told me it will be a long while before they do fix this.

Ross

Message was edited by: Ross Smith

Kurt_Glore
Level 3
Ross,

Thanks for the input. My dup jobs fail with the following error: The query for media sequence number 0 of this media family was unsuccessful.
Ensure that all media in the family have been inventoried and cataloged.

I have tried everything from the registry suggestion to things mentioned in this thread. The problem comes back in time. I hope they get a fix soon. We have invested a lot of time and money in this product and it is not performing as advertised.

Ross_Smith_2
Level 4
Hi Kurt,

That's an error I've not come across before, I'm afraid, and if your duplicates are failing that regularly, the best advice I can give is to phone Veritas support.

Ross

Norbert_PORSCHE
Level 3
Hi All,

I came across the same problem resulting from a B2D job that ran out of disk space. The media set expected to be continued on another B2D media, but the new media could not be created because no space was available, so the B2D job failed and of course all subsequent duplicate jobs failed too.

In this case there is nothing to be fixed except ensuring there is enough space for future jobs.
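To help with the "ensure there is enough space" part, a simple pre-flight check could look like this (a hypothetical helper script, not part of Backup Exec; the path and sizes are examples):

```python
# Check the B2D folder has room for the next run before the scheduled
# job starts, so the media set never runs out of space mid-write.

import shutil

def has_room_for_backup(b2d_path, expected_gb, safety_margin_gb=50):
    """Return True if the B2D volume can hold the next backup run."""
    free_gb = shutil.disk_usage(b2d_path).free / 1024**3
    return free_gb >= expected_gb + safety_margin_gb

# Example: warn (e.g. via your monitoring system) if space is short.
if not has_room_for_backup(".", expected_gb=700):
    print("WARNING: not enough space for tonight's B2D job")
```

Running something like this from a scheduled task shortly before the backup window gives you a chance to free space before the job, and with it the duplicate, fails.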

I also used the workaround and deleted the specific Template to reset the list of sets to be duplicated.
By creating a new policy (before deleting) and importing all templates from my original policy, I created a "template repository" that I can use to recreate a deleted template by re-importing it. This makes the workaround less painful.

But why has Veritas not been able to create a simple, explicit reset function for the duplicate list after such a long time? Lots of people are waiting for this.

First I thought I could "Edit the selection list" of the duplicate job as this function is available in the "Job Monitor" list. But that just results in a crash of Backup Exec.

I only wonder whether this duplicate problem also exists when using the old pre-policy way, where you created just B2D jobs and attached duplicate jobs to them. I have not tried it yet.
Has anybody experience with that?

Norbert

Nick_Terry
Level 3
Partner
Hi,

Shortly after my previous posts I stopped using templates and went back to the old way as you mentioned where you create a job and then a duplicate job attached to it. I don't get the issue with this method. Does anyone have any news on the status of the policy problem?

Ross_Smith_2
Level 4
Bounce!

This is still a major problem for us, this error occurs on a regular basis and we're forever deleting and re-creating templates.

Twice already we've had further complications caused by settings being missed while re-creating these templates, something I feel is inevitable when we have to re-enter so many settings so often.

We haven't lost data yet because of this, but it has caused at least 4 support calls to Symantec.

If anybody from Symantec is actually reading this, please pay attention. Your inability to fix problems like this is costing you directly in support costs.

Ross

xander_ekkel
Not applicable
To add another report: we're having the same problems at a customer site.

Reinstalled and Updated all my servers to Version 10.1 Rev 5629. Applied MP1, but still the same issues.

And it seems the errors are increasing over time.

Ross_Smith_2
Level 4
And yet again we're having to delete and re-create templates simply because a job missed its time window. Veritas / Symantec, if you're listening: it is unacceptable for a product of this nature to contain these kinds of flaws.

I accept that jobs will fail from time to time; however, it should be possible to recover from a failure quickly and easily, either by re-running the job there and then or by waiting for its next scheduled run. Barring new errors, a properly configured job should be expected to run successfully the next time it is scheduled. There should be NO failure mode that makes a job fail on every run from that point forward with no means of recovery.

BackupExec has some of the worst error handling decisions of any program I've used. As a result it's the most fragile program I have ever had the misfortune of administering.

Please, somebody, get these problems fixed.

------------------------------------
Error messages to help others find this thread:
- Unable to determine ADAMM media identification. Catalog query failed.
- Duplicate job failure
- 0xe0009444 - The requested source duplicate backup sets catalog record could not be found. Perhaps the media containing the source backup sets was previously deleted.

mart_g
Level 4
Partner Accredited
Hi, any updates so far ?

Jeoffrey_Becker
Level 3
Hi Guys,

We have the same problem here.

Many servers which are managed by a CASO server have policies set up where data is duplicated to tape by policy after B2D.

If a job fails one day (e.g. a bad tape or a faulty drive), I have to recreate the backup policy, otherwise the next day's duplicate will fail.

The duplicate job has to finish correctly, otherwise it will keep looking for the B2D files from the day the job failed, even weeks afterwards.

We also use the trick of right-clicking a policy and choosing "Copy". You will get a "Copy of ... (policyname)". Just delete the old policy, rename the copy, and recreate the jobs using the selection list.

We will manage over 100 servers with a CASO server, so this really is a pain in the ***


I really hope Veritas will come out with a fix for this problem soon! And I hope the next person who calls tech support about this issue isn't treated as if they are the first person with this problem!!

Message was edited by: Jeoffrey Beckers