BE 20.6 Jobs fail - could not write to database

HansMeiser
Level 3

Hello Vox Community!

For a couple of days now, our BE 20.6 backups have been failing. The error code is 0xe0000f09 - an error occurred writing to the BE database. The error message links to KB item V-79-57344-3849, which cannot be found.

Googling these messages, the error seems to be related to incremental jobs that somehow overlap the full backup, at least to my understanding. But we only run full backups every night, no incrementals at all.

Are there any other issues that may cause this error? How do I troubleshoot it? The job is one we have used for years, and the databases have not changed recently. What did change lately is a small BE update that was applied to both the BE server and the remote servers through the live update function.

9 REPLIES

ManojNisal
Level 3
Employee

Hi HansMeiser,

Whenever a job fails, a couple of debug log files are generated in the "<product install>\Logs" directory on the Backup Exec server.

Can you check the failures/errors printed towards the end of the debug log generated for bengine and paste them here?
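Not an official tool, but if it helps, a few lines of Python run on the Backup Exec server can pull the suspicious lines out of the newest bengine log. The path below is an assumption - adjust it to your "<product install>\Logs" directory.

# Minimal sketch: find the newest bengine debug log and print the lines near
# the end that look like failures or errors.
from pathlib import Path

LOG_DIR = Path(r"C:\Program Files\Veritas\Backup Exec\Logs")  # assumed default install path

logs = sorted(LOG_DIR.glob("*bengine*.log"),
              key=lambda p: p.stat().st_mtime, reverse=True)
if not logs:
    raise SystemExit(f"No bengine logs found in {LOG_DIR}")

newest = logs[0]
lines = newest.read_text(encoding="utf-8", errors="replace").splitlines()

# scan roughly the last 2000 lines for error-looking entries
for line in lines[-2000:]:
    if any(word in line.lower() for word in ("error", "failed", "0xe000")):
        print(line)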

 

Hi @ManojNisal

Thank you for your reply. I am not exactly sure which parts you need; there is a good amount of logs from today. There are 6 files named SERVERNAME_bengine_AlwaysOnLog_DATE.log, totalling 15 MB, and maybe another 20 log files that changed today.

I am going to restart the server and have another go at the backup tonight. If this doesn't help, I will dive into the logs to see what I can find.

Hi @HansMeiser,

Whenever a job fails, it generates a file named like SERVERNAME_bengine_AlwaysOnLog_DATE.log along with a few other files. So you can match the time of any backup job failure (with the "could not write to database" error) against the timestamp of the bengine log file.
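If it helps, something along these lines (plain Python, just as an illustration; the log directory is the assumed default and the failure time is a placeholder) lists the bengine logs written around the time the job failed:

# Rough sketch: list bengine AlwaysOn logs whose timestamp falls near a
# given job-failure time.
from datetime import datetime, timedelta
from pathlib import Path

LOG_DIR = Path(r"C:\Program Files\Veritas\Backup Exec\Logs")  # assumed default
FAILURE_TIME = datetime(2020, 3, 11, 22, 0)  # placeholder - use the failure time from the job history
WINDOW = timedelta(minutes=15)

for log in LOG_DIR.glob("*bengine_AlwaysOnLog*.log"):
    mtime = datetime.fromtimestamp(log.stat().st_mtime)
    if abs(mtime - FAILURE_TIME) <= WINDOW:
        print(f"{mtime:%Y-%m-%d %H:%M:%S}  {log.name}")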

By the way, you mentioned you run full backups every day. By any chance, is the expiry of these backup sets set to 24 hours?

Thank you @ManojNisal 

The job failed again at 07:38 this morning, log as follows:

[14332] 2020-03-12T07:37:39.761 [loops] - BackupBSDProcessor(): Received MSG_RWF_METADATA message from raws.
[14332] 2020-03-12T07:37:39.762 - Status 'E_SSP_DISABLED (0xE000A200)' for '((BESTATUS)0xE000A200L)' in 'BackupJob::SetRwfMetadata:11970'
[14332] 2020-03-12T07:37:43.772 [loops] - ndmpCallbackUpdateAgentCookie: SUCCESS!
[14332] 2020-03-12T07:37:43.781 [loops] - stream data[DEFAULT] halted: SUCCESSFUL
[14332] 2020-03-12T07:37:43.781 [tpfmt] - TF_NDMPGetResult(): MediaServer thread done, returning TFLE 0x00000000
[14332] 2020-03-12T07:37:43.781 [loops] - data halted: SUCCESSFUL
[14332] 2020-03-12T07:37:43.782 [tpfmt] - TF_FreeTapeBuffers: from 0 to 0 buffers
[14332] 2020-03-12T07:37:43.933 [loops] - BaseBSDProcessor::PollDataServer: Shutting down.
[14332] 2020-03-12T07:37:43.934 - Status 'E_SSP_DISABLED (0xE000A200)' for '((BESTATUS)0xE000A200L)' in 'BackupBSDProcessor::GenerateSspRwfFile:5461'
[14332] 2020-03-12T07:37:43.934 [loops] - BackupBSDProcessor::GenerateDRFile() - Enter
[14332] 2020-03-12T07:37:43.934 [loops] - BackupBSDProcessor::GenerateDRFile() - Info: Not a Simplified System Protection Job. Ignoring .DR file creation.
[14332] 2020-03-12T07:37:46.917 [tpfmt] - TF_GetAndResetBytesWrittenToCurrentMedia returning 46612754432 bytes written to media B2D007862, {8365ffb4-6a2a-4813-a70a-1bc728799e04}.
[14332] 2020-03-12T07:37:46.917 [server] - BackupMessageHandler - update_cat_fragment_storage_usage - Passing 46612754432 bytes for media id {8365ffb4-6a2a-4813-a70a-1bc728799e04}.
[14332] 2020-03-12T07:37:46.917 - Status 'E_SSP_DISABLED (0xE000A200)' for '((BESTATUS)0xE000A200L)' in 'BackupJob::GetRwfMetadata:12001'
[14332] 2020-03-12T07:37:46.917 - Restore Work Flow (RWF) Metadata. Size:94. XML:'<?xml version="1.0" encoding="utf-16"?>
<RestoreMetadata RestoreType="3" RestoreSubType="3"/>
'. in 'BackupJob::GetRwfMetadata:12006'
[14332] 2020-03-12T07:37:47.350 - Resetting Restore Work Flow (RWF) Metadata. in 'BackupJob::ResetRwfMetadata:11861'
[14332] 2020-03-12T07:37:47.491 [tpfmt] - WriteEndSet( 1 ) returning 0x00000000
[14332] 2020-03-12T07:37:47.951 [tpfmt] - WriteEndSet( 1 ) returning 0x00000000
[14332] 2020-03-12T07:37:47.951 [tpfmt] - OTC_Close: Closing otc_sm_fptr 0x6c622680.
[14332] 2020-03-12T07:37:47.953 [tpfmt] - WriteEndSet( 0 ) returning 0x00000000
[14332] 2020-03-12T07:37:48.182 [tpfmt] - TF_FreeTapeBuffers: from 0 to 0 buffers
[14332] 2020-03-12T07:37:48.388 [server] - Updating refCat metadata database failed with error -536837659
[14332] 2020-03-12T07:37:48.389 [server] - Source Path set in jobHistoryDetail \\srv-hyperv\System?State
[14332] 2020-03-12T07:37:48.389 [server] - Family Guid set in jobHistoryDetail 3B74CD5F-BFF2-4DDD-977A-E7DD1F42D9A5
[14332] 2020-03-12T07:37:48.389 [server] - (3692) EndOperation(2, 0xe0000f09)
[14332] 2020-03-12T07:37:48.389 [server] - Job::DetermineFinalStatusForResource( fs-2, ec-0xe0000f09 ) - 6
[14332] 2020-03-12T07:37:49.108 [loops] - Informational : RABackupBSDProcessor - Encryption DeInited OK for SET.
[14332] 2020-03-12T07:37:49.108 [loops] - Enter NDMPSnapHostBackupEngine::TrackSnappedBSD
[14332] 2020-03-12T07:37:49.108 [loops] - Job was marked as failed, sending 1 to snapshot cleanup code
[14332] 2020-03-12T07:38:02.833 [loops] - Exit NDMPSnapHostBackupEngine::TrackSnappedBSD
[14332] 2020-03-12T07:38:02.833 [loops] - NDMPBaseHostEngine::AskAgentToDumpCircularLog: Requesting for remote agent always on logs for 'srv-hyperv'
[14332] 2020-03-12T07:38:02.833 [loops] - Enter NDMPSnapHostBackupEngine::AdvanceToNextBSD
[14332] 2020-03-12T07:38:02.833 [loops] - Exit NDMPSnapHostBackupEngine::AdvanceToNextBSD
[14332] 2020-03-12T07:38:02.834 [loops] - Could not clean up metadata file: C:\Program Files\Veritas\Backup Exec\Data\OHTEMP\SRV-HYPERV-VRTSRV-{9DFB9D80-F1C8-487A-B0C1-B0DDFC058BBD}. Reason: 0x2
[14332] 2020-03-12T07:38:02.834 [loops] - Exit NDMPSnapHostBackupEngine::ProcessBSDList
[14332] 2020-03-12T07:38:02.834 [loops] - Exit NDMPSnapHostBackupEngine::ProcessBSDs
[14332] 2020-03-12T07:38:02.834 [ndmp\ndmpclient] - (6561)ndmpcGetUsernamePassword: resourceFlags - 0x1
[14332] 2020-03-12T07:38:02.835 [ndmp\ndmpclient] - (6561)ndmpcGetUsernamePassword: resourceFlags - 0x1
[14332] 2020-03-12T07:38:02.835 [ndmp\ndmpclient] - (6561)ndmpcGetUsernamePassword: resourceFlags - 0x800
[14332] 2020-03-12T07:38:02.836 [ndmp\ndmpclient] - (6561)ndmpcGetUsernamePassword: resourceFlags - 0x0
[14332] 2020-03-12T07:38:02.837 [ndmp\ndmpclient] - (6561)ndmpcGetUsernamePassword: resourceFlags - 0x1
[14332] 2020-03-12T07:38:02.899 [ndmp\ndmpcomm] - ERROR: ndmp_process_messages: unexpected reply message: 0xf403.
[14332] 2020-03-12T07:38:05.721 [loops] - NDMPAgentConnector::EndConnection(): Data Server = srv-hyperv
[14332] 2020-03-12T07:38:05.721 [tpfmt] - TF_CloseTape() baseChannel = 0x0a9add60
[14332] 2020-03-12T07:38:05.729 [tpfmt] - RewindDrive mover ret = 0 (0x0)
[14332] 2020-03-12T07:38:05.729 [tpfmt] - TF_PutChannel( ) 0A9ADD60
[14332] 2020-03-12T07:38:05.730 [loops] - (525)NDMPEngine::Run returning 0x0
[14332] 2020-03-12T07:38:05.730 [tapealrt] - TAPEALERT: Get TapeAlert Flags Return Code = 0X0
[14332] 2020-03-12T07:38:05.731 [tapealrt] - TAPEALERT: TapeAlert Device Flag = 0X0
[14332] 2020-03-12T07:38:05.731 [tapealrt] - TAPEALERT: TapeAlert Changer Flag = 0X0
[14332] 2020-03-12T07:38:05.731 [tapealrt] - TAPEALERT: Get TapeAlert Flags Return Code = 0X0
[14332] 2020-03-12T07:38:05.733 [tapealrt] - TAPEALERT: Get TapeAlert Flags Return Code = 0X0
[14332] 2020-03-12T07:38:05.733 [server] - BackupJob::RunBackup( ) - Update Credentials
[14208] 2020-03-12T07:38:05.816 [server] - DeviceManager: incoming event fired
[14208] 2020-03-12T07:38:05.816 [server] - DeviceManager: Executing ReturnDeviceCall from thread 14332.
[14208] 2020-03-12T07:38:05.816 [tpfmt] - TF_FreeDriveContext( A9DC3C8 )
[14208] 2020-03-12T07:38:05.816 [tpfmt] - TF_FreeTapeBuffers: from 2 to 0 buffers
[14208] 2020-03-12T07:38:05.816 [tpfmt] - FreeFormatEnv( cur_fmt=0 )
[14208] 2020-03-12T07:38:05.817 [tpfmt] - OTC_Close: Deleted set map file.
[14208] 2020-03-12T07:38:12.541 [server] - DeviceManager: processing pending requests
[14208] 2020-03-12T07:38:12.541 [server] - DeviceManager: going to sleep for 900000 msecs
[14208] 2020-03-12T07:38:12.541 [server] - DeviceManager: dismount event fired
[14208] 2020-03-12T07:38:12.541 [server] - DeviceManager: processing pending requests
[14208] 2020-03-12T07:38:12.541 [server] - DeviceManager: going to sleep for 900000 msecs
[14332] 2020-03-12T07:38:12.541 [server] - *
[14332] 2020-03-12T07:38:12.541 [server] - * BackupJob::RunBackup() - returning - 0x0
[14332] 2020-03-12T07:38:12.541 [server] - *
[14332] 2020-03-12T07:38:12.541 [server] - ***********************************************
[14332] 2020-03-12T07:38:12.542 [server] - Closed persistent record handle 0x604
[14332] 2020-03-12T07:38:12.542 [server] - DataStreamsCollator:RemoveActiveJob: Removing cookie for JobID = {D031D282-BAA9-4689-ABA0-D54D3C031B96}
[14332] 2020-03-12T07:38:12.542 [server] - BackupJob::Finish() - See about dedup warning (jc-1, nsd-0).
[14332] 2020-03-12T07:38:12.543 [server] - StoreJob::Finish( ) - Update Credentials
[14332] 2020-03-12T07:38:12.544 [server] - Removing 'Sichern Mo-Fr kompletto-Fr komplett' from status update list
[14332] 2020-03-12T07:38:12.553 [server] - BackupJob::MergeBEVSRJobLogsIfNecessary: processing
[14332] 2020-03-12T07:38:12.553 [server] - BackupJob::MergeBEVSRJobLogsIfNecessary: No VSR log file found, no merging necessary
[14332] 2020-03-12T07:38:12.553 [server] - BackupJob::MergeBEVSRJobLogsIfNecessary: processing
[14332] 2020-03-12T07:38:12.553 [server] - BackupJob::MergeBEVSRJobLogsIfNecessary: No VSR log file found, no merging necessary
[14332] 2020-03-12T07:38:12.554 [server] - EndJob() criticalVolumeError - 0xe0000f09
[14332] 2020-03-12T07:38:12.555 [server] - Ending job 'Sichern Mo-Fr kompletto-Fr komplett' with error status (-536867063)
[14332] 2020-03-12T07:38:12.555 [server] - Job::Finish: Requesting for always on logs for engine

The expiry for the job's backup sets is set to 3 hours, so the tape can be overwritten in case I forget to swap the RDX cartridge. Usually the tape is changed daily, with 5 tapes in rotation for the week.

Unfortunately, the only thing that changed with today's job is that the verification job completed successfully - which it hadn't after the failed backup jobs of the previous days.
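On a side note, while reading the log I noticed that the decimal status values are just the signed 32-bit form of the hex error codes: the -536867063 in "Ending job ... with error status" is 0xe0000f09 again, and 57344/3849 from the unfindable KB item V-79-57344-3849 are 0xE000 and 0xF09, so that number looks like the same code split into its two 16-bit halves. A quick conversion helper (plain Python, nothing BE-specific):

# Convert between the signed decimal status values printed in the bengine log
# and the hex error codes shown in the job log.
def to_hex(status: int) -> str:
    # signed 32-bit decimal (as logged) -> hex error code
    return f"0x{status & 0xFFFFFFFF:08X}"

def to_signed(code: int) -> int:
    # hex error code -> signed 32-bit decimal as it appears in the log
    return code - 0x100000000 if code & 0x80000000 else code

print(to_hex(-536867063))     # 0xE0000F09
print(to_signed(0xE0000F09))  # -536867063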

I suspected this was a case of the backup set expiring while the backup job was still in progress. However, from the available information I don't think that's the case.

As this would require more detailed troubleshooting, I'd suggest opening a case with support.

Thank you for your pointers. I will create a ticket with support.

Currently, I am debating whether I should try to create a new BE database over the weekend to see if that fixes the issue, as the error messages point to database problems. This article makes me believe it should be a quick process: https://www.veritas.com/support/en_US/article.100015393
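Before experimenting with that, I would take a safety copy of the Data and Catalogs folders. A rough sketch of what I have in mind (plain Python; the install path and destination are assumptions for my server, and the Backup Exec services should be stopped first so the database files are not locked):

# Copy the Backup Exec Data (BEDB) and Catalogs folders to a timestamped
# backup location before touching the database.
import shutil
from datetime import datetime
from pathlib import Path

BE_ROOT = Path(r"C:\Program Files\Veritas\Backup Exec")  # assumed install path
DEST = Path(r"D:\BE_safety_copy") / datetime.now().strftime("%Y%m%d_%H%M%S")

for folder in ("Data", "Catalogs"):
    src = BE_ROOT / folder
    if src.exists():
        shutil.copytree(src, DEST / folder)
        print(f"Copied {src} -> {DEST / folder}")
    else:
        print(f"Skipping {src}: not found")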

 

Hello, do you have any update on this? I'm facing exactly the same issue.

Hi. So far, the issue is still there. I am in touch with Support through a ticket. It is not their fault that this is still unsolved; they are very helpful. I just don't have the time at the moment to reply promptly.

Basically, the first approaches failed. We are now trying an empty database to see whether that works, and would then reinstall the original DB afterwards. They have advised a couple of other things I should try after that and will collect some data to investigate the issue further.

I'd recommend opening a ticket yourself. I can post updates here; I am just slow at the moment because I have to deal with the Corona situation.

Hi, I've just found that the Backup Exec Agents (for Windows, in my case) on the guest VMs must also be updated to 20.6. After updating, everything seems to be running fine again. Note: all guest VMs had to be updated.
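In case it helps anyone check which guests are still running an old agent, here is a rough sketch (Python with pywin32; the host names, the RAWS install path and the admin-share access are all assumptions for illustration):

# Read the file version of the Remote Agent (beremote.exe) on each guest VM
# over the administrative share.
import win32api  # pip install pywin32

HOSTS = ["srv-hyperv", "guest-vm1", "guest-vm2"]  # hypothetical host names
AGENT_PATH = r"C$\Program Files\Veritas\Backup Exec\RAWS\beremote.exe"  # assumed default path

for host in HOSTS:
    path = rf"\\{host}\{AGENT_PATH}"
    try:
        info = win32api.GetFileVersionInfo(path, "\\")
        ms, ls = info["FileVersionMS"], info["FileVersionLS"]
        print(f"{host}: {ms >> 16}.{ms & 0xFFFF}.{ls >> 16}.{ls & 0xFFFF}")
    except Exception as exc:  # host unreachable, path missing, access denied, ...
        print(f"{host}: could not read version ({exc})")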