05-09-2011 09:19 AM
Hi,
We have been having problems with backing up one of our servers using Backup Exec v12.0. I can back up successfully using specific agents (SQL and SharePoint), but jobs fail when trying to back up the local disk drives with the following error:
Event Type: Error
Event Source: Backup Exec
Event Category: None
Event ID: 34113
Date: 09/05/2011
Time: 14:33:25
User: N/A
Description:
Backup Exec Alert: Job Failed
-- The job failed with the following error: The directory is invalid.
We have a number of other servers that we back up and this is the only one which is having a problem. Both the media server and the remote server are running Windows Server 2003.
Although this problem occurs with all our regular backups, I am currently using (for testing purposes) a brand new job set to back up just one file which is not in active use.
So far I know the following:
Can anyone help me with why this is not working?
Thanks.
05-09-2011 09:36 AM
Check the Eventlog (Application and System) on the Server which is backed up for entries at the time of the backup. You could also perform a chkdsk on the system to verify file system integrity
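For reference, a full file-system check of the system drive can be run from an elevated command prompt with the standard Windows tool (if the volume is in use, it will be scheduled for the next reboot):

```
REM /f fixes file-system errors found on the volume
chkdsk C: /f
```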
05-10-2011 09:18 AM
Hi Simon,
Thanks for your response. We ran a chkdsk last night and although it found a couple of small things, it has made no difference to the problem.
As for the Eventlog, I thought there was nothing, but it seems there's a Backup Exec error (see below) mentioning an SQL problem. For reference, this server is our SQL Server (2005 Standard) and WSS 3.0 server, but we have no problems with the specific Backup Exec agents running.
Googling the error, we found a website detailing a workaround which renames a DLL to disable the SQL Agent module of Backup Exec (http://www.symantec.com/business/support/index?page=content&id=TECH29539). When implemented, this fixes the problem with the local drive backups. However, we need the SharePoint and SQL modules to work, so it does not help us fix our issue.
We have five instances of SQL Server on that machine, but I am at a loss as to why Backup Exec has suddenly started having problems, as I'm not aware of any changes having been made to any of our SQL setups. Any ideas?
Event Type: Error
Event Source: Backup Exec
Event Category: None
Event ID: 57860
Date: 10/05/2011
Time: 14:40:39
User: N/A
Computer: SEUK-S6
Description:
An error occurred while attempting to log in to the following server: "SEUK-S6".
SQL error number: "0011".
SQL error message: "[DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.".
For more information, click the following link:
http://eventlookup.veritas.com/eventlookup/EventLookup.jhtml
Data:
0000: 11 00 00 00 00 00 00 00 ........
0008: 00 00 00 00 00 00 00 00 ........
05-11-2011 12:19 AM
The technote you've posted also explains that the real cause of this error is a corrupt database; renaming the DLL merely works around it by effectively disabling the SQL Agent module, which is only appropriate if you did not license that module.
So I'd suggest you run DBCC CHECKDB to verify the database integrity (see this MS article for details: http://msdn.microsoft.com/de-de/library/ms176064.aspx).
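A minimal way to run that check, either against a single database or against every database on an instance (note that sp_MSforeachdb is an undocumented but widely used system procedure; the database name below is a placeholder):

```sql
-- Full integrity check of one database, suppressing informational messages
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;

-- Or iterate the check over every database on the instance
EXEC sp_MSforeachdb 'DBCC CHECKDB (''?'') WITH NO_INFOMSGS;';
```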
05-11-2011 04:18 AM
All the databases have been passing the DBCC CHECKDB tests, yet the problem still occurred.
Last night, however, we tracked it down to a single instance that was causing the problem, by shutting all the instances down and then starting them up one at a time to test.
With a bit of rooting around in the SQL logs this morning, we managed to track down the specific problem databases. It turns out the failures were occurring because the database file paths (stored in the system tables themselves) had the slashes the wrong way round. Correcting the direction of the slashes with the ALTER DATABASE command fixed it, and the backup now works.
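For anyone hitting the same thing, the fix described above can be sketched like this on SQL Server 2005 (the database and logical file names here are hypothetical; substitute your own):

```sql
-- Inspect the registered physical paths for every database on the instance
SELECT DB_NAME(database_id) AS db, name, physical_name
FROM sys.master_files;

-- Repoint a logical file at the correctly slashed path
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILENAME = 'C:\Data\MyDb.mdf');
-- The new path takes effect the next time the database is brought online
```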
I don't know what changed to cause this; these instances and databases have been in place, unchanged, for a number of years. At least it's working again!
Thanks for your help, Simon.