Directory not found

Ruska
Level 4

I am a bit confused by jobs failing with the error "Directory not found".

This only happens once in a while. I understand that it's looking for a folder that no longer exists, and I know how to fix it. What I'm trying to achieve is to mitigate it. Surely, files and folders appear and disappear on a daily basis, yet it is not necessary to manually add or remove the file structure changes on a job every time it happens. I therefore understand that BE dynamically keeps track of such changes.

Even so, sometimes a job fails due to a folder being removed. Why is this, and how can I stop it from happening? I cannot always know when server owners are making file structure changes on their servers so that I can amend the job manually accordingly. Also, does this only apply if a folder is removed from the root of a volume, or does it go for all folders, no matter how deep?

If it applies to root folders only, I can get the guys to log changes before removing such folders. Otherwise, is there a way to configure BE to just back up whatever is available, even though folders were removed? That way the job won't be marked as a "failed job" if, technically, all available data was successfully backed up.

8 REPLIES

Dev_T
Level 6

Hello,

I was facing the same problem. I restarted the Backup Exec services through a batch job and added that batch file as a pre-command on the job, so a fresh selection list was taken every time. For convenience, Symantec has included batch files that will automatically stop or restart all Backup Exec services at once. These batch files, BESTOP and BESTART, are located in the Winnt\Utils folder or in C:\Program Files\Symantec\Backup Exec on the Backup Exec server.
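The pre-command idea above can be sketched in Python. Everything here is an assumption-laden sketch rather than a supported Symantec tool: the install directory and batch-file names are the ones mentioned in this post, so verify them on your own media server before using anything like this.

```python
# Sketch of a pre-command that restarts the Backup Exec services so the job
# reads a fresh selection list. The directory and batch-file names below are
# taken from this thread and are assumptions -- check them on your server.
import os
import subprocess

BE_DIR = r"C:\Program Files\Symantec\Backup Exec"

def restart_commands(be_dir=BE_DIR):
    """Return the commands the pre-command would run, in order: stop, then start."""
    return [os.path.join(be_dir, name) for name in ("BESTOP.BAT", "BESTART.BAT")]

def run_pre_command(dry_run=True):
    for cmd in restart_commands():
        if dry_run:
            print("would run:", cmd)  # safe preview on any machine
        else:
            # Windows-only: actually stop/start the services
            subprocess.run(cmd, shell=True, check=True)

if __name__ == "__main__":
    run_pre_command(dry_run=True)
```

Note the caveat raised later in this thread: restarting services before every job may not be practical when multiple long-running jobs overlap.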

This is not the normal behavior of Backup Exec.

shailesh866
Level 4
Employee Accredited
To avoid the error in the given scenario, refer to the following example:


Example: 

If you select Subfolder1 and Subfolder2 directly in the selection list using "View by resource", then switch to "View selection details", you will see the following:

C:\Folder\Subfolder1\*.* /SUBDIR 
C:\Folder\Subfolder2\*.* /SUBDIR 

With this selection, if you change, move, or delete these subfolders, the job will fail with the given error.

To avoid this you can do the following instead: 

Select only the main "Folder" in the selection list, and the result in "View selection details" will be:

C:\Folder\*.* /SUBDIR 

If you don't want to back up all the subfolders inside, you can exclude them. To do so, just deselect the unwanted folders in "View by resource" and you will have:

C:\Folder\*.* /SUBDIR 
C:\Folder\Subfolder3\*.* /SUBDIR /EXCLUDE 
C:\Folder\Subfolder4\*.* /SUBDIR /EXCLUDE 

The result will be the same as with the first option, but in this case the backup won't fail when you rename subfolders or create new ones.
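To make the difference concrete, here is a small Python simulation of the two selection styles above. This is a toy model built from this example, not Backup Exec's real resolver: an explicit subfolder include fails when the folder is gone, while a parent include with excludes simply picks up whatever exists at run time.

```python
# Toy model of the two selection styles (a simulation, not Backup Exec's
# engine). Selections are (folder, exclude_flag) pairs, each with an implied
# /SUBDIR; `present` is the set of folders actually on disk.
def resolve(selections, present):
    """Return the set of folders the job would back up.

    An explicit include of a missing folder raises, mirroring the
    'Directory not found' failure; a parent include just sweeps up
    whichever subfolders currently exist."""
    result = set()
    for folder, exclude in selections:
        if exclude:
            # drop the excluded folder and everything under it
            result = {p for p in result
                      if p != folder and not p.startswith(folder + "\\")}
            continue
        if folder not in present:
            raise FileNotFoundError("Directory not found: " + folder)
        result |= {p for p in present
                   if p == folder or p.startswith(folder + "\\")}
    return result

# Subfolder2 has since been deleted by a user:
present = {r"C:\Folder", r"C:\Folder\Subfolder1", r"C:\Folder\Subfolder3"}

# Style 1: explicit subfolder includes -> the job fails
try:
    resolve([(r"C:\Folder\Subfolder1", False),
             (r"C:\Folder\Subfolder2", False)], present)
except FileNotFoundError as e:
    print(e)  # Directory not found: C:\Folder\Subfolder2

# Style 2: parent include + exclude -> backs up whatever is there
print(sorted(resolve([(r"C:\Folder", False),
                      (r"C:\Folder\Subfolder3", True)], present)))
```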

Ruska
Level 4

Hello Dev_T, thanks for the reply.

I was under the impression that this was a feature of some kind since I have been experiencing it since version 10d.

I have since upgraded to 12.5 and this still occurs periodically. Are we saying that when this occurs, the fix is to restart the services? Is there no permanent fix, since one can only apply this one after a job has already failed?

Would implementing this procedure mean the services get restarted automatically before every job? If so, what about when one is running multiple jobs at the same time, for hours at a time?
 

CraigV
Moderator
Partner    VIP    Accredited
...and it also happens in 2010...

RahulG
Level 6
Employee
A workaround for this would be to select the complete volume or the parent folder in the backup selection, because if you make a granular selection and the folder is deleted, you would need to remove the folder from the selection details... You can probably select the complete volume, like C:, and then just exclude the folders you don't want backed up.

Ruska
Level 4

Thanks for all the information guys

It sounds like all the workarounds only cover subfolders. More often than not, it is usually a root folder that gets removed that results in the job failing. I know how to fix it by manually excluding the folder from the selection list, but this process remains reactive instead of proactive.

Is there any way to update the selection list dynamically as root folders are moved or renamed to tackle the cause before it causes a job to fail?

Bear in mind the job runs for months without failure until someone removes a root folder. Only then does the job fail. As standard, not all volumes and folders are selected, only the ones requested to be backed up. Users sometimes then move or delete root folders without telling anybody, causing the job to fail.
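Backup Exec doesn't expose a hook for this, but the proactive idea could look something like the following pre-job sketch. Everything here is hypothetical: Backup Exec keeps selections in its own database, so treating them as a flat list of paths (and the `prune_selections` helper itself) is purely illustrative.

```python
# Hypothetical pre-job sketch: split a selection list (modeled here as plain
# folder paths, which is an assumption) into entries that still exist and
# entries that have been removed, so missing folders can be skipped or
# reported instead of failing the job.
import os

def prune_selections(paths):
    """Return (kept, dropped): paths whose folders still exist vs. not."""
    kept, dropped = [], []
    for p in paths:
        (kept if os.path.isdir(p) else dropped).append(p)
    return kept, dropped

if __name__ == "__main__":
    # Example paths are placeholders
    kept, dropped = prune_selections([r"C:\Data", r"C:\OldProjects"])
    for p in dropped:
        print("no longer exists, skipping:", p)
```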

RahulG
Level 6
Employee
Well, you can post this as an idea. If enough customers request such an enhancement, engineering might consider whether some change can be made in the code to remove the resource names from the database once it finds the resources no longer exist...

Ruska
Level 4

Thanks Rahul

That's a good idea. Perhaps, going forward, they could have the agent on a server monitor the file structure and, if a root folder gets removed, update the media server about these changes. That way jobs won't fail because the media server is unaware of changes made on client servers.

It's probably not a problem for smaller sites, but it could be beneficial on larger sites where a lot of people are making a lot of ad hoc changes on random servers. That way the knock-on effect does not reflect negatively on a backup report while, in fact, all the data that should be getting backed up is getting backed up.