07-06-2010 05:11 AM
I am a bit confused by jobs failing with the error "Directory not found".
This only happens once in a while. I understand that it's looking for a folder that no longer exists, and I know how to fix it; what I'm trying to achieve is to mitigate it. Surely, files and folders appear and disappear on a daily basis, yet it isn't necessary to manually add or remove every file-structure change in a job each time it happens. I therefore understand that BE dynamically keeps track of such changes.
Even so, sometimes a job fails because a folder was removed. Why is this, and how can I stop it from happening? I cannot always know when server owners are making file-structure changes on their servers and amend the job manually to match. Also, does this only apply when a folder is removed from the root of a volume, or does it apply to all folders, no matter how deep?
If it applies to root folders only, I can get the guys to log changes before removing such folders. Otherwise, is there a way to configure BE to just back up whatever is available to back up, even though folders were removed? That way the job won't be marked as a "failed job" when, technically, all available data was successfully backed up.
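As a workaround outside Backup Exec itself, a pre-job script could prune paths that no longer exist from a plain-text selection list before the job reads it, so a vanished folder generates a warning instead of a failure. A minimal sketch in Python (the example paths are assumptions for illustration, not anything BE ships with):

```python
import os

def prune_missing(paths):
    """Split a list of folder paths into those that still exist and those that vanished."""
    existing = [p for p in paths if os.path.isdir(p)]
    missing = [p for p in paths if not os.path.isdir(p)]
    return existing, missing

if __name__ == "__main__":
    # Hypothetical selection list, one folder path per line.
    selections = ["C:\\Data\\Payroll", "C:\\Data\\Archive"]
    keep, gone = prune_missing(selections)
    for p in gone:
        print("WARNING: folder removed since last run:", p)
    # 'keep' would then be written back out as the job's selection list.
```

This only helps if the job can be driven from an external list; it does not change how BE itself evaluates selections.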
07-06-2010 11:22 PM
Hello Dev t, thanks for the reply.
I was under the impression that this was a feature of some kind, since I have been experiencing it since version 10d.
I have since upgraded to 12.5, and periodically this still occurs. Are we saying that if this occurs, the fix is to restart the services? Is there no permanent fix, given that this one can only be applied after a job has failed?
Would implementing this procedure mean the services get restarted automatically before every job? If so, what happens when multiple jobs are running at the same time for hours at a stretch?
07-07-2010 02:08 AM
Thanks for all the information, guys.
It sounds like all the workarounds only cover subfolders. More often than not, it is a root folder being removed that causes the job to fail. I know how to fix it by manually excluding the folder from the selection list, but that process remains reactive instead of proactive.
Is there any way to update the selection list dynamically as root folders are moved or renamed, so the cause is tackled before it makes a job fail?
Bear in mind that the job runs for months without failure until someone removes a root folder; only then does the job fail. As standard, not all volumes and folders are selected, only the ones requested for backup. Users sometimes move or delete root folders without telling anybody, causing the job to fail.
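One proactive option is a scheduled script on each file server that compares the current set of root folders against a saved snapshot and flags any removals before the next backup window, so the selection list can be corrected in time. A rough sketch (the snapshot file name and the volume root are assumptions for illustration):

```python
import json
import os

SNAPSHOT = "root_folders.json"  # hypothetical state file kept between runs

def list_roots(volume):
    """Return the set of top-level folder names on a volume."""
    return {name for name in os.listdir(volume)
            if os.path.isdir(os.path.join(volume, name))}

def diff_roots(volume, snapshot_path=SNAPSHOT):
    """Compare current root folders with the last snapshot, then save the new state.

    Returns (added, removed) as two sets of folder names.
    """
    current = list_roots(volume)
    previous = set()
    if os.path.exists(snapshot_path):
        with open(snapshot_path) as f:
            previous = set(json.load(f))
    with open(snapshot_path, "w") as f:
        json.dump(sorted(current), f)
    return current - previous, previous - current

if __name__ == "__main__":
    volume = "D:\\"  # hypothetical volume root
    if os.path.isdir(volume):
        added, removed = diff_roots(volume)
        for name in removed:
            print("Root folder removed, update the job selection:", name)
```

Run from a scheduled task shortly before the backup window, the "removed" list tells you exactly which selections to exclude before the job fails on them.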
07-07-2010 04:13 AM
Thanks, Rahul.
That's a good idea. Perhaps, going forward, they could have the agent on a server monitor the file structure and, if a root folder gets removed, update the media server about the change. That way jobs won't fail because the media server is unaware of changes made on client servers.
It's probably not a problem for smaller sites, but it could be beneficial on larger sites where a lot of people are making a lot of ad hoc changes on random servers. That way the knock-on effect doesn't reflect negatively on a backup report when, in fact, all the data that should be getting backed up is getting backed up.