Our current setup:
Windows Small Business Server 2011 (which is actually Server 2008 R2)
Symantec System Recovery 2013
Logical drives are C:, D:, and E:. We back these up every weeknight (Monday through Friday), along with the System Reserved drive. The job is a recovery point set that runs a complete backup each night to an external USB drive. Before the backup runs, we delete the previous week's recovery points from that drive. For example, on Tuesday night we plug in the USB drive labeled "Tuesday" and, in Windows Explorer, delete the backup files left over from last Tuesday's backup. This leaves the drive empty so that night's backup can run, copying the C:, D:, and E: drives in full to the Tuesday USB hard drive. (If someone accidentally leaves the previous week's backup files on the USB drive, the job instead creates a differential backup of the logical drives to that drive.)
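The manual "delete last week's files in Explorer" step could be scripted. Here is a minimal Python sketch; the file extensions and the example drive path are my assumptions, not settings taken from SSR, so check what the job actually writes before using anything like this:

```python
import os

# Typical Symantec System Recovery image extensions (an assumption;
# verify against the files the job actually leaves on the drive).
RECOVERY_EXTENSIONS = (".v2i", ".iv2i", ".sv2i")

def clear_old_recovery_points(drive_root):
    """Delete last week's recovery point files from the backup drive's
    root so tonight's job writes a fresh complete backup instead of a
    differential. Returns the names of the files removed."""
    removed = []
    for name in os.listdir(drive_root):
        if name.lower().endswith(RECOVERY_EXTENSIONS):
            os.remove(os.path.join(drive_root, name))
            removed.append(name)
    return removed

# Example (hypothetical drive letter): clear_old_recovery_points("I:\\")
```

Running this from a scheduled task shortly before the backup window would remove the "someone forgot to wipe the drive" failure mode that silently downgrades the job to a differential.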
-We are using a recovery point set job. Every day, the job shows up as scheduled, with the destination drive labeled according to the day of the week: [Monday]\, [Tuesday]\, etc.
-Each hard drive has a recovery point set limit of 1.
-All the USB hard drives are identical.
The problem is that one of the hard drives has failed, and we are replacing it with a different, higher-capacity USB hard drive. Now the scheduled jobs do not run on that drive. When we run the scheduled backup job manually, it backs up successfully to the new USB hard drive.
I do not have any error messages right now, unfortunately.
What changes do I need to make to my scheduled backup job to allow it to backup to different models of hard drive? We are not using offsite backup features.
I researched the settings of SSR 2013 on that job some more. The backup job is using "Start a new recovery point set (base): The first time the backup is run in a new week".
I did recreate the job. First, let me tell you what I ended up trying.
I was still not able to get SSR 2013 to recognize the new hard drives with the existing backup settings. I ended up changing the backup destination from the USB hard drives to the network share of the drives (e.g., the backup destination was now \\servername\driveletter). I was then able to intermix different types of hard drives in the backup destination. I entered the admin username and password for that network admin share to the hard drive. It ran successfully like this for several nights.
However, I encountered an error one morning. The following messages from the SSR log explain it:
7/16/2015 21:00:22 PM High Priority Info: Info 6C8F1F65: The drive-based backup job, Drive Backup of System Reserved (*:\), (C:\), driveD (D:\), driveE (E:\), has been started automatically. 0x00 (Symantec System Recovery)
7/16/2015 23:17:59 PM High Priority Error: Error EC8F17B7: Cannot create recovery points for job: Drive Backup of System Reserved (*:\), (C:\), driveD (D:\), driveE (E:\).
Error ED800019: The recovery point file '\\servername\i$\SERVERNAME_D_Drive353.v2i' could not be opened.
Error E7D1000E: Unable to open '//servername/i$/SERVERNAME_D_Drive353.v2i'. It does not exist. 0xED800019 (Symantec System Recovery)
We are running SSR 2013 Small Business Server Edition, v 220.127.116.11853.
Based on the above error messages, I have no idea why the backup failed. The job started at 9 PM but then failed, apparently because it couldn't open the recovery point file on the drive. The drive was still visible to Windows and SSR the next morning when I came in and checked (expecting my backup files to be present on the removable USB drive... surprise!). The server's Event Viewer had no further messages referencing SSR 2013 for that time frame.
I ran several backup jobs on the nights after that with the identical settings. No errors.
After that, I did not trust the backup job to run correctly. I thought that perhaps the network drive sensing was flaky, and so I deleted the backup job. SSR hung when I tried to delete the job and I had to kill the service and start it again. Then I was able to delete the backup job and recreate it, using individual drives like in the original job.
Since then, jobs have run, but the USB drives will not eject safely; Windows says the drive is in use. I am troubleshooting this issue, and it appears the network share of the I: drive is what is locking the drive: if I remove the network admin share for the USB drive (\\servername\i$) in the Share and Storage Management console, the drive can be ejected. Also, I can't rename the aliases for the USB hard drives after I named them initially. Do you have an answer to either of these problems?
Once I have an answer about why the job failed when it was set to a network drive destination, I will mark this question as solved. However, we have switched all of our hard drives over to the new type, so we no longer need mixed drives in the backup job. Thanks.
After I rebooted the server, something new came up.
I rebooted the server last night, then let the backup job run normally. It completed successfully, then the next morning I tried to eject the USB drive. It wouldn't let me, so I removed the I$ share for that drive (using the Share and Storage Management program). Then, after running Process Explorer, I saw that VProSvc.exe still had a handle open on the drive. I tried ejecting the USB drive, but it still told me it was busy.
I tried stopping the Symantec System Recovery service, but it said the process was not accepting control messages at this time (paraphrased). I ended up killing the VProSvc and it then let me eject the USB drive.
I am wondering if SSR 2013 somehow hasn't fully let go of the drive letter/network share destination that I had previously told the backup job to save to, and that is causing the lock on the I: drive admin share. I am going to try manually assigning a different drive letter to the hard drives so that they are all now the J: drive. I will see if I can unplug these drives successfully.
I was able to reproduce the problem: plug in a USB hard drive, and it can't be ejected. I changed the drive letter assignment for that drive from I: to J: using Disk Management, and I was able to eject the drive without closing any handles.
I was able to reproduce this solution as well: every time I change the drive letter assignment for the USB drive from I: to J:, I can eject the drive.
I am going to run the backup tonight with the new drive letter assigned to the removable drive and will change the destination in SSR 2013 to point to that removable drive with the new letter. I will then see if it completes AND lets me eject the drive tomorrow morning. I will then verify that this fix is long-term over the next few days and does not stop working.
So, to sum up:
1) The original backup job had been pointing to the I: drives for backups. I changed this to \\servername\i$ to try and allow us to use different USB drive types and it worked and the backups completed. However, the USB hard drive would not allow me to eject it (both types of drive, new and old).
2) I switched all our USB drives to the new type. I deleted the modified backup job and created a new backup job that matched the old settings (pointed to I: drive for saving). The backups would complete, but they would not eject safely.
3) I had to change the drive letter assignment for the USB hard drives from I: to J: in order to get the hard drives to eject safely. I changed the job settings to point to this new drive letter (J:). I am able to eject the hard drives now. I will confirm that the job completes successfully and check whether or not I can eject the drive safely afterwards.
Changing the letter assignment for the drives worked. My workaround is to manually assign J: to each backup drive.
When I plugged in the I: drive, I stopped the Symantec System Recovery service and killed the VProTray.exe process. I was still not able to eject the I: drive safely. I checked the handles: there were three handles from the "System" process (PID 4) open on the I: drive; after stopping the SSR service and killing VProTray.exe, there were only two.
I had to stop sharing the I: drive admin share ($) in order to get the handles to close and the drive to eject.
After I changed the drive letter to J:, I was able to eject easily with no problems.
Symantec, can you recreate this problem? This is on Small Business Server 2011 (which is built on Server 2008 R2).
It looks like it is working. I can now eject the I: drive without the handles from SSR/Windows share being open on the drive. I eventually had to:
1) delete the existing backup job
2) delete all external USB drives from the Advanced menu -> Drives list
3) go into Tasks -> Options -> Settings -> General -> Default backup destination, delete that entry out completely and click "OK"
(I also went to Tasks -> Manage Backup Destinations... and deleted out as many entries from there as I could, however I don't think that helped.)
4) shut down the SSR service and kill the VProTray.exe process
5) change the drive letter assignment of the USB hard drive I wanted to use from I: to J: using Disk Management admin tool
6) safely eject the USB drive (J:)
7) restart the SSR service and run the VProTray.exe process
8) plug in the same USB drive again and assign the drive letter I: manually
9) create a new backup job and choose the default location to be the I: drive, along with all the other settings (backup recovery set, new recovery point weekly, M-F repeat)
I will run backups on this for a few days and see whether it 1) lets me eject the drives without a problem and 2) lets me plug in a different hard drive (i.e., [Tuesday]) and have the job back up to that drive (there had previously been a problem where the backup pointed at a drive label, not a drive letter, and failed because that drive label wasn't present).