I have a client running Symantec System Recovery (SSR) 2013 to back up the drives in their server. When a full backup job is started, it runs and completes with no problem. However, if I run a file/folder backup, it freezes at 5%, and it takes a long time just to get from 1% to 5% in the first place. Usually it says file 46 of 46, 28 of 28, etc., so essentially it seems to be freezing on the last item.
There are no entries in the event viewer other than "backup job started", and deleting and recreating the job does not resolve the issue.
The drive is 1.79TB total but has 1.65TB free, so it should be an easy job for the software to handle. We use the file/folder backup in this situation because it lets us keep old copies of individual files and folders that we can recover on their own, rather than an entire snapshot of the drive. We also have it set up this way because, unlike the snapshot feature, the file/folder backup will not purge older backups.
That's still over 100GB of data that you are backing up there. The file/folder backup method is not really designed for that amount of data, and in my experience you will see performance issues backing up that way.
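For context, the "over 100GB" figure follows directly from the drive numbers quoted earlier in the thread. A quick sanity check (values taken from the original post; decimal TB/GB assumed, as drive vendors report them):

```python
# Figures reported in the thread for the drive being backed up.
total_tb = 1.79   # total drive capacity
free_tb = 1.65    # free space remaining

# Used space is what the file/folder backup actually has to copy.
used_gb = (total_tb - free_tb) * 1000  # decimal GB (1 TB = 1000 GB)
print(f"Data to back up: ~{used_gb:.0f} GB")  # ~140 GB
```

So the job is handling roughly 140GB, which is why it falls into the "100+GB" range discussed below.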
We use the file backup in this situation because it will allow us to keep old copies of files/folders that we can recover by themselves rather than an entire snapshot of the drive.
--> Just to be clear, you can do granular-level restores of files/folders from a volume-level backup as well. Not sure if you are aware of that?
We know that we can pull specific files and folders from a regular restore; the problem is that the full backup option deletes the older copies of the backups. So let's just say a year goes by and we end up needing a folder from a year before: if we were using the full snapshot backup, said folder would have been overwritten by a newer version of the shadow copy.
If you can think of any better way to achieve backups for this specific drive in the way that I would like to, I'm open to all ideas.
So let's just say a year goes by and we end up needing a folder from a year before: if we were using the full snapshot backup, said folder would have been overwritten by a newer version of the shadow copy.
That's true to a certain extent, but it really depends on the schedule you are using and how many backup sets you choose to retain. Clearly, though, the more sets you retain, the more space you need to store them, so I can see that being one reason you would choose the file/folder backup method.
So you either resort to full volume backups and adjust your settings so you can go back as far as you need to restore what you need, or you try to figure out why this backup is 'freezing'.
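To put a rough number on the retention trade-off mentioned above: at roughly 140GB of data per set (per the drive figures earlier in the thread), keeping enough volume-level sets to reach back a year adds up quickly. A sketch with hypothetical schedule values; the set size and monthly cadence are assumptions for illustration, not the client's actual settings, and it assumes each retained set is a full copy rather than an incremental:

```python
set_size_gb = 140     # approx. data per volume backup set (from thread figures)
sets_retained = 12    # hypothetical: one full set per month for a year

storage_gb = set_size_gb * sets_retained
print(f"~{storage_gb} GB needed to retain a year of monthly full sets")
```

That lands around 1.7TB, which is close to the whole capacity of the 1.79TB drive in question, so the space cost of deep retention is a real constraint here.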
As I said before, for 100+GB of data the file/folder backup method is not recommended. I suspect it may just be slow, which is why it appears to 'freeze'. Out of interest, how long have you waited to see if it progresses?
The backup is scheduled to run daily at 5AM. Whenever I check it, it is always stuck at 5%; the only difference in status if I let it run for a prolonged period is the file it is on. It takes a long time even to get up to 5%; once it does, it says "creating file/folder backup" and eventually gets to "scanning files" (this takes hours). If I let it run and check back days later, it is always on the last file/folder in the backup process (46 of 46, 62 of 62, etc.). I wish I could check the event viewer for more info, but nothing is listed there: it basically just says the job started, and no messages are posted after that.
Based on the limited information in this thread, I would say this is just a performance issue, which goes back to the fact that the file/folder option is not designed for this amount of data. Sorry that I don't have a better answer for you.
If you want this investigated further to see what can be done, I suggest opening a support case. However, just be aware that the final answer may be the same as the one I have given here.