I am testing backups and restores of virtual machines backed up from a vSphere 4.1 U1 infrastructure.
The backups seem to work fine; however, restores take a very long time to complete.
While troubleshooting I set up the simplest test I could think of: a small test VM whose backup is 2,147,499,027 bytes (just over 2 GB) and completes in about 1-2 minutes. The really strange thing is that for most of the restore, nothing seems to be happening.
The restore of this machine took nine hours! I attempted this several times, usually cancelling the job after 30 minutes or so because it seemed to have stalled, but when I left it to run overnight it did actually complete, with a Successful state.
What happens is that the restore proceeds normally until it is almost complete, and then the job just 'sits there' with no change in the byte count for several hours. I currently have another attempt that has been running for just over 4 hours with a byte count of 2,145,412,627 bytes, i.e. about 2 MB short of the total size. The job history for the successful restore does report the correct final byte count, exactly the same size as the original backup.
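For reference, here is where the stall point sits relative to the full backup, worked out from the two byte counts above (a quick sanity check in Python):

```python
total = 2_147_499_027     # size of the original backup, in bytes
restored = 2_145_412_627  # byte count reported while the job sat idle

shortfall = total - restored
print(shortfall)  # -> 2086400
print(f"{shortfall / 1_048_576:.2f} MiB remaining")  # -> 1.99 MiB remaining
```

So the job hangs with less than 2 MiB left to write.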
I've also tested one of our 'real' VMs (a DC), which is 15,078,652,324 bytes. This one 'only' took three and a half hours to restore, and I observed the same behaviour: it restores at a 'normal' speed until it is almost complete, then sits there for a few hours, and finally completes while nobody's looking.
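For what it's worth, the effective restore rate for that DC works out very low once the idle tail is included (quick arithmetic, sketched in Python):

```python
size = 15_078_652_324  # bytes restored
elapsed = 3.5 * 3600   # total job time in seconds

rate = size / elapsed  # average bytes per second over the whole job
print(f"{rate / 1_048_576:.2f} MiB/s")  # -> 1.14 MiB/s
```

About 1.14 MiB/s averaged over the whole job, even though most of the data clearly lands much faster before the stall.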
I'm seeing this behaviour whether I use NBD or SAN transport mode for the restore, and whether I restore from a B2D folder or directly from tape.
Any suggestions for how I can work out what BE is doing at the tail end of the job?
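One crude thing I can think of is to check whether any data is still moving during the tail of the job, by polling the size of the file being restored and noting each time it changes. A minimal sketch in Python, assuming you wrap the target path in a callable yourself (the path below is hypothetical; point it at your own restore destination):

```python
import os
import time

def watch_size(get_size, samples=5, interval=30.0):
    """Poll a size-reporting callable and record each distinct value seen.

    get_size: callable returning the current byte count, e.g.
        lambda: os.path.getsize(r"D:\restore\testvm-flat.vmdk")
    (hypothetical path -- substitute the file your restore is writing).
    """
    seen = []
    last = None
    for _ in range(samples):
        size = get_size()
        if size != last:  # only log changes, so a stall shows up as silence
            seen.append(size)
            last = size
        time.sleep(interval)
    return seen
```

If the readings stop changing for hours while the job still shows as running, that at least confirms the stall is after the data movement finishes rather than during it.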
When you upgraded to R3, did you make any changes to any policies or backup selections to get this issue resolved?
I ask because we too have had really slow image-level restores since we started running R2 earlier this year, and I'm keen to perform the upgrade to R3 soon, especially if it can help resolve this issue for us.