I've added some additional comments below regarding recovering an FSMO role holder.
Given the problems with your lab test restore and the failure to restore to the replacement hardware, I think this may have been a progressive failure that went catastrophic. The closer the restore image was made to the failure time, the greater the chance of the restore image being corrupted. If you have an older image that is younger than the tombstone lifetime of your domain, you may have better luck. If the older images open from the same storage device, then you can rule out an SRD incompatibility with your image storage device. If an older image restores, you may still have to do a repair installation of the OS and recover the Exchange installation from BE. That is still better than rebuilding it all from scratch.

Depending on how recent and how important the changes to your Active Directory are, you may want to have another domain controller seize the FSMO roles and take over the global catalog role, if they were held by the server being restored.
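If it comes to seizing the roles, that is done with NTDSUTIL on a surviving DC. A rough outline of the interactive session is below; "DC2" is just a placeholder for your surviving DC, and the exact seize verbs vary slightly between Windows versions (e.g. "seize naming master" vs "seize domain naming master"):

```
ntdsutil
roles
connections
connect to server DC2
quit
seize schema master
seize domain naming master
seize pdc
seize rid master
seize infrastructure master
quit
quit
```

Confirm the prompt for each role. Note that the global catalog is not a seizable FSMO role; you enable it on the surviving DC via the NTDS Settings properties in Active Directory Sites and Services.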
*********************************************************************************************************
I need to add that introducing a recovered FSMO role holder back into an AD domain where the FSMO roles were seized by another DC is dangerous and is bound to cause AD corruption. The recovered DC should be moved to a lab network and demoted to an ordinary server. Then, on a production-environment DC (in the same forest), remove the stale DC's entries using the "metadata cleanup" menu in NTDSUTIL. After this, it should be safe to have the recovered server join the domain again and be promoted to a domain controller again.
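For reference, the metadata cleanup is also an interactive NTDSUTIL session, run on a healthy production DC. A sketch of the typical sequence (the server name and the list index numbers are placeholders; pick the entries that match the dead DC from the lists NTDSUTIL shows you):

```
ntdsutil
metadata cleanup
connections
connect to server DC2
quit
select operation target
list domains
select domain 0
list sites
select site 0
list servers in site
select server 0
quit
remove selected server
quit
quit
```

Double-check that the server you select in "list servers in site" really is the failed DC before issuing "remove selected server".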
********************************************************************************************************
I have some other suggestions, but first you should copy your BESR images to another location and work with the copies. If something goes wrong, the originals are still intact.

Are you working with a base and incremental set of images? Have you tried opening the images in the Recovery Point Browser to see if you can examine the contents? You can also use "Verify Recovery Point" from the file menu to check for corruption. The image name has to be highlighted in the GUI in order for the verify menu item not to be greyed out.

Also, have you tried consolidating any incremental images into a single image? If the most recent incremental image verifies, use "Copy Recovery Point" to consolidate the incrementals into one image. If the most recent image is corrupt, keep going back through the images until a good one is found.
You can also use the "Copy Recovery Point" option to try rebuilding a base image, if that is all you have.
Regarding your test restore, the most common cause of an Exchange information store not mounting after recovery to a different server is the lack of an active network connection. The Information Store service will not start, and thus will not mount the stores, unless an active link is present on the NIC. A simple switch or hub is all that is needed to provide this in a lab situation; there is no need to connect to your production network.

Another common cause is that the Exchange databases need their indexes rebuilt after a restore. Defragmenting the databases with ESEUTIL also rebuilds the indexes, and it is a good idea to run ISINTEG on recovered Exchange databases as well. More info on these utilities is linked below.
ESEUTIL
http://support.microsoft.com/kb/192185
ISINTEG
http://support.microsoft.com/kb/301460
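As a rough example, an offline defrag plus integrity pass on a recovered mailbox store might look like the following. The paths and server name are placeholders for your environment, the stores must be dismounted first, and you need free disk space of roughly 110% of the database size for the defrag:

```
eseutil /d "E:\Exchsrvr\MDBDATA\priv1.edb"
eseutil /g "E:\Exchsrvr\MDBDATA\priv1.edb"
isinteg -s SERVERNAME -fix -test alltests
```

Run ISINTEG repeatedly until it reports zero fixes, then try mounting the store again. See the KB articles above for the full switch descriptions.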
When restoring to different hardware and using the Restore Anyware option, you really should select the individual drive files and restore them separately. You only want to select the .sv2i file when doing a bare metal restore to the same computer, but with replacement hard drive(s) installed. I'm including a link for troubleshooting the SRD recovery process:
http://seer.entsupport.symantec.com/docs/295044.htm
BTW, there is a new version of the SRD available if your support contract is up to date.