What is the proper procedure to make a backup of an entire deduplication storage directory to external storage?
In the backup job setup, there is the raw directory structure under the drive letter and directory name "E:\BackupExecDeduplicationStorageFolder\", but there is also an additional section under "Shadow Copy Components" with "Configuration" and "Storage" as possible selections.
So which do I choose to back up? Both of these, or only the Shadow Copy Components?
The target of the backup is a large external USB drive, with backup compression enabled.
Also, is this really enough for the dedupe data to be usable if restored in the future, or do I need to be backing up some additional SQL database that is associated with the deduplication disk directory?
As a product feature request: the ability to back up and restore the dedup storage, without going through all this complex manual preparation, should be available directly within the Backup Exec user interface.
The entire uncompressed dedupe storage is nearly equal in size to the compressed full tape backups I was previously doing of all of our servers, plus it contains data to recover everything going back four weeks.
It makes far more sense to be able to regularly and easily back up the entire dedupe storage after all the other nightly full server backup jobs complete, rather than export individual jobs out of it to external tape or cloud storage.
I have similar feelings. The dedup storage really needs a one-button backup option.
Press >>>>THIS<<<<< to backup.
Then a 1 button restore:
Press >>>>>THIS<<<<< to restore
There should also be a sync option that simply syncs the entire storage to another device at very low priority. Again, one button, with maybe a few bandwidth control options.
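Until something like that exists, here's a rough sketch of how I'd approximate a low-priority sync today by wrapping robocopy and its inter-packet gap throttling. The paths, share name, and throttle value below are placeholders for my own environment, not anything Backup Exec provides:

import subprocess

# Rough sketch only: mirror the dedup folder to a second device with robocopy,
# using /IPG to throttle bandwidth so the copy stays low priority.
SOURCE = r"E:\BackupExecDeduplicationStorageFolder"
DEST = r"\\standby-server\DedupMirror"   # hypothetical target share

subprocess.run([
    "robocopy", SOURCE, DEST,
    "/MIR",          # mirror the tree (removes extras on the destination)
    "/IPG:50",       # wait 50 ms between blocks to limit bandwidth use
    "/R:1", "/W:5",  # retry once, wait 5 s, so a busy file doesn't hang the job
    r"/LOG:C:\Temp\dedup_sync.log",
])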
I have actually abandoned backing up the dedup storage. It was nothing but headaches. Backups to the dedup storage would fail while it was being backed up. Most of my sites have 30+ GB dedup volumes, and it simply takes too long to back up all the dedup data. If any minor network or storage congestion occurred, the backup of the dedup would actually fail and ruin my day. I can't imagine actually having to restore the data with the same headaches.
A better approach for me was to treat the dedup storage as an "intermediate" or temporary storage device where only the last few weeks of backup data exist. Full backups are then duplicated to tape, disk, or cloud to maintain the GFS DR copies, and the data in dedup is expired. This reduced the total size of dedup considerably and works much more reliably.
In addition to offloading weekly jobs offsite, I back up the BEKey plus the Catalogs and Data folders of Backup Exec. Those two folders and the key, combined with the offsite backups, are everything needed to recover quickly.
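For reference, this is roughly what that copy looks like scripted out. The install path, destination, and key-export folder are placeholders for my own setup, so adjust them to yours:

import shutil
from pathlib import Path

# Rough sketch: copy the Backup Exec Catalogs and Data folders plus the exported
# encryption key to an external drive. All paths are placeholders.
BE_ROOT = Path(r"C:\Program Files\Veritas\Backup Exec")  # older installs use Symantec
DEST = Path(r"F:\BEConfigBackup")

for name in ("Catalogs", "Data"):
    shutil.copytree(BE_ROOT / name, DEST / name, dirs_exist_ok=True)

# Folder where the BEKey was exported; again, just a placeholder path.
shutil.copytree(Path(r"C:\BEKeyExport"), DEST / "BEKeyExport", dirs_exist_ok=True)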
Granted, offloading dedup data doesn't preserve the deduplication, but from an operator's standpoint, I don't have to babysit the darn thing and can provide simple, foolproof instructions any operator can follow.
I look at it like this. Imagine a DR scenario where you've lost your site. The dedup storage is backed up somewhere. The restore process begins with reinstalling BE, then restoring the dedup data. If the restore works lol... On top of that, once the restore is complete, the data still has to be cataloged and indexed, which can take days depending on the size of the dedup store. It's just not a viable solution for me. I have to be able to recover a system within minutes; I can't have the process take a week.
For me, weekly offsite copies of full backups are a simpler and much quicker way to recover from a disaster than backing up dedup.