Job stuck endlessly on Loading Media

We have the following problem: a newly installed Server 2016 with BE 20.6 on decent hardware, with 8 TB of local deduplication storage and a vSphere 6.7.0 cluster as the source. The jobs (always full backups) run to the deduplication storage without problems for a few days, but then they stall after a few gigabytes. The status shows "Load medium" with no error message. A snapshot is created on the vSphere side and then nothing happens. In Resource Monitor I see massive read activity on the dedup storage and no activity towards the vSphere server. I could not find any error messages. Sometimes the job can be aborted, and sometimes only a reboot helps. The job itself backs up a file server. The agent is installed everywhere and up to date, and we back up as a synthetic backup. Anybody have any ideas?

Dedup ratio 1:4

Whether I check Backup Exec systems upgraded from BE16 to BE20 or newly installed BE20 systems, I seldom get dedup ratios higher than 1:3, whereas with BE16 I had dedup ratios more like 1:9. What has to be changed in the configuration to get a higher dedup ratio under BE20, or is this not possible?
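
For clarity on what I mean by those numbers: as I understand it, the ratio is simply the amount of data protected (front end) divided by what actually ends up in the dedup folder (back end). A quick sketch with made-up byte counts, just to show the difference between 1:3 and 1:9:

```python
# Minimal sketch of how a dedup ratio is derived: protected (front-end) bytes
# divided by what actually lands in the dedup store (back-end bytes).
# The byte figures below are made up purely for illustration.

def dedup_ratio(protected_bytes: int, stored_bytes: int) -> float:
    return protected_bytes / stored_bytes

TB = 1024 ** 4

# 9 TB of backup sets occupying 3 TB on disk -> roughly 1:3
print(f"1:{dedup_ratio(9 * TB, 3 * TB):.0f}")

# The same 9 TB occupying only 1 TB -> the old 1:9 behaviour
print(f"1:{dedup_ratio(9 * TB, 1 * TB):.0f}")
```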

Orphaned backup files in Dedup Storage folder

Hello Community, I've read through some topics regarding orphaned .bkf files and how to handle them, but I think my situation is a little different, and I'd like to ask for some help or guidance on where to look further. We are using a Dedup Storage folder in order to have more storage space for backups; however, we are about to run out of free space, so the situation is quite bad. First I looked for backup sets containing old data, but I could not find anything relevant; everything seemed OK and I did not understand what was occupying so much space. So I went further and started digging in the files. I found almost 8 TB of data (.BHD and .BIN files) from July 2017 in D:\BackupExecDeduplicationStorageFolder\data, as well as some PostgreSQL log files in D:\BackupExecDeduplicationStorageFolder\log\pddb. That's a huge amount of data, so I headed back to the Backup Exec admin console and looked for backups from 2017. In the Job History I found only a few backups from November 2017, and still no backup that might contain old backup sets from July. I also checked the Backup Calendar in case that helped. Nope, same as Job History.

What I would like to know:
- Am I missing something and looking for old data in the wrong place?
- How do I know which backup set created those old files?
- If I have more information about those files, can I safely remove them manually?
- If I have no more information, what could be the worst case if I remove those files manually?

Hopefully I've provided enough useful information. If not, I'm happy to help further in order to resolve this case. Thanks, Csaba
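
In case it helps anyone doing the same digging, below is the kind of read-only script (a rough sketch; the path is the one from my post) that breaks the .BHD/.BIN usage down per month. Bear in mind that dedup container files are typically shared between backup sets, so an old timestamp alone does not prove the data is orphaned.

```python
# Read-only sketch: sum the size of .bhd/.bin container files per modification
# month under the dedup data folder, to see how much of the space is "old".
# The path matches the layout described above; nothing here deletes or changes anything.
from collections import defaultdict
from datetime import datetime
from pathlib import Path

DATA_DIR = Path(r"D:\BackupExecDeduplicationStorageFolder\data")

usage_per_month = defaultdict(int)
for path in DATA_DIR.rglob("*"):
    if path.is_file() and path.suffix.lower() in (".bhd", ".bin"):
        stat = path.stat()
        month = datetime.fromtimestamp(stat.st_mtime).strftime("%Y-%m")
        usage_per_month[month] += stat.st_size

for month in sorted(usage_per_month):
    print(f"{month}: {usage_per_month[month] / 1024 ** 3:.1f} GiB")
```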

DEDUP - DISK - Instant Recovery

I will back up Server01 and store the backup in DEDUP, but I also need to be able to use Instant Recovery for Server01. After backing up to DEDUP, is it okay to duplicate to disk? Or is it better to back up to disk first and then duplicate to DEDUP? Would you also recommend this scenario? DEDUP will have a one-year retention, and Instant Recovery will be retained for one week.

Backup Exec 2014 Deduplication Disk Offline - spauser database corrupt - solved

We have a BE 2014 backup media server running on Windows Server 2012 R2 doing B2D deduplication storage. The system is virtualised on VMware ESXi 5.5. We had a strange issue where a faulty NIC driver (E1000e) on the VM would cause the server to lose connectivity for a few seconds and then automatically come back up, causing disruptions and backup failures. The system is running the latest VMware Tools. Such events are logged in Event Viewer / System logs as Event 27 and 32. This is a confirmed problem by VMware, but no fix has been offered other than replacing the virtual NIC with their recommended VMXNET 3 driver.

We followed the workaround and replaced the virtual NICs with VMXNET 3 and the problem went away, but after a restart of the server, when we tried to bring our storage devices back online, the deduplication storage failed with an error to the effect of "Unable to Authenticate OpenStorage Device". All services, including the BE Dedup Agent, Manager, and PostgreSQL DB, were running, as the online troubleshooting documentation told us to check. We tried rebooting and restarting all BE services to no avail.

We did some research and found that there are two user accounts used to operate the deduplication storage: one that is configured via the GUI, and another that is automatically created in the embedded PostgreSQL database. We reset the password in the GUI back to the one used when everything was still working, and then tried to reset the second account using "spauser.exe -c -u <username>" to update the password in the database. The GUI portion worked, but when trying to update the DB password it asked for the old password, and none of our passwords worked, even though we had never changed it. The dedup was working until the NIC issues, which led us to believe that the database was corrupt, and there was no literature online telling us how to solve it. It simply stated that if you forget your old password, you will not be able to bring the dedup back online and should delete and recreate the drive and start over, implying all your old backups would be lost forever.

We were in uncharted territory at this point, so we deleted the dedup drive in the BE console, backed up the entire dedup folder with all our existing backups in it to other media, renamed it to CORRUPT_BackupExecDeduplicationStorageFolder, and created a new dedup device, which came online without any issue. This created a brand-new, uncorrupted user authentication database under D:\BackupExecDeduplicationStorageFolder\databases\spa\database\authentication\ . We then went into that folder and opened the file named "1", which contains the username and password hash of the account used for the dedup device. This file is also what "spauser.exe" and the dedup engine authenticate against when bringing the dedup device online. "spauser.exe -l" shows this as "User 1".

Contents of this file:
0|BEDedup.User|75a9505d992fec489f96ab92619812a1|
Likely in the format of DBIndex|UserName|md5PasswordHash|

Now we had a fresh, working username and password hash that we knew was not corrupt and knew the password to, which we tested and authenticated against successfully using the "spauser.exe -c -u BEDedup.User" command.
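
Side note: since the hash turned out to be a plain unsalted MD5 of the password (more on that below), a replacement line for the "1" file can be generated with a couple of lines of Python. This is only a minimal sketch of the idea, using a placeholder password rather than a real one:

```python
# Minimal sketch: build a replacement line for the dedup authentication file "1",
# assuming the pipe-delimited layout shown above (DBIndex|UserName|md5PasswordHash|)
# and an unsalted MD5 of the password. Back up the original file before touching it.
import hashlib

def make_auth_line(password: str, username: str = "BEDedup.User", index: int = 0) -> str:
    md5_hex = hashlib.md5(password.encode("utf-8")).hexdigest()
    return f"{index}|{username}|{md5_hex}|"

# Placeholder password for illustration only, not one of our real passwords:
print(make_auth_line("MyDedupPassword"))
# -> 0|BEDedup.User|<32-character md5 hex digest>|
```

The resulting line can then be swapped into the "1" file as described next.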

Next, we shut down all the BE services, renamed the current (new) D:\BackupExecDeduplicationStorageFolder\ to D:\NEW_BackupExecDeduplicationStorageFolder, and renamed the original one with all our backups back to D:\BackupExecDeduplicationStorageFolder\ . We then went into the D:\BackupExecDeduplicationStorageFolder\databases\spa\database\authentication folder, replaced the contents of file "1" with the newly generated password hash (the username stays the same, as this is required), and restarted the BE services.

Just like that, all the services started normally and BE began to discover the device. Afterwards we were able to do an Inventory and Catalog, and the device was back online! In theory this method can also be used to reset the password without entering the "Old Password:" if you have forgotten it. Along the way we learned that the password hash is simply an unsalted MD5 hash, so if this ever happens again we could use an MD5 hash generator to create the new hash instead of having to delete and recreate a new dedup device.

This is extremely strange, as we had never changed the password before. The MD5 hash in the corrupted "1" file doesn't match any other password we have used in the past either.

Hope this helps someone with a similar issue. Of course, back up the folder before you start making any modifications to the database files in the dedup folder! Cheers!