Backup Exec server cannot access the specified storage device (tape library) in CAS/MBES environment
We have many backup policies in our environment, and almost all of them share the same configuration. It may sound odd, but we have one backup policy per VM. Each policy has a weekly full job and a daily incremental job. As a second stage, each of these jobs is linked to a duplication job that duplicates the corresponding backup set to a DR open storage (OST) device.

Every first Monday of the month, we take the last full backup set and duplicate it to LTO5 tapes, which we keep on long retention as monthly backups. The tape library is attached to one of the MBES servers.

The problem is that every month a few jobs cannot see the tape library during the monthly tape duplications. Which jobs are affected is completely random: a backup policy that cannot see the tape library one month may work fine the next. Restarting the Backup Exec services on the CAS and both MBES servers has not helped in our case. Previously, the only way to get the full backup sets duplicated to tape was to manually find the backup sets in the deduplicated OST and duplicate them to the tape library. Just recently, I found a workaround that lets the backup policies which have trouble accessing the library find the device and kick off the monthly duplication. However, after a few jobs they fall back into the same state, and I have to repeat the process until all of the backup policies have been duplicated to tape successfully.

We also get an "ODBC access error. Possible lost connection to database or unsuccessful access to catalog index in the database" error message on the CAS. Apparently, this happens when there is a communication dropout between the CAS and one of the MBES servers, and I suspect this might be causing the monthly duplication issue we are facing. This has already been escalated to Veritas support and is still under investigation.
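To illustrate the monthly schedule described above (the tape run fires on the first Monday of the month and picks the most recent full backup set), here is a minimal Python sketch. The backup-set list and its field layout are hypothetical illustrations, not Backup Exec data structures:

```python
from datetime import date, timedelta

def first_monday(year: int, month: int) -> date:
    """Date of the first Monday of the given month (Monday == weekday 0)."""
    d = date(year, month, 1)
    return d + timedelta(days=(0 - d.weekday()) % 7)

def latest_full(backup_sets):
    """Pick the most recent full backup set to duplicate to tape.
    Each set is a hypothetical (run_date, kind) tuple, not a BE object."""
    fulls = [s for s in backup_sets if s[1] == "full"]
    return max(fulls, key=lambda s: s[0]) if fulls else None

sets = [
    (date(2016, 5, 29), "full"),
    (date(2016, 5, 30), "incremental"),
    (date(2016, 6, 5), "full"),
]
print(first_monday(2016, 6))   # 2016-06-06
print(latest_full(sets))       # the June 5 full set
```

This is only a model of the selection logic; in the real environment the selection is done by the linked duplication stage inside each Backup Exec policy.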
My question is: does anybody know a permanent fix for this issue?

DLM issues - Duplicated backup sets
Hi all, I have a complex issue regarding DLM and duplicated backup sets that I am hoping you may be able to help with. My current setup is as follows:

We have two sites, Production and DR. At each site we have a Data Domain set up as OST storage. The production BE implementation is the CASO server, and the DR server is the managed server. Each night we back up the production servers to the production OST device and then duplicate to the DR OST device.

The problem we are facing is that once a backup set has been duplicated to the DR OST device, both the production incrementals and the DR incrementals become dependent on the last duplicated full backup. The full backups at the production site show no dependent backup sets, even though incrementals have since been taken. The result is that when full backups expire on the production OST, they are removed regardless of whether any incremental backups still on the production OST depend on them. Usually, DLM would recognise that there are dependent incrementals and expire the full but not delete it; however, because the dependent sets are being associated with the duplicated copy on the DR OST, this does not happen. This means that in the event of a restore I would have to rely on the DR OST backup sets, which are not local to the site, meaning a slow restore.

I have a case raised with Veritas support, who have replicated this issue in the lab and said it is "by design". I have asked them to explain the logic behind this, but while I wait I was hoping I might get some insight here, along with any workarounds (other than making my incrementals expire before my full set). Thanks in advance!
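The failure mode described above can be sketched as a toy model. This is not Backup Exec's actual DLM code; the class, the sweep rule, and the set names are assumptions made only to show why recording the incrementals as dependents of the DR copy lets the local full be deleted out from under them:

```python
from dataclasses import dataclass, field

@dataclass
class BackupSet:
    name: str
    expired: bool = False
    # Names of sets that depend on this one (e.g. incrementals on a full).
    dependents: list = field(default_factory=list)

def dlm_sweep(sets):
    """Delete expired sets that have no dependents; expired sets with
    dependents are retained ("expired but not deleted"), mirroring how
    DLM is described as normally behaving."""
    return [s for s in sets if not (s.expired and not s.dependents)]

# Reported behaviour: after duplication, the production incremental is
# recorded as a dependent of the DR (duplicated) full, not the local one.
prod_full = BackupSet("PROD-Full")  # dependents list wrongly left empty
dr_full = BackupSet("DR-Full", dependents=["PROD-Incr-1"])
prod_incr = BackupSet("PROD-Incr-1")

prod_full.expired = True
remaining = dlm_sweep([prod_full, dr_full, prod_incr])
# PROD-Full is deleted even though a local incremental still needs it,
# leaving only the slow, off-site DR copy for restores.
```

If the dependency had instead been attached to the local full (non-empty `dependents` on `PROD-Full`), the sweep would have kept it, which is the behaviour the poster expected.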