All of the full backups for our Exchange 2013 mailbox databases except one failed this weekend and I'm not sure why. I've sniffed around the logs, and the errors suggest a possible network issue, but I would like some more experienced opinions on this. I'm providing a summary of our Exchange\NetBackup environment, the BPBRM and BPBKAR logs, and the job status details for one of the failed databases.
NetBackup 5220 appliance master/media with attached dedup disk shelf, running NetBackup 188.8.131.52 (184.108.40.206)
Exchange 2013 DAG running on VMs: 2 mailbox database servers with 12 databases (6 active on each server)
I've been getting successful GRT full backups each weekend for the last 6 weeks. We are currently in the middle of a migration from 2010 to 2013, so the databases have been growing steadily. I have each database set as a new stream in my policy, and each server kicks off 6 streams when the backup runs. I haven't limited the number of streams for either client, since the backups had been working great until this weekend. We haven't upgraded the appliance or clients in any way that would explain the issue starting now.
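For reference, the backup selections in the policy look roughly like this, with a NEW_STREAM directive per database (the DAG and database names below are placeholders, not our actual ones):

```
NEW_STREAM
Microsoft Exchange Database Availability Groups:\DAG01\Microsoft Information Store\MDB01
NEW_STREAM
Microsoft Exchange Database Availability Groups:\DAG01\Microsoft Information Store\MDB02
```

The same pattern repeats for the remaining databases, which is why each server ends up running 6 streams at once.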
I'm seeing 401 and 403 events in the Windows application logs on the mailbox servers as well, but I have the box checked to continue the backup if the consistency check fails. The one database that did back up successfully shows it failed the consistency check as well. I'm going to ask the Exchange admin to look into it regardless.
I ran an incremental backup and all but one of the databases were backed up successfully.
Also, how do I make the BPBRM log display correctly in Notepad? The entries are all mashed together instead of each being on a new line. It came from the appliance (Unix), obviously, so I'm not sure if that makes a difference.
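I assume this is just Unix (LF) vs. Windows (CRLF) line endings, since the appliance is Linux under the hood. If so, a quick conversion on the appliance before copying the file over should make it readable in Notepad (file names here are just examples):

```shell
# Appliance logs use LF-only line endings, which older Notepad renders as one
# long line. Append a carriage return to each line to produce CRLF endings
# (GNU sed interprets \r in the replacement as a carriage return):
sed 's/$/\r/' bpbrm.log > bpbrm_win.log
```

Opening the file in WordPad or Notepad++ instead also works, since both handle LF-only files.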
I will attach the logs in a second post.
I added the bpbkar and bpbrm logs; I'm just waiting for them to be published in my initial post. I filtered the bpbrm log down to the two attempts for one of the databases, and most of the others had the same result. I also included the full log in case I missed something. I attached the status detail of the job for one of the databases as well.
After several attempts, I was able to successfully back up 5 of the 12 databases by Monday morning. I discovered that the Exchange admins had been doing mailbox migrations (20 mailboxes of 5-10 GB each at a time) over the last two weekends, which might have been causing latency/resource issues on the Exchange servers. I'm going to run a test backup tonight, after a reboot of both servers and with no migrations running, to see if that resolves the issue.