05-20-2013 03:56 AM
Environment:
Exchange 2010 Standalone server
NBU 7.0.1 on master/media server and client
W2008 R2 SP1
Exchange policy with 3 Information Stores in Backup Selection with multiplexing and multistreaming enabled.
Backup starts and snapshots are created on Exchange Client.
bpfis log:
05-21-2013 10:56 AM
Hi,
A follow up on my previous post: My customer verified that the MS hotfix was already installed (included in an update).
And then a little more info which might be useful.
Although we are running a DAG with two servers the symptoms are pretty much the same as Marianne's.
However, we got decent performance by disabling the consistency check and using the passive copy for the backup of the 4TB (8 databases and 2 public folders). All completed in 5 hours 30 minutes. Backup is to a Windows MSDP with client-side dedupe disabled (I haven't yet tried it with client-side dedupe enabled).
To me the symptoms looked like some kind of contention (maybe storage or filesystem) while the snapshot verification was taking place. When it was disabled (which you can't do, as yours is a standalone server), all streams started after a few minutes instead of 2-3 hours.
--jakob;
05-22-2013 01:19 AM
Hi All
Thank you for all the advice, pointers, assistance, etc.. etc...
I think Jakob is correct - my gut feel is that we are dealing with resource contention on the Exchange server.
This is what we did yesterday afternoon:
Uninstalled NBU client software and reinstalled.
We kicked off a backup of a single IS.
This time, the Evt Application log looked totally different!
No more "Cmdlet failed" event errors, and the VSS snapshot process could be seen:
Level / Date and Time / Source / Event ID / Task Category / Description:
Information  2013/05/21 05:21:00 PM  MSExchangeIS  9622  Exchange VSS Writer  Exchange VSS Writer (instance 09f024e6-fad8-4620-8922-7be25f4de44c:3) has processed the post-snapshot event successfully.
Information  2013/05/21 05:20:46 PM  MSExchangeIS  9612  Exchange VSS Writer  Exchange VSS Writer (instance 09f024e6-fad8-4620-8922-7be25f4de44c:3) has thawed the database(s) successfully.
Information  2013/05/21 05:20:46 PM  ESE  2003  ShadowCopy  "Information Store (5164) Shadow copy instance 3 freeze ended. For more information, click http://www.microsoft.com/contentredirect.asp."
Information  2013/05/21 05:20:45 PM  MSExchangeIS  9610  Exchange VSS Writer  Exchange VSS Writer (instance 09f024e6-fad8-4620-8922-7be25f4de44c:3) has frozen the database(s) successfully.
Information  2013/05/21 05:20:45 PM  ESE  2001  ShadowCopy  "Information Store (5164) DB1: Shadow copy instance 3 freeze started. For more information, click http://www.microsoft.com/contentredirect.asp."
Information  2013/05/21 05:20:45 PM  ESE  2001  ShadowCopy  "Information Store (5164) Shadow copy instance 3 freeze started. For more information, click http://www.microsoft.com/contentredirect.asp."
Information  2013/05/21 05:20:45 PM  MSExchangeIS  9608  Exchange VSS Writer  Exchange VSS Writer (instance 09f024e6-fad8-4620-8922-7be25f4de44c:3) has prepared for Snapshot successfully.
Information  2013/05/21 05:20:45 PM  MSExchangeIS  9811  Exchange VSS Writer  Exchange VSS Writer (instance 3) has successfully prepared the database engine for a full or copy backup of database 'DB1'.
Information  2013/05/21 05:20:45 PM  ESE  2005  ShadowCopy  "Information Store (5164) Shadow copy instance 3 starting. This will be a Full shadow copy. For more information, click http://www.microsoft.com/contentredirect.asp."
Information  2013/05/21 05:19:13 PM  MSExchangeIS  9606  Exchange VSS Writer  Exchange VSS Writer (instance 09f024e6-fad8-4620-8922-7be25f4de44c) has prepared for backup successfully.
At this point, NBU bpfis was still reporting 'successful snapshot preparation'.
We can only assume that a consistency check was running at this stage, as the 'snapshot created' message only came through an hour later, when the backup started writing.
So, it seems that the consistency check runs a bit faster when there is only a single DB to check, but that still does not help to get 15 DBs backed up in one night. Disabling the consistency check also made no difference, as it seemed to run in any case (probably because this is a standalone server?).
Again - this was no problem when there were 3 large stores on the previous Exchange server....
05-22-2013 02:18 AM
Hi,
Oops, I forgot to mention probably the most important change we made in our setup.
We limited the number of simultaneous streams for each of the DAG members to three and found that it ran much faster.
In your case it should be enough to set the maximum number of streams per policy to a low number (maybe 3 or 4). For a DAG with an even distribution of MBs across the DAG members we found that it was necessary to do it at NBU client level (client attributes in the master server properties). Otherwise the first member could easily take up all the streams for the policy and you were back to square one.
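For the client-level limit, this can also be scripted with the bpclient command on the master server instead of using the GUI host properties. A minimal sketch, assuming a default install path and a hypothetical client name (exchmbx01); check the exact syntax against your NBU version's command reference:

```shell
# Run on the master server. Path and client name "exchmbx01" are
# placeholders for your environment.
cd /usr/openv/netbackup/bin/admincmd

# Add a client attributes entry if one does not exist yet
./bpclient -client exchmbx01 -add

# Cap this client at 3 simultaneous data streams
./bpclient -client exchmbx01 -update -max_jobs 3

# List the client attributes to verify the setting
./bpclient -client exchmbx01 -L
```

Repeat for each DAG member so no single member can grab all of the policy's streams.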
You need to know a little more about the underlying storage array used for the Exchange databases. If it is using the same total number of spindles for the LUNs as before, then you are definitely creating contention by running 5 x the number of read streams against the same number of hard-disk read heads.
--jakob;