11-24-2017 09:58 PM
Hello
We have a Media Server Deduplication Pool (MSDP) for backup to disk.
Each job now follows the same pattern: it writes the first 51,200,000 KB (~50 GB) of data in about 15 minutes, then hangs for about 1.5-2 hours, then writes the next 50 GB, hangs again, and so on. This flow is the same for all policy types: VMware, Exchange, file, catalog.
Part of the status log of a catalog backup:
11/22/2017 06:21:30 - Info bptm (pid=11776) start backup
11/22/2017 07:41:37 - begin writing
11/22/2017 13:57:56 - Info bptm (pid=11776) waited for full buffer 25123 times, delayed 139028 times
11/22/2017 13:57:56 - Info bpbkar32 (pid=15148) bpbkar waited 3838 times for empty buffer, delayed 3991 times.
11/22/2017 13:58:13 - Info bptm (pid=11776) EXITING with status 0 <----------
So a catalog backup of ~220 GB takes about 7.5 hours instead of 40-50 minutes.
There are no errors in the status log. As I understand it, the job just waits out some timeout and then writes the next chunk.
As far as I can tell, this value (50 GB) correlates only with the Fragment Size of the Deduplication Pool storage unit.
What is the cause of this behavior, and how can I fix it?
11-25-2017 12:31 AM - edited 11-25-2017 12:32 AM
11/22/2017 13:57:56 - Info bptm (pid=11776) waited for full buffer 25123 times, delayed 139028 times
Delayed 139028 times: with default values that is around 34 minutes of waiting. Since bptm is waiting for a full buffer, the client is not sending data to the media server fast enough.
It might be due to the buffer settings or a network speed issue.
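A rough check of the "~34 min" estimate above. The 15 ms per-delay interval is an assumption (it reproduces the poster's figure, but confirm the actual value for your release in the NetBackup tuning documentation):

```python
# Estimate total time bptm spent waiting from the "delayed N times" counter.
# ASSUMPTION: each "delayed" increment is the default 15 ms wait interval.

def bptm_wait_minutes(delay_count: int, delay_ms: float = 15) -> float:
    """Total time bptm spent delayed, in minutes."""
    return delay_count * delay_ms / 1000 / 60

print(round(bptm_wait_minutes(139028), 1))  # ~34.8 minutes, matching the log
```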
What are the values of:
SIZE_DATA_BUFFERS
NUMBER_DATA_BUFFERS
Have you tested the network speed between the media server and the clients?
How many parallel jobs are running to the disk pool at the same time?
Have you checked the performance with fewer jobs sending to the MSDP?
11-25-2017 01:04 AM
This is the status log of a catalog backup.
Master (catalog) and media (deduplication pool) run on the same server with local disk drives.
NUMBER_DATA_BUFFERS 1024
SIZE_DATA_BUFFERS 131072
>> How many parallel jobs are running to the disk pool at the same time?
It varies; sometimes many, sometimes few. Right now it is 6 jobs. As you can see in the screenshot, the jobs advance in multiples of 50 GB.
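For context, the memory footprint implied by the buffer values above can be estimated. This uses the common rule of thumb that each active stream allocates NUMBER_DATA_BUFFERS buffers of SIZE_DATA_BUFFERS bytes; exact shared-memory usage varies by NetBackup version:

```python
# Approximate shared memory used by bptm data buffers per backup stream.
NUMBER_DATA_BUFFERS = 1024
SIZE_DATA_BUFFERS = 131072  # bytes (128 KB)

per_stream_mb = NUMBER_DATA_BUFFERS * SIZE_DATA_BUFFERS / 1024**2
print(per_stream_mb)       # 128.0 MB per stream
print(per_stream_mb * 6)   # 768.0 MB for the 6 concurrent jobs mentioned
```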
11-26-2017 11:11 PM
I read the NetBackup™ Deduplication Guide, Release 7.7.
RAM requirements:
4 GB for 4 TB of storage, up to 32 GB for 64 TB of storage.
Storage has now grown to 64 TB, and the server has 24 GB of RAM.
I will try increasing it to 48 GB.
Can a lack of memory cause such behavior?
The deduplication pool database is placed on SSD storage, and as far as I can see its load is very small.
11-27-2017 12:46 AM
Insufficient memory could very well be the cause.
This TN is actually more accurate: https://www.veritas.com/content/support/en_US/doc/25074086-127355784-0/v25295647-127355784
- at least 1 GB of RAM for each TB of dedupe storage.
That is over and above the 8 GB of RAM needed for master server processes.
Experience has shown that 1.5 GB of RAM per TB is advisable.
The MSDP 'write' process can be followed in the bptm log.
Level 3 logging should be sufficient.
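The sizing rule described above can be sketched as a quick calculation (the 1 GB/TB and 8 GB figures come from the thread; applying them to the poster's 64 TB pool is my arithmetic):

```python
# RAM sizing rule from the TN: 1 GB per TB of MSDP storage (1.5 GB/TB
# advisable), plus 8 GB for master server processes.

def msdp_ram_gb(storage_tb: float, gb_per_tb: float = 1.0, master_gb: float = 8) -> float:
    return storage_tb * gb_per_tb + master_gb

print(msdp_ram_gb(64))       # 72.0 GB minimum for the 64 TB pool here
print(msdp_ram_gb(64, 1.5))  # 104.0 GB advisable
```

By either rule, the 24 GB (or even 48 GB) configured in this thread falls well short for a 64 TB pool.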
11-28-2017 08:46 PM
It seems the problem really was RAM; the jobs now run fine without stalls.
I increased the physical RAM on the server to 48 GB, but Windows Server 2008 R2 Standard limits usable RAM to 32 GB, so I want to upgrade the server to 2012 R2, which has a much higher limit.
On a demo stand I tested an in-place 2008 R2 to 2012 R2 upgrade, and it failed every time.
So my only options are reinstalling the OS or migrating to a new server. In this situation, is a disaster catalog recovery the only right path?