
Job stop every 50Gb

Hello

We have Media Server Deduplication Pool for backup to disk.

Now every job follows the same sequence: it writes the first 51,200,000 KB (about 50 GB) of data in roughly 15 minutes, then hangs for about 1.5-2 hours, then writes the next 50 GB, hangs again, and so on. This pattern is the same for all policy types: VMware, Exchange, file, and catalog.

Part of the status log of a catalog backup:
11/22/2017 06:21:30 - Info bptm (pid=11776) start backup
11/22/2017 07:41:37 - begin writing
11/22/2017 13:57:56 - Info bptm (pid=11776) waited for full buffer 25123 times, delayed 139028 times
11/22/2017 13:57:56 - Info bpbkar32 (pid=15148) bpbkar waited 3838 times for empty buffer, delayed 3991 times.
11/22/2017 13:58:13 - Info bptm (pid=11776) EXITING with status 0 <----------

So a ~220 GB catalog backup takes about 7.5 hours instead of 40-50 minutes.

There are no errors in the status log. As I understand it, the job just waits out some timeout and then writes the next chunk.

As I understand it, this value (50 GB) correlates only with the Fragment Size of the Deduplication Pool STU.

What is the cause of this behavior, and how can I fix it?

7 Replies

Re: Job stop every 50Gb

11/22/2017 13:57:56 - Info bptm (pid=11776) waited for full buffer 25123 times, delayed 139028 times

Delayed 139,028 times: with the default delay values that works out to around 34 minutes. Since bptm is waiting for a full buffer, the client is not sending data to the media server fast enough.
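That ~34-minute figure can be reconstructed from the counter in the log line, assuming the common default bptm retry delay of 15 ms per "delayed" wait (an assumption; verify the actual delay setting on your media server):

```python
# Sketch: wall-clock time bptm spent waiting on buffers, assuming the
# common default of 15 ms per "delayed" retry (not confirmed from this
# server's configuration).
DELAY_SECONDS = 0.015        # assumed per-retry wait
delays = 139_028             # "delayed 139028 times" from the bptm log

lost_minutes = delays * DELAY_SECONDS / 60
print(f"~{lost_minutes:.1f} minutes lost waiting for full buffers")
```

With these assumptions it comes out to roughly 34.8 minutes, matching the estimate above.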

It might be due to buffer settings or network speed issues.

what are the values in 

SIZE_DATA_BUFFERS
NUMBER_DATA_BUFFERS

Have you tested the network speed between the media server and clients?

How many parallel jobs are running at the same time to the disk pool?

Have you checked the performance with fewer jobs sending to the MSDP?


Re: Job stop every 50Gb

This is the status log of a catalog backup.
It is the same server with local disk drives: master (catalog) and media (deduplication pool).

NUMBER_DATA_BUFFERS 1024
SIZE_DATA_BUFFERS 131072
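As a sanity check, those two values imply the following shared-memory footprint for one set of data buffers (a rough sketch; total usage scales with the number of concurrent streams):

```python
# Sketch: memory held by one set of bptm data buffers with the
# settings quoted above.
NUMBER_DATA_BUFFERS = 1024
SIZE_DATA_BUFFERS = 131_072              # bytes (128 KB per buffer)

bytes_per_set = NUMBER_DATA_BUFFERS * SIZE_DATA_BUFFERS
print(f"{bytes_per_set / 2**20:.0f} MiB per buffer set")
```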

>>How many parallel jobs are running at the same time to the disk pool?
It varies: sometimes many, sometimes not.
Right now it is 6 jobs. As you can see in the screenshot, job sizes are multiples of 50 GB.

 

stu.png

Re: Job stop every 50Gb

Have you checked the NBU Dedupe Guide for server and disk resource requirements?

Re: Job stop every 50Gb

I read the NetBackup™ Deduplication Guide, Release 7.7.

RAM requirements:
4 GB for 4 TB of storage, up to 32 GB for 64 TB of storage.

Storage has now grown to 64 TB, and RAM is 24 GB.
I will try increasing it to 48 GB.

Can a lack of memory cause such behavior?
The deduplication pool database is placed on SSD storage, and as far as I can see, the load on it is very small.

Accepted Solution!

Re: Job stop every 50Gb

Insufficient memory could very well be the cause.

This TN is actually more accurate: https://www.veritas.com/content/support/en_US/doc/25074086-127355784-0/v25295647-127355784

- at least 1 GB of RAM for each TB of dedupe storage.
That is over and above the 8 GB of RAM needed for master server processes.
Experience has shown that 1.5 GB of RAM per TB is advisable.
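Applying those rules of thumb to the 64 TB pool mentioned earlier in the thread (simple arithmetic, not vendor-verified sizing):

```python
# Sketch: MSDP RAM sizing per the rule of thumb above
# (1 GB/TB minimum, 1.5 GB/TB advisable, plus 8 GB for the
# master server processes).
DEDUPE_TB = 64
BASE_GB = 8                  # master server processes

minimum_gb = DEDUPE_TB * 1.0 + BASE_GB
advisable_gb = DEDUPE_TB * 1.5 + BASE_GB
print(f"minimum: {minimum_gb:.0f} GB, advisable: {advisable_gb:.0f} GB")
```

By this arithmetic, the 24 GB in place was well short of the minimum for a 64 TB pool.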

The MSDP 'write' process can be followed in the bptm log.
Level 3 logging should be sufficient.

 



Re: Job stop every 50Gb

It seems the problem really was RAM; jobs now run fine without stops.

I increased the physical RAM on the server to 48 GB. But I have 2008 R2 Standard, which limits usable RAM to 32 GB. So I want to upgrade the server to 2012 R2, which has no such RAM limit.

On a demo stand I tested an in-place upgrade from 2008 R2 to 2012 R2, and it always failed.

So my only option is to reinstall the OS or migrate to a new server. In this situation, is disaster catalog recovery the only right path?


Re: Job stop every 50Gb

Correct.
It is best to have the catalog backup on BasicDisk or tape.
Be sure that you have the DR file somewhere safe.

For the NBU reinstall, use the exact same NBU version and installation location.