Error 89 on 90 percent of our backups last night - help required with NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS settings

Trevor_Jackson1
Level 4
Partner
 Hi,

We are running NetBackup 6.5.4 on Solaris 10 SPARC with 8GB RAM.

We currently have performance issues with our duplication to tape, which we are trying to fix by tuning NET_BUFFER_SZ, SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS.

Using the calculation from page 110 of the performance tuning guide:
Shared memory required = (number_data_buffers * size_data_buffers) * number_tape_drives * max_multiplexing_setting
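For anyone following along, these values are set via touch files that bptm reads at job start. A minimal sketch with our current values, assuming the default install path on Solaris (the NET_BUFFER_SZ figure is just illustrative):

    # Touch files read by bptm at job start (default install path assumed).
    echo "1048576" > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS    # 1 MB per buffer
    echo "256"     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS  # 256 buffers per job
    echo "262144"  > /usr/openv/netbackup/db/config/NET_BUFFER_SZ        # network buffer size (illustrative value)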

Going by the documentation, our values appear to be correct for the amount of memory we are using. However, it would seem I am completely wrong.

We have created a Solaris project for NetBackup, set up with 8GB of shared memory, and this seemed to get rid of the error 89 messages.
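For reference, a hedged sketch of how such a cap can be defined with Solaris 10 resource controls (the project name and the 8GB figure are ours; adjust for your system):

    # Create a project with a privileged shared-memory cap of 8 GB.
    projadd -c "NetBackup" -K "project.max-shm-memory=(privileged,8G,deny)" netbackup
    # Confirm the resource control is in effect for the project:
    prctl -n project.max-shm-memory -i project netbackup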

I have looked at the bptm log file, and it seems each backup job was allocated shm_size = 268441606 bytes, which soon used up the available memory.

Our settings are:
(256 * 1048576) * 2 * 16 = 8589934592

Now, from the bptm log file, the amount of memory allocated to each individual job is 256 * 1048576 = 268435456 bytes. Either the documentation is wrong or I have this completely mixed up somehow.
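A quick back-of-envelope check of what the log implies (using expr, since this is stock Solaris sh):

    expr 256 \* 1048576      # 268435456 bytes, ~256 MB per bptm job
    expr 2 \* 16             # 32 concurrent streams at 2 drives x 16-way multiplexing
    expr 32 \* 268435456     # 8589934592 bytes: the full 8 GB, with nothing left for the OS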

Can anyone help?

Thanks

Trevor


6 REPLIES

Stumpr2
Level 6

Did you use 1 KB = 1024 bytes?

Will_Restore
Level 6
256 buffers x 1MB each seems pretty extreme! The example in the book is 16 buffers x 64k.

Besides, you can't allocate all system memory to NetBackup. How's the OS supposed to run?
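Quick arithmetic on the book's example versus yours (illustrative only):

    expr 16 \* 65536         # 1048576: the book's example is ~1 MB per stream
    expr 256 \* 1048576      # 268435456: yours is ~256 MB per stream, 256x larger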

Mouse
Moderator
Partner    VIP    Accredited Certified
If your system only has 8GB, you cannot define all of that memory as shared memory (drivers, NBU itself and other daemons also want some memory to run).

Trevor_Jackson1
Level 4
Partner
Just thought I would post an update on this item whilst online:

The amount and size of the buffers were not the issue here. The system was set to use 5GB of the 8GB available, which is more than enough to process the backups.

However, due to recent job session splitting exercises, the policies were able to run too many jobs at one time. We reduced the number of jobs per policy to 16 at any one time, and now the backups fly like the wind!!!
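For anyone wanting to do the same, the cap is the policy's "Limit jobs per policy" attribute. If memory serves, it can also be set from the command line, something like the sketch below, though verify the option against the bpplinfo man page for your release (the policy name here is a placeholder):

    # Assumed syntax; verify -max_jobs against your NetBackup release.
    /usr/openv/netbackup/bin/admincmd/bpplinfo OUR_POLICY -modify -max_jobs 16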

David_McMullin
Level 6
PLEASE NOTE!

Veritas has confirmed with me that the formula above is not all-inclusive.

Instead of:
Shared memory required = (number_data_buffers * size_data_buffers) * number_tape_drives * max_multiplexing_setting

it should be:
Shared memory required = (number_data_buffers * size_data_buffers) * number_tape_drives * max_multiplexing_setting * #ALL_LOCAL_DRIVES

Windows backups that run ALL_LOCAL_DRIVES act as a multiplexing multiplier as well, since each local drive becomes its own stream.

It can overrun your memory.
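A hypothetical worked example of the impact, reusing Trevor's numbers from earlier in the thread and assuming 4 local drives per Windows client:

    expr 256 \* 1048576 \* 2 \* 16 \* 4   # 34359738368 bytes, ~32 GB: four times the original estimate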

Trevor_Jackson1
Level 4
Partner
That's fantastic news, so why is that not the case in the Symantec documentation... technical writers, eh! LOL

So does #ALL_LOCAL_DRIVES equal the number of drives, or the number of jobs?

If it's jobs, I figured as much after a long, drawn-out process-monitoring investigation... bptm processes were using more memory than the calculation would imply. However, if you take ALL_LOCAL_DRIVES into account, I can see how this would affect the number greatly...

Thanks for the tip though, David ;)