Please review the buffer settings described below.
Modification:
For most configurations, the default NetBackup buffer settings are correct and there is no need to adjust them for performance. Furthermore, there are factors outside of NetBackup that affect performance and should be reviewed. Some of these external factors include Host Bus Adapter (HBA) cards, SCSI cards, network interface card (NIC) settings, client disk I/O speed, network latency, and tape drive I/O. All of these should be reviewed to determine their respective impact on backup and restore speeds before any attempt to tune NetBackup.
On a Windows server, four different buffer settings can be modified to enhance backup performance. Those settings are:
- NUMBER_DATA_BUFFERS: The number of shared memory buffers NetBackup uses to stage data before writing it to the tape drives. The default value is 16.
- SIZE_DATA_BUFFERS: The size, in bytes, of each individual data buffer; the total buffer memory is SIZE_DATA_BUFFERS multiplied by NUMBER_DATA_BUFFERS. The default value is 65536 (64 KB).
- NET_BUFFER_SZ: The size of the network receive buffer on the media server, which receives data from the client. The default value is 256 KB.
- Buffer_size: The size of each data packet sent from the client to the media server. The default value is 32 KB.
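The relationship between the first two settings is simple arithmetic; the sketch below (plain Python, not a NetBackup API; the function name is my own) shows how much shared memory one drive's buffer pool consumes under the defaults and under a common tuning experiment.

```python
def total_buffer_memory(number_data_buffers: int = 16,
                        size_data_buffers: int = 65536) -> int:
    """Shared memory (bytes) one tape drive's buffer pool uses:
    NUMBER_DATA_BUFFERS x SIZE_DATA_BUFFERS."""
    return number_data_buffers * size_data_buffers

# Defaults: 16 buffers x 64 KB = 1 MB per active drive.
default_bytes = total_buffer_memory()

# A larger experiment: 32 buffers of 256 KB = 8 MB per drive.
tuned_bytes = total_buffer_memory(32, 262144)
```

Remember that this memory is consumed per concurrently active drive, so raising either value multiplies the media server's memory footprint accordingly.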
Overview:
When a backup is initiated, the client packages data in chunks of the size specified by Buffer_size and sends them to the media server, which buffers the incoming data in its network receive buffer (NET_BUFFER_SZ). When that buffer is full, the data is transferred into the shared memory region created by the combination of NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS. As soon as one of the data buffers is full, and assuming the drive is ready to write, its contents are written to the tape drive.
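The data path above can be sketched as a toy simulation (illustrative Python only, not NetBackup code; the function and constants simply mirror the settings named above): the client emits Buffer_size chunks, the media server accumulates them until the network buffer fills, then moves the data into fixed-size data buffers that are "written to tape" as each one fills.

```python
BUFFER_SIZE = 32 * 1024        # client send chunk (Buffer_size)
NET_BUFFER_SZ = 256 * 1024     # media server network receive buffer
SIZE_DATA_BUFFERS = 64 * 1024  # one shared-memory data buffer

def simulate_backup(total_bytes: int) -> int:
    """Return how many full data buffers were 'written to tape'."""
    net_buffered = 0   # bytes sitting in the network receive buffer
    data_buffered = 0  # bytes sitting in the shared-memory buffers
    writes = 0
    sent = 0
    while sent < total_bytes:
        chunk = min(BUFFER_SIZE, total_bytes - sent)
        sent += chunk
        net_buffered += chunk
        if net_buffered >= NET_BUFFER_SZ:
            # Network buffer full: drain it into the data buffers.
            data_buffered += net_buffered
            net_buffered = 0
            while data_buffered >= SIZE_DATA_BUFFERS:
                # A data buffer is full: write it to the drive.
                data_buffered -= SIZE_DATA_BUFFERS
                writes += 1
    return writes

# 1 MB of client data -> 4 network-buffer drains -> 16 data-buffer writes.
```

The point of the model is that each stage only hands data forward when its own buffer fills, which is why a slow stage anywhere in the chain causes waits on the other side.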
To troubleshoot performance issues related to buffer settings, enable and review the bptm log on the media server and the bpbkar log on the client. On the media server, go to the \Veritas\NetBackup\Logs directory and create a bptm folder. On the client, create a bpbkar folder in the same \Veritas\NetBackup\Logs directory.
Examine the bptm and bpbkar logs for references to waits and buffers, and compare one side with the other. In isolation, each side's wait count means little; only by comparing the two sides can you deduce where a potential bottleneck lies. If the media server (bptm) frequently waited for full buffers, the client or network cannot supply data fast enough; if the client (bpbkar) frequently waited for empty buffers, the media server or tape drive is the bottleneck.
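A comparison like this can be scripted. The sketch below is a rough illustration: the sample log lines are modeled on typical bptm/bpbkar wait messages, but the exact wording varies by NetBackup version, so treat the regular expression as an assumption to adapt to your own logs.

```python
import re

# "waited for full buffer N times" -> consumer starved (source too slow)
# "waited for empty buffer N times" -> producer starved (destination too slow)
WAIT_RE = re.compile(r"waited for (full|empty) buffer (\d+) times")

def count_waits(log_text: str) -> dict:
    """Sum the full-buffer and empty-buffer wait counts found in a log."""
    totals = {"full": 0, "empty": 0}
    for kind, n in WAIT_RE.findall(log_text):
        totals[kind] += int(n)
    return totals

# Hypothetical excerpts from each side's log:
bptm_log = "... waited for full buffer 2200 times, delayed 4400 times ..."
bpbkar_log = "... waited for empty buffer 15 times, delayed 30 times ..."

media_waits = count_waits(bptm_log)["full"]      # media server waiting on data
client_waits = count_waits(bpbkar_log)["empty"]  # client waiting on buffers

if media_waits > client_waits:
    verdict = "client/network side is the likely bottleneck"
else:
    verdict = "media server/tape side is the likely bottleneck"
```

In this hypothetical sample the media server waited far more often than the client, which points the investigation toward client disk I/O or the network rather than the tape path.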
Warmest Regards
Ankur Kumar