Netbackup duplication data chunk size

hammers09
Level 4
Partner Accredited Certified

Does anybody know how NetBackup decides the data chunk size used to read data from AdvancedDisk when duplicating to tape, and whether or not it can be modified?

 

For example, we are currently writing backups to AdvancedDisk pools, and we can see on the disk array that data is being written in 512 KB chunks. We believe this is because bpbkar passes the data to the media server in 512 KB chunks.

However, when we duplicate to tape, the array receives read requests for the data in 64 KB chunks, which is highly inefficient because it takes far more IOPS to get the same data off disk.
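To put rough numbers on it (illustrative figures, not measurements from our array), sustaining the same throughput at one eighth of the I/O size needs eight times the IOPS:

500 MB/s at 512 KB per read = ~1,000 IOPS
500 MB/s at 64 KB per read = ~8,000 IOPS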

 

I need to know what determines the read chunk size and whether or not it can be modified.


7 REPLIES

marekkedzierski
Level 6
Partner

It looks like you have the default buffer configuration and your environment is not tuned. All of these settings are described in this document: http://www.symantec.com/business/support/index?page=content&id=TECH1724

You can try with:

SIZE_DATA_BUFFERS = 262144

NUMBER_DATA_BUFFERS = 128

SIZE_DATA_BUFFERS_DISK = 1048576

NUMBER_DATA_BUFFERS_DISK = 16
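These are plain one-line touch files read by bptm at job start; on a UNIX media server they live under /usr/openv/netbackup/db/config (on Windows, under <install_path>\NetBackup\db\config). A minimal sketch, assuming a UNIX media server:

cd /usr/openv/netbackup/db/config
echo 262144 > SIZE_DATA_BUFFERS
echo 128 > NUMBER_DATA_BUFFERS
echo 1048576 > SIZE_DATA_BUFFERS_DISK
echo 16 > NUMBER_DATA_BUFFERS_DISK

No restart should be needed; the values should be picked up by the next job.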

 

"For example we are currently writing backups to advanced disk pools and we can see on the disk array that data is being written in 512k chuncks." - It's configured in SIZE_DATA_BUFFERS_DISK file.

"However when we duplicate to tape the array is receiving read requests to pass the data in 64k chunks" - It's configured in SIZE_DATA_BUFFERS file. Be careful, SIZE_DATA_BUFFERS is related to tape drives. In other words - during duplication disk read buffer is equal to tape write buffer. You can't set disk read buffer to 1024k and tape write buffer to 512k (for example).

marekk

hammers09
Level 4
Partner Accredited Certified

No, these buffer files are already in place.

These buffer files determine how NetBackup buffers the data before passing it to disk/tape, and we are using 32 x 1 MB buffers for disk backups and 32 x 128 KB buffers for tape backups. When backing up directly to tape or disk we can see in the bptm log that these values are being used, and we get great performance. It's only duplications that are slow.
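For anyone wanting to check the same thing: with bptm debug logging enabled, the buffer values a job actually used are logged near the start of the job; in our logs the lines look like "io_init: using ... data buffer size". Something like this (UNIX log path shown) pulls them out:

grep io_init /usr/openv/netbackup/logs/bptm/log.*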

The array manufacturer has examined the performance data and can see that writes to the array occur in 512 KB blocks (so I am assuming that when NetBackup fills our 1 MB buffers, it fills them in the 512 KB chunks used by the bpbkar process).

However, when the reads come off the array for duplications, they are in 64 KB chunks, which Symantec support tell me is the default for bpduplicate operations. However, they said they had no idea how it could be changed.

I know it's not the array, because if I back up data from the array it is read in 512 KB chunks, in line with the bpbkar process.

 

marekkedzierski
Level 6
Partner

There is no option to change this. I was trying to find a solution to this problem a few months ago, but the only way to increase performance is to add more physical tape drives, because there is no multiplexing during duplication. Duplication is similar to the restore process: NetBackup reads each image file by file. If a backup contains a lot of small files, duplication is slow; if it contains large files, duplication is fast.
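As a rough back-of-an-envelope illustration (the per-file cost is a made-up figure, just to show the shape of the problem):

1,000,000 small files x 5 ms per-file overhead = ~5,000 s (~83 minutes of overhead alone)
1,000 large files x 5 ms per-file overhead = ~5 s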

hammers09
Level 4
Partner Accredited Certified

I have found a parameter in the bptm log called DUP_BLKSIZE, and it is set to 65536 (64 KB).

I have asked Symantec support to speak with engineering to see if this can be modified.
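For anyone who wants to look for it in their own environment, it appears in the bptm debug log for the duplication job, so something like this will find it (UNIX log path shown):

grep DUP_BLKSIZE /usr/openv/netbackup/logs/bptm/log.*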

errrlog
Level 4

Curious to know where NetBackup picks up the DUP_BLKSIZE setting.

AnwarMD
Level 2
Employee Accredited Certified
Does this help with slow duplication from AdvancedDisk? http://www.symantec.com/docs/TECH153154

hammers09
Level 4
Partner Accredited Certified

At the end of the day, the slow duplication was down to badly fragmented disk pools.

For an immediate improvement in performance, I lowered the high water mark to force regular clear-downs, ensuring there was always sufficient free space to hold a full backup.
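For reference, the high water mark can be changed with nbdevconfig; something along these lines (syntax from memory, so check the command reference for your version, and the pool name is a placeholder):

nbdevconfig -changedp -stype AdvancedDisk -dp <disk_pool_name> -hwm 75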

Then I ran Diskeeper to defragment the disk pool areas.