NetBackup duplication data chunk size
Does anybody know how NetBackup decides the data chunk size used to read data from AdvancedDisk when duplicating to tape, and whether or not this can be modified?
For example, we are currently writing backups to AdvancedDisk pools, and we can see on the disk array that data is being written in 512 KB chunks. We believe this is because bpbkar passes the data to the media server in 512 KB chunks.
However, when we duplicate to tape, the array receives read requests for the data in 64 KB chunks, which is highly inefficient because it takes far more IOPS to read all of the data from disk.
I need to know what determines the read chunk size and whether or not it can be modified.
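Our best guess so far is that this comes down to the bptm buffer settings. On a UNIX media server the tuning touch files live under /usr/openv/netbackup/db/config (on Windows, under the NetBackup\db\config folder of the install path), though which of them governs the read side of a disk-to-tape duplication is exactly what we are unsure of. Something like:

    # On the media server (paths assume UNIX; values are in bytes)
    cd /usr/openv/netbackup/db/config

    # Buffer size for tape I/O - default 65536 (64 KB), must be a multiple of 1024
    echo 262144 > SIZE_DATA_BUFFERS

    # Buffer size for disk I/O (AdvancedDisk)
    echo 262144 > SIZE_DATA_BUFFERS_DISK

    # Number of in-memory buffers per stream
    echo 64 > NUMBER_DATA_BUFFERS
    echo 64 > NUMBER_DATA_BUFFERS_DISK

The values are picked up at the start of the next job, and the bptm log should confirm what was actually used (look for the "using ... data buffer size" lines). Can anyone confirm whether SIZE_DATA_BUFFERS_DISK is what bptm uses when reading from the disk pool during a duplication?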
At the end of the day, the slow duplication was down to badly fragmented disk pools.
For an immediate improvement in performance, I lowered the high water mark (see the command below) to force regular clear-downs, ensuring there was always sufficient free space to hold a full backup.
I then ran Diskeeper to defragment the disk pool areas.
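For anyone finding this later, the water marks on a disk pool can be changed with nbdevconfig; for example (the pool name here is a placeholder, substitute your own):

    # Lower the high water mark on an AdvancedDisk pool to 80%
    /usr/openv/netbackup/bin/admincmd/nbdevconfig -changedp -stype AdvancedDisk -dp my_adv_disk_pool -hwm 80

    # Verify the change
    /usr/openv/netbackup/bin/admincmd/nbdevquery -listdp -stype AdvancedDisk -U

With the lower high water mark, NetBackup starts its clean-up of expired images sooner, which in our case was what kept enough free space available to hold a full backup.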