
Fragment size/buffer size to increase backup speed

Amit_Karia
Level 6

Hi All,

Env :

Master: Solaris 10, clustered server; Quantum library with LTO-4 drives, NDMP drives, and a DSU.

I have been facing an issue where LTO-4 tapes fill up before even 800 GB has been written. Multiplexing is set to 12 for the Standard and NT backups.

I thought of changing the storage unit fragment size to 5 GB (earlier it was 1 TB), as it might help us write more data to each tape, but after making the change I found the backup speed dropped drastically.

No buffer parameters are set. In this scenario I would like to achieve two things:

1) Use the media capacity to full effect (each tape should hold at least 800 GB before being marked full).

2) Increase the speed of the backups.

Please advise.

Thanks,

Amit

 

2 Replies

Nicolai
Moderator
Partner VIP

The #1 backup tweak in NetBackup is BPTM buffer tuning. Please see the NetBackup Backup Planning and Performance Tuning Guide, page 123 and onward.

http://www.symantec.com/docs/TECH62317 (if the link brings you to the product main page, go back and re-click the link).
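
As a starting point, the buffer settings live in touch files on the media server under /usr/openv/netbackup/db/config (SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS). Here is a minimal sketch of what tuning them looks like; the values, the drive count, and the shared-memory formula are illustrative assumptions from memory, so verify them against the guide and the bptm log before applying anything:

    from pathlib import Path

    CONFIG_DIR = Path("/usr/openv/netbackup/db/config")

    size_data_buffers = 262144    # 256 KB per buffer (illustrative value only)
    number_data_buffers = 64      # buffers per drive/stream (illustrative value only)
    tape_drives = 4               # assumed number of concurrently writing drives
    multiplexing = 12             # MPX value from the original post

    def write_touch_file(name, value):
        # Each touch file holds a single integer that bptm reads at job start.
        (CONFIG_DIR / name).write_text(str(value) + "\n")

    # Uncomment on the media server once you have settled on values:
    # write_touch_file("SIZE_DATA_BUFFERS", size_data_buffers)
    # write_touch_file("NUMBER_DATA_BUFFERS", number_data_buffers)

    # Rough shared-memory estimate (formula as I recall it from the tuning guide):
    shm_bytes = size_data_buffers * number_data_buffers * tape_drives * multiplexing
    print("Estimated shared memory: %.0f MiB" % (shm_bytes / 2**20))

Larger and more numerous buffers generally help keep LTO-4 drives streaming, but they cost shared memory, so check the estimate against what the media server actually has free.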

Fragment size does not improve backup speed. The fragment size defines how big a chunk of data NetBackup writes before starting the next one. A big fragment size saves space in the catalog, but it does impact restore speed: if the fragment size is 1 TB, NetBackup may need to read through up to 1 TB of data just to restore one file. A fragment size of 50 GB seems to be a good trade-off.
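
To put the restore-side cost in rough numbers, here is a simplified sketch; the 120 MB/s figure is the nominal LTO-4 native transfer rate, and the model ignores fast-locate positioning, so real restores will usually be quicker than the worst case shown:

    # Worst-case tape reading for a single-file restore vs. fragment size.
    GB = 1024**3
    lto4_native_rate = 120 * 1024**2   # ~120 MB/s nominal native LTO-4 rate

    for fragment_gb in (5, 50, 1024):
        seconds = fragment_gb * GB / lto4_native_rate
        print("%5d GB fragment -> up to %4.0f min of tape reading for one file"
              % (fragment_gb, seconds / 60))

The 5 GB and 1 TB cases match the settings mentioned in the question; 50 GB sits in between, which is why it tends to be a reasonable compromise between catalog size and restore time.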

Also use a cleaning tape from time to time; it seems to improve read/write speed.

Andy_Welburn
Level 6

The amount of data that you can store on a tape will ultimately depend upon the type of data you are saving and how well it compresses (or not).

My response below was to a similar query some time ago (and there have been many similar ones since); a quick LTO-4 capacity sketch follows the quote.

"...
a small fraction of our media is FULL at less than the 'advertised' 400 GB native capacity, and in some instances only ~200 GB fills the tape. Whereas we also have some reported at more than 1 TB!

The amount of compression achieved will ultimately depend upon the type of data - some types of data compress much more readily than others, and in some instances the size can even increase.

"...
The most common cause of poor data compression is that the data simply does not contain the redundancy required for the compression techniques to reduce the storage space needed. Since it is industry standard to quote tape drive capacity assuming an average 2:1 data compression ratio, and this is around the level most customers experience, falling short of it can lead to some customer dissatisfaction. This typically affects data types that already use efficient techniques to minimise their size (examples are MP3 or JPG files), and the only solution is to accept that the data cannot be compressed and use the required amount of media to store it.
..."

http://h20000.www2.hp.com/bizsupport/TechSupport/D...

..."
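
Putting that into LTO-4 numbers, a back-of-the-envelope sketch (the compression ratios are assumptions for illustration, not measurements of your data):

    # Effective LTO-4 capacity at a few assumed compression ratios.
    native_gb = 800        # LTO-4 native capacity
    for label, ratio in (("already-compressed data (JPG/MP3), slight expansion", 0.95),
                         ("incompressible data, no expansion", 1.0),
                         ("'typical' mixed data at the advertised 2:1", 2.0)):
        print("%-52s -> ~%4.0f GB per tape" % (label, native_gb * ratio))

So if a large part of what lands on a given tape is already compressed, filling before the 800 GB native mark is entirely possible, and no NetBackup setting will change that.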