Catalog backup Slowing down each day

Nizmo
Level 3

Hi. We recently migrated our NetBackup environment over to new hardware. Everything has gone smoothly, and the catalog recoveries etc. were very fast.

 

On the first few days the catalog backup ran at around 76 MB/sec; however, over a number of days the catalog backup has lost almost 20 MB/sec of speed.

 

These are the waits that occurred on the day it ran at 76 MB/sec:

 

03/13/2013 20:58:39 - Info bpbkar (pid=10329) bpbkar waited 903 times for empty buffer, delayed 197032 times
03/13/2013 20:58:39 - Info bptm (pid=10330) waited for full buffer 159111 times, delayed 237019 times
 
 
And this is a recent one, which ran at 56 MB/sec:
 
3/14/2013 11:24:47 - Info bpbkar (pid=57872) bpbkar waited 560 times for empty buffer, delayed 121348 times
03/14/2013 11:24:47 - Info bptm (pid=57873) waited for full buffer 261375 times, delayed 379837 times
 
 
Today's catalog backup is running at 53 MB/sec, so it seems to be getting slower each day. We have hardly anything running while the catalog backup is being taken. Hardware-wise, the masters are DL580s, the catalog resides on the SAN using the Veritas File System (VxFS), and the OS on the master is Red Hat 6.3.
 
 

 

2 REPLIES

Steve_Lussiaud
Level 4

Doing your catalog backup during a period of low activity is already a good thing.

If you can, could you run a test catalog backup to tape? Your LUN is perhaps high on I/O, which would slow the speed down.
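If it helps, a quick way to check that while the job is running (assuming the sysstat package is installed and the catalog sits under the default /usr/openv/netbackup/db) is something like:

# Show which device backs the catalog filesystem
df -h /usr/openv/netbackup/db

# Extended per-device stats every 5 seconds; watch %util and await for that device
iostat -xk 5

If %util sits near 100% or await keeps climbing while the catalog backup runs, the LUN itself is likely the bottleneck rather than NetBackup.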


mph999
Level 6
Employee Accredited

03/13/2013 20:58:39 - Info bpbkar (pid=10329) bpbkar waited 903 times for empty buffer, delayed 197032 times
03/13/2013 20:58:39 - Info bptm (pid=10330) waited for full buffer 159111 times, delayed 237019 times
 
3/14/2013 11:24:47 - Info bpbkar (pid=57872) bpbkar waited 560 times for empty buffer, delayed 121348 times
03/14/2013 11:24:47 - Info bptm (pid=57873) waited for full buffer 261375 times, delayed 379837 times
 
In both of these, it waited more times for a full buffer, so the greatest performance drop would appear to be in the speed at which data is delivered to the memory buffers, i.e. disk read speed / SAN performance.
 
However, the waiting for empty buffers is still high, which suggests either tuning or SAN issues between the media server and the drives, or in rarer cases faulty hardware.
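As a quick sanity check of the read side, you could time a sequential read from the catalog filesystem, for example (the file name below is only a placeholder, any large file on the catalog LUN will do):

# Sequential read test, 256 KB blocks to roughly match the data buffer size.
# Use a file larger than RAM (or add iflag=direct) so the read is not served from cache.
# dd prints the throughput when it finishes - compare it against the 76 MB/sec vs 53 MB/sec above.
dd if=/usr/openv/netbackup/db/images/<some_large_file> of=/dev/null bs=262144

If the raw read speed has dropped by a similar amount day over day, the problem is on the disk/SAN side rather than in NetBackup.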
 
Personally, I would do this:
 
Check that the tuning is in place:
 
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS  (set this to 262144)
/usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS  (set this to 128 or 256)
 
These are only suggested values; what works for one system may not work for another, but on average I would expect these to give good performance.
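For example, on the media server doing the writes (here presumably the master), using the suggested values above rather than anything definitive:

# Create/overwrite the buffer tuning files with the suggested values
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 128 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS

bptm reads these files when the next backup starts, so no restart should be needed, and with bptm logging enabled the log will show the buffer size and count actually in use.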
 
Next, create a test policy to back up the internal disks of the media server - do you then see a difference in the numbers?
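When the test job finishes, the same wait counters show up in the job details, or (if the legacy debug log directories have been created under /usr/openv/netbackup/logs - an assumption about your setup) you can pull them straight from the logs:

# Compare the bptm / bpbkar wait counters between the catalog job and the test job
grep "waited" /usr/openv/netbackup/logs/bptm/log.*
grep "waited" /usr/openv/netbackup/logs/bpbkar/log.*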
 
Martin