Buffers - Performance

Matt_Fisher
Level 2
I have a NetBackup 6.5 environment and I have recently switched from backing up to tape drives (which ran well) to backing up to iSCSI LUNs on disk. My problem is that I now get terrible backup times for the backups to disk compared to the backups to tape. I'm sure some of this difference is because I was able to multiplex to the tape drives and cannot to disk, which makes the backup window larger. One thing that concerns me, though, is that the backup speeds have dropped by nearly half: I am getting very low KB/sec performance on the backups to disk. The disks are on an EMC storage array running RAID 5, and I know that device has better performance than what I am seeing from the backups to disk.

I have read that maybe the buffers need to be adjusted, but I am not sure exactly which buffers, or what the sizes should be. The master server and almost all clients are Windows servers. The NICs are 100/1000 cards, but I am only seeing about 10 to 15% utilization during the backup process. How can I tune NetBackup to get better backup speeds to disk?

NOTE: Other settings in use
1. I am also now using duplicate jobs, so each job is sent to two different storage devices. How badly does this impact performance as well?
2. I have compression enabled, but I am not seeing any high CPU usage on the clients due to the compression.

Thanks for any help

5 REPLIES

Marianne
Moderator
Partner    VIP    Accredited Certified
Have you had a look at the Performance Tuning Guide?
ftp://exftpp.symantec.com/pub/support/products/NetBackup_Enterprise_Server/307083.pdf
Chapter 10 covers disk I/O performance.
See also “Changing the size of shared data buffers” on page 112.

Also have a look at this TechNote (it's for 5.1, but still valid):
http://seer.entsupport.symantec.com/docs/273532.htm

Nicolai
Moderator
Partner    VIP
(Accepted Solution)
The settings you are looking for are :

NUMBER_DATA_BUFFERS_DISK
SIZE_DATA_BUFFERS_DISK

Placed in the [install_path]\NetBackup\db\config directory
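
If you want to experiment with them, each file is just plain text containing a single number. Here is a minimal Python sketch of creating them on a Windows media server, assuming the default install path; the path and the values shown are examples only, not recommendations:

from pathlib import Path

# Assumed install path -- substitute your actual NetBackup install_path.
config_dir = Path(r"C:\Program Files\Veritas\NetBackup\db\config")
config_dir.mkdir(parents=True, exist_ok=True)

# Each touch file holds a single value that the media server reads at job start.
# 262144 (256 KB) and 64 are example values, not recommendations.
(config_dir / "SIZE_DATA_BUFFERS_DISK").write_text("262144\n")
(config_dir / "NUMBER_DATA_BUFFERS_DISK").write_text("64\n")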

But I don't think buffers are your problem; it's the network connection. A LAN connection will always lose to SCSI or Fibre Channel in terms of bandwidth and latency. A gigabit connection can transfer about 100 MB/sec, but that is a best-case scenario. If you don't hit that, try upgrading the network card drivers and eliminating any router or switch that doesn't have a non-blocking architecture (in a non-blocking switch, all ports can run at full wire speed). Use high-quality cables all the way; near-end crosstalk can be a problem with poor-quality cables.

PS: Don't expect a RAID 4+1 LUN to go faster than 75MB/sec.



 

Matt_Fisher
Level 2
The RAID is a 5+1 setup, if that makes a difference.

The thing that gets me is that when the backups were going to tape, the speed was much faster. Since going to disk (and making two copies of each backup) the speed has dropped a lot.

When backing up clients to disk, the KB/sec never goes much higher than 8000 or 9000 (and that's the absolute best case); most of the jobs get about 2000 KB/sec. But when backing up the catalog to that same disk location, the KB/sec is over 40000. Does this mean my problem is on the client side? If so, why did the clients run nice and fast when the backups were going to tape? Does this put the buffers back in play, meaning they need to be tweaked?

Also, right now I am only running one job per disk location, so there is no contention from multiple jobs hitting the same disks. This actually helped a little, but not very much.

Omar_Villa
Level 6
Employee
The only way to fix performance issues is to find where the bottleneck is. The first place I would look is the network. You say you are using iSCSI: are you using an iSCSI HBA or a TOE card, or is the conversion done in software? How many hops do you have between your media server and your iSCSI device, and between the client and the media server? Is there any firewall in the middle? Try sending a ping with a 65000-byte packet size between client and media server and look for latency or request timeouts.

If that looks fine, the next thing to check is the buffers, but the biggest buffers are not necessarily the best ones (though they usually are). To confirm this, go to the media server's bptm or bpdm log and grep for the words "delay" and "waited" to see how many times the media server is waiting for buffers; do the same thing on the client side in the bpcd log. That will show you where the bottleneck is, on the client side or on the media server side. Also confirm that the client's NET_BUFFER_SZ = 262144 and that it matches the media server's NET_BUFFER_SZ; if not, change them to be the same. 256 KB is the biggest buffer you can configure.
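
As a rough illustration of that log check, here is a small Python sketch that tallies the buffer-wait counters; the log path, file name, and the exact wording of the summary lines are assumptions and may differ in your environment and NetBackup version:

import re
from pathlib import Path

# Assumed log location and file name -- adjust for your install_path and log naming.
log_file = Path(r"C:\Program Files\Veritas\NetBackup\logs\bptm\010109.log")

waited = delayed = 0
for line in log_file.read_text(errors="ignore").splitlines():
    # Count buffer-wait summary lines; the exact wording can vary by version.
    m = re.search(r"waited for (?:full|empty) buffer (\d+) times.*delayed (\d+) times", line)
    if m:
        waited += int(m.group(1))
        delayed += int(m.group(2))

print(f"buffer waits: {waited}, delays: {delayed}")

High wait and delay counts on the media server point at the disk or buffer settings; high counts on the client side point back at the client or the network.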

On the media server, configure NUMBER_DATA_BUFFERS_DISK = 128 and SIZE_DATA_BUFFERS_DISK = 262144. Because you are using iSCSI connections you can use bigger buffers, but everything still goes over the network.

If there are no network issues, what about an overloaded PCI slot or a bottleneck on the PCI bus? Another place to look is the I/O of the box, between swap/virtual memory and RAM: how is the paging doing? If you don't have enough swap or virtual memory you will get a lot of paging, which can end in performance problems.

Check the drive controller speeds and do some math (see the conversion sketch after the tables below):

Drive Controller        MB/sec    GB/hr
ATA-5 (ATA/ATAPI-5)         66    237.6
Wide Ultra 2 SCSI           80    288
iSCSI                      100    360
1Gb FC                     100    360
SATA/150                   150    540
Ultra-3 SCSI               160    576
2Gb FC                     200    720
SATA/300                   300    1080
Ultra320 SCSI              320    1152
4Gb FC                     400    1440

Network Speed    Theoretical GB/hr    Typical GB/hr    Theoretical MB/sec    Typical MB/sec
100BaseT                        36               25                  12.5                10
1000BaseT                      360              250                   125               100
10000BaseT                    3600             2500                  1250              1000
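
The math behind both tables is just MB/sec multiplied by 3600 seconds and divided by 1000 MB per GB. A quick sketch in Python, using your observed ~2000 KB/sec (roughly 2 MB/sec) as an example:

# Convert MB/sec to GB/hr the same way the tables above do (1 GB = 1000 MB).
def mb_per_sec_to_gb_per_hr(mb_per_sec):
    return mb_per_sec * 3600 / 1000

print(mb_per_sec_to_gb_per_hr(100))  # iSCSI or 1Gb FC: 360.0 GB/hr theoretical
print(mb_per_sec_to_gb_per_hr(2))    # ~2000 KB/sec observed: about 7.2 GB/hr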


Let us know.

Reagan
Level 5
Partner Accredited Certified
RAID-5 is known for poor write performance, since parity has to be calculated across all drives for each write, and if a drive fails in RAID-5, performance really suffers. I believe RAID 5+1 is RAID 5 plus mirroring, so there is another performance hit, since every write is also written twice.

Ideally you would want RAID 1+0 for performance. RAID-5 is usually used because it allows for the most usable space while still providing redundancy.
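
To put a rough number on that write penalty: each small random write to RAID-5 typically costs four back-end disk I/Os (read old data, read old parity, write new data, write new parity), versus two for RAID 1+0. A sketch with made-up disk counts and per-spindle IOPS, just to show the shape of the math:

# Classic RAID-5 small-write penalty: 4 back-end I/Os per host write.
disks = 6                # six spindles -- example figure only
iops_per_disk = 150      # example figure for a single spindle
raw_iops = disks * iops_per_disk

raid5_write_iops = raw_iops / 4    # read data + read parity + write data + write parity
raid10_write_iops = raw_iops / 2   # each write lands on a mirrored pair

print(raid5_write_iops, raid10_write_iops)   # 225.0 vs 450.0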

Try using jumbo frames for the iSCSI connection. The idea is to use a bigger shovel to move data across the iSCSI connection. Each packet that goes across the connection has a header; with jumbo frames, you can transport the same amount of data using fewer headers, which should improve performance.
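
As a rough illustration of the header savings (the frame and header sizes below are nominal Ethernet/IPv4/TCP figures; real iSCSI overhead also includes the iSCSI PDU headers):

# Payload efficiency per Ethernet frame: how much of each frame is data vs. headers.
ETH_OVERHEAD = 18        # Ethernet header + FCS (nominal, untagged frame)
IP_TCP_HEADERS = 40      # IPv4 + TCP headers with no options

def payload_efficiency(mtu):
    payload = mtu - IP_TCP_HEADERS         # application data carried per frame
    return payload / (mtu + ETH_OVERHEAD)  # fraction of on-wire bytes that are data

print(f"1500-byte MTU: {payload_efficiency(1500):.1%}")          # about 96%
print(f"9000-byte MTU (jumbo): {payload_efficiency(9000):.1%}")  # about 99%

The bigger win in practice is usually fewer frames per second for the media server and the storage device to process, not just the few percent of extra payload per frame.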