NetBackup - Performance Advice

blaneypj
Not applicable

Hi

 

I am in the middle of trying to resolve a performance problem with our backups and could do with some advice. I am conscious that just posting a message saying "it's slow" isn't much help, so I have left this as a last resort and included the details below.

 

This is what we have:

Client

HP DL380 G4 with 800 GB of data to back up. Teamed HP NICs connected to a switch on the same IP range. The client version of NetBackup is 6.5 (build 20070723).

 

Network

The switch is an HP ProCurve 2824 with dynamic trunking on the two ports the client is connected to. It is connected to our core network via a gigabit link, and we are seeing no bottlenecks on the network.

 

Media Server

HP DL380 G3 running NetBackup 6.5 (build 20070723). It is connected to two HP Ultrium 3 tape drives via LSI Ultra320 SCSI 2000 series adapter cards. The only client it backs up is the one in question. This server is also the master server in our NetBackup setup.

 

I have tweaked the various buffer parameters, but we are still only getting 14,469 KB per second throughput, and the 800 GB takes an average of 15 hours to back up. Interestingly, we have a sister site with the same amount of data using Backup Exec, and it backs up in only 6 hours!
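
For what it's worth, sanity-checking my own numbers (taking the 800 GB as decimal gigabytes):

    800 GB = 800,000,000,000 bytes / 1024 ~= 781,250,000 KB
    781,250,000 KB / 14,469 KB/s ~= 54,000 s ~= 15 hours

So the rate and the duration agree with each other. Gigabit Ethernet tops out around 125 MB/s (roughly 122,000 KB/s), so we are only using about 12% of the wire speed, which is why I don't think the network itself is the bottleneck.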

 

My first question is: is it a bad idea to have a master and media server on the same hardware? Second question: any other ideas on what we can try to lower the times here?

 

Thanks in advance,

Phil

 

 

2 Replies

jim_dalton
Level 6

Phil,

What do your data look like? I mean, the simple fact is that NetBackup is good with large files, and the more small files there are, the slower it goes, since it spends lots of time opening and closing files and updating the image database rather than copying data to tape.

So if your 800 GB is ten files, then I would expect it to be much faster, but if it's 200,000 files then I'm not so surprised at the result.

We really need to know the data profile.

 

An easy test: create yourself a large data file (say 10 GB), then back it up over the same infrastructure.

Then create a directory with 10 GB of small files and back that up.

You'll see a large difference in performance.
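
If it helps, something like this on a UNIX client would generate the two test sets (the paths and counts are only examples, and note that all-zero data compresses almost completely if the drive does hardware compression, so the big-file number will look flattering):

    # one large 10 GB file
    dd if=/dev/zero of=/testdata/bigfile bs=1M count=10240

    # roughly 10 GB as 100,000 x 100 KB small files
    mkdir -p /testdata/small
    i=1
    while [ $i -le 100000 ]; do
        dd if=/dev/zero of=/testdata/small/file$i bs=100k count=1 2>/dev/null
        i=$((i + 1))
    done

Time a backup of each and compare the KB/s figures in the Activity Monitor.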

 

Having the master and media server on the same box is quite OK; I've never done it any other way.

 

Tim 

jim_dalton
Level 6

Phil

 

I should add: if you have tuned the buffer parameters and so forth (again, test with a large multi-gigabyte file to find the optimum, it makes a huge difference), then another approach is to multistream the data.
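
For reference, the buffer parameters in question live in touch files on the media server. The values below are only common starting points for LTO drives, not a recommendation; test against your own kit (on Windows the same files go under install_path\NetBackup\db\config):

    # UNIX media server, example values only
    echo "262144" > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS    # 256 KB tape buffer
    echo "32"     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS  # buffers per drive

SIZE_DATA_BUFFERS must be a multiple of 1024. It's also worth checking the bptm log for "waited for full buffer" counts, which tell you whether the tape side is being starved by the network/client side.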

If you have several filesystems, you can read each one simultaneously; or if it's one filesystem, break it into streams (use the NEW_STREAM directive in the file list). Then set up multiplexing and figure out how many you can run at once. You then write N data streams simultaneously to the same tape. Backup is faster, restore is slower.
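
A file list split into streams looks like this (placeholder paths), assuming "Allow multiple data streams" is ticked on the policy and multiplexing is set on the schedule and storage unit:

    NEW_STREAM
    /data/fs1
    NEW_STREAM
    /data/fs2
    NEW_STREAM
    /data/fs3

Each NEW_STREAM directive starts a separate stream, so the three paths above get read and written concurrently.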

I've a Notes server and I run 4 streams at once. It doesn't go any faster in aggregate above 4.

 

Tim