
NIC Teaming Slow Throughput, any ideas?

MkElite
Level 3

Hey guys,

 

We currently have 2 Gb set up through NIC teaming for our Backup Exec setup. When we have backups running, it barely reaches 35-40% of the bandwidth.

I am using Backup Exec 2010. Has anyone else here set up NIC teaming before? What are some things that might cause slow performance? Are there any specific settings in Backup Exec that may help?

 

Thanks!

 

9 REPLIES

CraigV
Moderator
Partner    VIP    Accredited

Hi,

 

There are no specific settings within BE to tell the software to push through as much data as there is bandwidth available.

How is your NIC teaming set up? Are you aggregating your bandwidth, or is it for failover (1 NIC active, the other passive)?

Some things that might cause slow performance:

1. NICs and switch ports not set to the maximum speed of the NIC, i.e. 1 Gb full duplex (see the quick check after this list);

2. network issues which cause latency during the backups;

3. AV scans during the backup (either of a B2D folder or of the BE services, so check these out);

4. Fragmented disks, especially if using them for B2D.
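
For point 1, it is worth reading back what the adapters actually negotiated rather than trusting the switch config. Just a sketch, assuming Python with the psutil package is available on the media server (adapter names will differ):

import psutil

# Print negotiated link speed and duplex for every adapter on this host.
# speed is reported in Mbps (0 = unknown); duplex should be "full" on a 1 Gb link.
duplex_names = {psutil.NIC_DUPLEX_FULL: "full",
                psutil.NIC_DUPLEX_HALF: "half",
                psutil.NIC_DUPLEX_UNKNOWN: "unknown"}

for name, stats in psutil.net_if_stats().items():
    print(f"{name}: up={stats.isup} speed={stats.speed} Mbps "
          f"duplex={duplex_names[stats.duplex]}")

Anything showing 100 Mbps or half duplex on the teamed NICs or their switch ports would explain the kind of slowdown you are seeing.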

Thanks!

MkElite
Level 3

I am aggregating the bandwidth. 

Thanks a lot for your reply. Will look into these 4 things you have listed. I will let you know if I find something out of place!

Thanks

 

teiva-boy
Level 6

Backup Exec is a single-threaded product, even more so in that build. No multiplexing, no multi-streaming, no multi-core advantages, etc.

You could have a 10000Gbps backbone haul and still not be able to utilize the available bandwidth.

The neat thing about backup is that it shows the weakest link in your environment. Well, it doesn't really show you what that weak link is, but you'll only go as fast as the slowest component. So you need to verify the source can read that fast. You need to make sure your target can write that fast. You will need more concurrent streams to start to saturate the links. You will need to write your backups to fast disk, etc. This kind of testing is tedious. And using an aging product such as BE 2010 is not helping either; it was one of the worst builds since it introduced so much new technology. Make sure you are on the latest release (R3 or greater) with all the hotfixes, and you'll need to redeploy agents too in most cases.
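
One way to approximate that source/target testing outside of BE is to time a large sequential read yourself and work out MB/s from it. This is only a sketch, nothing BE-specific; the path is a placeholder, and the file should be larger than RAM so the OS cache doesn't flatter the result:

import time

TEST_FILE = r"D:\data\large_test_file.bin"   # placeholder: a big file on the source volume
CHUNK = 8 * 1024 * 1024                      # read in 8 MiB chunks

start = time.monotonic()
total = 0
with open(TEST_FILE, "rb") as f:
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        total += len(block)
elapsed = time.monotonic() - start

# MB here is decimal megabytes, to match how link speeds are usually quoted.
print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f} s = {total / 1e6 / elapsed:.0f} MB/s")

Run the same idea as a write test against the B2D target, and whichever side comes in lowest is your ceiling.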

Few folks can come close to saturating a single 1 GbE link; most average around 40-80 MB/s. If you're getting around 80, you're in good company and within the "normal" range for most of the hundreds of installs I've worked on.

To get higher, you need to do a lot of tuning, but more importantly, you need to have a fast B2D target.

Good luck.  

MkElite
Level 3

The only thing I noticed that could be off is that Windows Defender is currently active on the backup server. Not sure if that will mess with the bandwidth.

I did test a direct file transfer to the backup server from the location the backup job will access.

The transfer was capped at 50% of the 2 Gb. That is still much higher than the max of 20% when the same file is backed up through Backup Exec. We have a Dell TL2000 with LTO-6 tapes. Anything else you can think of?

Thanks

MkElite
Level 3

Well, I at least know that the source can read 60%+ faster when I directly transfer the same file that is being backed up. It transferred at 1 Gb/s while the backup does only 300-400 Mb/s.

Now I need to figure out how to get the backup to 1 Gb/s. Will have to figure out what the problem is. Maybe the tapes themselves? An LTO-6 tape from Dell is advertised at 160 MB/s...
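
Putting those numbers in the same unit helps: network rates are quoted in bits per second and tape rates in bytes per second. A quick back-of-the-envelope conversion, just arithmetic and nothing BE-specific:

# Put the quoted rates in a common unit (decimal MB/s) for comparison.
def mbit_to_mbyte(megabits_per_second):
    return megabits_per_second / 8.0

print(mbit_to_mbyte(1000))   # 1 Gb/s direct copy  -> 125.0 MB/s
print(mbit_to_mbyte(300))    # 300 Mb/s backup     ->  37.5 MB/s
print(mbit_to_mbyte(400))    # 400 Mb/s backup     ->  50.0 MB/s
# LTO-6 native streaming is ~160 MB/s, i.e. roughly 1.3 Gb/s on the wire,
# so even the direct copy would not keep the drive streaming on its own.

So even the "fast" direct copy at 1 Gb/s is only about 125 MB/s, below the drive's 160 MB/s native rate, and the backup's 300-400 Mb/s is far below it.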

Can I configure it to write to multiple tapes at once, so half the data goes on one tape and the other half on the next?

Thanks

MkElite
Level 3
Byte count          : 11,105,197,825,730 bytes
Job rate            : 2,784.18 MB/Min (Byte count of all backup sets divided by Elapsed time for all backup sets)
Files               : 75,079
Directories         : 3,081
Skipped files       : 4
Corrupt files       : 0
Files in use        : 0

That is about 46 MB/s avg.
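
Just the arithmetic behind that average, taking the log's own figures (and treating BE's "MB" as decimal megabytes, which may be off by a few percent):

byte_count     = 11_105_197_825_730   # from the job log above
job_rate_MBmin = 2784.18              # MB/min from the job log above

print(f"{job_rate_MBmin / 60:.1f} MB/s")                          # ~46.4 MB/s
print(f"{byte_count / 1e6 / job_rate_MBmin / 60:.1f} h elapsed")  # ~66 h, implied by the log's own definition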

pkh
Moderator
   VIP    Certified

Can I configure it to write to multiple tapes at once, so half the data goes on one tape and the other half on the next?

No.  BE does not have this facility.  You would need NBU to do this.

NetworkCompany
Level 4

There is no way to increase a single job's bandwidth beyond a single interface, even if you bind 20 NICs into one using GEC/LACP etc.; link aggregation hashes each conversation onto one physical link, so a single backup stream only ever uses one NIC's worth of bandwidth. If you want more than 1 Gb/sec for a single backup job, you'll have to go with something like Fibre Channel or 10 GbE.

Backing up a single system across gigabit Ethernet will never saturate a tape drive. If the tape ever has to pause or stop, the spin-up time throws the maximum throughput right out the window. The maximum transfer rate across gigabit Ethernet is roughly 125 MB/sec. To achieve that rate, you'd have to back up one huge file. Multiple small files and pausing/waiting for VSS snapshots and other processing tasks limit the bandwidth even further.

One way to get the most out of tape: schedule multiple jobs concurrently across multiple interfaces to disk storage, then duplicate disk to tape. In this scenario, we are able to achieve more than 1 Gb/sec aggregate, and by keeping the tape drive streaming I am seeing 8000 MB/min on an LTO-5 drive.

Backup Set Summary
Backed up 1 Shadow Copy Writer(s)
Processed 6,842,060,004,663 bytes in  13 hours,  26 minutes, and  22 seconds.
Throughput rate: 8092 MB/min
Compression Type: Hardware
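
Those summary figures are self-consistent if BE's "MB" is read as 2^20 bytes; re-deriving the rate from the byte count and elapsed time above:

byte_count = 6_842_060_004_663
elapsed_s  = 13 * 3600 + 26 * 60 + 22     # 13 h 26 min 22 s

print(f"{byte_count / 2**20 / (elapsed_s / 60):.0f} MiB/min")  # ~8092, matching the summary
print(f"{byte_count / 1e6 / elapsed_s:.0f} MB/s sustained")    # ~141 MB/s to the LTO-5 drive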

I can imagine LTO-6 would be slightly higher. This aggregated, hybrid approach offers faster backups than backing up individual systems directly to tape.

I suppose it would be possible to improve on this by backing up half a server, say the C drive (or VM1), on interface x to disk and duplicating to tape A, while simultaneously backing up drive D (or VM2) on interface y to disk and duplicating to tape B. Interesting thoughts.

rreed2
Level 2

I know this is an old thread, but with a brand new Dell R430, a TL2100, and either a single 1 Gb NIC or (4) LACP-teamed 1 Gb NICs, we have the exact same experience: ~300-350 Mbps throughput overall, either B2D or B2T, no difference. We're unable to get it above this speed, backing up VMs with the VMware Agent backing up VMDKs (not local Windows agents). Fresh install of Backup Exec 15 with all service packs, etc. That equates to a 2.5-2.6 GB/min average, same as you. One "large" 450 GB VM does bump that up to 500 Mbps throughput, or about 3.5 GB/min. Still unusable if you need to back up 13.7 TB of critical data in 29 hours, let alone our overall 30 TB of data. Symantec's overseas support, who are clearly just reading from a scripted checklist, was a dead end after concluding that our 2.5 GB/min / 300 Mbps was "already better than average." We've engaged our VAR about what they sold us.
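
For context, the sustained rate that 29-hour window requires is easy to work out (taking TB as decimal terabytes):

data_bytes = 13.7e12                 # 13.7 TB of critical data
window_s   = 29 * 3600               # 29-hour backup window

required_MB_s = data_bytes / 1e6 / window_s
print(f"{required_MB_s:.0f} MB/s")       # ~131 MB/s sustained
print(f"{required_MB_s * 8:.0f} Mb/s")   # ~1050 Mb/s, a saturated 1 GbE link and then some

That is roughly a fully saturated gigabit link for the whole window, so at ~300-350 Mbps the window is missed by a factor of about three.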