Backup Exec LACP NIC teaming

Hello,

I have tried to open a case for this issue, but it turns out that Backup Exec support is the single worst support I have ever used in my life. 

We have a 4x1Gbps LACP team (shows connected at 4Gbps, switch configured for LACP).  It never uses more than 1Gbps.

What is the supported way to configure NIC teaming?  I would imagine that a LOT of people are maxing at 1Gbps when running a lot of backups at once.

What teaming methods are people successfully using with this product?

Yes, I know you need multiple TCP streams to see any benefit from LACP aggregation; there were 14 backups running at once.


6 Replies
Accepted Solution!

...I used to use NIC Teaming on HP ProLiant servers with fail-over. Then again, I never needed 4 NICs to maximise any throughput, even on the largest site I was backing up consisting of around 25 servers which were split into 4 jobs. This all completed overnight before production started up the next day on a 1Gb network.

This is more of a hardware-vendor query than a Backup Exec query. From what I can remember, BE will use whatever it is given; it does not choose how to use the NICs. I have also not seen any official documentation on which type of NIC teaming is best, either.

Thanks!


Is this a native server running the Backup Exec software, or are you using a Backup Exec appliance?

If it is a server, what is the operating system? What type of NICs are you using? Which teaming software are you using?

It is a Windows Server 2012 R2 server with the Backup Exec software installed, not an appliance.

Broadcom NICs. I have tried both the Broadcom and the Microsoft LACP teaming.

Can you get more than 1Gbps with a process other than Backup Exec (e.g. a file copy)?
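
One easy way to run that test is to generate several parallel TCP streams yourself. Here is a rough Python sketch (an illustration, not a tuned benchmark; the ports and stream count are arbitrary). It runs over loopback as written; for a real test, run the listener half on the media server and point the senders at it from another host:

```python
import socket
import threading
import time

HOST = "127.0.0.1"   # replace with the media server's address for a real test
BASE_PORT = 15001    # arbitrary free ports; adjust as needed
STREAMS = 4          # mirrors the 4-member team
CHUNK = 64 * 1024
SECONDS = 1.0

def serve(port, totals, idx):
    # Count the bytes received on one stream.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, port))
    srv.listen(1)
    conn, _ = srv.accept()
    n = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        n += len(data)
    totals[idx] = n
    conn.close()
    srv.close()

def blast(port):
    # Push data at one listener for SECONDS.
    cli = socket.create_connection((HOST, port))
    payload = b"\0" * CHUNK
    deadline = time.time() + SECONDS
    while time.time() < deadline:
        cli.sendall(payload)
    cli.close()

totals = [0] * STREAMS
servers = [threading.Thread(target=serve, args=(BASE_PORT + i, totals, i))
           for i in range(STREAMS)]
for t in servers:
    t.start()
time.sleep(0.2)  # give the listeners time to bind
clients = [threading.Thread(target=blast, args=(BASE_PORT + i,))
           for i in range(STREAMS)]
for t in clients:
    t.start()
for t in clients:
    t.join()
for t in servers:
    t.join()

for i, n in enumerate(totals):
    print(f"stream {i}: {n / SECONDS / 1e6:.1f} MB/s")
```

If four parallel streams between two hosts still sum to roughly 1Gbps, the bottleneck is the teaming hash, not Backup Exec.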

Accepted Solution!

Maybe this forum thread can help you. This is more of a Windows/networking issue than a Backup Exec one:

http://social.technet.microsoft.com/Forums/windowsserver/en-US/0f0ab7b8-4d21-41e6-a6ca-23ace4fd09eb/lacplink-aggregation-max-speed-per-thread-only-1-gbs-per-application?forum=winserverPN

Accepted Solution!

The TechNet forum thread deals more with SMB3, which (unless I am mistaken) BE doesn't use.  Aidan Finn has a great series of articles on the subject of NIC teaming: http://www.aidanfinn.com/?p=13984.

With NIC teaming, you will not typically see aggregate bandwidth between two hosts (some configurations do provide aggregate bandwidth, but typically only in one direction: outbound toward hosts connecting to a server, not vice versa).  What you can expect, if things are configured correctly, is for connections to multiple hosts to use separate interfaces.  Ideally, the best teaming algorithms utilize the switch hardware (switch-assisted or LACP); if your switches don't support it, I highly recommend upgrading.

With LACP in Windows, your load-balancing choices are limited to Address Hash or Hyper-V Port.  Hyper-V Port is really only appropriate for servers hosting virtual machines.  With Address Hash load balancing, Windows distributes connections across team members by hashing address information, assigning separate connections to separate NICs.  This still yields only the throughput of a single NIC per connection.
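
To see why that caps a single stream at 1Gbps, here is a toy model of address-hash distribution (the hash function here is purely illustrative, not Windows' actual algorithm):

```python
import hashlib

NUM_NICS = 4  # the 4x1Gbps team from the original question

def nic_for_flow(src_ip, src_port, dst_ip, dst_port):
    # Illustrative stand-in for an address-hash algorithm: every
    # packet of a given connection hashes to the same team member,
    # so one TCP stream can never exceed one NIC's bandwidth.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % NUM_NICS

# A single backup stream always lands on the same NIC:
one_flow = {nic_for_flow("10.0.0.5", 50000, "10.0.0.9", 10000)
            for _ in range(1000)}
print(len(one_flow))  # 1 -- that stream is capped at one link's speed

# Fourteen concurrent jobs (distinct source ports) can spread out,
# but each individual job is still limited to its assigned member:
many = {nic_for_flow("10.0.0.5", 50000 + i, "10.0.0.9", 10000)
        for i in range(14)}
print(sorted(many))
```

Note that with per-flow hashing, the spread across members depends on the addresses and ports involved; an unlucky mix of flows can still pile onto fewer links than you have.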

With Server 2012 R2, Microsoft introduced Dynamic load balancing and made it the default for new teams.  I am not sure this is a good idea as it could potentially allow a lot of out-of-order packets, increasing the processing time and slowing network traffic.  The jury's still out on that question.

A better bet if you want more throughput between servers is to upgrade your switches and NICs to 10G or 40G.
