Forum Discussion

john9797
Level 3
10 years ago

Backup Exec LACP NIC teaming

Hello, I have tried to open a case for this issue, but it turns out that Backup Exec support is the single worst support I have ever used in my life.  We have a 4x1Gbps LACP team (shows connect...
  • CraigV
    10 years ago

    ...I used to use NIC teaming on HP ProLiant servers in a fail-over configuration. Then again, I never needed 4 NICs to maximise throughput, even at the largest site I backed up: around 25 servers split into 4 jobs, all completing overnight on a 1Gb network before production started up the next day.

    This is more of a hardware vendor query than a Backup Exec query...from what I can remember, BE will use whatever it is given, not choose how to use the NICs. I have also not seen any official documentation on which type of NIC teaming is best either.

    Thanks!

  • jurgen_barbieur
    10 years ago

    Maybe this forum thread can help you. This is more of a Windows/network issue than a Backup Exec one:

    http://social.technet.microsoft.com/Forums/windowsserver/en-US/0f0ab7b8-4d21-41e6-a6ca-23ace4fd09eb/lacplink-aggregation-max-speed-per-thread-only-1-gbs-per-application?forum=winserverPN


  • JonathanLee
    10 years ago

    The TechNet forum thread is dealing more with SMB3, which (unless I am mistaken) BE doesn't use.  Aidan Finn has a great series of articles on the subject of NIC teaming: http://www.aidanfinn.com/?p=13984.

    With NIC teaming, you will not typically see aggregate bandwidth between two hosts (there are configurations that provide aggregate bandwidth, but typically only in one direction: outbound toward hosts connecting to a server, not vice versa).  What you can expect, if things are configured correctly, is for connections to multiple hosts to use separate interfaces.  Ideally, the best teaming algorithms utilize the switch hardware (switch-assisted or LACP).  If you don't have switches that support it, I highly recommend upgrading.

    With LACP in Windows, your load-balancing options are limited to Address Hash or Hyper-V Port.  Hyper-V Port is really only appropriate for servers hosting virtual machines.  With Address Hash load balancing, Windows hashes each connection's addresses (and ports) and assigns separate connections to separate NICs.  This still only yields the throughput of a single NIC per connection; a rough sketch of the idea is below.
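
    To make that concrete, here is a minimal toy sketch of per-connection address hashing (an illustration of the idea only, not Windows' actual hashing code; the IPs, ports, and NIC count are made up for the example):

        # toy model: per-connection hashing onto a 4 x 1Gbps team
        import hashlib

        NUM_NICS = 4

        def pick_nic(src_ip, dst_ip, src_port, dst_port):
            # Every packet of a given connection hashes to the same team
            # member, so one TCP stream can never exceed one NIC's speed.
            key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
            return hashlib.md5(key).digest()[0] % NUM_NICS

        # A single media-server-to-agent connection always lands on one NIC:
        print(pick_nic("10.0.0.5", "10.0.0.20", 49152, 10000))

        # Separate connections from several remote agents can spread across
        # the team and use the aggregate bandwidth:
        for agent in ("10.0.0.21", "10.0.0.22", "10.0.0.23", "10.0.0.24"):
            print(agent, "->", pick_nic("10.0.0.5", agent, 49152, 10000))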

    With Server 2012 R2, Microsoft introduced Dynamic load balancing and made it the default for new teams.  I am not sure this is a good idea: because it can move parts of a single flow between NICs, it could allow a lot of out-of-order packets, increasing processing time and slowing network traffic.  The jury's still out on that question.
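
    For what it's worth, here is a toy illustration of why moving parts of one flow between links can reorder packets (the split point and link delays are made-up numbers, not how Windows actually schedules traffic):

        # toy model: a single flow rebalanced from a busy link to a quiet one
        link_delay_ms = {"nic1": 5.0, "nic2": 1.0}  # assumed queueing delays

        # Packets 1-4 go out on the busy link, packets 5-8 are rebalanced.
        sent = [(seq, "nic1" if seq <= 4 else "nic2") for seq in range(1, 9)]

        # The quiet link delivers first, so the receiver sees 5-8 before 1-4
        # and has to buffer and reorder them.
        arrivals = sorted(sent, key=lambda p: link_delay_ms[p[1]] + p[0] * 0.1)
        print([seq for seq, _ in arrivals])  # [5, 6, 7, 8, 1, 2, 3, 4]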

    A better bet if you want more throughput between servers is to upgrade your switches and NICs to 10G or 40G.