bonding mode for 1G ports

hytham_fekry
Level 4
Partner Accredited Certified

Hi, I have an upcoming implementation of a 5230 appliance, and no 10 Gb network switches are available in the customer's network. As I understand it, there are no constraints on using eth0 for backup traffic, so I plan to put all four ports (eth0 to eth3) into one bond with a single IP to avoid configuration-related issues in the future, and to configure the IPMI port for management.

As a best practice, is balance-alb reliable enough under heavy load with four ports?

From your experience, what is the most reliable and best-performing mode, so I can engage the network team for any configuration changes that may be required?

1 ACCEPTED SOLUTION

Accepted Solutions

vtas_chas
Level 6
Employee

In my experience across a number of customer environments, LACP doesn't offer a significant improvement in performance over kernel-mode balancing like balance-alb. If you have a really well-run network that is built for performance, it can make a difference, but I've never seen it give more than a single-digit performance boost over kernel-mode bonding. I have almost universally seen it add a significant amount of hassle if the network team isn't used to configuring it and doesn't fully understand how it works with your specific switches.

I also tend to use even numbers of ports for kernel mode bonding, as that is seen as a best practice in general when using this method (outside of appliances).  

If you intend to use a different network for management, you might consider a 2-port bond for each: I've done that in the past, using eth0/eth1 for management, so that if Engineering ever does take back eth0 (remember, it is technically reserved but fully supported for use), you've still got eth1 available for management and your data bond doesn't change. If you really need the bandwidth of a third port in the data bond, that obviously puts you in a bit of a bind, but it is one approach that I know works well.
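To make the two options concrete, here is what kernel-mode balance-alb versus LACP (802.3ad) bonding might look like on a generic RHEL-style Linux host. This is purely illustrative: the appliance manages its bonds through its own interface, and the device names and the IP address below are placeholders, not values from this thread.

```shell
# Illustrative ifcfg-style fragment for a generic RHEL/CentOS host --
# NOT appliance CLISH syntax; the appliance configures bonding itself.

# /etc/sysconfig/network-scripts/ifcfg-bond0
# balance-alb: adaptive load balancing, no switch-side configuration needed
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"
IPADDR=192.0.2.10      # example address (RFC 5737 documentation range)
PREFIX=24
ONBOOT=yes

# For LACP instead, the switch ports must be configured as an LACP
# port-channel, and the mode line becomes:
# BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
```

The practical difference is visible here: balance-alb is entirely host-side, while 802.3ad only works once the network team has built the matching port-channel on the switch.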

Charles
VCS, NBU & Appliances


4 REPLIES

Systems_Team
Level 6
VIP

Hi Hytham_fekry,

So long as your appliance version is 2.6.0.2 or greater, then you can reconfigure eth0 to be used in a bond.  You have to reconfigure eth0 via the CLISH, not the web management interface.  See this post for a little more info regarding that:

Eth0 reserved interface, Why and How can I free it up?

https://vox.veritas.com/t5/NetBackup-Appliance/Eth0-reserved-interface-Why-and-How-can-I-free-it-up/...

If it is a brand new appliance, you will need eth0 to do the initial config, and after that you can reconfigure it. One thing to be aware of is that the IPMI port is only for IPMI management. You will still need to manage your appliance over your bonded IP address; the appliance's other two management interfaces, the web console and SSH, are not reachable through the IPMI port.

I use balance-alb with 2-3 ports in a bond and have had no issues. Maybe someone else who uses all four ports could chime in with some details? You will also need to confirm which types of bonding your network switches support.
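On a plain Linux host you can see which mode a bond is running and whether each slave port is up by reading /proc/net/bonding/bond0 (the appliance's own tooling may present this differently). A minimal sketch, using a captured sample of that file's output, since the exact contents vary by host:

```shell
# Sample of the kind of output /proc/net/bonding/bond0 gives
# (illustrative text; on a real host you would just cat the file)
bond_status='Bonding Mode: adaptive load balancing
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up'

# Report the bonding mode and count the enslaved ports:
printf '%s\n' "$bond_status" | awk -F': ' '
    /^Bonding Mode/    { print "mode: " $2 }
    /^Slave Interface/ { n++ }
    END                { print n " slaves" }'
```

With the sample above this prints the mode line followed by the slave count, which is a quick way to confirm all four ports actually joined the bond after a reconfiguration.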

One other item you should look into, if you are backing up VMware clients, is the appliance's VMware backup capabilities. Here the appliance (and master server) hooks into VMware, and most if not all of those clients can be backed up over the SAN, which will alleviate the stress on your network. It works really well, and that is how mine are set up, using a combination of network and SAN.

Hope this helps,

Steve

hytham_fekry
Level 4
Partner Accredited Certified

Thanks Steve for the reply.

Yes, I'm aware of that. I was deciding whether to leave eth0 for management and bond the other three interfaces, or to bond all four together.

As per the doc you highlighted, if I chose the first option I wouldn't be able to use eth0 later for backups; I would be forced to have two IPs in different subnets, one for management and one for backup, and to configure only the non-management IP on all clients and the master server.

I'm investigating with the network team which modes the switches can support; I suspect that handling the bond from the switch side may give better performance.
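For reference, "handling the bond from the switch side" usually means LACP: the network team creates a port-channel on the switch while the host side runs mode 802.3ad. As an illustration only, a Cisco IOS-style sketch might look like the following; the port numbers, VLAN, and load-balance policy are placeholders, not values from this thread.

```
! Illustrative Cisco IOS-style port-channel -- placeholders throughout
interface range GigabitEthernet1/0/1 - 4
 channel-group 1 mode active          ! "active" initiates LACP negotiation
!
interface Port-channel1
 switchport mode access
 switchport access vlan 100           ! placeholder VLAN
!
port-channel load-balance src-dst-ip  ! hash traffic on src/dst IP pair
```

Note that the load-balance hash policy matters here: with a single backup client and server pair, a per-IP hash can pin all traffic to one physical link, so switch-side bonding is not automatically faster than balance-alb.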

 


hytham_fekry
Level 4
Partner Accredited Certified

Thanks for your input ...