Ubuntu - Networking - Configuration - Bonding NIC Teaming

Bonding, also called port trunking or link aggregation, means combining several network interfaces (NICs) into a single link, providing high availability, load balancing, maximum throughput, or a combination of these.

WARNING: Make sure you have iLO/BMC out-of-band remote access to your server.

You are going to change vital network settings; a mistake may result in loss of connectivity.


Install required packages

ifenslave is used to attach and detach slave network interfaces to a bonding device. Install the package:

sudo apt install ifenslave

Kernel Module

Before Ubuntu can configure your network cards into a NIC bond, you need to ensure that the bonding kernel module is present and loaded at boot time.

Edit the file:

sudo vim /etc/modules

Add the word bonding to the file:

/etc/modules
bonding
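Alternatively, the module can be appended without opening an editor; the grep guard is there so a re-run does not add a duplicate line:

```shell
# Append "bonding" to /etc/modules unless it is already listed
grep -qx bonding /etc/modules || echo bonding | sudo tee -a /etc/modules
```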

Also, load the module manually for now:

sudo modprobe bonding
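To confirm the module actually loaded, check lsmod; once the driver is loaded it also exposes a list of bond devices under /sys (empty until a bond is configured):

```shell
# Verify the bonding module is loaded
lsmod | grep '^bonding'

# List bond devices known to the kernel (empty right after loading the module)
cat /sys/class/net/bonding_masters
```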

Bonding network config

Edit the file:

sudo vim /etc/network/interfaces

Example config for a round-robin load-balancing setup:

/etc/network/interfaces
auto lo
iface lo inet loopback
 
auto eth0
iface eth0 inet manual
    bond-master bond0
 
auto eth1
iface eth1 inet manual
    bond-master bond0
 
auto bond0
iface bond0 inet static
    # For jumbo frames, change mtu to 9000
    mtu 1500
    address 172.16.20.10
    netmask 255.255.255.0
    network 172.16.20.0
    broadcast 172.16.20.255
    gateway 172.16.20.1
    dns-nameservers 172.16.20.2
    bond-miimon 100 # Specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures. 
    bond-downdelay 200 # Specifies the time, in milliseconds, to wait before disabling a slave after a link failure has been detected.
    bond-updelay 200 # Specifies the time, in milliseconds, to wait before enabling a slave after a link recovery has been detected.
    bond-mode 0
    bond-slaves none # we already defined the interfaces above with bond-master

For round-robin load balancing, bond-mode 0 and bond-mode balance-rr are equivalent; either name may be used.
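After saving the file, bring the interfaces up and inspect the bond's runtime state; the bonding driver reports its mode, MII status and enslaved NICs under /proc. The interface names below match the example config above:

```shell
# Bring up the slaves and the bond as defined in /etc/network/interfaces
sudo ifup eth0 eth1 bond0

# Inspect the bond: mode, MII polling interval, and the status of each slave
cat /proc/net/bonding/bond0
```

If the bond fails to come up, reverting /etc/network/interfaces and rebooting (via iLO/BMC if needed) restores the previous configuration.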


Bonding modes explained

Mode 0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last.
This mode provides load balancing and fault tolerance.

Mode 1 (active-backup)
Active-backup policy: Only one slave in the bond is active.
A different slave becomes active if, and only if, the active slave fails.
The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch.
This mode provides fault tolerance.
The primary option affects the behavior of this mode.

Mode 2 (balance-xor)
Balanced policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count].
This selects the same slave for each destination MAC address.
This mode provides load balancing and fault tolerance.

Mode 3 (broadcast)
Broadcast policy: Transmits everything on all slave interfaces.
This mode provides fault tolerance.

Mode 4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation.
Creates aggregation groups that share the same speed and duplex settings.
Utilizes all slaves in the active aggregator according to the 802.3ad specification.
Prerequisites:
* Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation (LACP).
Most switches will require some type of configuration to enable 802.3ad mode.

Mode 5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support.
The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave.
Incoming traffic is received by the current slave.
If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Prerequisites:
* Ethtool support in the base drivers for retrieving the speed of each slave.

Mode 6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support.
The receive load balancing is achieved by ARP negotiation.
The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.
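As a rough illustration of the mode 2 hash above: with the default layer2 policy it reduces to XOR'ing the final octets of the source and destination MAC addresses and taking the result modulo the slave count. This is a simplified sketch, and the MAC addresses and slave count here are made up:

```shell
#!/bin/sh
# Simplified sketch of the balance-xor (mode 2) layer2 hash:
# XOR the last octet of source and destination MAC, modulo slave count.
src_mac="00:11:22:33:44:55"
dst_mac="66:77:88:99:aa:bb"
slaves=2

src_last=$(( 0x${src_mac##*:} ))   # last octet of source MAC
dst_last=$(( 0x${dst_mac##*:} ))   # last octet of destination MAC
echo "slave index: $(( (src_last ^ dst_last) % slaves ))"
```

Because the hash depends only on the MAC pair, every frame for a given destination leaves via the same slave, which is why mode 2 balances per-destination rather than per-packet.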

Sources