Client-Server App with MPTCP

Multipath TCP (MPTCP) extends standard TCP by allowing a single connection to use multiple network paths simultaneously. A phone connected to both Wi-Fi and cellular can seamlessly switch between them without dropping the connection, or a server with two Ethernet links can aggregate their bandwidth into a single TCP stream. MPTCP achieves this transparently at the transport layer - applications see a regular TCP connection while the kernel manages subflows across available interfaces.

The protocol is defined in RFC 8684 (which replaced the original RFC 6824), with the architecture described in RFC 6182.

This post covers the protocol fundamentals, kernel setup, and building a working client-server application in Go.

How MPTCP Works

In regular TCP, a connection is bound to a single source-destination IP pair. If that path fails (e.g., Wi-Fi drops), the connection dies.

MPTCP introduces the concept of subflows - multiple TCP connections between different IP addresses that together form a single logical MPTCP connection. The kernel’s MPTCP scheduler distributes data across subflows based on their current performance characteristics: round-trip time, congestion window, and available bandwidth.

flowchart TB
    subgraph App["Application Layer"]
        AppConn["Single TCP Connection<br/>(application sees one socket)"]
    end

    subgraph MPTCP["MPTCP Layer (Client Kernel)"]
        Scheduler["MPTCP Scheduler"]
    end

    subgraph Subflows["TCP Subflows"]
        SF1["Subflow 1<br/>eth0: 192.168.1.10 (ISP 1)"]
        SF2["Subflow 2<br/>eth1: 10.0.0.5 (ISP 2)"]
        SF3["Subflow 3<br/>eth2: 172.16.0.20 (ISP 3)"]
    end

    subgraph Server["Server"]
        SRV["eth0: 203.0.113.50"]
    end

    AppConn --> Scheduler
    Scheduler --> SF1
    Scheduler --> SF2
    Scheduler --> SF3
    SF1 --> SRV
    SF2 --> SRV
    SF3 --> SRV

Connection Lifecycle

  1. Initial handshake - The client initiates a standard TCP SYN with an MP_CAPABLE option, signaling MPTCP support. If the server also supports MPTCP, they negotiate keys for authentication.
  2. Subflow creation - After the initial connection is established, either endpoint can advertise additional IP addresses via ADD_ADDR signals. New subflows are created with MP_JOIN, authenticated using HMAC derived from the initial keys.
  3. Data transfer - The MPTCP scheduler splits outgoing data across subflows. Each subflow carries a Data Sequence Number (DSN) mapping that lets the receiver reassemble data in the correct order regardless of which subflow delivered it.
  4. Failover - If a subflow fails (path goes down, excessive retransmissions), the kernel retransmits unacknowledged data on surviving subflows. The application is not notified - it continues reading and writing as if nothing happened.
  5. Connection teardown - A DATA_FIN signal closes the MPTCP connection. Individual subflows are closed with standard TCP FIN.

Fallback Behavior

If either endpoint does not support MPTCP, the connection automatically falls back to regular TCP. This makes MPTCP safe to enable by default - it only activates when both sides support it. The kernel also falls back if a middlebox (firewall, NAT) strips the MPTCP options from packets.
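
Because fallback is silent, an application that cares can ask the kernel which mode a connection ended up in. Below is a minimal Go sketch of that check (the Go API is covered in detail later); the server address is a placeholder:

package main

import (
    "log"
    "net"
)

func main() {
    d := &net.Dialer{}
    d.SetMultipathTCP(true) // request MPTCP; the kernel may still fall back

    // server.example.com:8080 is a placeholder address for illustration
    conn, err := d.Dial("tcp", "server.example.com:8080")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // MultipathTCP reports whether the established connection actually uses MPTCP
    if tc, ok := conn.(*net.TCPConn); ok {
        if usingMPTCP, err := tc.MultipathTCP(); err == nil && usingMPTCP {
            log.Println("connection is using MPTCP")
        } else {
            log.Println("connection fell back to regular TCP")
        }
    }
}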

Use Cases

Mobile Failover

A mobile device connected to both Wi-Fi and cellular establishes an MPTCP connection using the Wi-Fi address as the primary subflow. When the user walks out of Wi-Fi range, the cellular subflow takes over without any interruption:

sequenceDiagram
    participant App as Application
    participant K as Kernel (MPTCP)
    participant WiFi as Wi-Fi Subflow
    participant Cell as Cellular Subflow
    participant Srv as Server

    App->>K: connect()
    K->>WiFi: SYN + MP_CAPABLE
    WiFi->>Srv: Initial handshake
    Srv-->>WiFi: SYN-ACK + MP_CAPABLE
    K->>Cell: SYN + MP_JOIN
    Cell->>Srv: Join existing connection
    Srv-->>Cell: ACK + MP_JOIN

    Note over WiFi,Cell: Both subflows active

    App->>K: send(data)
    K->>WiFi: Data (primary path)

    Note over WiFi: Wi-Fi signal lost

    K->>Cell: Retransmit + new data
    Cell->>Srv: Seamless continuation

    Note over App: No interruption detected

The application does not need any special handling - the kernel transparently moves traffic to the surviving subflow. This is how Apple’s Siri and Maps use MPTCP on iOS to maintain connections when switching between networks.

Bandwidth Aggregation

A server with two 1 Gbps network interfaces can combine them into a single MPTCP connection that approaches 2 Gbps aggregate throughput:

| Configuration      | Throughput   | Resilience                  |
|--------------------|--------------|-----------------------------|
| Single TCP (1 NIC) | Up to 1 Gbps | No redundancy               |
| MPTCP (2 NICs)     | Up to 2 Gbps | Survives single NIC failure |
| MPTCP (3 NICs)     | Up to 3 Gbps | Survives dual NIC failure   |

The actual throughput depends on the MPTCP scheduler, path characteristics, and workload. Short-lived connections benefit less than long-lived bulk transfers.
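
As a rough way to see aggregation in practice, the sketch below pushes a fixed amount of data over an MPTCP connection and reports the achieved throughput. The sink address 203.0.113.50:9000 is an assumption for illustration - a discard-style service that simply reads and drops incoming data:

package main

import (
    "crypto/rand"
    "log"
    "net"
    "time"
)

func main() {
    const totalBytes = 1 << 30 // transfer 1 GiB

    d := &net.Dialer{}
    d.SetMultipathTCP(true)

    // 203.0.113.50:9000 is a placeholder for a service that discards incoming data
    conn, err := d.Dial("tcp", "203.0.113.50:9000")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    buf := make([]byte, 1<<20) // 1 MiB chunks
    if _, err := rand.Read(buf); err != nil {
        log.Fatal(err)
    }

    start := time.Now()
    sent := 0
    for sent < totalBytes {
        n, err := conn.Write(buf)
        if err != nil {
            log.Fatal(err)
        }
        sent += n
    }
    elapsed := time.Since(start).Seconds()

    // Bits per second across all subflows combined
    log.Printf("sent %d bytes in %.1fs (%.2f Gbps)", sent, elapsed, float64(sent)*8/elapsed/1e9)
}

Running the same program with SetMultipathTCP(false) gives a single-path baseline to compare against.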

Kernel Configuration

MPTCP support is available in the Linux kernel since version 5.6. It must be explicitly enabled.

The examples below use a client with three ISP uplinks and a server with a single interface:

Client:

| Interface | IP Address   | ISP             |
|-----------|--------------|-----------------|
| eth0      | 192.168.1.10 | ISP 1 (primary) |
| eth1      | 10.0.0.5     | ISP 2           |
| eth2      | 172.16.0.20  | ISP 3           |

Server:

| Interface | IP Address   |
|-----------|--------------|
| eth0      | 203.0.113.50 |

MPTCP must be enabled on both the client and the server. The client is responsible for creating subflows across its interfaces - the server accepts them.

flowchart LR
    subgraph Client["Client"]
        E0["eth0<br/>192.168.1.10"]
        E1["eth1<br/>10.0.0.5"]
        E2["eth2<br/>172.16.0.20"]
    end

    ISP1["ISP 1"]
    ISP2["ISP 2"]
    ISP3["ISP 3"]

    subgraph Server["Server"]
        S0["eth0<br/>203.0.113.50"]
    end

    E0 --> ISP1
    E1 --> ISP2
    E2 --> ISP3
    ISP1 --> S0
    ISP2 --> S0
    ISP3 --> S0

Check MPTCP Support

# Check if MPTCP is available
sysctl net.mptcp.enabled

If the sysctl key does not exist, the kernel was compiled without MPTCP support. Most modern distributions (Ubuntu 22.04+, Fedora 36+, Debian 12+) include MPTCP.
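
An application can perform the same check programmatically at startup. Here is a small Go sketch (Linux only) that reads the same sysctl through procfs:

package main

import (
    "fmt"
    "os"
    "strings"
)

// mptcpEnabled reads net.mptcp.enabled via procfs. A missing file means
// the kernel was built without MPTCP support.
func mptcpEnabled() (bool, error) {
    data, err := os.ReadFile("/proc/sys/net/mptcp/enabled")
    if err != nil {
        return false, err
    }
    return strings.TrimSpace(string(data)) == "1", nil
}

func main() {
    enabled, err := mptcpEnabled()
    if err != nil {
        fmt.Println("MPTCP not available:", err)
        return
    }
    fmt.Println("net.mptcp.enabled =", enabled)
}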

Enable MPTCP

# Enable MPTCP
sudo sysctl -w net.mptcp.enabled=1

# Make persistent across reboots
echo "net.mptcp.enabled=1" | sudo tee /etc/sysctl.d/90-mptcp.conf
sudo sysctl -p /etc/sysctl.d/90-mptcp.conf

Path Manager Configuration (Client)

The path manager controls how additional subflows are created. The in-kernel path manager uses IP address endpoints configured via ip mptcp.

On the client, configure the secondary interfaces as MPTCP endpoints:

# eth0 is the primary interface - used for the initial connection
# No endpoint entry needed; the kernel uses it automatically when the app connects

# eth1 - create a subflow from this address
sudo ip mptcp endpoint add 10.0.0.5 dev eth1 subflow

# eth2 - create a subflow from this address
sudo ip mptcp endpoint add 172.16.0.20 dev eth2 subflow

# Allow up to 2 additional subflows per connection
sudo ip mptcp limits set subflow 2 add_addr_accepted 2

When the client connects to the server over eth0, the kernel automatically creates two additional subflows - one from eth1 (10.0.0.5) and one from eth2 (172.16.0.20) - both targeting the server at 203.0.113.50.

Verify the configuration:

# List configured endpoints
ip mptcp endpoint show
10.0.0.5 id 1 subflow dev eth1
172.16.0.20 id 2 subflow dev eth2
# Show current limits
ip mptcp limits show
subflow 2 add_addr_accepted 2

Path Manager Configuration (Server)

The server does not need endpoint configuration since it has a single interface. It only needs MPTCP enabled and limits set to accept incoming subflows from the client:

sudo sysctl -w net.mptcp.enabled=1

# Accept up to 2 additional subflows initiated by the client
sudo ip mptcp limits set subflow 2 add_addr_accepted 2

Endpoint Flags Reference

| Endpoint Flag | Behavior                                                 |
|---------------|----------------------------------------------------------|
| subflow       | Kernel creates a new subflow from this address           |
| signal        | Kernel advertises this address to the peer via ADD_ADDR  |
| backup        | Subflow is only used if the primary fails                |
| fullmesh      | Create subflows to all remote addresses from this address |

Routing Policy (Client)

Each client interface needs its own routing table so that subflows use the correct source address and gateway. Without per-interface routing, the kernel may route all subflows through a single default gateway, defeating the purpose of MPTCP.

# Define routing tables
echo "100 isp1" | sudo tee -a /etc/iproute2/rt_tables
echo "101 isp2" | sudo tee -a /etc/iproute2/rt_tables
echo "102 isp3" | sudo tee -a /etc/iproute2/rt_tables

# eth0 - ISP 1
sudo ip route add default via 192.168.1.1 dev eth0 table isp1
sudo ip rule add from 192.168.1.10 table isp1

# eth1 - ISP 2
sudo ip route add default via 10.0.0.1 dev eth1 table isp2
sudo ip rule add from 10.0.0.5 table isp2

# eth2 - ISP 3
sudo ip route add default via 172.16.0.1 dev eth2 table isp3
sudo ip rule add from 172.16.0.20 table isp3

This ensures that packets with source address 10.0.0.5 go through eth1 to ISP 2’s gateway, and packets with source 172.16.0.20 go through eth2 to ISP 3’s gateway.

Verify routing:

# Check that each source address resolves to the correct gateway
ip route get 8.8.8.8 from 192.168.1.10
ip route get 8.8.8.8 from 10.0.0.5
ip route get 8.8.8.8 from 172.16.0.20

Fullmesh Mode

For maximum bandwidth aggregation across all three ISPs, use fullmesh instead of subflow on the client:

sudo ip mptcp endpoint flush

sudo ip mptcp endpoint add 192.168.1.10 dev eth0 fullmesh
sudo ip mptcp endpoint add 10.0.0.5 dev eth1 fullmesh
sudo ip mptcp endpoint add 172.16.0.20 dev eth2 fullmesh

sudo ip mptcp limits set subflow 3 add_addr_accepted 3

With fullmesh, the kernel creates a subflow from every local address to every remote address. Since the server has a single address (203.0.113.50), this produces 3 subflows - one from each client interface.

Note
In this setup fullmesh and subflow behave identically because the server has only one address. The difference appears when the server has multiple addresses - fullmesh creates a subflow from every local address to every remote address (N x M), while subflow creates one subflow per local address.

Backup Mode

To use ISP 3 (eth2) only as a failover path when the ISP 1 and ISP 2 paths have failed:

sudo ip mptcp endpoint add 10.0.0.5 dev eth1 subflow
sudo ip mptcp endpoint add 172.16.0.20 dev eth2 subflow backup

The backup flag tells the scheduler to avoid this subflow unless all non-backup subflows have failed. Traffic flows over eth0 (primary) and eth1 (secondary), with eth2 standing by as a last resort.

Go Implementation

Go added MPTCP support in version 1.21 through the net package. The API is minimal - you call SetMultipathTCP(true) on a net.ListenConfig or net.Dialer, and the kernel handles the rest.

Server

package main

import (
    "context"
    "encoding/json"
    "log"
    "net"
    "net/http"
)

type StatusResponse struct {
    MPTCP      bool   `json:"mptcp"`
    RemoteAddr string `json:"remote_addr"`
}

func main() {
    mux := http.NewServeMux()

    mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
        mptcp := false

        // saveConnContext (below) stores the underlying *net.TCPConn in the
        // request context so the handler can ask whether it uses MPTCP.
        if tc, ok := r.Context().Value(connKey{}).(*net.TCPConn); ok {
            mptcp, _ = tc.MultipathTCP()
        }

        w.Header().Set("Content-Type", "application/json")

        json.NewEncoder(w).Encode(StatusResponse{
            MPTCP:      mptcp,
            RemoteAddr: r.RemoteAddr,
        })
    })

    // Create an MPTCP-enabled listener. If the client or network path
    // does not support MPTCP, connections fall back to regular TCP.
    lc := &net.ListenConfig{}
    lc.SetMultipathTCP(true)

    ln, err := lc.Listen(context.Background(), "tcp", ":8080")
    if err != nil {
        log.Fatal(err)
    }

    log.Printf("MPTCP HTTP server listening on %s", ln.Addr())

    srv := &http.Server{
        Handler:     mux,
        ConnContext: saveConnContext,
    }

    if err := srv.Serve(ln); err != nil {
        log.Fatal(err)
    }
}

type connKey struct{}

func saveConnContext(ctx context.Context, c net.Conn) context.Context {
    if tc, ok := c.(*net.TCPConn); ok {
        return context.WithValue(ctx, connKey{}, tc)
    }

    return ctx
}

The server creates an MPTCP-enabled listener and passes it to http.Server.Serve(). To check whether each request arrived over MPTCP, the ConnContext callback stores the underlying *net.TCPConn in the request context. The /status handler retrieves it and calls MultipathTCP() to report the connection type.

Client

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net"
    "net/http"
)

func main() {
    // MPTCP-enabled dialer; falls back to regular TCP when the server
    // or path does not support MPTCP
    d := &net.Dialer{}
    d.SetMultipathTCP(true)

    client := &http.Client{
        Transport: &http.Transport{
            DialContext: d.DialContext,
        },
    }

    resp, err := client.Get("http://server.example.com:8080/status")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var status struct {
        MPTCP      bool   `json:"mptcp"`
        RemoteAddr string `json:"remote_addr"`
    }

    if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
        log.Fatal(err)
    }

    fmt.Printf("MPTCP: %v, RemoteAddr: %s\n", status.MPTCP, status.RemoteAddr)
}

The client injects an MPTCP-enabled dialer into http.Transport.DialContext. Every HTTP request made through this client uses MPTCP when the server supports it. This works with any HTTP method and any endpoint - the transport layer handles the multipath negotiation transparently.

Verifying MPTCP

Check Active MPTCP Connections

# List MPTCP connections and their subflows
ss -M

# Detailed view with subflow information
ss -tiM

The -M flag filters for MPTCP connections. The output shows each connection with its subflows:

State  Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
ESTAB  0       0       192.168.1.10:42356  203.0.113.50:8080
         subflow 10.0.0.5:42357 -> 203.0.113.50:8080

Monitor with nstat

# Show MPTCP-specific counters
nstat -az | grep -i mptcp

Key counters:

| Counter                      | Meaning                            |
|------------------------------|------------------------------------|
| MPTcpExtMPCapableSYNRX       | MPTCP-capable SYN packets received |
| MPTcpExtMPCapableSYNACKRX    | MPTCP-capable SYN-ACK received     |
| MPTcpExtMPJoinSynRx          | Subflow join requests received     |
| MPTcpExtMPFallbackTokenInit  | Connections that fell back to TCP  |

Packet Capture

# Capture MPTCP traffic on port 8080
sudo tcpdump -i any -n port 8080 -v | grep -i mptcp

MPTCP options appear in the TCP options field. Look for mptcp capable, mptcp join, and mptcp dss in the output.

Production Considerations

Path Manager Strategy

Choose the path manager strategy based on your deployment:

| Scenario                     | Strategy                       | Configuration                                 |
|------------------------------|--------------------------------|-----------------------------------------------|
| Client with Wi-Fi + cellular | subflow on both interfaces     | Client creates subflows proactively           |
| Server with multiple NICs    | signal on additional addresses | Server advertises IPs, client creates subflows |
| Data center redundancy       | fullmesh                       | Every address pairs with every remote address |
| Active-passive failover      | backup on secondary            | Secondary only activates on primary failure   |

Scheduler Selection

The MPTCP packet scheduler decides which subflow carries each segment. The upstream kernel ships a default scheduler; other schedulers (such as the redundant and roundrobin schedulers from the out-of-tree MPTCP implementation, or experimental BPF schedulers) may be available depending on the kernel:

# Show the currently active scheduler
sysctl net.mptcp.scheduler

| Scheduler  | Behavior                             | Best For              |
|------------|--------------------------------------|-----------------------|
| default    | Sends on subflow with lowest RTT     | General purpose       |
| redundant  | Sends on all subflows simultaneously | Low-latency failover  |
| roundrobin | Alternates between subflows          | Bandwidth aggregation |

Firewall Rules

MPTCP uses TCP options. Firewalls and middleboxes that strip unknown TCP options will cause MPTCP to fall back to regular TCP. Ensure:

  • Firewalls allow TCP option 30 (MPTCP) to pass through
  • NAT devices do not rewrite TCP option fields
  • If using iptables/nftables, avoid rules that drop packets with unknown TCP options
Note
MPTCP is designed to be middlebox-friendly. If options are stripped, it falls back to regular TCP automatically. However, the benefits of MPTCP are lost in that case. Use nstat counters (MPTcpExtMPFallbackTokenInit) to detect environments where fallback is happening.
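
For automated detection, the same counters can be read from /proc/net/netstat, where the kernel exposes them on MPTcpExt header/value line pairs (without the MPTcpExt prefix that nstat adds to each name). A sketch:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

// readMPTCPCounters parses the MPTcpExt lines from /proc/net/netstat:
// one line listing counter names, followed by one line of values.
func readMPTCPCounters() (map[string]uint64, error) {
    f, err := os.Open("/proc/net/netstat")
    if err != nil {
        return nil, err
    }
    defer f.Close()

    var names []string
    counters := make(map[string]uint64)

    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) < 2 || fields[0] != "MPTcpExt:" {
            continue
        }
        if names == nil {
            names = fields[1:] // header line with counter names
            continue
        }
        for i, v := range fields[1:] { // value line, same order as the header
            n, _ := strconv.ParseUint(v, 10, 64)
            counters[names[i]] = n
        }
    }
    return counters, sc.Err()
}

func main() {
    c, err := readMPTCPCounters()
    if err != nil {
        fmt.Println(err)
        return
    }
    // A growing MPFallbackTokenInit count indicates connections falling back to TCP
    fmt.Println("MPFallbackTokenInit:", c["MPFallbackTokenInit"])
}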

Compatibility Matrix

| Component    | Minimum Version                      |
|--------------|--------------------------------------|
| Linux kernel | 5.6 (basic), 5.19+ (recommended)     |
| Go           | 1.21                                 |
| iproute2     | 5.10 (for ip mptcp commands)         |
| macOS        | Supported since Big Sur (kernel level) |
| iOS          | Supported since iOS 7 (used by Siri) |
| Windows      | Not supported natively               |

Conclusion

MPTCP adds network resilience and bandwidth aggregation to standard TCP connections with minimal application changes. In Go, enabling it requires two lines of code - SetMultipathTCP(true) on a dialer or listener. The kernel handles subflow management, data scheduling, and transparent failover.

Key takeaways:

  • Transparent fallback - MPTCP degrades to regular TCP when unsupported, making it safe to enable everywhere
  • Kernel-managed subflows - Applications see a single connection while the kernel manages multiple paths
  • Go 1.21+ support - Native API through net.Dialer and net.ListenConfig
  • Linux 5.6+ - Requires kernel support and explicit sysctl enablement
  • Monitoring - Use ss -M, nstat, and tcpdump to verify MPTCP is active