Random Linear Network Coding
There’s an assumption baked into how we think about networking: packets are sacred. You send them, routers forward them unchanged, and the receiver reassembles the originals. If a packet is lost, you retransmit that specific packet. The whole system is built around preserving individual packets end to end.
Random Linear Network Coding (RLNC) throws that assumption away. Instead of forwarding packets, intermediate nodes mix them together. And in the right settings, it turns out this is better.
How it works
The source takes its data and splits it into k packets. Instead of sending these packets directly, it sends random linear combinations of them over a finite field:
c_j = α_j1·p_1 + α_j2·p_2 + … + α_jk·p_k

where the α_ji are random coefficients drawn from the field and p_1, …, p_k are the original packets. Each coded packet is a blend of all the originals, weighted by random coefficients. The receiver doesn’t need the original k packets — it needs any k linearly independent combinations. Once it has k independent equations in k unknowns, it solves the system and recovers the originals.
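A minimal sketch of the encoder, assuming the smallest possible field, GF(2), where each coefficient is a single bit and "multiply and add" collapses to XOR (the function name `encode` is illustrative, not a standard API):

```python
import random

def encode(packets, rng=random):
    """One coded packet: a random linear combination over GF(2).

    With single-bit coefficients, mixing is just XOR-ing a random
    subset of the k original packets.
    """
    k = len(packets)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):                      # all-zero combination carries no information
        coeffs[rng.randrange(k)] = 1
    coded = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, p))
    return coeffs, coded                     # the coefficients travel in the packet header
```

The coefficient vector is shipped alongside the payload, so any receiver can interpret the packet as one equation in the k unknowns.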
The neat part: intermediate nodes can do the same thing. A router that receives several coded packets can combine them into new coded packets, creating fresh linear combinations without knowing anything about the original data. The mixing propagates through the network.
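A relay can be sketched the same way (continuing over GF(2); `recode` is an illustrative name): it XORs a random subset of the coded packets it holds, and XORs the matching coefficient vectors, so the new header still describes the payload as a combination of the originals.

```python
import random

def recode(received, rng=random):
    """Mix already-coded packets at an intermediate node.

    received: list of (coeff_vector, payload) pairs over GF(2).
    Returns a fresh (coeff_vector, payload) pair. Combining the
    coefficient vectors the same way as the payloads keeps the
    result a valid combination of the *original* packets, so the
    relay never needs the original data.
    """
    picks = [rng.randint(0, 1) for _ in received]
    if not any(picks):
        picks[rng.randrange(len(received))] = 1
    k = len(received[0][0])
    coeffs = [0] * k
    payload = bytes(len(received[0][1]))
    for pick, (c, p) in zip(picks, received):
        if pick:
            coeffs = [a ^ b for a, b in zip(coeffs, c)]
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload
```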
Why this is clever
In traditional networking, packet loss is a specific problem. You lost packet 7, so you need packet 7 retransmitted. The sender needs to know which packet was lost, which means feedback, which means latency.
With RLNC, there’s no such thing as a specific missing packet. Every coded packet contains information about all the originals. If you lose one, the sender doesn’t need to know which one — it just sends another random combination. Any new independent combination moves you closer to decoding. The loss is generic, so the repair is generic.
This gets really interesting for multicast. If you’re sending to a hundred receivers with different loss rates, traditional networking is a nightmare — you need to track what each receiver is missing and retransmit accordingly. With RLNC, you just keep sending coded packets. Each receiver collects combinations until it has enough to decode. The slow receiver needs more packets, the fast one needs fewer, and the sender doesn’t need to know or care about the difference.
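A toy simulation of that sender-side indifference, still over GF(2) (names like `transmissions_needed` are illustrative): the sender streams random coefficient vectors, and each receiver simply counts transmissions until the vectors it has caught reach rank k.

```python
import random

def rank_gf2(rows, k):
    """Rank of a set of GF(2) coefficient vectors of length k."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(k):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def transmissions_needed(k, loss_rate, rng):
    """Coded packets the sender must emit before this one receiver
    (dropping each packet independently with loss_rate) holds k
    linearly independent coefficient vectors and can decode."""
    rows, sent = [], 0
    while rank_gf2(rows, k) < k:
        sent += 1
        if rng.random() < loss_rate:
            continue                         # lost on this receiver's link
        rows.append([rng.randint(0, 1) for _ in range(k)])
    return sent
```

Running this for receivers with different loss rates shows each one finishing after a different number of transmissions, while the sender's behavior never changes. (Over GF(2), dependent combinations are fairly common; a larger field makes almost every received packet innovative.)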
The tradeoffs
Encoding and decoding require linear algebra — matrix multiplications for encoding, Gaussian elimination for decoding. This is real overhead, especially on constrained devices. The finite field size matters too: too small and you risk generating linearly dependent combinations (useless duplicates), too large and the coefficient overhead per packet grows.
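The decoding step can be sketched concretely (Gauss-Jordan elimination over GF(2) for simplicity; `decode` is an illustrative name): treat each coded packet as one equation, and row-reduce the coefficient side to the identity — the payload side then holds the originals.

```python
def decode(equations, k):
    """Recover the k originals from coded packets via Gauss-Jordan over GF(2).

    equations: list of (coeff_vector, payload) pairs.
    Returns the original packets, or None if the equations have rank < k.
    """
    rows = [(list(c), bytes(p)) for c, p in equations]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                      # dependent set: need more packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytes(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [rows[i][1] for i in range(k)]
```

The `None` return is the dependent-combination case the paragraph above describes: the receiver just waits for another coded packet rather than asking for anything specific.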
In practice, a field size of 2^8 = 256 — the same field used by Reed-Solomon erasure codes — gives a probability of linear dependence below 1% per packet, which is good enough for most applications.
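For reference, multiplication in GF(2^8) can be sketched as shift-and-XOR with polynomial reduction. The reduction polynomial 0x11B below is the one AES uses (Reed-Solomon codecs often pick 0x11D instead), and real implementations precompute log/antilog tables rather than looping per multiply:

```python
def gf256_mul(a, b):
    """Multiply two bytes in GF(2^8).

    Carry-less multiplication, reduced modulo x^8 + x^4 + x^3 + x + 1
    (0x11B). Addition in this field is plain XOR.
    """
    r = 0
    while b:
        if b & 1:
            r ^= a               # add this shifted copy of a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B           # reduce back into 8 bits
        b >>= 1
    return r
```

With byte-sized coefficients, each coded packet carries k bytes of header overhead — the cost side of the tradeoff described above.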
The shift in thinking
What I find interesting about RLNC isn’t the math — it’s the idea that packets don’t need to be preserved. We’re so used to thinking of a packet as a discrete thing that must arrive intact or be retransmitted that it feels wrong to mix them together. But once you let go of that assumption, the properties you get — loss tolerance without feedback, natural multicast support, in-network redundancy — fall out almost for free.
It shows up in wireless mesh networks, content distribution, and satellite links — anywhere the channel is lossy and feedback is expensive. The math is well-understood. The insight is that the assumption we started with was optional.