The nice thing about garbage-collection is that it makes memory management absolutely transparent to you.
Of course, the above is true right up to the point when it stops being transparent, at which point it does a phase change from nice to clusterf**k of biblical proportions. There are entire posts, books, and suicide notes that have been written about this, and I have no great desire to add to them, save with the minor comment that paranoia about garbage-collection is still justified, albeit only in the edge-cases for most people.
First, the great part - the changes are on the sender side, and require no coordinated changes on the receiver or in the intermediary network. Which is awesome - it means that this can be incrementally deployed purely by updating the network stack at the end-points.
Seriously, that is awesome news. This field has been more art than science for the longest time, and despite the plethora of approaches out there, not much has really changed since the days of Reno.
Part of the reason for this is the network equivalent of the Heisenberg Uncertainty Principle - bandwidth and network delay are inextricably linked, and can't be measured simultaneously. To measure the minimum round-trip time, the bottleneck queue has to be empty; to measure the bottleneck bandwidth, the pipe has to be kept full, which inflates the queue and hence the measured delay. And the problem with that is that it turns out that you really, really want to look at the two independently to find the optimal operating point for a network.
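One way around the measurement conflict is to estimate the two quantities separately over time, each with its own windowed filter: queueing can only *lower* an observed delivery rate, so take the max; queueing can only *inflate* an observed RTT, so take the min. Here's a toy sketch of that idea (not Google's actual implementation - the class, method names, and window size are all made up for illustration):

```python
from collections import deque

class PathEstimator:
    """Toy two-filter estimator: windowed max of delivery rate,
    windowed min of round-trip time."""

    def __init__(self, window=10):
        self.bw_samples = deque(maxlen=window)   # delivery-rate samples, bytes/sec
        self.rtt_samples = deque(maxlen=window)  # RTT samples, seconds

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        # Each ACK yields one delivery-rate sample and one RTT sample.
        self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    @property
    def btl_bw(self):
        # Queues can only reduce the observed rate, so the max
        # over the window is the best bottleneck-bandwidth estimate.
        return max(self.bw_samples)

    @property
    def rt_prop(self):
        # Queues can only inflate the observed RTT, so the min
        # over the window is the best propagation-delay estimate.
        return min(self.rtt_samples)

    @property
    def bdp(self):
        # Bandwidth-delay product: roughly the optimal amount
        # of data to keep in flight.
        return self.btl_bw * self.rt_prop
```

With the two estimates decoupled like this, the sender can pace at the bandwidth estimate while sizing its in-flight data to the bandwidth-delay product, instead of probing for loss the way Reno-style algorithms do.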
Anyhow, Google's new algorithm - called BBR, for Bottleneck Bandwidth and RTT - is…