Internet openness pits collaborative history against competitive future
- Written by Lorenzo De Carli, Assistant Professor of Computer Science, Colorado State University
The debate about how open the internet should be to free expression – and how much companies should be able to restrict, or charge for, communication speeds – boils down to a conflict between the internet’s collaborative beginnings and its present commercialized form.
The internet originated in the late 1960s in the U.S. Department of Defense’s ARPANET project[1], whose goal was to enable government researchers around the country to communicate[2] and coordinate with each other. When the general public was allowed online in the early 1990s, intellectuals saw an opportunity to include all of humanity in the collaborative online community that had developed. As internet rights pioneer John Perry Barlow wrote, “We are creating a world that all may enter[3] without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs.”
Even today, many of the people who contribute to the technical evolution of the network continue to view the internet as a place to share human knowledge for self-improvement and the betterment of society. As a result, many people are troubled[4] when internet companies try to charge more money for faster access to digital commodities like streaming videos.
As a researcher in computer networks and security[5], I note that the problems are not just philosophical: The internet is based on technologies that complicate the task of commercializing the online world.
The ‘true’ internet
In practice, the designers of the technology at the foundation of the internet were not really attempting to enforce any particular philosophy. One of them, David Clark, wrote in a 1988 paper[6] that early internet architects did consider commercial features, such as accounting. Being able to keep track of how much data – and which data – each user is sending is very useful, if those users are to be charged for connectivity. However, most of those commercial features didn’t get included because they weren’t needed for a government and military network.
These decisions from decades ago echo through the years: There is no effective and universal way to distinguish between different types of internet traffic – for example, to give some types priority or to charge extra for others. If whoever produces the traffic actively tries to evade restrictions, separating content gets even more difficult.
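To see what the network itself can and cannot tell apart, consider what a standard packet header actually reveals. The short Python sketch below is illustrative only: it uses the standard IPv4 and TCP header layouts, not anything specific to a particular provider, and the sample addresses are made up. It pulls out the handful of fields a router can read without looking inside the payload – addresses, ports and a protocol number, but nothing that says “this is a streaming video.”

```python
# Minimal sketch: the coarse metadata visible in a packet's standard headers.
import struct
import socket

def summarize_ipv4_packet(packet: bytes) -> dict:
    """Return the fields a router can read from an IPv4 header (plus TCP ports if present)."""
    # First 20 bytes: the fixed part of the IPv4 header.
    version_ihl, _, total_length, _, _, _, protocol, _, src, dst = struct.unpack(
        "!BBHHHBBH4s4s", packet[:20]
    )
    header_len = (version_ihl & 0x0F) * 4
    info = {
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
        "protocol": protocol,          # 6 = TCP, 17 = UDP
        "total_length": total_length,
    }
    if protocol == 6 and len(packet) >= header_len + 4:
        # First 4 bytes of the TCP header: source and destination ports.
        sport, dport = struct.unpack("!HH", packet[header_len:header_len + 4])
        info["src_port"], info["dst_port"] = sport, dport
    return info

# Example: a handcrafted header for a TCP packet from 10.0.0.1:51514 to 93.184.216.34:443.
sample = struct.pack(
    "!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
    socket.inet_aton("10.0.0.1"), socket.inet_aton("93.184.216.34"),
) + struct.pack("!HH", 51514, 443)
print(summarize_ipv4_packet(sample))
```

Everything here could just as easily describe a web page, a software update or a video stream; telling them apart requires looking at the data itself, which is where deep packet inspection comes in.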
Using old tools in new ways
One of the few sources of information about how internet companies handle this challenge comes from recent research at Northeastern University[7]. It suggests that they may be using a technique called “deep packet inspection[8]” to identify, for example, video traffic from a particular streaming service. Then internet companies can decide at what speed to deliver that traffic, whether to throttle it or give it priority.
But deep packet inspection was not developed for this type of commercial discrimination. In fact, it was developed in the internet security community as a way of identifying and blocking malicious communications. Its goal is to make the internet more secure, not to simplify billing. So it’s not a particularly good accounting tool.
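To make that concrete, here is a toy version of the signature-matching idea behind many deep packet inspection tools: scan the raw bytes of a packet’s payload for patterns associated with particular applications or known attacks. The patterns and labels below are simplified examples of my own, not rules taken from any real product; production systems use far larger rule sets and more sophisticated matching.

```python
# Toy deep packet inspection: match payload bytes against a few signatures.
import re

SIGNATURES = {
    # Illustrative rules only: byte pattern -> label.
    re.compile(rb"^GET /videoplayback\?"): "video streaming request",
    re.compile(rb"^\x13BitTorrent protocol"): "peer-to-peer handshake",
    re.compile(rb"(?i)select\s+.+\s+from\s+"): "possible SQL injection",
}

def classify(payload: bytes) -> str:
    """Return the label of the first matching signature, or 'unknown'."""
    for pattern, label in SIGNATURES.items():
        if pattern.search(payload):
            return label
    return "unknown"

print(classify(b"GET /videoplayback?id=abc123 HTTP/1.1\r\n"))  # video streaming request
print(classify(b"random application bytes"))                   # unknown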
Like many other researchers working on deep packet inspection, I learned that its algorithms may fail to correctly identify different types of traffic[9] – and that it can be fooled by a data sender dedicated to avoiding detection. In the context of internet security, these limitations are acceptable, because it’s impossible to prevent all attacks, so the main goal is to make them more difficult.
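As a toy demonstration of how easy evasion can be, the sketch below uses an invented signature and a deliberately trivial XOR encoding – standing in for the real obfuscation or encryption a determined sender would use. Content that a pattern matcher flags in the clear slips past the same inspection logic once both endpoints agree on the encoding.

```python
# Sketch of signature evasion: a reversible encoding hides the bytes the inspector looks for.
import re

VIDEO_SIGNATURE = re.compile(rb"GET /videoplayback\?")  # invented, for illustration

def looks_like_video(payload: bytes) -> bool:
    return VIDEO_SIGNATURE.search(payload) is not None

def xor_obfuscate(data: bytes, key: int = 0x5A) -> bytes:
    # Both endpoints apply the same XOR, so the transformation is reversible.
    return bytes(b ^ key for b in data)

original = b"GET /videoplayback?id=abc123 HTTP/1.1\r\n"
on_the_wire = xor_obfuscate(original)

print(looks_like_video(original))              # True  -> would be identified
print(looks_like_video(on_the_wire))           # False -> slips past the inspector
print(xor_obfuscate(on_the_wire) == original)  # True  -> the receiver recovers the content
```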
But deep packet inspection is not reliable enough for internet service providers to use to discriminate between types of traffic. Inaccuracies may cause them to throttle traffic they didn’t intend to, or not to throttle data they meant to slow down.
Breaking the cycle
The Northeastern team found that T-Mobile seems to throttle YouTube videos[10], but not ones from Vimeo – likely because the company does not know how to identify Vimeo traffic. As the researchers pointed out, this could lead sites like YouTube to disguise their traffic so it also does not get identified. The peril comes if that pushes internet companies to step up their deep packet inspection efforts. The resulting cat-and-mouse game could affect traffic from other sources.
As internet companies experiment with what they can achieve within their technical limitations, these sorts of problems are likely to become more common, at least in the short term. In the long term, of course, their influence could force changes in the technical underpinnings[11] of the internet. But, in my view, the internet’s current architecture means that throttling and traffic discrimination will remain at least as difficult as they are today, if not more so.
References
- [1] U.S. Department of Defense’s ARPANET project (www.internetsociety.org)
- [2] researchers around the country to communicate (www.britannica.com)
- [3] We are creating a world that all may enter (www.eff.org)
- [4] many people are troubled (theconversation.com)
- [5] computer networks and security (scholar.google.com)
- [6] 1988 paper (doi.org)
- [7] recent research at Northeastern University (theconversation.com)
- [8] deep packet inspection (www.wired.co.uk)
- [9] may fail to correctly identify different types of traffic (doi.org)
- [10] T-Mobile seems to throttle YouTube videos (dx.doi.org)
- [11] changes in the technical underpinnings (harvardmagazine.com)