Brazil’s proposal to require companies to host Brazilian user data in-country has spurred discussions on whether local data storage is effective and if it would fragment the internet. Wendy Grossman, journalist and blogger, argues that the bigger possible threats that may lead to “Internet Balkanization” are structural censorship and the loss of net neutrality.
For as long as I can remember, the notion that the Internet might break apart along nation-state or commercial boundaries has loomed as a threat. Usually called “Balkanizing the Internet” (Becky Hogge, at the Cybersalon, called it the “Splinternet”), the idea has become a live topic again because of the Snowden revelations. To some people it no longer sounds so bad; to others it is still automatically ominous.
Eva Pascoe (@cybrsalon) sent out a query: was it likely? Would it be technical or political? And that’s when I realized for the first time that, at least in its current form, it’s a nonsense threat.
There are at least four different ways the Internet could be “Balkanized”. The current usage derives from threats by Brazil and Germany (chiefly, but not solely) to pass national laws requiring data pertaining to their citizens to be stored locally instead of shipped around the Internet into the purview of the NSA. Folks like the vocal Lauren Weinstein call this both hypocritical (because the countries spy on their citizens themselves) and dangerous (because it would split the Net by raising barriers to the free flow of information).
But this particular sense is about private data storage, not flows of data meant to be published. There are myriad vast data stores on the world’s computers that do not touch the Internet, or do so only in very restricted ways, and it matters not at all. What difference does it make whether the entity demanding local control over its users’ data is Brazil or Facebook? I would argue: none – and storing data locally is likely also to aid network efficiency. Sure, the US doesn’t like the EU’s widely copied data protection laws because they impede the free trade in data from which US data-driven companies make so much money. But that doesn’t mean the rest of us have to facilitate that trade. More important, as Caspar Bowden has been pointing out (PDF), local storage won’t protect against NSA spying, because US companies’ foreign subsidiaries are subject to FISA and the PATRIOT Act no matter where the data is stored.
The two original senses of “Balkanization” are more serious. The first is censorship. Twenty years ago, as now, there was much concern about attempts to regulate content at national borders. These would either turn the Internet into a patchwork of unpredictably unreliable access to information or into a lowest-common-denominator medium hosting only the most universally acceptable (ie, blandest) content. Twenty years on, various countries do filter and censor, yet the Internet’s somewhat porous quality has largely survived, even though some people can’t access some types of information at some times from some locations.
The second was about the management of addressing, an issue of technical Internet governance overseen since 1998 by the Internet Corporation for Assigned Names and Numbers. Although there are many registrars selling domain names and many registries (one for each country and generic top-level domain), ultimately transforming human-friendly domain names into the numbers computers use rests on a monopoly: a single root zone, served by 13 root servers. The feasibility and dangers of splitting the root have been the subject of many debates and disputes over the years. Snowden has revived the uneasiness outside the US about ICANN’s remaining ties to the US Department of Commerce. The upshot is much more interest in other, more international governance efforts such as the Internet Governance Forum than there was six months ago. Still, what scares people about this is not so much Balkanization as that the participation of so many governments with conflicting regulatory agendas could stall the Internet’s development entirely.
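The single-root property can be illustrated with a toy resolver: a query walks down from one root, through delegations, until it reaches an address. This is only a sketch of the delegation idea; the zone contents and names below are invented, not real DNS data or the real resolution protocol.

```python
# Toy model of DNS delegation. Each "zone" maps a name suffix either
# to another zone (a delegation) or to an address (an answer).
# All zone data here is invented for illustration.
ZONES = {
    ".": {"org.": "tld-org"},                          # root delegates .org
    "tld-org": {"example.org.": "ns-example"},         # .org delegates example.org
    "ns-example": {"www.example.org.": "93.184.216.34"},  # final answer
}

def resolve(name):
    """Follow delegations from the single root down to an IP address."""
    zone = ZONES["."]                     # every lookup starts at the root
    labels = name.rstrip(".").split(".")
    # Try suffixes from most general ("org.") to most specific.
    for i in range(len(labels) - 1, -1, -1):
        suffix = ".".join(labels[i:]) + "."
        target = zone.get(suffix)
        if target is None:
            continue
        if target in ZONES:               # delegation: descend into that zone
            zone = ZONES[target]
        else:                             # address record: resolution done
            return target
    raise LookupError(name)

print(resolve("www.example.org."))  # prints 93.184.216.34
```

Because every lookup begins at the same root table, whoever controls that table controls what the whole tree can see, which is why splitting the root is such a contested idea.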
The final sense is the one network neutrality is meant to protect against. This would be the scenario in which access providers discriminate against traffic they don’t like: the cable company that also owns TV stations and accordingly demands a very large ransom from Google and Microsoft to allow subscribers to access YouTube and Skype, or slows down the connections to the point where those services are unusable. The answer is either to disallow this sort of cross-ownership (a classic anti-trust issue) or, failing that, to have regulators insist that all types of traffic be treated equally. There’s some delicacy surrounding how to phrase this so that it doesn’t outlaw traffic-shaping, a legitimate tool for defending against certain kinds of attacks and for ensuring that you don’t interrupt someone’s video stream of live tennis at match point in order to deliver an email message whose recipient wouldn’t notice a two-second delay. But delicately phrasing things is what lawyers and policy makers are for.
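The tennis-versus-email distinction is, at bottom, priority queueing. A minimal sketch of that idea, with invented traffic classes and priority values (real shapers in routers are far more elaborate):

```python
import heapq

# Hypothetical traffic classes; lower number = sent first.
PRIORITY = {"video": 0, "voip": 0, "email": 2, "bulk": 3}

def shape(packets):
    """Return packets in the order a simple priority shaper would send them.

    packets is a list of (kind, payload) tuples in arrival order. A
    sequence counter breaks ties so packets of equal priority stay FIFO.
    """
    queue = []
    for seq, (kind, payload) in enumerate(packets):
        heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))
    order = []
    while queue:
        _, _, kind, payload = heapq.heappop(queue)
        order.append((kind, payload))
    return order

arrivals = [("email", "msg-1"), ("video", "frame-1"),
            ("bulk", "backup-1"), ("video", "frame-2")]
for kind, payload in shape(arrivals):
    print(kind, payload)   # both video frames go out before the email
```

The regulatory delicacy is visible even in this toy: the mechanism that lets video pre-empt a two-second-tolerant email is the same mechanism a provider could use to starve a rival’s service.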
So, yes, all these threats have some potential to disrupt the Internet as we know it. Of the four, I’d say the local data aspect is more interest-group hype than real threat. The bigger threats are structural censorship and the loss of network neutrality, both of which could play into the hands of large, commercial interests who would prefer to turn the Internet into a captured broadcast medium cum giant data collection platform. Whether local data storage can be meaningfully forced on a network that was specifically designed to connect people regardless of nationality or location is an entirely different matter.
This post originally appeared as a net.wars column on 8 November 2013 and on ORGZine. It is re-posted with permission and thanks. The article gives the views of the author, and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics.