
Internet governance: the evolution of theories of governing cyberspace

by Mirta Cavallo


The Internet, with its dynamic and propulsive nature, has had a revolutionary impact on our society. Inter alia, it has changed the landscape of regulation and political communication.

If one looks at cyberspace from what Orin Kerr calls an internal perspective, it appears that emails, tweets, videos and anything else occurring online can be categorized as different forms of the same thing: social interactions. As the Latin legal maxim “ubi societas, ibi ius” suggests, there can be no society without law. The idea of an unregulated cyberspace may well be appealing, but in practice it would lead to anarchy, not freedom. In fact, the Internet appears to be Janus-faced: a source of enormous benefits – in terms of democracy, sociability, freedom, creativity and the spread of knowledge – but also a source of harm – because of anonymity and pseudonymity, lack of control or oversight, reduced privacy, increased surveillance and cyber-attacks. Paradoxically, in order to be free (and protected), individuals have to give up part of their freedom by recognizing the legitimacy of a regulatory system inspired by democratic principles and fundamental rights. “Man is born free, and everywhere he is in chains”, as Rousseau wrote. We can agree with Eli Noam that “as the Internet moves from being in the main a nerd-preserve, and becomes an office park, shopping mall and community center, it is sheer fantasy to expect that its uses and users will be beyond the law. The Internet has to be dealt with like the rest of the society. This is not to say that such rules, or similar ones, are desirable. But they are unavoidable” [1].

At the same time, there is often a sense that the Internet is following a technologically determined path and is beyond control: its decentralized and borderless nature weakens the traditional concept of state sovereignty, allows regulatory arbitrage and makes any state intervention easy to circumvent. States struggle with jurisdictional problems and with delicate issues of ethical relativism and clashing cultural standards. At the international level, attempts to regulate cyberspace have failed: the experiences of the ITU and the WSIS are telling. Technological determinists have taken these failures as proof that “a bit is a bit”, meaning that no bit can be treated differently from any other and that attempts at control are doomed to fail. Similarly, cyberlibertarians argue that Internet freedom is hardwired into its technological infrastructure: cyberspace is a separate sovereign place where nation-states are illegitimate and powerless. Yet even cyberlibertarians accept that some sort of regulation could apply to cyberspace, in particular one “developed organically with the consent of the majority of the citizens of cyberspace”.

The crucial issue is no longer whether the Internet can and should be regulated, but how it should be governed, by whom, with what values and in whose interest. Cyber-paternalists and network communitarians have addressed these questions.

The argument that the Internet is not unregulable because of its architecture, but is in fact regulated by its architecture, was first developed by cyberpaternalists. Joel Reidenberg elaborated the idea of Lex Informatica, understood as a system of technological architectures that can achieve regulatory results similar to those of legal regulation. Lawrence Lessig then identified law, social norms, the market and architecture as the four regulatory forces that act upon individuals, pictured as isolated, pathetic dots waiting to be instructed. Many modern states in fact resort to a combination of such direct and indirect strategies of regulation. In particular, in cyberspace (which is “plastic”!), the most successful modality of influencing behavior is the shaping of space: code is law. Under Benkler’s “layered vertical regulation”, a distinction can be drawn between a content layer, a logical infrastructure layer and a physical infrastructure layer. Moreover, what the architecture enables or prohibits is the result of choices, and when these choices are made what is at stake is not only regulability but also values: a choice about code is a choice about values. Nevertheless, as Lessig [2] correctly points out, “so obsessed are we with the idea that liberty means ‘freedom from government’ that we don’t even see the regulation in this new space […] no thought is more dangerous to the future of liberty in cyberspace than this faith in freedom guaranteed by the code. For the code is not fixed […]. Our choice is not between ‘regulation’ and ‘no regulation’. The code regulates. It implements values, or not. It enables freedoms, or disables them. It protects privacy, or promotes monitoring […]. The only choice is whether we collectively will have a role in their choice or whether collectively we will allow the coders to select our values for us […]. Unless we do, or unless we learn how, the relevance of our constitutional tradition will fade”.

Cyberpaternalism provides an enlightening insight into the modalities of regulation in the digital environment. Not by chance, Lessig is one of the most distinguished voices in cyberlaw. If you ask Frank Easterbrook, there is no more a “law of cyberspace” than a “law of the horse”; but the majority of scholars, including Viktor Mayer-Schönberger, believe that “we owe Lawrence Lessig, as economists owe John Maynard Keynes, the beat generation owes Jack Kerouac, and computer users owe Steve Jobs”. Nevertheless, there are several flaws in Lessig’s theory. Network communitarians such as Andrew Murray, Colin Scott and Roger Brownsword have identified them, and in so doing they have been able to deliver a better regulatory model than cyberpaternalism.

Firstly, according to Murray and Scott, contextualising Lessig’s thesis within the accepted language of control theory and regulatory theory shows that a correct labelling must take into account that every regulator has a director (who sets the rules), a detector (who identifies violations) and an effector (who brings behaviour back into compliance); moreover, the modalities of regulation can be restated as hierarchy, community, competition and design. Lessig, however, focused only on the director, with the result that his labels were under-inclusive and he failed to fully capture the true essence of each regulatory modality.

Apart from the labelling, Murray’s classification of the four modalities of regulation into the two families of “socially mediated” and “environmental” modalities is particularly helpful, as it highlights an aspect that would otherwise easily go unnoticed: when it comes to architecture, no human intervention is required to bring about a change in the regulatory settlement. The lack of human presence is also what Brownsword and Scott focused on, from the standpoint of accountability. Accountability is a fundamental principle of democracy, strictly connected to the rule of law and an indispensable requisite for the legitimacy of any regulatory authority. Digital technologies, however, are characterized by automaticity: there is no human interaction, no scope for argument. In the case of a filtering system, for instance, it is often unclear what is being blocked, why, or by whom. Sometimes the end-user is not even made aware that filtering is in operation, nor is the site owner, unless he or she spots and can diagnose a fall-off in traffic, as happens with Cleanfeed – the filtering system managed by the IWF, well known to the public after the Virgin Killer affair. In extreme cases end-users may even be actively misled, as in Uzbekistan, where users are told that sites banned for political reasons are blocked for supposedly pornographic content. Even when aware, users cannot question the architecture itself, but only those who shaped it. Moreover, treating code as a modality of regulation in the proper sense may rob users of moral agency or responsibility in their use of the Internet: they may believe they are free to do whatever it is technically possible to do, with no need for moral engagement in their activities. None of this negates the effectiveness of architecture in cyberspace (which is remarkable), but one must be aware that architecture is only a tool to enforce a regulation that originates elsewhere.
Drawing an analogy, the success of Jeremy Bentham’s Panopticon (a prison design in which a small number of guards can watch all the prison corridors from a central tower) depends on the exercise of legal authority to detain prisoners and to discipline those who violate prison rules. Surveillance is thus not a legal power in itself; it is a tool supporting the exercise of legal power.

Another crucial, but unconvincing, aspect of Lessig’s thought is the idea of individuals as isolated pathetic dots, passive observers of the regulatory matrix that surrounds and constrains them. The incorrectness of this view lies in the fact that it neglects the role of the Internet as primarily a tool of communication that shrinks distances and time and strengthens bonds between people with common interests or experiences; it also overlooks that the Internet satisfies some of the conditions of Rousseau’s strong form of democracy. In fact, the Internet is not just a network of networks: it is also a network of communities. When it comes to regulation, the dot is neither alone nor inert: when it feels a sense of unfairness or injustice, it can exert its will on other dots, and together they may agitate as a group against the weight of regulation pressing down on them. When enough dots come together they form a community, which generates norms. Moreover, Lessig’s idea that the dot responds to regulation in a predictable and directed fashion does not take into account that the response from the regulated individual is often at variance with the desired regulatory outcome. In other words, unintended consequences (often called “regulatory failures”) occur. This happened in the field of DRM technologies, which failed completely as architectural solutions to prevent copyright infringement: memorably, a scheme such as the Content Scramble System was reverse-engineered by a teenager! It happened also in the CTB case, in which the #IamSpartacus phenomenon also raises the question of where network communitarianism stops and vigilantism begins.

In light of the role of individuals as active transmitters rather than passive receivers, the best regulatory solution is the one suggested by Murray, based on Actor-Network Theory (developed by Michel Callon and Bruno Latour) and Social Systems Theory (of Niklas Luhmann and Gunther Teubner): symbiotic regulation. An active intervention into a settled regulatory environment is likely to be extremely disruptive. It is much better to design a control model that harnesses the relationships already in place by affording “all participants in the regulatory matrix an opportunity to shape the evolutionary development of their environment […] evolution rather than revolution is the key to effective regulatory intervention and this means that communication between all parties is essential”. This is certainly an extremely complex task, which entails uncertainty and a “leap of faith”, but it is the regulatory model most likely to succeed.

Murray’s position proves particularly enlightening, and closer to reality than Lessig’s model, also because it draws attention to gatekeepers: those who control the flow of information over a particular network or community and are therefore in a position to exert significant power online. However, as Giles Moss pointed out in Internet Governance, Rights and Democratic Legitimacy, Murray does not seem to underscore sufficiently the worrying aspects of this role. Cisco decides how data packets are routed on the Internet, Google determines the results of most of our searches, and Facebook profiles (and shadow-profiles) each and every one of us. A shrinking number of corporations shapes cyberspace, to the detriment of users’ power to choose. In my opinion, these growing asymmetries and this concentration of power are likely to lead to dangerous outcomes, especially if one considers that gatekeepers are in most cases private corporations, whose interests do not always align with those of the public, and that they have assumed a central regulatory role without being subject to the same obligations as public entities.

The adoption of a symbiotic model of regulation able to meet the interests of all stakeholders involved is one of the primary objectives to achieve in the near future, to be combined with human rights compliance and corporate social responsibility of gatekeepers.


Bibliography

[1] NOAM, Eli – Regulating Cyberspace. Columbia University, 1997.

[2] LESSIG, Lawrence – Code and Other Laws of Cyberspace. Basic Books, 1999.


BARLOW, John Perry – A Declaration of the Independence of Cyberspace. 1996.

CASTELLS, Manuel – The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford University Press, 2002.

COLEMAN, Stephen; FREELON, Deen – Handbook of Digital Politics. Edward Elgar, 2015.

EASTERBROOK, Frank – Cyberspace and the Law of the Horse. U. Chi. Legal F., 1996, 207.

KERR, Orin S. – The Problem of Perspective in Internet Law. Georgetown Law Journal, 2003, 91.

KNIBBS, Kate – What’s a Facebook Shadow Profile, and Why Should You Care?. Digitaltrends.com, 2013.

MAYER-SCHÖNBERGER, Viktor – Demystifying Lessig. Wis. L. Rev., 2008, 713.

MCINTYRE, Thomas J.; SCOTT, Colin – Internet Filtering: Rhetoric, Legitimacy, Accountability and Responsibility. 2008.

MOSS, Giles – Internet Governance, Rights and Democratic Legitimacy. 2015.

MURRAY, Andrew – The Regulation of Cyberspace: Control in the Online Environment. Routledge, 2007.

MURRAY, Andrew – Information Technology Law: The Law and Society. Oxford University Press, 2013.

ROUSSEAU, Jean-Jacques – The Social Contract. 1762.

