Towards a definition of the Internet
The Internet is a loose arrangement of connected but autonomous networks of devices. Each device, a ‘host’ in networking jargon, uses a ‘protocol’ to communicate with other devices on the network. These protocols tie together diverse networks and govern communications between all computers on the Internet.
‘Loose’ is an illuminating term here because it suggests that the Internet as it is was not planned, but rather ‘became’ in a manner far closer to organic growth than to meticulous and ordered organisation. This differs greatly from other physical networks that connect people and places, such as road or rail networks, which invariably grow only with the explicit permission of large, centralised authorities. I, however, add to the Internet (in an admittedly minuscule way) when I turn on a device that is capable of connecting to it. The method by which different networks connect to each other, the ‘protocol’ of our quote (better known as TCP/IP), is actually relatively simple and forms — as will be seen later on — a foundational element of this notion of the ‘openness’ of the Internet.
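That simplicity can be illustrated in miniature. The following Python sketch (illustrative only; the loopback interface stands in for the wider network, and the ‘protocol’ here is simply an echo) shows that nothing more than a shared protocol is needed for two ‘hosts’ to communicate over TCP:

```python
import socket
import threading

def echo_server(sock):
    # One 'host' listens and echoes back whatever it receives
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# A second 'host' joins the network and speaks the same protocol
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello'
```

No central authority grants permission for the second host to connect; agreement on the protocol is sufficient.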
Certainly parts of it, both large and small, were very meticulously planned. The sum of these interconnections, however, was not, and the permanent state of flux in which the Internet operates, as devices and networks continually connect and disconnect, means that there can never be an accurate picture of ‘the Internet as a whole’. It is, arguably, inaccurate even to use the definite article as a prefix, given that ‘the Internet’ is never a single thing in a fixed state. We don’t use ‘the space’ to describe the interconnected, ever-shifting space beyond the Earth’s atmosphere. We do, however, use the definite article in ‘the environment’. When we refer to the environment, we are largely alluding to an interconnected system of biospheres, including habitats as radically disparate as the deep ocean, arid deserts, temperate woodlands, choked metropolitan cityscapes and the Ozone layer at the edge of the Earth’s atmosphere. Whether or not it is grammatically accurate to refer to ‘the’ Internet, its proliferation through common parlance the world over makes it likely that we will continue to do so.
Returning to Ryan’s definition, ‘autonomous’ is also an illuminating term, as it alerts us to a key operational principle of the Internet, dubbed the ‘end-to-end argument’. This was conceived by its architects as a ‘design principle (to help) guide placement of functions among the modules of a distributed computer system’ (Saltzer, Reed and Clark, 1984). The ‘end-to-end argument’ essentially states that the network itself is effectively ‘dumb’ and that features are implemented at the end points — the host computers themselves — rather than in the middle. Zittrain (2008) describes end-to-end’s benefits as including making the network more flexible, thus removing the need for system architects or administrators to anticipate every problem that could go wrong with it, and preserving users’ freedom by routing data packets between sender and recipient without anyone stopping them to ask what they contain.
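The principle can be sketched concretely. In the hypothetical Python example below, the ‘dumb’ network is represented by UDP, which merely moves datagrams without guarantees or inspection; the sequence numbering and acknowledgement, the ‘features’, are implemented entirely by the end points themselves:

```python
import socket

# The 'network' (UDP) just moves datagrams: no delivery guarantees,
# no inspection of content. Any reliability is an end-point feature.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"1:payload", addr)      # the end point adds its own sequence number

data, src = receiver.recvfrom(1024)
seq, payload = data.split(b":", 1)
receiver.sendto(b"ack:" + seq, src)    # the acknowledgement is also end-to-end

ack, _ = sender.recvfrom(1024)
sender.close()
receiver.close()
print(payload, ack)  # b'payload' b'ack:1'
```

Nothing in the middle knows or cares that a reliability scheme is in operation; it sees only opaque datagrams in transit.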
Those managing the system and those using it thus both gain from this principle. For Wu (2010), end-to-end ‘abdicates control to the individual’, thus enabling a degree of autonomy and therefore also productive capability for the Internet user, arguably far greater than for users of other electronic media.
The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.
(Leiner et al., 2012)
The UN Human Rights Council (La Rue, 2011) recommended that states treat access to the Internet as a human right, a recommendation argued against by one of the ‘fathers of the Internet’, Vint Cerf (2012), on the grounds that ‘technology is an enabler of rights, not a right itself’. This interconnected ‘network of diverse networks’ is thus also far more than the sum of its technical parts. It is a canvas onto which the modern world is increasingly drawn. Clichéd as it might now be to make such statements, the Internet permeates most aspects of modern life and is the prime driver of change in the early twenty-first century — social, cultural, political and economic change. These changes do not come without costs. As Ashdown stated, ‘the paradigm structure of our time…is the network’ (in contrast with the hierarchy) (TEDxBrussels, 2012), a circumstance that has arguably arisen as a direct result of the phenomenal growth and global impact of an open digital network that anyone can join.
A final attempt at a definition comes via Doctorow (2008):
Here are the two most important things to know about computers and the Internet:
1. A computer is a machine for rearranging bits
2. The Internet is a machine for moving bits from one place to another very cheaply and quickly.
This definition views the Internet as ‘the world’s greatest copying machine’, a factor that has led to the latest incarnation of what Doctorow (2013) and others have described as ‘the copyright wars’. This is a non-militarised conflict which sets owners of copyrighted works who favour the ongoing expansion of the terms of copyright laws against those who argue for either a reduction of or an end to the broad scope of copyright laws (Patry, 2009). All of which matters here because it tells us that the Internet is the battleground on which these wars are now being fought. Nor are the copyright wars the only conflicts being waged online. These conflicts will be returned to in Part One.
The Layers Model
A common cognitive framework for understanding communications systems comes from the layers model. Fransman (2002) observes that this model, traditionally subdivided to include equipment, network and services layers, has long been used by telecoms engineers and software developers to organise the interdependencies of their work and knowledge. He also notes that there is no consensus over either the number of layers that should be distinguished or exactly what is to be included in each layer. Nevertheless, this model from the telecoms industries is borrowed here to begin building an understanding of Internet functionality.
Probably the best-known layer model for conceptualising the Internet comes from Benkler (2006), who proposed his version as a simple, three-layered representation of the basic functions involved in mediated human communications. Sharing some commonality with Fransman’s telecoms model, Benkler describes his model as consisting of a physical layer of actual network infrastructure at the bottom, a central logical layer above that comprising protocols, standards and algorithms running as software code, and finally a content layer of actual human utterances running over the top, typically manifested as digitised text, images, audio or video. For Benkler, all three layers must be used for mediated human communication to take place. He observed the emergence of non-proprietary models at each layer of the Internet, in both its technical and practical capabilities, which made access to communication or cultural production cheaper than under a proprietary model and less susceptible to control by any single party or class of parties. This can be compared with the proprietary US telephone network, which was run as a monopoly by the corporation AT&T for most of the twentieth century (Wu, 2010). AT&T owned the end points of the system (the telephone handsets), forbade telephone users from modifying the handsets in any way, and owned the telephone line network itself. Benkler (ibid) also noted that significant policy battles had taken place at each layer over the facilitation or even permission of non-proprietary or open-platform practices. These battlelines have arguably been drawn in more than just the policy space, as Part One will indicate.
Other writers have taken the layers model and used it or built on Benkler’s version according to their domains of concern. This includes Kapur (2005, on Internet governance), Lessig (2001, on copyright and intellectual property), and Zittrain (2008). Kapur noted that additional layers may be included or the names changed, and that the three layers he described were interdependent. Lessig observed that each layer could be controlled or ‘owned’, as in a centralised communications system, or ‘free’ (meaning organised as a ‘commons’) or ‘unowned’, as in a decentralised communications system, although each layer will contain degrees of openness and closedness. Zittrain sketched his version of the model as an hourglass, with Internet Protocol (IP) as the slender central point through which everything on the Internet passes. He also identified a clear division of labour, evident in a layer model, amongst the people working towards overall improvement of the network. This stands in contrast to proprietary communications networks, which are offered to customers as one-stop solutions at the cost to the provider of having to design everything themselves. A final point from Zittrain is that openness at one layer can beget openness at another.
For the model proposed here, I will borrow further from Zittrain and add a fourth layer. This I will define as the social layer, which is concerned with behaviours and interactions amongst people or groups of people. This additional layer matters because, to paraphrase Cerf (2012), it is what technology enables in people that ultimately has the most profound implications. Does an open communications platform encourage openness in individuals, communities, or even societies? If it does, what does that mean, and why does that matter? While space does not permit deeper investigation of these questions here, they will nevertheless underpin tentative explorations of the Social Layer in Part Two.
Table 1 illustrates this model by defining each layer and indicating what each one can include:
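The four-layer model above can also be expressed as a simple data structure. The sketch below is purely illustrative (the descriptions are paraphrased from the discussion above, and the helper function is hypothetical), but it shows the bottom-up ordering and the kind of contents each layer holds:

```python
# The four layers, ordered bottom-up, with examples drawn from the
# discussion above (Benkler's three layers plus the added social layer).
LAYERS = {
    "physical": "actual network infrastructure (cables, routers, host devices)",
    "logical":  "protocols, standards and algorithms running as software code",
    "content":  "human utterances: digitised text, images, audio and video",
    "social":   "behaviours and interactions amongst people or groups of people",
}

def describe(layer: str) -> str:
    """Return a one-line description of a named layer."""
    if layer not in LAYERS:
        raise KeyError(f"unknown layer: {layer}")
    return f"{layer} layer: {LAYERS[layer]}"

for name in LAYERS:
    print(describe(name))
```

The ordering matters: as with Benkler’s account, communication must traverse every layer, from physical infrastructure up through code and content to the people using it.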
This model both provides a means of understanding how the Internet functions and gives us a framework for assembling a catalogue of methods that can be deployed for ‘keeping the Internet open’, which will be listed in Part Three.
The choice of the term ‘open’ as an adjective for describing an essential characteristic of the Internet, and with which to differentiate it from other information systems or communication mediums that have preceded it, is naturally a deliberate one. Other terms have been used in key literature for defining the Internet’s qualities, including ‘free’, ‘generative’ and ‘centrifugal’, so why choose ‘open’ for this study over any of these?
Part Two of this study will take a closer look at the semantics of ‘open’ and ‘openness’, and will include a glimpse at broader social meanings and specific uses that relate to the Internet, as well as reasons for the choice of this term over others used in the literature. This section will also include a layer-by-layer investigation of ‘the open Internet’. However, as a driving motivation behind this work is to convince the reader that the Internet as we know it is under threat, as well as to provide indicators for what can be done about that, this study will begin with investigations of what a ‘closed Internet’ might be considered to comprise.
Of course, setting up such binary distinctions is useful for the purposes of constructing an argument, but the reality is always more complex than such black and white differentiations make out. As Lessig (2001) suggests, ‘what is special about the Internet is the way it mixes freedom with control at different layers’, and that ‘changes to this mix will kill what we have seen so far’. It is the ‘changes to this mix’ that will make up this picture of a ‘closed Internet’.
‘198 Methods’, and nonviolence in the virtual realm
Before beginning the investigation, I will turn next to the number of methods in the title, and ask — why 198 of them? Gene Sharp is best known for his extensive writings on nonviolent struggle for the achievement of democratic rights and justice. His works have been translated into more than 30 languages and cited as influential on several democratic or resistance movements across the world, including in Serbia, Ukraine, Egypt and Iran (Arrow, 2011). His central argument is that power is only held by the consent of the people over whom it is exerted and that with clear strategies, the pillars of support that all regimes rely on can be nonviolently removed (Flintoff, 2013). He defines a nonviolent action as ‘a technique of socio-political action for applying power in a conflict without the use of physical violence’ (Sharp, 1973) and describes nonviolent struggle (Sharp, 2010) as more complex and varied than violent struggle, but a more effective means of achieving change than by the use of violence. This is primarily because those in power will always have stronger weapons or a monopoly on the use of force.
Freedom House define freedom for national peoples as ‘possible only in democratic political environments where governments are accountable to their own people; the rule of law prevails; and freedoms of expression, association, and belief, as well as respect for the rights of minorities and women, are guaranteed’ (Freedom House, n.d.). Alongside supporting nonviolent civic initiatives in societies where freedom is perceived to be under threat, they compile an annual international survey of the status of political rights and civil liberties worldwide. Figure 1 below shows their country ratings from the beginning of this survey up until 2012.
While the dataset used to produce this graph would clearly require far deeper investigation to be able to make bold claims about the impact of nonviolent methods in achieving effective social change, the overall global trend according to Freedom House over the last 40 years does appear to have been towards more freedom (and therefore less overall state violence).
Sharp (ibid: p. 30) catalogued a series of ‘psychological, social, economic, and political weapons’ that can be deployed by individuals or social institutions, which he then classified into three main categories: protest and persuasion, noncooperation, and intervention. Examples from this list ranged from using colours and symbols to teach-ins and mock funerals (the full list is linked to in Appendix I). This list was compiled over several years, stalling at 198 methods (Flintoff, 2013).
This thesis is not a work about nonviolence and democratic struggle. While I recognise that marrying Sharp’s work with our cause of keeping an open Internet invites questions about how the notions of violence or nonviolence can be applied to actions within the virtual realm, this study does not seek to address those questions, as vital as that investigation might be. Considerations of ‘cyberviolence’ are described in Part One, with references to DDoS (Distributed Denial Of Service) attacks, the Stuxnet worm or the notion of cyberwar, but that is a distinct area of study from this one. This is however a study that draws inspiration from Sharp’s work, firstly in that it aligns with his position of nonviolence being a more effective agent of change than violence, but also that it draws from his list in order to provide a framework onto which these Internet layers and their associated subcategorical issues can be mapped. In other words, borrowing from Sharp’s categories of nonviolent action, we will use the aforementioned layers model to categorise our list of ways to ‘keep the Internet open’.
Part Three of this study, then, will lay out a catalogue of (nonviolent) methods that can be deployed by individuals or social institutions towards the goal of maintaining the essential qualities of the Internet as described in Part Two. Sharp (2010: 31) indicated that ‘there are certainly scores more (methods of nonviolent action)’ than the 198 that he catalogued, and my list of methods extends beyond 198 – to 241, in fact. While this means that my list contains more methods than Sharp’s, and thus loses any potential serendipitous benefits from a ‘search engine optimisation’ point of view, the title functions as a kind of alignment with Sharp’s work. It also provided a target to aim for whilst conducting the main research.
Before concluding this introduction, a few words on methodology. The methodology for this study included:
1) Desk research
2) Online engagement
The desk research involved analysing key texts from a corpus of Internet-related works of varying themes, as well as relevant academic papers and journal articles. This contributed to the understanding of past, present and future threats to the Internet and the nature of openness online, and provided the bulk of the methods given in my catalogue. The online engagement was conducted via a self-hosted wiki (using DokuWiki software, chosen for its ease of setup and simple syntax) with commenting features, which was promoted via common social media channels such as Twitter, Facebook and YouTube to solicit contributions. In keeping with the spirit of the enquiry, I used open collaborative software to host the project’s public platform, as well as attempting to ‘crowdsource’ some of the contributions.
Readers are guided towards Sharp’s work for that deeper investigation and evidenced examples of the effectiveness of the use of nonviolent methods, much of which can be found at The Albert Einstein Institution Free Resources page (https://www.aeinstein.org/free-resources/; last accessed 25/10/18).