‘We do not yet know what a network can do’: steps to a collective internet

Liam Mullally

20th May 2024


Cybersyn – Stafford Beer’s cybernetic management project – is often offered as a vision of a “socialist internet”: a lost link to the tech-utopia that could have been. Cybersyn was a product of Salvador Allende’s democratic socialist Chile, and while there are reasons to hesitate before making such bold claims, it remains a powerful counterpoint to the naturalisation of today’s private internet, formed as it has been by neoliberal dogmas of market efficacy. Not least, this is because Cybersyn (along with Allende’s project) was cut off prematurely. In 1973, a brutal US-backed coup remade Chile as a dictatorship and a petri-dish for neoliberalism, violently destroying Cybersyn in the process.

On the day of the coup, one of Pinochet’s soldiers was apparently so bewildered by Cybersyn that he started stabbing the slides designed to be projected in the operations room, presumably unaware that these were not the system itself (Medina 2011). The circle of chairs at its centre does not appear to have registered as significant to him, nor is there any evidence that he targeted the telex machines, computer or network itself. Perhaps more surprising than this violent outburst is that when Gabriel Rodriguez, one of Cybersyn’s technologists, arrived in Silicon Valley after the coup, he was totally mystified by the personal computer, repeatedly shutting down his building’s LAN when trying to use it (see: The Santiago Boys). This underscores an important point that isn’t immediately clear when we read about Cybersyn: the system is profoundly collective; it is the economy. In both cases, these networks (extensions of the societies that produced them) were totally confounding to their ideological others.


My recent blogs examined the “actually existing internet”; this one is more interested in what else it could become. As an alternative to the systems which went on to form the internet, but one which was snuffed out before it could really develop, Cybersyn holds a curious position in the political imagination of the cybernetic left. But when we look at what Cybersyn actually was from our capitalist present, we are not so unlike Pinochet’s enraged soldier; so much of Cybersyn’s design is alien to the networks and computational devices of today, and confronting it is profoundly alienating. It exposes much of what we tend to think of as natural in the internet to be in fact arbitrary and specific. And this is uncomfortable not least because it makes clear that a genuine “socialist internet” might functionally have very little in common with the network we are used to today.

The question is ultimately one of social relations: how do we want to organise ourselves? “Broadband Communism” is a playful allusion, but simply making internet access public is not enough, even if it is the first step.

Figure 1: Alternate internets, Cleveland Freenet homepage (top) and Minitel terminals (bottom)

There are, of course, other examples one might follow that more neatly align with our internet: Minitel, a state-run computer network which operated from 1982 to 2012 in France, although it was more or less subsumed into the internet before this date; and the Cleveland Freenet, founded in 1986 as an alternative model to ISPs, providing public access to the network, generally organised on a community or municipal basis. Even though both of these examples failed to resist the hegemonic pull of what has become the capitalist internet of today, they offer counter-examples to private models of network management and organisation which are sometimes made to seem inevitable or natural.

Another internet is possible, but these cases won’t hand it to us. We cannot dig up Cybersyn (or even Minitel) and remake it in the present, and we should not want to. Instead, such archeologies offer a relief to what we do have today – imaginative departure points from which we can glimpse what might be possible and through which we can imagine new alternatives.

New terminals

Figure 2: Cybersyn control room.

Terminals are interface points at which people (users) meet the network. Like the more common (and less awkward) “devices”, this is a generic term which encompasses a number of technical objects. But unlike “devices” (or the even more general “machines”), “terminals” contains an indication of use: terminals sit at the boundary between the human and computational portions of the network; they are the point of access and interaction, but they are not neutral or arbitrary. Terminals are integral to the wider design and function of the network. Often they are represented as abstract “nodes” within it.

But terminals have changed a great deal more than the simple node – “°” – in a topology diagram might suggest. The earliest were not necessarily computers at all – certainly not as we understand them today. In the 1960s there were a variety of terminals in use: massive, room-sized computers; telex machines (essentially network-enabled typewriters); printers; and, less often, basic monitors. These computers were used by specialists, often mathematicians or engineers, while the telex machines could be used to send or receive information in a wider range of locations, such as factories. The telex machine would eventually be replaced by the fax machine, before falling out of use entirely. In the 1970s, the first Personal Computers (PCs) were developed: small (at least for their time) devices which could sit on a desk in their owners’ studies, used by an individual or a small group of individuals, such as a team or family. This is a very familiar form of terminal, and one designed to be accessible to a much larger range of users. As desktop PCs were made smaller, laptop PCs were developed. These are PCs that we can take with us, so that our work and media consumption can follow us around, onto trains and planes, between cities and countries, from the study to the living room to the bedroom.

Figure 3: IBM personal computer.

Figure 4: A Telic Alcatel terminal for Minitel.

Network architecture and design from this period broadly aggregate computers into two categories, the personal computer and the server, which both appear as nodes within network topology diagrams. Without the PC, and without the advances in computer manufacturing which allowed it to become a staple of people’s homes, the internet could not have become what it is today. In a very literal sense, the shape of the internet from the 1990s onwards was determined by the PC, not the other way around.

But some time after the introduction of the personal computer there was a sleight of hand: the node came to represent not just a terminal, but also its user via a kind of synecdoche. The very strong cultural assumption behind this is that a) devices will belong to individuals, who will be the sole users of their devices and b) that terminals will allow only one user to connect to the network at any given time. This may have been ambiguous for a period, when ownership of the device was often shared and access was often via something like an internet cafe (although this is also a tension which is resolved by the personal user profile), but today the 1:1 relation between user and device has become more or less universal, and not just in the Global North. Both access to the network and computing are rendered personal (and often private) within this model.

The mobile phone has only accelerated this arrangement, allowing each individual to carry their terminal on their person, locked behind encryption which ties the individual to the device via biometric data. Our phones don’t contain user profiles because they do not need them to individuate access. Even more than the PC, the mobile phone came to be the technology du jour of neoliberal capitalism – an individuating machine. Only more recently has the logic established by the Personal Computer been shaken, and then only through the arrival of cloud computing which, by moving services from the PC to the server, takes ownership of data and computing power away from the user without in any way undermining their individuation; this is computing as commodity and computing power as means of production, privately owned by data magnates.

It might seem strange to suggest that the mobile phone or PC are not the only kinds of terminals we might demand – these have become our most everyday, intimate possessions – but the case of Cybersyn reveals very quickly that they are neither natural nor inevitable.

Figure 5: Telex machine

Cybersyn – “the socialist internet” – had only one computer, which functioned as a kind of central processing unit and co-ordinated messages in the network. This was less a question of ideology than of pragmatic possibility; the American ITT (International Telephone and Telegraph Corporation) had put an embargo on Chile. When Allende took power, the country had just one computer at its disposal, and this situation didn’t change. But necessity is the mother of invention, and in Chile information was instead communicated via telex machines located in factories and in a control room. These were the terminals in this network.

Stafford Beer’s control room is the most famous invention of Chile’s socialist design project, and a central innovation of Cybersyn. It is a circle of chairs, all with access to controls and surrounded by slides and visualisations; if it looks like something from Star Trek, that is because this is exactly what it aspired to. As a kind of collective terminal, the control room necessitates group engagement with the network, always mediated by discussion, collaboration and some amount of conflict. Contemporary commentators are sometimes bewildered by the presence of multiple control points, suggesting this would inevitably lead to chaos, but the room presumes horizontality; without architectural hierarchy, the users are left to coordinate among themselves. As a kind of terminal it offers a stark contrast to the singular entry point and silent habits of mobile phones and PCs (even if these can be, and are, used in ways that depart from their individuating designs).

Only one such room was built, but Beer’s original plans intended to replicate the rooms around Chile, creating a mass network which could only be engaged with communally: an internet of working groups.

Collective machines

As radically different as the control room is from a PC, as a system for democracy it falls somewhat short. Beer’s factory telex machines have been far less examined than his control room – partly because they were not novel innovations and partly because they are much less glamorous – but they are important because they point to this inadequacy. Workers inputting data from the factory are just machinic parts of the system, not democratic agents, at least within their workplace. There is asymmetry and hierarchy within Beer’s network, even if the small group (by design, of whisky-drinking and cigar-smoking men) at the terminal adopts an internal horizontality. And how do these small groups relate to the wider public? Beer appears to have been at best ambivalent about this question.

If Cybersyn didn’t solve the problem of collective computing, it does offer it as a speculative route forward. There have been experiments with collective computing within our internet ecosystem. Dynamicland is a good example: an experimental operating system which, like Cybersyn, conceives of the room as the computer. Dynamicland comes with a vision of expanded collective provision, in which interactive collective computers might be built in every town on the model of museums and libraries. Dynamicland’s interface is significantly more complex (and much more interactive) than that built by Cybersyn, but in other regards it might be more limited: this is a “town hall” in the sense that people can have conversations, not in the sense that they might enact power. Tellingly, Carnegie libraries (i.e. philanthropic investment) are the model Dynamicland aspires towards, and any mechanisms for democracy are conspicuously absent from its collective medium.

As far as its actual development is concerned, it appears primarily to have been used as a pedagogical tool in which real objects, such as pieces of paper, can be intuitively integrated with virtual systems by a group of learners or collaborators. Still, it does challenge the basic individuating logic established by the PC; the same cannot be said for the much more plentiful experiments in virtual and immersive media. These experiments are not aiming to democratise computing, or establish new systems of economic and social relation, but to generate discrete group ‘experiences’.

Of course we must build out from what is already here, just as Beer made use of the telex machines left behind by a previous administration. Whether or not we like it, we are inheriting a mass network of smartphones and laptops; the question is whether we can reshape the individuating machine into a collectivising machine, or whether we must start from scratch.

An ecosocialist network might not be able to support every user engaging with the network via an environmentally costly smartphone, or to accept the violence and expropriation built into its supply chains. It definitely would not want smartphones designed to break down after a few years. Something like the Fairphone might be seen as a limited response to this (limited in that it imagines the solution as a matter of consumer choice rather than regulation or public management). But we might also ask whether we really want to have such intimate relationships with these devices at all. In Ursula Le Guin’s The Dispossessed, her imperfect anarchist utopia is organised by computers – production, communication, even baby names – but everyday life is almost completely free of computation; computers do the machinic to free up humans to be human.

We should seriously question if mobile phones have fostered the kind of relation to the internet and each other that we would like, and if there might be better terminals for our purposes. We should also question narratives that suggest the appropriate, desirable or inevitable direction for terminals is ubiquity and invisibility (PCs > laptops > smartphones > wearables > augmented reality). Our use of communication and media technologies should be purposeful, and this means they need to be conspicuous. This is the exact opposite of primitivism; it is the necessary response to acknowledging the determining role of technology today. By all means we can throw away or smash up existing terminals (or printers, as in the most famous expression of this desire: Office Space), but unless we also take on the task of building new ones we risk making our laptops and mobile phones scapegoats for our real adversaries – bosses, landlords, intellectual property barons, big tech, capital.

Collective topologies

Figure 6: Three models of internet topology – centralised, decentralised and distributed. Diagram produced by the RAND corporation in the 1960s.

At the highest level, topologies (the network shape) are often thought of as fitting into three broad groups: 1) a centralised topology, organised around a central node, 2) a decentralised topology which forks and splits at multiple points but maintains a hierarchy of central and peripheral nodes, and 3) a distributed topology, in which each node is linked to most of the surrounding nodes to form a kind of continuous lattice. Any actual network (including the actually existing internet) will not exactly be any one of these things, but to varying degrees a combination of them all. There are a number of technical and logistical reasons that any one of these arrangements might be pursued at the expense of the others – but first a more fundamental point: topology confronts very basic structures of relation. The question here is not just a technocratic one – of efficacy – but a social and economic one, of arrangement and relation: what kind of network do we want to live in?

Early visions of network architecture idealised distribution over both centralisation and decentralisation for a number of reasons. First was security: the internet started life as a military information-sharing tool, and distribution offered resilience – no destruction of any single node or group of nodes could break up the network, even (or especially) in the case of a nuclear strike.
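The resilience argument can be made concrete with a toy sketch (my own illustration, not anything from RAND or ARPANET): model each of the three topologies as a small hypothetical graph, destroy its best-connected node, and check whether the survivors can still reach one another.

```python
# Illustrative sketch: the three topologies as adjacency lists, and what
# happens to each when its best-connected node is destroyed. The graphs
# are hypothetical toys chosen for clarity, not real network maps.
from collections import defaultdict

def graph(edges):
    """Build an undirected adjacency structure from a list of edges."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def connected(adj, removed):
    """Search over the surviving nodes; True if one component remains."""
    nodes = [n for n in adj if n != removed]
    seen, queue = {nodes[0]}, [nodes[0]]
    while queue:
        for nb in adj[queue.pop()]:
            if nb != removed and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(nodes)

# 1) Centralised: every node hangs off a single hub.
centralised = graph([("hub", n) for n in "ABCDE"])
# 2) Decentralised: two regional hubs joined by a trunk line.
decentralised = graph([("h1", "h2"), ("h1", "A"), ("h1", "B"),
                       ("h2", "C"), ("h2", "D")])
# 3) Distributed: a lattice in which every node has several neighbours.
distributed = graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"),
                     ("A", "C"), ("B", "D")])

for name, adj in [("centralised", centralised),
                  ("decentralised", decentralised),
                  ("distributed", distributed)]:
    target = max(adj, key=lambda n: len(adj[n]))  # best-connected node
    print(name, "survives losing", target, ":", connected(adj, target))
```

Only the distributed lattice survives the loss of its most important node; the centralised star shatters completely, and the decentralised tree splits into fragments.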

But Cold War paranoia made for some interesting bedfellows; for the scientists and engineers working on ARPANET, visions of radical interconnection, of a “galactic network” (a term in use as early as 1962), prevailed from the start. To achieve this interconnection, they pursued open design, a philosophy which underpinned the development of the internet for a period of at least thirty years. Any kind of network should be accommodated, and individual networks should be generous with each other’s traffic; this is the principle of full transit, according to which information is shared between participating networks as a gift (for free), not a commodity.

Figure 7: Jon Postel’s illustration of the ARPA model of network interconnection (1980). Networks are connected by gateways (G) in a ring structure and interconnect hosts (H).

But distribution no longer holds the status it once did. Since the 1990s there has been a massive centralisation of the internet’s topology. While privatisation is not the only factor in this (backbones, state or private, entail some centralisation), it accelerated the process and has functionally undermined these basic design principles, eroding both the openness and the neutrality which underpinned the internet’s development; full transit has been displaced by a system of paid-for access and backroom deals (and all this with heavy state subsidies and payouts!). Would a socialist internet be decentralised? Would it be distributed? These questions have the issue the wrong way around: tactically, it’s clear that a genuinely distributed capitalist internet would be more hospitable to the growth of a socialist internet than a consolidated one. A variety of actors connected by principles of openness were easy prey for the growth of a capitalist internet, but the same principles form the conditions of possibility of a socialist alternative; a closed ecosystem would exclude this possibility entirely.

If we have seen the privatisation of both network infrastructure and computing in recent decades, our basic goal should be the inverse: socialisation. Distribution as a network-architectural goal is too often left to the libertarians; their conceptions are always limited by deference to market forces, which drive monopolisation and centralisation, and within which equal distribution is impossible.

Real distribution

What tools have the tech-libertarians developed to help with this crisis? Ethereum? As a means of buying drugs online, cryptocurrencies were socially useful; as tools of distributed governance they are profoundly flawed. As a mechanism of “distribution”, Ethereum promises to distribute/share record-keeping, and in doing so to displace the need for a central authority or state. In place of the state, the contract becomes eternal and automatic (on the blockchain and integrated into wider computational systems). But without a theory of power (or at least one they are willing to share openly), these systems have no answer to financial (and other forms of) coercion – if you can be disciplined by poverty, homelessness and starvation, you are not in a position to agree a fair contract.

A world in which contracts become governance is a world in which property is the only basis of law, and in which power is increasingly the domain of very few individuals. Ethereum represents a false and shallow distribution, in which each user/computer unit is enlisted in recording and reproducing their own position in an unjust hierarchy. Not only is this a technology which does nothing to promote even distribution or democracy, it is one which fails entirely once any one group or actor controls more than 50% of the network’s block-producing power. Truth, history, record – these things would become immediately vulnerable under such conditions.
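The 50% threshold can be sketched in a few lines (a toy of my own, not a real blockchain client): in a longest-chain system, whoever produces blocks fastest writes the record, so an actor with a majority of the block-producing power can always outrun everyone else and replace the shared history.

```python
# Toy sketch of majority capture in a longest-chain system. Each trial
# is a block: the attacker wins it with probability equal to their share
# of total block-producing power. Over many blocks, a >50% share means
# the attacker's rival chain grows faster than the honest one.
import random

def race(attacker_share, blocks=1000, seed=0):
    """Count blocks won by the attacker vs the rest of the network."""
    rng = random.Random(seed)  # seeded for reproducibility
    attacker = sum(rng.random() < attacker_share for _ in range(blocks))
    return attacker, blocks - attacker

# At 30% the attacker's chain stays shorter and their rewrite fails;
# at 60% their chain overtakes, letting them rewrite the record.
for share in (0.3, 0.6):
    won, lost = race(share)
    outcome = "attacker rewrites history" if won > lost else "honest chain holds"
    print(f"{share:.0%} of power -> {won} vs {lost} blocks: {outcome}")
```

Nothing in the protocol distinguishes a "legitimate" majority from a captured one, which is the essay's point: the record is only as distributed as the power behind it.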

Figure 8: Sketch of a possible Cybersyn topology; telex machines relay information to a number of interconnected control rooms

So how else might we respond to this problem? Cybersyn offered a very different model. Beer’s plans were for a network of control rooms to be built – we might imagine these as connected by any number of topologies – but the central drive towards distribution here is a democratic mechanism within each node. Each node in this network (each control room) contains another network, a system of collective consultation and decentralised decision making. The telecommunications companies decided to centralise the American internet; the communal management of Cybersyn’s control rooms might instead have decided the fate of this network.

Crucially, Beer’s plan was not for one room but many: this is a distributed democracy. The devil is in the detail: what is the precise architecture of the control room? Who is in each room? How many rooms? How should they be arranged? How will they share information and decision making? Beer doesn’t answer all these questions, and when he does his answers are often insufficient. He gives no good reason that factories or mines shouldn’t also be democratic actors, for instance.

To run another imaginative route: how about a United States composed of a federation of intermeshed community Freenets? These community and municipally run networks were an early alternative to the privately run Internet Service Provider (ISP), developing internet access with little or no cost to end-users. How might these (alongside a public backbone like the NSFNET, or a distributed alternative) have formed a different arrangement from the actual private provision?

At a certain point network structures might begin to sound like governance structures (here an interlinked mesh of municipal collectives, sharing governance), and this is not incidental: network relations are sedimented social relations. But importantly, they also reproduce those relations. Today, network structures are the circuit of digital production and reproduction.

Protocols for a socialist network

It is a mistake to think that the basic protocols underneath the internet are purely technical in nature; these are deeply political devices which negotiate relations within and between physical infrastructures (see Paul Dourish’s chapter on this in Signal Traffic). The development of the Internet Protocol Suite (TCP/IP) within DARPA was guided, explicitly, by a design philosophy – one which placed survivability and the interconnection of heterogeneous networks above all other goals.

Just as liberalism is flawed in its calls for democracy to extend only to parliamentary representation, the core internet protocols are flawed in their assumption that distribution need only apply to network topology. We have seen the consequences of asking private enterprise to construct a distributed system: centralisation, monopolisation. Just as we demand more democracy than offered by parliaments, we should demand more distribution than offered by private telecommunications. Distribution can only be achieved in this sense via socialisation.

Effective network design demands collective, not private governance. If the private market cannot distribute the network itself, as we have seen it cannot, we will need to find new protocols which can achieve this. And there is no reason to limit this aspiration to topology; we could also distribute computing power. If “the cloud” offers the appearance of distribution, a socialist network might achieve the real thing: equitable access to the power of computation, autonomy over algorithms, open and public archives, digital democracy.

We can go beyond the existing conceptions of distribution to build four necessary protocols for a socialist network:

  1. Distribute network topology (a demand already latent in TCP/IP)
  2. Distribute computing power
  3. Socialise communication
  4. Socialise the development of network infrastructure itself

The question of how to go about socialising these things remains an open one, but however weak the left might be in this domain today, we are not without options.

Selected bibliography

  • Benjamin Peters, How Not to Network a Nation (London: MIT Press, 2016)
  • Eden Medina, Cybernetic Revolutionaries (London: MIT Press, 2011)
  • How to Design a Revolution: The Chilean Road to Design (Zurich: Lars Müller Publishers, 2024)

Liam Mullally is a CHASE-funded PhD candidate in Cultural Studies at Goldsmiths, University of London. His research interests include digital culture, the history of computing, information theory, glitch studies, and the politics and philosophy of noise. Previously he has worked as a copywriter and tech journalist. He is working on several projects with Autonomy, from skills commissions to policy strategy.