In the alarm-ridden discourse surrounding artificial intelligence, one warning tends to capture the most attention, perhaps owing to its apocalyptic flavor: AI will be our downfall; it’s going to wipe us out. Now, there isn’t exactly a unanimous perspective on this. At one end of the spectrum you have Eliezer Yudkowsky, seriously suggesting we should be prepared to bomb data centers to halt the rise of artificial intelligence. At the other, there’s Andrew Ng, who shrugs off worries about AGI (Artificial General Intelligence) and suggests that fearing a rise of killer robots is like worrying about overpopulation on Mars. Between these polar views lies a vast continuum of opinions.
The fact is, predicting the trajectory of a technology tends to end in disappointment, regardless of one’s expertise. It makes little sense to place unflinching faith in this or that authority. AI is a complex black swan event. Individuals who specialize in any domain of knowledge, be it philosophical, engineering, sociological, or mathematical, will inevitably harbor blind spots in the others. This underscores the dire need for an open, interdisciplinary debate in which everyone brings their intellectual resources and analyses to the table.
Now, speaking personally, I’d like to cast myself in the role of the wise, balanced observer and stake my claim somewhere in the middle of this spectrum. But, if I’m brutally honest, I find my sympathies leaning more towards Andrew Ng. I firmly believe that the prospect of a humanity-annihilating AGI is highly speculative, blind to its own roots, politically damaging, and a distraction from the pressing risks we face in other domains. However, before I dive into my rationale, allow me to lob a provocative thought back into the court of the doomsayers: an intelligence that is alien, malevolent, vastly powerful, unchecked, and damaging to both human life and that of other species already exists. Actually, there might be more than one.
I hope your curiosity has been piqued, but before I get into the meat of my argument, let’s take a step back. The apocalyptic thesis, in short, holds that these intelligences will grow ever more sophisticated and ever more powerful, and that in pursuit of their ends, which are ultimately our ends, they might develop subroutines that make them independent, uncontrollable, and at odds with our own survival. The subroutine issue is a genuine threat posed by these systems, an idea pushed to the extreme in the ‘paperclip maximizer’ thought experiment conceived by philosopher Nick Bostrom. The experiment envisions a superintelligent machine whose sole objective is to manufacture paperclips. If such a machine were to become sufficiently advanced, it could begin to consume every available resource to maximize paperclip production, blissfully ignoring the catastrophic consequences for humanity and the environment. It might, for instance, decide to convert every molecule on Earth into paperclips, humans included, or it could expand into space in search of even more resources to convert into paperclips.
It’s an evocative example, certainly more persuasive than anthropomorphisms suggesting that AI will inevitably clash with us due to evolutionary principles, as if to say that, since we’ve created them in our image and likeness and we are a contentious species, they are bound to be contentious too. Despite this, Bostrom’s hypothesis and others that stem from similar premises present several problems. These essentially boil down to one overarching issue: speculation on unknown variables.
The powers we’ve amassed over millennia have made us an arrogant species, yet developing potent and dangerous technologies does not equate to superior intelligence, far from it. Try making honey, flying, telling apart every type of flower, or sucking blood: in many tasks we remain inferior to bees and mosquitoes. The notion that power correlates with intelligence is an axiom teetering on unstable foundations, and it becomes downright brittle when such power turns against its creator. Despite the many challenges to our ‘special’ role in the universe that we have faced over the centuries, our arrogance remains unscathed. It gives us the audacity to predict a future that is, in reality, beyond our powers of prediction. Materialistic society excels in this form of hubris, having formally renounced a deity that, as Jung maintained, remains a psychological necessity. The god that science chases out the door therefore sneaks back in through the window (or the roof; it’s a god, after all) in the form of faith in science, even though science is founded on the very antithesis of faith, namely skepticism.
From this paradox are born hyperbolic philosophies such as 19th- and 20th-century cosmism and 21st-century longtermism. Although politically at opposite poles, both orbit around the deification of human capabilities through technology. A technology turned religion can worship only numbers, and hence these materialist-utopian dystopias often dream of overcrowded universes teeming with humans living in abundance, according to the questionable axiom that more is better and that happiness resides in the satisfaction of material needs.
Today’s Silicon Valley philosophers take the infinite progress at the heart of capitalism for granted, placing a future humanity, which they foresee as ever more numerous and satiated, at the top of their ethical considerations. Meanwhile, their Soviet predecessors surpassed them in dreaming not only of colonizing the universe through our excessive proliferation, but also of resurrecting all the dead. And, yes, they were serious.
But, really, do these philosophers hit the nail on the head? Can we truly conceive the possibility of evil AGIs? If we examine some of the conditions that must hold for an apocalyptic AGI to become a real risk, we’ll realize there are numerous walls to break through. We must, for example, be certain that the desire to create ultra-powerful, superhuman AGI is genuine and not a publicity stunt (?%); that it’s technically feasible (?%); that, even if it is possible, we will succeed in doing it (?%); that this will happen before other events (e.g., climate change) stop us first (?%); that we can’t halt the process midway (?%); that it will inevitably become uncontrollable once created (?%); that it’s malevolent (?%); that, even if malevolent, it sees value in exterminating us (?%); and that it can gain sufficient control over physical objects to exterminate us (?%).
Now compare this to the “ifs” preceding extermination due to climate change: if climate change is real (100%), if it causes extensive environmental damage (100%), if it causes famines and social unrest (100%), and if these damages are sufficient for extinction (?%). Ironically, many proponents of the Terminator scenario seize on this single remaining unknown to grant a consoling priority to artificial dangers. In fact, we can only start worrying about AGI after we’ve addressed all the other existential risks, real or imagined, that hinge on fewer unknown variables.
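To make the asymmetry concrete, here is a minimal sketch in Python. The probability values are purely illustrative assumptions of mine (the article deliberately leaves each one as “?%”), and the links are treated as independent, which is itself a simplification; the point is only that a catastrophe requiring a long chain of uncertain conditions has a compound probability equal to the product of its links, so every extra question mark drags it down.

```python
# Toy illustration: the compound probability of a chained scenario is the
# product of the probabilities of its links (assuming independence).
# All numeric values below are assumed placeholders, not estimates from the article.

from math import prod

# Nine uncertain conditions in the AGI-doom chain (each marked "?%" above),
# here generously assumed to be 50% each.
agi_chain = [0.5] * 9

# The climate chain: three conditions the article treats as certain (100%)
# plus one genuinely unknown link, again assumed to be 50%.
climate_chain = [1.0, 1.0, 1.0, 0.5]

print(f"AGI chain:     {prod(agi_chain):.4f}")      # 0.5 ** 9 ≈ 0.002
print(f"Climate chain: {prod(climate_chain):.4f}")  # 0.5 ** 1 = 0.5
```

None of these numbers is actually knowable, which is precisely the point: the first chain multiplies nine question marks, the second only one.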
But fear is seldom rational. Sometimes it’s opportunistic, as exemplified by Elon Musk, who, despite being a signatory of the letter calling for a slowdown in AI development, shortly afterwards founded a new startup to develop AI. Or Sam Altman, the CEO of OpenAI, who frets about AIs while his company keeps churning them out. At other times, fear is consoling because it distracts us from real dangers, diverting our attention to apocalyptic fantasies that we hardly believe. Or it’s simply used for advertising, because a product described as the most powerful ever developed sells better, and articles that announce the impending end of the world get more clicks. Meanwhile, the more immediate dangers are overshadowed, such as:
Job loss: Especially if kept in the hands of the few, AI could cause job losses across many sectors, increasing unemployment and social tensions. Give a tractor to ten farmers, and they’ll be thrilled; use it to fire five of them, and they’ll be furious.
Discrimination: AI can lead to discrimination and bias, as artificial intelligence systems can be trained on data reflecting societal prejudices.
Environmental impact: The energy needed to create and power AI systems can have a negative environmental impact, and companies with proprietary software aren’t publishing precise data on this.
Automation of war: AI can be used to develop smart weapons and surveillance systems, increasing the risk of conflicts and other moral abominations.
Errors and malfunctions: The hasty adoption of insufficiently tested AI systems could lead to errors, malfunctions, and misuse that cause harm and put people’s safety at risk.
We’re a species perpetually ensnared by pareidolia, our innate tendency to project a semblance of humanity onto the non-human. In the face of technologies crafted to simulate language, a faculty we consider one of our most distinguishing traits, we are inevitably compelled to indulge in unwarranted projections. This very reflex is one more testament to our profound failure to perceive the true essence of diversity.
On the pages of Ways of Being, James Bridle artfully juxtaposes AI with the myriad other intelligences that inhabit our planet, intelligences that have only recently begun to receive the recognition they deserve: animals, plants, and natural systems, gradually unveiling their intricate complexity, agency, and reservoirs of knowledge. Astonishingly, fungi, plants, and even entire ecosystems proudly exhibit forms of intelligence that, in many tasks, surpass our own, perhaps even that of our advanced computational marvels. How, then, can we arrogantly proclaim our superiority to a slime mould capable of swiftly recreating the map of one of the world’s most robust and efficient transport networks, the Tokyo rail system? By what right do we assume ourselves to be more intelligent?
The undeniable truth is that we are also already encircled by alien intelligences, some as formidable and potent as the fabled AGI that haunts our collective consciousness, as I alluded to at the outset. I speak, of course, of the multinational corporations, these commanding, organism-like entities whose decisions are steered by a nexus of objectives that, while potentially aligning with the interests of certain shareholders, are not inherently subject to their control. Nor do they necessarily coincide with the greater welfare of society as a whole.
The individual human beings comprising the cellular fabric of these entities exist symbiotically, much like the cells within our own bodies. Not even a CEO wields absolute dominion over these vast constructs, which, in addition to their collective nature, often operate under the influence of interests and conditions that transcend the realm of individual volition. Multinational corporations are vast organisms of unparalleled scope, ceaselessly devouring planetary resources in their relentless pursuit of boundless growth, all the while turning a blind eye to the ecological devastation wrought by their insatiable hunger for power. In the span of a few short decades, they have imperiled the existence of every life form on this planet. And although certain cells may strive to rebel against this insidious hegemony, these malignant intelligences refuse to abandon their voracious ferocity. Are they, then, more treacherous than the AGIs we so dread?
A question worthy of contemplation, indeed.
***This article is a translation of a text that previously appeared on Siamomine***