Interview with Phil Torres: “We know almost nothing about existential risks”
An interview with x-risk researcher Phil Torres
Dear reader,
Below you can find my interview with Phil Torres, which is part of the Anti-Apocalyptus newsletter. Each week I send you five links about some of the most important challenges of our time: climate change, weapons of mass destruction, emerging technologies, mass causes of death and great power wars. If you haven’t done so yet, feel free to subscribe at the button below or share this email with anyone who would be interested.
Today I’m expanding the Anti-Apocalyptus Newsletter. I will occasionally use it to feature interviews with scholars and experts who work on highly important topics, such as existential risk, which is exactly what this first interview is about.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf8e0947-8e6b-4a29-96cf-a0b4206e3543_634x594.png)
Phil Torres is an author, scholar and freelance writer who focuses on existential risk. He has previously worked at the Centre for the Study of Existential Risk and the Institute for Ethics and Emerging Technologies, and is currently a PhD student at Leibniz University Hannover. He also authored Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks and The End: What Science and Religion Tell Us about the Apocalypse. His third book, Human Extinction: A Short History, is about to be published.
In this interview we talk about existential risk, the origins of the idea, what makes it an interesting and worthy field to study, and why Torres is pessimistic about humanity’s capacity to combat these threats.
Could you shortly describe some of your previous work, and what you're currently doing?
‘I’ve spent the past decade or so working on a range of existential risk issues. Since the field is so young—it was founded in the early 2000s, although not until the past decade or so has the topic of existential risks attracted the attention of more than a small handful of misfit scholars—there are myriad fundamental questions that quite literally no one has written about. I’ve tried to identify a number of these questions and do my best to say something of interest, thus filling in various lacunae in the literature. For example, I have a few papers on what I call agential risks, or the risks posed by any agent who would press a “doomsday button” if one were put within finger’s reach. It turns out that there are at least four categories of agents who would indeed happily push such a button! More recently, I just finished a book titled Human Extinction: A Short History, which offers a sprawling account of the origin and evolution of our modern scientific idea of human extinction from the ancient Greek philosophers to contemporary “existential risk studies.” To my utter delight, Paul Ehrlich wrote a foreword for it. This project has really underlined for me just how incredibly new the topic of human extinction and existential risk is within the western tradition. In fact, one of my main theses is that the idea of “human extinction” was quite literally unthinkable to nearly everyone for the vast majority of western history. It was only during the second half of the nineteenth century that it became intelligible to some people, and not for another century that anyone began to take the possibility seriously. Right now, though, my primary focus is modelling the potential effects of stratospheric geoengineering, which is one of the proposed techno-fixes for anthropogenic climate change.’
How did you come to study existential risk? Why do you think the end of humanity is interesting?
‘I started off around 2007 as a fierce critic of transhumanism, and the field of existential risk studies emerged from the transhumanist movement. The reason why should be obvious upon reflection: transhumanists want to use powerful emerging technologies—synthetic biology, nanotechnology, artificial intelligence, and so on—to transcend the biological limitations bequeathed to us by contingent evolution. But all of these technologies are dual-use and, as Bill Joy famously discussed in his 2000 Wired article “Why the Future Doesn’t Need Us,” carry with them unprecedented risks to human survival and prosperity. Hence, one response to this situation is to become a neo-Luddite and advocate for harsh regulations to prevent these technologies from being created. Another is to found a new field that aims to study the associated risks and neutralize them—this was the transhumanist response that led to the novel concept of an existential risk. While I was initially more sympathetic with the neo-Luddite view, I became convinced that the “autonomous technology” thesis is very likely true, according to which there are no emergency brakes on the juggernaut of technological innovation. The question isn’t whether innovation will continue but how this will occur, in what order certain technologies will arrive, and so on. In other words, the best we can do is attempt to alter the trajectory of technologization, since imposing moratoriums on certain fields of potentially dangerous science is completely unrealistic.
As for the topic being interesting, what about human extinction isn’t fascinating? Albert Camus famously claimed that suicide is the ultimate philosophical question, given the “absurdity” of existence. But one could respond that this is far too parochial a view. The ultimate question is whether humanity or civilization should commit suicide. After all, everything we value in the world presupposes the continued existence or flourishing of one or both, and in this sense, the question of collective suicide is therefore prior to all other questions. There could hardly be a “deeper” or “more grand” issue than the survival or annihilation of our species, or so I would argue.’
Why is it important for humanity to look at and combat existential risk? Particularly compared to other cause areas?
‘One reason was already gestured at: the topic is so profoundly novel that we know almost nothing about it. Consider the shocking fact that just three decades or so ago, the scientific community as a whole roundly rejected the notion that global catastrophes involving asteroids, comets, or supervolcanoes could cause mass extinctions. Such scenarios were thought to be impossible, or at least completely implausible. And many of the anthropogenic threats to human survival that we currently take seriously have only been identified in the decades since the 1950s. If we extrapolate this sudden explosion of new threats—a trend that involves both ontological (the total number of risks) and epistemic (the risks we know about) risk multiplication—then we should anticipate the threat environment of tomorrow with a great deal of trepidation. There is simply no reason to expect our existential predicament to get safer rather than riskier, and indeed I’ve written before about the possibility of an “existential risk singularity,” or a phase of human history during which the introduction of new scenarios mirrors the incomprehensible pace of technological “progress.” But perhaps there are indeed solutions to the situation that are currently hidden from view. The only way to find out is of course to focus our attention on understanding the nature and causes of these existential hazards.’
What are the most urgent existential risks humanity should focus on? And how should they combat them?
‘My view at the moment is that artificial superintelligence, if it is possible, could very well pose an all-or-nothing threat to humanity—that is, to paraphrase the late Stephen Hawking, if it’s not the worst thing to happen to us, it will likely be the best. There’s something very religious about this idea: solving the control problem is quite similar to Armageddon, but with an atheistic twist. If we get things right and “win” the battle—the last one we need ever fight, to parallel a famous line from I. J. Good—then a utopian world awaits. But if we get things wrong and “lose,” what awaits is almost certain annihilation. Yet there are fairly good epistemological grounds for accepting this picture, unlike the eschatological narratives of the world’s many religions. So, I worry greatly about this threat, and do not have any special insights about how to overcome it, although it’s a bad sign, in my view, that the overwhelming majority of people currently focused on the “value-alignment problem” are white men with a particular, narrow perspective on what matters.
Otherwise, I think climate change and ecological ruination are good contenders for being the most urgent threats. This is not because either is likely to cause human extinction—in the absence of what appears to be the unlikely scenario of a runaway greenhouse effect, we will very likely survive, albeit in degraded conditions much worse than those of the Paleolithic. Rather, these are “frame risks” that frame the general existential conditions in which all human affairs will unfold in the future, and as such have the potential to significantly modulate the probability of other risks, including nuclear conflict, bioterrorism, nanoterrorism, superintelligence, and so on. My current work focuses on stratospheric geoengineering, and indeed there could very well be some “magical” techno-fix in the future that saves civilization from collapsing, but we should not count on it. The point is that climate change, global biodiversity loss, and so on, are both threat multipliers and threat intensifiers that are, put crudely, going to make everything in the foreseeable future so much worse. In fact, studies suggest a correlation between steeper time-discounting rates and environmental instability, which of course makes sense. If this is the case, then climate change, etc. could actually discourage people from thinking more about the longer-term future of humanity.
We have perhaps a decade or so before the window for meaningful action on these threats closes. Hence, now is the time to do everything possible to mitigate them.’
Is it realistic to expect that world governments will focus on long-termist issues like existential risk? Particularly since the response of many governments to COVID-19 has proven to be less than great.
‘I would answer “no.” From an economic perspective, mitigating existential risk is a global-transgenerational public good, and as such there is zero market pressure to address this issue. The business world is based on myopic thinking — the quarterly report — and capitalist realism is so widespread that, as the saying goes, it is far easier to imagine civilization collapsing than an end to capitalism. From a governmental perspective, elections every two or four years mean that politicians have a strong incentive not to think about the consequences that policies will have decades from now. From an evolutionary perspective, our brains evolved both in and for an environment in which we never had to think about problems beyond our small tribes of around 150 people (Dunbar’s number) and beyond the near future, so short-termism is very likely built into our cognitive architecture, a phenomenon that Christopher Williams calls “brain lag.” From an ethical perspective, as mentioned, there’s been almost no serious work on the implications of our extinction, the importance of avoiding this outcome, and so on, meaning that there are no established norms — integrated into our more general cultural orientation toward the future — to guide our individual and collective behaviors. Indeed, “aggregative harm” cases like climate change fall almost completely outside of traditional ethical theorizing, which has focused instead on the individual. From a psychological perspective, as Nassim Taleb writes, “prevention is not easily perceived, measured, or rewarded; it is generally a silent and thankless activity,” and of course a successful anti-extinction policy regime would result in the absence of anything significant happening, meaning that people would likely lose interest over time. I could go on, but the point is that not only is there a plethora of forces working against efforts to ensure our future, but many of them are deeply rooted and extremely entrenched. Consequently, I very much find myself in the pessimistic camp, and think that the “doomsday hypothesis,” which resolves the Fermi paradox by claiming that nearly every civilization that reaches our stage of sophistication self-destructs, is highly plausible.’
I hope you enjoyed this interview. Feel free to send me comments or remarks by responding to this email. If you haven’t done so yet and liked this newsletter, please subscribe at the link below, click the like button or forward this email to someone who would be interested.