Technology, Geopolitics and the Hinge of History
Do we live in an exceptionally influential historical time?
This article sets out the overarching theme for this website: the extraordinary confluence of developments in technology, geopolitics and culture that seems likely to have an inordinate influence on the long-term prospects for humanity.
Over a decade ago the late British philosopher Derek Parfit included the following statement in the conclusion of his last book, On What Matters, in which he attempted to reconcile the three main theories of ethics in Western philosophy:
We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy.
This set off a low-key debate, mainly involving other philosophers, about what it could mean to say we are in “the” hinge of history, and if we are, what it could imply for the present generation.
The basic idea of what came to be termed the Hinge of History Hypothesis is that we live in an exceptionally influential time in human history that will have a disproportionate, possibly decisive, impact on the long-term prospects of humanity.
Parfit was the most influential advocate in recent times of the consequentialist theory of ethics, which holds that the morally right course of action is the one that leads to the best overall consequences, for everyone, for all time. If you are a consequentialist, and you also think we live in the hinge of history, then that puts a special onus on influential people to try to “get things right” for the innumerable generations of people to follow.
One of the philosophers who followed up the hinge debate, William MacAskill of Oxford University, also a consequentialist, wrote a best-selling book setting out his conclusions, coining the term longtermism to describe his viewpoint. The title of his book is What We Owe the Future: A Million-Year View.
The million-year view? Now, that is some serious consequentialism! MacAskill’s book took the debate about the hinge hypothesis and longtermism out of academia and into mainstream media, gaining some influential adherents including Elon Musk, who described the book as “a close match for my philosophy.” Musk sees his goal of making humanity a multiplanetary species as necessary to ensure its long-term survival.
Unsurprisingly, this perspective has come in for some trenchant criticism, not least from the philosopher and historian Émile P. Torres, who described longtermism as “arguably the most influential ideology that few members of the general public have ever heard about.”
Torres thinks longtermism is toxic, potentially providing justifications based on utilitarian ethics for atrocious acts in the present to ensure a glorious future for numberless future generations. He instances Nick Bostrom, another Oxford philosopher, who made the case for a global surveillance system to safeguard against a superhuman AI going rogue.
More recently, the longtermist movement has lost some of its lustre, in part because of the extremity of some proposals from its advocates, but also because of its embrace by some dubious characters, not least the cryptocurrency fraudster Sam Bankman-Fried, who appointed William MacAskill as his ethics adviser.
So, what do I make of all this? The core ethical premise of longtermism—that future people matter intrinsically as much as present people—seems sound. The problem with it is the extreme difficulty of predicting the consequences of present actions and policies into the distant future. Would anyone living in the Roman Empire have the faintest conception of the kind of problems, challenges and opportunities we have today?
To try to make policies today geared to the needs of people a million years hence seems absurd—except in one respect. If the present generation does something that results in the annihilation of our species, the implications for future generations are unambiguous—there won’t be any.
Parfit’s main argument for the hinge theory was that we live in an age of transformative technologies that, on the plus side, may provide solutions to some of the world’s most intractable problems, but also create multiple new forms of existential risk, including the possibility of species extinction, at a time when humanity is still confined to a single planet.
This reasoning is termed the time of perils argument and is tied to the growing interest in recent years in the problem of existential risk, which regularly figures in public discourse, most often in the context of climate change. Since the main effects justifying climate change’s status as an existential issue are likely to play out over the present and subsequent centuries, anyone seriously bothered about it is a longtermist, with respect to this issue at least.
But here is the problem: it turns out there is much more to existential risk than climate change, nuclear war and lethal pandemics, the dangers that get the most current attention. As has been revealed by the work of researchers associated with a number of recently established cross-disciplinary centres devoted to existential risk and human futures, some of the most serious existential risks scarcely figure in public debates.
The researcher Toby Ord has attempted to identify and quantify the main sources of existential risk humanity could face in the coming century, setting out his conclusions in a book, The Precipice: Existential Risk and the Future of Humanity. He estimates the probability of at least one of these risks eventuating at one in six, and that is not the highest recent estimate: the British cosmologist and Astronomer Royal Martin Rees put it at one in two.
As the title of Ord’s book implies, he thinks we are traversing a particularly dangerous period in human history, one from which our species could pass through to the sunlit uplands of a future in which technology provides solutions to some of our most pressing problems—disease, short lifespans, poverty, environmental degradation, and so on—provided that we don’t succumb to one or more of the dangers that he surveys.
Just about all of that posited risk is attributable to anthropogenic (human-created) dangers, rather than purely natural ones. This includes things like synthetic pathogens, out-of-control molecular nanotechnology, and physics experiments like those at CERN’s Large Hadron Collider creating extreme conditions that, some argued before it went live, could destroy the Earth.
This is not an entirely new worry. If you saw the Oppenheimer movie, you would be aware that the scientists who developed the atomic bomb in the 1940s were, for a time, seriously concerned that a fission bomb detonation could trigger a chain of fusion reactions in the atmosphere and oceans that would vaporize the surface of the earth (I wrote an article about this for The Australian last year). They estimated the probability of this at slightly under 3 in a million, and decided on this basis it was OK to proceed!
What about AI? Toby Ord, the researcher mentioned above who has produced order-of-magnitude estimates of the likelihood of various sources of existential risk, rates this as by far the most serious, not least because it greatly compounds other dangers, such as engineered synthetic pathogens.
Consider again Parfit’s statement, made over a decade ago. Notice the timeframe he envisages. He identifies the “next few centuries” as the critical hinge period. This now seems dated, to put it mildly, given the extraordinary, indeed exponential, acceleration of progress in AI over the past 5-10 years.
This entered the public consciousness with the public release of ChatGPT in November 2022, following which a procession of increasingly powerful AIs based on similar large language model (LLM) architectures has appeared.
This has led some experts to speculate that we could be just decades, or even years, away from artificial general intelligence (AGI) that equals or exceeds human capabilities across almost all domains.
A survey published in January 2024 of 2,700 AI experts who had published in top-level venues reported the following:
Another striking pattern is widespread assignment of credence to extremely bad outcomes from AI. As in 2022, a majority of participants considered AI to pose at least a 5% chance of causing human extinction or similarly permanent and severe disempowerment of the human species, and this result was consistent across four different questions, two assigned to each participant. Across these same questions, between 38% and 51% placed at least 10% chance on advanced AI bringing these extinction-level outcomes.
These prognoses from AI experts, if remotely accurate, imply the risk from AI dwarfs that from more commonly discussed risks, including climate change. Would you get on an airplane if a majority of aeronautical engineers thought there was at least a five percent chance of it crashing?
It should be noted that there is significant argument about this, though most of those closest to the AI field think the existential danger is real. Two of the three figures credited with laying the foundations of the current AI revolution based on neural network architectures, who shared the 2018 Turing Award (regarded as the computing world’s equivalent of the Nobel Prize), consider the danger serious (Geoffrey Hinton and Yoshua Bengio), while the third, Yann LeCun, the chief AI scientist at Meta, is less alarmed.
However, even LeCun acknowledges that it is just a matter of time before superhuman AI arrives:
There’s no question that we’ll have machines assisting us that are smarter than us. And the question is: is that scary or is that exciting? I think it’s exciting because those machines will be doing our bidding. They will be under our control.
Will they? It is hard to understand LeCun’s complacency about this. One of the scenarios being discussed is a human-level AI starting to redesign itself, leading to a process of recursive self-improvement and an intelligence explosion. How easy would it be to control an entity vastly smarter than any human?
What if it were to develop goals incompatible with human values? This is what those in the field term the AI alignment problem, something all the main Western participants in AI development acknowledge and are starting to address, with varying degrees of urgency.
A further complication with AI based on neural nets, which learn how to do things without being programmed with explicit rules and procedures, is that it is very difficult for humans, even the AI’s designers, to understand the exact processes leading to its outputs. In effect, such AIs are black boxes, which has led to research efforts to make them interpretable, meaning humans can understand the reasoning behind their predictions and decisions.
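To make the black-box problem a little more concrete, here is a minimal sketch in Python (using the PyTorch library) of one of the simplest interpretability techniques, gradient-based saliency, which asks how sensitive a model’s output is to each input feature. The tiny network and random input are placeholders of my own, not anyone’s production system.

```python
import torch
import torch.nn as nn

# A stand-in "black box": a small feed-forward network with made-up dimensions.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# A single hypothetical input; we ask PyTorch to track gradients back to it.
x = torch.randn(1, 10, requires_grad=True)
output = model(x)
output.sum().backward()          # backpropagate from the output to the input

# Saliency: the magnitude of d(output)/d(input_i) for each input feature.
saliency = x.grad.abs().squeeze()
print("Most influential input feature:", int(saliency.argmax()))
```

Techniques like this tell you which inputs mattered most for a given output, but they fall well short of a full explanation of why the network produced it, which is why interpretability remains an open research problem.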
Notice that the conclusion from the AI survey cited above talks not just about human extinction, but about “human extinction or similarly permanent and severe disempowerment of the human species.” This accords with the most widely accepted definition of an existential risk.
The researcher Joseph Carlsmith contends that highly capable AIs able to strategize will have an incentive to maximize their power in order to best achieve whatever goals they are given, or develop on their own, and that there is a real risk this could lead to the “permanent disempowerment of humanity.”
So, we could well have power-seeking, indeed power-maximizing, artificial agents, with goals that are incompatible with human values. This would be a difficult problem, even under otherwise ideal circumstances.
But circumstances are far from ideal. We live at a time when geopolitical relations are at their most fraught and dangerous since the height of the Cold War, as a coalition of autocratic states strives to overturn the rules-based international order and replace it with one where naked power rules.
The Putin regime in Russia has launched a major land war in Europe aimed at subjugating and absorbing a neighbouring sovereign state with a democratically elected government. The Xi Jinping regime is resorting to force in pursuit of its goal to incorporate a vast expanse of open ocean, the South China Sea, into its territory. The seemingly benign post-Cold War era has come to a definitive end.
This bodes ill for the prospect of ensuring safe artificial intelligence. There is general agreement that a viable solution to the AI alignment problem will need to be global in nature, underpinned by verifiable international agreements. With the collapse of trust between the major powers, such agreements look increasingly remote.
Émile Torres, the critic of longtermism mentioned above, has argued that it is crucial to avoid an “AI arms race”. Sorry, Émile, we are already in one, as described in a report by the National Security Commission on Artificial Intelligence, chaired by Eric Schmidt, a former CEO of Google. As the report states: “AI systems will also be used in the pursuit of power. We fear AI tools will be weapons of first resort in future conflicts … It is no secret that America’s military rivals are integrating AI concepts and platforms to challenge the United States’ decades-long technology advantage.”
Schmidt and his colleagues intended this report to be a wake-up call to the US and its allies. But here is a bitter irony: the CCP regime in China received its wake-up call as to the potential of AI from the British AI startup DeepMind (now part of Google) in 2016, when that company’s AlphaGo agent publicly defeated the world champion in the East Asian game Go, the South Korean Lee Sedol.
This passed almost completely unnoticed in the West, but not in Asia, where it was a sensation, with forests of TV cameras beaming the event live, like a football grand final. Until then, Go was thought to pose an intractable problem for AI, because of the astronomical number of possible configurations of the board (10 to the power of 170). Brute-force computation, of the kind used by earlier chess-playing AIs, could not cut it.
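A rough back-of-the-envelope calculation, using the 10 to the power of 170 figure cited above against a deliberately generous compute budget of my own choosing, shows why brute force was never an option:

```python
# Order-of-magnitude only: the 10^170 figure from the text, against a deliberately
# generous brute-force budget (an exaflop machine running since the Big Bang).
go_positions = 10 ** 170
ops_per_second = 10 ** 18              # one exaflop, i.e. 10^18 operations per second
seconds_since_big_bang = int(4.35e17)  # roughly 13.8 billion years
positions_examined = ops_per_second * seconds_since_big_bang
print(f"Fraction of Go positions examined: {positions_examined / go_positions:.1e}")
# Prints a number of order 10^-135: effectively zero.
```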
AlphaGo was trained using a technique called deep reinforcement learning, whereby an AI agent learns to play at expert level through innumerable games against copies of itself, producing successively better versions. Its successor, AlphaGo Zero, took the approach further, dispensing with human game data altogether and starting with no knowledge of Go beyond the rules of the game.
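For readers who want a feel for how self-play works, here is a toy sketch in Python. It uses the much simpler game of Nim (take one to three stones; whoever takes the last stone wins) and a tabular value update rather than AlphaGo’s deep networks and tree search, so it illustrates only the general idea of an agent improving by playing against copies of itself. The game, hyperparameters and update rule are my own illustrative choices, not AlphaGo’s method.

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(stones_left, action)] -> estimated value of that move
ALPHA, EPSILON, GAMES = 0.1, 0.2, 50_000

def choose(stones):
    """Epsilon-greedy move selection from the shared (self-play) value table."""
    actions = list(range(1, min(3, stones) + 1))
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

for _ in range(GAMES):
    stones, history, player = 10, {0: [], 1: []}, 0   # both "players" are the same agent
    while stones > 0:
        action = choose(stones)
        history[player].append((stones, action))
        stones -= action
        winner = player if stones == 0 else None      # whoever takes the last stone wins
        player = 1 - player
    for p in (0, 1):                                   # update each move toward the game outcome
        outcome = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            Q[state_action] += ALPHA * (outcome - Q[state_action])

# After training, the agent should come to prefer (where possible) leaving its
# opponent a multiple of four stones, which is the known optimal Nim strategy.
print({s: max(range(1, min(3, s) + 1), key=lambda a: Q[(s, a)]) for s in range(1, 11)})
```

The key point carried over to AlphaGo is that the training signal comes entirely from the outcomes of games the agent plays against itself, so its strength is not capped by the quality of any human teacher.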
According to Kai-Fu Lee, the former president of Google China, the game where AlphaGo beat Lee Sedol was watched by 280 million Chinese. It was a sensation: “to people here, AlphaGo’s victories were both a challenge and an inspiration … they turned into China’s Sputnik moment for artificial intelligence.” This prompted the regime to announce an ambitious plan to pour resources into AI development with the goal of making China the world leader by 2030.
The US is thought to hold a tenuous lead in this race, but the ultimate outcome is anything but clear. One key advantage the CCP regime has is that all of the results of civilian and academic research are required by law to be made available for military and security purposes—the doctrine of military-civil fusion. In the US and the West, by contrast, companies and researchers reserve the right to decline to participate in weapons-related AI work.
But get this. For several years the US has been working on a program to apply machine learning to submarine detection, in conjunction with researchers at the Scripps Institution of Oceanography at the University of California San Diego.
Well, that is good to know. But it seems these researchers have some interesting collaborators. Here is some information from an article on the defence technology site International Defense, Security and Technology:
Scientists from China and the United States have developed a new artificial intelligence-based system that they say will make it easier to detect submarines in uncharted waters … The researchers said the new technology should allow them to track any sound-emitting source—be it a nuclear submarine, a whale or even an emergency beeper from a crashed aircraft—using a simple listening device mounted on a buoy, underwater drone or ship.
The above article refers to a paper with the arcane title Deep-learning source localization using multi-frequency magnitude-only data, which appeared in the July 2019 issue of The Journal of the Acoustical Society of America and is indeed authored by a combination of American and Chinese researchers. It describes an AI technique that allows sound-emitting objects in the deep ocean to be located with significantly greater precision.
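Without access to the authors’ code, one can still sketch the general shape of such an approach: treat localization as a classification problem, feeding the magnitudes of the received acoustic field at several frequencies into a neural network that predicts which range bin the source occupies. The Python/PyTorch sketch below is purely illustrative; the array size, number of frequencies, network dimensions and random training data are placeholders, not the paper’s actual configuration.

```python
import torch
import torch.nn as nn

N_SENSORS, N_FREQS, N_RANGE_BINS = 16, 10, 50    # hypothetical array and output grid

# A small network mapping magnitude-only spectral features to a range-bin prediction.
model = nn.Sequential(
    nn.Linear(N_SENSORS * N_FREQS, 128), nn.ReLU(),
    nn.Linear(128, N_RANGE_BINS),                # logits over candidate source ranges
)

# Stand-in training data: magnitude-only spectra (no phase) with known range labels,
# which in practice would come from simulated or measured acoustic fields.
features = torch.rand(1024, N_SENSORS * N_FREQS)
labels = torch.randint(0, N_RANGE_BINS, (1024,))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                             # a few illustrative training steps
    optimiser.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimiser.step()

predicted_bin = model(features[:1]).argmax(dim=1)   # estimated range bin for one sample
```

The practical attraction of this kind of approach, as the paper’s title suggests, is that it needs only the magnitude of the received signal, not precise phase information, which makes it feasible with simple hydrophone arrays of the sort the quoted article describes.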
So, if I understand this correctly, US researchers at Scripps Oceanography are collaborating with colleagues from their country’s chief strategic rival on the development of a sophisticated new technique with clear application to anti-submarine warfare. Reading this, you wonder about the limits of human naivete.
Consider the implications of this for the AUKUS program, under which Australia is to acquire nuclear-powered submarines in collaboration with the US and the UK. In July 2022, IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, published an article titled Will AI Steal Submarines’ Stealth?, with the subtitle “Better detection will make the oceans transparent—and perhaps doom mutually assured destruction”.
To paraphrase Marx in the opening lines of the Communist Manifesto, a spectre is haunting the world: the spectre of misaligned superintelligent AI, with goals incompatible with human values and interests.
There are good reasons to regard this as the most significant, truly existential, risk our species could face in the coming century. And because of the exponential acceleration of progress in AI over the past decade, this eventuality could arrive much earlier than previously thought.
There are still many people, including a minority of experts in the field, who regard this as a science-fiction scenario. Some argue that the main focus of concern should be on short-term impacts, such as the disruptive effects on the labour market, and the use of AI to manufacture disinformation, including deep fakes, that can be used to distort political processes.
However, in between the short-term effects of AI and the more distant existential dangers of misaligned AGI, there is another type of danger: the possible rise of an AI-empowered global totalitarian hegemonic power. The principal candidate for such a power is, of course, the CCP regime that rules China. This regime is putting huge resources into AI, aiming to overtake the US and become dominant in the field by 2030.
As noted above, the CCP regime has adopted a policy of military-civil fusion, which requires all research results and other outcomes from government, academia and the private sector to be made available to the military and the security state.
As well as having a critical role in the regime’s quest for military dominance, AI has already been harnessed to create a system of social surveillance and control beyond anything Orwell could imagine for his 1984 fictional dystopia—a veritable digital panopticon. Moreover, the CCP regime is already exporting this technology to the wider world, starting with other autocratic states looking for better ways to repress their populations.
The CCP is now the dominant partner in a coalition of autocratic, trending towards totalitarian, states striving to overturn the Western-dominated world order and replace it with one more congenial to their purposes. Second to China in this coalition is Russia, ruled as a personalist dictatorship by Vladimir Putin, who has said that “whoever becomes the leader in the AI sphere will become the ruler of the world.”
This combination of powerful malign geopolitical actors and accelerating progress on AI brings to the fore one of the less-discussed existential dangers: the risk of stable totalitarianism, a regime capable of detecting and suppressing dissent at its most nascent stage, making it extremely difficult, if not impossible, to overthrow (see this article for an interesting debate about this scenario).
Recall the alignment problem that AI researchers and entrepreneurs in the democratic world agonize about: how to ensure that a super-intelligent AI, when it emerges, will not pursue goals antithetical to human interests. How would this goal be compromised if AI dominance is achieved by a secretive, totalitarian state?
So, if we are not on “the” hinge of history, we certainly seem to be living in a very influential time. Toby Ord made an apt choice for the title of his book on existential risk, The Precipice.
Coming in next week’s newsletter
Professor John J. Mearsheimer: Theorist for Aggression
The next two newsletters will have a geopolitical focus. If you have followed the debates about the wars in Ukraine and Gaza, you will probably have heard of John J. Mearsheimer, a professor of political science at the University of Chicago.
A theorist for aggression? That seems a bit harsh, you might think. To clarify, I am not claiming Mearsheimer is an agent of influence for tyrannical regimes, paid or otherwise.
It is the case, though, that he has gone far beyond most critics of Western policy toward Russia, such as those who think the post-Cold War expansion of NATO was imprudent, to positively endorse Putin’s decision to invade Ukraine.
You will rarely, if ever, hear Mearsheimer make the slightest moral condemnation of Russian conduct in initiating the war, or its atrocious conduct of it. He insists that the Ukraine war is solely the West’s fault. He basks in the praise he receives in the halls of power in Moscow and Beijing and boasts of his “many friends” in both capitals, who find his worldview highly congenial and consistent with their goals.
Mearsheimer has been extraordinarily influential in Western debates on the war, not least because he has been able to erect a theoretical scaffolding around his outrageous views. He is the most prominent contemporary advocate of the realist school of international relations, having developed his own variant of the theory, which he aptly terms “offensive realism” and which dramatically lowers the threshold at which recourse to war is deemed legitimate.
He has been a prolific commentator on both of these conflicts. With regard to Ukraine, Mearsheimer invokes his own theory to claim that the Russian actions are both rational and warranted in the circumstances, and he endorses Putin’s ludicrous claim that Russia’s very survival was threatened by NATO expansion.
With regard to Gaza, he has described the terrorist onslaught of 7 October 2023 as a “prison break”, and attributes almost all blame for these tragic events to Israel and the United States. He bizarrely dismisses his own theory, which views survival as the prime imperative for all states, as inapplicable to this conflict despite the explicit and all-too-credible threats to Israel’s survival by Iran and its proxies.
What to make of that? I have written a detailed critique of Mearsheimer’s theory and his commentaries on these two conflicts.
Very interesting article. Morals and ethics are not universally shared, I feel. The development of AI would be no different. I have often said that children are the ultimate AI. You can provide the best education, inculcate them with morals, ethics and so on. Yet it does not guarantee a favourable outcome. Why would AI be any different?
Having said that, there are five groups of people who invariably get things wrong: economists, meteorologists, computer modellers, politicians, and doomsayers.
Re climate change (see doomsayers), the Rev Malthus is alive and well. Human ingenuity and adaptability are ALWAYS discounted from the equation.
Thanks, Mr Baldwin. You have been missed. I hope the philosophy forum will begin again.
Every generation, every year, even every day, represents a “hinge-point” in history: today is always the first day of our future (and the last day of our past). Forget about the next million years – just thinking about our next generation’s challenges raises countless headaches, with the changes taking place right now. And surely, anyone with an even moderate awareness of history would know that the future is unpredictable (apart from “death and taxes”), as it always has been. Many of our present-day prophets seem to have been too deeply immersed in sci-fi and computer games to understand the practical limitations of the real world. Musk’s idea of settling Mars is bordering on psychosis (we already know he’s a megalomaniac) – he thinks we’ll be able to radically transform an alien planet’s environment to support human life, while we can’t even preserve a perfectly liveable ecosphere here. Seeing that his business activities are in no small part contributing to our environmental degradation, perhaps his thinking is partly driven by guilt (were he capable of such feeling)?
While transformative technologies have helped solve (and postpone) some serious challenges for humanity, they have inevitably facilitated the further growth of human populations, creating even greater problems. And they all seem to require growing amounts of energy. But hoping for international cooperation for their solutions encounters a fundamental contradiction: we have evolved for life in small, hunter-gatherer groups, yet are now forced to work closely together on a planetary scale, suppressing small-group, even individual, interests for the benefit of the global collective. How realistic is that? There are good evolutionary reasons for the high prevalence of the “dark triad” in human groups, where manipulative, narcissistic psychopaths habitually emerge to attain positions of influence and control in most large-scale human endeavours, such as running governments, or big corporations. How to deal with that will always be a stumbling block.
It's unlikely that Homo sapiens is at risk of soon becoming extinct, but its prospects of sustaining massive populations as at present, are diminishing fast, and growing rather bleak; the cull, when it comes, will be grotesque. I don’t think it will be climate change, but rather the constraints on food production and distribution which will prove the ultimate limiting factor (there go our cities). Forget AI: it needs far too much energy, and someone can always pull the plug. In the real long term, biology will always prevail over technology (autocrats take note: “while there’s death, there’s hope”). Human populations will drop to levels that can survive in a degraded biosphere with technology that requires low energy input – a sort-of neo-hunter-scavenger economy, probably living in small groups again. Sorry to be so nihilistic, but it’s what age does, at least to some of us.
On a practical point: given the rapid advances in undersea monitoring technology, how sensible is it for us to be planning to acquire N-subs “some time in the future”, when killer drones are likely to be developed soon (if not already) that will render manned submarines obsolete?