“The Race for AI Supremacy”: A conversation with Parmy Olson
In the race to develop superintelligence — a technology that will transform the world more fundamentally than electricity or the internet — two figures stand out: Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of DeepMind. Both are brilliant, ambitious, and determined to create machines that can think, reason, and learn as humans do. In her new book, “Supremacy: AI, ChatGPT, and the Race That Will Change the World”, journalist Parmy Olson explores their rivalry, and the forces shaping the future of AI. She discussed her work on the London Futurists Podcast.
“This is really about a battle for control of technology,” Olson explains, “and governance over AI’s future.” The competition between Altman and Hassabis is taking place within the context of an industry dominated by the US tech giants, with Google and Microsoft in pole position, and Meta, Amazon, Apple, and Nvidia eager to catch up.
Olson’s book reveals a world of powerful ambition, ethical dilemmas, and high-stakes competition, with the tensions between the US and China making it a vital geopolitical battle as well as a corporate one.
The costly journey to superintelligence
Artificial General Intelligence (AGI) is a term used to mean a range of different things, but it is best understood as the milestone that heralds the arrival of superintelligence. An AGI has all the cognitive capabilities of an adult human, with some of them at a superhuman level. Very soon after AGI arrives we will have superintelligence, and quite likely an intelligence explosion, as machines rapidly improve themselves.
Developing superintelligence is not only an extraordinary technological challenge, but an immensely expensive one. As Olson points out, the capital needed to develop advanced models, manage enormous compute power, and attract top talent means that founders and companies often find themselves caught between idealism and commercial reality.
For instance, OpenAI began life as a non-profit, supported by significant donations, including a large gift from Elon Musk. However, when Musk’s demand to be put in charge was denied, he quit, taking his money with him. OpenAI needed billions to sustain its research, so it restructured into a “capped profit” model, in which investors’ returns are capped, allowing it to attract commercial funding while seeking to preserve its non-profit ideals. Similarly, DeepMind started with grand ambitions to build AGI for the benefit of humanity, but once Google acquired the company, it was inevitably subject to the demands of its corporate managers and shareholders.
“Developing AI is just so expensive,” Olson explains. “It’s almost impossible to do it without being drawn into the force of gravity of companies like Microsoft or Google.” The need for funding, combined with the ambition to develop ever more advanced systems, often leads to compromises that severely test founders’ original ideals.
Strategic minds and corporate power
As the leaders of OpenAI and DeepMind, Altman and Hassabis have become icons in the AI world, not only for their intelligence, but also for their willingness to grapple with the ethical and existential risks posed by AGI. Hassabis, who began his career as a neuroscientist and was once a chess champion, has a reputation for thinking several moves ahead. Olson notes that those who have worked with him describe him as a master strategist—someone who excels at “managing both up and down,” which might explain his ability to rise within Google’s ranks.
After a power struggle between DeepMind and Google Brain, Hassabis now leads Google’s entire AI division, responsible not just for DeepMind but for Google’s overarching AI strategy. His leadership has placed him in a powerful position, with some speculating he could one day take the helm of Alphabet as a whole. Altman, meanwhile, has also gained a reputation for being both open and pragmatic: he warns of the existential risks AI could pose if mishandled, but has been criticised by some for changing his approach to suit the financing requirements of his company.
“Both Altman and Hassabis are sincere in their intentions to make a positive impact,” says Olson. “But they’re also dealing with intense pressures and conflicts of interest. They’re trying to hold onto their ideals while balancing the pull of huge commercial interests, which often pulls them in opposite directions.”
Navigating the existential risks of superintelligence
A unique aspect of this race is that both Altman and Hassabis acknowledge that superintelligence could pose an existential risk to humanity. While they are optimistic about its potential, they also recognise that once superintelligence exists, it will almost certainly be beyond our control.
This paradox — racing to develop superintelligence while trying to ensure its safety — reflects the complex motivations of these two leaders. Both believe that if superintelligence is inevitable, it’s better for it to be developed by responsible actors rather than left to “bad actors”, including certain foreign governments. Yet, their pursuit raises difficult questions about how much caution they are really exercising.
“They’re caught in this incredible balancing act,” says Olson. “They’re committed to advancing AI, yet they’re constantly aware of the risks it poses. It’s like they have to perform mental gymnastics to reconcile their ambitions with their concerns.”
The rising influence of tech giants—and the spectre of China
While the rivalry between Altman and Hassabis might be the most visible competition, the broader power struggle between Microsoft and Google has intensified. Both companies have invested heavily in AI, and both are deeply entangled with their respective AGI labs—Microsoft with OpenAI, and Google with Google DeepMind.
Complicating matters further is the global race between the US tech giants and China, where the government provides extensive support to AI initiatives. Although Chinese models lag behind in sophistication, Chinese tech giants like Baidu and Alibaba benefit from subsidies that make their large language models accessible to businesses at a fraction of the cost. Meanwhile, the role that the US government will play is just one of the many imponderables now that Trump has secured his return to the White House.
Will governments step in?
Given the immense power and impact that superintelligence will have, it is likely that governments and intelligence agencies will intervene once they believe its arrival is imminent, seeking both to avoid losing control to the technology itself and to ensure they do not fall behind foreign competitors.
Olson suggests this might take the form of covert collaboration between tech companies and intelligence agencies, similar to the arrangements exposed by Edward Snowden. Recently, OpenAI appointed a former NSA official to its board, possibly signalling an openness to increased government oversight.
The idea of nationalizing the companies developing AI would be highly controversial. “The tech lobby in Washington is incredibly powerful, arguably even more influential than the government itself,” Olson points out, “but some form of collaboration or oversight seems increasingly likely.”