In the global AI race, battle lines are being drawn and the stakes couldn’t be higher. Technology executives now have a direct line to government leaders and are writing policy and influencing key decisions around AI development that will shape the future of global politics, diplomacy, security and competition. Mairéad Pratschke discusses these and other developments in light of a new book by Henry Kissinger, Craig Mundie and Eric Schmidt.
Mairéad Pratschke discussed sovereign AI with Craig Mundie at an LSE event. Watch the video.

2025 begins with Donald J Trump back in the Oval Office. After months of courting the soon-to-be president, the CEOs of big tech companies were displayed front and centre at Trump’s inauguration. On this side of the Atlantic, the close alliance between technology and government is also influencing the direction of government policy. In the UK, the 50-point AI Opportunities Plan, authored by tech entrepreneur Matt Clifford, was published on 13 January. The same day, OpenAI published AI in America, its economic blueprint framing infrastructure as “destiny”. A week earlier, Anthropic CEO Dario Amodei and former US Deputy National Security Advisor Matt Pottinger had penned an op-ed for the Wall Street Journal arguing for proactive moves to maintain the United States’ early lead. One such move is Stargate, a $500-billion private investment in AI infrastructure.
If data is the new oil, AI is the new Manhattan Project, and governments the world over are investing in infrastructure to support so-called sovereign AI. The stakes could not be higher and battle lines are being drawn, with the US and China in direct competition and the EU occupying an uncomfortable position as regulatory champion. Technology executives now have a direct line to government leaders, and they are not just writing policy but also influencing key decisions around AI development that will shape the future of global politics, diplomacy, security and competition.
If AI is the arms race for the 21st century, who better to provide advice on strategy than the man whose entire career was defined by the Atomic Age? Henry A Kissinger spent the last years of his life becoming an expert in AI and his experience in government from the early days of the nuclear threat is the context that frames this cautionary yet optimistic tale about our AI future.
In their new book “Genesis: Artificial Intelligence, Hope and the Human Spirit”, co-authors Henry Kissinger, Craig Mundie (ex-Microsoft) and Eric Schmidt (ex-Google) present a vision of the AI future that is equal parts cautionary tale and optimistic techno-futurism. Genesis is a sequel to The Age of AI, which Kissinger and Eric Schmidt co-wrote with computer scientist Daniel Huttenlocher.
Drawing on Kissinger’s wealth of expertise, Genesis situates AI development and its implications for our shared future squarely in the terrain of geopolitics and national security. In it, the authors lay out precisely what is at stake for politics and policy, global diplomacy and the choices that will define the future of human civilisation. Defining that future requires us to choose, sooner rather than later, between co-evolution and co-existence with AI.
AI pioneers may underestimate the scope of the economic and political challenges that they have set in motion.
Genesis presents AI as humanity’s new beginning, an opportunity to transcend the physical limitations of the human body and to explore new territory of the mind. Insights on politics, security, prosperity and science are peppered with excerpts from philosophical writings and lessons from global history and diplomacy. Comparing the polymaths of the Enlightenment to the cognitive architectures underlying today’s AI systems, the authors envision a future in which the mixture of expertise in AI systems connects formerly isolated branches of knowledge to create a new collective intelligence.
While AI is bringing exciting advances in science, its impact on politics is much less clear. The authors point to human weakness, visible in bias and emotional decision-making, which makes democracy itself vulnerable. AI could bring a new rationalism to politics, but rather than casting it as a modern-day Machiavelli, the authors focus on applying AI to more mundane tasks: reducing data overload in the civil service and assisting in centralising government policy. As for leaders, the authors wisely suggest striking a balance between leveraging AI and becoming dependent upon it, a critical issue common to all domains in the knowledge economy as AI is increasingly integrated into daily workflows.
On security, the authors again suggest that AI’s machine objectivity can introduce a rationalism into strategic decision-making, potentially removing the imperative for war as we know it. They suggest that quasi-religious groupings might replace our current national allegiances. This is particularly interesting, considering the battles already underway between AI tribes including the “effective altruists” and “effective accelerationists”. However, rather than ushering in a new era of reason, the proliferation of such techno-political groupings suggests a darker age of division between AI tribes.
The authors anticipate an era of abundance, in which AI will move humanity beyond the zero-sum game and towards a future of prosperity. Attractive as that prospect is, the transition period in which we now find ourselves demands urgent attention. AI disruption means that some will lose their livelihoods long before any universal income appears to pay their mortgage.
The vision of the future treads a fine line between the utopian and the dystopian. In particular, the agentic future, in which we share our planet with multiple non-human intelligences, will not appeal to many. Yet these same systems, promising savings and efficiencies from task automation, are being aggressively promoted in industry. And as the authors caution, these are also the systems that pose the greatest risk to human civilisation, so it is critical that humans do not lose control of them.
The key question is, what sort of AI future do we want to create? For the authors, the choice boils down to co-evolution or co-existence with AI. While co-evolution with AI might allow us to consign our human inefficiencies to the rubbish bin of evolution, the risks might also outweigh the benefits and integration with AI is a path that will not be easy to retreat from.
And so, in the short term, co-existence it is. How do we co-exist with machines? The authors propose starting by defining what makes humans distinct, and they settle on the concept of dignity. But again, keen watchers of AI development will be quick to point out the distinct lack of dignity afforded to those tasked with training these very AI systems. The devil is truly in the details.
Kissinger understood, just as AI developers do today, that aligning AI systems to human values is the challenge for our age. “Profit-driven or ideologically-driven purposeful misalignments are serious risks, as are accidental ones”. Creating safe, responsible AI requires models that respect human values and behave accordingly. But the quest also raises several questions, the first of which is, whose values? So far, AI models do not represent or reflect the diversity of voices in the world.
Indeed, the heterogeneous nature of our response to AI to date underscores the need for unity, and the distinct lack of it in current discourse. And even as the authors call for collaboration in the quest for alignment, the world is hurtling headlong into the next arms race.
- This blog post discusses Genesis: Artificial Intelligence, Hope and the Human Spirit, by Eric Schmidt, Henry A. Kissinger and Craig Mundie (John Murray, 2024).
- The post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.
Oh yes, thank you. In a post a few days ago I expressed my concerns about the upcoming Paris meeting on 11-12 February, at which the economic and political powers of the digital industry, with the media support of heads of state, AI experts and others, could seek large-scale funding to spread AI. In that post I used the Greek myth of the “Procrustean bed” as a metaphor to warn against the announced “wild” invasiveness of AI, especially in education, instruction and other strategic sectors of personal development, and therefore also in the academic world. I am aware of the potential risk of “mental conditioning” induced by indiscriminate use of AI. The modern, algorithmic Turing machine rests on a particular “forma mentis”, to use the terminology of the human-intelligence expert Howard Gardner, and could over time select and favour the development of logical-mathematical thinking alone, driving other “formae mentis” to “extinction”, and with them the many and varied talents that express a kind of “biodiversity” within our own species. My concern is to highlight the risk of arriving at a “monopoly of thought” at the expense of the richness of our embodied cognition, established during the hominisation of our species along an evolutionary path of adaptation and survival in the many and diverse environments of the biosphere of this extraordinary planet.
When I also read and hear about the already identified uses and potential of AI for military purposes, this personal concern necessarily becomes a general appeal to all global institutions, starting with the UN, to establish some form of control and caution in this area. We have already seen and experienced the effects of atomic bombs, and humanity has absolutely no need for other, more sophisticated means of death and destruction of entire ecosystems of the biosphere that nourishes and hosts us, after having given birth to us.