Ungoverned AI: Eurasia Group's #4 Top Risk of 2024

Risk 4: Ungoverned AI

Gaps in AI governance will become evident in 2024 as regulatory efforts falter, tech companies remain largely unconstrained, and far more powerful AI models and tools spread beyond the control of governments.

Last year brought a wave of ambitious AI initiatives, policy announcements, and proposed new standards, with cooperation on unusual fronts. America's leading AI companies committed to voluntary standards at the White House. The United States, China, and most of the G20 signed up to the Bletchley Declaration on AI safety. The White House issued a groundbreaking AI executive order. The European Union finally agreed on its much-heralded AI Act. And the United Nations convened a high-level advisory group (of which Ian is a member).

But breakthroughs in artificial intelligence are moving much faster than governance efforts. Four factors will contribute to this AI governance gap in 2024:

1) Politics. As governance structures are created, policy and institutional disagreements will force them to scale back their ambitions. The lowest common denominator of what governments can agree on politically and what tech companies will accept without constraining their business models will fall short of what is needed to address AI risks. The result will be a scattershot approach to testing foundation models, no agreement on how to handle open-source versus closed-source AI, and no requirement to assess the impact of AI tools on populations before they are rolled out. A proposed Intergovernmental Panel on Climate Change (IPCC)-style institution for AI would be a useful first step toward a shared global scientific understanding of the technology and its social and political implications, but it will take time … and it is not going to “fix” AI safety risks on its own any more than the IPCC has fixed climate change.

2) Inertia. Government attention is finite, and once AI is no longer "the current thing," most leaders will move on to other, more politically salient priorities such as wars (please see Top Risks #2 and #3) and the global economy (please see Top Risk #8). As a result, much of the necessary urgency and prioritization of AI governance initiatives will fall by the wayside, particularly when implementing them requires hard trade-offs for governments. Once attention drifts, it will take a major crisis to force the issue to the fore again.

3) Defection. The biggest stakeholders in AI have so far chosen to cooperate on AI governance, with tech companies themselves committing to voluntary standards and guardrails. But as the technology advances and its enormous benefits become self-evident, the growing lure of geopolitical advantage and commercial interest will incentivize governments and companies to defect from the non-binding agreements and regimes they've joined in order to maximize their gains—or never to join them in the first place.

4) Technological speed. AI will continue to improve quickly, with capabilities doubling roughly every six months—three times faster than Moore's law. GPT-5, the next generation of OpenAI's large language model, is set to come out this year—only to be rendered obsolete within months by the next as-yet-inconceivable breakthrough. As AI models become exponentially more capable, the technology itself is outpacing efforts to contain it in real time.

Which brings us to the core challenge for AI governance: Responding to AI is less about regulating the technology (which is well beyond plausible containment) than about understanding the business models driving its expansion and then constraining the incentives (capitalism, geopolitics, human ingenuity) that propel it in potentially dangerous directions. On this front, no near-term governance mechanisms will come close. The result is an AI Wild West resembling the largely ungoverned social media landscape, but with greater potential for harm.
Chart: Technological breakthroughs moving faster than governance efforts

Two risks stand out for 2024. The first is disinformation. In a year when four billion people head to the polls, generative AI will be used by domestic and foreign actors—notably Russia—to influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale. Sharply divided Western societies, where voters increasingly get their information from social media echo chambers, will be particularly vulnerable to manipulation. A crisis in global democracy is today more likely to be precipitated by AI-created and algorithm-driven disinformation than by any other factor.

Beyond elections, AI-generated disinformation will also be used to exacerbate ongoing geopolitical conflicts such as the wars in the Middle East and Ukraine (please see Top Risks #2 and #3). Kremlin propagandists recently used generative AI to spread fake stories about Ukrainian President Volodymyr Zelensky on TikTok, X, and other platforms; those stories were then cited by Republican lawmakers as reasons not to support further US aid to Ukraine. Last year also saw misinformation about Hamas and Israel spread like wildfire. While much of this has happened without AI, the technology is about to become a principal risk shaping snap policy decisions. Simulated pictures, audio, and video—amplified on social media by armies of AI-powered bots—will increasingly be used by combatants, their backers, and chaos agents to sway public opinion, discredit real evidence, and further inflame geopolitical tensions around the world.

The second imminent risk is proliferation. Whereas AI has thus far been dominated by the United States and China, in 2024 new geopolitical actors—both countries and companies—will be able to develop and acquire breakthrough artificial intelligence capabilities. These include state-backed large language models and advanced applications for intelligence and national security use. Meanwhile, open-source AI will enhance the ability of rogue actors to develop and use new weapons and heighten the risk of accidents (even as it also enables unfathomable economic opportunities).

AI is a “gray rhino”: a highly probable, high-impact threat that is too easily ignored. Its upside is easier to predict than its downside. It may or may not have a disruptive impact on markets or geopolitics this year, but sooner or later it will. The longer AI remains ungoverned, the higher the risk of a systemic crisis—and the harder it will be for governments to catch up.

