
The Democratic Challenges Facing the Global Governance of Artificial Intelligence


Eva Erman & Markus Furendal

Forthcoming in Current History, Volume 124 (2025)

***

At the heart of the current AI boom is the steadily repeating mantra that we live in extraordinary times. Depending on who you ask, we seem to be just a few years away from unleashing AI technologies that will boost overall productivity, solve medical enigmas, turn politics on its head, or dispose of humankind. This sentiment was recently expressed by AI's poster boy Sam Altman, CEO of OpenAI, when he argued in The Washington Post with characteristic gravity that we currently "…face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology's benefits and opens access to it, or an authoritarian one, in which nations or movements that don't share our values use AI to cement and expand their power? There is no third option — and it's time to decide which path to take." Keeping regimes like Russia and China at bay, Altman explained, requires the U.S. to invest significantly in both digital infrastructure and human capital, as well as to help set up global AI institutions akin to the International Atomic Energy Agency (IAEA) or the Internet Corporation for Assigned Names and Numbers (ICANN). Importantly, this would not only be beneficial for the U.S. economy, but also create "a world shaped by a democratic vision for AI". The right kind of AI strategy would thus not only help "democratic AI" win over "authoritarian AI", but also help create a more democratic world.

Altman is entirely right to conceptualize the governance of AI technology as a multi-level issue, and two key reasons explain why we should not underestimate the importance of global regulatory initiatives in particular. The first is that the AI industry is a truly global phenomenon, in the sense that it is driven by large multinational companies like Microsoft, Google, and Meta, who recruit talent from all over the world, train AI models on data collected from the world wide web, and release their products in markets across jurisdictions. Local and national regulatory initiatives – like the recently discussed California State Senate Bill 1047 and last year's executive order on AI from the Biden cabinet – may end up being toothless against these giants, since AI companies could simply decide to withhold their products from the more strictly regulated markets, and move their headquarters if need be. This might, in turn, trigger a race dynamic where legislation is successively weakened in each jurisdiction until we reach an equilibrium where AI is regulated less extensively than most people want. Consider, also, that significant breakthroughs in AI are happening in the open source community, which is even more amorphous and difficult to tie to any particular jurisdiction.

A second reason why AI governance is a truly global issue is that the disruptive effects of the technology's introduction are not confined by geographical boundaries. Just like air pollution and the release of greenhouse gases, AI models create significant value for some, but serious and tangible problems for many others. If you ask economists, they would describe this as AI technology having significant externalities – costs and benefits carried by others than those who created them.
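The logic of this race can be made concrete with a stylized simulation, sketched below, in which every jurisdiction repeatedly matches the laxest rival's rule and undercuts it slightly in order to keep mobile AI firms; all numbers and jurisdictions are illustrative assumptions, not empirical estimates.

# A stylized sketch (illustrative assumptions only) of the race dynamic
# described above: each round, every jurisdiction undercuts the laxest
# rival's regulatory stringency to keep mobile AI firms from relocating.

CITIZEN_PREFERRED = 0.8  # stringency (0-1) citizens everywhere are assumed to want
UNDERCUT = 0.05          # margin by which regulators undercut the current minimum

def race_to_the_bottom(initial_levels, rounds=20):
    """Simulate rounds of regulatory competition; return the trajectory."""
    levels = list(initial_levels)
    history = [levels[:]]
    for _ in range(rounds):
        laxest = min(levels)
        # Everyone matches the laxest rule and undercuts it, but not below zero.
        levels = [max(0.0, laxest - UNDERCUT) for _ in levels]
        history.append(levels[:])
        if all(level == 0.0 for level in levels):
            break  # equilibrium: no one can undercut any further
    return history

trajectory = race_to_the_bottom([0.9, 0.8, 0.7])  # three hypothetical jurisdictions
print(f"Rounds until equilibrium: {len(trajectory) - 1}")
print(f"Equilibrium stringency: {trajectory[-1][0]:.2f} "
      f"(citizens preferred {CITIZEN_PREFERRED})")

The point of the sketch is simply that, so long as each regulator fears losing firms to the laxest rival, stringency ratchets downward round by round until no one regulates above the minimum, however strict the citizens of every jurisdiction would prefer the rules to be.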
Experts on international relations would add that, when we face potential race dynamics and negative externalities, well-designed global institutions can play key roles in resolving the problem. Altman is thus correct in suggesting that, just as the IAEA was created to promote the safe, secure and peaceful use of nuclear technology, global AI governance institutions could potentially help promote everyone's interest, by avoiding a regulatory race to the bottom and promoting a fair distribution of AI's positive and negative effects.

At the same time, Altman's analysis is overly simplistic, since he exploits the rhetorical power of terms like "democratic" and "authoritarian" AI without explaining what they mean. In addition, he assumes that we can identify each kind by asking whether it was developed in a democratic or an authoritarian state, implicitly suggesting that Western means democratic, and democratic means good AI. However, there are plenty of examples where such an equation breaks down. China is often criticized for using AI-powered automated surveillance of its minorities, for instance, but it would arguably not have had the capacity to build such systems were it not for American chip makers like Nvidia willingly doing business with them. Privacy-violating surveillance is, in addition, not restricted to authoritarian states. The American company Clearview AI, which has scraped the internet for personal pictures to build powerful facial recognition software, has marketed its services to Western law enforcement agencies, tried to shut down investigative reporting into its practices, and was recently fined by the Netherlands' data protection authority under the EU's GDPR legislation for including Dutch people in its training data without their consent. Perfectly illustrating our point about AI's global reach, the company claims the fine is unenforceable since Clearview AI does not offer its services to customers in the Netherlands or the EU.

Hence, in the efforts to spell out a "democratic vision for AI", being clear and precise about the concept of democratic AI is crucial, since it not only lends rhetorical power to influential people like Altman, but is also appealed to in increasingly common calls for a 'democratization of AI'. On closer inspection, such calls appear to mix up several distinct claims. One is the view that it is desirable to increase diversity among AI developers, assuming that AI development will be more in line with what people want if there is greater similarity between AI users and AI developers. Another claim is that it would be good if AI technology were available for more people to use, using 'democratic' as a placeholder for 'inclusiveness' and 'equal access'. Altman's infrastructural suggestion is a third variant, which has less to do with democracy as an ideal, and should rather be viewed as a proposal for how to leverage AI as a powerful technology in the global struggle for power between democracies and non-democracies.

We suggest that more attention should be paid to the arguably most important sense in which we can speak of democratic AI, namely that the development and deployment of AI technology should ultimately be democratically controlled. In other words, we call for a larger focus on AI governance. Even if only a fraction of the predictions around how society will be changed by AI technology come to pass, it is clear that people will be affected both as private individuals and as citizens.
Technological development is, after all, not an independent and unstoppable force, but can be partly guided on the basis of certain values and toward particular goals. Democratic AI, on this view, means that those who are significantly affected by the advent of AI technology – which is all of us – should have a say in how it is governed.

Who controls AI today?

Although the era of AI has just begun, it is obvious that the technology already affects our daily lives. Just how this happens depends, of course, on who you are. Early adopters are probably reaping significant benefits from incorporating the technology before most others, but you do not have to use AI yourself to be affected. If you have regular interactions with bureaucracies, you are most likely already subject to automated decision-making with far-reaching consequences for your chances in life. If you are working in a profession that was previously thought to be difficult to automate, recent advances in generative AI might raise concerns around your job security. If you are a student or a teacher, you may be confused about how to use AI as a tool, given that its arrival seems to call into question our idea of what the point of education is. In fact, even sceptics who may be hesitant to try AI applications have likely had pictures of their face used in training data. And whether we like it or not, we find ourselves in a society where the culture, norms and social expectations are being transformed, much like what happened with the arrival of the smartphone or social media in recent decades.

Despite recent efforts from governments and organizations to regulate the AI industry, it is fair to say that most AI development is currently beyond democratic control. At the very least, some people – like venture capitalists, Silicon Valley CEOs and AI engineers – exert significantly more influence over AI development than others, leaving the rest of us to react and adapt to their disruptive technology. Importantly, we are not claiming that democratic AI governance requires each design decision around AI to be made by committee. This would not only be infeasible but also halt what is often highly desirable technological progress. Moreover, democratically controlled AI would not necessarily mean that you get to decide how AI influences the world. It is an inherent fact about democracies that some people end up in the minority, and see their preferences disregarded in favour of the majority. Democratic governance is, however, a way to make sure this happens in a legitimate way. There is thus an important difference between not being taken into consideration because you lack a seat at the table, and having a say but ending up in the minority. As we illustrate below, spelling out what democratic AI governance means requires us to ask a set of complex questions, which Altman's dichotomy cannot capture, including what should be democratically controlled, who should have a say, and how this should happen. Our answers to these questions depend, in turn, on why we value the democratic ideal and what reasons there are to extend it to the AI domain.

What should be democratized?

What needs to be democratized in order to help ensure that those significantly affected have an influence in the decision-making? Even though it is common to describe AI as such a fast-moving technology that it cannot be regulated, it is more accurate to say that there is, by now, an emerging 'regime complex' governing its development.
So far, there is no specially designed international institution of the kind Altman envisages, but existing organizations have developed standards, guidelines and principles to which AI companies can more or less voluntarily commit. In what is commonly called a turn from soft law to hard law, we are also seeing legislative efforts like the California State Senate bill and the European Union's AI Act, which include formal constraints, monitoring, and economic sanctions for AI developers who do not abide by them. As we have seen earlier in other emerging policy areas, like internet governance, the global AI regime complex is characterized by a lack of central institutions and hierarchies, and different actors develop partly overlapping or even conflicting legislation. The right kind of AI governance can, it seems, help to build the proper legal and institutional framework within which more specific aspects of AI deployment and development may be democratically controlled.

Before these recent developments, much of the discourse on AI governance was centred around 'AI ethics', often developed by well-funded think tanks, tech companies or academic institutions. At first glance, these ethical frameworks seem to promote the ideal of democracy, since they promote the kinds of outcomes we expect from democratic governance. UNESCO's "Recommendation on the Ethics of Artificial Intelligence" and the OECD's AI Principles, for instance, are meant to promote the effectiveness of governance in achieving social justice by requiring that AI systems follow ethical principles and human rights standards. This has also led to a focus on accountability, for example, by stressing the importance of establishing mechanisms to secure public access to governing documents, which in turn could help prevent corruption among decision-makers. Along similar lines, the popular notion of 'alignment' is often used to describe AI systems that perform in accordance with what their creators prefer, but it also captures the idea that the outcomes of democratic decisions should as far as possible align with people's interests or with what they think is important. Some technology optimists go so far as to suggest that democratic decision-making could help achieve this through AI technology, which can track citizens' preferences and help experts make informed judgments.

Many suspect that business-initiated corporate social responsibility efforts amount to little more than 'ethics washing'. A number of criticisms have been raised against these self-regulatory frameworks, such as their lack of enforceability due to their voluntary nature; their tendency to lend power to a few private actors, which tend to prioritize profit-driven goals over ethical concerns; and their lack of mechanisms for redress in cases of harm or breaches of the standards. Even if we set these concerns aside for now, democratic theory leads us to believe there is an additional, distinct pitfall with these efforts: they build on the assumption that what we could call output aspects of democracy can replace or fully compensate for the lack of important input aspects, that is, the extent to which people who are significantly affected by AI governance have a say in the decision-making.
Even if business-led efforts end up perfectly tracking citizens' interests or aligning with their preferences – securing output aspects like accountability – they will always fall short with regard to the input aspect of having a say in the decision-making, and thus cannot lead to a democratization of AI governance. This raises critical concerns about the inclusivity, participation and legitimacy of the emerging regime complex of AI governance. Indeed, that decision-makers are held accountable for their actions is a virtue in any form of governance, but to count as democratic accountability, these decision-makers must somehow have been authorized to take those decisions by those who are expected to abide by them. Recognizing this suggests that AI governance is more democratic when people have democratic agency in the form of approving of or authorizing decision-makers and political bodies to design and implement AI regulation. On this view, even if governance efforts coming from non-state actors and international organizations may be laudable, there is an additional layer of democratic accountability in authorized entities like governments and certain international organizations, which is essential for AI governance to be democratic.

This is intimately related to another important issue relating to the 'what' aspect of 'democratizing AI governance'. In the debate around the (lack of) democratic control in AI governance, there has been a tendency to focus on particular decisions in specific policy domains, such as privacy concerns and bias mitigation, and not least AI safety, that is, the notion that a sufficiently powerful model could pose a threat to humans. Initiatives like the EU AI Act and the OECD AI Principles are typically structured around specific policy issues, where various stakeholders such as states, tech companies, academia and civil society organizations are invited to provide input and feedback on specific regulatory or ethical guidelines. While these efforts are valuable for making decision-making more inclusive, the drawback is that they tend to fall short in addressing the more foundational democratic problem of who gets to decide what counts as a problem to be put on the agenda in the regulation of AI in the first place. As pointed out by the famous political scientist Robert Dahl in asking the question "Who governs?", agenda-setting – the process by which certain issues are prioritized, framed, and given attention in policy discussions – is an overlooked but essential issue when assessing the democratic character of a society. Without democratic influence in the agenda-setting – influence over what the questions to be decided are – any downstream democratization in decision-making remains fundamentally limited and potentially skewed. As we illustrate below, this observation is highly relevant in the context of AI governance, where agenda-setting determines which aspects of AI development and deployment are considered problematic and worthy of regulation and oversight. Agenda-setting to a large degree shapes what kind of society people want to live in, the direction of society in dealing with the societal impact of AI, and our common supranational and international institutions.

Who should have a say in the decision-making?

This naturally leads us to the question of who should be present in a sufficiently democratic AI governance regime, and on what grounds.
Many are familiar with the longstanding concern that global governance in general suffers from a democratic deficit. In response, scholars have suggested that apart from states and international organizations, non-state actors such as non-governmental organizations (NGOs), advocacy groups and social movements could play a central role. These organizations may not only represent citizens' interests and make sure they appear in the decision processes of international organizations and institutions, but also function as watchdogs, holding those who wield power accountable. In recent decades, international organizations in several policy areas, such as global environmental governance and global health governance, have opened up and expanded their interaction with civil society organizations.

With regard to the specific problem of democratizing the global governance of AI, however, we need to consider the fact that many of the most influential non-state actors are not civil society watchdogs, but rather the very same multinational AI companies that are being affected by the governance. When the U.S. Department of Homeland Security recently announced a new AI safety and security board, for instance, 14 of the 22 members were CEOs of large tech companies. Similarly, the U.S. State Department has partnered with Amazon, Anthropic, Google, IBM, Meta, Microsoft, Nvidia, and OpenAI to launch the Partnership for Global Inclusivity on AI, aiming to promote sustainable development and improved quality of life in developing countries, including efforts to use AI tools to advance democracy.

There might be good reasons to include AI developers in AI governance discussions, for example, because of their deep technical expertise, innovation capacities, and the fact that they are directly responsible for the development and implementation of AI systems. However, the role of the private sector in AI governance is problematic from a democratic point of view. First, it grants a few large tech companies – e.g. Google, Amazon, Microsoft and Nvidia – disproportionate influence over decision-making in AI governance, which they tend to use to promote governance frameworks skewed in favor of corporate interests, for instance by setting the agenda in ways that do not upset their business. In addition, even if non-state actors were to promote what we called the output aspects of democracy above, they cannot promote any input aspects, since none of these actors has received a democratic mandate, through processes of authorization, to make the decisions they make on behalf of those significantly affected.

The importance of agenda-setting and the role of non-state actors can be illustrated by considering that the debate around AI is centred on two main concerns. One is the possibility that AI models may achieve capacities that allow them to threaten human life and property, and ultimately pose an existential risk. The other main concern has to do with near-term risks of AI implementation, such as data privacy and algorithmic bias, and broader structural concerns such as labour displacement and the socio-economic inequalities exacerbated by AI. Although these two camps are sometimes described as being at odds with each other, there is no principled reason why a sensible discussion around AI could not contain both. Scholars from each camp have nevertheless expressed frustration around the distraction the other camp produces by airing their concerns.
And influence in the academic debate arguably translates into agenda-setting power: the fact that the above-mentioned SB 1047 bill in California focused exclusively on long-term AI safety issues concerning existential risk could, for instance, be taken to indicate that one of the groups managed to exercise greater agenda-setting power and influence the way we understand the risks surrounding AI technology. The bill would have required developers of the largest class of AI models to adopt security measures, not to prevent the risk of bias or economic disruption, but rather the risk that the models themselves engage in conduct that leads to substantial direct harm to humans or the economy in, for instance, cyber, nuclear, or chemical attacks. This is likely to strike many as an odd priority for lawmakers, given that these risks are hypothetical, while many other harms from AI are already visible and significant throughout society.

California Governor Newsom's eventual decision to veto the bill did not reject the notion that AI must be regulated to prevent catastrophic risks, but rather objected to the bill's focus on large models, citing that smaller models could pose the same risks. It also echoed the AI industry's mantra, repeated in their lobbying efforts, that excessive regulation might stifle innovation.

We should also note that it is not obvious that initiatives like SB 1047 mark a significant step toward democratized AI governance. They can also be examples of how a small but highly invested interest group concerned with AI safety manages to shape the AI governance agenda in a way that mirrors its particular understanding of what is at stake. This is not to say that AI safety is unimportant, or that the electorate could not start to care about it if it became better informed. Rather, the point is to illustrate that, when we ask "who governs" AI and what it means for AI governance to be democratic, we should not simply take decision-making on a set of fixed issues as a given, but also look at who is influencing the way we understand the values at stake and the agenda to be decided.

How can AI governance be made more democratic?

In light of our analysis of the 'what' and 'who' of AI governance, how could we go about democratizing it? In short, we believe the shift from a soft-law approach to hard regulation is welcome from a democratic point of view. Granted, there is clear value in ethics guidance documents, strategies, and policies authored by intergovernmental organizations, multinational tech companies and international NGOs, not least because studies of these documents reveal that key values and principles deeply connected to democracy are stressed in them. But from a democratic point of view, the participatory and authorization aspects are nevertheless missing. Without formal and inclusive processes of decision-making, in which those significantly affected by AI technologies have a say, at least on the most fundamental matters such as the overall direction of AI development and its main intended role in society, self-regulatory frameworks will not contribute to the democratization of AI governance. Needless to say, the development of hard law in global AI governance faces many challenges. For example, it is difficult for an inherently slow legislative process to keep up with rapid technological development. Moreover, while AI is a global phenomenon, hard law has so far been enacted at national and regional levels.
For all their possible flaws, however, regulations like the EU AI Act nevertheless have a democratic feature that soft law lacks: in democratic societies, law-making involves elected representatives and public consultations, which are essential to establish a robust overall institutional structure for democratizing AI governance. It is up to ongoing and future empirical research to determine whether, and to what extent, voluntary commitments to soft-law regulation prevent the development of stronger mechanisms, or whether the two could be advanced in tandem. Hard law nevertheless promises to create uniform standards that apply across jurisdictions, which reduces the risk of fragmented regulatory frameworks. International agreements, and the kind of international institutions that Altman suggests, could help harmonize rules on AI use concerning cross-national issues like data privacy, surveillance, and algorithmic bias. Moreover, hard law, such as international treaties and legislation, imposes binding obligations on all parties involved and provides enforcement mechanisms.

Why is this important?

Let us end by stressing, again, why this matters. If what the AI developers are telling us about the technology they are developing is true, AI promises to be a powerful new tool that will not only impact the way we work, live, and interact, but may also reinforce or upend economic cooperation and power relations. As it stands, AI development is spearheaded by a tiny minority of the world's population, and shaped much more by their conceptions of what seems to work as a product or profitable business model than by the political preferences of the majority. Most of us are offered the chance to partake of AI technology as consumers, but lack influence over it as citizens. To the extent that we care about people having a say over issues with such profound impact on their lives, AI governance ought to be democratized at all levels, including the global level.

To achieve the democratization of AI governance, we need to constantly remind ourselves that we are all in the same boat. Even though the AI developers are rowing, democratic AI governance can hopefully allow us to steer, and ensure that we all get to decide in what direction we are heading.