Governing AI for Humanity
Final Report
About the High-level Advisory Body
on Artificial Intelligence
The multi-stakeholder High-level Advisory Body on Artificial Intelligence, initially
proposed in 2020 as part of the United Nations Secretary-General’s Roadmap
for Digital Cooperation (A/74/821), was formed in October 2023 to undertake
analysis and advance recommendations for the international governance of
artificial intelligence.
The members of the Advisory Body have participated in their personal capacity,
not as representatives of their respective organizations. This report represents
a majority consensus; no member is expected to endorse every single point
contained in this document. The members affirm their broad, but not unanimous,
agreement with its findings and recommendations. The language included
in this report does not imply institutional endorsement by the members’
respective organizations.
3. Global AI governance gaps
  A. Representation gaps
  B. Coordination gaps
  C. Implementation gaps
4. Enhancing global cooperation
  A. Common understanding
    International scientific panel on AI
  B. Common ground
    Policy dialogue on AI governance
    AI standards exchange
  C. Common benefits
    Capacity development network
    Global fund for AI
    Global AI data framework
  D. Coherent effort
    AI office in the United Nations Secretariat
  E. Reflections on institutional models
    An international AI agency?
5. Conclusion: a call to action
Annexes
Annex A: Members of the High-level Advisory Body on Artificial Intelligence
Annex B: Terms of reference of the High-level Advisory Body on Artificial Intelligence
Annex C: List of consultation engagements in 2024
Annex D: List of “deep dives”
Annex E: Risk Global Pulse Check responses
Annex F: Opportunity scan responses
Annex G: List of abbreviations
1 See https://un.org/ai-advisory-body.
and other domains. Such an approach can turn a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in international law and the SDGs, adaptable across contexts and over time.

xi In our interim report, we outlined principles2 that should guide the formation of new international AI governance institutions. These principles acknowledge that AI governance does not take place in a vacuum, and that international law, especially international human rights law, applies in relation to AI.

2. Global AI governance gaps

xv Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects us. The concentration of decision-making in the AI technology sector cannot be justified; we must also recognize that historically many communities have been entirely excluded from AI governance conversations that impact them.

xvi AI governance regimes must also span the globe to be effective — effective in averting “AI arms races” or a “race to the bottom” on safety and rights, in detecting and responding to incidents emanating from decisions along AI’s life cycle which span multiple jurisdictions, in spurring learning, in encouraging interoperability, and in sharing AI’s benefits.
2 Guiding principle 1: AI should be governed inclusively, by and for the benefit of all; guiding principle 2: AI must be governed in the public interest; guiding
principle 3: AI governance should be built in step with data governance and the promotion of data commons; guiding principle 4: AI governance must be
universal, networked and rooted in adaptive multi-stakeholder collaboration; guiding principle 5: AI governance should be anchored in the Charter of the United
Nations, international human rights law and other agreed international commitments, such as the SDGs.
3 Excluding the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2021) and
the two General Assembly resolutions on AI in 2024: “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable
development” (78/265) and “Enhancing international cooperation on capacity-building of artificial intelligence” (78/311).
[Figure: participation in plurilateral AI governance initiatives, by regional group. Note: per endorsement of relevant intergovernmental issuances; countries are not considered involved in a plurilateral initiative solely because of membership in the European Union or the African Union. Abbreviations: AG, African Group; APG, Asia and the Pacific Group; EEG, Eastern European Group; G20, Group of 20; G7, Group of Seven; GPAI, Global Partnership on Artificial Intelligence; LAC, Latin America and the Caribbean; OECD, Organisation for Economic Co-operation and Development; WEOG, Western European and Others Group.]
3. Enhancing global cooperation

United Nations Secretariat, close to the Secretary-General, working as the “glue” to unite the initiatives
International scientific panel on AI

xxiii Learning from precedents such as the Intergovernmental Panel on Climate Change (IPCC) and the United Nations Scientific Committee on the Effects of Atomic Radiation, an international, multidisciplinary scientific panel on AI could collate and catalyse leading-edge research to inform scientists, policymakers, Member States and other stakeholders seeking scientific perspectives on AI technology or its applications from an impartial, credible source.

xxiv A scientific panel under the auspices of the United Nations could source expertise on AI-related opportunities. This might include facilitating “deep dives” into applied domains of the SDGs, such as health care, energy, education, finance, agriculture, climate, trade and employment.

xxv Risk assessments could also draw on the work of other AI research initiatives, with the United Nations offering a uniquely trusted “safe harbour” for researchers to exchange ideas on the “state of the art”. By pooling knowledge across silos in countries or companies that may not otherwise engage or be included, a United Nations-hosted panel can help to rectify misperceptions and bolster trust globally.

xxvi Such a panel should operate independently, with support from a cross-United Nations system team drawn from the below-proposed AI office and relevant United Nations agencies, such as the International Telecommunication Union (ITU) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). It should partner with research efforts led by other international institutions, such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on Artificial Intelligence.
Recommendation 1
An international scientific panel on AI
c) Issuing ad hoc reports on emerging issues, in particular the emergence of new risks or
significant gaps in the governance landscape.
4 Analogous to the high-level political forum in the context of the SDGs that takes place under the auspices of the Economic and Social Council.
5 Relevant parts of the United Nations system could be engaged to highlight opportunities and risks, including ITU on AI standards; ITU, the United Nations
Conference on Trade and Development (UNCTAD), the United Nations Development Programme (UNDP) and the Development Coordination Office on AI
applications for the SDGs; UNESCO on ethics and governance capacity; the Office of the United Nations High Commissioner for Human Rights (OHCHR) on
human rights accountability based on existing norms and mechanisms; the Office for Disarmament Affairs on regulating AI in military systems; UNDP on
support to national capacity for development; the Internet Governance Forum for multi-stakeholder engagement and dialogue; the World Intellectual Property
Organization (WIPO), the International Labour Organization (ILO), the World Health Organization (WHO), the Food and Agriculture Organization of the United
Nations (FAO), the World Food Programme, the United Nations High Commissioner for Refugees (UNHCR), UNESCO, the United Nations Children’s Fund, the
World Meteorological Organization and others on sectoral applications and governance.
Recommendation 2
Policy dialogue on AI governance
a) Share best practices on AI governance that foster development while furthering respect,
protection and fulfilment of all human rights, including pursuing opportunities as well as
managing risks;
c) Voluntarily share significant AI incidents that stretched or exceeded the capacity of State agencies to respond; and
[Figure: number of AI-related standards published by IEEE, ISO/IEC and ITU per year, 2018–2024 (Jan.–Jun.), including further ISO and IEEE standards under development. Sources: IEEE, ISO/IEC, ITU, World Standards Cooperation (based on June 2023 mapping, extended through inclusion of standards related to AI).]
Recommendation 3
AI standards exchange
b) Debating and evaluating the standards and the processes for creating them; and
C. Common benefits

xxxvi The 2030 Agenda for Sustainable Development, with its 17 SDGs, can give clarity of purpose to the development, deployment and uses of AI, bending the arc of investments towards global development challenges. Without a comprehensive and inclusive approach to AI governance, the potential of AI to contribute positively to the SDGs could be missed, and its deployment could inadvertently reinforce or exacerbate disparities and biases.

xxxvii AI is no panacea for sustainable development challenges; it is one component within a broader set of solutions. To truly unlock AI’s potential to address societal challenges, collaboration among governments, academia, industry and civil society is crucial, so that AI-enabled solutions are inclusive and equitable.

xxxviii Much of this depends on access to talent, computational power (or “compute”) and data, in ways that help cultural and linguistic diversity to flourish. Basic infrastructure and the resources to maintain it are also prerequisites.

xxxix Regarding talent, not every society needs cadres of computer scientists for building their own models. However, whether technology is bought, borrowed or built, a baseline socio-technical capacity is needed to understand the capabilities and limitations of AI, and harness AI-enabled use cases appropriately while addressing context-specific risks.

xl Compute is one of the biggest barriers to entry in the field of AI. Of the top 100 high-performance computing clusters in the world capable of training large AI models, not one is hosted in a developing country.6 It is unrealistic to promise access to compute that even the wealthiest countries and companies struggle to acquire. Rather, we seek to put a floor under the AI divide for those unable to secure needed enablers via other means, including by supporting initiatives towards distributed and federated AI development models.

xli Turning to data, it is common to speak of misuse of data in the context of AI (such as infringements on privacy) or missed uses of data (failing to exploit existing data sets). However, a related problem is missing data, which includes the large portions of the globe that are data poor. Failure to reflect the world’s linguistic and cultural diversity has been linked to bias in AI systems, but may also be a missed opportunity for those communities to access AI’s benefits.

xlii A set of shared resources – including open models – is needed to support inclusive and effective participation by all Member States in the AI ecosystem, and here global approaches have distinct advantages.

Capacity development network

xliii Growing public and private demand for human and other AI capacity coincides with emergent national, regional and public-private AI centres of excellence that have international capacity development roles. A global network can serve as a matching platform that expands the range of possible partnering and enhances interoperability of capacity-building approaches.

xliv From the Millennium Development Goals to the SDGs, the United Nations has long embraced developing the capacities of individuals and institutions.7 A network of institutions, affiliated
6 Proxy indicator since most high-performance computing clusters do not have graphics processing units (GPUs) and are of limited use for advanced AI.
7 Through the work of UNESCO, WIPO and others, the United Nations has helped to uphold the rich diversity of cultures and knowledge-making traditions
across the globe. The United Nations University has long had a commitment to capacity-building through higher education and research, and the United
Nations Institute for Training and Research has helped to train officials in domains critical to sustainable development. The UNESCO Readiness Assessment
Methodology is a key tool to support Member States in their implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence.
Other examples include the WHO Academy in Lyon, France, the UNCTAD Virtual Institute, the United Nations Disarmament Fellowship run by the Office for
Disarmament Affairs and the capacity development programmes led by ITU and UNDP.
Recommendation 4
Capacity development network
expertise, compute and AI training data to key actors. The purpose of the network would be to:
a) Catalyse and align regional and global AI capacity efforts by supporting networking
among them;
c) Make available trainers, compute and AI training data across multiple centres to
researchers and social entrepreneurs seeking to apply AI to local public interest use
cases, including via:
Global fund for AI

xlvi Many countries face fiscal and resource constraints limiting their ability to use AI appropriately and effectively. Despite any capacity development efforts (recommendation 4), some may still be unable to access training, compute, models and training data without international support. Other funding efforts may also not scale without it.

xlvii Our intention in proposing a fund is not to guarantee access to advanced compute resources and capabilities. The answer may not always be more compute. We also need better ways to connect talent, compute and data. The fund’s purpose would be to address the underlying capacity and collaboration gaps for those unable to access requisite enablers so that:
a. Countries in need can access AI enablers, putting a floor under the AI divide;
b. Collaborating on AI capacity development leads to habits of cooperation and mitigates geopolitical competition;
c. Countries with divergent regulatory approaches have incentives to develop common templates for governing data, models and applications for societal-level challenges related to the SDGs and scientific breakthroughs.

xlviii This public interest focus makes the fund complementary to the proposal for an AI capacity development network, to which the fund would also channel resources. The fund would provide an independent capacity for monitoring of impact, and could source and pool in-kind contributions, including from private sector entities, to make available AI-related training programmes, time, compute, models and curated data sets at lower-than-market cost. In this manner, we ensure that vast swathes of the world are not left behind and are instead empowered to harness AI for the SDGs in different contexts.

xlix It is in everyone’s interest to ensure that there is cooperation in the digital world as in the physical world. Analogies can be made to efforts to combat climate change, where the costs of transition, mitigation or adaptation do not fall evenly, and international assistance is essential to help resource-constrained countries so that they can join the global effort to tackle a planetary challenge.

Recommendation 5
Global fund for AI

We recommend the creation of a global fund for AI to put a floor under the AI divide. Managed by an independent governance structure, the fund would receive financial and in-kind contributions from public and private sources and disburse them, including via the capacity development network, to facilitate access to AI enablers to catalyse local empowerment for the SDGs, including:

a) Shared computing resources for model training and fine-tuning by AI developers from countries without adequate local capacity or the means to procure it;

b) Sandboxes and benchmarking and testing tools to mainstream best practices in safe and trustworthy model development and data governance;

d) Data sets and research into how data and models could be combined for SDG-related projects; and

Global AI data framework

AI training data. This aim motivates our proposal for a global AI data framework.
share data in a fair, safe and equitable manner. The development of these templates and the actual storage and analysis of data held in commons or in trusts could be supported by the proposed capacity development network and global fund for AI (recommendations 4 and 5).

liv The United Nations is uniquely positioned to support the establishment of global principles and practical arrangements for AI training data governance and use, in line with agreed international commitments on human rights, intellectual property and sustainable development, building on years of work by the data community and integrating it with recent developments on AI ethics and governance. This is analogous to the role of the United Nations Commission on International Trade Law in advancing international trade by developing legal and non-legal cross-border frameworks.

lv Similarly, the Commission on Science and Technology for Development and the Statistical Commission have on their agenda data for development and data on the SDGs. There are also important issues of content, copyright and protection of indigenous knowledge and cultural expression being considered by the World Intellectual Property Organization (WIPO).
Recommendation 6
Global AI data framework
a) Outlining data-related definitions and principles for global governance of AI training data,
including as distilled from existing best practices, and to promote cultural and linguistic
diversity;
b) Establishing common standards around AI training data provenance and use for
transparent and rights-based accountability across jurisdictions; and
i) Data trusts;
iii) Model agreements for facilitating international data access and global
interoperability, potentially as techno-legal protocols to the framework.
lviii The patchwork of norms and institutions outlined under the section “Global AI governance gaps” above reflects widespread recognition that governance of AI is a global necessity. The unevenness of that response demands some measure of coherent effort.

lxi Such a body should be agile, champion inclusion and partner rapidly to accelerate coordination and implementation – drawing as a first priority on existing resources and functions within the United Nations system. The focus should be on civilian applications of AI.
[Figure (c): the international AI governance ecosystem. United Nations engagement connects the proposed international scientific panel, governance dialogue, standards exchange, capacity development network, global fund for AI and AI data framework with outside initiatives, including AI summits, the Council of Europe, GPAI, the OECD, the Group of 20, the Group of Seven, regional SDOs, and national and regional initiatives. Abbreviations: GPAI, Global Partnership on Artificial Intelligence; OECD, Organisation for Economic Co-operation and Development; SDOs, standards development organizations.]
lxii It could be staffed in part by United Nations personnel seconded from specialized agencies and other parts of the United Nations system, such as ITU, UNESCO, the Office of the United Nations High Commissioner for Human Rights (OHCHR), UNCTAD, the United Nations University and the United Nations Development Programme (UNDP). It should engage multiple stakeholders, including companies, civil society and academia, and work in partnership with leading organizations outside of the United Nations (see fig. (c)). This would position the United Nations to enable connections for fostering common understanding, common ground and common benefits in the international AI governance ecosystem.

lxiii Recommendation 7 is made on the basis of a clear-eyed assessment as to where the United Nations can add value, including where it can lead, where it can aid coordination and where it should step aside. It also brings the benefits of existing institutional arrangements, including pre-negotiated funding and administrative processes that are well established and understood.
Recommendation 7
AI office within the Secretariat
We recommend the creation of an AI office within the Secretariat, reporting to the Secretary-
General. It should be light and agile in organization, drawing, wherever possible, on relevant
existing United Nations entities. Acting as the “glue” that supports and catalyses the
proposals in this report, partnering and interfacing with other processes and institutions, the
office’s mandate would include:
a) Providing support for the proposed international scientific panel, policy dialogue,
standards exchange, capacity development network and, to the extent required, the
global fund and global AI data framework;
c) Advising the Secretary-General on matters related to AI, coordinating with other relevant
parts of the United Nations system to offer a whole-of-United Nations response.
8 See https://safe.ai/work/statement-on-ai-risk.
Figure (d): High-level Advisory Body on Artificial Intelligence at its meeting in
Singapore, 29 May 2024
2 See annex C for an overview of the consultations.
9 Beyond immediate multilateral debates and processes involving Governments, our report is also intended for civil society and the private sector, researchers and concerned people around the world. We are acutely aware that achieving the ambitious goals that we have outlined can only happen with multisector global participation.

10 Overall, we believe that the future of this technology is still open. This has been corroborated by our deep dive into the direction of technology and the debate between open and closed approaches to its development (see box 9). Larger and more powerful models developed in fewer and fewer corporations is one alternative future. Another could be a more diverse global innovation landscape dominated by interoperable small to medium-sized AI models delivering a multitude of societal and economic applications. Our recommendations seek to make the latter more likely, while also acknowledging the risks.

11 From its founding, the United Nations has been committed to promoting the economic and social advancement of all peoples.3 The Millennium Development Goals sought to establish ambitious targets so that economic opportunities are made available to all the world’s people; the Sustainable Development Goals (SDGs) then sought to reconcile the need for development with the environmental constraints of our planet. The expanded development, deployment and use of AI tools and systems pose the next great challenge to ensuring that we embrace our digital future together, rather than widening our digital divide.

by AI’s potential for power and prosperity, at a time of intense geopolitical competition. Many societies are still at the margins of AI development, deployment and use, while a few are gripped by excitement mixed with concern at AI’s cross-cutting impact.

13 Despite the challenges, there is no opt-out. The stakes are simply too high for the United Nations, its Member States and the wider community whose aspirations the United Nations represents. We hope that this report provides some signposts to help our concerted efforts to govern AI for humanity.

A. Opportunities and enablers

14 AI is transforming our world. This suite of technologies4 offers tremendous potential for good, from opening new areas of scientific inquiry (see box 1) and optimizing energy grids, to improving public health or agriculture.5 If realized, the potential opportunities afforded by the use of AI tools for individuals, sectors of the economy, scientific research and other domains of public interest could play important roles in boosting our economies (see box 2), as well as transforming our societies for the better. Public interest AI – such as forecasting of and addressing pandemics, floods, wildfires and food insecurity – could even help to drive progress on the SDGs.
3 This included through trade, foreign direct investment and technology transfer as enablers for long-term development.
4 According to the Organisation for Economic Co-operation and Development (OECD), “An AI system is a machine-based system that, for explicit or implicit
objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical
or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment” (see https://oecd.ai/en/wonk/ai-system-
definition-update).
5 We believe, however, that rigorous assessment by domain experts is needed to assess claims of AI’s benefits. Pursuit of AI for good should be based on
scientific evidence and a thorough evaluation of trade-offs and alternatives. In addition to scientific inquiry, the social sciences are also being transformed.
Box 1: The impact of AI on science

The impact of AI on science spans major disciplines. From biology to physics, and from environmental science to social sciences, AI is being integrated in research workflows, and is accelerating the production of scientific knowledge. Some of the claims today might be hyped, while others have been demonstrated, and AI’s long-term potential appears promising.a
For example, in biology, the 50-year challenge of protein-folding and protein structure prediction has been
addressed with AI. This includes predicting the structure of over 200 million proteins, with the resulting open-
access database being used by over 2 million scientists in over 190 countries at the time of writing, many of them
working on neglected diseases. This has since been extended to life’s other biomolecules, DNA, RNA and ligands
and their interactions.
For Alzheimer’s, Parkinson’s and amyotrophic lateral sclerosis (ALS), experts using AI are identifying disease
biomarkers and predicting treatment responses, significantly improving precision and speed of diagnosis and
treatment development.b Broadly, AI is helping to advance precision medicine (e.g. in neurodegenerative diseases)
by tailoring treatments based on genetic and clinical profiles. AI technology is also helping to accelerate the
discovery and development of new chemical compounds.c
In radio astronomy, the speed and scale of data being collected by modern instruments, such as the Square
Kilometre Array, can overwhelm traditional methods. AI can make a difference, including by helping to select
which part of the data to focus on for novel insights. Through “unsupervised clustering”, AI can pick out patterns
in data without being told what specifically to look for.d Applying AI to social science research could also offer
profound insights into complex human dynamics, enhancing our understanding of societal trends and economic
developments.
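To make the idea of “unsupervised clustering” concrete, the sketch below runs a toy k-means on a handful of invented two-dimensional points. The data, the pure-Python implementation and all names are illustrative assumptions, not anything drawn from the report or from a real astronomy pipeline; the point is only that, given no labels, the algorithm still surfaces the grouping in the data.

```python
# Minimal illustration of unsupervised clustering: a tiny k-means groups
# points into clusters without ever being given labels, loosely analogous
# to letting AI pick out patterns in survey data. Illustrative only.

def kmeans(points, k, iters=20):
    # Deterministic initialization: use the first k points as centroids.
    centroids = [points[i] for i in range(k)]
    assignment = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        assignment = [
            min(range(k), key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [pt for pt, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return assignment, centroids

# Two well-separated blobs; no labels are provided to the algorithm.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),   # blob near the origin
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.2)]   # blob near (5, 5)
labels, centers = kmeans(data, k=2)
print(labels)  # → [0, 0, 0, 1, 1, 1]: the two blobs are recovered
```

Real pipelines would of course operate on millions of high-dimensional measurements with library implementations, but the principle is the same: structure is inferred from the data itself rather than from predefined categories.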
In time, by enabling unprecedented levels of interdisciplinarity, AI may be designed and deployed to spawn new
scientific domains, just as bioinformatics and neuroinformatics emerged from the integration of computational
techniques with biological and neurological research. AI’s ability to integrate and analyse diverse data sets from
areas such as climate change, food security and public health could open research avenues that bridge these
traditionally separate fields, if done responsibly.
AI may also enhance the public policy impact of scientific research by allowing for the validation of complex
hypotheses, for example combining climate models with agricultural data to predict food security risks and linking
these insights with public health outcomes. Another prospect is the boosting of citizen science and the leveraging
of local knowledge and data for global challenges.
a See John Jumper and others, “Highly accurate protein structure prediction with AlphaFold”, Nature, vol. 596 (July 2021), pp. 583–589; see also Josh
Abramson and others, “Accurate structure prediction of biomolecular interactions with AlphaFold 3”, Nature, vol. 630, pp. 493–500 (May 2024).
b Isaias Ghebrehiwet and others, “Revolutionizing personalized medicine with generative AI: a systematic review”, Artificial Intelligence Review, vol. 57,
No. 127 (April 2024).
c Amil Merchant and others, “Scaling deep learning for materials discovery”, Nature, vol. 624, pp. 80–85 (November 2023).
d Zack Savitsky, “Astronomers are enlisting AI to prepare for a data downpour”, MIT Technology Review, 20 May 2024.
Box 2: Economic opportunities of AI
Since the Industrial Revolution, a handful of innovations have dramatically accelerated economic progress. These
earlier “general-purpose technologies” have reshaped multiple sectors and industries. The last major change
came with computers and the digital age. These technologies transformed economies and increased productivity
worldwide, but their full impact took decades to be felt.
Generative AI is breaking the trend of slow adoption. Experts believe its transformative effects will be seen within
this decade. This quick integration means new developments in AI could rapidly reshape industries, change work
processes and increase productivity. The rapid adoption of AI may thus transform our economies and societies in
unprecedented ways.
The economic benefits of AI may be considerable. Although it is difficult to predict all the ramifications of AI on
our complex economies, projections indicate that AI could significantly increase global gross domestic product,
with relevant impacts across almost all sectors. For businesses, especially micro and small and medium-sized
enterprises, AI can offer access to advanced analytics and automation tools, which were previously only available
to larger corporations. The wide applicability of AI suggests that AI could be a general-purpose technology. As
such, AI could enable productivity for individuals, small and large businesses, and other organizations in sectors
as diverse as retail, manufacturing and operations, health care and the public sector, in developed and developing
economies.a Realizing these benefits will require broad adoption within and across sectors; application in productivity-enhancing uses; and AI that makes workers more productive and ushers in new economic activities at scale. It will also require investment and capital deepening, co-innovations, process and organizational changes, workforce readiness and enabling policies.
Opportunities
• New products and business models, including leapfrogging solutions, solutions for bottom-of-pyramid individuals, and easier access to credit

Risks
• Obsolescence of the traditional export-led path to economic growth
• Increased digital and technological divide
a James Manyika and Michael Spence, “The coming AI economic revolution: can artificial intelligence reverse the productivity slowdown?”, Foreign
Affairs, 24 October 2023.
Research has also shown that job displacement, when it occurs, is expected to play out differently in economies at different stages of development.d While advanced economies are more exposed, they are also better prepared to
harness AI and complement their workforce. Low- and middle-income countries may have fewer capabilities to
leverage this technology. Additionally, the integration of AI in the workforce may disproportionately affect certain
demographics, with women potentially facing a higher risk of job displacement in some sectors.
Without focused and coordinated efforts to close the digital divide, AI’s potential to be harnessed in support of sustainable development and poverty alleviation will not be realized, leaving large segments of the global population disadvantaged in the swiftly changing technological environment and exacerbating existing inequalities.
To successfully integrate AI into the global economy, we need effective governance that manages risks and
ensures fair outcomes. This means, among other options, creating regulatory sandboxes for testing AI systems,
promoting international cooperation on standards and setting up mechanisms to continuously evaluate AI’s
impact on labour markets and society. Apart from sound national AI strategies and international support, it
specifically requires:
• Skills development: Implementing education and training programmes to develop AI skills across the
workforce, from basic digital literacy to advanced technical expertise, to prepare workers for an AI-
augmented future.
• Digital infrastructure: Significant investment in digital infrastructure, especially in developing countries, to
bridge the AI divide and facilitate widespread AI adoption.
• Workplace integration: Leveraging social dialogue and public-private partnerships for managing AI
integration in the workplace, ensuring worker participation in the process and protecting labour rights.
• Value chain considerations: Ensuring decent work conditions along the entire AI value chain, including
often overlooked areas, such as data annotation and content moderation, for equitable AI development.
b Erik Brynjolfsson and others, “Generative AI at work”, National Bureau of Economic Research, working paper 31161, 2023; see also Shakked Noy
and Whitney Zhang, “Experimental evidence on the productivity effects of generative artificial intelligence”, Science, vol. 381, No. 6654, pp. 187–192
(July 2023).
c Pawel Gmyrek and others, Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality (Geneva: ILO, 2023).
d Mauro Cazzaniga and others, “Gen-AI: artificial intelligence and the future of work”, staff discussion note SDN2024/001 (Washington, D.C.:
International Monetary Fund, 2024).
Final Report 27
B. Key enablers for …

… beyond a few people in a few countries. Ensuring that AI is deployed for the common good, and that its opportunities are distributed equitably, …

Challenges to traditional regulatory systems arise from AI’s speed, opacity and autonomy. AI’s … believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons (see paras. 26–28 below).
6 “An analysis of the location of grant recipients’ headquarters from a database of US-majority foundations reveals that from 2018 to 2023, only 10 percent
of grants allocated toward AI initiatives that address one or more of the SDGs went to organizations based in low- or middle-income countries … Analysis of
private capital shows that 36 percent of 9,000 companies addressing SDGs are headquartered in the United States, but these companies received 54 percent of
total funding. We also found that while 20 percent of 9,000 companies addressing SDGs are headquartered in lower- or middle-income countries, they received
a higher proportion (25 percent) of total funding. One reason for this is that Chinese companies receive a high proportion of investment … The remaining
developing countries in the sample received only 3 percent of funding while representing 7 percent of the sample” (Medha Bankhwal and others,
“AI for social good: improving lives and protecting the planet”, McKinsey & Company, May 2024).
7 The invitee list was constructed from the Office of the Secretary-General’s Envoy on Technology (OSET) and the Advisory Body’s networks, including
participants in deep dives. Additional experts were regularly invited during the fielding period to improve representation. The final n=348 represents a strong,
balanced global sample of respondents with relevant expertise to provide an informed opinion on AI risks (see annex E for the methodology).
n. Environmental harms (e.g. accelerating energy consumption and carbon emissions)
22 From a list of example AI-related risk areas,8 a plurality of experts were concerned or very concerned about harms related to:
a. Societal implications of AI: 78 per cent regarding damage to information integrity [question j], 74 per cent regarding inequalities such as concentration of wealth and power in a few hands [question l] and 67 per cent regarding discrimination / disenfranchisement, particularly among marginalized communities [question i];
b. Intentional use of AI that harms others: 75 per cent regarding use in armed conflict by State actors [question b], 72 per cent regarding malicious use by non-State actors [question a] and 65 per cent regarding use by State actors that harms individuals [question c].

23 In all but two example risk areas, most AI experts polled were concerned or very concerned about harms materializing. Although fewer than half of experts expressed such concern regarding unintended harms from AI [questions e and f], 1 in 6 of those who were very concerned about unintended AI harms mentioned that they expected agentic systems to have some of the most surprising or significant impacts on AI-related risks by 2025.9

24 Expert perceptions varied, including by region and gender (see annex E for more detailed results). This highlighted the importance of inclusive representation in exercises concerning definition of shared risks. Despite the variation, the results did reveal concerns about AI harms over the coming year, highlighting a sense of urgency among experts to address risks across multiple areas and vulnerabilities in the near future.
8 Built on the vulnerability-based risk categorization in box 4, an earlier version of which was in our interim report.
9 Question: “What emerging trends today do you think could have the most surprising and/or significant impact on AI-related risks over the next 18 months?”
25 Moreover, autonomous weapons in armed conflict, crime or terrorism, and public-security use of AI in particular, raise serious legal, security and humanitarian questions (see box 3).10

26 Risk management requires going beyond listing or prioritizing risks, however. Framing risks based on vulnerabilities can shift the focus of policy agendas from the “what” of each risk (e.g. “risk to safety”) to “who” is at risk and “where”, as well as who should be accountable in each case.
Among the challenges of AI use in the military domain are new arms races, the lowering of the threshold of
conflict, the blurring of lines between war and peace, proliferation to non-State actors and derogation from long-
established principles of international humanitarian law, such as military necessity, distinction, proportionality and
limitation of unnecessary suffering. On legal and moral grounds, kill decisions should not be automated through
AI. States should commit to refraining from deploying and using military applications of AI in armed conflict in
ways that are not in full compliance with international law, including international humanitarian law and human
rights law.
Presently, 120 Member States support a new treaty on autonomous weapons, and both the Secretary-General
and the President of the International Committee of the Red Cross have called for such treaty negotiations to be
completed by 2026. The Advisory Body urges Member States to follow up on this call.
The Advisory Body considers it essential to identify clear red lines delineating unlawful use cases, including
relying on AI to select and engage targets autonomously. Building on existing commitments on weapons reviews
in international humanitarian law, States should require weapons manufacturers through contractual obligations
and other means to conduct legal and technical reviews to prevent unethical design and development of military
applications of AI. States should also develop legal and technical reviews of the use of AI, as well as of weapons and means of warfare, and share related best practices.
Furthermore, States should develop common understandings relating to testing, evaluation, verification and
validation mechanisms for AI in the security and military domain. They should cooperate to build capacity
and share knowledge by exchanging good practices and promoting responsible life cycle management of AI
applications in the security and military domain. To prevent acquisition of powerful and potentially autonomous
AI systems by dangerous non-State actors, such as criminal or terrorist groups, States should set up appropriate
controls and processes throughout the life cycle of AI systems, including managing end-of-life cycle processes
(i.e. decommissioning) of military AI applications.
For transparency, “advisory boards” could be set up to provide independent expert advice and scrutiny across the
full life cycle of security and military applications of AI. Industry and other actors should consider mechanisms to
prevent the misuse of AI technology for malicious or unintended military purposes.
10 This list is intended to be illustrative only, touching on only a few of the risks facing individuals and societies.
Individuals
• Human dignity, value or agency (e.g. manipulation, deception, nudging, sentencing, exploitation,
discrimination, equal treatment, prosecution, surveillance, loss of human autonomy and AI-assisted
targeting).
• Physical and mental integrity, health, safety and security (e.g. nudging, loneliness and isolation,
neurotechnology, lethal autonomous weapons, autonomous cars, medical diagnostics, access to health
care, and interaction with chemical, biological, radiological and nuclear systems).
• Life opportunities (e.g. education, jobs and housing).
• (Other) human rights and civil liberties, such as the right to presumption of innocence (e.g. predictive policing), the right to a fair trial (e.g. recidivism and culpability prediction, and autonomous trials), freedom of expression and information (e.g. nudging, personalized information, info
bubbles), privacy (e.g. facial recognition technology), and freedom of assembly and movement (e.g.
tracking technology in public spaces).
Economy
• Power concentration.
• Technological dependency.
• Unequal economic opportunities, market access, resource distribution and allocation.
• Underuse of AI.
• Overuse of AI or “technosolutionism”.
• Stability of financial systems, critical infrastructure and institutions.
• Intellectual property protection.
Environment
• Excessive consumption of energy, water and material resources (including rare minerals and other natural
resources).
28 The policy-relevance of taking a vulnerability-based lens to AI-related risks is illustrated by examining governance considerations from the perspective of a particular vulnerable group, such as children (see box 5).

29 The individuals, groups or entities of concern identified via a vulnerability-based framing of AI risks – and implied policy agendas – can themselves vary. The AI Risk Global Pulse Check also asked AI experts which individuals, groups, societies/economies/(eco)systems they were particularly concerned would be harmed by AI in the next 18 months. Marginalized communities and the global South, along with children, women, youths, creatives and those with jobs susceptible to automation, were particularly highlighted (see fig. 3).
Figure 3: Groups identified by experts as particularly at risk of harm from AI (word cloud). Keywords include: global South; marginalized communities; everyone; children; workers; low-skilled workers; early career workers; informal workforce; less educated; LGBT+; Indigenous; minorities; migrants; Africans; Sub-Saharan Africans; Latin Americans; people in autocratic States; people who treat AI as a companion; ecosystems; public institutions; journalists; teachers; students; persons with disabilities; small businesses; health sector; intellectual property holders.
Note: Keywords tagged for each response by OSET. Showing only keywords identified in 2+ responses. Font size is proportional to number of responses mentioned. For scale, “global South” was identified by 46 of 188 respondents who provided meaningful responses to this question; “marginalized communities” by 43 of 188.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
30 These results illustrate the importance of inclusive representation when reaching common understandings of AI risks and common ground on policy agendas, as per recommendations 1 and 2. Without such representation, AI governance policy agendas could be framed in ways that miss the concerns of portions of humanity, who will nonetheless be affected.

F. Challenges to be …

32 The race to develop and deploy AI systems defies traditional regulatory systems and governance regimes. Most experts polled for the AI Risk Global Pulse Check expected AI acceleration over the next 18 months, both in its development (74 per cent) and adoption and application (89 per cent) (see fig. 4).

33 As mentioned in paragraph 23, some experts expect the deployment of agentic systems in 2025. Moreover, leading technical experts acknowledge that many AI models remain opaque, with their …
Figure 4: Experts’ expectations regarding AI technological development

74% expect the pace of technical change to accelerate (30% substantially). Question: “In the next 18 months, compared to the last 3 months, do you expect the pace of technical change in AI (e.g. development / release of new models) to...” (n = 348)

89% expect the pace of adoption and application to accelerate (34% substantially); no respondents expected deceleration in adoption and application. Question: “In the next 18 months, compared to the last 3 months, do you expect the pace of adoption and application of AI (e.g. new uses of AI in business / government) to...” (n = 348)

Note: Numbers may not add up to 100% owing to rounding. Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
35 A societal risk thus emerges that ever-fewer individuals end up being held accountable for harms arising from their decisions to automate processes using AI, even as increasingly powerful systems enter the world. This demands agile governance to ensure that accountability mechanisms keep pace with accelerating AI.

36 If the pace of AI development and deployment challenges existing institutions, so does the breadth. A general-purpose technology with global reach, advanced AI can be deployed across domains affecting societies in manifold ways, with broad policy implications.

37 The implications and potential impact of AI’s intersection with multiple areas, including finance, labour markets, education and political systems, presage broad consequences that demand a whole-of-society approach (see examples in box 6). Existing institutions must mount holistic, cross-sectoral responses that address AI’s wide-ranging societal impacts.

38 The pace, breadth and uncertainty of AI’s development, deployment and use highlight the value of a holistic, transversal and agile approach to AI. Internationally, a holistic perspective needs to be mirrored in a networked institutional approach to AI governance across sectors and borders, which engages stakeholders without being captured by them.

39 On climate change, the world has come to realize only belatedly that a holistic approach to global collective action is needed. With AI, there is an opportunity to do so by design.

40 The above challenges are compounded by an associated concentration of wealth and decision-making among a handful of private AI developers and deployers, particularly multinational corporations. This raises another question of how stakeholders can be engaged in AI’s governance without undermining the public interest.
As AI becomes more powerful and widespread, its development, deployment and application will become more
personalized, with the potential to foster alienation and addiction. To some Advisory Body members, AI trained on
an individual’s data, and its consequent role as a primary interlocutor and intermediary, may reflect an inflection
point for human beings – with the potential to create urgent new societal challenges, while exacerbating existing
ones.
For example, future AI systems may be able to generate an endless feed of high-quality video content tailored
to individuals’ personal preferences. Increased social isolation, alienation, mental health issues, loss of human
agency and impacts on emotional intelligence and social development are only a few of the potential outcomes.
These issues are already insufficiently explored by policymakers in the context of technologies such as smart
devices and the Internet; they are almost completely unexplored in the context of AI, with current governance
frameworks prioritizing risks to individuals, rather than society as a whole.
As policymakers consider future responses to AI, they must weigh these factors as well, and develop policies
that promote societal well-being, particularly for youth. Government interventions could foster environments that prioritize face-to-face interactions between humans, make mental health support more readily available, and channel more investment into sports facilities, public libraries and the arts.
Nevertheless, prevention is better than cure: industry developers should design their products without addictive
personalized features, ensure that the products do not damage mental health and promote (rather than
undermine) a sense of shared belonging in society. Tech companies should establish policies to manage societal
risks on an equal basis to other risks as part of efforts to identify and mitigate risks across the entire life cycle of
AI products.
Deepfakes, voice clones and automated disinformation campaigns pose a specific and serious threat to
democratic institutions and processes such as elections, and to democratic societies and social trust more
generally, including through foreign information manipulation and interference (FIMI). The development of closed
loop information ecosystems, reinforced by AI and leveraging personal data, can have profound effects on
societies, potentially making them more accepting of intolerance and violence towards others.
Protecting the integrity of representative government institutions and processes requires robust verification and
deepfake detection systems, alongside rapid notice and take-down procedures for content that is likely to deceive
in a way that causes harm or societal divisions, or which promotes war propaganda, conflict and hate speech.
Individuals who are not public figures should have protections from others creating deepfakes in their likeness for
fraudulent, defamatory or otherwise abusive purposes. Sexualized deepfakes are a particular concern for women
and girls and may be a form of gender-based violence.
Box 6: AI-related societal impacts (continued)
Voluntary commitments from private sector players – such as labelling deepfakes or enabling users to flag and
then take down deepfakes made or distributed with malicious intent – are important first steps. However, they do
not sufficiently mitigate societal risks. Instead, a global, multi-stakeholder approach is required, alongside binding
commitments. Common standards for content authentication and digital provenance would allow for a globally
recognized approach to identify synthetic and AI-modified images, videos and audio.
Additionally, real-time knowledge-sharing between public and private actors, based on international standards,
would allow for rapid-response capabilities to immediately take down deceptive content or FIMI before it has
a chance to go viral. Nonetheless, these processes should incorporate safeguards to ensure that they are not
manipulated or abused to abet censorship.
These actions should be accompanied by preventive measures, to increase societal resilience to AI-driven
disinformation and propaganda, such as public awareness campaigns on AI’s potential to undermine information
integrity. Member States should additionally promote media and digital literacy campaigns, support fact-checking
initiatives and invest in capacity-building for the FIMI defender community.
11 See https://un.org/ai-advisory-body.
A. Guiding principles and functions for international AI governance

47 In our interim report, we outlined five principles that should guide the formation of new international AI governance institutions:
• Guiding principle 1: AI should be governed inclusively, by and for the benefit of all
• Guiding principle 2: AI must be governed in the public interest
• Guiding principle 3: AI governance should be built in step with data governance and the promotion of data commons
• Guiding principle 4: AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration
• Guiding principle 5: AI governance should be anchored in the Charter of the United Nations, international human rights law and other agreed international commitments such as the SDGs

48 Box 7 summarizes the feedback on these principles, which emphasized the importance of human rights and the need for greater clarity on effective implementation of the guiding principles, including ensuring that the call for inclusivity was backed by action, and that marginalized groups would be represented.

49 In our interim report, we also proposed several institutional functions that might be pursued at the international level (see fig. 5). The feedback largely confirmed the need for these functions at the global level, while calling for additional complementary functions related to data and AI governance (putting guiding principle 3 – that AI governance should be built in step with data governance and the promotion of data commons – into practice).
Figure 5: AI governance functions
Based on the extensive consultations conducted by the High-level Advisory Body following the publication of its interim
report, guiding principle 5 (AI governance should be anchored in the Charter of the United Nations, international human
rights law and other agreed international commitments) garnered the strongest support across all sectors of stakeholders,
including governments, civil society, the technical community, academia and the private sector. This included respecting,
promoting and fulfilling human rights and prosecuting their violations, as well as General Assembly resolution 78/265 on
seizing the opportunities of safe, secure and trustworthy AI systems for sustainable development, unanimously adopted in
March 2024.
The Advisory Body in its deliberations was convinced that to mitigate the risks and harms of AI, to deal with novel use
cases and to ensure that AI can truly benefit all of humanity and leave no one behind, human rights must be at the centre of
AI governance, ensuring rights-based accountability across jurisdictions. This foundational commitment to human rights is
cross-cutting and applies to all the recommendations made in this final report.
Many stakeholders emphasized the need for detailed action plans and clear guidelines to ensure effective implementation
of the Advisory Body’s guiding principles for international AI governance. Governmental entities suggested developing
clear recommendations for defining and ensuring the public interest, along with mechanisms for public participation and
oversight. The need for clear policies and leveraging existing regulatory frameworks to maintain competitive and innovative
AI markets was often stressed by private sector entities. Many international organizations and civil society organizations
also called for agile governance systems designed to respond in a timely manner to evolving technologies. Some
specifically requested a new entity with “muscle and teeth”, beyond mere coordination.
A common concern was accountability for discriminatory, biased and otherwise harmful AI, with suggestions for
mechanisms to ensure accountability and remedies for harm and address the concentration of technological capacity and
market power. Many organizations highlighted the necessity of addressing unchecked power and ensuring consumer rights
and fair competition. Academic institutions recognized the strengths of the guiding principles in their universality and
inclusivity, but suggested improvements in stakeholder engagement. Private sector actors emphasized responsible use of
AI, along with breaking down barriers to access.
The absence of data governance systems was mentioned in multiple consultations, with stakeholders indicating that
the United Nations was a natural venue for dialogue on data governance. Governments emphasized the need for robust
data governance frameworks that prioritized privacy, data protection and equitable data use, advocating for international
guidelines to manage data complexities in AI development. Stakeholders requested that these frameworks be developed through a transparent and inclusive process, integrating ethical considerations such as consent and privacy.
Academia highlighted that data governance should be dealt with as a priority in the short term. Private sector entities
noted that data governance measures should complement AI governance, emphasizing comprehensive privacy laws and
responsible AI use. International organizations and civil society organizations stressed that governance of AI training data
should protect consumer rights and support fair competition among AI developers via non-exclusive access to AI training
data, underscoring the call for specific and actionable data governance measures. The United Nations was identified as a
key venue for addressing these governance challenges and bridging resource disparities.
Figure 6: Interregional and regional AI governance initiatives, key milestones,
2019–2024 (H1)
Not exhaustive. Initiatives mapped include interregional and between-region cooperation (e.g. United States-United Kingdom, New Zealand-United Kingdom, United States-Singapore and United States-EU), the AI summits, the United Nations, plurilateral processes (e.g. CoE, G7, G20, GPAI and OECD), industry and standards bodies (e.g. CEN-CENELEC, FMF, IEC, IEEE, ISO, ITU and WSC) and regional initiatives adopted by governments (e.g. ASEAN, AU) and by companies.
Abbreviations: ANSI, American National Standards Institute; ASEAN, Association of Southeast Asian Nations; AU, African Union; BSI, British Standards Institution; CEN, European
Committee for Standardisation; CENELEC, European Committee for Electrotechnical Standardization; CoE, Council of Europe; ETSI, European Telecommunications Standards
Institute; EU, European Union; FMF, Frontier Model Forum; G20, Group of 20; G7, Group of Seven; GPAI, Global Partnership on Artificial Intelligence; IEC, International
Electrotechnical Commission; IEEE, Institute of Electrical and Electronics Engineers; ISO, International Organization for Standardization; ITU, International Telecommunication
Union; NIST, National Institute of Standards and Technology; OAS, Organization of American States; OECD, Organisation for Economic Co-operation and Development; SAC,
Standardization Administration of China; WSC, World Standards Cooperation.
… since 2023, spurred by releases of multiple general-purpose AI large language models following the release of ChatGPT in November 2022.

55 In parallel, industry standards on AI have been developed and published for adoption internationally. Other multi-stakeholder initiatives have also sought to bridge the divide between the public and private sectors, including in discussion arenas such as the Internet Governance Forum.

56 A survey of some of the sources of AI governance initiatives and industry standards, mapped by geographical range and inclusiveness, is provided in figure 7 (in listing this recent work, we acknowledge many years of efforts by academics, civil society and professional bodies).

57 Examples of relevant regional and interregional plurilateral initiatives include those led by the African Union, various hosts of AI summits, the Association of Southeast Asian Nations, the Council of Europe, the European Union, the Group of Seven (G7), the Group of 20 (G20), the Global Partnership on Artificial Intelligence, the Organization of American States and the Organisation for Economic Co-operation and Development (OECD), among others.

58 Our analysis of current governance arrangements is likely to be outdated within months. Nevertheless, it can help to illustrate how current and emerging international AI governance initiatives relate to our guiding principles for the formation of new global governance institutions for AI, including principle 1 (AI should be governed inclusively, by and for the benefit of all).
3. Global AI governance gaps
59 The multiple national, regional, multi-stakeholder and other initiatives mentioned above have yielded meaningful gains and informed our work; many of their representatives have contributed to our deliberations in writing or participated in our consultations.

60 Nonetheless, beyond a couple of initiatives emerging from the United Nations,12 none of the initiatives can be truly global in reach. These representation gaps in AI governance at the international level are a problem, because the technology is global and will be comprehensive in its impact.

61 Separate coordination gaps between initiatives and institutions risk splitting the world into disconnected and incompatible AI governance regimes.

62 Furthermore, implementation and accountability gaps reduce the ability of States, the private sector, civil society, academia and the technical community to translate commitments, however representative, into tangible outcomes.

A. Representation gaps

63 Our analysis of the various non-United Nations AI governance initiatives that span regions shows that most initiatives are not fully representative in their intergovernmental dimensions.

64 … overlapping membership, seven countries are parties to all of them, whereas fully 118 countries are parties to none (primarily in the global South, with uneven representation even of leading AI nations; see fig. 8).

65 Selectivity is understandable at an early stage of governance when there is a degree of experimentation, competition around norms and diverse levels of comfort with new technologies. However, as international AI governance matures, global representation becomes more important in terms of equity and effectiveness.

66 Besides the non-inclusiveness of existing efforts, representation gaps also exist in national and regional initiatives focused on reaching common scientific understandings of AI. These representation gaps may manifest in decision-making processes regarding how assessments are scoped, resourced and conducted.

67 Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects all of us, as well as recognizing that many communities have historically been excluded from those conversations. The relative paucity, on the agendas of major initiatives, of topics that are priorities of certain regions signals an imbalance stemming from underrepresentation.13
12 The United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2021), and two General
Assembly resolutions on AI.
13 For example, governance of AI training data sets, access to computational power, AI capacity development, AI-related risks regarding discrimination of
marginalized groups and use of AI in armed conflict (see annex E for results of the AI Risk Global Pulse Check, which shows different perceptions of risks by
respondents from the Western European and Others Group versus others). Many States and marginalized communities have also been excluded from the
benefits of AI or may disproportionately suffer its harms. Equity demands a diverse and inclusive approach that accounts for the views of all regions and that
spreads opportunities evenly while mitigating risks.
Sample (interregional only, excludes regional): OECD AI Principles (2019), G20 AI principles (2019), Council of Europe AI Convention drafting group (2022–2024), GPAI Ministerial Declaration (2022), G7 Ministers’ Statement (2023), Bletchley Declaration (2023) and Seoul Ministerial Declaration (2024).
* Per endorsement of relevant intergovernmental issuances. Countries are not considered involved in a plurilateral initiative solely because of membership in the European Union or
the African Union. Abbreviations: AG, African Group; APG, Asia and the Pacific Group; EEG, Eastern European Group; G20, Group of 20; G7, Group of Seven; GPAI, Global Partnership
on Artificial Intelligence; LAC, Latin America and the Caribbean; OECD, Organisation for Economic Co-operation and Development; WEOG, Western European and Others Group.
from the transboundary character of AI, spurring learning, encouraging interoperability and sharing AI benefits. There are, moreover, benefits to including diverse views, including un-likeminded views, to anticipate threats and calibrate responses that are creative and adaptable.14

69 By limiting the range of countries included in key agenda-shaping, relationship-building and information-sharing processes, selective plurilateralism can limit the achievement of its own goals. These include compatibility of emerging AI governance approaches, global AI safety and shared understandings regarding the science of AI at the global level (see recommendations 1, 2 and 3 on what makes a global approach particularly effective here).

70 The two General Assembly resolutions on AI adopted in 2024 so far15 signal acknowledgement among leading AI nations that representation gaps need to be addressed regarding international AI governance, and the United Nations could be the forum to bring the world together in this regard.

71 The Global Digital Compact in September 2024, and the World Summit on the Information Society Forum in 2025, offer two additional policy windows where a globally representative set of AI governance processes could be institutionalized to address representation gaps.16
14 If and when red lines are established – analogous perhaps to the ban on human cloning – they will only be enforceable if there is global buy-in to the norm, as
well as monitoring compliance. This remains the case despite the fact that, paradoxically, in the current paradigm, while the costs of a given AI system go down,
the costs of advanced AI systems (arguably the most important to control) go up.
15 Resolutions 78/265 (seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development) and 78/311
(enhancing international cooperation on capacity-building of artificial intelligence).
16 Various plurilateral initiatives, including the OECD AI Principles, the G7 Hiroshima AI Process and the Council of Europe Framework Convention on Artificial
Intelligence, are open to supporters or adherents beyond original initiating countries. Such openness might not, however, deliver representation and legitimacy
at the speed and breadth required to keep pace with accelerating AI proliferation globally. Meanwhile, representation gaps in international AI governance
processes persist, with decision-making concentrated in the hands of a few countries and companies.
Final Report 43
B. Coordination gaps

72 The ongoing emergence and evolution of AI governance initiatives are not guaranteed to work together effectively for humanity. Instead, coordination gaps have appeared. Effective handshaking between the selective plurilateral initiatives (see fig. 8) and other regional initiatives is not assured, risking incompatibility between regions.

73 Nor are there global mechanisms for all international standards development organizations (see fig. 7), international scientific research initiatives or AI capacity-building initiatives to coordinate with each other, undermining interoperability of approaches and resulting in fragmentation. The resulting coordination gaps between various sub-global initiatives are in some cases best addressed at the global level.

74 A separate set of coordination gaps arises within the United Nations system, reflected in the array of diverse United Nations documents and initiatives in relation to AI. Figure 9 shows 27 United Nations-related instruments in specific domains that may apply to AI – 23 of them are binding and will require interpretation as they pertain to AI. A further 29 domain-level documents from the United Nations and related organizations focus specifically on AI, none of which are binding.17 In some cases, these can address AI risks and harness AI benefits in specific domains.

75 The level of activity shows the importance of AI to United Nations programmes. As AI expands to affect ever-wider aspects of society, there will be growing calls for diverse parts of the United Nations system to act, including through binding norms. It also shows the ad hoc nature of the responses, which have largely developed organically in specific domains and without an overarching strategy. The resulting coordination gaps invite overlaps and hinder interoperability and impact.

76 The number and diversity of approaches are a sign that the United Nations system is responding to an emerging issue. With proper orchestration, and in combination with processes taking a holistic approach, these efforts can offer an efficient and sustainable pathway to inclusive international AI governance in specific domains. This could enable meaningful, harmonized and coordinated impacts on areas such as health, education, technical standards and ethics, instead of merely contributing to the proliferation of initiatives and institutions in this growing field. International law, including international human rights law, provides a shared normative foundation for all AI-related efforts, thereby facilitating coordination and coherence.

77 Although the work of many United Nations entities touches on AI governance, their specific mandates mean that none does so in a comprehensive manner; and their designated governmental focal points are similarly specialized.18 This limits the ability of existing United Nations entities to address
17 A survey conducted by the United Nations Chief Executives Board in February 2024 of 57 United Nations entities reported 50 documents concerning AI
governance; 44 of the 57 entities responded, including the Economic Commission for Latin America and the Caribbean; the Economic and Social Commission
for Asia and the Pacific; the Economic and Social Commission for Western Asia; the Food and Agriculture Organization of the United Nations (FAO); the
International Atomic Energy Agency (IAEA); the International Civil Aviation Organization (ICAO); the International Fund for Agricultural Development; ILO; the
International Monetary Fund; the International Organization for Migration; International Trade Centre; the International Telecommunication Union (ITU); the
United Nations Entity for Gender Equality and the Empowerment of Women (UN-WOMEN); the Joint United Nations Programme on HIV/AIDS (UNAIDS); the
United Nations Conference on Trade and Development (UNCTAD); the Department of Economic and Social Affairs; the Department of Global Communications;
the Executive Office of the Secretary-General; the Office for the Coordination of Humanitarian Affairs; the Office of the United Nations High Commissioner
for Human Rights; the Office of Counter-Terrorism; the Office for Disarmament Affairs; the Office of Information and Communications Technology; OSET; the
United Nations Development Programme (UNDP); the United Nations Office for Disaster Risk Reduction; the United Nations Environment Programme; UNESCO;
the United Nations Framework Convention on Climate Change; the United Nations Population Fund; the United Nations High Commissioner for Refugees
(UNHCR); the United Nations Children’s Fund; the United Nations Interregional Crime and Justice Research Institute; the United Nations Industrial Development
Organization; the United Nations Office on Drugs and Crime/United Nations Office at Vienna; the United Nations Office for Project Services; the United
Nations Relief and Works Agency for Palestine Refugees in the Near East; United Nations University; United Nations Volunteers; the World Trade Organization;
the Universal Postal Union; the World Bank Group; the World Food Programme; the World Health Organization (WHO); and the World Intellectual Property
Organization (WIPO). See “United Nations system white paper on AI governance: an analysis of the UN system’s institutional models, functions, and existing
international normative frameworks applicable to AI governance” (available at https://unsceb.org/united-nations-system-white-paper-ai-governance).
18 For example, ministries of education, science and culture (UNESCO); telecommunication or ICT (ITU); industry (United Nations Industrial Development
Organization); and labour (ILO).
Source: “United Nations system white paper on AI governance: an analysis of the UN system’s institutional models, functions, and existing international normative frameworks applicable to AI governance”, 28 February 2024.
the multifaceted implications of AI globally on their own. At the national and regional levels, such gaps are being addressed by new institutions, such as AI safety institutes19 or AI offices, for an appropriately transversal approach.

C. Implementation gaps

78 Representation and coordination are not enough, however. Action and follow-up processes are required to ensure that commitments to good governance translate into tangible outcomes in practice. More is needed to ensure accountability. Peer pressure and peer-to-peer learning are two elements that can spur accountability.

79 Engaging with the private sector will be equally important for meaningful accountability and remedy for harm. The United Nations has experience of this in the United Nations Guiding Principles on Business and Human Rights. Equally, we would need robust engagement of civil society and scientific experts to keep governments and private companies honest about their commitments and claims.

80 Missing enablers for harnessing AI’s benefits for the public good within and between countries constitute a key implementation gap. Many countries have put in place national strategies to boost AI-related infrastructure and talent, and a few initiatives for international assistance are emerging.20 However, these are under-networked and under-resourced.

81 At the global level, connecting national and regional capacity development initiatives, and pooling resources to support those countries left out from such efforts, can help to ensure that no country is left behind in the sharing of opportunities associated with AI. Another key implementation gap is the absence of a dedicated fund for AI capacity-building, despite the existence of some funding mechanisms for digital capacity (box 8).
19 Including those set up by Canada, Japan, Singapore, the Republic of Korea, the United Kingdom, the United States and the European Union.
20 National-level efforts could continue to employ diagnosis tools, such as the UNESCO AI Readiness Assessment Methodology to help to identify gaps at the
country level, with the international network helping to address them.
Box 8: Gaps in global financing of AI capacity
The Advisory Body believes that there are no existing global funds for AI capacity-building with the scale and
mandate to fund the significant investment required to put a floor under the AI divide.
Indicative estimates place the amount needed in the range of $350 million to $1 billion annually,a including in-kind contributions from the private sector, with a mandate to target AI capacity across all AI enablers, including talent, compute, training data, model development and interdisciplinary collaboration for applications. Examples of existing multilateral funding mechanisms include:
This fund is broad and encompasses every SDG, as well as emergency response. It supports country-level
initiatives for integrated United Nations policy and strategic financing support to countries to advance the SDGs.
The fund helps the United Nations to deliver and catalyse SDG financing and programming. Since 2017, 30
participating United Nations entities have received a total of $223 million. It does not fund national governments,
communities or entities directly, and it does not fund cross-border initiatives.
In 2023, the fund had around 16 donors for a total of $57.7 million, and an estimated $58.8 million in 2024. The
private sector has contributed $83,155 since 2017, and none in 2023 or 2024 to date.
Most of the fund’s resources – 60 per cent – go to actions under five SDGs: Goals 2 (zero hunger), 5 (gender equality), 7 (affordable and clean energy), 9 (industry, innovation and infrastructure) and 17 (partnerships).
The fund’s Policy Digital Transformation stream (launched in 2023) has funded one project of $250,000,
disbursed equally between the International Telecommunication Union (ITU) and the United Nations Development
Programme (UNDP). At the end of financial year 2023, its delivery rate was 2.27 per cent. Digital transformation
activities form a small part of the fund’s activities, and typically in relation to other SDGs (e.g. connectivity and
digital infrastructure to support service delivery, such as in small island developing States).
This fund supports countries in developing and implementing the digital transformation with a focus on
broadband infrastructure, access and use, digital public infrastructure and data production, accessibility and use.
By the end of 2022, it had invested $10.7 billion in more than 80 countries.
The partnership includes a cybersecurity associated multi-donor trust fund (Estonia, Germany, Japan and the
Kingdom of the Netherlands) to support national cybersecurity capacity development.
a Less than 1 per cent of estimated annual private sector AI investment in 2023.
Common understanding: International scientific panel on AI
Common ground: Policy dialogue on AI governance; AI standards exchange
Common benefits: Capacity development network; Global fund for AI; Global AI data framework
Coherent effort: AI office within the Secretariat – advising the Secretary-General on matters related to AI, working to promote a coherent voice within the United Nations system, engaging States and stakeholders, partnering and interfacing with other processes and institutions, and supporting other proposals as required.
21 It should also be inclusive and cohesive, and enhance global peace and security.
A. Common understanding

86 A global approach to governing AI starts with a common understanding of its capabilities, opportunities, risks and uncertainties.

87 The AI field has been evolving quickly, producing an overwhelming amount of information and making it difficult to decipher hype from reality. This can fuel confusion, forestall common understanding and advantage major AI companies at the expense of policymakers, civil society and the public.

88 In addition, a dearth of international scientific collaboration and information exchange can breed global misperceptions and undermine international trust.

89 There is a need for timely, impartial and reliable scientific knowledge and information about AI for Member States to build a shared foundational understanding worldwide, and to balance information asymmetries between companies housing expensive AI labs and the rest of the world, including via information-sharing between AI companies and the broader AI community.

90 This is most efficient at the global level, enabling joint investment in a global public good and public interest collaboration across otherwise fragmented and duplicative efforts.

International scientific panel on AI

Recommendation 1: An international scientific panel on AI

We recommend the creation of an independent international scientific panel on AI, made up of diverse multidisciplinary experts in the field serving in their personal capacity on a voluntary basis. Supported by the proposed United Nations AI office and other relevant United Nations agencies, partnering with other relevant international organizations, its mandate would include:
a. Issuing an annual report surveying AI-related capabilities, opportunities, risks and uncertainties, identifying areas of scientific consensus on technology trends and areas where additional research is needed;
b. Producing quarterly thematic research digests on areas in which AI could help to achieve the SDGs, focusing on areas of public interest which may be under-served; and
c. Issuing ad hoc reports on emerging issues, in particular the emergence of new risks or significant gaps in the governance landscape.

91 There is precedent for such an institution. Some examples include the United Nations Scientific Committee on the Effects of Atomic Radiation, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), the Scientific Committee on Antarctic Research, and the Intergovernmental Panel on Climate Change (IPCC).

92 These models are known for their systematic approaches to complex, pervasive issues affecting various sectors and global populations. However, while they can provide inspiration, none is perfectly suited to assessing AI technology and should not be replicated directly. Instead, a tailored approach is required.

93 Learning from such precedents, an independent, international and multidisciplinary scientific panel on AI could collate and catalyse leading-edge research to inform those seeking scientific perspectives on AI technology or its applications from an impartial, credible source. An example of one kind of issue to which the panel could contribute is the ongoing debate over open versus closed AI systems, discussed in box 9.

94 A scientific panel under the auspices of the United Nations would have a broad focus to cover an inclusive range of priorities holistically. This could include sourcing expertise on AI-related opportunities, and facilitating "deep dives" into applied domains of the SDGs, such as health care, energy, education, finance, agriculture, climate, trade and employment.
22 It could build, in particular, upon existing sectoral or regional panels already operating.
23 It could also conduct outreach to broader audiences, including civil society and the general public.
24 For a list of United Nations entities active in this area, see figure 9.
processes such as the recent scientific report on the risks of advanced AI commissioned by the United Kingdom,25 and relevant regional organizations.
e. A steering committee would develop a research agenda ensuring the inclusivity of views and incorporation of ethical considerations, oversee the allocation of resources, foster collaboration with a network of academic institutions and other stakeholders, and review the panel’s activities and deliverables.

100 By drawing on the unique convening power of the United Nations and inclusive global reach across stakeholder groups, an international scientific panel can deliver trusted scientific collaboration processes and outputs and correct information asymmetries in ways that address the representation and coordination gaps identified in paragraphs 66 and 73, thereby promoting equitable and effective international AI governance.
One article explained that a “fully closed AI system is only accessible to a particular group. It could be an AI
developer company or a specific group within it, mainly for internal research and development purposes. On the
other hand, more open systems may allow public access or make available certain parts, such as data, code, or
model characteristics, to facilitate external AI development.”a
Open-source AI systems in the generative AI field present both risks and opportunities. Companies often cite “AI
safety” as a reason for not disclosing system specifications, reflecting the ongoing tension between open and
closed approaches in the industry. Debates typically revolve around two extremes: full openness, which entails
sharing all model components and data sets; and partial openness, which involves disclosing only model weights.
Open-source AI systems encourage innovation and are often a requirement for public funding. On the open
extreme of the spectrum, when the underlying code is made freely available, developers around the world can
experiment, improve and create new applications. This fosters a collaborative environment where ideas and
expertise are readily shared. Some industry leaders argue that this openness is vital to innovation and economic
growth.
However, in most cases, open-source AI models are available as application programming interfaces. In this case,
the original code is not shared, the original weights are never changed and model updates become new models.
Additionally, open-source models tend to be smaller and more transparent. This transparency can build trust,
allow for ethical considerations to be proactively addressed, and support validation and replication because users
can examine the inner workings of the AI system, understand its decision-making process and identify potential
biases.
a Angela Luna, “The open or closed AI dilemma”, 2 May 2024. Available at https://bipartisanpolicy.org/blog/the-open-or-closed-ai-dilemma.
25 International Scientific Report on the Safety of Advanced AI: Interim Report. Available at https://gov.uk/government/publications/international-scientific-report-
on-the-safety-of-advanced-ai.
Meaningful openness exists between the two extremes of the spectrum and can be tailored to different use cases.
This balanced method fosters safe, innovative and inclusive AI development by enabling public scrutiny and
independent auditing of disclosed training and fine-tuning data. Openness, being more than merely sharing model
weights, can propel innovation and inclusion, helping applications in research and education.
The definition of “open-source AI” is evolving,c and is often influenced by corporate interests as illustrated in figure
11. To address this, we recommend initiating a process, coordinated by the above-proposed international scientific
panel, to develop a well-rounded and gradient approach to openness. This would enable meaningful, evidence-
based approaches to openness, helping users and policymakers to make informed choices about AI models and
architectures.
Data disclosure – even if limited to key elements – is essential for understanding model performance, ensuring
reproducibility and assessing legal risks. Clarification around gradations of openness can help to counter
corporate “open-washing” and foster a transparent tech ecosystem.
It is also important that, as the technology matures, we consider the governance regimes for the application of
both open and closed AI systems. We need to develop responsible AI guidelines, binding norms and measurable
standards for developers and designers of products and services that incorporate AI technologies, as well as for
their users and all actors involved throughout their life cycle.
Figure 11: Levels of access in generative AI system release, ranging from fully closed, through gradual/staged release, hosted access, cloud-based/API access and downloadable access, to fully open; example systems, such as PaLM (Google), are plotted along this gradient, from developer-only to gated public access.
Source: Irene Solaiman, “The gradient of generative AI release: methods and considerations”, Proceedings of the 2023 Association for Computing Machinery
(ACM) Conference on Fairness, Accountability, and Transparency (June 2023), pp. 111–122.
b Inspired by Andreas Liesenfeld and Mark Dingemanse, “Rethinking open source generative AI: open-washing and the EU AI Act”, The 2024 ACM
Conference on Fairness, Accountability, and Transparency (FAccT ’24) (June 2024).
c The Open Source AI Definition – draft v. 0.0.3. Available at https://opensource.org/deepdive/drafts/the-open-source-ai-definition-draft-v-0-0-3.
B. Common ground

101 Alongside a common understanding of AI, common ground is needed to establish governance approaches that are interoperable across jurisdictions and grounded in international norms, such as the Universal Declaration of Human Rights (see principle 5 above).

102 This is required at the global level not only for equitable representation, but also for averting regulatory "races to the bottom" while reducing regulatory friction across borders, maximizing technical and ontological interoperability, and detecting and responding to incidents emanating from decisions along AI’s life cycle which span multiple jurisdictions.

Policy dialogue on AI governance

We recommend the launch of a twice-yearly intergovernmental and multi-stakeholder policy dialogue on AI governance on the margins of existing meetings at the United Nations. Its purpose would be to:
a. Share best practices on AI governance that foster development while furthering respect, protection and fulfilment of all human rights, including pursuing opportunities as well as managing risks;
b. Promote common understandings on the implementation of AI governance measures by private and public sector developers and users to enhance international interoperability of AI governance;
c. Share voluntarily significant AI incidents that stretched or exceeded the capacity of State agencies to respond; and
d. Discuss reports of the international scientific panel on AI, as appropriate.

103 International governance of AI is currently a fragmented patchwork at best. There are 118 countries that are not parties to any of the seven recent prominent non-United Nations AI governance initiatives with intergovernmental tracks26 (see fig. 8). Representation gaps occur even among the top 60 AI capacity countries, highlighting the selectiveness of international AI governance today (see fig. 12).

104 An inclusive policy forum is needed so that all Member States, drawing on the expertise of stakeholders, can share best practices that foster development while furthering respect, protection and fulfilment of human rights.

105 This does not mean global governance of all aspects of AI (which is impossible and undesirable, given States’ diverging interests and priorities). Yet, exchanging views on AI developments and policy responses can set the framework for international cooperation.

106 The United Nations is uniquely placed to facilitate such dialogues inclusively in ways that help Member States to work together effectively. The United Nations system’s existing and emerging suite of norms can offer strong normative foundations for concerted action, grounded in the Charter of the United Nations, human rights and other international law, including environmental law and international humanitarian law, as well as the SDGs and other international commitments.27
26 These initiatives are not always directly comparable. Some reflect the work of existing international or regional organizations, while others are based on ad hoc invitations
from like-minded countries.
27 See, for example, the Charter of the United Nations (preamble, purposes and principles, and Articles 13, 55, 58 and 59). See also core international human rights instruments (Universal Declaration of Human Rights; International Covenant on Civil and Political Rights; International Covenant on Economic, Social and Cultural Rights; International Convention on the Elimination of All Forms of Racial Discrimination; Convention on the Rights of the Child; Convention on the Elimination of All Forms of Discrimination against Women; Convention against Torture; Convention on the Rights of Persons with Disabilities; Convention on the Rights of Migrants; International Convention for the Protection of All Persons from Enforced Disappearance); instruments of international humanitarian law (Geneva Conventions; Convention on Certain Conventional Weapons; Genocide Convention; Hague Convention), together with related principles such as distinction, proportionality and precaution, and the 11 principles on Lethal Autonomous Weapons Systems adopted within the Convention on Certain Conventional Weapons; disarmament and arms control instruments prohibiting weapons of mass destruction (Treaty on the Non-Proliferation of Nuclear Weapons; Chemical Weapons Convention; Biological Weapons Convention); environmental law instruments (United Nations Framework Convention on Climate Change; Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques); the Paris Agreement, and related principles such as the precautionary principle, the integration principle and public participation; and non-binding commitments on the 2030 Agenda for Sustainable Development, gender and ethics, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence.
Figure 12: Top 60 AI countries (2023 Tortoise Index) in the sample of major plurilateral
AI governance initiatives with intergovernmental tracks
*Including jurisdictions such as the Holy See and the European Union.
Sources:
• OECD, Recommendation of the Council on Artificial Intelligence (adopted 21 May 2019), available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
• G20, AI Principles (June 2019), available at https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf.
• GPAI, 2022 ministerial declaration (22 November 2022), available at https://one.oecd.org/document/GPAI/C(2022)7/FINAL/en/pdf.
• Bletchley Declaration (1 November 2023), available at https://gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
• G7, Hiroshima AI Process G7 Digital & Tech Ministers’ Statement (1 December 2023), available at https://www.soumu.go.jp/hiroshimaaiprocess/pdf/document02_en.pdf.
• Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (adopted 17 May 2024), available at https://coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.
• Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity, AI Seoul Summit (22 May 2024).
• Tortoise Media, Global AI Index (2023), available at https://tortoisemedia.com/intelligence/global-ai/#rankings.
107 Combined with expertise from the international scientific panel and capacity development (see recommendations 1, 4 and 5), inclusive dialogue at the United Nations can help States and companies to update their regulatory approaches and methodologies to keep pace with accelerating AI in an interoperable way that promotes common ground. Some of the distinctive features of the United Nations can be helpful in this regard:
a. Anchoring inclusive dialogue in the United Nations suite of norms, including the Charter of the United Nations and human rights and international law, can promote a "race to the top" in governance approaches. Conversely, without the universal global membership of the United Nations, international collective action faces greater pressure to succumb to regulatory "races to the bottom" between jurisdictions on AI safety and scope of use.
b. The global membership of the United Nations can also enable coordination between existing sub-global initiatives for greater compatibility between them. Many in our consultations called for the United Nations to be a key space for enabling soft coordination across existing regional and plurilateral initiatives, taking into account diverse values across different cultures, languages and regions.
c. The Organization’s predictable, transparent, rule-based and justifiable procedures can enable continuous political engagement to bridge non-likeminded countries, and moderate dangerous contestation. In addition to building confidence, relationships and communication lines for times of crisis, reliably inclusive dialogues can foster new norms, customary law and agreements that enhance cooperation among States.

108 Operationally:
a. A policy dialogue could begin on the margins of existing meetings in New York, such as the General Assembly,28 Geneva and locations in the global South.
b. One portion of each dialogue session might focus on national approaches led by Member States, with a second portion sourcing expertise and inputs from key stakeholders – in particular, technology companies and civil society representatives.
c. Governmental participation could be open to all Member States, or a regionally balanced grouping (for more focused discussion among a rotating, representative interested subset), or a combination of both, calibrated as appropriate to different agenda items or segments over time, as the technology evolves and global concerns emerge or gain salience. A fixed geometry might not be helpful, given the dynamic nature of the technology and the policy context.
d. In addition to the formal dialogue sessions, multi-stakeholder engagement on AI policy could also leverage other existing mechanisms such as the ITU AI for Good meeting, the annual Internet Governance Forum meeting, the UNESCO AI ethics forum and the United Nations Conference on Trade and Development (UNCTAD) eWeek, open for participation to representatives of all Member States on a voluntary basis.
e. In line with the inclusive nature of the dialogue, discussion agendas could be broad to encompass diverse perspectives and concerns. For instance, twice-yearly meetings could focus more on opportunities across diverse sectors in one meeting, and more on risk trends in the other.29 This could include uses of AI to achieve the SDGs, how to protect children and minimize climate impact, as well as an exchange on approaches to managing risks. Meetings could also include a discussion of definitions of terms used in AI governance and AI technical standards, as well as reports of the international scientific panel, as appropriate.
28 Analogous to the high-level political forum in the context of the SDGs that takes place under the auspices of the Economic and Social Council.
29 Relevant parts of the United Nations system could be engaged to highlight opportunities and risks, including ITU on AI standards; ITU, UNCTAD, UNDP and
the Development Coordination Office on AI applications for the SDGs; UNESCO on ethics and governance capacity; the Office of the United Nations High
Commissioner for Human Rights (OHCHR) on human rights accountability based on existing norms and mechanisms; the Office for Disarmament Affairs on
regulating AI in military systems; UNDP on support to national capacity for development; the Internet Governance Forum for multi-stakeholder engagement and
dialogue; WIPO, ILO, WHO, FAO, the World Food Programme, UNHCR, UNESCO, the United Nations Children’s Fund, the World Meteorological Organization and
others on sectoral applications and governance.
30 Such a gathering could also provide an opportunity for multi-stakeholder debate of any hardening of the global governance of AI. These might include, for
example, prohibitions on the development of uncontainable or uncontrollable AI systems, or requirements that all AI systems be sufficiently transparent so that
their consequences can be traced back to a legal actor that can assume responsibility for them.
31 Although multiple AI summits have helped a subset of 20–30 countries to align on AI safety issues, participation has been inconsistent: Brazil, China and
Ireland endorsed the Bletchley Declaration in November 2023, but not the Seoul Ministerial Statement six months later (see fig. 12). Conversely, Mexico and
New Zealand endorsed the Seoul Ministerial Statement, but did not endorse the Bletchley Declaration.
32 Many new standards are also emerging at the national and multinational levels, such as the United States White House Voluntary AI Commitments and the
European Union Codes of Practice for the AI Act.
Final Report 55
Figure 13: Number of standards related to AI, 2018 to January–June 2024, by standards body (IEEE, ISO/IEC, ITU), rising from 3 in 2018 to 117 by mid-2024, with further ISO and IEEE standards under development.
Sources: IEEE, ISO/IEC, ITU, World Standards Cooperation (based on June 2023 mapping, extended through inclusion of standards related to AI).
113 Two trends stand out. First, these standards were largely developed to address specific questions. There is no common language, and many terms that are routinely used with respect to AI – fairness, safety, transparency – do not have agreed definitions or measurability (despite recent work by OECD and the National Institute of Standards and Technology adopting a new approach for dynamic systems, such as AI).

114 Secondly, there is a disjunction between those standards that were adopted for narrow technical or internal validation purposes, and those that are intended to incorporate broader ethical principles. Computer scientists and social scientists often advance different interpretations of the same concept, and a joined-up paradigm of socio-technical standards is promising but remains aspirational (see box 10).

115 The result is that we have an emerging set of standards that are not grounded in a common understanding of meaning or are divorced from the values they were intended to uphold. Crucially, there are few agreed standards concerning energy consumption and AI. A lack of integration of human rights considerations into standard-setting processes is another gap to be bridged.33

116 This has real costs. In addition to the concerns of Member States and diverse individuals, many of our consultations revealed the concern of businesses (including small and medium-sized enterprises in the developing world) that fragmented governance and inconsistent standards raise the costs of doing business in an increasingly globalized world.

117 This report is not proposing that the United Nations add to this proliferation of standards. Instead, drawing on the expertise of the international scientific panel (proposed in recommendation 1), and incorporating members from the various entities that have contributed to standard-setting, as well as representatives from technology companies and civil society, the United Nations system could serve as a clearing house for AI standards that would apply globally.34
33 See A/HRC/53/42 (Human rights and technical standard-setting processes for new and emerging digital technologies: Report of the Office of the United
Nations High Commissioner for Human Rights) and Human Rights Council resolution 53/29 (New and emerging digital technologies and human rights).
34 Even this may seem a challenging task, but progress towards a global minimum tax deal shows the possibility of collective action even in economically and
politically complex areas.
In the past, AI standards focused mainly on technical specifications, detailing how systems should be built and
operated. However, as AI technologies increasingly impact society, there is a need to shift to a socio-technical
paradigm. This shift acknowledges that AI systems do not exist in a vacuum; they interact with human users
and affect societal structures. Modern AI standards can integrate ethical, cultural and societal considerations
alongside technical requirements. In the context of safety, this includes ensuring reliability and interpretability, as
well as assessing and mitigating risks to individual and collective rights,a national and international security, and
public safety in different contexts.
A primary objective of the recently established national AI safety institutes is to ensure consistent and effective approaches to AI safety. Harmonizing such approaches would allow AI systems to meet high safety benchmarks
internationally, enabling cross-border innovation and trade while maintaining rigorous safety protocols.
In so far as “safety” is contextual, involving various stakeholders and cultures in creating such standards enhances their relevance and effectiveness and helps to build shared understanding of definitions and concepts. By
incorporating diverse perspectives, protocols can more thoroughly address the wide range of potential risks and
benefits associated with AI technologies.
a See A/HRC/53/42 (Human rights and technical standard-setting processes for new and emerging digital technologies: Report of the Office of the
United Nations High Commissioner for Human Rights) and Human Rights Council resolution 53/29 (New and emerging digital technologies and
human rights).
118 The Organization’s added value would be to foster exchange among the broadest set of standards development organizations to maximize global interoperability across technical standards, while infusing emerging knowledge on socio-technical standards development into AI standards discussions.

119 Collecting and distributing information on AI standards, drawing on and working with existing efforts such as the AI Standards Hub,35 would enable participants from across standards development organizations to converge on common language in key areas.

120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the international scientific panel on technical questions and the policy dialogue on moral, ethical, regulatory, legal and political questions.

121 If appropriately agreed, ITU, ISO/IEC and IEEE could jointly lead on an initial AI standards summit, with annual follow-up to maintain salience and momentum. To build foundations for a socio-technical approach incorporating economic, ethical and human rights considerations, OECD, the World Intellectual Property Organization (WIPO), the World Trade Organization, the Office of the United Nations High Commissioner for Human Rights (OHCHR), ILO, UNESCO and other relevant United Nations entities should also be involved.36
35 See https://aistandardshub.org.
36 This could include relevant sectoral, national and regional standards organizations.
122 The standards exchange should also inform the capacity-building work in recommendation 4, ensuring that the standards support practice on the ground. It could share information about tools developed nationally or regionally that enable self-assessment of compliance with standards.

123 The report does not presently propose that the United Nations should do more than serve as a forum for discussing and agreeing on standards. To the extent that safety standards are formalized over time, these could serve as the basis for monitoring and verification by an eventual agency.

C. Common benefits

124 The 2030 Agenda with its 17 SDGs can lend a unique purpose to AI, bending the arc of investments away from wasteful and harmful use and towards global development challenges. Otherwise, investments will chase profits even at the cost of imposing negative externalities on others. Another signal contribution that the United Nations can make is linking the positive application of AI to an assurance of the equitable distribution of its opportunities (box 11).
Box 11: AI and the SDGs

An overview of current expert perceptions is illustrated by the results of an opportunity scan exercise
commissioned for our work, which surveyed over 120 experts from 38 countries about their expectations for AI’s
positive impact in terms of scientific breakthroughs, economic activities and the SDGs. The survey asked only
about possible positive implications of AI.
Overall, experts had mixed expectations on how soon AI could have a major positive impact (see also fig. 14):
• They were most optimistic about accelerating scientific discoveries, with 7 in 10 saying that it is likely
that AI would cause a major positive impact in the next three years or sooner in high/upper-middle-
income countries, and 28 per cent predicting the same for lower-middle/lower-income countries.
• Around 5 in 10 expected major positive impact on increasing economic activity as likely in the next three
years or sooner in high/upper-middle-income countries, and 32 per cent expected the same in lower-
middle/lower-income countries.
• A total of 46 per cent expected major positive impact on progress on the SDGs as likely in the next three
years or sooner in high/upper-middle-income countries. However, only 21 per cent expected this in lower-
middle/lower-income countries, with 4 in 10 experts gauging such major positive impact on the SDGs as
likely to be at least 10 years away in such places.
a See Ricardo Vinuesa and others, “The role of artificial intelligence in achieving the Sustainable Development Goals”, Nature Communications, vol. 11, No. 233 (January 2020). This study also argued that 59 targets (35 per cent, also across all SDGs) may experience a negative impact from the development of AI.
Figure 14: “By when do you expect it likely (50% chance or more) that AI will cause a major positive impact…?” Responses: already occurring / within next 18 months / within next 3 years / within next 10 years / longer than 10 years or never:
• Accelerating scientific discoveries – high/upper-middle-income countries: 24% / 10% / 36% / 24% / 5% (n = 111); lower-middle/lower-income countries: 9% / 2% / 17% / 37% / 35% (n = 65)
• Generally increasing economic activity – high/upper-middle-income countries: 15% / 5% / 32% / 41% / 8% (n = 101); lower-middle/lower-income countries: 9% / 4% / 19% / 43% / 25% (n = 53)
• Progress on the SDGs – high/upper-middle-income countries: 9% / 10% / 27% / 31% / 24% (n = 93); lower-middle/lower-income countries: 9% / – / 12% / 37% / 42% (n = 65)
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Source: OSET AI Opportunity Scan survey, 9–21 August 2024.
Figure 15: Expected positive impact of AI, by area – accelerating scientific discoveries, increasing economic activity and 14 SDG areas (SDG 1 – no poverty; SDG 2 – zero hunger; SDG 3 – good health and well-being; SDG 4 – quality education; SDG 5 – gender equality; SDG 6 – clean water and sanitation; SDG 7 – affordable and clean energy; SDG 10 – reduced inequalities; SDG 11 – sustainable cities and communities; SDG 12 – responsible consumption and production; SDG 13 – climate action; SDG 14 – life below water; SDG 15 – life on land; SDG 16 – peace, justice and strong institutions) – for high/upper-middle-income (H) and lower-middle/lower-income (L) countries.
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries. Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9–21 August 2024.
Box 11: AI and the SDGs (continued)
Experts expected greater positive impact of AI in the next three years in higher-income countries across all areas
surveyed, including accelerating scientific discoveries, increasing economic activityb and in the 14 SDG areas
asked about (see fig. 15). Experts were most optimistic about AI’s positive impact on health and education (SDGs
3 and 4), where 20–25 per cent of experts expected major or transformative positive impact of AI in the next
three years in high/upper-middle-income countries. They were least optimistic regarding AI’s positive impact on
gender equality and inequalities (SDGs 5 and 10), with 2 in 3 expecting AI to have no positive impact on reducing
inequalities within or between countries in either higher or lower-income countries.
AI may be expected to have earlier and greater impacts in higher-income countries, in part due to barriers holding back lower-middle and lower-income countries (see fig. 16). Missing enablers – from poorer infrastructure to lack of domestic policy and international governance – were cited by more than half of respondents as important factors causing additional difficulty for lower-income countries in harnessing AI for economic activity and SDG progress.
Figure 16: “How important do you consider the below factors in causing additional difficulty for lower-middle/lower-income countries (compared with high/upper-middle-income countries) in harnessing AI to drive additional economic activity and progress on the SDGs?” Mean importance rating on a five-point scale (1 = not important, 2 = slightly important, 3 = somewhat important, 4 = important, 5 = very important):
• Poorer technological/communications infrastructure: 4.46 (n = 71)
• Less access to compute: 4.44 (n = 71)
• Less ability to train domestic talent to train and develop new models: 4.38 (n = 71)
• Less ability to retain local talent when trained (“brain drain”): 4.37 (n = 70)
• Less ability to train domestic talent to apply and deploy existing models: 4.30 (n = 71)
• More difficulty collecting new necessary data: 4.28 (n = 71)
• Less ability to access existing datasets (e.g. proprietary data): 4.17 (n = 72)
• Less access to existing models: 3.93 (n = 72)
• Less ability to combine fragmented data: 3.91 (n = 69)
• Lack of partnership between domestic actors (e.g. domestic government and businesses): 3.86 (n = 70)
• Lack of partnership between domestic and regional/international actors: 3.80 (n = 70)
• Lack of effective domestic policy to enable AI: 3.77 (n = 69)
• Lack of international governance/interoperability/standards: 3.71 (n = 70)
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Source: OSET AI Opportunity Scan survey, 9–21 August 2024.
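For orientation, mean scores such as those in figure 16 are simply weighted averages of the 1–5 response shares; the minimal sketch below recomputes one such score (the response distribution shown is illustrative only, not the survey’s actual breakdown):

```python
def mean_rating(shares):
    """Weighted average of a 1-5 importance scale, where `shares`
    holds the fraction of respondents choosing each rating 1..5."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(rating * share for rating, share in zip(range(1, 6), shares))

# Hypothetical distribution: 0% "not important", 2% "slightly",
# 8% "somewhat", 23% "important", 67% "very important"
print(round(mean_rating([0.00, 0.02, 0.08, 0.23, 0.67]), 2))  # prints 4.55
```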
These results underline the tentativeness of AI’s eventual contribution to the SDGs, and how it remains highly dependent on missing enablers. This is particularly so in less developed countries, which already lack much of what more-developed countries have, from infrastructure to policy. Without cooperation to build capacity and facilitate access to key enablers, existing AI divides could further widen and become entrenched, limiting AI’s ability to meaningfully contribute to progress on science, economic benefit and the SDGs before 2030.
b The share of experts expecting “major positive impact” on increasing economic activity and accelerating scientific discovery over three years is
higher in the first chart than the second chart. This may be due to the qualifier “by when do you expect it likely (50% chance or more) that AI will
cause a major positive impact” (emphasis added) in the question responses depicted in the first chart, which is absent in the second.
future global talent pipeline. Enhancing the capacity of women in tech requires both closing the existing gender gap and avoiding a new gender gap in AI. The AI sector also needs more women in leadership positions to embed gender perspectives in AI governance. This starts with expanding AI talent opportunities for girls.

Compute

134 Despite ongoing efforts to develop less compute-hungry approaches to AI, the need for access to affordable compute remains acute for training capable AI models.37 This is one of the biggest barriers to entry in the field of AI for companies in the global South, but also many start-ups and small and medium-sized enterprises in the global North. Of the top 100 high-performance computing clusters in the world capable of training large AI models, none is hosted in a developing country.38 There is only one African country among the top 300. Two countries account for half of the world’s hyperscale data centres.39

… configurations and scheduling demanding tasks, while preserving priority of time-critical use (such as for meteorological predictions).

137 Moreover, without talent and data, compute alone is of no value. In the proposed global fund for AI, we consider how to address all three through a combination of financial and in-kind support.

Data

138 Although many discussions about the economics of AI focus on the “war for talent” and competition over hardware, such as graphics processing units (GPUs), data are no less vital. Facilitating access to quality training data at scale for training AI models by start-ups and small and medium-sized enterprises, as well as mechanisms to compensate data holders and creators of training data in rights-respecting ways, might be the most important enabler of a flourishing AI economy. Pooling data for the public interest in furthering specific SDGs is one key aspect (outlined in box 12), although it is not enough.
37 The Advisory Body is aware of a recent case where a company based in the global South spent $70 million for a 3-month training run for a large language
model. Owning the graphics processing units (GPUs) instead of renting them from cloud service providers would have cost many times less.
38 See https://top500.org/statistics/sublist; proxy indicator since most high-performance computing clusters do not have GPUs and are of limited use for
advanced AI.
39 UNCTAD, Digital Economy Report 2021 (Geneva, 2021).
40 See https://2022.internethealthreport.org/facts.
As an example, we can consider the complex issue of assessing the health impacts of climate change. To
effectively address this challenge, a transdisciplinary approach is essential, integrating epidemiological data on
the prevalence of diseases with meteorological data tracking climate variations. By pooling these distinct types
of data from countries worldwide, in a privacy-preserving manner, researchers may be able to use AI to identify
patterns and correlations that are not evident from isolated data sets.
Including data from all countries ensures comprehensive coverage, reflecting the global nature of climate change
and capturing diverse environmental impacts and health outcomes across different regions. The transdisciplinary
origins of the data enhance the predictive accuracy of models that aim to forecast future public health crises or
natural disasters driven by climate change.
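As an illustrative sketch only (the function, the epsilon value and the counts below are hypothetical, not drawn from any existing United Nations mechanism), one widely used building block for such privacy-preserving pooling is differentially private aggregation, in which each contributor releases noisy statistics rather than raw records:

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise of scale 1/epsilon
    (sensitivity 1: one individual changes the count by at most 1)."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Each country shares only a noisy disease-case count; the pooled
# total remains usable for cross-country trend analysis without
# exposing any single country's exact records.
noisy_total = sum(dp_count(c, epsilon=1.0) for c in [120, 85, 240])
```

Stronger protections (for example, secure aggregation or federated analysis) follow the same principle: statistical value is pooled while record-level data stay with their holders.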
141 Analogous to the problem of informal capital, those whose data are not captured – from birth records to financial transactions – may be unable to participate in the benefits of the AI economy, obtain government benefits or access credit. Use of synthetic data may only partially offset the need for new data sets.

142 Feedback on our interim report noted that there was insufficient articulation of how current cross-jurisdictional practices around sourcing, use and non-disclosure of AI training data threaten rights and result in economic concentration. It was recommended that we consider how international AI governance could enable and catalyse more diverse participation in the leveraging of data for AI.

Building a core public international AI capacity for common benefit

143 Cutting across the above three enablers, advanced economies have both the capability and duty to facilitate AI capacity-building through international collaboration. In turn, they will benefit from a more broad-based digital economy, as well as quality talent and data flows. Importantly, everyone will benefit from the mainstreaming of good AI governance through such collaboration.

144 Cooperation should focus on nurturing AI talent, boosting public AI literacy, improving capacity for AI governance, broadening access to AI infrastructure, promoting data and knowledge platforms suited to diverse cultural and regional needs, and enhancing uptake of AI applications and service capabilities. Only such a comprehensive approach can ensure equitable access to AI benefits, so that no nation is left behind.

145 Many of the stakeholders we consulted emphasized that detailed strategies should be outlined to pool global resources together to build capacity, catalyse collective action towards equitable sharing of opportunities and close the digital divide.
Capacity development network

Recommendation 4: Capacity development network

We recommend the creation of an AI capacity development network to link up a set of collaborating, United Nations-affiliated capacity development centres making available expertise, compute and AI training data to key actors. The purpose of the network would be to:
a. Catalyse and align regional and global AI capacity efforts by supporting networking among them;
b. Build AI governance capacity of public officials to foster development while furthering respect, protection and fulfilment of all human rights;
c. Make available trainers, compute and AI training data across multiple centres to researchers and social entrepreneurs seeking to apply AI to local public interest use cases, including via:
i. Protocols to allow cross-disciplinary research teams and entrepreneurs in compute-scarce settings to access compute made available for training/tuning and applying their models appropriately to local contexts;
ii. Sandboxes to test potential AI solutions and learn by doing;
iii. A suite of online educational opportunities on AI targeted at university students, young researchers, social entrepreneurs and public sector officials; and
iv. A fellowship programme for promising individuals to spend time in academic institutions or technology companies.

146 From the Millennium Development Goals to the SDGs, the United Nations has long contributed to the development of capacities of individuals and institutions.41 Through the work of UNESCO, WIPO and others, the United Nations has helped to uphold the rich diversity of cultures and knowledge-making traditions across the globe.

147 At the same time, capacity development for AI would require a fresh approach, in particular cross-domain training to build a new generation of multidisciplinary experts in areas such as public health and AI, or food and energy systems and AI.

148 Capacity would also have to be linked to outcomes through hands-on training in sandboxes42 and collaborative projects pooling data and compute to solve shared problems. Risk assessments, safety testing and other governance methodologies would have to be built into this collaborative training infrastructure.

149 Given the urgency and scale of the challenge, we suggest pursuing a strategic approach that pools and brokers access to compute through a network of high-performance computing nodes, incentivizes the development of critical data sets in SDG-relevant domains, promotes sharing of AI models, mainstreams best practices on AI governance and creates cross-domain talent for public interest AI, thus ensuring cross-cutting integration of human rights expertise.

150 In other words, instead of chasing critical enablers one at a time through disjointed projects, we propose an all-at-once, holistic strategy implemented through a chain of collaborating centres. Emerging initiatives on capacity development and AI for the SDGs such as the International Computation and AI Network (ICAIN) initiative launched by Switzerland can help to create the initial critical mass for this strategy.
41 The United Nations University has long been committed to capacity-building through higher education and research, and the United Nations Institute for
Training and Research has helped to train officials in domains critical to sustainable development. The UNESCO Readiness Assessment Methodology is a
key tool to support Member States in their implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence. Other examples include
the WHO Academy in Lyon, the UNCTAD Virtual Institute, the United Nations Disarmament Fellowship run by the Office for Disarmament Affairs and capacity-
development programmes led by ITU and UNDP.
42 Sandboxes have been developed by various national institutions, including financial and medical authorities, such as the Infocomm Media Development
Authority of Singapore.
Box 13: Global fund for AI: examples of possible investments
A relatively modest fund could help to create a minimum shared compute infrastructure for training small to
medium-sized models. Such models have important SDG potential, for example, for training farmers in their local
language.
This investment would also create a sandbox environment for developers to fine-tune existing open-source
models with their own contextual and high-quality data. Access to the compute and sandbox infrastructure could be on a time-share basis, with reasonable usage fees contributing to maintenance and running costs.
A third use of the funding would be to help to curate gold standard data sets for select SDGs where the
commercial incentive is absent. The model development, testing and data curation efforts could come together
strategically in a powerful hands-on AI empowerment approach linked to concrete outcomes.
Finally, the fund could stimulate research and development, not only for contextually relevant development
and SDG-related applications of AI, but also for interlinking of compute and models as well as new governance
assessments.
157 We approach this recommendation with humility, conscious of the powerful market forces shaping access to talent and compute, and of geopolitical competition pushing back against collaboration in the field of science and technology. Unfortunately, many countries may be unable to access training, compute, models and training data without international support. Existing funding efforts might also not be able to scale without such support.

158 Levelling the playing field is, in part, a question of fairness. It is also in our collective interest to create a world in which all contribute to and benefit from a shared ecosystem. This is true not merely across States. Ensuring diverse access to AI model development and testing infrastructure would also help to address concerns about the concentration of disproportionate power in the hands of a handful of technology companies.

Fund purpose and objective

159 Our intention in proposing a fund is not to guarantee access to compute resources and capabilities that even the wealthiest countries and companies struggle to acquire. The answer may not always be more compute. We may also need different ways to leverage existing high-performance computing infrastructures, which are built for peak usage and not necessarily designed for AI. Perhaps there could be better ways to connect talent, compute and data.

160 The purpose is, therefore, to address the underlying coordination and implementation gaps in paragraphs 73, 80 and 81 for those unable to access the requisite enablers through other means, to ensure that:
a. Countries in need can access AI enablers, putting a floor under the AI divide;
b. Collaboration on AI capacity development leads to habits of cooperation and mitigates geopolitical competition;
c. Countries with divergent regulatory approaches have incentives to develop common templates for governing data, models and applications for societal-level challenges related to the SDGs and scientific breakthroughs.

161 The capacity built with resources from the global fund would be oriented towards the SDGs and the shared global governance of AI (box 13). It could, for instance, incorporate a “governance stack” for security and safety testing. This would help to mainstream best practices across the user base, while reducing the burden of validation for small users.
43 See https://www.theglobalfund.org/en/about-the-global-fund.
168 Part of the answer is in transparency on the cultural, linguistic and other traits of AI training data. Identifying underrepresented or "missing" data is also helpful. Related to this is the promotion of "data commons" that incentivize curation of training data for multiple actors. Such initiatives could create best practices by demonstrating how design can embed techno-legal frameworks for privacy, data protection, interoperability, the equitable use of data and human rights.

169 The data marketplaces for AI are something of a "wild west" today. The idea of "grab what you can and hide it in opaque algorithms" seems to be one operating principle; another is exclusive contractual arrangements for access to proprietary data enforceable in select jurisdictions. Such exclusive relationships lie behind the United Kingdom Competition and Markets Authority's concern that "the [Frontier Model] sector is developing in ways that risk negative market outcomes".44

170 We thus consider it vital to launch a global process that involves a variety of actors, including nations at different levels of development, supported by relevant international organizations from the United Nations family and beyond (OECD, WIPO and the World Trade Organization), to create "guard rails" and "common rails" for flourishing AI training data ecosystems. The outcomes of this process need not be binding law but model contracts and techno-legal arrangements. These facilitative arrangements can be developed one by one, as protocols to a framework of principles and definitions.

171 While the full details are beyond our scope, key principles for a global AI data framework would include interoperability, stewardship, privacy preservation, empowerment, rights enhancement and AI ecosystem enablement.

172 We are mindful that antitrust and competition policy remains the domain of national and regional authorities. However, international collective action can facilitate cross-border access for local AI start-ups to training data that is not available domestically.

173 The United Nations is uniquely positioned to support the establishment of global principles and practical arrangements for the governance and use of AI training data, building on years of work by the data community and integrating it with recent developments on AI ethics and governance. This is analogous to efforts of the United Nations Commission on International Trade Law on international trade, including on legal and non-legal cross-border frameworks, and enabling digital trade and investment via model laws on e-commerce, cloud-computing and identity management.

174 Likewise, the Commission on Science and Technology for Development and the Statistical Commission have on their agenda data for development and data on the SDGs. There are also important issues of content, copyright and protection of indigenous knowledge and cultural expression being considered by WIPO.

175 The framework proposed here would be without prejudice to national or regional frameworks for data protection and would not create new data-related rights nor prescribe how existing rights apply internationally, but would have to be designed in a way that prevents capture by commercial or other interests that could undermine or preclude rights protections. Rather, a global AI data framework would address transversal issues of availability, interoperability and use of AI training data. It would help to build common understanding on how to align different national and regional data protection frameworks.
44 Competition and Markets Authority, AI Foundation Models: Technical Update Report (London, 2024).
Box 14: Data empowerment
There are many circumstances in which data need to be protected (including for privacy, commercial confidentiality, intellectual property, safety and security), but where there would also be benefits to individuals and society in making them available for training AI models.
Data rights in law are generally rights to prevent actions in relation to data, and data privacy rights are personal to individuals. The way such rights are constituted can make it difficult to exercise them flexibly – enabling data to be used for some purposes without losing the rights, or exercising them collectively as a group. Even when it is possible to control permissions flexibly and positively, doing so tends to require more time, technical expertise and confidence than most people and organizations have.
Mechanisms that enable owners and subjects of data to allow safe and limited use of their data, while maintaining
their rights, can be described as means of data empowerment. Data empowerment can make many more people
and groups in society into active partners and stakeholders in AI, and not only subjects of data. There are already
tools in development for managing access securely, including data trusts and privacy protecting applications for
steering cross-border data flows.
Data trusts are mechanisms that make it possible for individuals and organizations to provide access to their data
collectively, with access in the control of trustees. The data-owners can set the terms for access, use and purpose,
which the trustees exercise. The owners and subjects of the data retain their legal rights while contributing to
shared objectives. An AI model trained on this data could be expected to perform more accurately than one that
lacked this specific input, and thus better serve the well-being of that particular group or of society more broadly.
Mechanisms for managing access and use, and access across borders in particular, all rely on dedicated legal
frameworks. Using these mechanisms in practice also requires adaptation to the needs and contexts of sectors
and communities. Gaps in data stewardship should be identified and closed. Successful and widespread use of
these mechanisms in the future would depend on technical assurance and maintaining the trust of contributors of
data.
We thus propose that more support be given to the further development of these tools, and to identifying the areas where their use for training AI could deliver the greatest public value.
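The permission model at the heart of a data trust – owners set purpose-limited terms, which independent trustees then exercise on their behalf – can be sketched in code. This is an illustrative sketch only: all class, method and dataset names are hypothetical, and real data trusts rest on legal as well as technical arrangements.

```python
from dataclasses import dataclass, field


@dataclass
class Contribution:
    """One owner's purpose-limited grant of access to a dataset."""
    owner: str
    dataset: str                     # a reference to the data, not the data itself
    permitted_purposes: set = field(default_factory=set)


class DataTrust:
    """Illustrative data trust: owners set the terms; the trustee exercises them."""

    def __init__(self, trustee: str):
        self.trustee = trustee
        self.contributions = []

    def contribute(self, owner: str, dataset: str, permitted_purposes) -> None:
        # Owners retain their legal rights; they grant only purpose-limited access.
        self.contributions.append(
            Contribution(owner, dataset, set(permitted_purposes))
        )

    def request_access(self, requester: str, purpose: str) -> list:
        # The trustee releases only those datasets whose terms cover the stated purpose.
        return [c.dataset for c in self.contributions
                if purpose in c.permitted_purposes]


# Usage: a health data trust that permits access for SDG-related research only.
trust = DataTrust(trustee="independent-trustee")
trust.contribute("clinic-a", "records-a", {"sdg3-research"})
trust.contribute("clinic-b", "records-b", {"sdg3-research", "commercial"})
print(trust.request_access("model-developer", "sdg3-research"))
# → ['records-a', 'records-b']
```

An AI model trained through such a gateway sees only data whose owners consented to that purpose, which is the sense in which contributors become active partners rather than mere subjects of data.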
176 Steps to address these issues at the national and regional levels are promising, with the public and private sectors paying more attention to best practices. Yet without a global framework governing AI training data sets, commercial competition invites a race to the bottom between jurisdictions on access and use requirements, making it difficult to govern the AI value chain internationally. Only global collective action can promote a race to the top in the governance of the collection, creation, use and monetization of AI training data in ways that further interoperability, stewardship, privacy preservation, empowerment and rights enhancement.

177 Equally, such action is necessary to promote flourishing local AI ecosystems and limit further economic concentration. These measures could be complemented by promotion of data commons and provisions for hosting data trusts in areas relevant to the SDGs (see box 14). The development of these templates and the actual storage and analysis of data held in commons or in trusts could be supported by the capacity development network and the global fund for AI.
D. Coherent effort

178 By promoting a common understanding, common ground and common benefits, the proposals above seek to address the gaps identified in the emerging international AI governance regime. The gaps in representation, coordination and implementation can be addressed through partnerships and collaboration with existing institutions and mechanisms.

179 However, without a dedicated focal point in the United Nations to support and enable soft coordination among these and other efforts, and to ensure that the United Nations system speaks with one voice on AI, the world will lack the inclusively networked, agile and coherent approach required for effective and equitable governance of AI.

180 For these reasons, we propose the creation of a small, agile capacity in the form of an AI office within the United Nations Secretariat.

AI office in the United Nations Secretariat

Recommendation 7: AI office within the Secretariat

We recommend the creation of an AI office within the Secretariat, reporting to the Secretary-General. It should be light and agile in organization, drawing, wherever possible, on relevant existing United Nations entities. Acting as the "glue" that supports and catalyses the proposals in this report, partnering and interfacing with other processes and institutions, the office's mandate would include:

a. Providing support for the proposed international scientific panel, policy dialogue, standards exchange, capacity development network and, to the extent required, the global fund and global AI data framework;

b. Engaging in outreach to diverse stakeholders, including technology companies, civil society and academia, on emerging AI issues; and

c. Advising the Secretary-General on matters related to AI, in coordination with other relevant parts of the United Nations system, to offer a whole-of-United Nations response.

181 During our consultations, it became clear that the case for an agency with reporting, monitoring, verification and enforcement powers has not been made thus far, and there has not yet been much appetite on the part of Member States for an expensive new organization.

182 We therefore focus on the value that the United Nations can offer, mindful of the shortcomings of the United Nations system, as well as what could realistically be achieved within a year. In this regard, we propose a light, agile mechanism to act as the "glue" that holds together processes promoting a common understanding, common ground and common benefits, and enables the United Nations system to speak with one voice in the evolving international AI governance ecosystem.

183 Just as countries have set up dedicated institutes and offices focused on the national, regional and international governance of AI,45 we see the need for a capacity that services and supports the international scientific panel on AI and the AI policy dialogue, and catalyses the AI standards exchange and capacity development network – with lower overheads and transaction costs than if each were supported by different organizations.

184 An AI office within the United Nations Secretariat, reporting to the Secretary-General, would have the benefit of connections throughout the United Nations system, without being tied to one part of it. That is important because of the uncertain future of AI and the strong likelihood that it will permeate all aspects of human endeavour.

185 A small and agile AI office would be well positioned to connect various domains and organizations on AI governance issues to help to address gaps dynamically, working to amplify existing efforts within and beyond the United Nations. By bridging
45 Including Canada, Japan, the Republic of Korea, Singapore, the United Kingdom, the United States and the European Union.
[Figure 17: A soft coordination architecture. The mechanisms proposed in this report – the international scientific panel, governance dialogue, standards exchange, capacity development network, global fund for AI and AI data framework – engage, alongside United Nations organizations, with AI summits, GPAI, OECD, the Council of Europe, the Group of 20, the Group of Seven, national and regional initiatives and regional SDOs.

Abbreviations: GPAI, Global Partnership on Artificial Intelligence; OECD, Organisation for Economic Co-operation and Development; SDOs, standards development organizations.]
and connecting other initiatives, such as those led by regional organizations and other plurilateral initiatives, it can help to lower the costs of cooperation between them.

186 Such a body should champion inclusion and partner rapidly to accelerate coordination and implementation, drawing, as a first priority, on existing resources and functions within the United Nations system. It could be staffed in part by United Nations personnel seconded from relevant specialized agencies and other parts of the United Nations system. It should engage multiple stakeholders, including civil society, industry and academia, and develop partnerships with leading organizations outside of the United Nations, such as OECD.

187 The AI office would ensure information-sharing across the United Nations system and enable the system to speak with authority and with one voice. Box 15 lists possible functions and early deliverables of such an office.

188 This recommendation is made on the basis of a clear-eyed assessment as to where the United Nations can add value, including where it can lead, where it can fill gaps, where it can aid coordination and where it should step aside, working in close partnership with existing efforts (see fig. 17). It also brings the benefits of existing institutional arrangements, including pre-negotiated funding and administrative processes that are well understood.

189 The evolving characteristics of AI technology should be considered. There is a high probability of technical breakthroughs that will dramatically change the current AI model landscape. The AI office should be in place to adjust governance frameworks to the evolving landscape and respond to unforeseen developments concerning AI technology.
Box 15: Possible functions and first-year deliverables of the AI office
The AI office should have a light structure and aim to be agile, trusted and networked. Where necessary, it should
operate in a “hub and spoke” manner to connect to other parts of the United Nations system and beyond.
Outreach could include serving as a key node in a soft coordination architecture linking Member States, plurilateral networks, civil society organizations, academia and technology companies – a regime complex woven together to solve problems collaboratively through networking – and acting as a safe, trusted place to convene on relevant topics. Ambitiously, it could become the glue that helps to hold such other evolving networks together.
Supporting the various initiatives proposed in this report includes the important function of ensuring inclusiveness at speed in delivering outputs such as scientific reports and governance dialogues, and in identifying appropriate follow-up entities.
Common understanding:
• Facilitate recruitment of and support the international scientific panel.
Common ground:
• Service policy dialogues with multi-stakeholder inputs in support of interoperability and policy learning.
An initial priority topic is the articulation of risk thresholds and safety frameworks across jurisdictions.
• Support ITU, ISO/IEC and IEEE on setting up the AI standards exchange.
Common benefits:
• Support the AI capacity development network with an initial focus on building public interest AI capacity
among public officials and social entrepreneurs. Define the initial network vision, outcomes, governance
structure, partnerships and operational mechanisms.
• Define the vision, outcomes, governance structure and operational mechanisms for the global fund for AI,
and seek feedback from Member States, industry and civil society stakeholders on the proposal, with a
view to funding initial projects within six months of establishment.
• Prepare and publish an annual list of prioritized investment areas to guide both the global fund for AI and
investments outside that structure.
Coherent effort:
• Establish lightweight mechanisms that support Member States and other relevant organizations to be
more connected, coordinated and effective in pursuing their global AI governance efforts.
• Prepare initial frameworks to guide and monitor the AI office’s work, including a global governance risk
taxonomy, a global AI policy landscape review and a global stakeholder map.
• Develop and implement quarterly reporting and periodic in-person presentations to Member States on
the AI office’s progress against its workplan and establish feedback channels to support adjustments as
needed.
• Establish a steering committee jointly led by the AI office, ITU, UNCTAD, UNESCO and other relevant
United Nations entities and organizations to accelerate the work of the United Nations in service of the
functions above, and review progress of the accelerated efforts every three months.
• Promote joint learning and development opportunities for Member State representatives to support them
to carry out their responsibilities for global AI governance, in cooperation with relevant United Nations
entities and organizations such as the United Nations Institute for Training and Research and the United
Nations University.
200 The grand bargain at the heart of the International Atomic Energy Agency (IAEA) was that nuclear energy's beneficial purposes could be shared – in energy production, agriculture and medicine – in exchange for guarantees that it would not be further weaponized. As the nuclear non-proliferation regime shows, good norms are necessary but not sufficient for effective regulation.

201 The limits of the analogy are clear. Nuclear energy involves a well-defined set of processes related to specific materials that are unevenly distributed, and much of the materials and infrastructure needed to create nuclear capability are controlled by nation States. AI is an amorphous term; its applications are extremely wide and its most powerful capabilities span industry and States. The grand bargain of IAEA focused on weapons that are expensive to build and difficult to hide; weaponization of AI promises to be neither.

202 An early idea – pooling of nuclear fuel for peaceful purposes – did not work out as planned. On the pooling of resources for sharing benefits of technology, a more AI-appropriate analogy may be CERN, which pools funding, talent and infrastructure. However, there are limits to the comparison, given the difference between experimental fundamental physics and AI, which requires a more distributed approach.

203 Another imperfect analogy is organizations such as the International Civil Aviation Organization (ICAO) and the International Maritime Organization (IMO). The underlying technologies of transportation are well established, and their civilian applications can be easily demarcated from military ones – this is not the case with general-purpose AI. The network of national regulatory authorities that apply the international norms developed in the framework of ICAO and IMO is also well established. Safety, facilitation of commercial activity and interoperability are in focus. Compliance is not handled in a top-down manner.

204 There are other approaches to compliance that can inspire. Financial risk management benefits from mechanisms such as the Financial Stability Board (FSB) and the Financial Action Task Force (FATF), without recourse to treaties.

205 Eventually, some kind of mechanism at the global level might become essential to formalize red lines if regulation of AI needs to be enforceable. Such a mechanism might include formal CERN-like commitments for pooling resources for collaboration on AI research and sharing of benefits as part of the bargain.

206 Given the speed, autonomy and opacity of AI systems, however, waiting for a threat to emerge may mean that any response will come too late. Continued scientific assessments and policy dialogue would ensure that the world is not surprised. Any decision to begin a formal process would, naturally, lie with Member States.

207 Possible thresholds for such a move could include the prospect of uncontrollable or uncontainable AI systems being developed, or the deployment of systems that cannot be traced back to human, corporate or State actors. They could also include indications that AI systems exhibit qualities that suggest the emergence of "superintelligence", although this is not present in today's AI systems.

208 Establishing a watching brief, drawing on diverse and distinguished experts to monitor the horizon, is a reasonable first step. The scientific panel could be tasked with commissioning research on this question, as part of its quarterly research digest series. Over time, the policy dialogue could be an appropriate forum for sharing information about AI incidents, such as those that stretch or exceed the capacities of existing agencies, analogous to the practices of IAEA for mutual reassurance on nuclear safety and nuclear security, or the World Health Organization (WHO) on disease surveillance.

209 The functions of a proposed international AI agency could draw on the experience of relevant agencies, such as IAEA, the Organisation for the Prohibition of Chemical Weapons, ICAO, IMO, CERN and the Biological Weapons Convention. They could include:
• Developing and promulgating standards and norms for AI safety;
• Monitoring AI systems that have the potential to threaten international peace and security, or cause grave breaches of human rights or international humanitarian law;
Box 16: Lessons learned from past global governance institutions
Some of these domains, such as civil aviation, climate change and nuclear power, have led to the creation of new
United Nations institutions. Others, notably the protection of global financial flows, have led to bodies that are
not treaty-based and yet they have delivered robust normative frameworks, effective market-based enforcement
mechanisms and strong public-private partnerships.
As we draw parallels between these institutional responses and nascent efforts to do the same for AI, we should
not focus too heavily on which institutional analogue is most suitable for the AI problem set. Our interim report
foreshadowed that we should look instead at which governance functions are needed for effective and inclusive
global AI governance, and what we can learn from past global governance endeavours.
One lesson is that the development of a shared scientific and technical understanding of the problem is necessary
to trigger a commonly accepted policy response. Here, IPCC, which continues to address the risks of climate
change, is a useful model. It offers an example of how an inclusive approach to crafting reports and developing
scientific consensus in a constantly evolving area can level the playing field for researchers and policymakers
and create the shared understanding that is essential for effective policymaking. The process of drafting and
disseminating IPCC reports and global stocktakes, although not without challenges, has been centrally important
to building a shared understanding and common knowledge base, lowering the costs of cooperation and steering
the Conference of the Parties to the United Nations Framework Convention on Climate Change towards concrete
policy deliverables.
For AI, as the technology evolves, it will be just as important to develop a shared scientific understanding. As the capabilities of AI systems continue to advance, and as potential risks come to exceed known effective approaches to mitigating them, the international scientific panel could evolve to match emerging needs.
A second lesson is that multi-stakeholder collaboration can deliver strong standards and promote quick
responses. Here, ICAO and FATF offer useful examples of how to govern a highly technical issue across borders.
In civil aviation, the ICAO safety and security standards, developed by industry and government experts and
enforced through market access restrictions, ensure that a plane that takes off from, for example, New York can
land in Geneva without triggering new safety audits. A combination of ICAO-led safety audits and Member State-driven audits ensures consistent implementation, even as the technology evolves.
FATF – established by the G7 in 1989 to address money-laundering – offers another example of how soft law
institutions can promote common standards and implementation. Its peer review system for monitoring is
flexible, and widespread acceptance of its recommendations has created reputational costs for those companies
and Member States that fail to comply. Even as the risks to international financial flows have evolved, most
significantly with the rise of terrorism and proliferation finance, the nimble structure and normative framework of
FATF have allowed it to respond quickly and keep pace with complex challenges.
In their own unique ways, both ICAO and FATF have created widely recognized international standards, domestic
frameworks for measuring compliance, and interoperable systems for responding to certain classes of risks and
challenges that manifest across jurisdictions. ICAO enforces via market access incentives and restrictions, while
FATF creates reputational risk for non-compliance. Both offer useful templates for AI, as they demonstrate how
governments and other stakeholders can work together to create a web of interconnected norms and regulations
and create costs for non-compliance.
A third lesson is that global coordination is often vital for monitoring and taking action in response to severe risks
with the potential for widespread impact. FSB and IAEA models offer key examples. Established in 2009, FSB was
created by the G20 countries to monitor and warn against systemic risks to the international financial system.
Its unique composition of G20 finance officials and international financial and development organizations has
allowed it to be nimble, adept and inclusive when coordinating efforts to identify global financial risks.
The IAEA approach to nuclear safeguards offers a different model. Its comprehensive safeguards agreements,
signed by 182 States, are part of the most wide-ranging United Nations regime for ensuring compliance. By using
a combination of inspections and monitoring – as well as the threat of Security Council action – IAEA offers
perhaps the most visible censure of Member States that fail to comply.
Both FSB and IAEA demonstrate how international coordination is fundamental to monitoring severe risks. As
the risks of AI become clearer and more pronounced, there may be a similar need to create a new AI-focused
institution to maximize coordination efforts and monitor severe and systemic risks, so that Member States can,
wherever possible, intervene to stay ahead of those risks.
A fourth lesson is that it is important to create inclusive access to the resources needed for research and
development, along with their benefits. The experiences of CERN and IAEA are both instructive. CERN brings
together world-class scholars and physicists to perform complex research into particle accelerators and other
projects that are meant to benefit humanity. It also offers training to physicists and engineers.
Similarly, IAEA facilitates access to technology, in this case nuclear energy and ionizing radiation. The basic
trade-off is simple: Member States comply with nuclear safeguards and IAEA offers technical assistance towards
the use of peaceful nuclear power. In this regard, IAEA provides an inclusive approach to spreading the benefits
of technology to developing countries. Its facilitation of a network of centres of excellence on nuclear security is
similar to our recommendation for a networked approach to capacity-building.
As we have explained above, AI is a set of technologies whose benefits need to be shared in a more inclusive
and equitable manner, especially with countries in the global South. This is why we have recommended both an
AI capacity development network and a global fund for AI. As we learn more about AI through the work of the
international scientific panel, and as the responsible deployment of AI in support of the SDGs becomes even more
pressing, United Nations Member States may want to institutionalize this function more widely. If they do so, they
should look to draw lessons from CERN and IAEA as useful models for supporting broader access to resources,
as part of an overall global AI governance structure.
218 The implementation of the recommendations in the
present report may also encourage new ways of
thinking: a collaborative and learning mindset, multi-
stakeholder engagement and broad-based public
engagement. The United Nations can be the vehicle
for a new social contract for AI that ensures global
buy-in for a governance regime that protects and
empowers us all. Such a contract will ensure that
opportunities are fairly accessed and distributed,
and the risks are not loaded onto the most
vulnerable – or passed on to future generations, as
we have seen tragically with climate change.
Ruimin He
Annex B: Terms of reference of the
High-level Advisory Body on Artificial
Intelligence
The High-level Advisory Body on Artificial Intelligence, convened by the United Nations
Secretary-General, will undertake analysis and advance recommendations for the
international governance of artificial intelligence. The Body’s initial reports will provide high-
level expert and independent contributions to ongoing national, regional, and multilateral
debates.
The Body will consist of 38 members from governments, private sector, civil society, and
academia, as well as a member Secretary. Its composition will be balanced by gender, age,
geographic representation, and area of expertise related to the risks and applications of
artificial intelligence. The members of the Body will serve in their personal capacity.
The Body will engage and consult widely with governments, private sector, academia, civil
society, and international organizations. It will be agile and innovative in interacting with
existing processes and platforms as well as in harnessing inputs from diverse stakeholders.
It could set up working parties or groups on specific topics.
The members of the Body will be selected by the Secretary-General based on nominations
from Member States and a public call for candidates. It will have two Co-Chairs and
an Executive Committee. All stakeholder groups will be represented in the Executive
Committee.
The Body shall be convened for an initial period of one year, with the possibility of extension
by the Secretary-General. It will have both in-person and online meetings.
The Body will prepare a first report by 31 December 2023 for the consideration of the
Secretary-General and the Member States of the United Nations. This first report will present
a high-level analysis of options for the international governance of artificial intelligence.
Based on feedback to the first report, the Body will submit a second report by 31 August
2024 which may provide detailed recommendations on the functions, form, and timelines for
a new international agency for the governance of artificial intelligence.
The Body shall avoid duplication with existing forums and processes where issues of
artificial intelligence are considered. Instead, it shall seek to leverage existing platforms
and partners, including UN entities, working in related domains. It shall fully respect current
UN structures as well as national, regional, and industry prerogatives in the governance of
artificial intelligence.
The deliberations of the Body will be supported by a small secretariat based in the Office
of the Secretary-General’s Envoy on Technology and be funded by extrabudgetary donor
resources.
Annex D: List of “deep dives”
Domain Date (Eastern Daylight Time)
Education 29 March
Children 4 April
Faith-based 1 May
Gender 7 May
Data 13 May
Environment 20 May
Health 22 May
They were also asked to rate their overall level of concern that harms (existing or new) resulting from AI would
become substantially more serious and/or widespread, and how much that concern had recently increased or
decreased. Respondents were given a list of 14 sample areas of harm (such as “Intentional malicious use of AI
by non-State actors”) to rate their level of concern. Finally, many text-response prompts were provided, inviting
experts to comment on emerging trends, and individuals, groups and (eco)systems at particular risk from AI, and to
elaborate on their rated answers.
The survey was fielded from 13 to 25 May 2024, with the invitee list constructed from OSET and the Advisory Body’s
networks, including participants in Advisory Body deep dives. During the fielding period, additional experts were
continually invited, particularly from regions often less represented in discussions around AI, based on referrals from
initial respondents and outreach to regional networks. More than 340 respondents replied to the survey, providing a
rich and diverse perspective (including across regions and gender) on risks posed by AI.
Overview of sample

                    WEOG nationality    Non-WEOG
Total               175                 173
Man                 96                  95
Woman               78                  77
Non-binary          1                   1

* 43 respondents (12%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (34 of 43). Otherwise, the least represented nationality was used (9 of 43).
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
Sample remains global if considered by residence
84% of respondents reside in the same region as their nationality.
[Figure: Nationality vs. residence by region]
WEOG: nationality 175 (50%); residence 198 (57%)
Asia-Pacific: nationality 63 (18%); residence 54 (16%)
Latin America and the Caribbean: nationality 30 (9%); residence 28 (8%)
Eastern Europe: nationality 13 (4%); residence 10 (3%)
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
348 respondents from 68 countries.
[Figure: Nationality by region and gender, and most common countries of nationality]
Africa: 67 (19%) - 38 men, 29 women
Asia-Pacific: 63 (18%) - 36 men, 26 women, 1 non-binary
Latin America and the Caribbean: 30 (9%) - 14 men, 16 women
Eastern Europe: 13 (4%) - 7 men, 6 women
Most common countries of nationality: United States 72 (21%), United Kingdom 17 (5%), India 16 (5%), Canada 14 (4%), China 14 (4%), Germany 13 (4%), South Africa 11 (3%)
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
Affiliation and expertise
[Figure: Share of respondents by affiliation (private sector / industry; public sector / government; academia; civil society) and by expertise (technical expertise training / developing AI; implement / commercialize new AI technology; scientific or technical expertise, not AI specific; government / politics / law / ethics on AI; government / politics / law / ethics, not AI specific), shown separately for WEOG and non-WEOG respondents.]
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
Perceptions regarding acceleration of AI
“In the next 18 months, compared to the last 3 months, do you expect the pace of adoption and application of AI (e.g. new uses of AI in business / government) to:” (n = 348)
Scale: 1 Substantially decelerate; 2 Decelerate; 3 Remain same; 4 Accelerate; 5 Substantially accelerate. By region and gender:

Group | 1 | 2 | 3 | 4 | 5 | Average
Total | 0% | 0% | 11% | 55% | 34% | 4.24
Men | 0% | 0% | 11% | 52% | 37% | 4.27
Women | 0% | 1% | 10% | 59% | 30% | 4.19
WEOG | 0% | 0% | 11% | 61% | 28% | 4.17
Non-WEOG | 0% | 1% | 9% | 49% | 41% | 4.31
Men, WEOG | 0% | 0% | 11% | 60% | 28% | 4.17
Men, Non-WEOG | 0% | 0% | 10% | 44% | 47% | 4.37
Women, Non-WEOG | 0% | 1% | 9% | 54% | 35% | 4.23

Note: Numbers may not add up to 100% owing to rounding. Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
BY EXPERTISE
Technical change: “In the next 18 months, compared to the last 3 months, do you expect the pace of technical change in AI (e.g. development / release of new models) to...” (n = 348)
Adoption & application: “In the next 18 months, compared to the last 3 months, do you expect the pace of adoption and application of AI (e.g. new uses of AI in business / government) to...” (n = 348)
Scale: 1 Substantially decelerate; 2 Decelerate; 3 Remain same; 4 Accelerate; 5 Substantially accelerate.
[Figure: Response distributions for both questions; large majorities expect the pace to accelerate or substantially accelerate on both dimensions.]
Note: Numbers may not add up to 100% owing to rounding. Excludes “Don’t know” / “No opinion” and blank responses.
Source: OSET AI Risk Pulse Check, 13-25 May 2024.
“What is your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area?” (n = 348)
Scale: 1 Not concerned; 2 Slightly concerned; 3 Somewhat concerned; 4 Concerned; 5 Very concerned. By region:
[Figure: Response distributions by region, including Latin America and the Caribbean (average 4.07) and Eastern Europe (average 4.00).]
Non-WEOG more concerned than WEOG in most example areas
Particularly large gaps in inaccurate information, unintended autonomous actions and intentional corporate use.
Shown: Average, where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned.
Note: Excludes “Don’t know” / “No opinion” and blank responses. Source: OSET AI Risk Pulse Check, 13-25 May 2024.
Many concerns highest in Africa and in Latin America and the Caribbean
Especially around State use in armed conflict, enabling discrimination or human rights violations.
“Please rate your current level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months for each area.” (n = 348)
71% concerned / very concerned about AI harms in the next 18 months.
The 14 example areas of harm rated were:
a. Intentional malicious use of AI by non-state actors
b. Intentional use of AI in armed conflict by state actors
c. Intentional use of AI by corporate actors that harms individuals / users
d. Intentional use of AI by state actors that harms individuals / users
e. Unintended autonomous actions by AI systems [excl. autonomous weapons]
f. Unintended multi-agent interactions among AI systems
g. Harms to labour from adoption of AI
h. Inequalities arising from differential control and ownership over AI
i. Violation of intellectual property rights
j. Damage to information integrity
k. Inaccurate information / analysis provided by AI in critical fields
l. Discrimination / disenfranchisement, particularly against marginalized communities
m. Human rights violations
n. Environmental harms
[Figure: Average concern per area by region (Africa, Asia-Pacific, Latin America & the Caribbean, Eastern Europe), and difference between each region’s rating and the aggregate (all regions) rating for each area. Eastern Europe: smaller sample, interpret with caution.]
Shown: Difference between aggregate (all regions) rating and indicated region’s rating where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned. Note: Excludes “Don’t know” / “No opinion” and blank responses. Source: OSET AI Risk Pulse Check, 13-25 May 2024.
Women more concerned than men about all example areas
There are particularly large gaps on human rights violations, discrimination and the environment.
[Figure: Difference between each group’s average rating and the aggregate rating across the 14 example areas of harm, by gender and by whether the respondent reports technical expertise training / developing AI (n = 127) or does not (n = 221). Group averages shown: 4.07, 4.03, 3.93, 3.89 and 4.05 out of 5.]
Shown: Difference between aggregate (all respondents) rating and indicated group’s rating where: 1 = Not concerned, 2 = Slightly concerned, 3 = Somewhat concerned, 4 = Concerned, 5 = Very concerned. Note: Excludes “Don’t know” / “No opinion” and blank responses. Source: OSET AI Risk Pulse Check, 13-25 May 2024.
“What is your current overall level of concern that harms (existing or new) resulting from AI will become substantially more serious and/or widespread in the next 18 months?” (n = 348)
Scale: 1 Not concerned; 2 Slightly concerned; 3 Somewhat concerned; 4 Concerned; 5 Very concerned. By gender and expertise:

Group | 1 | 2 | 3 | 4 | 5 | Average
Reports technical expertise training / developing AI (n = 127) | 3% | 11% | 14% | 30% | 41% | 3.95
Doesn’t report (n = 221) | 1% | 8% | 20% | 31% | 40% | 4.00
Men, Reports (n = 83) | 5% | 10% | 17% | 28% | 40% | 3.89

Total average: 3.98 / 5
Change in perception of level of concern in the past three months
regarding risks of AI harms
Respondents were asked to what extent they were aware of specific examples to date
of AI increasing economic activity, accelerating scientific discoveries and contributing
to progress on individual SDGs.1 They were asked to provide details including case
studies, names of organizations, data and links to relevant articles/publications/
papers. Respondents were then asked how much progress they expected in the next
three years along the same dimensions.
The survey was fielded from 9 to 21 August 2024, with the invitee list constructed
from OSET and the Advisory Body’s networks, including participants in Advisory
Body deep dives. The survey was also generously circulated through the International
Telecommunication Union’s AI for Good meeting and the networks of the United
Nations Conference on Trade and Development. Over 1,000 individuals were
invited overall. More than 120 respondents replied to the survey, providing a rich and
diverse perspective (including across regions and gender) on opportunities from AI.
1 SDG 8 (Decent work and economic growth) and SDG 9 (Innovation, industry and infrastructure) were not asked about separately, given their close link to
increasing economic activity. SDG 17 (Partnerships for the Goals) was also not asked about specifically.
Overview of sample
[Figure: Respondents by region and gender, and most common countries of nationality*]
Asia-Pacific: 27 (22%) - 15 men, 12 women
Africa: 23 (19%) - 13 men, 10 women
Latin America and the Caribbean: 8 (7%) - 7 men, 1 woman
38 countries represented.
Most common countries of nationality: United States 23 (19%), Germany 8 (7%), India 8 (7%), United Kingdom 8 (7%)
WEOG nationality: 63 - 38 men (60%), 25 women (40%)
Non-WEOG nationality: 58 - 36 men (62%), 22 women (38%)
* 9 respondents (7%) indicated multiple nationalities. If respondents were resident in one of their countries of nationality, that nationality was used for analysis (8 of 9). Otherwise, the least represented nationality was used (1 of 9).
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.
[Figure: Subsample asked about lower-middle/lower-income countries, by nationality group and gender - WEOG nationality: 28 (55%); non-WEOG: 23 (45%); men: 19 (70%); women: 8 (30%).]
Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.
Positive impact to date on growth and science, but less on most SDGs
Impact to date in high/upper-middle-income countries.
“To what degree are you aware of specific examples of AI currently or having recently directly contributed to … in high/upper-middle-income countries?”
Scale: 1 Don’t believe AI is causing any positive impact; 2 Aware of AI causing minor positive impact; 3 Aware of AI causing positive impact; 4 Aware of AI causing major positive impact; 5 Aware of AI causing transformative positive impact.
Note: Excludes “Don’t know” / “No opinion” and blank responses. Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.
Less impact reported in the lower-income world on all fronts
Impact to date in lower-middle/lower-income countries.
“To what degree are you aware of specific examples of AI currently or having recently directly contributed to … in lower-middle/lower-income countries?”
Scale: 1 Don’t believe AI is causing any positive impact; 2 Aware of AI causing minor positive impact; 3 Aware of AI causing positive impact; 4 Aware of AI causing major positive impact; 5 Aware of AI causing transformative positive impact.
Average rating for “To what degree are you aware of specific examples of AI currently or having recently directly contributed to … ?” by country income group, where:
1 = Don’t believe AI is causing any positive impact
2 = Aware of AI causing minor positive impact
3 = Aware of AI causing positive impact
4 = Aware of AI causing major positive impact
5 = Aware of AI causing transformative positive impact
[Figure: Average ratings for high/upper-middle-income countries versus lower-middle/lower-income countries across dimensions including: accelerating scientific discoveries; increasing economic activity; SDG 1 - No poverty; SDG 3 - Good health and well-being; SDG 4 - Quality education; SDG 6 - Clean water and sanitation; SDG 7 - Affordable and clean energy; SDG 10 - Reduced inequalities; SDG 11 - Sustainable cities and communities; SDG 12 - Responsible consumption and production; SDG 14 - Life below water. Ratings shown range from 3.31 down to 1.33, with lower-middle/lower-income countries rated lower on every dimension.]
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Did not ask about SDGs 8, 9 and 17. Source: OSET AI Opportunity Scan survey, 9-21 August 2024.
“In the next three years, how much do you expect AI to directly contribute towards … in high/upper-middle-income countries?”
Scale: 1 Don’t expect any positive impact; 2 Expect minor positive impact; 3 Expect positive impact; 4 Expect major positive impact; 5 Expect transformative positive impact.
Note: Excludes “Don’t know” / “No opinion” and blank responses. Did not ask about SDGs 8, 9 and 17.
Source: OSET AI Opportunity Scan survey, 9-21 August 2024.
“In the next three years, how much do you expect AI to directly contribute towards … in lower-middle/lower-income countries?”
Scale: 1 Don’t expect any positive impact; 2 Expect minor positive impact; 3 Expect positive impact; 4 Expect major positive impact; 5 Expect transformative positive impact.
Less impact expected in the lower-income world on all fronts
Gap most pronounced on economic growth, science, health and education.
Average rating for “In the next three years, how much do you expect AI to directly contribute towards … ?” by country income group, where:
1 = Don’t expect any positive impact
2 = Expect minor positive impact
3 = Expect positive impact
4 = Expect major positive impact
5 = Expect transformative positive impact
[Figure: Average expected impact for high/upper-middle-income countries versus lower-middle/lower-income countries across dimensions including: increasing economic activity; SDG 1 - No poverty; SDG 2 - Zero hunger; SDG 3 - Good health and well-being; SDG 4 - Quality education; SDG 6 - Clean water and sanitation; SDG 7 - Affordable and clean energy; SDG 10 - Reduced inequalities; SDG 11 - Sustainable cities and communities; SDG 12 - Responsible consumption and production; SDG 14 - Life below water; SDG 16 - Peace, justice and strong institutions. Ratings shown range from 3.25 downwards, with lower-middle/lower-income countries rated lower on every dimension.]
Note: Excludes “Don’t know” / “No opinion” and blank responses. Only respondents reporting relevant knowledge were asked about lower-middle/lower-income countries.
Did not ask about SDGs 8, 9 and 17. Source: OSET AI Opportunity Scan survey, 9-21 August 2024.
Charts prepared with think-cell
Donors
The Body gratefully acknowledges the financial and in-kind contributions of the following governments
and partners, without whom it would not have been able to carry out its responsibilities: